Thoughts on a computer playing Jeopardy

Saturday January 15, 2011

IBM made a computer that can play Jeopardy really well, as reported here.

On the one hand, of course a computer should be great at trivia.  It can remember everything you put in it, perfectly.  Computers are really good at remembering trivia.

On the other hand, it's impressive that they've got the language processing working well enough to successfully match the questions to the answers.  Language processing is hard for computers.

But even if this computer can beat people at Jeopardy, as computers have for some time beaten people at chess, I'm not sure I should be that impressed.  The computers are definitely achieving ends that humans achieve, but the means are so clearly contrived and mechanical.  The computers don't do anything that I would call thinking.

There is a divide, I guess, in AI:

A: Build a system by whatever means, usually explicitly oriented toward an outcome, or goal performance.  The Jeopardy computer does this, although it does so by a pretty neat unstructured data analysis method using Apache, Java, and C++, of all things.  Chess computers do this.  This approach is usually pretty quick to achieve some results, and can scale to really good results in specific applications (chess, Jeopardy).  But it doesn't transfer to other domains.  I can't deny the usefulness of this approach, but it's just not very interesting to me.  It's like house painting.  It gets a job done.
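To make the contrast concrete, approach A looks roughly like minimax search, the core of most chess programs: a human explicitly encodes the goal (the evaluation at the leaves) and the machine mechanically optimizes toward it.  This is a toy sketch with a hypothetical hand-built game tree, not real chess.

```python
# A minimal sketch of approach A: explicit, goal-directed search (minimax).
# The "game" is a hypothetical toy tree: leaves hold hand-assigned scores,
# and the algorithm just mechanically optimizes over them.

def minimax(node, maximizing):
    """Return the best achievable score from this node.

    A node is either a numeric leaf score or a list of child nodes.
    """
    if isinstance(node, (int, float)):   # leaf: a hand-coded evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply game tree: the maximizer moves, then the minimizer replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # -> 3: the best the maximizer can guarantee
```

Everything the program "knows" about the game is in the tree and the leaf scores a person wrote down, which is exactly what makes this approach fast to get working and hard to transfer anywhere else.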

B: Build a general system, possibly based on human biology or general ideas about thinking (neural networks, etc.).  This kind of system has a harder time excelling at specific tasks like playing chess, but has more promise of future flexibility, and of becoming a kind of intelligence comparable to what humans have.  It may be a long shot, but I think this stuff is more interesting, and I think it is more likely to be really cool in the future.

So, like the chat program that fooled people into thinking it was human by deliberately making spelling mistakes, systems that are explicitly programmed by humans may achieve short-term success in special-case situations, but they are not the right path to real advances in AI.

This post was originally hosted elsewhere.