IBM's Watson does the IBM Jeopardy Challenge

IBM’s celebrated supercomputer Watson will square off against Jeopardy champions Ken Jennings and Brad Rutter in a first-of-its-kind competition to be aired over three nights in February.

The grand prize is $1 million; second place wins $300,000; third place receives $200,000. Jennings and Rutter have pledged 50 percent of their winnings to charity; IBM will donate all of its prize money.

During a demonstration round Thursday, Watson handily defeated the two Jeopardy champions.

The IBM Jeopardy Challenge represents a milestone in the development of artificial intelligence, and is part of Big Blue’s centennial celebration.

“We are at a very special moment in time,” said Dr. John E. Kelly III, IBM Senior Vice President and Director of IBM Research. “We are at a moment where computers and computer technology now have approached humans. We have created a computer system that has the ability to understand natural human language, which is a very difficult thing for computers to do.”

Named after IBM's first president, Thomas J. Watson, the supercomputer is one of the most advanced systems on Earth and was programmed by 25 IBM scientists over the last four years. Researchers scanned some 200 million pages of content — or the equivalent of about one million books — into the system, including books, movie scripts and entire encyclopedias.

Watson is not your run-of-the-mill computer. The system is powered by 10 racks of IBM POWER 750 servers running Linux, with 15 terabytes of RAM and 2,880 processor cores, and can operate at 80 teraflops.

That’s 80 trillion operations per second.

Watson scans the 200 million pages of content in its “brain” in less than three seconds. The system is not connected to the internet; it is totally self-contained.

The machine is the size of 10 refrigerators.

“This is the culmination of four years of hard work and we didn’t know that we’d get here,” said David A. Ferrucci, the principal investigator for IBM’s Watson project.

Watson follows Deep Blue, the IBM supercomputer that ultimately defeated chess grandmaster Garry Kasparov in 1997.

Kelly said the lessons IBM learned from developing Watson would be applicable across industries, including law, business, and especially medicine.

“Watson can read all of the health-care texts in the world in seconds,” Kelly said. “And that’s our first priority, creating a Dr. Watson, if you will.”

“Imagine if a doctor in Africa could access all of the world’s medical texts from the cloud, in seconds, to learn about potential drug interactions,” he added.

During a press conference Thursday morning at IBM Research headquarters in Yorktown Heights, New York, the company showcased Watson and held a practice Jeopardy round between the supercomputer, Jennings, who won more than $2.5 million during a 74-game run in 2004, and Rutter, the all-time game-show money leader at $3,255,102.

The scene was slightly surreal. Watson “stood” between the two champions, its “avatar” — which the company describes as “a global map projection with a halo of ‘thought rays’” — flickering and flashing, as if it were thinking.

“The threads and thought rays that make up Watson’s avatar change color and speed depending on what happens during the game,” according to Watson’s official “bio.” “For example, when Watson feels confident in an answer the rays on the avatar turn green; they turn orange when Watson gets the answer wrong. Viewers will see the avatar speed up and activate when Watson’s algorithms are working hard to answer a clue.”
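As described, the avatar is driven by a simple state-to-appearance mapping. A minimal Python sketch of that logic follows — every name and threshold here is hypothetical, not IBM's actual code:

# Hypothetical sketch of the avatar rules quoted above; none of these
# names or thresholds come from IBM.
def avatar_state(confidence, answered_wrong, busy):
    """Map Watson's game state to the avatar's color and ray speed."""
    if answered_wrong:
        color = "orange"              # rays turn orange on a wrong answer
    elif confidence >= 0.7:           # assumed confidence threshold
        color = "green"               # rays turn green when Watson is confident
    else:
        color = "neutral"
    speed = "fast" if busy else "normal"  # rays speed up while algorithms work
    return color, speed

print(avatar_state(confidence=0.92, answered_wrong=False, busy=False))
# ('green', 'normal')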

Watson jumped out to an early lead. For the first four questions of the round, the supercomputer “read” the clue, “pressed” its buzzer, and provided the correct answer. Its human opponents tried valiantly to catch up, but by the end of the round, Watson was in first place with $4,400. Jennings was second, with $3,400. Rutter was third, with $1,200.

None of the three contestants appeared rattled. Of course, Watson lacks the capacity to get rattled.

“Watson doesn’t have any emotions, but it knows that humans do,” Ferrucci said.

Asked if he was nervous to be competing against a computer, Rutter quipped, “Not nervous, but I will be when Watson’s progeny comes back from the future to kill me.”

[Photo: The score after one round. Sam Gustin/Wired.com]

Thursday’s round was just a demonstration. Watson will go head-to-head with Jennings and Rutter in two matches, to be aired February 14, 15, and 16.

Sources:
http://www.wired.com/epicenter/2011/01/ibm-watson-jeopardy/
http://www.engadget.com/2011/01/13/ibms-watson-supercomputer-destroys-all-humans-in-jeopardy-pract/
http://en.wikipedia.org/wiki/Watson_(artificial_intelligence_software)
http://www.washingtonpost.com/wp-dyn/content/article/2011/02/11/AR2011021107031.html

Just watched the first two parts and it's pretty amazing what IBM has done with Watson. It did get some of the answers wrong, but it answered the majority of the questions correctly.

Pretty good.
 
Can it do speech and visual recognition, or does it receive the questions in text/digital format?
 
Would be more impressive - especially with that many flops sitting behind its back - if it could do at least speech recognition. Feeding it the questions in text form is...well, not exactly cheating (especially since the questions are shown on screens during the TV show), but a little cheap anyway. :p

I wonder what the source(s) for its knowledge is. OCR'd dictionaries and encyclopedias, I guess...
 
I'm also disappointed a little. This technology was already there in 1992, though it then disappeared off the face of the earth presumably because the CIA wanted it. By now I would have guessed that a computer could do better in terms of voice and image recognition, but apparently not yet (or at least not Watson).
 
presumably because the CIA wanted it.
Sorry, but LOL. :) I think that's probably presuming quite a bit there! What's your source for such a suspicion?

Anyway, this tech can't be commandeered by the CIA or anyone else... It will emerge, sooner or later. That it hasn't already is most likely because there hasn't been any real need for it so far. Other than in Star Trek, where do you find people who are comfortable conversing with a computer? Heh... :)

Some applications, like helping physically handicapped people and so on, would benefit a lot, but I guess they don't have enough money to be targeted by the kind of big R&D project that would be needed to really crack this particular nut.
 
Speech recognition (as far as the machine is expected to do more than write what you say and answer a few basic commands) is and always was a largely unsolved problem. Heck, we don't even have a proper theory for natural language semantic representation.

Nevertheless, it would be fascinating if we got there one day...
 
Sorry, but LOL. :) I think that's probably presuming quite a bit there! What's your source for such a suspicion?

I studied Artificial Intelligence back in 1992. There was a large project that could, for instance, already spot logical fallacies in text, such as that two people couldn't have met because they lived in different periods. It and the way it worked were pretty well known in academic circles (it had over 30GB of human knowledge modelled in predicates even back then), and then it suddenly just completely disappeared. This is a fairly common thing, with stuff like satellite camera fidelity being withheld from the public for a while and reserved for the army too.
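For illustration, the kind of consistency check described there is easy to express over a predicate-style knowledge base. A minimal Python sketch with made-up data — nothing here is from that 1992 project:

# Toy predicate-style knowledge base: person -> (birth_year, death_year).
# Purely illustrative data.
lifespans = {
    "Napoleon": (1769, 1821),
    "Einstein": (1879, 1955),
}

def could_have_met(a, b):
    """Two people could only have met if their lifespans overlap."""
    a_birth, a_death = lifespans[a]
    b_birth, b_death = lifespans[b]
    return a_birth <= b_death and b_birth <= a_death

print(could_have_met("Napoleon", "Einstein"))  # False: lifespans don't overlap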
 
A much more logical explanation is that we are simply clueless about how to do proper speech recognition and haven't really gotten much better than what we had a couple of decades ago :)
 
A much more logical explanation is that we are simply clueless about how to do proper speech recognition and haven't really gotten much better than what we had a couple of decades ago :)

No, in this case I wasn't talking about speech recognition, but simply that what Watson is doing today just isn't that exciting to me. It is just what Grall says: there weren't that many commercial incentives yet (it is very expensive, even more so back then, when you didn't have the whole internet as material to keep feeding the system and testing it with).

But what we learnt from speech recognition is that the human ear is highly context sensitive when understanding a sentence. Coupling modern speech recognition with the knowledge base inside Watson should therefore yield pretty good results. The complications of properly implementing this in Watson are obviously pretty significant, though: you'd have to know who to listen to, to start with, and then isolate that signal. That alone requires a lot more than what they are doing now, and if it is not up to par it will detract from the quality of Watson's question-answering engine, so it is understandable that they are showing this separately first.

I don't doubt speech recognition will follow soon. Image recognition is even more difficult, so I don't know when that will show up, but 3D depth perception is going to help there significantly — even for 2D perception in this instance, it would help determine where the 2D screen showing the question clip is, at what angle it sits, and so on.
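To make the coupling concrete, here is a minimal sketch: off-the-shelf speech recognition producing text that gets handed to a question-answering engine. The transcription uses the real SpeechRecognition Python package; answer_question() is a made-up stand-in, since nothing like Watson's engine is publicly available:

# Sketch: speech recognition feeding a QA engine.
# Transcription uses the SpeechRecognition package (pip install SpeechRecognition);
# answer_question() is a hypothetical stand-in for a Watson-like engine.
import speech_recognition as sr

def transcribe(wav_path):
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)       # read the whole clip
    return recognizer.recognize_google(audio)   # free web API; needs internet

def answer_question(clue_text):
    # Stand-in: a real engine would search its knowledge base here.
    return "(candidate answer plus confidence score)"

clue = transcribe("clue.wav")
print(clue, "->", answer_question(clue))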

But you are right, there are all sorts of other factors at play. Take expert systems: back in 1992, there were several projects to model medical diagnostic data and legal jurisprudence. The technology was advanced enough at that point to build a pretty well-working system already, one that at the very least could have been used by doctors to verify their diagnoses, or by judges to verify their sentences. But these are so-called liberal professions that pay according to your expertise, and such experts rarely want to share that expertise voluntarily, even with peers.

I think it is still a total travesty though that the medical world doesn't produce lots of diagnostic expert systems that could at the very least be used to help prevent making stupid mistakes. I wouldn't be surprised if as many people die of mistakes as die of cancer.
 
Arwin, that medical thing was actually what the IBM people were thinking. NOVA did a show on this a bit back which describes in detail how everything works. And yes it should have been forced to do speech recognition, but oh well.
 
Arwin, that medical thing was actually what the IBM people were thinking. NOVA did a show on this a bit back which describes in detail how everything works.

It would be cool if IBM pushes it through. Like I said, there are stupid political, personal-pride and financial reasons that keep many doctors and medical experts from wanting to cooperate in this field, but the landscape may be changing sufficiently — for instance, in the Netherlands there is now an interesting balance between government, insurance companies and hospitals. That could easily go the wrong way, but right now it offers openings for some interesting changes. An insurance company could, for instance, work with IBM on this and offer discounts to hospitals that adopt the technology.

And IBM is one of the very few companies that could even just do it for the heck of it, to show that it can be done.
 
You know what would be even cooler though?
If they had integrated speech recognition, as you stated. Then imagine you are in a doctor's office and there is a microphone. As you talk to the doc, the computer tries to come up with its own findings. Then the doc glances at the screen, sees five different possibilities with likelihood statistics, and says "hm, I never thought of X" or "yeah, it is as I would suspect." I know there could be privacy concerns, but they could actually be dealt with pretty easily. The problem is that privacy concerns and malpractice concerns work in opposite directions: for privacy, the easy thing is to not associate the transcript with the patient's records and to delete it right after; for malpractice, the opposite is true.
 
BTW hilarious
http://www.dallasnews.com/business/...-revolutionize-human-computer-interaction.ece
IBM will use its Watson natural language technology with Nuance’s speech recognition software to create a system that allows a doctor to input a patient’s symptoms.

The computer will transcribe that speech into text, figure out what the doctor is asking for, scan related medical textbooks and journals, integrate data from the patient’s lab tests and family history, and offer the most likely diagnosis.

I was a day ahead.
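As a rough sketch of the pipeline that article describes — every name and data item below is made up, since neither IBM's nor Nuance's APIs are public:

# Hypothetical sketch of the described pipeline: extract symptoms from a
# transcript, scan a (toy) literature base, and rank likely diagnoses.
TOY_LITERATURE = {
    "influenza": {"fever", "cough", "aches"},
    "strep throat": {"fever", "sore throat"},
}

def extract_symptoms(transcript):
    """Crude keyword spotting, standing in for real language understanding."""
    known = {s for signs in TOY_LITERATURE.values() for s in signs}
    return {s for s in known if s in transcript.lower()}

def rank_diagnoses(symptoms):
    """Score each condition by symptom overlap, most likely first."""
    scores = [(cond, len(symptoms & signs) / len(signs))
              for cond, signs in TOY_LITERATURE.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

transcript = "Patient reports fever and a persistent cough."
print(rank_diagnoses(extract_symptoms(transcript)))
# influenza ranks first: 2 of its 3 toy signs matched

A real system would integrate lab results and family history at the ranking step, as the article says.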
 
I now officially love IBM. It took only nine years ... :cool:

Of course, all this still depends on the doctor properly inputting the symptoms (including findings from physical examination), but this is a good start. They should definitely pay a lot of attention, though, to helping doctors take the right diagnostic steps. But I guess you'll typically start from the complaints the patient voices.
 
I see in the first post that IBM talks about easy drug interaction lookup.

I know there's a popular program called Epocrates that has been in use for about a decade now that is a sort of instant access Physician's Desk Reference (a tome that Docs used to have to dig into for info). Epoc is pretty cool because it is constantly updated and can be run off of a smart phone or PDA.

The idea of a medical "decision engine" is pretty exciting to say the least.
 
I see in the first post that IBM talks about easy drug interaction lookup.

I know there's a popular program called Epocrates that has been in use for about a decade now that is a sort of instant access Physician's Desk Reference (a tome that Docs used to have to dig into for info). Epoc is pretty cool because it is constantly updated and can be run off of a smart phone or PDA.

The idea of a medical "decision engine" is pretty exciting to say the least.

Yeah, the basic idea is good, were it not for the fact that many conditions have the very same symptoms and that the input would come from the patient, who may be bad at explaining said symptoms.

What would be needed is a machine that can more or less instantly analyze a blood sample (and other things) to give you a diagnosis — one that, backed by its scientific nature, is bound to be as accurate as our medical knowledge allows.
 
Not really, though. The computer is set up to do machine learning; that is why it does as well as it does. Even if patients suck at describing symptoms, as long as they suck in a similar way it will still learn what diagnosis makes sense.
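A toy illustration of that point — if patients consistently say "tummy" instead of "abdominal", a text classifier learns the mapping anyway (made-up data, scikit-learn for brevity):

# Toy demo: a classifier can learn from consistently 'bad' descriptions,
# because consistency is all that matters. All data here is made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Patients never use textbook terms, but they misdescribe consistently.
reports = ["my tummy hurts real bad", "tummy ache after eating",
           "head is pounding", "pounding head and seeing spots"]
diagnoses = ["gastritis", "gastritis", "migraine", "migraine"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(reports, diagnoses)

print(model.predict(["awful tummy pain"]))  # ['gastritis']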
 