Doctors are not doing so well. In addition to being extremely expensive to train, to maintain, and, of course, to visit, they have a lot of other problems. If your doctor is a drunk, an addict, or just plain old incompetent, his or her colleagues may not tell you or anyone else. Even when doctors are sober and sharp, their diagnoses are often, ahem, less than correct. Mark Walker’s “Uninsured, Heal Thyself” paints a pretty terrifying picture:
Physicians can and do misdiagnose frequently: they prescribe for nonexistent diseases or injuries and fail to notice symptoms or make the correct inferences. An article in the Journal of the American Medical Association noted: “Two 1998 studies validate the continued truth that there is an approximately 40% discordance between what clinical physicians diagnose as causes of death antemortem and what the postmortem diagnoses are” (Lundberg, 1998). This is a pretty shocking statistic: in 4 out of 10 deaths, what physicians think is the cause of death before autopsy disagrees with what the autopsy actually finds.
Egads. Is there any solution to the doctor debacle?
In his 1950 paper “Computing Machinery and Intelligence,” Alan Turing proposed what is now known in artificial intelligence as the Turing test. The idea is that if you are unable to discriminate between a computer and a human, each answering your questions via a keyboard and screen, then the computer is intelligent.
There are many problems with this idea, but despite them it remains a compelling benchmark, and one that has yet to be reached. Now consider the following variation: rather than having your computer and human answer any old question, the questions have to be like those on the TV quiz show Jeopardy! – trivia clues presented in the form of answers, for which you must supply the question.
Even this greatly restricted version of the Turing test is very challenging, but I.B.M.’s machine “Watson” has recently made intriguing steps toward passing it. Watson takes any Jeopardy-type clue and gives a response. It was not developed as a new type of intelligence test, but as a grand challenge to beat a human at a language-based task – a Deep Blue of language (IBM’s Deep Blue chess-playing computer beat the world chess champion in 1997). You can challenge it yourself here. It currently uses a fixed corpus of millions of documents and a sophisticated parallelized statistical algorithm running on a supercomputer. Because the algorithm is parallelized, it can try out a large number of possible interpretations of the clue at once and pick the most likely one.
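The generate-many-candidates-then-score idea can be illustrated with a toy sketch. This is not IBM's actual algorithm – the scoring function, the candidate structure, and the example clues are all invented for illustration – but it shows the basic shape: score every candidate interpretation in parallel, then keep the best.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy illustration (NOT IBM's real pipeline): each candidate "interpretation"
# of a clue carries a set of supporting terms; we score candidates against the
# evidence in parallel and return the highest-scoring one.

def score(interpretation, evidence):
    """Hypothetical scorer: count evidence terms the interpretation supports."""
    return sum(term in interpretation["supports"] for term in evidence)

def best_interpretation(candidates, evidence):
    # Evaluate all candidates concurrently, mirroring the parallel search.
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda c: score(c, evidence), candidates))
    return max(zip(candidates, scores), key=lambda pair: pair[1])[0]

candidates = [
    {"answer": "What is Toronto?", "supports": {"city", "canada"}},
    {"answer": "What is Chicago?", "supports": {"city", "usa", "airport"}},
]
print(best_interpretation(candidates, ["city", "usa", "airport"])["answer"])
# → What is Chicago?
```

The real system ranks thousands of candidates with hundreds of evidence-scoring strategies, but the argmax-over-parallel-scores skeleton is the same.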
Over on 80 beats, my colleague Eliza Strickland points out some interesting research on an autonomous laboratory. A group of four networked computers connected to a range of lab equipment was left alone to tease out some aspects of yeast genetics. The computers formulated hypotheses about how various genes operated, then devised experiments to test them. The upshot was a number of minor, but worthwhile, advances in our knowledge of yeast biology.
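The hypothesize-experiment-eliminate loop at the heart of such a system can be sketched in miniature. Everything here is an invented stand-in – the gene/enzyme pairings, the simulated "lab," and the function names are all assumptions for illustration, not the real system's machinery, which drives physical robotics:

```python
# Toy hypothesize-experiment-eliminate cycle (assumed structure, not the real
# system). Each hypothesis pairs a gene with the enzyme it supposedly encodes;
# a simulated knockout experiment reports whether deleting the gene abolishes
# the enzyme's activity, and hypotheses the data contradict are discarded.

def ground_truth(gene):
    # Stand-in for the physical lab; these mappings are made up for the demo.
    return {"GENE_A": "transaminase", "GENE_B": "kinase"}[gene]

def knockout_experiment(gene, enzyme):
    """Simulated experiment: does knocking out `gene` remove `enzyme` activity?"""
    return ground_truth(gene) == enzyme

def eliminate(hypotheses):
    """Run one experiment per hypothesis; keep only those the data support."""
    return [(g, e) for g, e in hypotheses if knockout_experiment(g, e)]

hypotheses = [
    ("GENE_A", "transaminase"),  # consistent with the simulated lab
    ("GENE_A", "kinase"),        # will be refuted by its experiment
    ("GENE_B", "kinase"),        # consistent
]
print(eliminate(hypotheses))
# → [('GENE_A', 'transaminase'), ('GENE_B', 'kinase')]
```

The actual system closes this loop physically: its "run experiment" step commands liquid-handling robots, and the surviving hypotheses seed the next round of experiment design.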
Teaching a computer how to learn is a perennial topic in artificial intelligence research, and one that’s long been mined in science fiction. The moment when the computer demonstrates it has learned how to learn is usually a pretty significant moment in any story it’s in, not least because it is one of the Laws Of Science Fiction that once a computer has started to learn, it will continue to learn at an ever accelerating rate. (A corollary of this Law states that if the computer isn’t already self-aware, sentience will arise by the end of the next chapter or act at the very latest.) Interestingly, the “My God! It’s learnt how to learn!” moment seems to be dwelt on in movies and TV shows (WarGames, Colossus, Terminator 3) much more than it crops up in literary science fiction, where artificial intelligence is often simply presented as a fait accompli. So does anyone have recommendations for a good literary treatment of the birth of an A.I.? (Fredric Brown’s 1954 short-short story “Answer” is of course taken as a given classic of the genre.)
It must be nice to have a car like KITT that can, amongst his many other useful abilities, transform. Sure, it’s handy for crime fighting and all, but being able to turn into a van or a truck means Michael Knight never needs to rent a moving truck or worry about delivery when there’s a big Ikea sale. KITT’s ability to rearrange himself at the molecular level means that he can transform into any number of car-like shapes, even ones he’s never tried before. And that means that he — and his deceased creator Dr. Graiman — has solved the problem of getting an artificial intelligence to use newly added parts. Typically a robot has to have a whole new set of code to be able to handle a new tool or sensor. Sure, most computers can handle plug-and-play attachments these days, but they still require a set of pre-written code to drive the newly added part. Artificial intelligence designers want the robot to be able to write that code itself.
Monday night was the last new episode of Terminator: The Sarah Connor Chronicles until February. The subplot featured Agent Ellison’s hesitant attempts to tutor a nascent artificial intelligence that may or may not grow up to become Skynet, the computer system that attempts to destroy humanity in the future. To speed the process, Ellison’s boss has hooked the A.I. up to the recovered body of a previously dispatched terminator, explaining to the horrified Ellison that “Many believe that tactile experience is integral to A.I. development.” This was a spot-on statement, directly echoing the work of people like Rodney Brooks and his colleagues at the MIT Computer Science & Artificial Intelligence Laboratory.