Last night DiscoBlog traipsed down to the fairly swanky headquarters of giant advertising firm Saatchi & Saatchi, where the British-based ad folks recognized 10 “world-changing ideas”—inventions to improve people’s lives in one way or another.
The winner among the finalists was the LifeStraw, a foot-long filtering tube that purports to let you (or your friends in the developing world) drink even the filthiest, most microbe-infested water without getting sick. We’re not sure what the criteria were for winning this award—the LifeStraw isn’t exactly new, having been named a Best Invention of the Year by Time in 2005—but it seems a legitimately great item. Wily event attendees insist they knew it would win because it fit in with what Saatchi chose in the past.
While LifeStraw may indeed have been the most world-changing “idea” at the event, it did not have the most compelling presentation. (Perhaps it was handicapped in this regard by the fact that the plentiful Saatchi-provided wine seemed to be downright hygienic.)
Some other finalists’ presentations were both more future-looking and more exciting for the short-attention-spanned blogger in all of us.
Harvard/MGH neuroscientist John Pezaris showed off part of the system he’s using to try to restore sight to the blind. Unlike other devices that try to tap into a blind person’s functional retina, Pezaris aims to implant electrodes into the lateral geniculate nucleus (LGN), an area deep inside the brain, within the thalamus, that relays visual signals from the optic nerve to the primary visual cortex at the back of the brain. So far he and a collaborator have put a single electrode in a monkey’s LGN and shown that they can use the electrode to make the monkey “see” single spots of light.
But one pixel obviously won’t take you very far. So now Pezaris is trying to figure out how many pixels of visual signal a blind person would need to be able to see to have some useful level of vision—i.e., how many electrodes to stick in the LGN. To figure this out, he set up a machine that simulates what you would see if you were using the artificial-vision device. The machine follows your eye movements around an obscured image on a computer screen and reveals X number of pixels near where you’re looking; given a certain number of pixels and the eye’s inquisitive habit of scanning to see different parts of the image, the picture becomes identifiable.
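For the curious, the core idea of that simulation is simple enough to sketch in a few lines of code. Here’s a minimal, hypothetical version in Python (using NumPy): given an image, a gaze point, and a pixel budget, it reveals only a small patch around where you’re “looking” and blanks out the rest. The function name and the square-patch geometry are our own illustrative choices, not details from Pezaris’s actual setup.

```python
import numpy as np

def gaze_contingent_view(image, gaze_row, gaze_col, n_pixels):
    """Reveal roughly n_pixels of the image in a square patch around the
    gaze point; everything else stays blank, mimicking a low-pixel-count
    visual prosthesis."""
    side = max(1, int(round(np.sqrt(n_pixels))))  # side length of the patch
    half = side // 2
    view = np.zeros_like(image)
    # Clip the patch so it stays inside the image bounds.
    r0 = max(0, gaze_row - half)
    c0 = max(0, gaze_col - half)
    r1 = min(image.shape[0], r0 + side)
    c1 = min(image.shape[1], c0 + side)
    view[r0:r1, c0:c1] = image[r0:r1, c0:c1]
    return view

# Toy 8x8 "image": a bright vertical bar the viewer could find by scanning.
img = np.zeros((8, 8), dtype=int)
img[:, 3] = 9

# Fixate near the bar with a ~16-pixel budget: part of the bar is revealed.
patch = gaze_contingent_view(img, 4, 3, 16)
```

In the real experiment the gaze point would come from an eye tracker and update continuously, so the viewer builds up a mental picture of the whole image from many small glimpses—exactly the scanning behavior described above.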
He also mentioned that funding agencies have started to lean on artificial-sight researchers to standardize the different levels of vision they hope to bestow on the blind. (“Congratulations, you have level 3 sight!”) The first level might be the ability to distinguish a door from a window (easy to take for granted when you have normal vision), the next might be to recognize people you know, and so on. As for the ability to drive, Pezaris says that is unfortunately quite a ways off.