Recently released scenes of the upcoming remake of V combine two of our favorite things: creepy aliens and Party of Five! [via thrfeed]
But there’s plenty of room for a condensed run-through of all the latest technology, from motion capture to the ever-ubiquitous CGI. Which is reason enough to like the Science Channel’s Science of the Movies series, premiering Tuesday, May 26. Hosted by AchieveNerdvana.com blogger and Geekscape columnist Nar Williams, it’s six episodes on the behind-the-scenes geekosity that’s responsible for everything from Terminator 3 to The Fast and the Furious to Dexter to, yes, Star Wars.
Of course, take away all the blockbuster jargon and Hollywood sheen, and what you’re really watching is a tour through the ranks of ironic T-shirted, scraggly-facial-haired dudes that create the world’s biggest movies. Williams hobnobs with the best and baddest, from John Dykstra (yup, the guy who blew up the Death Star) to the Strause brothers, whose visual effects shop, Hydraulx, dominates the CGI market (300, anyone?).
And the first Star Wars may have been 30+ years ago, but its spirit lives on in the hearts of harp-music-loving pre-teens everywhere [via The Website at the End of the Universe]:
Over on 80beats, my colleague Eliza Strickland points out some interesting research on an autonomous laboratory. A group of four networked computers connected to a range of lab equipment was left alone to tease out some aspects of yeast genetics. The computers formulated hypotheses about how various genes operated, then designed experiments to test those hypotheses. The upshot was a number of minor, but worthwhile, advances in our knowledge of yeast biology.
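The loop at the heart of that research (propose a hypothesis, run an experiment, keep what the data supports) can be sketched in a few lines. Everything below is invented for illustration: the gene names, the growth numbers, and the essentiality threshold are stand-ins, and `run_experiment` is a fake assay where the real system drove actual lab robotics.

```python
def run_experiment(gene, knockout_growth):
    """Simulated assay: measured growth of a yeast strain with the
    given gene knocked out (a stand-in for real lab hardware)."""
    return knockout_growth[gene]

def autonomous_cycle(genes, knockout_growth, threshold=0.5):
    """Closed loop: hypothesize that each gene is essential, design a
    knockout experiment, and retain the hypotheses the data supports."""
    confirmed = []
    for gene in genes:
        # Hypothesis: this gene is required for growth on this medium.
        growth = run_experiment(gene, knockout_growth)
        if growth < threshold:  # knockout barely grows, so gene looks essential
            confirmed.append(gene)
    return confirmed

# Toy data: knockout growth rates relative to wild type.
data = {"YFG1": 0.1, "YFG2": 0.9, "YFG3": 0.3}
print(autonomous_cycle(list(data), data))  # ['YFG1', 'YFG3']
```

The real system's hypotheses and experiment designs were far richer than a single threshold test, of course, but the shape of the cycle is the same.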
Teaching a computer how to learn is a perennial topic in artificial intelligence research, and one that’s long been mined in science fiction. The moment when the computer demonstrates it has learned how to learn is usually a pretty significant moment in any story it’s in, not least because it is one of the Laws Of Science Fiction that once a computer has started to learn, it will continue to learn at an ever accelerating rate. (A corollary of this Law states that if the computer isn’t already self-aware, sentience will arise by the end of the next chapter or act at the very latest.) Interestingly, the “My God! It’s learnt how to learn!” moment seems to be dwelt on in movies and TV shows (WarGames, Colossus, Terminator 3) much more than it crops up in literary science fiction, where artificial intelligence is often simply presented as a fait accompli. So does anyone have recommendations for a good literary treatment of the birth of an A.I.? (Fredric Brown’s 1954 short-short story “Answer” is of course taken as a given classic of the genre.)
Monday night was the last new episode of Terminator: The Sarah Connor Chronicles until February. The subplot featured Agent Ellison’s hesitant attempts to tutor a nascent artificial intelligence that may or may not grow up to become Skynet, the computer system that attempts to destroy humanity in the future. To speed the process, Ellison’s boss has hooked the A.I. up to the recovered body of a previously dispatched terminator, explaining to the horrified Ellison that “Many believe that tactile experience is integral to A.I. development.” This was a spot-on statement, directly echoing the work of people like Rodney Brooks and his colleagues at the MIT Computer Science & Artificial Intelligence Laboratory.
Last night’s episode of Terminator: The Sarah Connor Chronicles centered on Skynet going after a non-Connor-clan target: an unborn child whose natural immunity would one day provide a cure for a lethal bioweapon developed in the future. It would be easy to think that this would be overkill, even for Skynet — instead of going through all the trouble of sending a terminator back through time, why not just brew up a different bioweapon? The answer is that making militarily effective bioweapons is actually quite tough.
On Monday night’s Terminator, part of the plot revolved around a new microprocessor that promised to work at the “12-nanometer node.”
The Connor clan became very interested in this chip, since it’s exactly the kind of technology that might enable a cyborg to have an artificial intelligence system powerful enough to make it a lethal killing machine and deliver clever quips.
Vision, for the SciFi robot, is a much richer affair than it is for us ordinary mortals. Even the eyes of a trash compactor like Wall-E can home in on an object, zoom in or out as needed, apply light filters, and maintain a heads-up display showing velocity or coordinates, as needed. It’s so common in TV and movies that when a movie starts with a view through cross hairs, a light filter, and a rapid zoom on something or someone, it’s an instant signifier that we, the audience, are seeing the world from a robot’s point of view. But not for long, perhaps. A couple of University of Washington researchers are ready to take the cool-vision mantle back from the robots.
In essence, what Dr. Babak Parviz has accomplished is to put an integrated circuit into a contact lens. Using a process called self-assembly, Parviz arranges nanometer-thick metal components onto the organic polymer that makes up the contact lens, then connects them to tiny light-emitting diodes. The LEDs will be able to paint information on top of whatever scene you are looking at. The team hasn’t gotten to the point of lighting up the diodes, but it has begun testing the lenses on animals. So far, rabbits can withstand wearing them for 20 minutes with no ill effects.
But once the microchip is in place, Parviz thinks it will be a short hop, technologically speaking, to getting those robot features built into the lens. Perhaps most of us don’t need targeting computers, but the zoom feature could sure be handy when I have to watch baseball from the nosebleed section, and I have to figure that recording video straight from the contact lens, Finder style, can’t be far behind.
Most of the gadgetry on the lens will be arranged into a ring that surrounds the transparent part of the eye. As contact lens wearers know, the sclera has no nerves in it, which makes it a great spot for the lens’s wireless communications hardware and other components. Actually, they’re hoping to use that space for solar panels.
The one thing these contact lenses can’t do? Fix your eyesight. I imagine that will be along soon.
Last night’s Terminator: The Sarah Connor Chronicles brought into the foreground an idea that’s been floating around in the background of the Terminator franchise for some time: that the flesh-and-blood bodies that surround terminator exoskeletons are based on real people. In the future, a young woman called Allison Young falls into the hands of Skynet, and given that she looks exactly like terminator Cameron, we have a fair idea of how things are going to turn out for her. In the real world, how close are we to creating not just a generic individual, but a doppelganger of a specific individual?
Ever since the first Terminator movie in 1984, Terminator cyborgs have had the ability to duplicate the voice of any given human they hear, an ability deployed again in last night’s episode of Terminator: The Sarah Connor Chronicles, when our plucky band of heroes has its cell phones intercepted. It’s not so far-fetched — pretty much this exact scenario has been worrying real security researchers for some time.