It’s been a hectic year’s end; I’ve been overwhelmed with year-end stuff, and have been a bad, bad blogger. The good news is that I’m back at it now, but the fatalistic part of me asks “What’s the point? After all, the world is going to end in a couple hours.” You’ve not noticed? Perhaps that’s best, because it reduces the likelihood of widespread panic, but our Gregorian calendar ends at midnight December 31st! The obvious implication is that it’s the end of the world! Clearly Pope Gregory XIII had advanced divinely-inspired knowledge of the coming cataclysm.
At least that’s the logic being used to advance the whole 2012 mythos.
For both of you who haven’t heard about this, the ancient Mayan calendar ostensibly comes to an end in 2012, and there is no shortage of doomsayers who claim that the Mayans somehow had advance knowledge of the end of the world, and that their calendar reflects this. With 2012 slightly over a year away, you can be certain that this is a topic to which we’ll be turning here fairly regularly, even though it more correctly falls under the purview of “Fiction not Science”.
It’s understandable, actually. From an evolutionary standpoint, it was practically yesterday that we hunted/gathered our own food and lived in constant fear of being eaten by the saber-toothed cat. So in some senses our bodies are still wired for a way of life that hasn’t existed for several thousand years. Most of us, with varying frequencies and intensities, still need to feel that primal surge of adrenaline. Some of us, myself among them, enjoy violent games like football, rugby, or hockey. Some of us, myself sometimes among them, get the ol’ adrenaline pumping through extreme sports. Some of us, myself rarely among them, enjoy roller coasters (not a fan). Many of us in all the previous categories scare ourselves by watching horror or action movies.
Some, myself definitely not among them, worry about the End of the World Scenario Du Jour. This is neither uncommon nor surprising; humans have worried about the end of the world since somebody first realized that it might, in fact, have an end. With 2012 now a year away, The End seems to be more of a player in the zeitgeist and is an ever-increasing topic of relevance in media and popular conversation. The popularity of my friend (and fellow Discover blogger) Phil Plait’s book Death From the Skies: These Are the Ways the World Will End speaks to this. Even mainstream media outlets like Fox News, LiveScience, and Fox News again recently ran pieces examining end-of-the-world scenarios (and even though the second Fox entry was about debunked scenarios for the End, it still implies that it’s in the forefront of thought).
In just a few days, the first decade of the 21st Century will be over. Can we finally admit we live in the future? Sure, we won’t be celebrating New Year’s by flying our jetpacks through the snow or watching the countdown from our colony on Mars, and so what if I can’t teleport to work? Thanks to a combination of 3G internet, a touch-screen interface, and Wikipedia, the smartphone in my front pocket is pretty much the Hitchhiker’s Guide to the Galaxy. I can communicate with anyone anywhere at anytime. I can look up any fact I want, from which puppeteers played A.L.F. to how many flavors of quark are in the Standard Model, and then use the same touch-screen device to take a picture, deposit a check, and navigate the subway system. We live in the future, ladies and gentlemen.
But you may still have your doubts. Allow me to put things in perspective. Imagine it’s 1995: almost no one but Gordon Gekko and Zack Morris has a cellphone, pagers are the norm; dial-up modems screech and scream to connect you to an internet without Google, Facebook, or YouTube; Dolly has not yet been cloned; the first PlayStation is the cutting edge in gaming technology; the Human Genome Project is creeping along; Mir is still in space; MTV still plays music; Forrest Gump wins an Academy Award and Pixar releases its first feature film, Toy Story. Now take that mindset and pretend you’re reading the first page of a new sci-fi novel:
The year is 2010. America has been at war for the first decade of the 21st century and is recovering from the largest recession since the Great Depression. Air travel security uses full-body X-rays to detect weapons and bombs. The president, who is African-American, uses a wireless phone, which he keeps in his pocket, to communicate with his aides and cabinet members from anywhere in the world. This smart phone, called a “Blackberry,” allows him to access the world wide web at high speed, take pictures, and send emails.
It’s just after Christmas. The average family’s wish-list includes smart phones like the president’s “Blackberry” as well as other items like touch-screen tablet computers, robotic vacuums, and 3-D televisions. Video games can be controlled with nothing but gestures, voice commands, and body movement. In the news, a rogue Australian cyberterrorist is wanted by the world’s largest governments and corporations for leaking secret information over the world wide web; spaceflight has been privatized by two major companies, Virgin Galactic and SpaceX; and Time magazine’s Person of the Year (and subject of an Oscar-worthy feature film) created a network, “Facebook,” which allows everyone (500 million people) to share their lives online.
Does that sound like the future? Granted, there’s a bit of literary flourish in some of my descriptions, but nothing I said is untrue. Yet we do not see these things as incredible innovations, but as just boring parts of everyday life. Louis C.K. famously lampooned this attitude with his “Everything is amazing and nobody is happy” interview with Conan O’Brien. Why can’t we see the futuristic marvels in front of our noses and in our pockets for what they really are?
It’s good to be back to blogging after a brief hiatus. As part of my return to some minimal level of leisure, I was finally able to watch the movie Moon (directed and co-written by Duncan Jones) and I’m glad that I did. (Alert: many spoilers ahead). Like all worthwhile art, it leaves nagging questions to ponder after experiencing it. It also gives me another chance to revisit questions about how technology may change our sense of identity, which I’ve blogged a bit about in the past.
A brief synopsis: Having run out of energy on Earth, humanity has gone to the Moon to extract helium-3 for powering the home planet. The movie begins with shots outside of a helium-3 extraction plant on the Moon. It’s a station manned by one worker, Sam, and his artificial intelligence helper, GERTY. Sam starts hallucinating near the end of his three-year contract, and during one of these hallucinations drives his rover into a helium-3 harvester. The collision causes the cab to start losing air and we leave Sam just as he gets his helmet on. Back in the infirmary of the base station, GERTY awakens Sam and asks if he remembers the accident. Sam says no. Sam starts to get suspicious after overhearing GERTY being instructed by the station’s owners not to let Sam leave the base.
I thought about closing out the year with news of the strawberry genome sequencing project, and dipping into the results from the cocoa genome sequencing project, while perhaps enjoying a rainbow from a solar-powered rainbow-making machine. They all seemed cool and futuristic and almost certainly something we’d find in the land of science fiction.
But then, there it was: A Robot Christmas. Two weeks ago, the team at Robots Podcast put out a call for robotics labs to make holiday videos, and so far six different robotics labs have responded with videos of their machines singing or playing Christmas carols, decorating, and otherwise wishing us season’s greetings. Since I can’t be the only one who wanted to know how our future overlords celebrate the holiday, I thought I’d share. Happy New Year everyone!
A Robotic Christmas, Laboratory of Intelligent Systems, EPFL, Lausanne, Switzerland
D. Boucher at The Economic Word generated the above chart with Google’s endlessly entertaining Ngram viewer. The Ngram viewer lets you search for the number of occurrences of a specific word in every book Google has indexed thus far. As you can see, “future” peaked in 2000, leading Boucher to wonder if we’re beyond the future. Yet, Boucher hedges:
Strangely, however, I look at the technological improvements over the past ten years and I see revolutionary ideas one on top of the other (for instance, the iPhone, iPad, Kindle, Google stuff, Social Networks…). My first reaction is to blindly hypothesize that our current technological prowess may distract us from the future. If it is the case, could it be that technology is a detriment to forward-looking thinkers?
I thought it might be fun to Ngram the Science Not Fiction topics of choice and see if we live up to our reputation as rogue scientists from the future. I figured if we’re all from the future, then our topics should either a) match the trend or b) buck the trend. I’m not sure which is right, but the results were quite interesting. Charts after the jump!
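For intuition about what an Ngram query actually computes, here is a minimal sketch: a 1-gram score for a word in a given year is just that word’s count divided by the total number of words published that year. The tiny corpus below is invented purely for illustration; Google’s viewer performs the same arithmetic over millions of scanned books.

```python
from collections import Counter

# Toy corpus: year -> texts "published" that year (illustrative data only)
corpus = {
    1995: ["the future is near", "dial up modems and the future"],
    2000: ["the future the future everywhere", "future shock revisited"],
    2010: ["smartphones are here now", "the cloud is now"],
}

def ngram_frequency(word, corpus):
    """Relative frequency of `word` per year, like a 1-gram query."""
    series = {}
    for year, books in corpus.items():
        counts = Counter(w for text in books for w in text.split())
        total = sum(counts.values())
        # Counter returns 0 for words that never appear in that year
        series[year] = counts[word] / total
    return series

freq = ngram_frequency("future", corpus)
print(freq)  # in this toy data, "future" peaks in 2000 and vanishes by 2010
```

The real viewer also applies smoothing and normalizes against a per-year corpus size, but the core quantity is this ratio.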
OK, so—whoa. Anyone wielding designer Kaylene Kau‘s prosthetic tentacle would certainly become the instant favorite of any Elder Gods she met. But aside from its ability to preserve her from being eaten by Cthulhu, Kau’s prosthetic tentacle abandons a way of thinking about prosthetics — that they have to replicate the lost limb as exactly as possible — for something simple, usable, and elegant.
Instead of a massively complicated set of servos, gears, and microchips, the user manipulates the tentacle through two switches: one tightens a cord, causing the tentacle to curl and grip an object; the other lets it go. It’s primarily designed as an aid in conjunction with a biological arm, but it can grip large and small objects effectively.
The arm can join a suite of prosthetic limbs that are changing the way medicine and the rest of us think about replacing a lost limb. Last year, New Zealander Nadya Vessey, who’s missing both legs, asked special effects company Weta (all three Lord of the Rings movies) to make her a mermaid prosthetic she could use for swimming. They needed eight staff members and two and a half years, but they did it, and now Vessey swims in the ocean with her fin.
I really want to know: Would you eat Soylent Green?
Remember (*spoiler alert!* sheesh!) Soylent Green is people, as Charlton Heston discovered. But no one ever talks about the rest of that movie, mostly because it’s kind of terrible. But for what it was, there were some cool ideas in Soylent Green.
First, a quick recap: In the movie, the earth is overpopulated and over-polluted. Global warming is in full swing and even rich people have to eat crummy food. The government hands out rations of Soylent products, which are awful, flavorless cubes and loaves of “soy” (actually plankton, but really it’s irrelevant because it’s people) foodstuff that look like red, blue, or green Play-Doh. When you die, you go to a death-a-torium of sorts where you pay a small fee, then watch a really pretty movie filled with scenes from nature and peaceful music. You die quickly and painlessly from a colorless, odorless gas.
Then your body is shipped off and turned into Soylent Green which everyone loves to eat.
Ok! That last part is traumatic, I admit. But Soylent Green isn’t The Road. Marauding hordes of hillbilly cannibals aren’t threatening to strip the meat from your bones. You die peacefully. There is no space for anything in the movie’s version of the future (people are everywhere), and cremation involves burning, which isn’t exactly great for global warming. So what to do with the bodies of humans in a world where there is no room to put them and everyone is starving? What to do indeed…
So, in the spirit of ethical inquiry, I’d like to do some thought experiments. We’re all rational, scientifically minded individuals. In what situations would a reasonable person eat food made of people? Let me set up some scenarios for you, and you tell me how much you’d love to eat Soylent Green (which is people) in that scenario. Here we go!
First, some ground rules:
Here’s the extended version of our interview with director Joe Kosinski from the December issue of DISCOVER, in which the first-time feature film director talks about reinventing the light cycle, building suits with on-board power, and how time passes in Tron compared to the real world.
Why return to Tron, and why now?
The original Tron was conceptually so far ahead of its time with this notion of a digital version of yourself in cyberspace, an idea I think people had a hard time relating to in the early 1980s. We’ve caught up to that idea—today it’s kind of second nature.
Visually, Tron was like nothing else I’d ever seen before: completely unique. Nothing else looked like it before, and nothing else has looked like it since—you know, hopefully until our movie comes out.
How did you think about representing digital space as a physical place?
Where the first movie tried to use real-world materials to look as digital as possible, my approach has been the opposite: to create a world that felt real and visceral. The world of Tron has been sitting isolated, disconnected from the Internet for the last 28 years, and in that time it has evolved into a world where the simulation has become so realistic that it feels like we took motion picture cameras into this world and shot the thing for real. It has the style and the look of Tron, but it’s executed in a way that you can’t tell what’s real and what’s virtual. I built as many sets as I could. We built physically illuminated suits. The thing I’m most proud of is actually creating a fully digital character, who’s one of the main characters in our movie.
What did you keep from Tron, and what evolved?
Without getting into the ethics of WikiLeaks’ activities, I’m disturbed that Visa, MasterCard, and PayPal have all seen fit to police the organization by refusing to act as a middleman for donations. The whole affair drives home how dependent we are on a few corporations to make e-commerce function, and how little those corporations guarantee us anything in the way of rights.
In the short term, we may be stuck, but in the longer term, quantum money could help solve the problems by providing a secure currency that can be used without resort to a broker.
Physicist Steve Wiesner first proposed the concept of quantum money in 1969. He realized that since quantum states can’t be copied, their existence opens the door to unforgeable money.
Heisenberg’s famous Uncertainty Principle says you can either measure the position of a particle or its momentum, but not both to unlimited accuracy. One consequence of the Uncertainty Principle is the so-called No-Cloning Theorem: there can be no “subatomic Xerox machine” that takes an unknown particle, and spits out two particles with exactly the same position and momentum as the original one (except, say, that one particle is two inches to the left). For if such a machine existed, then we could determine both the position and momentum of the original particle—by measuring the position of one “Xerox copy” and the momentum of the other copy. But that would violate the Uncertainty Principle.
…Besides an ordinary serial number, each dollar bill would contain (say) a few hundred photons, which the central bank “polarized” in random directions when it issued the bill. (Let’s leave the engineering details to later!) The bank, in a massive database, remembers the polarization of every photon on every bill ever issued. If you ever want to verify that a bill is genuine, you just take it to the bank.
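To see why this works, here is a toy classical simulation of Wiesner’s idea. It is not real quantum mechanics — each “photon” is just a (basis, bit) pair, and measuring in the wrong basis is modeled as a coin flip plus collapse — but it captures the arithmetic: a forger who doesn’t know the bank’s secret bases guesses wrong half the time, so a copied bill passes verification with probability only (3/4) per photon. All names here are my own illustrative choices, not from Wiesner’s paper.

```python
import random

BASES = ("+", "x")  # rectilinear and diagonal polarization bases

def issue_bill(n_photons, rng):
    """Bank issues a bill: each photon gets a random basis and bit,
    and the bank secretly records both in its database."""
    record = [(rng.choice(BASES), rng.randrange(2)) for _ in range(n_photons)]
    return list(record), record  # the genuine bill carries exactly these states

def measure(photon, basis, rng):
    """Measure a photon. Correct basis: deterministic outcome, state intact.
    Wrong basis: random outcome, and the state collapses to the new basis."""
    p_basis, p_bit = photon
    if basis == p_basis:
        return p_bit, (basis, p_bit)
    out = rng.randrange(2)
    return out, (basis, out)

def verify(bill, record, rng):
    """Bank measures every photon in its recorded basis and compares bits."""
    for i, (basis, bit) in enumerate(record):
        out, bill[i] = measure(bill[i], basis, rng)
        if out != bit:
            return False
    return True

def counterfeit(bill, rng):
    """A forger who doesn't know the bases measures in guessed bases and
    prepares a copy from the outcomes; each guess is wrong half the time."""
    copy = []
    for i in range(len(bill)):
        guess = rng.choice(BASES)
        out, bill[i] = measure(bill[i], guess, rng)
        copy.append((guess, out))
    return copy

rng = random.Random(2012)
bill, record = issue_bill(64, rng)
print(verify(bill, record, rng))   # a genuine bill always passes: True
fake = counterfeit(bill, rng)
print(verify(fake, record, rng))   # passes only with probability (3/4)**64
```

Note that the forger’s wrong-basis measurements also disturb the original bill, which is exactly the no-cloning obstruction in miniature: you cannot extract the information needed to copy the states without leaving evidence.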
Katie Roiphe over at Slate is worried about helicopter parents screwing up their kids by trying to perfect them:
You know the child I am talking about: precious, wide-eyed, over-cared-for, fussy, in a beautiful sweater, or a carefully hipsterish T-shirt. Have we done him a favor by protecting him from everything, from dirt and dust and violence and sugar and boredom and egg whites and mean children who steal his plastic dinosaurs, from, in short, the everyday banging-up of the universe? The wooden toys that tastefully surround him, the all-sacrificing, well-meaning parents, with a library of books on how to make him turn out correctly— is all of it actually harming or denaturing him?
The article’s title “If we try to engineer perfect children, will they grow up to be unbearable?” grabbed me (of course). The “engineering” bit wasn’t, to my chagrin, referring to actual, genetic engineering. Instead, Roiphe was referring to parents obsessing over every aspect of their children’s lives, as if some misstep in the minutiae would produce an invalid. These parents seem to accept the nature/nurture divide and, realizing there is nothing they can do to improve the genetic make-up of their little bundle of joy, attempt to overwhelm nature with nurture. Yet in the process parents are inhibiting the, ahem, natural ways in which children learn and develop: unstructured play, exploration, discovery, and getting hurt. How can we get helicopter parents to back off? Maybe with genetic engineering?