You will spend a third of your life asleep. If you don’t, your waking hours will be of reduced quality and productivity. For 99% of us, seven hours a night is a biological necessity. For a select 1%, what Melinda Beck at the Wall Street Journal dubs the “Sleepless Elite,” less sleep equals more life. So-called short sleepers operate with a kind of low-intensity mania that allows them to go to bed late and wake up early without needing a gallon of coffee to get through the day. And, as it turns out, the ability might be genetic.
“My long-term goal is to someday learn enough so we can manipulate the sleep pathways without damaging our health,” says human geneticist Ying-Hui Fu at the University of California-San Francisco. “Everybody can use more waking hours, even if you just watch movies.”
Dr. Fu was part of a research team that discovered a gene variation, hDEC2, in a pair of short sleepers in 2009. They were studying extreme early birds when they noticed that two of their subjects, a mother and daughter, got up naturally about 4 a.m. but also went to bed past midnight.
Genetic analyses spotted one gene variation common to them both. The scientists were able to replicate the gene variation in a strain of mice and found that the mice needed less sleep than usual, too.
Dr. Fu’s research is a reason for excitement because the goal is not just to locate the gene, but to find a way to manipulate sleep pathways safely. For those of us already alive, that means there might be better, safer, more effective stimulants in the future. For those not yet born, genetic engineering may enable future generations to spend less time sawing logs and more time enjoying life. More life! Less sleep! It’s like a longevity enhancement that does nothing to extend your time alive, but instead maximizes your use of that time. But how do short sleepers use their time?
Limitless is one of the first movies to directly take on the idea of pharmaceutical enhancement. The trailer is here and fake viral ad for NZT is here. I’m already wary of the film based on the trailer. Not because of the acting, directing, or plot, which all look good enough. Instead, my problem is that the movie appears to take the same boring old stance on enhancement: the cost of making yourself superhuman is too high.
Limitless has a simple set-up: loser/author Bradley Cooper, who lives in filth and dresses like a hobo, is offered a pill that will make everything all better. The pill makes him much smarter, more creative, and more driven. Thanks to this newfound brilliance, Cooper makes boatloads of money and catches the eye of evil Robert De Niro, who threatens Cooper in various menacing and shadowy ways. Then the pill starts making Cooper crazy and his world starts crumbling around him. It’s Flowers for Algernon except with bespoke suits, exotic cars and international intrigue.
The reason I’m getting an overall vibe of “meh, who cares” from Limitless is that even though the film has a great bad guy in De Niro and his shadowy mega-corporation, it takes the easy way out and makes the drug the enemy as well. Flowers for Algernon is great because the main character, Charlie, has to cope with how his intelligence-burst impacts his social life. We’re confronted with the fact that increased intelligence doesn’t mean increased maturity, worldly experience, or romantic ability. Limitless ignores these deeper issues.
Wouldn’t it be more interesting if the problem of power and wealth was that Cooper had to deal with other wealthy and powerful people, who are, in general, incredibly awful? Or what would Cooper do if the drug simply stopped working? Or how would the drug affect his relationship with the woman he thought he loved once he becomes too smart – way too smart – for her and is bored by a person he once admired?
The theoretical enhancement drug at the center of Limitless could have allowed the writers to ask much more interesting questions than the trailer lets on. Maybe the movie will surprise me, but I doubt it.
Image: viral promotional material for Limitless
Matt Lamkin argues that universities shouldn’t ban cognitive-enhancing drugs like Ritalin and Adderall. Lamkin is a lawyer and, like myself, a master’s candidate in bioethics. He rightly believes that a ban would do little to promote fairness or safety among students. The rule followers would be at a disadvantage while the rule-breakers would be at a greater safety risk. But Lamkin doesn’t believe we, as a society, should be ok with cognitive enhancement usage. Instead, he argues:
The word “cheating” has another meaning, one that has nothing to do with competition. When someone has achieved an end through improper means, we might say that person has “cheated herself” out of whatever rewards are inherent in the proper means. The use of study drugs by healthy students could corrode valuable practices that education has traditionally fostered. If, for example, students use such drugs to mitigate the consequences of procrastination, they may fail to develop mental discipline and time-management skills.
On the other hand, Ritalin might enable a student to engage more deeply in college and to more fully experience its internal goods—goods she might be denied without that assistance. The distinction suggests that a blanket policy, whether of prohibition or universal access, is unlikely to be effective.
Instead, colleges need to encourage students to engage in the practice of education rather than to seek shortcuts. Instead of ferreting out and punishing students, universities should focus on restoring a culture of deep engagement in education, rather than just competition for credentials.
Lamkin’s argument is that cog-enhancers are an easy way out for those in school. Struggling to study builds character and good habits. Though he disapproves of cog-enhancers, I appreciate his hesitancy to involve the law. Lamkin doesn’t believe policing cog-enhancing drug usage is necessary, but would prefer honor codes opposing cog-enhancing drugs. He believes honor codes cause one to “internalize” the value of not using the drug. What is curious is that Lamkin doesn’t actually address what Ritalin and Adderall do for a student. As a person who has a legit prescription for Ritalin, and who knows his fair share of folks who’ve taken Adderall off-label, I believe I can speak to how cog-enhancers work in at least an anecdotal sense.
I am a scientist and academic by day, but by night I’m increasingly called upon to talk about transhumanism and the Singularity. Last year, I was science advisor to Caprica, a show that explored relationships between uploaded digital selves and real selves. Some months ago I participated in a public panel on “Mutants, Androids, and Cyborgs: The science of pop culture films” for Chicago’s NPR affiliate, WBEZ. This week brings a panel at the Director’s Guild of America in Los Angeles, entitled “The Science of Cyborgs” on interfacing machines to living nervous systems.
The latest panel to be added to my list is a discussion about the first transhumanist opera, Tod Machover’s “Death and the Powers.” The opera is about an inventor and businessman, Simon Powers, who is approaching the end of his life. He decides to create a device (called The System) that he can upload himself into (hmm, I wonder who this might be based on?). After Act 2, the entire set, including a host of OperaBots and a musical chandelier (created at the MIT Media Lab), becomes the physical manifestation of the now incorporeal Simon Powers, whose singing we still hear but who has disappeared from the stage. Much of the opera explores how his relationships with his wife and daughter change post-uploading. His daughter and wife ask whether The System is really him. They wonder if they should follow his pleas to join him, and whether life will still be meaningful without death. The libretto, by the renowned Robert Pinsky, renders these questions in beautiful poetry. It will open in Chicago in April.
These experiences have been fascinating. But I can’t help wondering: what’s with all the sudden interest in transhumanism and the Singularity?
People who work in robotics prefer not to highlight a reality of our work: robots are not very reliable. They break, all the time. This applies to all research robots, which typically flake out just as you’re giving an important demo to a funding agency or someone you’re trying to impress. My fish robot is back in the shop, again, after a few of its very rigid and very thin fin rays broke. Industrial robots, such as those you see on car assembly lines, can only do better by operating in extremely predictable, structured environments, doing the same thing over and over again. Home robots? If you buy a Roomba, be prepared to adjust your floor plan so that it doesn’t get stuck.
What’s going on? The world is constantly throwing curveballs that robot designers never anticipated. In a novel approach to this problem, Josh Bongard has recently shown how we can use the principles of evolution to make a robot’s “nervous system”—I’ll call it the robot’s controller—robust against many kinds of change. This study was done using large amounts of computer simulation time (it would have taken 50–100 years on a single computer), running a program that can simulate the effects of real-world physics on robots.
What he showed is that if we force a robot’s controller to work across widely varying robot body shapes, the robot can learn faster, and be more resistant to knocks that might leave your home robot a smoking pile of motors and silicon. It’s a remarkable result, one that offers a compelling illustration of why intelligence, in the broad sense of adaptively coping with the world, is about more than just what’s above your shoulders. How did the study show it?
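The core selection principle described above can be sketched in a few lines of code: score every candidate controller across several different body variants and keep only the ones that do well on average. To be clear, this is not Bongard’s actual algorithm (his study used full physics simulation and evolving body plans); everything below, including the toy one-dimensional “robot,” the `gain` parameter standing in for body shape, and the simple (1+1) hill-climber, is an invented illustration of the idea.

```python
import random

random.seed(0)

# Toy "robot": a 1D point that must reach a target position. The body
# parameter `gain` scales how strongly motor commands move the robot --
# a crude stand-in for varying body shapes.
def simulate(weights, gain, target=1.0, steps=20):
    pos = 0.0
    for _ in range(steps):
        error = target - pos
        command = weights[0] * error + weights[1]  # tiny linear controller
        pos += gain * command * 0.1
    return -abs(target - pos)  # fitness: 0 is perfect, negative is worse

# The key idea: a controller is scored on several morphologies at once,
# so only morphology-robust controllers can win.
def robust_fitness(weights, gains=(0.5, 1.0, 2.0)):
    return sum(simulate(weights, g) for g in gains) / len(gains)

# A minimal (1+1) evolutionary hill-climber: mutate, keep if better.
def evolve(generations=500):
    best = [random.uniform(-1, 1), random.uniform(-1, 1)]
    best_fit = robust_fitness(best)
    for _ in range(generations):
        child = [w + random.gauss(0, 0.1) for w in best]
        fit = robust_fitness(child)
        if fit > best_fit:
            best, best_fit = child, fit
    return best, best_fit

weights, fit = evolve()
print(f"evolved average fitness across morphologies: {fit:.3f}")
```

Because the fitness function averages over three different “bodies,” the evolved controller cannot overfit to one morphology; swapping `robust_fitness` for a single-body score is the quickest way to see the difference the paper’s approach makes.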
Brian Christian is an exemplar of the human species. In 2009, Christian participated in the annual Loebner Prize competition, which is based on Alan Turing’s eponymous test for determining if a computer is able to “think” like a human. Christian did not submit an A.I. he had programmed, but his own mind. Christian was a “confederate,” that is, one of the humans representing humanity in the competition. Five A.I. programs and five humans compete to be judged the most human:
During the competition, each of four judges will type a conversation with one of us for five minutes, then the other, and then will have 10 minutes to reflect and decide which one is the human. Judges will also rank all the contestants—this is used in part as a tiebreaking measure. The computer program receiving the most votes and highest ranking from the judges (regardless of whether it passes the Turing Test by fooling 30 percent of them) is awarded the title of the Most Human Computer.
What makes the competition so intriguing is that, as all contestants are ranked, be they human or computer, there is not only an award for the Most Human Computer, but also an award for the Most Human Human. Brian Christian is one of the vetted few humans who has earned the accolade. He describes his experience in the competition in his outstanding article “Mind vs. Machine” in The Atlantic. The article presents a snippet of what will surely be a wonderful book, The Most Human Human.
Like Sherry Turkle, Christian argues that machines are calling our humanity into stark relief. Yet he sees human-like computers not as automatons dragging us into banality, but as imperfect mirrors, reminding us of what makes us human by what they cannot reflect. I suspect it’s Christian’s double-life as a science journalist and poet that drew him to consider our dual-natured human brain:
Perhaps the fetishization of analytical thinking, and the concomitant denigration of the creatural—that is, animal—and bodily aspects of life are two things we’d do well to leave behind. Perhaps at last, in the beginnings of an age of AI, we are starting to center ourselves again, after generations of living slightly to one side—the logical, left-hemisphere side. Add to this that humans’ contempt for “soulless” animals, our unwillingness to think of ourselves as descended from our fellow “beasts,” is now challenged on all fronts: growing secularism and empiricism, growing appreciation for the cognitive and behavioral abilities of organisms other than ourselves, and, not coincidentally, the entrance onto the scene of an entity with considerably less soul than we sense in a common chimpanzee or bonobo—in this way AI may even turn out to be a boon for animal rights.
Among many conclusions Christian draws is that to be more human you must be yourself. But this is no idle command. The process of being oneself is an active, conscious, and, in some cases, laborious task. Consider your average conversation at a cocktail party – safe topics, non-confrontational questions, scripted answers. Part of Christian’s message, it seems, is not that we should worry about a computer sounding human, but that we humans may make the task too easy. So go forth and be quirky, odd, unique, expressive, honest, clever, eccentric, and above all yourself; in a phrase, be more human.
Image of The Most Human Human via Random House
We all have our favorite capacity or organ that we fault modern-day AI for lacking, the one we think machines need in order to become truly intelligent. For some it’s consciousness; for others it is common sense, emotion, heart, or soul. What if it came down to a gut? What if we need to make our AI capable of getting hungry, and of slaking that hunger with food, before the next real breakthrough? There’s some new information on the role of gut microbes in brain development that’s worth some mental mastication in this regard (PNAS via PhysOrg).
At night in the rivers of the Amazon Basin there buzzes an entire electric civilization of fish that “see” and communicate by discharging weak electric fields. These odd characters, swimming batteries which go by the name of “weakly electric fish,” have been the focus of research in my lab and those of many others for quite a while now, because they are a model system for understanding how the brain works. (While their brains are a bit different, we can learn a great deal about ours from them, just as we’ve learned much of what we know about genetics from fruit flies.) There are now well over 3,000 scientific papers on how the brains of these fish work.
Recently, my collaborators and I built a robotic version of these animals, focusing on one in particular: the black ghost knifefish. (The name is apparently derived from a native South American belief that the souls of ancestors inhabit these fish. For the sake of my karmic health, I’m hoping that this is apocryphal.) My university, Northwestern, did a press release with a video about our “GhostBot” last week, and I’ve been astonished at its popularity (nearly 30,000 views as I write this, thanks to coverage by places like io9, Fast Company, PC World, and msnbc). Given this unexpected interest, I thought I’d post a bit of the story behind the ghost.
When you bundle up all the time that gamers everywhere pour into their favorite games, the statistics are simply staggering. World of Warcraft’s legion of devotees, for example, have now spent more than 50 billion hours—about 6 million years—roaming their mythical, digital universe. Halo 3 players banded together to reach a kill tally of 10 billion, and when they blew past it, kept on shooting in pursuit of 100 billion.
If 10,000 hours of practice represents a sort of genius threshold, then gamers around the world are crossing that threshold. “This means that we are well on our way to creating an entire generation of virtuoso gamers,” writes game designer Jane McGonigal.
You might recognize McGonigal from her talk at TED, “Gaming Can Make a Better World.” But now that speech has become a full-on how-to guide: her new book Reality Is Broken, which came out yesterday. It details how games can fix what’s wrong with the real world (rather than just helping us escape from it).
When commentators bandy about those eye-popping numbers about how much time gamers invest in games, it’s usually done to bemoan the youth of America wasting their time on trivial pursuits. But to McGonigal, the allure of games can be used for good. Where our workaday lives can be filled with tedium and busy work, games challenge us with what she calls “hard fun”—hard work that’s satisfying. Games can improve our social connections, and they can provide a huge arena for collaboration.
Games, McGonigal writes, can fix what’s wrong with reality on small or large scales. A personal example: When she was struggling to recover from a concussion, she invented a game and enlisted friends and family as characters with tasks to fulfill, like coming over to cheer her up or keeping her off caffeine. A world-level example: EVOKE, a free online multiplayer game that challenges its players to solve major social ills like hunger and poverty.
We talked to her recently about her mission to save the world with games:
DISCOVER: What are you working on right now?
Jane McGonigal: There are a couple of big things. One of them is Gameful—we’re calling it a secret headquarters online for gamers and game developers who want to change the world. That was based on how many emails and Facebook messages I get from people who saw my TED talk or heard about these games and want to make one or play one, or learn how to design games so that they can make one. It’s a cross between a social network and a collaboration space online. So far we have over 1,100 game developers signed up. That’s a pretty significant proportion of game developers in the U.S. They committed to not just entertaining with games, but making a positive impact.
I also have a new start-up company, called Social Chocolate. It’s a company with which we’re creating gameful experiences that are based on scientific research about the power of positive emotions and positive relationships—basically, games that are designed from top to bottom to improve your real life and to strengthen your relationships.
In the book, you write about games’ ability to captivate and satisfy our minds on a “primal” level. Why are games so good at getting in touch with our primal nature?
That is such a cool question. We’ve been playing games for as long as humanity has had civilization—there is something primal about our desire and our ability to play games. It’s so deep-seated that it can bypass latter-day cultural norms and biases. If you give us a good game, we can overcome our society’s “make you feel stupid for dancing in front of other people” feeling, or trying to block all thoughts of death because it’s depressing and we’re not supposed to be depressed. The game is much older than any of these societal constraints. So that, I think, makes it a powerful platform for getting in touch with things we’ve lost touch with.
Dancing’s really interesting because if you look at the new games with Kinect and PS Move and the Wii, it’s opening up this different kind of gamer experience. When you watch people play these games, the word “joy” is what you’d use to describe it. It’s different from the kind of immersion that we think of with games where we’re really focused mentally. The physical engagement in combination with music and movement and other people makes it feel more like ritual than computer games have been.
Yet, you say, the mission to create joy in games is often hampered because of the “uncoolness” of happiness. So how do we get over ourselves?
I was curious when I started the Gameful project if game developers would really get behind this idea. Because, there’s definitely that sense among some game developers that it would ruin the fun to be serious about making people happy or improving real life. Is it corny? Does it take away from the fantasy of games? I think there will be a huge part of the game development world that continues to feel that way. But what I’m seeing every year at the gamers’ conferences is a higher percentage of the game industry waking up to the responsibility that comes with the power. I hate to say this, but it’s not so much about wanting to make the world a better place as it is saying, “Wow, we are wielding a tremendous amount of power over young people’s lives. This is great; we’ve invented this powerful medium that’s capable of engaging people like nothing else. But is that what we want to do with our lives, or do we want to do something that matters while we’re wielding that power?”
If you make it a game, gamers will play it no matter what your motivation is in making it. FoldIt is a good example. Clearly, a lot of gamers would rather cure cancer while they’re gaming than do nothing while they’re gaming. It didn’t make the game less exciting to be doing good; it made the game more exciting to be doing good. But it only works because they made a really good game.
Is the world ready for this idea that games can fix serious real-world problems?
In general, I think there are two groups of people who don’t push back at all. One is the hardcore gamers, who know that they’re capable of doing amazing things and are happy to hear somebody actually talk about that possibility seriously. There’s been a lot of talk about gamers as if they’re wasting their lives, or they’re never going to amount to anything, or they’re not learning anything that really matters. People who play a lot of games love to hear this idea—the games that you love could become a part of your life, not a distraction from your life.
Parents of gamers also seem to get it right away. Parents know that their kids are capable of doing extraordinary things, and they want to believe the best in them—and to have somebody explain to them the science of why games could actually empower their kids rather than waste their lives. They see how much time their kids are playing games and they know that there’s nothing wrong with their kids. They just don’t understand what that passion is about.
People who don’t have gamer friends or family are the hardest to convince. There’s still a perception that games are like single-player experiences with guns more often than not. Usually I have to explain to people that 3 out of 4 gamers prefer cooperative to competitive, and that the majority of our game play is social.