I am not very ethical about how I eat. I am not proud of this, but it is the truth. I am not vegan or vegetarian. In fact, I eat a lot of bacon and beef – I’d probably eat Soylent Green if given the option. I think the locavore movement is silly and think “organic” is a misnomer on nine out of ten things labeled as such. Most ethical foodies prefer “natural” and humane production methods. My question for all the ethical foodies out there: what are your thoughts on the very unnatural possibility of vat-grown meat?
Allow me to elaborate. Vat-grown meat is still a work in progress. But it is a real possibility. One of the scientists trying to make it a reality is Dr. Vladimir Mironov. He envisions giant factories called “carneries” that create meat the same way a brewery brews beer. One of his many goals is to be able to add taste- and texture-controlling features like fat and vascular systems to make his test-tube steaks as delicious as the real thing:
“It will be functional, natural, designed food,” Mironov said. “How do you want it to taste? You want a little bit of fat, you want pork, you want lamb? We design exactly what you want. We can design texture.”
Vat-grown meat is a godsend for those of us who are omnivores, but recognize the significant flaws with our current agricultural system. Many factory farms keep animals in inhumane conditions, and the industry around animal meat is incredibly wasteful and polluting. The current response to these conditions is to support organic, local, and humane farming practices. The problem, of course, is that organic, local, and humane practices are economically inefficient, which makes the cost of ethical food prohibitive for most of us.
Yet I see vat-grown meat as presenting a significant conundrum to many supporters of the ethical/organic food movement: it’s too unnatural.
At night in the rivers of the Amazon Basin there buzzes an entire electric civilization of fish that “see” and communicate by discharging weak electric fields. These odd characters, swimming batteries which go by the name of “weakly electric fish,” have been the focus of research in my lab and those of many others for quite a while now, because they are a model system for understanding how the brain works. (While their brains are a bit different, we can learn a great deal about ours from them, just as we’ve learned much of what we know about genetics from fruit flies.) There are now well over 3,000 scientific papers on how the brains of these fish work.
Recently, my collaborators and I built a robotic version of these animals, focusing on one in particular: the black ghost knifefish. (The name is apparently derived from a native South American belief that the souls of ancestors inhabit these fish. For the sake of my karmic health, I’m hoping that this is apocryphal.) My university, Northwestern, did a press release with a video about our “GhostBot” last week, and I’ve been astonished at its popularity (nearly 30,000 views as I write this, thanks to coverage by places like io9, Fast Company, PC World, and msnbc). Given this unexpected interest, I thought I’d post a bit of the story behind the ghost.
Floyd Landis wants to legalize doping in professional cycling. His argument is a reasonable one. Landis argues that, since everyone is doing it already and the tests will never keep up, we might as well legalize and regulate it instead of banning it entirely. Other cyclists and the governing bodies of competitive cycling have all but called Landis a complete nutter. Charges of doping brought against other cyclists, particularly Lance Armstrong, are met with protestations of “innocent until proven guilty.”
While I agree that doping should be allowed for cyclists, I disagree with the reason Landis gives:
You got to go about it another way and you’ve got to legalise doping. They [the testers] are so far behind in the testing organisations that there’s no way to change it now. Just accept that it’s here, that it’s not going away and that it’s just going to get more complicated and the fact that it’s not that complicated yet compared to what it will be. Ten years from now it’s going to be four times as hard as it now to test for things.
Laws and ethics are not based on what is easy and what is hard to control. They are based on standards of justice and what is ethically right. The reason I believe doping should be allowed is that I see nothing unjust or wrong about professional athletes using chemical compounds and medical knowledge to improve their abilities and performance. Let me rephrase that: there is nothing wrong with taking steroids.
When you bundle up all the time that gamers everywhere pour into their favorite games, the statistics are simply staggering. World of Warcraft’s legion of devotees, for example, have now spent more than 50 billion hours—about 6 million years—roaming their mythical, digital universe. Halo 3 players banded together to reach a kill tally of 10 billion, and when they blew past it, kept on shooting in pursuit of 100 billion.
If 10,000 hours of practice represents a sort of genius threshold, then gamers around the world are crossing that threshold. “This means that we are well on our way to creating an entire generation of virtuoso gamers,” writes game designer Jane McGonigal.
You might recognize McGonigal from her talk at TED, “Gaming Can Make a Better World.” But now that speech has become a full-on how-to guide: her new book Reality Is Broken, which came out yesterday. It details how games can fix what’s wrong with the real world (rather than just help us escape from it).
When commentators bandy about those eye-popping numbers about how much time gamers invest in games, it’s usually done to bemoan the youth of America wasting their time on trivial pursuits. But to McGonigal, the allure of games can be used for good. Where our workaday lives can be filled with tedium and busy work, games challenge us with what she calls “hard fun”—hard work that’s satisfying. Games can improve our social connections, and they can provide a huge arena for collaboration.
Games, McGonigal writes, can fix what’s wrong with reality on small or large scales. A personal example: When she was struggling to recover from a concussion, she invented a game and enlisted friends and family as characters with tasks to fulfill, like coming over to cheer her up or keeping her off caffeine. A world-level example: EVOKE, a free online multiplayer game that challenges its players to solve major social ills like hunger and poverty.
We talked to her recently about her mission to save the world with games:
DISCOVER: What are you working on right now?
Jane McGonigal: There are a couple of big things. One of them is Gameful—we’re calling it a secret headquarters online for gamers and game developers who want to change the world. That was based on how many emails and Facebook messages I get from people who saw my TED talk or heard about these games and want to make one or play one, or learn how to design games so that they can make one. It’s a cross between a social network and a collaboration space online. So far we have over 1,100 game developers signed up. That’s a pretty significant proportion of game developers in the U.S. They’ve committed to not just entertaining with games, but making a positive impact.
I also have a new start-up company, called Social Chocolate. It’s a company through which we’re creating gameful experiences based on scientific research about positive emotions and positive relationships—basically, games that are designed from top to bottom to improve your real life and to strengthen your relationships.
In the book, you write about games’ ability to captivate and satisfy our minds on a “primal” level. Why are games so good at getting in touch with our primal nature?
That is such a cool question. We’ve been playing games since humanity had civilization—there is something primal about our desire and our ability to play games. It’s so deep-seated that it can bypass latter-day cultural norms and biases. If you give us a good game, we can overcome our society’s “make you feel stupid for dancing in front of other people” feeling, or trying to block all thoughts of death because it’s depressing and we’re not supposed to be depressed. The game is much older than any of these societal constraints. So that, I think, makes it a powerful platform for getting in touch with things we’ve lost touch with.
Dancing’s really interesting because if you look at the new games with Kinect and PS Move and the Wii, it’s opening up this different kind of gamer experience. When you watch people play these games, the word “joy” is what you’d use to describe it. It’s different from the kind of immersion that we think of with games where we’re really focused mentally. The physical engagement in combination with music and movement and other people makes it feel more like ritual than computer games have been.
Yet, you say, the mission to create joy in games is often hampered because of the “uncoolness” of happiness. So how do we get over ourselves?
I was curious when I started the Gameful project if game developers would really get behind this idea. Because, there’s definitely that sense among some game developers that it would ruin the fun to be serious about making people happy or improving real life. Is it corny? Does it take away from the fantasy of games? I think there will be a huge part of the game development world that continues to feel that way. But what I’m seeing every year at the gamers’ conferences is a higher percentage of the game industry waking up to the responsibility that comes with the power. I hate to say this, but it’s not so much about wanting to make the world a better place as it is saying, “Wow, we are wielding a tremendous amount of power over young people’s lives. This is great; we’ve invented this powerful medium that’s capable of engaging people like nothing else. But is that what we want to do with our lives, or do we want to do something that matters while we’re wielding that power?”
If you make it a game, gamers will play it no matter what your motivation is in making it. FoldIt is a good example. Clearly, a lot of gamers would rather cure cancer while they’re gaming than do nothing while they’re gaming. It didn’t make the game less exciting to be doing good; it made the game more exciting to be doing good. But it only works because they made a really good game.
Is the world ready for this idea that games can fix serious real-world problems?
In general, I think there are two groups of people who don’t push back at all. One is the hardcore gamers who know that they’re capable of doing amazing things and are happy to hear somebody actually talk about that possibility seriously. There’s been a lot of talk about gamers as if they’re wasting their lives, or they’re never going to amount to anything, or they’re not learning anything that really matters. People who play a lot of games love to hear this idea—the games that you love could become a part of your life, not a distraction from your life.
Parents of gamers also seem to get it right away. Parents know that their kids are capable of doing extraordinary things, and they want to believe the best in them—and to have somebody explain to them the science of why games could actually empower their kids rather than waste their lives. They see how much time their kids are playing games and they know that there’s nothing wrong with their kids. They just don’t understand what that passion is about.
People who don’t have gamer friends or family are the hardest to convince. There’s still a perception that games are like single-player experiences with guns more often than not. Usually I have to explain to people that 3 out of 4 gamers prefer cooperative to competitive, and that the majority of our game play is social.
I have a confession. I used to be all about the Singularity. I thought it was inevitable. I thought for certain that some sort of Terminator/HAL9000 scenario would happen when ECHELON achieved sentience. I was sure The Second Renaissance from the Animatrix was a fairly accurate depiction of how things would go down. We’d make smart robots, we’d treat them poorly, they’d rebel and slaughter humanity. Now I’m not so sure. I have big, gloomy doubts about the Singularity.
Michael Anissimov tries to stoke the flames of fear over at Accelerating Future with his post “Yes, The Singularity is the Single Biggest Threat to Humanity.”
Combine the non-obvious complexity of common sense morality with great power and you have an immense problem. Advanced AIs will be able to copy themselves onto any available computers, stay awake 24/7, improve their own designs, develop automated and parallelized experimental cycles that far exceed the capabilities of human scientists, and develop self-replicating technologies such as artificially photosynthetic flowers, molecular nanotechnology, modular robotics, machines that draw carbon from the air to build carbon robots, and the like. It’s hard to imagine what an advanced AGI would think of, because the first really advanced AGI will be superintelligent, and be able to imagine things that we can’t. It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there’s a lot of evidence that we aren’t.
Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.
Oh my stars, that does sound threatening. But again, that weird, nagging doubt lingers in the back of my mind. For a while, I couldn’t put my finger on the problem, until I re-read Anissimov’s post and realized that my disbelief flared up every time I read something about AGI doing something. AGI will remove the atmosphere. Really? How? The article, and indeed every argument about the danger of the Singularity, necessarily presumes one single fact: that AGI will be able to interact with the world beyond computers. I submit that, in practical terms, they will not.
The Singularity seems to be getting less and less near. One of the big goals of Singularity hopefuls is to be able to put a human mind onto (into? not sure on the proper preposition here) a non-biological substrate. Most of the debates have revolved around computer analogies. The brain is hardware, the mind is software. Therefore, to run the mind on different hardware, it just has to be “ported” or “emulated” the way a computer program might be. Timothy B. Lee (not the internet inventing one) counters Robin Hanson’s claim that we will be able to upload a human mind onto a computer within the next couple decades by dissecting the computer=mind analogy:
You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation,” to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall, they only predict general large-scale trends, and only for a limited period of time. This is different than an emulator, which (if implemented well) can be expected to behave exactly like the system it is emulating, for as long as you care to run it.
In short: we know how software is written, we can see the code and rules that govern the system–not true for the mind, so we guess at the unknowns and test the guesses with simulations. Lee’s post is very much worth the full read, so give it a perusal.
Lee got me thinking with his point that “natural systems don’t have designers.” Evolutionary processes have resulted in the brain we have today, but there was no intention or design behind those processes. Our minds are undesigned.
I find that fascinating. In the first place, because it means that simulation will be exceedingly difficult. How do you reverse-engineer something with no engineer? Second, even if a simulation is successful, it by no means guarantees that we can change the substrate of an existing mind. If the mind is an emergent property of the physical brain, then one can no more move a mind than one could move a hurricane from one system to another. The mind, it may turn out, is fundamentally and essentially related to the substrate in which it is embodied.
Greetings from South Africa, where I’ve been visiting these past two weeks. It’s a country of great beauty and cultural complexity. Besides mastering driving on the left hand side of the road, and not getting too excited when I see “ROBOT” painted in giant white letters on the road (it means stop lights ahead), I made a stop at the District 6 Museum in Cape Town. The events surrounding the real District 6 were part of the inspiration for both the title and content of District 9, the great 2009 science fiction mockumentary set in South Africa.
The movie, if you haven’t seen it, is about a group of aliens who arrive on a mysterious mother ship hovering above South Africa. Eventually the authorities send an expedition up to find out what’s going on and discover a bunch of starving aliens. They are settled in a South African township called District 9, directly below the mother ship (a squatter camp in the township of Soweto, called Chiawelo, was used for the shooting). Much of the story revolves around the forced relocation of the aliens from District 9 to District 10. Besides being confined to the township and being forcibly relocated, they suffer various other kinds of oppression very reminiscent of the ways blacks were treated during the time of apartheid. Interestingly, in this case, South Africans of all colors are united in their hatred and mistreatment of the aliens, derogatively called “Prawns” (not least because they look like supersized bipedal versions of king prawns, a delicious crustacean that is often on the menu at nicer restaurants in South Africa).
Michael Burnam-Fink ponders the on-again-off-again relationship the military has with human enhancement:
In 2002, Dr Joseph Bielitzki, chair of DARPA’s Defense Sciences Office, announced a grand program to improve soldiers, with the slogan “Be all that you can be, and a lot more.” His targets: sleep, fatigue, pain, and blood loss. Other projects studied psychological stress, memory, and learning . . . The words on everybody’s lips were “human enhancement,” the use of science and technology to upgrade the human body and mind . . . According to military futurists, the then-new War on Terror required a new type of soldier, independent, fast and more lethal than ever before.
But in Iraq and Afghanistan, the military discovered that elite special forces alone could not restore stability to war-torn countries. General Petraeus’s counter-insurgency strategy relies on building relationships with local partners and requires soldiers with diplomatic skills, not combat enhancements. Approximately $4 billion in annual research funding was shifted away from blue-sky projects to better reconnaissance drones and defenses against roadside bombs, the insurgent’s weapon of choice. And in combat, hard lessons were relearned: War is random, and a super-soldier is just as dead as anyone else if his Humvee rolls over an IED.
Emphasis mine. Burnam-Fink’s point is well taken: amping up your average G.I. Joe into some sort of techno-berserker übersoldat is not the solution for modern warfare. Super soldiers are still quite susceptible to mundane threats. But re-read that little bit I’ve bolded about Petraeus’s counter-insurgency relying on relationships and diplomacy. The conclusion was that combat enhancements were not as useful as hoped, not that human enhancement in general was deemed ineffective.
Sounds like the US military should focus on enhancing the qualities Petraeus said worked. Create great soldiers who are better, nay, super diplomats. Moral and mental enhancement might improve the panoply of diplomatic skills, including language learning, situational awareness, and culturally sensitive negotiations. Not exactly as Hollywood Cool as see-around-corner rifles or personal heads-up displays, but no one ever said real human enhancements would be glamorous. More to the point, these enhancements would save lives. If a soldier can form a relationship with the locals and properly evaluate an urban environment, then that may lead to more peace with fewer shots fired. Now that sounds like human enhancement.
Image of A U.S. Army Soldier from Task Force Regulars 1st Battalion, 6th Infantry Regiment, Renegade company by Tech. Sgt. Cohen Young via DVIDSHUB on Flickr Creative Commons
When I think about transhumanism, I think about genetic engineering, cognitive enhancing drugs, and osso-neuro-integrated prosthetics. When Wired interviewee Lepht Anonym thinks about transhumanism, she thinks about kitchen sink surgery, using hot glue as a bioproofer and vodka as a sterilizer. Anonym is a biohacker or “grinder,” depending on your preferred nomenclature. Grinding is a counter-culture mindset that has origins in cyberpunk and post-modern disenchantment with progress. Biohackers take body modification and at-home surgery and add a twist of the electromagnetic spectrum. Anonym seems to be somewhere between the two:
An American body-modification artist of a similar mindset [to Anonym] has created small discs of neodymium metal, coated in gold and silicon, which give off a mild electric current when in an electromagnetic field. When inserted under the fingertips, this current stimulates the fingers’ nerve endings, allowing the bearer to literally feel the shape and strength of electromagnetic fields around power cords or electronic devices.
Anonym had several of these implanted professionally, choking at the cost, and then learned it was possible to buy the metal herself in bulk, far more cheaply.