Science fiction has a problem: everyone looks the same. I know there are a few series that have aliens that look unimaginably different from human beings. But those are the exception, not the rule. Most major sci-fi series – Star Wars, Babylon 5, Mass Effect, Star Trek, Farscape, Stargate – have alien species that are hominid.
Consider the above image. Of the twenty visible species, only five are visibly not hominid. That's right: I count the prawn, the xenomorph, the predator, Cthulhu, and A.L.F. as hominid. I grant that's a bit of a stretch. A more conservative evaluation would be that only two of the twenty are truly hominid. The others, which we'll call pseudo-hominids, still share the following with humans: bipedal locomotion; bilateral symmetry; a morphology of head, trunk, two arms, and two legs; upright posture; and forward-facing, stereoscopic eyes. I grant they don't look precisely human, but the similarities are too striking to be swept into the nearest black hole.
Even the most strident supporter of parallel evolution would laugh in the face of anyone who claimed that the most intelligent species on nearly every planet in the universe just happened to evolve the exact same physiology. In series like Star Trek and Mass Effect, where interspecies relationships are possible, this cross-species compatibility is made even more preposterous. We all suspend our scientific disbelief to enjoy the story and the characters. No one believes for a second that the first species we meet in the cosmos is going to look just like us save for some pointy ears and a bowl haircut.
But what if many species in the universe do look like humans? How in Carl Sagan's cosmos could we explain parallel evolution of that magnitude? Star Trek: The Next Generation manages to give a scientifically plausible answer to the question of hominid and biologically compatible alien species in an episode entitled "The Chase," which led me to develop the Hominid Panspermia Theory of Science Fiction Aliens.
Do you ever worry that Steve Rogers (aka Captain America) wasn't really giving informed consent when he agreed to become enhanced? Or are you curious as to why someone might choose a bionic hand over a real one? The awesome Maggie Koerth-Baker of boingboing.net and I had some of the same questions. We chat about the ethics of superheroes and our perception of science in this week's Science Saturday on bloggingheads.tv. Enjoy!
I love Pixar. Who doesn’t? The stories are magnificently crafted, the characters are rich, hilarious, and unique, and the images are lovingly rendered. Without fail, John Ratzenberger’s iconic voice makes a cameo in some boisterous character. Even if you haven’t seen every film they’ve made (I refuse to watch Cars or its preposterous sequel), there is a consistency and quality to Pixar’s productions that is hard to deny.
Popular culture is often dismissed as empty “popcorn” fare. Animated films find themselves doubly-dismissed as “for the kids” and therefore nothing to take too seriously. Pixar has shattered those expectations by producing commercially successful cinematic art about the fishes in our fish tanks and the bugs in our backyards. Pixar films contain a complex, nuanced, philosophical and political essence that, when viewed across the company’s complete corpus, begins to emerge with some clarity.
Buried within that constant and complex goodness is a hidden message.
Now, this is not your standard “Disney movies hide double-entendres and sex imagery in every film” hidden message. “So,” you ask, incredulous, “What could one of the most beloved and respected teams of filmmakers in our generation possibly be hiding from us?” Before you dismiss my claim, consider what is at stake. Hundreds of millions of people have watched Pixar films. Many of those watchers are children who are forming their understanding of the world. The way in which an entire generation sees life and reality is being shaped, in part, by Pixar.
What if I told you they were preparing us for the future? What if I told you Pixar’s films will affect how we define the rights of millions, perhaps billions, in the coming century? Only by analyzing the collection as a whole can we see the subliminal concept being drilled into our collective mind. I have uncovered the skeleton key deciphering the hidden message contained within the Pixar canon. Let’s unlock it.
In just a few days, the first decade of the 21st century will be over. Can we finally admit we live in the future? Sure, we won’t be celebrating New Year’s by flying our jetpacks through the snow or watching the countdown from our colony on Mars, and so what if I can’t teleport to work? Thanks to a combination of 3G internet, a touch-screen interface, and Wikipedia, the smartphone in my front pocket is pretty much the Hitchhiker’s Guide to the Galaxy. I can communicate with anyone, anywhere, at any time. I can look up any fact I want, from which puppeteers played A.L.F. to how many flavors of quark are in the Standard Model, and then use the same touch-screen device to take a picture, deposit a check, and navigate the subway system. We live in the future, ladies and gentlemen.
But you may still have your doubts. Allow me to put things in perspective. Imagine it’s 1995: almost no one but Gordon Gekko and Zack Morris has a cellphone, and pagers are the norm; dial-up modems screech and scream to connect you to an internet without Google, Facebook, or YouTube; Dolly has not yet been cloned; the first PlayStation is the cutting edge in gaming technology; the Human Genome Project is creeping along; Mir is still in space; MTV still plays music; Forrest Gump wins an Academy Award and Pixar releases its first feature film, Toy Story. Now take that mindset and pretend you’re reading the first page of a new sci-fi novel:
The year is 2010. America has been at war for the first decade of the 21st century and is recovering from the largest recession since the Great Depression. Air travel security uses full-body X-rays to detect weapons and bombs. The president, who is African-American, uses a wireless phone, which he keeps in his pocket, to communicate with his aides and cabinet members from anywhere in the world. This smart phone, called a “Blackberry,” allows him to access the world wide web at high speed, take pictures, and send emails.
It’s just after Christmas. The average family’s wish-list includes smart phones like the president’s “Blackberry” as well as other items like touch-screen tablet computers, robotic vacuums, and 3-D televisions. Video games can be controlled with nothing but gestures, voice commands, and body movement. In the news, a rogue Australian cyberterrorist is wanted by the world’s largest governments and corporations for leaking secret information over the world wide web; spaceflight has been privatized by two major companies, Virgin Galactic and SpaceX; and Time Magazine’s Person of the Year (and subject of an Oscar-worthy feature film) created a network, “Facebook,” which allows everyone (500 million people) to share their lives online.
Does that sound like the future? Granted, there’s a bit of literary flourish in some of my descriptions, but nothing I said is untrue. Yet we do not see these things as incredible innovations, but as boring parts of everyday life. Louis C.K. famously lampooned this attitude with his “Everything is amazing and nobody is happy” interview with Conan O’Brien. Why can’t we see the futuristic marvels in front of our noses and in our pockets for what they really are?
D. Boucher at The Economic Word generated the above chart with Google’s endlessly entertaining Ngram viewer. The Ngram viewer lets you chart how often a specific word or phrase occurs, year by year, across every book Google has indexed thus far. As you can see, “future” peaked in 2000, leading Boucher to wonder if we’re beyond the future. Yet, Boucher hedges:
Strangely, however, I look at the technological improvements over the past ten years and I see revolutionary ideas one on top of the other (for instance, the iPhone, iPad, Kindle, Google stuff, Social Networks…). My first reaction is to blindly hypothesize that our current technological prowess may distract us from the future. If it is the case, could it be that technology is a detriment to forward-looking thinkers?
I thought it might be fun to Ngram the Science Not Fiction topics of choice and see if we live up to our reputation as rogue scientists from the future. I figured if we’re all from the future, then our topics should either a) match the trend or b) buck the trend. I’m not sure which is right, but the results were quite interesting. Charts after the jump!
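For the curious, the Ngram viewer's basic computation is easy to sketch: for each year, count a term's occurrences and divide by the total words published that year. Here's a toy version in Python; the miniature corpus and its sentences are invented purely for illustration, not real Ngram data:

```python
from collections import Counter

def ngram_trend(corpus, term):
    """Relative yearly frequency of `term`, Ngram-viewer style.

    corpus: iterable of (year, text) pairs.
    Returns {year: occurrences of term / total words that year}.
    """
    hits = Counter()
    totals = Counter()
    for year, text in corpus:
        words = text.lower().split()
        totals[year] += len(words)
        hits[year] += words.count(term.lower())
    return {year: hits[year] / totals[year] for year in totals}

# A made-up three-"book" corpus in which "future" peaks in 2000.
corpus = [
    (1999, "the future is bright the future is near"),
    (2000, "future future future everywhere the future"),
    (2005, "we live in the present not the future"),
]
trend = ngram_trend(corpus, "future")
```

Dividing by each year's total word count is the important step: it's what lets you compare years in which wildly different numbers of books were published.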
I’m a science educator. I often think, nay obsess, about how I can do my part to help bring more scientific literacy into everybody’s daily life. In a recent blog post entitled The Myth of Scientific Literacy (well worth a read), Dr. Alice Bell opines that if we (scientists, educators, politicians) are going to plead the case for increased science literacy, then we should do a better job of defining just what we mean by “science literacy.” She says:
Back in the early 1990s, Jon Durant very usefully outlined the three main types of scientific literacy. This is probably as good a place to start as any:
- Knowing some science – For example, having A-level biology, or simply knowing the laws of thermodynamics, the boiling point of water, what surface tension is, that the Earth goes around the Sun, etc.
- Knowing how science works – This is more a matter of knowing a little of the philosophy of science (e.g. ‘The Scientific Method’, a matter of studying the work of Popper, Lakatos or Bacon).
- Knowing how science really works – In many respects this agrees with the previous point – that the public need tools to be able to judge science – but does not agree that science works to a singular method. This approach is often inspired by the social studies of science and stresses that scientists are human. It covers the political and institutional arrangement of science, including topics like peer review (including all the problems with this), a recent history of policy and ethical debates, and the way funding is structured.
On the first point, I do think that there are some basic science facts which should be required fodder in K-12 education. From my field alone, people should not only know that Earth orbits the Sun, they should know that our year is based upon the time it takes Earth to complete the journey. Don’t laugh. On my last birthday, when I told folks that I’d completed another orbit of the Sun, a distressing number of them did not understand the implication and, upon further questioning, didn’t know that Earth’s orbital period was the basis of one year. K-12 students should know that the Moon orbits Earth, why it goes through phases, and given its significance (in particular for several religious holidays), that our month is based upon that orbital period. Finally, everybody should know why we have seasons.
Halloween is a-comin’ and this Sunday brings us AMC’s The Walking Dead. In honor of that, we’re discussing The Ethics of the Undead here at Science, Not Fiction. This is part III of IV. (Check out parts I and II.)
Are zombies really dead? How do we know? People are often reported “clinically dead” only to be revived later. If it is moving, if it reacts to stimuli like a food source or sounds, and if metabolic processes are in play, how can we call a zombie dead?
The most basic definition of life is the ability to have “signaling and self-sustaining processes” as the all-knowing Wikipedia tells us:
Living organisms undergo metabolism, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce and, through natural selection, adapt to their environment in successive generations.
Zombies do indeed undergo a qualified form of metabolism, sort of maintain homeostasis, and definitely respond to stimuli. Conversely, zombies do not grow, reproduce, or undergo natural selection. So much for a clear answer there.
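The stalemate can be made explicit with a throwaway sketch that scores a canonical zombie against Wikipedia's criteria for life. The True/False values below are just my readings of the paragraph above, not biology:

```python
# Wikipedia's criteria for life, scored for a canonical zombie.
# Each value is my own reading of the discussion, not settled science.
LIFE_CRITERIA = {
    "metabolism": True,        # qualified, but present
    "homeostasis": True,       # "sort of"
    "responds_to_stimuli": True,
    "growth": False,
    "reproduction": False,     # biting spreads a pathogen; it isn't reproduction
    "natural_selection": False,
}

def verdict(criteria):
    """Alive if every criterion is met, dead if none are, else undecided."""
    met = sum(criteria.values())
    if met == len(criteria):
        return "alive"
    if met == 0:
        return "dead"
    return "undecided"
```

Three of six criteria met, so `verdict(LIFE_CRITERIA)` comes back `"undecided"`: exactly the non-answer the paragraph arrives at.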
Consider the following: When we “kill” something, we are implying that our action has made an “alive” thing “dead.” We commonly refer to “killing” zombies. Therefore, a zombie is alive until it is killed. Not quite, some might argue, a zombie is undead. Undead is a special word that describes an entity which was once alive in the full meaning of that word, then died, and was then re-animated (e.g. a zombie). The zombie was not re-vivified, that is, brought back to life, but its bare biological systems were re-started.
Halloween is a-comin’ and this Sunday brings us AMC’s The Walking Dead. In honor of that, we’re discussing The Ethics of the Undead here at Science, Not Fiction. This is part II of IV. (Check out parts I and III.)
Before we can start investigating whether or not something that craves brains has a mind or should be pitied, we need to define just what, exactly, we’re talking about when we talk about zombies.
I’m going to start by ruling out the 28 Days Later zombies and the voodoo/demonic zombies of Evil Dead. First, the name of this blog is Science, not Fiction, which means any religious hokum is right out the door. Demon possession, souls back from Hell, and voodoo are not going to be considered in this investigation. On the other end of the spectrum, in 28 Days Later anything infected with “Rage” becomes a “fast” zombie. In essence, Rage is rabies, only way, way scarier. Thus we aren’t dealing with the “undead” so much as the violently insane. So non-fatal pathogens don’t count either. If the pathogen doesn’t first kill you, then re-animate you, then you aren’t a zombie.
Which leads us to the next question: how does the pathogen work? I’m not denying the multitude of variations and nuances among zombie plague viruses, but we have to come up with a generic, realistic version to have our discussion. Zombies generally meet three important criteria. They are 1) stimulus-response creatures that seek flesh, 2) continually decomposing, and 3) contagious via bodily fluids. If we can explain, reasonably, how and why a pathogen might cause these conditions, we can describe a realistic zombie pathogen.
Halloween is a-comin’ and this Sunday brings us AMC’s The Walking Dead. In honor of that, we’re discussing The Ethics of the Undead here at Science, Not Fiction. This is part I of IV. (Check out parts II and III.)
Zombies are everywhere! Zombieland, Shaun of the Dead, and 28 Days Later in the movies; World War Z and Pride and Prejudice and Zombies on the bookshelf; Left 4 Dead, Dead Rising, and Resident Evil in your video games – not to mention the George A. Romero and Sam Raimi classics in your DVD collection. And this Sunday Robert Kirkman’s epic The Walking Dead lurches from the pages of comic books onto your television thanks to AMC.
Wherever you turn, zombies are there. We can’t seem to get enough of the re-animated recently departed. But why do we love these ambling carnivorous cadavers so?
Zombies are horrifying. An outbreak would almost certainly lead to global apocalypse. Unrelenting, unthinking, uncaring, undead, they are a nightmare incarnate. They remind us of mortality, of decay, of our own fragility. Perhaps worst, they remind us of how inhuman a human being can become.
Zombies are familiar. Refrains of “Brains!”, guttural groans, and mindless shambling instantly trigger the idea of a zombie in our mind. We all know, somehow, that decapitation – that is, destruction of the zombie brain – is our only salvation. I bet you’ve dressed as one for Halloween. Every time “Thriller” comes on you probably dance like a zombie. Some mornings I feel like a zombie. Even philosophers talk about zombies. We know zombies. They are hilarious, they are frightening, they are part of us. And that is why we love them.
But have you ever asked yourself: is a zombie still a human? is a zombie dead, really? can it feel pain? does a zombie have dignity? Has the question ever popped up in your quite-live brain: is it ok to kill a zombie? Could a zombie be cured? If you could cure it, would you still want to? In honor of Halloween and our culture’s current love affair with brain-eating corpses, I present The Ethics of the Undead, your universal guide for answering all of your most pressing zombie questions. Stay tuned for posts throughout Halloween weekend!
It’s an understatement to say that Nikola Tesla was one of America’s greatest inventors. The man had a gift for creativity, physical intuition, and inventiveness that was truly otherworldly. Among other things, Tesla is responsible for the AC power we currently enjoy; his contemporary Thomas Edison was a staunch proponent of DC.
In the early 1930s, Tesla claimed that he had invented a death ray that would benefit the military in battle—one capable of destroying up to 10,000 enemy aircraft at distances of up to 250 miles. It was so lethal, he claimed, that it would end the spectacle of war.
Tesla died before he could build this death ray, and his personal effects contained no documentation hinting at its design. Nobody (not even the FBI) knows what happened to the death ray plans, if any existed.