Zombie stories are often about the utter failure of the government to deal with a big problem and, thanks to George Romero, also a great way to expose issues of class and social status. No one really believes they might attack one day. Zombies are a metaphor, like vampires or werewolves, for the horrifying and uncanny aspects of the human. They also remind you that, when things really hit the fan, you’re on your own. So be prepared! The Centers for Disease Control and Prevention does not want you to be caught unawares. In a post that walks the line between “ha ha this would never happen” and “but seriously just in case, you never know,” Ali S. Khan details the worthy forms of emergency response to hordes of the necrotic, brain-seeking undead:
I’m wary of the idea of meeting at the mailbox. Though I’m no expert, I have a strong suspicion that the mailbox is insufficiently fortified against the shuffling corpses invading the neighborhood. But hey, I’m not at the CDC, so I’m going to trust Khan on this one. Maybe he keeps a shotgun (or cricket bat? Lobo?) in his mailbox. I just don’t know.
What I do know is I need to get an emergency kit like the one on the right. Because a zombie horde is nonsense. But the Singularity might trigger a new stone age, and I won’t be able to dash off to Wal-Mart for supplies. Should I be embarrassed that a small part of me hopes/expects some sort of epic disaster for the selfish reason that modern life doesn’t let me use a flashlight or flint in my day-to-day routine? I mean, I just don’t have enough reasons in my life to use a kerosene lantern.
Maybe that’s how I can write off my next camping trip: research for zombie apocalypse.
For more on zombies, check out my series, the Ethics of the Undead.
Image of zombies kindly broadcasting their presence via Wikipedia
After a long search, you’ve found your Person of Interest—and while you were hoping for a civilized chat back at the station, he’s making it abundantly clear he doesn’t like the ambiance there. You don’t want to shoot, you’re too far away to use your Taser, and it’s not like you walk around with a spare tear gas canister hanging from your belt. What’s a law enforcement officer to do?
That’s where the StunRay comes in. A non-lethal, spotlight-like weapon, this new device is designed to disorient its targets by overloading their neural circuitry with a burst of high-intensity light. Genesis Illumination, which makes the device, patented it in January. (You can see the device in action in this video put out by Genesis.)
I have a confession. I used to be all about the Singularity. I thought it was inevitable. I thought for certain that some sort of Terminator/HAL9000 scenario would happen when ECHELON achieved sentience. I was sure The Second Renaissance from the Animatrix was a fairly accurate depiction of how things would go down. We’d make smart robots, we’d treat them poorly, they’d rebel and slaughter humanity. Now I’m not so sure. I have big, gloomy doubts about the Singularity.
Michael Anissimov tries to stoke the flames of fear over at Accelerating Future with his post “Yes, The Singularity is the Single Biggest Threat to Humanity.”
Combine the non-obvious complexity of common sense morality with great power and you have an immense problem. Advanced AIs will be able to copy themselves onto any available computers, stay awake 24/7, improve their own designs, develop automated and parallelized experimental cycles that far exceed the capabilities of human scientists, and develop self-replicating technologies such as artificially photosynthetic flowers, molecular nanotechnology, modular robotics, machines that draw carbon from the air to build carbon robots, and the like. It’s hard to imagine what an advanced AGI would think of, because the first really advanced AGI will be superintelligent, and be able to imagine things that we can’t. It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there’s a lot of evidence that we aren’t.
Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.
Oh my stars, that does sound threatening. But again, that weird, nagging doubt lingers in the back of my mind. For a while, I couldn’t put my finger on the problem, until I re-read Anissimov’s post and realized that my disbelief flared up every time I read something about AGI doing something. AGI will remove the atmosphere. Really? How? This article, and indeed every argument about the danger of the Singularity, rests on a single presumption: that AGI will be able to interact with the world beyond computers. I submit that, in practical terms, it will not.
Michael Burnam-Fink ponders the on-again-off-again relationship the military has with human enhancement:
In 2002, Dr Joseph Bielitzki, chair of DARPA’s Defense Sciences Office, announced a grand program to improve soldiers, with the slogan “Be all that you can be, and a lot more.” His targets: sleep, fatigue, pain, and blood loss. Other projects studied psychological stress, memory, and learning . . . The words on everybody’s lips were “human enhancement,” the use of science and technology to upgrade the human body and mind . . . According to military futurists, the then-new War on Terror required a new type of soldier, independent, fast and more lethal than ever before.
But in Iraq and Afghanistan, the military discovered that elite special forces alone could not restore stability to war-torn countries. General Petraeus’s counter-insurgency strategy relies on building relationships with local partners and requires soldiers with diplomatic skills, not combat enhancements. Approximately $4 billion in annual research funding was shifted away from blue-sky projects to better reconnaissance drones and defenses against roadside bombs, the insurgent’s weapon of choice. And in combat, hard lessons were relearned: War is random, and a super-soldier is just as dead as anyone else if his Humvee rolls over an IED.
Emphasis mine. Burnam-Fink’s point is one well taken: amping up your average G.I. Joe into some sort of techno-berserker übersoldat is not the solution for modern warfare. Super soldiers are still quite susceptible to mundane threats. But re-read that little bit I’ve bolded about Petraeus’s counter-insurgency relying on relationships and diplomacy. The conclusion was that combat enhancements were not as useful as hoped, not that human enhancement in general was deemed ineffective.
Sounds like the US military should focus on enhancing the qualities Petraeus said worked. Create great soldiers who are better, nay, super diplomats. Moral and mental enhancement might improve the panoply of diplomatic skills, including language learning, situational awareness, and culturally sensitive negotiation. Not exactly as Hollywood Cool as see-around-corner rifles or personal heads-up displays, but no one ever said real human enhancements would be glamorous. More to the point, these enhancements would save lives. If a soldier can form relationships with the locals and properly evaluate an urban environment, that may lead to more peace with fewer shots fired. Now that sounds like human enhancement.
Image of A U.S. Army Soldier from Task Force Regulars 1st Battalion, 6th Infantry Regiment, Renegade company by Tech. Sgt. Cohen Young via DVIDSHUB on Flickr Creative Commons
A few days ago, two assassination attempts were made on Iranian nuclear scientists. One succeeded; the other was a near miss. This comes just a short while after the programmable logic controllers running Iran’s centrifuges came under cyber attack. Attempts to stop Iran from having the bomb have transitioned from breaking the hardware to killing the brains behind the hardware.
The idea of attacking scientists to stem technological development is an old one. Perhaps the most dramatic example from recent times is Ted Kaczynski, aka the Unabomber. In his case the targeted killings were embedded in an anti-technology philosophy fully developed in his Manifesto. In the recent assassination attempts in Iran, we see the workings of geopolitical pragmatism in its most raw form.
Regardless of what we may think of Iran having the bomb, the strategy of killing the scientists and engineers behind a country’s technological infrastructure is one that should give us pause. Few steps separate this ploy from making scientists the domestic enemy as well, a tradition with an even deadlier history that includes the Cultural Revolution and Pol Pot’s purge of academics.
It’s an understatement to say that Nikola Tesla was one of America’s greatest inventors. The man had a gift for creativity, physical intuition, and inventiveness that was truly otherworldly. Among other things, Tesla is responsible for the AC power we currently enjoy; his contemporary Thomas Edison was a staunch proponent of DC.
In the early 1930s, Tesla claimed that he had invented a death ray that would benefit the military in battle—one capable of destroying up to 10,000 enemy aircraft at distances of up to 250 miles. It was so lethal, he claimed, that it would end the spectacle of war.
Tesla died before he could build this death ray, and no documentation hinting at its design was found among his personal effects. Nobody (not even the FBI) knows what happened to the death ray plans, if any existed.
A patient receiving a flu shot.
In the not too distant future, the phrase “shooting up” could take on a whole new meaning. At least if the U.S. Army has its way. Wired‘s Danger Room blog reported a few days ago that the military is seeking bids for a high-tech form of vaccination that could be delivered quickly and efficiently to a large number of troops in the heat of battle. More specifically, the Pentagon wants a DNA vaccine that can be administered via a literal shot to the arm—and a jolt of electricity. All without causing too much “discomfort” to the patient, of course.
Suffice it to say that this futuristic-sounding vaccine would be a far cry from what you and I received as children. As last year’s swine flu pandemic made painfully clear, our current methods of vaccine development, which have remained essentially unchanged for decades, are woefully outdated. The vaccines take too long—upwards of seven months—to produce, are prone to failure if not prepared correctly and, in many cases, lose their potency after only a year. These failings have helped draw attention to DNA-based vaccines, cocktails of genetically engineered plasmids that offer the promise of inducing a stronger, and more targeted, immune response. Where regular vaccines are slow to develop and hard to combine, DNA vaccines can be made relatively quickly and mixed together to ward off multiple pathogens at once. They are also generally safer to produce and administer, more durable, and can be scaled more easily.
We are teaming up with Jennifer Ouellette and the crew at the Science and Entertainment Exchange to produce a panel on “MAD SCIENCE,” i.e. Science as a double-edged sword, ethically and morally neutral in and of itself, but dependent upon who wields it, and how.
Beloved Internet Personality Phil Plait is lined up to moderate (after he gets his tattoo) and we’re expecting guests from Eureka, Battlestar Galactica, Fringe, Stargate: Universe and more. Watch this space for additional details.
This week’s travel advice from Fringe: When picking up the ladies at night clubs, avoid the ones with scary blue eyes who don’t talk. They tend to have shockingly pointy teeth, and are likely to eat you. Or at least, parts of you that you might wish you had later. More on the nutritional content of your parts after the jump, which contains mucho spoilers.
As we’ve mentioned before, though, this is sometimes problematic when it comes to J.J. Abrams’s Fringe. Still, we try not to critique.