Category: Meta

And Now I Have a Master's Degree

By Kyle Munkittrick | May 20, 2011 9:14 am

Hooray! I now have a Master of Arts degree from New York University. I even got to wear a bright purple robe with strange sleeves, was hooded, and topped it all off with a mortarboard that barely fit on my head.

My degree is from the John W. Draper Interdisciplinary Master’s Program in Humanities and Social Thought, which means I cobbled together a few disparate fields into my own academic Voltron of study. Critical theory, gender studies, and bioethics comprised the triumvirate of nerdiness out of which I forged my thesis, “Human Enhancement and Our Moral Responsibility to Future Generations.” My advisor was a tremendous resource, educator, and inspiration. Thanks, Greg!

Oh, and I competed in the northern hemisphere’s first ever Threesis competition. The goal: summarize your thesis in three minutes to a lay audience with nothing but a single static keynote slide for visual backup. Not easy, but quite fun.

I had the support of friends and family (my parents and partner in particular) throughout the process. They stood by me while I was pulling all-nighters, living in the library, and deliriously rambling on about Derek Parfit, Jürgen Habermas, and Julian Savulescu.

In true science nerd fashion, I spent the day with the family at the Museum of Natural History looking at the brain (there was an Ethics of Enhancement section!), giant dinosaurs, the stars, and butterflies. A fitting celebration!

CATEGORIZED UNDER: Meta, Transhumanism
MORE ABOUT: Bioethics, Graduation

Why Did Consciousness Evolve, and How Can We Modify It?

By Malcolm MacIver | March 14, 2011 6:58 pm

Update 5/24/11: The conversation continues in Part II here.

I recently gave a talk at the Directors Guild of America as part of a panel on the “Science of Cyborgs” sponsored by the Science & Entertainment Exchange. It was a fun time, and our moderators, Josh Clark and Chuck Bryant from the HowStuffWorks podcast, emceed the evening with just the right measure of humor and cultural insight. In my twelve minutes, I shared a theory of how consciousness evolved. My point was that if we understand the evolutionary basis of consciousness, maybe this will help us envision new ways our consciousness might evolve further in the future. That could be fun in terms of dreaming up new stories. I also believe that part of what inhibits us from taking effective action against long-term problems, like the global environmental crisis, may be found in the evolutionary origins of our ability to be aware.

This idea is so simple that I’m surprised I haven’t been able to find it already in circulation.

Read More

The First Decade of the Future is Behind Us

By Kyle Munkittrick | December 31, 2010 1:00 pm

In just a few days, the first decade of the 21st century will be over. Can we finally admit we live in the future? Sure, we won’t be celebrating New Year’s by flying our jetpacks through the snow or watching the countdown from our colony on Mars, and so what if I can’t teleport to work? Thanks to a combination of 3G internet, a touch-screen interface, and Wikipedia, the smartphone in my front pocket is pretty much the Hitchhiker’s Guide to the Galaxy. I can communicate with anyone, anywhere, at any time. I can look up any fact I want, from which puppeteers played A.L.F. to how many flavors of quark are in the Standard Model, and then use the same touch-screen device to take a picture, deposit a check, and navigate the subway system. We live in the future, ladies and gentlemen.

But you may still have your doubts. Allow me to put things in perspective. Imagine it’s 1995: almost no one but Gordon Gekko and Zack Morris has a cellphone, and pagers are the norm; dial-up modems screech and scream to connect you to an internet without Google, Facebook, or YouTube; Dolly has not yet been cloned; the first PlayStation is the cutting edge in gaming technology; the Human Genome Project is creeping along; Mir is still in space; MTV still plays music; Forrest Gump wins an Academy Award and Pixar releases its first feature film, Toy Story. Now take that mindset and pretend you’re reading the first page of a new sci-fi novel:

The year is 2010. America has been at war for the first decade of the 21st century and is recovering from the largest recession since the Great Depression. Air travel security uses full-body X-rays to detect weapons and bombs. The president, who is African-American, uses a wireless phone, which he keeps in his pocket, to communicate with his aides and cabinet members from anywhere in the world. This smartphone, called a “BlackBerry,” allows him to access the world wide web at high speed, take pictures, and send emails.

It’s just after Christmas. The average family’s wish-list includes smartphones like the president’s “BlackBerry” as well as other items like touch-screen tablet computers, robotic vacuums, and 3-D televisions. Video games can be controlled with nothing but gestures, voice commands, and body movement. In the news, a rogue Australian cyberterrorist is wanted by the world’s largest governments and corporations for leaking secret information over the world wide web; spaceflight has been privatized by two major companies, Virgin Galactic and SpaceX; and Time magazine’s Person of the Year (and the subject of an Oscar-worthy feature film) created a network, “Facebook,” which allows everyone (500 million people) to share their lives online.

Does that sound like the future? Granted, there’s a bit of literary flourish in some of my descriptions, but nothing I said is untrue. Yet we do not see these things as incredible innovations but as boring parts of everyday life. Louis C.K. famously lampooned this attitude in his “Everything is amazing and nobody is happy” interview with Conan O’Brien. Why can’t we see the futuristic marvels in front of our noses and in our pockets for what they really are?

Read More

MORE ABOUT: future

Would Death Be Easier If You Knew You'd Been Cloned?

By Malcolm MacIver | December 27, 2010 12:41 pm

It’s good to be back to blogging after a brief hiatus. As part of my return to some minimal level of leisure, I was finally able to watch the movie Moon (directed and co-written by Duncan Jones) and I’m glad that I did. (Alert: many spoilers ahead). Like all worthwhile art, it leaves nagging questions to ponder after experiencing it. It also gives me another chance to revisit questions about how technology may change our sense of identity, which I’ve blogged a bit about in the past.

A brief synopsis: Having run out of energy on Earth, humanity has gone to the Moon to extract helium-3 for powering the home planet. The movie begins with shots outside of a helium-3 extraction plant on the Moon. It’s a station manned by one worker, Sam, and his artificial intelligence helper, GERTY. Sam starts hallucinating near the end of his three-year contract, and during one of these hallucinations drives his rover into a helium-3 harvester. The collision causes the cab to start losing air, and we leave Sam just as he gets his helmet on. Back in the infirmary of the base station, GERTY awakens Sam and asks if he remembers the accident. Sam says no. Sam starts to get suspicious after overhearing GERTY being instructed by the station’s owners not to let Sam leave the base.

Read More

Searching For the Future

By Kyle Munkittrick | December 23, 2010 5:45 pm

D. Boucher at The Economic Word generated the above chart with Google’s endlessly entertaining Ngram viewer. The Ngram viewer lets you chart how frequently a specific word appears, year by year, across the books Google has indexed thus far. As you can see, “future” peaked in 2000, leading Boucher to wonder if we’re beyond the future. Yet Boucher hedges:

Strangely, however, I look at the technological improvements over the past ten years and I see revolutionary ideas one on top of the other (for instance, the iPhone, iPad, Kindle, Google stuff, Social Networks…). My first reaction is to blindly hypothesize that our current technological prowess may distract us from the future. If it is the case, could it be that technology is a detriment to forward-looking thinkers?

I thought it might be fun to Ngram the Science Not Fiction topics of choice and see if we live up to our reputation as rogue scientists from the future. I figured if we’re all from the future, then our topics should either a) match the trend or b) buck the trend. I’m not sure which is right, but the results were quite interesting. Charts after the jump!
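For readers who want to poke at the same data themselves, here is a minimal Python sketch of how one might pull yearly frequencies from the Ngram viewer programmatically. It leans on the viewer’s unofficial JSON endpoint, which is undocumented and may change or be rate-limited, and the phrases, corpus id, and year range below are illustrative placeholders rather than the exact queries behind this post’s charts.

```python
# A minimal sketch (not this post's actual workflow): query the Ngram Viewer's
# unofficial JSON endpoint for a few phrases and report each one's peak year.
# The endpoint, its parameters, and the response shape are assumptions here
# and may change without notice.
import json
import urllib.parse
import urllib.request

NGRAM_URL = "https://books.google.com/ngrams/json"

def ngram_frequencies(phrases, year_start=1900, year_end=2008,
                      corpus=26, smoothing=3):
    """Return {phrase: [relative frequency per year]} for each phrase."""
    params = urllib.parse.urlencode({
        "content": ",".join(phrases),  # comma-separated queries
        "year_start": year_start,
        "year_end": year_end,
        "corpus": corpus,              # corpus id; undocumented (26 is reported to map to an English corpus)
        "smoothing": smoothing,
    })
    with urllib.request.urlopen(f"{NGRAM_URL}?{params}") as resp:
        # Expected shape: a list of {"ngram": ..., "timeseries": [...]} records
        data = json.load(resp)
    return {series["ngram"]: series["timeseries"] for series in data}

if __name__ == "__main__":
    start = 1900
    freqs = ngram_frequencies(["future", "robot", "cyborg"], year_start=start)
    for phrase, series in freqs.items():
        print(f"{phrase!r} peaks around {start + series.index(max(series))}")
```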
Read More

MORE ABOUT: charts, future, ngram

Killing the Dr. Evils of Iran: Is It Open Season on Scientists?

By Malcolm MacIver | November 30, 2010 10:56 pm

A few days ago, two assassination attempts were made on Iranian nuclear scientists. One succeeded; the other was a near miss. This came just a short while after the programmable logic controllers running Iran’s centrifuges came under cyber attack. Attempts to stop Iran from having the bomb have transitioned from breaking the hardware to killing the brains behind the hardware.

The idea of attacking scientists to stem technological development is an old one. Perhaps the most dramatic example from recent times is Ted Kaczynski, aka the Unabomber. In his case the targeted killings were embedded in an anti-technology philosophy fully developed in his Manifesto. In the recent assassination attempts in Iran, we see the workings of geopolitical pragmatism in its most raw form.

Regardless of what we may think of Iran having the bomb, the strategy of killing the scientists and engineers behind a country’s technological infrastructure is one that should give us pause. Few steps separate this ploy from making them the domestic enemy as well, a tradition with an even deadlier history that includes the Cultural Revolution and Pol Pot’s purge of academics.

Read More

Science Fiction and the Modding of Our Future

By Malcolm MacIver | September 22, 2010 2:20 am

The chasm between science and the humanities is nowhere more blatant than in the lack of work on how science fiction is reprocessed and used by those of us securely strapped into the laboratory. It’s a topic that attracts some heat: some scientists greet suggestions that their creations were inspired by preceding sci-fi with the excitement of a freshman accused of buying their midterm essay off the internet. In Colin Milburn’s new work on ways of thinking about this interaction, he refers to Richard Feynman’s 1959 lecture “There’s Plenty of Room at the Bottom,” a key event in the history of nanotechnology. In it, Feynman refers to a pantograph-inspired mechanism for manipulating molecules. It turns out that he most likely got this idea from the story “Waldo” by Robert Heinlein, who in turn probably got it from another science fiction story by Edmond Hamilton. Rejecting the suggestion of influence, chemist Pierre Laszlo writes: “Feynman’s fertile imagination had no need for an outside seed. This particular conjecture [about a link between Feynman and Heinlein] stands on its head Feynman’s whole argument. He proposed devices at the nanoscale as both rational and realistic, around the corner so to say. To propose instead that the technoscience, nanotechnology, belongs to the realm of science-fictional fantasy is gratuitous mythology, with a questionable purpose.”

Read More

Like a Brain-Thirsty Zombie, Science Not Fiction Is Back—and It's Badder Than Ever

By Amos Zeeberg (Discover Web Editor) | July 7, 2010 1:53 pm

It is with much pleasure that Discover is relaunching Science Not Fiction, a deep look at the science and technology of the future, whether that future unfurls in research labs or on the silver screen. The blog has a core of five great contributors who will each cover one of the future’s most compelling topics:

Kevin Grazier is a physicist and astronomer who works on the Cassini mission with NASA/JPL, teaches at UCLA and Cal State LA, and is a science advisor for sci-fi shows like Eureka and Battlestar Galactica. He covers space and physics for Science Not Fiction.

Jeremy Jacquot is a biogeochemistry PhD student at USC and a freelance writer. He is SNF’s resident expert on biology and biotech.

Malcolm MacIver is a bioengineer at Northwestern University who studies the neural and biomechanical basis of animal intelligence. He is also the science adviser for Caprica. He covers AI and robotics for Science Not Fiction.

Kyle Munkittrick is program director at the Institute for Ethics and Emerging Technologies and a grad student at NYU. He covers transhumanism.

Eric Wolff is a freelance writer and longtime contributor to SNF, where he covers electronics, engineering, and energy.

The reanimated blog has been running for the past month in Discover’s top-secret skunkworks, so feel free to browse the recent posts and leave comments about what you like, what you don’t, and what you’d like the blog to cover in the (ahem) future.

[[[End communication]]]

CATEGORIZED UNDER: Meta