Category: Mind & Brain

The Undesigned Brain is Hard to Copy

By Kyle Munkittrick | January 17, 2011 10:47 am


UPDATE: Hanson has responded and Lee has rebutted. My reaction after the jump.

The Singularity seems to be getting less and less near. One of the big goals of Singularity hopefuls is to be able to put a human mind onto (into? not sure on the proper preposition here) a non-biological substrate. Most of the debates have revolved around computer analogies. The brain is hardware; the mind is software. Therefore, to run the mind on different hardware, it just has to be “ported” or “emulated” the way a computer program might be. Timothy B. Lee (not the internet-inventing one) counters Robin Hanson’s claim that we will be able to upload a human mind onto a computer within the next couple of decades by dissecting the computer=mind analogy:

You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation,” to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. This is different than an emulator, which (if implemented well) can be expected to behave exactly like the system it is emulating, for as long as you care to run it.

In short: we know how software is written, we can see the code and rules that govern the system–not true for the mind, so we guess at the unknowns and test the guesses with simulations. Lee’s post is very much worth the full read, so give it a perusal.
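
To make Lee’s distinction concrete, here is a toy sketch of my own (the rule, the starting values, and the “fitted” slope are all invented for illustration; nothing here comes from Lee’s or Hanson’s posts). A designed system’s update rule can be read straight off the source code, so an emulator reproduces it exactly; for a natural system we can only run a best-fit approximation, and the small error in the guess compounds:

    def true_rule(x):
        # Stands in for the system being copied. For designed software we can read
        # this rule straight from the source; for a brain, we can't.
        return (3 * x + 1) % 97

    def emulate(x0, steps):
        # Emulation: we have the exact rule, so the copy matches the original step for step.
        xs = [x0]
        for _ in range(steps):
            xs.append(true_rule(xs[-1]))
        return xs

    def simulate(x0, steps, fitted_slope=2.9):
        # Simulation: the true rule is unknown, so we run a best-fit approximation of it.
        # The small error in the guess compounds, the way a weather forecast loses
        # raindrop-level detail after a few days.
        xs = [float(x0)]
        for _ in range(steps):
            xs.append((fitted_slope * xs[-1] + 1) % 97)
        return xs

    print(emulate(1, 10))    # identical every run: an exact copy
    print(simulate(1, 10))   # close for a few steps, then drifts away from the real thing

The emulator’s output never changes; the simulation tracks the real rule only approximately, and only for a while, which is exactly the weather-forecast problem Lee describes.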

Lee got me thinking with his point that “natural systems don’t have designers.” Evolutionary processes have resulted in the brain we have today, but there was no intention or design behind those processes. Our minds are undesigned.

I find that fascinating. In the first place, because it means that simulation will be exceedingly difficult. How do you reverse-engineer something with no engineer? Second, even if a simulation is successful, it by no means guarantees that we can change the substrate of an existing mind. If the mind is an emergent property of the physical brain, then one can no more move a mind than one could move a hurricane from one system to another. The mind, it may turn out, is fundamentally and essentially related to the substrate in which it is embodied.

Read More

Would Death Be Easier If You Know You've Been Cloned?

By Malcolm MacIver | December 27, 2010 12:41 pm

It’s good to be back to blogging after a brief hiatus. As part of my return to some minimal level of leisure, I was finally able to watch the movie Moon (directed and co-written by Duncan Jones) and I’m glad that I did. (Alert: many spoilers ahead). Like all worthwhile art, it leaves nagging questions to ponder after experiencing it. It also gives me another chance to revisit questions about how technology may change our sense of identity, which I’ve blogged a bit about in the past.

A brief synopsis: Having run out of energy on Earth, humanity has gone to the Moon to extract helium-3 for powering the home planet. The movie begins with shots outside of a helium-3 extraction plant on the Moon. It’s a station manned by one worker, Sam, and his artificial intelligence helper, GERTY. Sam starts hallucinating near the end of his three-year contract, and during one of these hallucinations drives his rover into a helium-3 harvester. The collision causes the cab to start losing air and we leave Sam just as he gets his helmet on. Back in the infirmary of the base station, GERTY awakens Sam and asks if he remembers the accident. Sam says no. Sam starts to get suspicious after overhearing GERTY being instructed by the station’s owners not to let Sam leave the base.

Read More

We Need Gattaca to Prevent Skynet and Global Warming

By Kyle Munkittrick | November 10, 2010 6:54 pm

If only they'd kept Jimmy Carter's solar panels on there, this whole thing could have been avoided.

Independence Day has one of my favorite hero duos of all time: Will Smith and Jeff Goldblum. Brawn and brains, flyboy and nerd, working together to take out the baddies. It all comes down to one flash of insight from a drunk Goldblum after being chastised by his father. Cliché eureka! moments like Goldblum’s realization that he can give the mothership a “cold” are great until you realize one thing: if Goldblum hadn’t been as smart as he was, the movie would have ended much differently. No one in the film was even close to figuring out how to defeat the aliens. Will Smith was in a distant second place, and he had only discovered that they were vulnerable to face punches. The hillbilly who flew his jet fighter into the alien destruct-o-beam doesn’t count, because he needed a force-field-free spaceship for his trick to work. If Jeff Goldblum hadn’t been a super-genius, humanity would have been annihilated.

Every apocalyptic film seems to trade on the idea that there will be some lone super-genius to figure out the problem. In The Day The Earth Stood Still (both versions) Professor Barnhardt manages to convince Klaatu to give humanity a second look. Cleese’s version of the character had a particularly moving “this is our moment” speech. Though it’s eventually the love between a mother and child that triggers Klaatu’s mercy, Barnhardt is the one who opens Klaatu to the possibility. Over and over we see the lone super-genius helping to save the world.

Shouldn’t we want, oh, I don’t know, at least more than one super-genius per global catastrophe? I’d like to think so. And where might we get some more geniuses, you may ask? We make them.

Read More

Zombies: Can You Kill the Undead?

By Kyle Munkittrick | October 30, 2010 10:02 am

Don't let him fake you out: he isn't looking at anything. The second you turn to look at whatever he sees, boom! Straight for the neck.

Halloween is a-comin’ and this Sunday brings us AMC’s The Walking Dead. In honor of that, we’re discussing The Ethics of the Undead here at Science, Not Fiction. This is part III of IV. (Check out parts I, & II)

Are zombies really dead? How do we know? People are often reported “clinically dead” only to be revived later. If it is moving, if it reacts to stimuli like a food source or sounds, and if metabolic processes are in play, how can we call a zombie dead?

The most basic definition of life is the ability to have “signaling and self-sustaining processes” as the all-knowing Wikipedia tells us:

Living organisms undergo metabolism, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce and, through natural selection, adapt to their environment in successive generations.

Zombies do indeed undergo a qualified form of metabolism, sort of maintain homeostasis, and definitely respond to stimuli. On the other hand, zombies do not grow, reproduce, or undergo natural selection. So much for a clear answer there.

Consider the following: When we “kill” something, we are implying that our action has made an “alive” thing “dead.” We commonly refer to “killing” zombies. Therefore, a zombie is alive until it is killed. Not quite, some might argue: a zombie is undead. Undead is a special word that describes an entity which was once alive in the full meaning of that word, then died, and was then re-animated (e.g. a zombie). The zombie was not re-vivified, that is, brought back to life, but its bare biological systems were re-started.

Read More

Delay the Decay: How Zombie Biology Would Work

By Kyle Munkittrick | October 29, 2010 5:23 pm

Ma'am, please, the sign clearly says "Keep Off the Grass"

Halloween is a-comin’ and this Sunday brings us AMC’s The Walking Dead. In honor of that, we’re discussing The Ethics of the Undead here at Science, Not Fiction. This is part II of IV. (Check out parts I, & III)

Before we can start investigating whether or not something that craves brains has a mind or should be pitied, we need to define just what, exactly, we’re talking about when we talk about zombies.

I’m going to start by ruling out the 28 Days Later zombies and the voodoo/demonic zombies of Evil Dead. First, the name of this blog is Science, not Fiction, which means any religious hokum is right out the door. Demon possession, souls back from Hell, and voodoo are not going to be considered in this investigation. On the other end of the spectrum, in 28 Days Later anything infected with “Rage” becomes a “fast” zombie. In essence, Rage is rabies, only way, way scarier. Thus we aren’t dealing with the “undead” so much as the violently insane. So non-fatal pathogens don’t count either. If the pathogen doesn’t first kill you and then re-animate you, you aren’t a zombie.

Which leads us to the next question: how does the pathogen work? There’s no denying the multitude of variations and nuances among zombie plague viruses, but we have to come up with a generic, realistic version to frame our discussion. Zombies generally meet three important criteria. They are 1) stimulus-response creatures that seek flesh, 2) continually decomposing, and 3) contagious via bodily fluids. If we can explain, reasonably, how and why a pathogen might cause or allow these conditions, we can describe a realistic zombie pathogen.

Read More

Zombies: Ethics of the Undead!

By Kyle Munkittrick | October 29, 2010 10:20 am

Um, sir, you've got, uh, red on you.

Halloween is a-comin’ and this Sunday brings us AMC’s The Walking Dead. In honor of that, we’re discussing The Ethics of the Undead here at Science, Not Fiction. This is part I of IV. (Check out parts II, & III)

Zombies are everywhere! Zombieland, Shaun of the Dead, and 28 Days Later in the movies; World War Z and Pride and Prejudice and Zombies on the bookshelf; Left 4 Dead, Dead Rising and Resident Evil in your video games - not to mention the George A. Romero and Sam Raimi classics in your DVD collection. And this Sunday Robert Kirkman’s epic The Walking Dead lurches from the pages of comic books onto your television thanks to AMC.

Wherever you turn, zombies are there. We can’t seem to get enough of the re-animated recently departed. But why do we love these ambling carnivorous cadavers so?

Zombies are horrifying. An outbreak would almost certainly lead to global apocalypse. Unrelenting, unthinking, uncaring, undead, they are a nightmare incarnate. They remind us of mortality, of decay, of our own fragility. Perhaps worst, they remind us of how inhuman a human being can become.

Two, four, six, brains. Zombies are familiar. Refrains of “Brains!”, guttural groans, and mindless shambling instantly trigger the idea of a zombie in our mind. We all know, somehow, that decapitation – that is, destruction of the zombie brain – is our only salvation. I bet you’ve dressed as one for Halloween. Every time “Thriller” comes on you probably dance like a zombie. Some mornings I feel like a zombie. Even philosophers talk about zombies. We know zombies. They are hilarious, they are frightening, they are part of us. And that is why we love them.

But have you ever asked yourself: Is a zombie still a human? Is a zombie dead, really? Can it feel pain? Does a zombie have dignity? Has the question ever popped up in your quite-live brain: Is it OK to kill a zombie? Could a zombie be cured? If you could cure it, would you still want to? In honor of Halloween and our culture’s current love affair with brain-eating corpses, I present The Ethics of the Undead, your universal guide for answering all of your most pressing zombie questions. Stay tuned for posts throughout Halloween weekend!

Images via ThatZombiePhoto.com and lolzombie.com

Caprica Puzzle: If a Digital You Lives Forever, Are You Immortal?

By Malcolm MacIver | October 5, 2010 3:09 pm

CLARICE: Zoe Graystone was Lacy’s best friend. A real tragedy for all of us. She was very special. I mean, she was brilliant.

NESTOR: At computer stuff, right? That’s my major. Did you know that there are bits of software that you use every day that were written decades ago?

LACY: Is that true? Oh, that’s amazing.

NESTOR: Yeah. You write a great program, and, you know, it can outlive you. It’s like a work of art, you know? Maybe Zoe was an artist. Maybe her work… will live on.

From: Rebirth, Season 1.0 of Caprica

I’m excited that today Caprica is back on the air for the second half of its first season. As the show’s science advisor, I thought I’d pay homage to its reentry into our living rooms with some thoughts about how the show is dealing with the clash between the mortality of its living characters and the immortality of its virtual characters.

Read More

Let’s Play Predict the Future: Where Is Science Going Over the Next 30 Years?

By Amos Zeeberg (Discover Web Editor) | September 14, 2010 11:50 am

As part of DISCOVER’s 30th anniversary celebration, the magazine invited 11 eminent scientists to look forward and share their predictions and hopes for the next three decades. But we also want to turn this over to Science Not Fiction’s readers: How do you think science will improve the world by 2040?

Below are short excerpts of the guest scientists’ responses, with links to the full versions:

Read More


The New AI: Turn Robots Into Infant Scientists

By Malcolm MacIver | August 25, 2010 5:55 pm

While it’s clear that we have a lot going for ourselves right out of the womb, it’s equally clear that one of our most admirable qualities is that we rapidly “get it” – we learn languages, skills for manipulating objects, hip hop dance moves, recipes for coconut mojitos, and how to charm people into liking us (ideally, in that order). Rather than experiential learning like this, early AI work focused on sophisticated reasoning problems. The touchstone for these efforts was Alan Turing’s original effort to mimic the reasoning processes of mathematicians engaged in solving a math problem – an effort that gave us many great things, particularly a distillation of what it means for something to be computable that stands as one of the great intellectual accomplishments of the twentieth century. That form of AI, while successful in particular domains — chess playing and expert systems, for example –  has been less successful in solving problems of ongoing embodied activity, such as the aforementioned coconut mojito making. What if, instead of mimicking a mathematician trying to solve a math problem, Alan Turing had decided to mimic a scientist trying to determine the validity of a hypothesis? According to some developmental psychologists, in doing so we’d actually be emulating the reasoning processes of an infant, and thus, potentially, we’d be unlocking the great power of experiential learning.

Having robots with minds implementing the scientific process rather than math problem solving is essentially what’s happening in a few corners of robotics, most recently with the Xpero project, an effort to develop an embodied cognitive system that learns about its world much like an infant would. It’s one of a host of robo-infants being worked on (here’s a nice overview graphic). This approach has led to some very impressive achievements including an “evil starfish” robot that can quickly learn how to control its body after several of its “limbs” have been chopped off.
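
For a sense of the loop these robo-infants are built around, here is a cartoon version in code (purely illustrative; the push-threshold world and the update rule are made up for this example and are not taken from Xpero or any other project). The agent guesses a rule about its world, runs an experiment, compares prediction to outcome, and revises when it is surprised:

    import random

    def world(push_force):
        # Hidden ground truth the agent is trying to discover:
        # the block moves only when pushed harder than 5 units.
        return push_force > 5

    def run_experiments(trials=200):
        threshold_guess = random.uniform(0, 10)   # initial hypothesis about the world
        for _ in range(trials):
            force = random.uniform(0, 10)         # design an experiment
            predicted = force > threshold_guess   # what the hypothesis says will happen
            observed = world(force)               # what actually happens
            if predicted != observed:
                # Surprise: revise the hypothesis toward the informative data point.
                threshold_guess = (threshold_guess + force) / 2
        return threshold_guess

    print(f"learned threshold ~ {run_experiments():.2f} (true value: 5)")

The particular update rule doesn’t matter; the point is the shape of the loop: predict, act, compare, revise. That is hypothesis testing, and it is a very different starting point for AI than mimicking a mathematician at work.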

Read More

Can We Really Reverse-Engineer the Brain by 2030?

By Kyle Munkittrick | August 24, 2010 12:47 pm

Brainsplosion!

Engineer, inventor, and Singularity true-believer Ray Kurzweil thinks we can reverse-engineer the brain in a couple of decades. After Gizmodo mis-reported Kurzweil’s Singularity Summit prediction that we’d reverse-engineer the brain by 2020 (he predicted 2030), the blogosphere caught fire. PZ Myers’ trademark incendiary arguments kick-started the debate when he described Kurzweil as the “Deepak Chopra for the computer science cognoscenti.” Of course, Kurzweil responded, to which Myers retorted. Hardly a new topic, the Singularity has already taken some healthy blows from Jaron Lanier, John Pavlus and John Horgan. The fundamental failure of Kurzweil’s argument is summarized by Myers:

My complaint isn’t that he has set a date by which we’ll understand the brain, but that he has provided no baseline value for his exponential growth claim, and has no way to measure how much we know now, how much we need to know, and how rapidly we will acquire that knowledge.
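
To see why that complaint bites, here is a back-of-the-envelope sketch (the baselines and doubling times below are invented for illustration, not estimates of anything real). Under an exponential-growth assumption, the years remaining are just the doubling time multiplied by log2(1 / fraction already understood), so the answer swings wildly with two unmeasured inputs:

    import math

    def years_to_complete(current_fraction, doubling_time_years):
        # Years until knowledge reaches 100% of what's needed,
        # if it doubles every doubling_time_years.
        return doubling_time_years * math.log2(1.0 / current_fraction)

    for baseline in (0.01, 0.05, 0.20):    # assumed fraction of the brain understood today
        for doubling in (2, 5):            # assumed doubling time, in years
            print(f"baseline {baseline:.0%}, doubling every {doubling} yr: "
                  f"~{years_to_complete(baseline, doubling):.0f} years to go")

With those made-up inputs the forecast ranges from roughly 5 years to 33 years, which is the whole objection: without a measured baseline and a measured rate, the exponential curve can be made to land on whatever date you like.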

Read More
