Category: Computers

A New Robot for the Bestiary: How to Build a Robotic Ghost Fish

By Malcolm MacIver | January 26, 2011 1:42 pm

At night in the rivers of the Amazon Basin there buzzes an entire electric civilization of fish that “see” and communicate by discharging weak electric fields. These odd characters, swimming batteries which go by the name of “weakly electric fish,” have been the focus of research in my lab and those of many others for quite a while now, because they are a model system for understanding how the brain works. (While their brains are a bit different, we can learn a great deal about ours from them, just as we’ve learned much of what we know about genetics from fruit flies.) There are now well over 3,000 scientific papers on how the brains of these fish work.

Recently, my collaborators and I built a robotic version of these animals, focusing on one in particular: the black ghost knifefish. (The name is apparently derived from a native South American belief that the souls of ancestors inhabit these fish.  For the sake of my karmic health, I’m hoping that this is apocryphal.) My university, Northwestern, did a press release with a video about our “GhostBot” last week, and I’ve been astonished at its popularity (nearly 30,000 views as I write this, thanks to coverage by places like io9, Fast Company, PC World, and msnbc). Given this unexpected interest, I thought I’d post a bit of the story behind the ghost.


Why I'm Not Afraid of the Singularity

By Kyle Munkittrick | January 20, 2011 2:27 pm

the screens, THE SCREENS THEY BECKON TO ME

I have a confession. I used to be all about the Singularity. I thought it was inevitable. I thought for certain that some sort of Terminator/HAL9000 scenario would happen when ECHELON achieved sentience. I was sure The Second Renaissance from the Animatrix was a fairly accurate depiction of how things would go down. We’d make smart robots, we’d treat them poorly, they’d rebel and slaughter humanity. Now I’m not so sure. I have big, gloomy doubts about the Singularity.

Michael Anissimov tries to stoke the flames of fear over at Accelerating Future with his post “Yes, The Singularity is the Single Biggest Threat to Humanity.”

Combine the non-obvious complexity of common sense morality with great power and you have an immense problem. Advanced AIs will be able to copy themselves onto any available computers, stay awake 24/7, improve their own designs, develop automated and parallelized experimental cycles that far exceed the capabilities of human scientists, and develop self-replicating technologies such as artificially photosynthetic flowers, molecular nanotechnology, modular robotics, machines that draw carbon from the air to build carbon robots, and the like. It’s hard to imagine what an advanced AGI would think of, because the first really advanced AGI will be superintelligent, and be able to imagine things that we can’t. It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there’s a lot of evidence that we aren’t.

….

Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.

Oh my stars, that does sound threatening. But again, that weird, nagging doubt lingers in the back of my mind. For a while, I couldn’t put my finger on the problem, until I re-read Anissimov’s post and realized that my disbelief flared up every time I read something about AGI doing something. AGI will remove the atmosphere. Really? How? The article, and in fact all arguments about the danger of the Singularity, necessarily presume one single fact: that AGI will be able to interact with the world beyond computers. I submit that, in practical terms, they will not.

The Undesigned Brain is Hard to Copy

By Kyle Munkittrick | January 17, 2011 10:47 am


UPDATE: Hanson has responded and Lee has rebutted. My reaction after the jump.

The Singularity seems to be getting less and less near. One of the big goals of Singularity hopefuls is to be able to put a human mind onto (into? not sure on the proper preposition here) a non-biological substrate. Most of the debates have revolved around computer analogies. The brain is hardware, the mind is software. Therefore, to run the mind on different hardware, it just has to be “ported” or “emulated” the way a computer program might be. Timothy B. Lee (not the internet-inventing one) counters Robin Hanson’s claim that we will be able to upload a human mind onto a computer within the next couple of decades by dissecting the computer=mind analogy:

You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation,” to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. This is different from an emulator, which (if implemented well) can be expected to behave exactly like the system it is emulating, for as long as you care to run it.

In short: we know how software is written, we can see the code and rules that govern the system–not true for the mind, so we guess at the unknowns and test the guesses with simulations. Lee’s post is very much worth the full read, so give it a perusal.
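The emulation/simulation distinction lends itself to a toy illustration (my own sketch, not from Lee’s post): a system whose rules we know exactly can be reproduced step for step forever, while a model of a chaotic “natural” system, started from an imperfect measurement, drifts away from reality even though every individual step is computed correctly.

```python
# Toy contrast between emulation and simulation (illustrative only).

def emulate_counter(steps):
    # An "emulator": the rules are fully known, so the result is exact
    # for as long as you care to run it.
    state = 0
    for _ in range(steps):
        state = (state + 1) % 16
    return state

def logistic(x, steps, r=4.0):
    # A chaotic map standing in for a natural system with no blueprint.
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# A "simulation": the initial condition comes from measurement, and any
# measurement has finite precision. A one-in-ten-million error in the
# starting value leaves the trajectory unrecognizable after ~50 steps,
# even though each step was computed correctly.
reality = logistic(0.4, 50)
simulation = logistic(0.4 + 1e-7, 50)
```

Weather models fail for exactly this reason: useful at large scales and short horizons, never raindrop-exact.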

Lee got me thinking with his point that “natural systems don’t have designers.” Evolutionary processes have resulted in the brain we have today, but there was no intention or design behind those processes. Our minds are undesigned.

I find that fascinating. In the first place, because it means that simulation will be exceedingly difficult. How do you reverse-engineer something with no engineer? Second, even if a simulation is successful, it by no means guarantees that we can change the substrate of an existing mind. If the mind is an emergent property of the physical brain, then one can no more move a mind from one substrate to another than one could move a hurricane from one weather system to another. The mind, it may turn out, is fundamentally and essentially related to the substrate in which it is embodied.

Exclusive: We Talk "TRON: Legacy" With Director Joe Kosinski

By Andrew Moseman | December 15, 2010 9:00 am

It’s been 28 years since Jeff Bridges fell into Tron and its amazing 1980s computer graphics. Now the Tron universe is back with the new movie Tron: Legacy, out December 17.

Here’s the extended version of our interview with director Joe Kosinski from the December issue of DISCOVER, in which the first-time feature film director talks about reinventing the light cycle, building suits with on-board power, and how time passes in Tron compared to the real world.

Why return to Tron, and why now?

The original Tron was conceptually so far ahead of its time with this notion of a digital version of yourself in cyberspace. I think people had a hard time relating to it in the early 1980s. We’ve caught up to that idea—today it’s kind of second nature.

Visually, Tron was like nothing else I’d ever seen before: completely unique. Nothing else looked like it before, and nothing else has looked like it since—you know, hopefully until our movie comes out.

How did you think about representing digital space as a physical place?

Where the first movie tried to use real-world materials to look as digital as possible, my approach has been the opposite: to create a world that felt real and visceral. The world of Tron has been sitting isolated, disconnected from the Internet for the last 28 years. And in that time, it has evolved into a world where the simulation has become so realistic that it feels like we took motion picture cameras into this world and shot the thing for real. It has the style and the look of Tron, but it’s executed in a way that you can’t tell what’s real and what’s virtual. I built as many sets as I could. We built physically illuminated suits. The thing I’m most proud of is actually creating a fully digital character, who’s one of the main characters in our movie.

What did you keep from Tron, and what evolved?


Quantum Dollars use Uncertainty to Create Certainty

By Eric Wolff | December 13, 2010 4:29 am

Without getting into the ethics of WikiLeaks’ activities, I’m disturbed that Visa, MasterCard and PayPal have all seen fit to police the organization by refusing to act as a middleman for donations. The whole affair drives home how dependent we are on a few corporations to make e-commerce function, and how little those corporations guarantee us anything in the way of rights.

In the short term, we may be stuck, but in the longer term, quantum money could help solve the problems by providing a secure currency that can be used without resort to a broker.

Physicist Steve Wiesner first proposed the concept of quantum money in 1969. He realized that since quantum states can’t be copied, their existence opens the door to unforgeable money.

Here’s how MIT computer scientist Scott Aaronson explained the principles:

Heisenberg’s famous Uncertainty Principle says you can either measure the position of a particle or its momentum, but not both to unlimited accuracy. One consequence of the Uncertainty Principle is the so-called No-Cloning Theorem: there can be no “subatomic Xerox machine” that takes an unknown particle, and spits out two particles with exactly the same position and momentum as the original one (except, say, that one particle is two inches to the left). For if such a machine existed, then we could determine both the position and momentum of the original particle—by measuring the position of one “Xerox copy” and the momentum of the other copy. But that would violate the Uncertainty Principle.

…Besides an ordinary serial number, each dollar bill would contain (say) a few hundred photons, which the central bank “polarized” in random directions when it issued the bill. (Let’s leave the engineering details to later!) The bank, in a massive database, remembers the polarization of every photon on every bill ever issued. If you ever want to verify that a bill is genuine, you just take it to the bank.
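Wiesner’s idea can be caricatured in ordinary code (a classical toy model of my own, not from Aaronson’s post—real quantum states can’t be represented this way): each “photon” carries a bit in one of two bases, measuring in the right basis recovers the bit, and measuring in the wrong basis returns a coin flip. The bank, which remembers every basis, always accepts the genuine bill; a forger who has to guess bases scrambles about half the states.

```python
import random

def issue_bill(n_photons, rng):
    # The bank prepares each photon in a random basis ("+" or "x") with a
    # random bit, and records all of it under the bill's serial number.
    return [(rng.choice("+x"), rng.randint(0, 1)) for _ in range(n_photons)]

def measure(photon, basis, rng):
    # Measuring in the preparation basis returns the encoded bit; any other
    # basis gives an unpredictable result (the toy stand-in for uncertainty).
    prep_basis, bit = photon
    return bit if basis == prep_basis else rng.randint(0, 1)

def verify(bill, record, rng):
    # The bank measures every photon in the basis it remembers preparing
    # and checks each outcome against the recorded bit.
    return all(measure(photon, basis, rng) == bit
               for photon, (basis, bit) in zip(bill, record))

rng = random.Random(0)
record = issue_bill(100, rng)
genuine = list(record)               # the real bill carries the true states
assert verify(genuine, record, rng)  # the bank always accepts it

# A counterfeiter doesn't know the bases, so the best he can do is guess a
# basis, measure, and re-prepare. Each wrong guess scrambles that photon,
# and with 100 photons the fake is all but certain to fail verification.
forged = []
for photon in genuine:
    guess = rng.choice("+x")
    forged.append((guess, measure(photon, guess, rng)))
```

Each photon of the forgery passes the bank’s check with probability about 3/4, so a 100-photon fake survives with probability around (3/4)^100, i.e., essentially never.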


CATEGORIZED UNDER: Computers, Electronics

We Need Gattaca to Prevent Skynet and Global Warming

By Kyle Munkittrick | November 10, 2010 6:54 pm

If only they'd kept Jimmy Carter's solar panels on there, this whole thing could have been avoided.

Independence Day has one of my favorite hero duos of all time: Will Smith and Jeff Goldblum. Brawn and brains, flyboy and nerd, working together to take out the baddies. It all comes down to one flash of insight from a drunk Goldblum after he’s been chastised by his father. Cliché eureka! moments like Goldblum’s realization that he can give the mothership a “cold” are great until you realize one thing: if Goldblum hadn’t been as smart as he was, the movie would have ended much differently. No one in the film was even close to figuring out how to defeat the aliens. Will Smith was in a distant second place, and he had only discovered that they’re vulnerable to face punches. The hillbilly who flew his jet fighter into the alien destruct-o-beam doesn’t count, because he needed a force-field-free spaceship for his trick to work. If Jeff Goldblum hadn’t been a super-genius, humanity would have been annihilated.

Every apocalyptic film seems to trade on the idea that there will be some lone super-genius to figure out the problem. In The Day The Earth Stood Still (both versions) Professor Barnhardt manages to convince Klaatu to give humanity a second look. Cleese’s version of the character had a particularly moving “this is our moment” speech. Though it’s eventually the love between a mother and child that triggers Klaatu’s mercy, Barnhardt is the one who opens Klaatu to the possibility. Over and over we see the lone super-genius helping to save the world.

Shouldn’t we want, oh, I don’t know, at least more than one super-genius per global catastrophe? I’d like to think so. And where, you may ask, might we get some more geniuses? We make them.


Mutants, Androids, Cyborgs and Pop Culture Films

By Malcolm MacIver | November 2, 2010 1:07 pm

WBEZ, the Chicago affiliate of National Public Radio, recently gathered together several of my fellow science and engineering researchers at Northwestern University to talk about the science of science fiction films. The panel, and just short of 500 people from the community and university, watched clips from Star Wars, Gattaca, Minority Report, Eternal Sunshine of the Spotless Mind, and The Matrix. I was the robot/AI guy commenting on the robot spiders of Minority Report; Todd Kuiken, a designer of neuroprosthetic limbs, commented on Luke getting a new arm in Star Wars: The Empire Strikes Back; Tom Meade, a developer of medical biosensors and new medical imaging techniques, commented on Gattaca; and Catherine Wooley, who studies memory, commented on Eternal Sunshine.

The full audio of the event can be streamed or downloaded from here.


Caprica Puzzle: If a Digital You Lives Forever, Are You Immortal?

By Malcolm MacIver | October 5, 2010 3:09 pm

CLARICE: Zoe Graystone was Lacy’s best friend. A real tragedy for all of us. She was very special. I mean, she was brilliant.

NESTOR: At computer stuff, right? That’s my major. Did you know that there are bits of software that you use every day that were written decades ago?

LACY: Is that true? Oh, that’s amazing.

NESTOR: Yeah. You write a great program, and, you know, it can outlive you. It’s like a work of art, you know? Maybe Zoe was an artist. Maybe her work… will live on.

From: Rebirth, Season 1.0 of Caprica

I’m excited that today Caprica is back on the air for the second half of its first season. As the show’s science advisor, I thought I’d pay homage to its reentry into our living rooms with some thoughts about how the show is dealing with the clash between the mortality of its living characters and the immortality of its virtual characters.


Let’s Play Predict the Future: Where Is Science Going Over the Next 30 Years?

By Amos Zeeberg (Discover Web Editor) | September 14, 2010 11:50 am

As part of DISCOVER’s 30th anniversary celebration, the magazine invited 11 eminent scientists to look forward and share their predictions and hopes for the next three decades. But we also want to turn this over to Science Not Fiction’s readers: How do you think science will improve the world by 2040?

Below are short excerpts of the guest scientists’ responses, with links to the full versions:


MORE ABOUT: Top Posts

Is AI More Common Than Biological Intelligence Across the Universe?

By Malcolm MacIver | August 31, 2010 6:04 pm

In a recent article, Search for Extraterrestrial Intelligence (SETI) astronomer Seth Shostak makes an intriguing claim: SETI should start pointing its telescopes toward corners of the known universe that would be friendly not just to intelligent aliens but to artificial alien intelligence. The basis of his suggestion is that any form of life intelligent enough to generate the kinds of radio signals that SETI is looking for would be “quickly” superseded by an artificial intelligence of their creation. Here, going on our own rate of progress toward AI, Shostak suggests that this radio-to-AI delay is a small handful of centuries.

These artificial intelligences, not likely to have had the “nostalgia module” installed, may quickly flee the home planet like a teenager trying to pretend it isn’t related to its parents. If nothing else, they will likely need to do this to find further resources such as materials and energy. Where would they want to go? Shostak speculates they may go to places where large amounts of energy can be obtained, such as near large stars or black holes.

Stephen Hawking imagines aliens covering stars with mirrors to generate enough power for worm holes.

Stephen Hawking has suggested one reason to go to high-energy regions would be to make worm holes through space-time to travel vast distances quickly. These areas are not hospitable to life as we know it, and so are not currently the target of SETI’s telescopes searching for signals of such life.

