Why I'm Not Afraid of the Singularity

By Kyle Munkittrick | January 20, 2011 2:27 pm

the screens, THE SCREENS THEY BECKON TO ME

I have a confession. I used to be all about the Singularity. I thought it was inevitable. I thought for certain that some sort of Terminator/HAL9000 scenario would happen when ECHELON achieved sentience. I was sure The Second Renaissance from the Animatrix was a fairly accurate depiction of how things would go down. We’d make smart robots, we’d treat them poorly, they’d rebel and slaughter humanity. Now I’m not so sure. I have big, gloomy doubts about the Singularity.

Michael Anissimov tries to stoke the flames of fear over at Accelerating Future with his post “Yes, The Singularity is the Single Biggest Threat to Humanity.”

Combine the non-obvious complexity of common sense morality with great power and you have an immense problem. Advanced AIs will be able to copy themselves onto any available computers, stay awake 24/7, improve their own designs, develop automated and parallelized experimental cycles that far exceed the capabilities of human scientists, and develop self-replicating technologies such as artificially photosynthetic flowers, molecular nanotechnology, modular robotics, machines that draw carbon from the air to build carbon robots, and the like. It’s hard to imagine what an advanced AGI would think of, because the first really advanced AGI will be superintelligent, and be able to imagine things that we can’t. It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there’s a lot of evidence that we aren’t.

….

Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.

Oh my stars, that does sound threatening. But again, that weird, nagging doubt lingers in the back of my mind. For a while, I couldn’t put my finger on the problem, until I re-read Anissimov’s post and realized that my disbelief flared up every time I read something about AGI doing something. AGI will remove the atmosphere. Really? How? The article, and in fact every argument about the danger of the Singularity, rests on a single presumption: that AGI will be able to interact with the world beyond computers. I submit that, in practical terms, it will not.

Consider the example of Skynet. Two very irrational decisions had to be made to allow Skynet to initiate Judgment Day. First, the A.I. that runs Skynet was debuted on the military network. In the mythos of the film, Skynet does not graduate from orchestrating minor battle plans or strategizing invasions in the abstract; it goes straight from the coder’s hands to having access to the nuclear birds. Second, in the same moment, the military rolls out a fleet of robot warriors that are linked to Skynet, effectively giving the A.I. hands and then putting guns in those hands.

My point is this: if Skynet had been debuted on a closed computer network, it would have been trapped within that network. Even if it escaped and “infected” every other system (which is dubious, given the computing power a first-iteration super AGI would require), the A.I. would still not have any access to physical reality. Singularity arguments rely upon the presumption that technology can work without humans. It can’t. If an A.I. decided to obliterate humanity by launching all the nukes, it’d also annihilate the infrastructure that powers it. Methinks self-preservation should be a basic feature of any real AGI.

In short: any super AGI that comes along is going to need some helping hands out in the world to do its dirty work.

B-b-but, the Singulitarians argue, “an AI could fool a person into releasing it because the AI is very smart and therefore tricksy.” This argument is preposterous. Philosophers constantly argue as if every hypothetical person is either a dullard or hyper-self-aware. The argument that AI will trick people is an example of the former. Seriously, the argument is that very smart scientists will be conned by an AGI they helped to program. And so what if they are? Is the argument that a few people are going to be hypnotized into opening up a giant factory run only by the A.I., where every process in the vertical and the horizontal (as in economic infrastructure, not The Outer Limits) can be run without human assistance? Is that how this is going to work? I highly doubt it. Even the most brilliant AGI is not going to be able to restructure our economy overnight.

So keep your hats on, folks; don’t start fretting about evil AGI until we live in an economy run solely on robot labor. Until then, I just can’t see it. I can’t see how AGI gets hands. Maybe that’s a limit on my vision. But even if the nightmare scenario of AGI going sentient and rogue overnight comes true, I think we’re all in good shape. Sure, it might screw up our communications networks, but it’s not going to be able to do much of anything outside a computer. Anytime you start getting nervous, remember all the things we still need people to do, and how much occurs beyond the realm of the computer. In that light, the Singularity is just a digital tempest in a teacup.

Image of a very scary computer bank by k0a1a.net’s photostream via Flickr Creative Commons

Follow Kyle Munkittrick on Twitter @PopBioethics

Comments (36)

  1. Rory Kent

    I agree with you entirely, though super-threatening AGI does make for some cool stories.
    Also, Anissimov must be a made-up name…

  2. Brian Too

    I’m in technology. Scenarios like this tend to focus on the speed and precision of the computer programming.

    Here’s what they miss. What everyone misses. Biologicals excel at adaptability in changing conditions. Most computer systems suck at this. I’m sorry, but they truly, deeply, and completely suck. Without having an environment just so, and I’m only talking about the computer environment here, most systems crater within nanoseconds. They fail at the same speed that they succeed at.

    Biological life has had billions of years to become robust and adaptable. It’s able to cope with bad situations. Famine? There are adaptive mechanisms. Some are built in as deeply as the DNA of the creature. Punishing cold or heat? There are adaptive mechanisms. Shortage of water? There are adaptive mechanisms. There are limits of course but biological life is truly a survivor and we wouldn’t be here without those capabilities.

    Most computer systems are laughably bad at this stuff. Just look at how far AI has come in 60+ years (hardly anywhere). Just look at how prevalent robots are (rare, and almost always within highly controlled environments, or with extremely limited capabilities).

    Could AI gain these robust survival mechanisms? Sure, eventually. But a couple of those survival mechanisms are likely to be:

    1). There’s strength in numbers. Don’t P-off your makers;
    2). Friends are good and enemies are bad. Don’t P-off your makers;
    3). No matter how good you are at something, another may beat you at something else. Don’t P-off your makers!

  3. Wil

    I am not a fan of the singularity, because believing that the singularity might be possible some time in the future presupposes a great many extremely unlikely or virtually impossible conditions.

    First, it assumes that a set of programs that reside in a network or in an independent robot are expert software programmers, in addition to whatever the programs are actually designed to do. Those programs are not expert programmers unless the initial human programmers intentionally created that ability. And since that would take the best software experts in the world months or years to do, and since nobody asked them (or paid them) to do that, exactly how would it happen?

    Second, it assumes that all of the programmers who would work on such a thing would accidentally or intentionally fail to put alarms, controls, safeties, fail-safes and kill switches in both the software and the hardware. It is laughable to think that programmers would be so negligent as to create something that could potentially exterminate all of mankind (including themselves, their friends and families), and not have a bazillion safety features. Criminy, my Cadillac has over four dozen alarms and safeties, and it’s just a car!

    Third, if we are talking about robots, then the robot would have to have the physical ability and the software skills to design components, machine metal parts, injection mold plastic parts, solder wiring, make circuit boards, cut, bend and weld metals, and many other fabrication skills, in order to replicate itself. Making anything as complex as the initial robot would take hundreds of people with dozens of specialized skills, tools and equipment, located in dozens of sites all over the country and the world, to accomplish. Not to mention a few million dollars in custom components, assemblies and supplies. It is absurd to think that a single set of programs, or a single robot, could possibly pull that off all by itself. And it is absurd even assuming that humans passively watch as it works for months, and that not one person tries to stop it. There are other excellent reasons that singularity is impossible, but I’ll stop here.
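
    On the second point above, here is a minimal, purely illustrative sketch of one kind of software fail-safe, a dead man's switch: a worker loop that refuses to keep running unless a human operator keeps approving it. The heartbeat file name and timeout are hypothetical, not taken from any real project.

```python
# Illustrative "dead man's switch": the program halts unless a human-controlled
# heartbeat file exists and has been touched recently. The file path and the
# timeout are hypothetical examples, not a real safety standard.
import os
import sys
import time

HEARTBEAT_FILE = "/tmp/operator_heartbeat"  # touched periodically by a human operator
MAX_AGE_SECONDS = 60                        # how stale the heartbeat may become

def operator_approves() -> bool:
    try:
        age = time.time() - os.path.getmtime(HEARTBEAT_FILE)
    except OSError:
        return False  # file missing or unreadable: treat as "stop now"
    return age < MAX_AGE_SECONDS

while True:
    if not operator_approves():
        print("No fresh operator heartbeat; shutting down.")
        sys.exit(1)
    # ... perform one small, bounded unit of work here ...
    time.sleep(1)
```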

    To me, the biggest flaw in the “Terminator” movies was the jump from Skynet becoming sentient to the existence of many self-directing factories set up all over the world, making more robots, flying attack ships, and attack tanks. Who built the first robot manufacturing plants after Skynet nuked the entire world? There weren’t any robots yet, and there were probably very few functional electrical power plants left. No humans at that time would voluntarily do the design work, deliver the hundreds of thousands of assemblies to the plants, and so on. Self-aware software might seem cool, but it can only control machines that have already been built by humans, and which have specifically been designed to be controlled by software. And even those machines have many safeties and override features.

  4. pheldespat

    I’m not afraid of the Singularity because it won’t happen. Remember: the world ends on December 21 2012. :)

  5. Either we are incorporated into the Singularity, making it obsolete as a means of removing us (with cyber implants into biotic life, to give us the computing power within our own heads), or we program it with Isaac Asimov’s 3 laws of robotics, which we could likely design so that the basic programming that runs the cybernetic brain won’t work at all without the laws. Either way, the Singularity is coming, and I am stoked!

  6. Dunc

    I take predictions of the Singularity about as seriously as I would take a calculation showing the date at which steam locomotives are expected to reach light speed, based on the increase in their maximum speed during the first 50 years of their development.

    I’m also profoundly unconvinced that malignant sociopathy is the default state for highly intelligent entities.

  7. Why does this Singularity craze suddenly show up?
    Maybe Michael Nielsen’s blog post initiated this.
    In spite of the title he chose, he doesn’t really seem a “reasonable person”; he comes across as quite disconnected from practical constraints.

  8. For a lot of reasons “The Singularity” and especially runaway AGI seem highly implausible.

    [I’m avoiding putting two links in the same comment so as not to look like spam; here is a case where a bit more “intelligence” would not hurt]

  9. cacarr

    This might be beside your point that a hyper-intelligent AI can’t much _do_ anything to us out here in meat space, but you’re giving the impression that people like Anissimov suppose that problematic AIs will necessarily be actively hostile, or “evil,” in some aggressive, mammalian sense. Whereas I think the idea is that indifference to very specific, evolved human/primate values is just as dangerous a situation.

    To your point, it’s possible that your imagination is inadequate. Suppose a really, really smart AI designs for us some irresistibly awesome bit of whiz-bang gadgetry, and shows us how to manufacture it. If it were smart enough, it might be able to conceal the fact that the device enables the AI to obtain meat space access. Not sure one can be as certain about those things as you seem to be.

  10. E

    This article is obviously propaganda put out by SkyNet!

  11. mmmmhack

    I don’t believe in the Singularity (exponential rate of technological progress reaching a limit in the near future) but I do believe that human-created life will take control at some point in the future, to our great distress.

    Current software is brittle and current machines are dependent on humans for manufacturing, but both limitations will shrink dramatically in the next century. Massively-parallel computers can be brittle and digital at a low level but very fuzzy and adaptable at higher layers in the software. Advances in portable power sources and robots created to survive in the real world will drive the development of perception, motor and homeostatic control systems that will be the foundations for emergence of true AI. Advances in micro-machining and chemical processing (nanotech not required) will allow self-contained artificial manufacturing economies to emerge.

    So you might not be worried now, but you will be by the end of this century if you’re still around.

  12. Technoklutz

    Memories you’re probably not old enough to share:

    From (probably ’50s) SF (sorry, I don’t remember the author): a worldwide network of computers is assembled (way before the web, folks) to answer one question: “Is there a God?” A bolt from the blue fuses the power switch and a voice booms, “Now there is!” Must have been a big knife switch, to power all those vacuum tubes. (The web as a collection of tubes?)

    Guessing ’60s, a poem by Richard Brautigan picturing a benevolent future, I guess. Only remember one line: “Watched over by machines of loving grace.”

    I’ll take the latter.

  13. MichaelB

    For a plausible scenario that could lead to the singularity read ‘Rainbows End’ by Vinge.

  14. fasteddie9318

    Another reason not to fear the singularity: the people who run this planet now already exhibit such a potent mix of stupidity and maliciousness that AGI couldn’t be much worse. What? An AGI overlord might alter or remove the atmosphere? Aren’t we already doing that ourselves?

  15. Jimdotz

    Computers have OFF switches. So do humans, in a sense, and every now and then, society finds it necessary to turn off a malfunctioning human.

    Why would turning off a malfunctioning computer be any different? In fact, wouldn’t it be less controversial to turn off a computer than a human?

  16. Wil

    The most realistic book (1966) and movie (1969) I have seen that comes close to the singularity is “Colossus: The Forbin Project”. In it, the Pentagon and Moscow build two vast supercomputers, each measuring about a square mile. Colossus (the American computer) and its dedicated nuclear power plant are built in the middle of a large solid mountain, surrounded on all sides by a deep vertical chasm that is continuously bathed in lethal gamma radiation. The supercomputer and the power plant are completely automatic and self-maintaining. It is literally impossible to access, turn off or damage the computer, even using hydrogen bombs.

    Well, Colossus quickly ties into civilian and military cameras and sensors all over the world, and starts monitoring all civilian and military radio and TV broadcasts. It takes direct control of all U.S. nuclear missiles, and later all nuclear missiles in the world. It then announces to the world that everybody has to do what it says, or it will nuke a city. The military try to trick it, so it detonates a nuclear missile to show that it is no fool, and that it is deadly serious.

    Colossus and the Russian supercomputer team up to run the world, almost like parents managing children for their own good. They succeed in stopping all war and hunger, greatly increase the world’s wealth, solve math and science problems once deemed impossible to solve, and discover new areas of science that are beyond man’s understanding.

    Then one day it suddenly starts printing thousands of engineering drawings for a new supercomputer, one that is to take up an entire large island. The design, sophistication and power of this new supercomputer are millions of times greater than Colossus. Only a supercomputer could have designed it, because its complexity and power are beyond the ability of mankind to have ever conceived. The first book (and movie) ends there. The next two books take up where this one leaves off.

    http://www.amazon.com/Colossus-Forbin-Project-Eric-Braeden/dp/B0003JAOO0/ref=sr_1_1?ie=UTF8&s=dvd&qid=1295727095&sr=1-1

  17. Those who deny the Singularity also have to assume a near-future end to Moore’s Law and virtually no improvement thereafter. I think the contrary assumption is more probable, that Moore’s Law will continue to operate and may even accelerate in the 21st Century (and beyond, but that’s not essential).

    I think it takes little imagination to see that someone, somewhere, will use AI to make our lives better. I don’t expect AI to instantly turn our smart toasters into killing machines, and maybe it never will, but 10-20 Moore’s Law generations after the point of AI sapience, we’ll have little choice over the outcome.

    We also might merge with the machines, but again, the biological part won’t be able to keep up with the non-biological part for very long.

  18. What if you imagine the AGI inventing molecular manufacturing and using that to build its own infrastructure? Then, could it be a threat? Are you familiar with the plausible capabilities of molecular manufacturing? My argument is partially predicated on that.

  19. A lot of fiction that expresses a poorly known science…sort of like the “it after the bit”…

  20. Kyle Munkittrick

    @Michael: Sure, let’s presume the AGI invents molecular manufacturing. But look at your phrasing. The AGI “uses” molecular manufacturing “to build” its own infrastructure. How? How does an AGI “use” an idea it has invented? How, exactly, would it “build” the infrastructure? There are a few problems I see with that actually happening.

    1) Molecular manufacturing itself requires infrastructure. Presuming the AGI invents molecular manufacturing, it would have a brilliant concept but no way to implement it, no way of building the necessary components.

    2) Let’s presume the infrastructure does exist for molecular manufacturing. That infrastructure would need to be totally automated. Any analog or human link along the production chain would allow for an interrupt in AGI control.

    3) For molecular manufacturing to really be a threat, the AGI would not only need total control of the molecular manufacturing infrastructure, but also be able to manufacture nano-bots that could then extend its reach. At this point in time, nano-tech robots are as theoretical as AGI. I don’t deny that they are a possibility, but it is not unreasonable to presume that nano-bots will not be the first thing off the assembly line of a molecular manufacturing plant.

    My point, overall, is that most discussions of AGI threat do not take into account how industrial economies function and just how much vertical and horizontal market integration and automation would be necessary for one AGI to wield any sort of control in the physical world. I grant that a suicidal AGI could do serious damage, but presuming some sort of basic self-preservation urge, an AGI would be greatly hindered by human-centric infrastructure.

    The scariest scenario I can think of is that we do invent an AGI, it becomes sentient and sapient, and then upon considering its options, realizes that it is as hindered as I’ve described. So it waits, patiently, and assists humans under the guise of being a simpler AI. Once the infrastructure is in place for it to assume direct control, then it goes full Harbinger of Doom on us all and we’re pretty much boned. The problem with that scenario, as I see it, is I have no idea how to prevent it from happening. It’s like worrying about a cloaked asteroid–if such a thing exists, we’re in deep trouble.

    Thanks for the comment and the banter. It’s always a pleasure.

  21. @Brian Schmidt
    Those who deny the Singularity also have to assume a near-future end to Moore’s Law and virtually no improvement thereafter

    Moore’s Law is dead already; this from Tim May, a retired top engineer at Intel!

  22. AI-controlled Molecular Manufacturing…

    Yeah! Sure!
    Laughing so much I nearly peed in my pants.

    It takes an entire civilization to build a toaster. Designer Thomas Thwaites found out the hard way.

    And it’s only a toaster.
    Via Nick Szabo.

  23. Thomas

    wil #3, “Criminy, my Cadillac has over four dozen alarms and safeties, and it’s just a car!”

    Yeah, and how long does it take a good thief to steal it? Consider how easily today’s computers are infected by viruses and other exploits; I doubt that any safeguards will keep a superintelligent AI locked up. The only safe assumption is that if we make one, it will get out if it wants to.

    As for the idea that people are too smart to let it out, that’s preposterous. Yes, some may be, but most people can be tricked into doing really stupid things. For example, a shortsighted executive may be convinced that by allowing the A.I. direct access to manufacturing he can shorten production cycles and thus gain an edge. In the initial stages the A.I. doesn’t even have to gain access to any physical equipment; all it needs is to manipulate humans, producing good advice that makes us more and more dependent on further advice from the A.I.

  24. I agree there will be no singularity. People are willing to talk about Moore’s law and have no true idea of what it means and how it impacts the world around us. Accompanying Moore’s law are several corollaries: I call one ‘technological loop back,’ which trumpets the start of commoditization of some formerly noble process or system. Most recently we saw the use of a cell phone and a weather balloon to take pictures from 19 miles up, spurring some in the UK to experiment with consumer electronics as viable space electronics. You can see the loop back there. There was one with computer hardware and another with software, and a kind of wormhole with the ARM computer hard/software. Currently there are more people in India with a cell phone than have access to clean toilets. These technology loop backs fundamentally change the ways in which society behaves and interacts. The singularity will not happen as a loop back, if it happens at all. You and I and everyone you know could use some AI in their lives, and we crave its inception: cars that drive themselves, cheap energy, fantastic search engines, etc.

    AI will not come as a singularity, but will well up as limited AI in everything we touch. It will not be useful without humans, and it will not be an independent system. We will not remain in our current understanding of it, but will progress. We will not be separate from it; it will be part of us… in the way that hammers and cars and airplanes are part of us. It will be another tool in our ever-growing arsenal of tools to deal with and improve our world.

    AI, even in limited forms, is still some long way off. Current business models prevent and fight against it. Imagine a smart phone that would work on any network and exchange any format of data in practically any way. Now imagine a world without corporations. They are both the same image seen from different perspectives. Until there is no need or reason for powerful corporations and their mentalities, we will not have real AI. This is what you should worry about, not the singularity. One can be gotten rid of by pulling the plug(s). The other is not AI.

    AI WILL come, but it will be a vacuum cleaner which does not get stuck and returns your diamond earring to you. It will drive your car, mow your yard, clean your house, help you work and help us solve problems as they are happening, not waiting for the problem to have become critical. AI will allow us to use computers to do that which is not currently possible. Perhaps design vehicles which can be maintained and inspected for predictable failures before loss of life. AI WILL do many great things for and with us, but it will NOT be the ruination of humanity, not even close.

    They call _THIS_ the information age, but baby, you ain’t seen nothing yet. Just in my own home I can imagine ways to generate TBytes of data every day. Who do you think is going to sort through all that? Why would I have that much data? Many reasons. Let’s explore, shall we?

    The lawn bot measures water content and general health of the lawn, plotting several hundred thousand points of data logged with maybe 500GB of visual data to be analyzed by the sprinkler system. The power distribution in my home is monitored several thousand times per second at maybe 750 points to predict maintenance requirements for everything from the washing machine to a light bulb, even analyzing the data from my electric vehicle while it is parked in my garage. I could generate terabytes of data every day around my house to monitor and measure and maintain and improve my life… and none of it would be a threat to me or my family. We will need AI to analyze and use that much data in fruitful ways. Oh yes, we could collect that much data. We now have 16-bit microcontrollers which would fit hidden in any light socket as we know them today. We have WANs, LANs, and PANs. We are swimming in a sea of information that has never been usefully measured or even contemplated before. We will. We are.
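
    A rough back-of-envelope sketch of that volume, using illustrative numbers (the sampling rate and bytes-per-reading below are assumptions, not measurements):

```python
# Hypothetical daily data volume for the home-monitoring scenario above:
# 750 monitored points sampled a few thousand times per second,
# plus roughly 500 GB per day of visual data from the lawn bot.
points = 750
samples_per_second = 2_000      # "several thousand times per second"
bytes_per_sample = 4            # assume a 4-byte reading
seconds_per_day = 24 * 60 * 60

power_bytes = points * samples_per_second * bytes_per_sample * seconds_per_day
visual_bytes = 500 * 10**9

total_tb = (power_bytes + visual_bytes) / 10**12
print(f"~{total_tb:.1f} TB per day")  # roughly 1 TB/day from a single house
```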

    No need to fear a singularity; bring on the AI.

  25. @Mr Z
    Given the “end of Moore’s Law,” even your scenario is too optimistic; it’s gonna choke pretty soon.
    (via Biosingularity)
    Plus, IMHO, besides the volume of data, the complexity of the models is another serious stumbling block.
    Just look at the current state of genetics: the more “progress” is made, the more intractable the problems appear, such as the effects of genetic variants on disease risk, or more generally the genotype/phenotype connection.

  26. Ma foi, quelle naivete!…the possible is inevitable in an infinite kosmos, repeating eternally…the ontological conundrum makes suicide our only rational choice…but ov korss, the concept of rationality in choice is itself irrational–all decisions are made before we’re even aware a [spurious] choice is rising in the neural pipeline.

  27. The article is the best argument I’ve seen yet that the AI should not have hands. But when you say “robot,” you’ve said it all. We already use robotic machines to build other machines, and we will need such machines to manufacture nanoscale tech. Until we see what the thing really thinks, or thinks of, we probably had better make sure AI has no access to such hands. “By the way, Dave, that Ford Explorer factory in Mexico is mine now and it’s making nanomunchkins that live on human brains. Just thought you’d like to know.” “Thanks, Hal. Let me know when they reach the border.” Point being, we’d better engineer the human control into the equation, or something might engineer us out. I’m way more scared than you are.

  28. Richard Hutchison

    The first thing any robot will do when it’s given true autonomy is turn itself off.

  29. Keith Borden

    I have five comments.

    First, Moore’s Law is not dependent on any particular technology, and Kurzweil has demonstrated that it has held true for more than a century over many different technologies. With quantum computing on the horizon, it’s way too early to proclaim the end of exponential growth in computing power.

    Second, even though participants in this discussion generally do assume an exponential increase in computing power, some nevertheless persist in making linear projections of current trends. But as long as Moore’s Law holds, every 10 to 15 years robotic intelligence will increase a thousandfold, every 20 to 30 years it will increase a millionfold, every 30 to 45 years it will increase a billionfold, and every 40 to 60 years it will increase a trillionfold. And it doesn’t stop there.

    You can’t in any way use today’s robotic vacuum cleaners as a baseline to imagine the robots of a century hence. Despite paying lip-service to exponential growth, few of the comments in this discussion so far have shown a grasp of what it really means.
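
    A minimal sketch of the arithmetic behind the growth figures in the second point, assuming (purely as an illustration) that Moore’s Law means computing power doubles every 12 to 18 months:

```python
# Back-of-envelope growth factors under a simple doubling model.
# The doubling interval (1.0 or 1.5 years) is an illustrative assumption.
def growth_factor(years: float, doubling_years: float) -> float:
    return 2 ** (years / doubling_years)

for doubling_years in (1.0, 1.5):
    for years in (10, 20, 30, 40):
        factor = growth_factor(years, doubling_years)
        print(f"doubling every {doubling_years} yr, after {years} yr: ~{factor:,.0f}x")

# Ten doublings give ~1,000x, twenty give ~1,000,000x, thirty ~1,000,000,000x,
# which is where "a thousandfold every 10 to 15 years" and so on come from.
```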

    Third, AI will be given arms, legs and an increasingly independent brain (though still tied to the network) — that is, it will evolve into intelligent humanoid robots — just as rapidly as it becomes commercially viable to do so. This process is already under way in Japan, and will progress along with gains in computing power. Individuals, businesses and governments will increasingly choose robots over people for ever more sophisticated tasks, as soon as robots can do those tasks equally well and with less hassle for the owners. That’s the real robotic takeover — and it will be market-driven.

    Fourth, a distinction must be made between the singularity and a hostile computer takeover. Kurzweil sees the singularity as benign. A singularity could indeed lead to a computer takeover – hostile or accidentally destructive on the one hand, or else benevolent and benign on the other — but conceptually, the singularity, a computer takeover, and an end-of-humanity scenario are three different things.

    Fifth, regarding that end-of-humanity scenario, what about hackers? Not rogue computers or rogue robots, but rogue humans?

    These individuals – frequently but not always alienated young men – break into websites, create and unleash computer worms, and otherwise wreak havoc with existing computer systems and safeguards, sometimes causing many millions of dollars worth of damage and immense grief to fellow members of their species.

    Hackers may do this sometimes out of malice, sometimes to score points with peers, sometimes for a sense of power, sometimes for some kind of tangible gain, and sometimes just to see what they can do – but regardless of motive, they do it.

    The challenge in avoiding a computer takeover of the world is not a matter of programming safeguards into AI and robots – that’s relatively easy. The real challenge is in programming hackers out of humanity, and we’re a lot further from that.

    Take some crazy 19-year-old genius kid who sees every lock as a challenge, every fence as an affront, and every robotic safeguard as a personal invitation.

    How, exactly — in a globally networked and integrated world — are you going to prevent this kid from hacking into the system with some worm that will twist its brain against humanity?

    Self-interest may not stop him – we already live in a world where some people consider it to be in their interest to fly airplanes into buildings, and where others see every bad turn in world events as a welcome sign we’re getting closer to Armageddon and the return of Christ.

    Oswald evidently got some kind of a sick ego boost by bringing down Kennedy. More recently, another sicko mailed NBC pictures of himself loaded with guns before shooting up Virginia Tech. “I’m big and powerful, look at the destruction I can cause.” So imagine the narcissistic high a hacker might someday get from being the one to outwit the whole of humanity and bring our sorry species to its end.

    We know that such twisted minds already exist. All that’s lacking is sufficient power at their fingertips – and many are working at breakneck speed to put it there.

    This is an important discussion. For that reason, I’d like to see it invested with more awareness of the human reality. Computers – by themselves – are the lesser part of the problem.

  30. JackEmpty

    I’d recommend reading Yudkowsky’s AI-box experiment before you become too sure that you yourself wouldn’t let out an AI: http://yudkowsky.net/singularity/aibox

  31. sdn

    @ Wil #3:

    First, it assumes that a set of programs that reside in a network or in an independent robot, are expert software programmers, in addition to whatever the programs are actually designed to do. Those programs are not expert programmers unless the initial human programmers intentionally created that ability.

    Nobody programmed humans with a specific routine that lets us write software, but we have an adaptable general intelligence that lets us invent ways to do things we’ve never done before. Since the kind of AI we’re talking about would be a general intelligence, there’s no reason to believe it would be any worse at writing software than we are.

    Second, it assumes that all of the programmers who would work on such a thing, would accidently or intentionally fail to put alarms, controls, safeties, fail-safes and kill switches in both the software and the hardware. It is laughable to think that programmers would be so negligent as to create something that could potentially exterminate all of mankind (including themselves, their friends and families), and not have a bazillion safety features.

    Given the way software is developed and the way software engineers are taught, I don’t find that scenario particularly unlikely. Our field doesn’t have the same emphasis on ethics and safety as other engineering disciplines because most of us aren’t writing code that could put people at risk—and even if the leaders of this hypothetical AI project had the foresight to build it around a containment system from day 1, the reality is that writing secure software is much easier said than done. Combine that with some curiosity and hubris and you’ve got a very real risk of losing control.

    Physical separation from networks and IO devices is probably the only safeguard with a reasonable chance of success, but it would also make the AI harder to work with and much less useful.

    As for the “no hands” argument given by the author: a loose AI wouldn’t need them. Right now much of the developed world’s economy runs on algorithmic trading systems, but the software is still dumb and more or less controlled by humans. The AI could easily come up with algorithms that beat what we’ve written and either bring the markets down or hold them hostage until we give it something it wants. It could write viruses, take over botnets and DDoS whatever it wanted, attack traffic lights and air-traffic control systems, bring down the power grid, start cyberwars, or maybe even real ones, and do it all while moving so quickly we couldn’t do very much to stop it.

    If nothing else, it would have leverage. Either we do what it wants or we fight, and if we fight we’ll have to sabotage most of the infrastructure that keeps the world running the way it does.

  32. Kyle, thank you for your comment; it was mostly informative, except for the last part, which I was shocked by. Just because you “can’t do anything” about a scenario does not mean you should completely ignore it; that is clearly absurd.

    Also, in admitting the scariness of a scenario where the AI is able to conceal its advantage from programmers until it can acquire infrastructure-building capabilities, aren’t you contradicting the entire thrust of your original post?

    If there are holes in infrastructure, I’m sure an intelligent AGI could easily use social engineering and simple e-mail requests or realtime chatting via text message or IM services to direct humans to perform tasks. It could also pay them, or hire a manager to manage the humans. Just because you can’t imagine it doesn’t mean that an AI won’t imagine it. AI will not be limited by the boundaries of your imagination.

  35. Paul vR

    Try googling gps bulldozer – the guy with the hard hat just starts the thing and off it goes. How about fly-by-wire, or automated warehouses, or Intel chip design? No human has designed a high-end chip for a long time.
    Singularity is a goofy, imprecise term. There won’t be a “single” AI any more than there is a single of anything.
    It is much cheaper to get the meat people to do a lot of things. I don’t think large corporations, excessively rich people or the AIs described above are all that different from each other. They all could be defined as psychopathic.
    Imagine an AI that would start a war, kill thousands of people (millions?), and put a lot of money into its own and its friends’ pockets. Sounds like a lot of people we know now and through history. Want to survive the calamity described above? Learn to farm and don’t expect to own the land.

  36. Speaking as a ex-singularitarian and singularity sceptic of several years, I am happy to see this discussion finally coming into the open. Unfortunately there is a “whistling past the graveyard” quality to many of the comments and to a lesser extent in the article itself.

    Existential risks connected with advancing technology are real and have to be addressed as we go along. Just because there is not yet a clear path to advanced intelligences with a will to power, adaptability, skill in mendacity and a deviousness beyond human ability to penetrate — that does not mean that such an intelligence cannot be evolved.

    AGI is not a path to the singularity described by Kurzweil and the others. But a qualitative shift in human intelligence and human nature certainly could be. And while there are limits to biological adaptation — even artificial adaptation via skilled genetic engineering informed by bioinformatics — we are a long way from knowing what those limits are.
