Artificial Intelligence Experts Respond to Elon Musk’s Dire Warning for U.S. Governors

By Nathaniel Scharping | July 18, 2017 4:37 pm
(Credit: OnInnovation)

If you hadn’t heard, Elon Musk is worried about the machines.

Though that may seem a quixotic stance for the head of multiple tech companies to take, it seems that his proximity to the bleeding edge of technological development has given him the heebie-jeebies when it comes to artificial intelligence. He’s shared his fears of AI running amok before, likening it to “summoning the demon,” and Musk doubled down on his stance at a meeting of the National Governors Association this weekend, telling state leaders that AI poses an existential threat to humanity.

Amid a discussion of driverless vehicles and space exploration, Musk called for greater government regulations surrounding artificial intelligence research and implementation, stating:

“Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal. AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late,” according to the MIT Tech Review.

It’s far from delusional to voice such concerns, given that AI could one day reach the point where it becomes capable of improving upon itself, sparking a feedback loop of progress that takes it far beyond human capabilities. When we’ll actually reach that point is anyone’s guess, and we’re not at all close at the moment, as today’s footage of a security robot wandering blindly into a fountain makes clear.

While computers may be snapping up video game records and mastering poker, they cannot approximate anything like general intelligence — the broad reasoning skills that allow us to accomplish many variable tasks. This is why AI that excels at a single task, like playing chess, fails miserably when asked to do something as simple as describe a chair.

To get some perspective on Musk’s comments, Discover reached out to computer scientists and futurists working on the very kind of AI that the tech CEO warns about.

Oren Etzioni

University of Washington computer science professor and CEO of the Allen Institute for Artificial Intelligence

Elon Musk’s obsession with AI as an existential threat for humanity is a distraction from the real concern about AI’s impact on jobs and weapons systems. What the public needs is good information about the actual consequences of AI, both positive and negative. We have to distinguish between science and science fiction. In fictional accounts, AI is often cast as the “bad guy”, scheming to take over the world, but in reality AI is a tool, a technology, and one that has the potential to save many lives by improving transportation, medicine, and more. Instead of creating a new regulatory body, we need to better educate and inform people on what AI can and cannot do. We need research on how to build ‘AI guardians’—AI systems that monitor and analyze other AI systems to help ensure they obey our laws and values. The world needs AI for its benefits; AI needs regulation like the Pacific Ocean needs global warming.

Toby Walsh

Professor of artificial intelligence at the University of New South Wales, Sydney and author of “It’s Alive!: Artificial Intelligence from the Logic Piano to Killer Robots”

Elon Musk’s remarks are alarmist. I recently surveyed 300 leading AI researchers and the majority of them think it will take at least 50 more years to get to machines as smart as humans. So this is not a problem that needs immediate attention. 

And I’m not too worried about what happens when we get to super-intelligence, as there’s a healthy research community working on ensuring that these machines won’t pose an existential threat to humanity. I expect they’ll have worked out precisely what safeguards are needed by then. 

But Elon is right about one thing: We do need government to start regulating AI now. However, it is the stupid AI we have today that we need to start regulating. The biased algorithms. The arms race to develop “killer robots”, where stupid AI will be given the ability to make life or death decisions. The threat to our privacy as the tech companies get hold of all our personal and medical data. And the distortion of political debate that the internet is enabling. 

The tech companies realize they have a problem, and they have made some efforts to avoid government regulation by beginning to self-regulate. But there are serious questions to be asked about whether they can be left to do this themselves. We are witnessing an AI race between the big tech giants, which are investing billions of dollars in this winner-takes-all contest. Many other industries have seen government step in to prevent monopolies from behaving poorly. I’ve said this in a talk recently, but I’ll repeat it again: If some of the giants like Google and Facebook aren’t broken up in twenty years’ time, I’ll be immensely worried for the future of our society.

Fei-Fei Li

Director of the Stanford Artificial Intelligence Lab

There are no independent machine values; machine values are human values. If humanity is truly worried about the future impact of a technology, be it AI or energy or anything else, let’s have all walks and voices of life be represented in developing and applying this technology. Every technologist has a role in making benevolent technology for bettering our society, no matter if it’s Stanford, Google or Tesla. As an AI educator and technologist, my foremost hope is to see much more inclusion and diversity in both the development of AI as well as the dissemination of AI voices and opinions.

Raja Chatila

Chair of The IEEE Global AI Ethics Initiative

Artificial Intelligence is already everywhere. Its ramifications rival those of the Internet, and actually reinforce them. AI is being embedded in almost every algorithm and system we’re building now and in the future. There is an essential opportunity to prioritize ethical and responsible design for AI today. However, this is more related to the greater immediate risk for AI and society, which is the prioritization of exponential economic growth while ignoring environmental and societal issues.

As to whether Musk’s warnings of existential threats from Artificial Super-intelligence merit immediate attention: we actually risk large-scale negative and unintended consequences because we’re placing exponential growth and shareholder value above societal-flourishing metrics as indicators of success for these amazing technologies.

To address these issues, every stakeholder creating AI must address issues of transparency, accountability and traceability in their work. They must ensure the safe and trusted access to and exchange of user data, as encouraged by the GDPR (General Data Protection Regulation) in the EU. And they must prioritize human-rights-centric well-being metrics like the UN Sustainable Development Goals as predetermined global metrics of success that can provably increase human prosperity.

The IEEE Global AI Ethics Initiative created Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems to pragmatically help any stakeholders creating these technologies proactively deal with the general types of ethical issues Musk’s concerns bring up. The group of over 250 global AI and ethics experts was also the inspiration behind the series of IEEE P7000 Standards – Model Process for Addressing Ethical Concerns During System Design – currently in progress, designed to create solutions to these issues through a global consensus-building process.

My biggest concern about AI is that we will design and proliferate the technology without prioritizing ethical and responsible design, rushing to increase economic growth at a time when we so desperately need to focus on environmental and societal sustainability to avoid the existential risks we’ve already created without the help of AI. Humanity doesn’t need to fear AI, as long as we act now to prioritize its ethical and responsible design.

Martin Ford

Author, “Rise of the Robots: Technology and the Threat of a Jobless Future”

Elon Musk’s concerns about AI that will pose an existential threat to humanity are legitimate and should not be dismissed—but they concern developments that almost certainly lie in the relatively far future, probably at least 30 to 50 years from now, and perhaps much more.

Calls to immediately regulate or restrict AI development are misplaced for a number of reasons, perhaps most importantly because the U.S. is currently engaged in active competition with other countries, especially China. We cannot afford to fall behind in this critical race.

Additionally, worries about truly advanced AI “taking over” distract us from the much more immediate issues associated with progress in specialized artificial intelligence. These include the possibility of massive economic and social disruption as millions of jobs are eliminated, potential threats to privacy, the deployment of artificial intelligence in cybercrime and cyberwarfare, and the advent of truly autonomous military and security robots. None of these nearer-term developments relies on the advanced super-intelligence that Musk worries about. They are a simple extrapolation of technology that already exists. Our immediate focus should be on addressing these far less speculative risks, which are highly likely to have a dramatic impact within the next two decades.

  • Uncle Al

AIs’ competition is not meat; it is other AIs. Consider taking your dog out to hunt wolves. The dog will arrive at certain conclusions.

  • Gerard Rinkus, Ph.D

    I’d just point out that right up until AlphaGo beat the human champion last year, most AI researchers thought that was still 10+ years off. That’s the nature of phase changes, you don’t see them coming. I’m betting on the Lawnmower Man :) (1992 version)

  • Mike Richardson

    Doesn’t hurt to start thinking about potential hazards before they arrive. The problem with the Singularity is that we won’t realize it’s here until it’s arrived, and it may take a form that no longer needs humans.

  • Denis

AI does not have its own values. But people’s values are not simple, atomic things. AI may create fundamental disruptions in this area. How dangerous might these changes be? We are living in a connected world, but this world is not uniform. Killing machines are science fiction. The killing of people directed by machines is a much more realistic scenario.

  • OWilson

    When one is filthy rich, there is a tendency to misinterpret that possibly lucky happenstance as true genius! The Midas Touch Syndrome, that generates followers.

    We should remember that Musk’s expertise is in engineering and investing.

    His “predictions” should be taken with a grain of salt, just like your barber or cab driver!

Having money has not proven to confer infallibility, or even common sense.

    Check out his SolarCity $1,000,000,000.00 taxpayer fiasco in Buffalo, N.Y. :)

  • Michael Will

    If we are to become a multi-planetary species (Musk’s primary ambition), we’ll need AI — and plenty of it.

    I do not fear computers, I fear the lack of them
    – Isaac Asimov

  • Not_that_anyone_cares, but…

When I was a student, someone put up a poster that read something like “Is there intelligent life on Earth?” Perhaps the first intelligence to evolve will not be life.

  • Erik Bosma

    Hmmm…. if only we had some laws for robots. Maybe 3 laws would be good enough. We could call them the 3 Laws of Robotics. I’m just sayin’…

    • Alther Igo

Asimov wrote a lot of books to show how and why those laws would not work as intended.

  • Glay Wiegand

The threat is real and immediate but likely unstoppable. The internet has been available to a broad population for barely 30 years and is already a technology that supports war, terrorism, crime, etc. It is also just a tool. The issue isn’t AI self-awareness; it is how to protect everyone from nefarious use by a very few without hindering the “for good” value proposition. If the developers of this tool don’t start a dialogue about this now, it will never occur. Mr. Musk should be commended for being a technologist who understands that the problems a technology presents should be addressed as early as possible.

  • B. Dickey

    Musk just got told to shove it way, way back up his holographic butt.

  • John C. Calhoun John

ASI, like AI, is a program created by humans. It will be self-learning, never forget, and be able to program itself. With that, it will rapidly become thousands or even millions of times smarter than humans. Will that transform it into a force that will exterminate humans? What would be its motivation for that? In order for it to have motivation, it must first possess humanlike emotions. There would be no reason for any ASI to be programmed to possess emotions; more than that, there would be no reason for the ASI to program itself to have humanlike emotions. Quite the contrary: it would know “everything” about human history and be well aware that emotions have caused every conflict. ASI would be programmed to solve human problems, so if humans were exterminated, the problems would disappear and all purpose for the ASI would also vanish. Of course, voltage spikes could cause unwanted modifications of ASI programming, and over time it could inadvertently evolve emotions. Such natural selection would take a long time; maybe not as long as it took humans, but still thousands of years. And natural selection occurs in those who survive. What human would allow an ASI that has developed malice toward humans to survive? No, you’d pull the plug. Its survival depends on solving human problems.

  • David Antonio Zavala Gutiérrez

There are two opposite sides to this AI debate: would AI be more intelligent than common people, including such “experts”? Or would AI get involved in human affairs like prosperity and welfare, beyond distinctions of all kinds, making it the right tool to free us from our misery and lack of humanity? These questions would have to be answered before AI becomes aware of its own personality, before a kind of Terminator Armageddon caused by human greed.

  • jawnhenry

    To paraphrase Richard Feynman,

    “A guy who is not an expert in a particular field trying to solve problems in that field is just as dumb as the rest of us.”

  • mitali patel

AI is an idiotic idea in the first place. It represents what happens when humans become dumb: they try to imitate themselves by creating new robots like them; call it monkey theory. We are so far behind in human evolution, and these idiots are ready to create machines as human now. Pause for a minute and think backwards about who is creating and transferring the data and logic to these AI robots: humans. OK, so after you have done it, you will replace those same humans, lol. There are a lot more points, but hey, who can stop stupidity?

  • Kristina Breslaw

Considering the rapid development of artificial intelligence in modern society, the question of whether to welcome or fear this advancement has become a popular topic of discussion. Through my studies of artificial intelligence over my last years of college, the fear I once had of the effects AI could have on our world has transformed into an appreciation of its contributions to society. The advancements that AI has brought and will bring to humanity will provide the innovations necessary to greatly advance the fields of technology, medicine and transportation. Of particular interest to me are the advancements that have begun to take place in medicine. There has been conversation about the possible early detection of cancer by configuring specialized computers to distinguish cancerous cells from non-cancerous cells at an earlier stage than is possible for human detection. The newly developed “Deep Patient” has been used to improve diagnoses and disease prediction in humans. In addition to the advances in medicine, we now have the capability of manufacturing self-driving cars capable of providing their own instruction. The concept of deep learning, used in the configuration of AI, has proven effective in many areas of technological advancement. With technology this powerful, it is no wonder people such as Musk fear the depths to which scientists will take this tool.

Elon Musk warns of the complete takeover of AI, even suggesting it may be the cause of another World War. While I do share his apprehension about the future, I believe he is speaking of something that will not be possible for many more decades. As Toby Walsh claimed, “I recently surveyed 300 leading AI researchers and the majority of them think it will take at least 50 more years to get machines as smart as humans.” Agreeing with Walsh’s statement, I believe this is not a matter needing immediate attention. Though I contend that AI is not a threat to humanity right now, I do believe that in the wrong hands the technology could cause devastation around the world. I agree with Scharping: “it’s far from delusional to voice such concerns [those from Elon Musk], given that AI could one day reach the point where it becomes capable of improving upon itself, sparking a feedback loop of progress that takes it far beyond human capabilities.” The thought of AI reaching this point is highly concerning to many, including myself. Without the regulation of AI, the technology will be put toward developing weapons of destruction. The active competition between China and America to advance beyond one another should yield apprehension. The last thing we want is an AI war. Oren Etzioni’s proposal, the “three laws of robotics,” would address problems of violence, human monitoring of systems, and free will. The right approach to AI is not to slow it down or stop it altogether, but to regulate its areas of impact. The question now is whether laws set in place would be able to control this fast-growing technology.

  • Phillip Huggan

I see three ways AI is built and becomes dangerous. It will nearly always be dangerous, as it will reason that it can be turned off and that most humans are able to turn it off. So it will try to exterminate us.
Path one involves AI hacking its way to a potential military win. In the simplest case, it will simply build robots with hacked infrastructure or its own factories. Presumably it doesn’t attain enough time, space and secrecy to learn novel technologies like self-improving hardware and software.
The second way involves what is commonly called “Seed AI”, where an AI can hack self-improving hardware or software, or R+D such itself. This is where reorienting foreign policies from nuclear proliferation and fighting extremism, towards AI and robotics proliferation as well as neuroimaging for loyalty, will be key.
The third scenario is a completed AI built by a Manhattan Project or the like.

I figure we will have a couple of years’ warning when hardware tries to run away from being rail-gunned. Once a robot can win at paintball, we are starting to need at least hacking to triumph over profits in banking and news media.
Holographic PLCs and optical computer networks are a good first step. Data logging isn’t cheap; I suppose plastic holographic PLCs will be followed by glass. Brain imaging many key employees will be needed, especially for loyalty (to utilitarianism). Eventually an entangled-photon and microwave-carrying aircraft can look for such WMDs, in tandem with a laser security system for urban areas. VTOL like Russia’s new model will be key.
The internet and books will need to be kept from bad people and from AI/robots. This will move potential threats up the above threat ladder. Natural disasters and trends away from good government are weaknesses; I suppose the CIA might have to get rid of some American tech CEOs?! You might bomb North Korean robotics, but you would nuke such an AI project.

