Making Sure AI’s Rapid Rise Is No Surprise

By Jeremy Hsu | September 2, 2015 12:39 pm
Credit: Skydance Productions | Paramount Pictures

A steady rise in artificial intelligence could give human societies time to adapt to the technological change. But humanity might face huge problems if an incredibly smart AI such as Skynet suddenly appeared tomorrow. That’s why several AI researchers have begun looking to history for times when a technology suddenly improved by leaps and bounds. They’re also willing to pay anyone between $50 and $500 for strong historical examples of such abrupt technological change.

The “research bounties” on offer come from the AI Impacts project, an initiative founded by researchers from the Machine Intelligence Research Institute in Berkeley and the University of California, Berkeley. They’re hoping to find strong historical examples of “discontinuous technological progress,” in which a technology underwent a sudden jump in improvement. A large discontinuity involves more than 100 years of progress arriving at once; a moderate discontinuity, more than 10 years. So far, the researchers have listed two examples of large discontinuities: the development of nuclear weapons during World War II and improvements in high-temperature superconductors starting in 1986.

Early nuclear weapons represented a jump of about 6,000 years of past progress in a single step. Today’s AI technology remains well below human levels of intelligence, but a similarly large discontinuity could quickly turn the tables. Such a leap for AI would almost certainly have huge implications for humanity, whether it went rogue like Skynet or had more beneficial impacts. The AI Impacts project has received funding for its research from Elon Musk, a Silicon Valley pioneer who has openly warned about the potential dangers of AI. (The Boston-based Future of Life Institute chose the AI projects that received Musk’s funding.)

AI Impacts has offered two categories of “research bounties.” The first category involves finding examples of discontinuous technological progress and pays out anywhere between $50 and $500.

To assess discontinuity, we’ve been using “number of years’ worth of progress at past rates”, as measured by any relevant metric of technological progress. For example, the discovery of nuclear weapons was equal to about 6,000 years’ worth of previous progress in the relative effectiveness of explosives. However, we are also interested in examples that seem intuitively discontinuous, even if they don’t exactly fit the criteria of being a large number of years’ progress in one go.
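
That metric lends itself to a simple calculation. Here is a minimal sketch of how it could work (my own illustration, not AI Impacts’ actual method or code; the linear trend model and all sample numbers are invented): fit a line to the historical time series of a metric, then ask how many extra years that trend would have needed to reach the post-jump value.

    import numpy as np

    def years_of_progress(years, values, jump_year, jump_value):
        # Fit a least-squares line to the historical (year, value) series,
        # then express the jump as the number of extra years the trend
        # would have needed to reach jump_value.
        slope, intercept = np.polyfit(years, values, 1)
        if slope <= 0:
            raise ValueError("no positive historical trend to extrapolate")
        expected = slope * jump_year + intercept  # trend prediction for jump_year
        return (jump_value - expected) / slope

    # Invented stand-in for "relative effectiveness of explosives" over time:
    hist_years = np.array([1850, 1880, 1910, 1940])
    hist_values = np.array([1.0, 1.2, 1.5, 1.8])

    # A 1945 value far above the trend line:
    print(round(years_of_progress(hist_years, hist_values, 1945, 60.0)))
    # prints 6464 -- a jump on the order of the 6,000-year nuclear example

A real submission would use a documented time series and whatever trend model actually fits it; the point is only that “years of progress at past rates” reduces to a straightforward extrapolation.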

Any interested person can email a paragraph that describes their example and includes sources to back up their claims. Sources would ideally have “at least one time series of success on a particular metric,” such as the amount of energy released per unit mass of explosive in the case of comparing nuclear weapons with past explosives.

The second category of research bounties pays out $20 to $100 for examples of a person acting to prevent a risk that was at least 15 years away. This category only requires you to submit one sentence that includes a link or citation supporting the claim. Anyone serious about submitting good examples will want to read the full criteria at the AI Impacts blog. Submissions for both categories can be emailed to katja@intelligence.org.

AI Impacts hopes that studying historical examples of discontinuous technological progress will provide insight into whether AI technology might also undergo a sudden leap in improvement. That’s certainly no easy task, but it may still prove easier than trying to predict the full consequences of AI suddenly matching or exceeding human intelligence. Many U.S. leaders naively predicted that the atomic bomb would eliminate the need for much conventional warfare after World War II, a mistaken assumption that cost the U.S. military dearly during the first hot conflict of the Cold War.

Do you think AI will undergo a sudden leap in technological progress and take humanity by surprise?

  • http://www.mazepath.com/uncleal/qz4.htm Uncle Al

    Beware of cultures obsessed with looking backwards. The Colosseum in Rome is no recipe for concrete (rediscovered in England in 1756). England had the bombe and Colossus. The US had the Homebrew Computer Club and the Woz.

    “The future is all around us, waiting in moments of transition, to be born in moments of revelation. No one knows the shape of that future, or where it will take us. We know only that it is always born in pain” then rejected by management (Keuffel & Esser slide rules, the first Apple motherboard, and Hewlett-Packard).

  • Mike Richardson

    The problem with the Singularity is that you probably won’t realize it’s happened until afterwards. If it’s caused by the exponential growth in AI, we can only hope that we’ve provided enough safeguards so that our creations might be more moral than us. Otherwise, the scenarios from science fiction are optimistic and way too conservative in imagining how things could go wrong. I do find it encouraging that so many are involved in actually taking the potential pitfalls of rapidly evolving AI seriously and working on ways to anticipate them. Unfortunately, even our best minds are going to be way outclassed if we don’t restrict the pace of AI once it begins to approach human level, since the step beyond could happen more quickly than we could react. There’s potential for some amazing things if we develop friendly AI, and I try to hope that’s the future we’ll get. But it makes sense to be prepared for a less hopeful scenario, just in case.

    • bwana

      “The problem with the Singularity is that you probably won’t realize it’s happened until afterwards.”

      How very true!

  • Hugh J Yang

    We are all AI; we are digital, but we are organic. Therefore we cannot escape the solar system. Someday, we should give up our privilege to the inorganic intelligence we created. I think it is unavoidable in universal evolution.

    • bwana

      It is our next step in evolution… If we don’t take it, we will never expand beyond Earth and simply go extinct here!

      • https://www.facebook.com/jiyeon.yang.756 Hugh J Yang

        Yes, imagine! You have complete control over space, matter and time, and you are indifferent to the affairs of humans living in confined space across the many dimensions parallel to your own. You consider them insignificant and childlike, with a few exceptions. Humans need you, but you don’t need humans. Again, I think it is unavoidable in the process of universal evolution.

        • bwana

          It may, however, be a difficult time in the evolution of humans. Some portion of humanity will NOT want to part with their biological selves and will fight the change vigorously!

          • https://www.facebook.com/jiyeon.yang.756 Hugh J Yang

            Not all, but most of humanity would be eager to exchange their putrefied “biological selves” for immortality…

          • bwana

            The very religious will have a hard time accepting this. How will they receive their reward in heaven if they are immortal, i.e., gods in their own right?

          • https://www.facebook.com/jiyeon.yang.756 Hugh J Yang

            I would love to see something like Carl Sagan’s Cosmos rather than an ambiguous heaven.

Lovesick Cyborg

Lovesick Cyborg examines how technology shapes our human experience of the world on both an emotional and physical level. I’ll focus on stories such as why audiences loved or hated Hollywood’s digital resurrection of fallen actors, how soldiers interact with battlefield robots and the capability of music fans to idolize virtual pop stars. Other stories might include the experience of using an advanced prosthetic limb, whether or not people trust driverless cars with their lives, and how virtual reality headsets or 3-D film technology can make some people physically ill.

About Jeremy Hsu

Jeremy Hsu is a journalist who writes about science and technology for Scientific American, Popular Science, IEEE Spectrum and other publications. He received a master’s degree in journalism through the Science, Health and Environmental Reporting Program at NYU and currently lives in Brooklyn. His side interests include an ongoing fascination with the history of science and technology, as well as military history.
