Computers Learn to Imagine the Future

By Garrett Kenyon | February 28, 2018 12:37 pm

Predicting the future position of objects comes naturally to humans, but it is quite difficult for a computer. (Credit: Shutterstock)

In many ways, the human brain is still the best computer around. For one, it’s highly efficient. Our largest supercomputers require millions of watts, enough to power a small town, but the human brain runs on roughly the power of a 20-watt bulb. While teenagers may seem to take forever to learn what their parents regard as basic life skills, humans and other animals are also capable of learning very quickly. Most of all, the brain is truly great at sorting through torrents of data to find the relevant information to act on.

At an early age, humans can reliably perform feats such as distinguishing an ostrich from a school bus – an achievement that seems simple, but illustrates the kind of task that even our most powerful computer vision systems can get wrong. We can also tell a moving car from the static background and predict where the car will be in the next half-second. Challenges like these, and far more complex ones, expose the limitations in our ability to make computers think the way people do. But recent research at Los Alamos National Laboratory is changing all that.

Neuroscientists and computer scientists call this field neuromimetic computing – building computers inspired by how the cerebral cortex works. The cerebral cortex relies on billions of small biological “processors” called neurons, which store and process information in densely interconnected circuits called neural networks. At Los Alamos, researchers are simulating biological neural networks on supercomputers, enabling machines to learn about their surroundings, interpret data and make predictions much the way humans do.
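
To make the idea concrete, here is a toy sketch – in Python, and emphatically not the Lab’s simulation code – of the kind of network the paragraph describes: each model neuron repeatedly sums the activity of the neurons it is wired to and applies a simple threshold. The network size, weights and update rule are illustrative assumptions only; a real cortical simulation involves billions of neurons and far richer dynamics.

```python
# Toy sketch of a densely interconnected network of simple "processors."
# All sizes and rules here are illustrative assumptions, not LANL's code.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100                                   # a cortical simulation would use billions
weights = rng.normal(scale=0.1, size=(n_neurons, n_neurons))  # dense interconnections
activity = rng.random(n_neurons)                  # initial firing rates

for step in range(50):                            # let the circuit settle
    drive = weights @ activity                    # each neuron sums its inputs
    activity = np.maximum(drive, 0.0)             # simple threshold nonlinearity
    activity /= np.linalg.norm(activity) + 1e-8   # keep firing rates bounded
```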

This kind of machine learning is easy to grasp in principle, but hard to implement in a computer. Teaching neuromimetic machines to take on huge tasks like predicting weather and simulating nuclear physics is an enterprise requiring the latest in high-performance computing resources.

Los Alamos has developed codes that run efficiently on supercomputers with millions of processing cores to crunch vast amounts of data and perform a mind-boggling number of calculations (over 10 quadrillion!) every second. Until recently, however, researchers attempting to simulate neural processing at anything close to the scale and complexity of the brain’s cortical circuits have been stymied by limitations on computer memory and computational power.

All that has changed with the new Trinity supercomputer at Los Alamos, which became fully operational in mid-2017. One of the fastest computers in the United States, Trinity has unique capabilities designed for the National Nuclear Security Administration’s stockpile stewardship mission, which relies on highly complex nuclear simulations in place of nuclear weapons testing. All this capability means Trinity allows a fundamentally different approach to large-scale cortical simulations, enabling an unprecedented leap in the ability to model neural processing.

To test that capability on a limited-scale problem, computer scientists and neuroscientists at Los Alamos created a “sparse prediction machine” that executes a neural network on Trinity. A sparse prediction machine is designed to work like the brain: researchers expose it to data – in this case, thousands of video clips, each depicting a particular object, such as a horse running across a field or a car driving down a road.

Cognitive psychologists tell us that by the age of six to nine months, human infants can distinguish objects from background. Apparently, human infants learn about the visual world by training their neural networks on what they see while being toted around by their parents, well before they can walk or talk.

Similarly, the neurons in a sparse prediction machine learn about the visual world simply by watching thousands of video sequences without using any of the associated human-provided labels – a major difference from other machine-learning approaches. A sparse prediction machine is simply exposed to a wide variety of video clips much the way a child accumulates visual experience.
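
As a rough illustration of what learning without labels can look like in code, the sketch below (a generic dictionary-learning toy, not the Lab’s algorithm) updates a small bank of visual features using nothing but the raw frames themselves – the only “teacher” is how well the features reconstruct what was seen. The patch size, feature count and update rule are assumptions made for brevity.

```python
# Label-free learning from video: features improve purely from
# reconstruction error on raw frames. Shapes and the update rule are
# illustrative assumptions, not the Lab's algorithm.
import numpy as np

rng = np.random.default_rng(1)
n_features, patch_dim = 64, 8 * 8
features = rng.normal(size=(n_features, patch_dim))
features /= np.linalg.norm(features, axis=1, keepdims=True)

def learn_from_clip(features, clip, lr=0.01):
    """Update visual features from raw frames alone -- no labels used."""
    for frame in clip:                            # clip: (n_frames, height, width)
        patch = frame[:8, :8].reshape(-1)         # one 8x8 patch, for brevity
        code = features @ patch                   # how strongly each feature responds
        recon = features.T @ code                 # reconstruct the patch from the code
        error = patch - recon                     # the only learning signal
        features += lr * np.outer(code, error)    # nudge features toward what was seen
        features /= np.linalg.norm(features, axis=1, keepdims=True)
    return features

# Accumulate visual experience clip by clip, much as a child does.
for clip in (rng.random((8, 32, 32)) for _ in range(100)):  # stand-in videos
    features = learn_from_clip(features, clip)
```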


In this sequence of video frames, the first three are the machine’s data representations of scanned video frames. In the fourth frame, the machine predicted, or “imagined,” what the next frame would be, based on that data. The work was performed at Los Alamos National Laboratory on Trinity, one of the largest supercomputers in the United States. (Courtesy of LANL)

When the sparse prediction machine on Trinity was exposed to thousands of eight-frame video sequences, each neuron eventually learned to represent a particular visual pattern. Whereas a human infant can have only a single visual experience at any given moment, the scale of Trinity meant the machine could train on 400 video clips simultaneously, greatly accelerating the learning process. The sparse prediction machine then combines the representations learned by the individual neurons while developing the ability to predict the eighth frame from the preceding seven – for example, predicting how a car moves against a static background.
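
The training setup just described – seven frames in, the eighth frame out, 400 clips at a time – can be mocked up in a few lines. The least-squares model below is only a stand-in for the actual sparse prediction machine, and the frame sizes and random “videos” are invented for illustration.

```python
# Toy version of the task: predict frame 8 of each clip from frames 1-7.
# The linear model and random data are stand-ins, not the LANL system.
import numpy as np

rng = np.random.default_rng(2)
frame_dim = 16 * 16                               # tiny frames for illustration
batch_size = 400                                  # clips processed simultaneously, as on Trinity

clips = rng.random((batch_size, 8, frame_dim))    # stand-in video data
inputs = clips[:, :7, :].reshape(batch_size, -1)  # frames 1-7, flattened
targets = clips[:, 7, :]                          # frame 8 is the prediction target

# Fit a least-squares predictor of frame 8 from frames 1-7.
W, *_ = np.linalg.lstsq(inputs, targets, rcond=None)
predicted_eighth = inputs @ W                     # the model's "imagined" next frames
```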

The Los Alamos sparse prediction machine consists of two neural networks executed in parallel: one called the Oracle, which can see the future, and the other called the Muggle, which learns to imitate the Oracle’s representations of future video frames it can’t see directly. With Trinity’s power, the Los Alamos team more accurately simulates the way a brain handles information, activating only the few neurons needed at any given moment to explain the information at hand. That’s the “sparse” part, and it makes the brain very efficient and very powerful at making inferences about the world – and, hopefully, a computer more efficient and powerful, too.
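
In rough pseudocode terms – with made-up shapes and update rules, since the Lab’s actual model is far richer – the division of labor might look something like the sketch below. The Oracle sparse-codes the full eight-frame clip with an iterative soft-thresholding loop, so only a handful of model neurons stay active, while the Muggle, which never sees the eighth frame, is trained to reproduce the Oracle’s code from the first seven.

```python
# Simplified sketch of the Oracle/Muggle idea. Dimensions, the sparse-coding
# loop and the decoding step are illustrative assumptions, not LANL's model.
import numpy as np

rng = np.random.default_rng(3)
frame_dim, n_code = 64, 256
D = rng.normal(size=(n_code, 8 * frame_dim))      # Oracle's dictionary over full 8-frame clips
D /= np.linalg.norm(D, axis=1, keepdims=True)

def oracle_sparse_code(clip8, n_iter=50, lam=0.1, step=0.01):
    """Sparse code via iterative soft thresholding: few neurons stay active."""
    x = clip8.reshape(-1)                         # clip8: (8, frame_dim); the Oracle sees it all
    a = np.zeros(n_code)
    for _ in range(n_iter):
        grad = D @ (D.T @ a - x)                  # reconstruction-error gradient
        a -= step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # enforce sparsity
    return a

M = np.zeros((7 * frame_dim, n_code))             # Muggle: maps 7 visible frames to a code

def train_muggle(clip8, lr=1e-3):
    """Train the Muggle to imitate the Oracle's representation."""
    global M
    target = oracle_sparse_code(clip8)            # Oracle's code includes the future frame
    visible = clip8[:7].reshape(-1)               # the Muggle never sees frame 8
    M += lr * np.outer(visible, target - visible @ M)

def imagine_eighth_frame(clip7):
    """Hedged guess at prediction time: decode the Muggle's code through D."""
    code = clip7.reshape(-1) @ M
    return (D.T @ code).reshape(8, frame_dim)[7]  # the frame the Muggle never saw
```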

After being trained in this way, the sparse prediction machine was able to create a new video frame that would naturally follow from the previous, real-world video frames. It saw seven video frames and predicted the eighth. In one example, it was able to continue the motion of a car against a static background. The computer could imagine the future.

This ability to predict video frames based on machine learning is a meaningful achievement in neuromimetic computing, but the field still has a long way to go. As one of the principal scientific grand challenges of this century, understanding the computational capability of the human brain will transform such wide-ranging research and practical applications as weather forecasting and fusion energy research, cancer diagnosis and the advanced numerical simulations that support the stockpile stewardship program in lieu of real-world testing.

To support all those efforts, Los Alamos will continue experimenting with sparse prediction machines in neuromimetic computing, learning more about both the brain and computing, along with as-yet undiscovered applications on the wide, largely unexplored frontiers of quantum computing. We can’t predict where that exploration will lead, but like that imagined eighth video frame of the car, it’s bound to be the logical next step.

[Garrett Kenyon is a computer scientist specializing in neurally inspired computing in the Information Sciences group at Los Alamos National Laboratory, where he studies the brain and models of neural networks on the Lab’s high-performance computers. Other members of the sparse prediction machine project were Boram Yoon of the Applied Computer Science group and Peter Schultz of the New Mexico Consortium.]
