Think you’re good at classic arcade games such as Space Invaders, Breakout and Pong? Think again.
In a groundbreaking paper published yesterday in Nature, a team of researchers led by DeepMind co-founder Demis Hassabis reported developing a deep neural network that learned to play such games at an expert level.
What makes this achievement all the more impressive is that the program was not given any background knowledge about the games. It just had access to the score and the pixels on the screen.
It didn’t know about bats, balls, lasers or any of the other things we humans need to know about in order to play the games.
But by playing lots and lots of games many times over, the computer learned first how to play, and then how to play well.
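That trial-and-error process is reinforcement learning: the program tries actions, observes the score, and gradually favors moves that pay off. As a bare-bones sketch of the idea (tabular Q-learning, not DeepMind's actual system, which learned directly from raw pixels with a deep network), the core update looks like this:

```python
import random
from collections import defaultdict

# Q[state][action] estimates the long-run score from taking `action` in `state`.
Q = defaultdict(lambda: defaultdict(float))
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # learning rate, discount, exploration rate

def choose_action(state, actions):
    """Mostly exploit the best-known move, occasionally explore a random one."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[state][a])

def update(state, action, reward, next_state, actions):
    """Nudge the estimate toward the reward plus the best predicted future score."""
    best_next = max(Q[next_state][a] for a in actions)
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

# "Playing lots and lots of games" amounts to looping over
# (state, action, reward, next_state) transitions and calling update().
```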
Eye tracking devices sound a lot more like expensive pieces of scientific research equipment than joysticks – yet if recent announcements about the latest Assassin’s Creed game are anything to go by, eye tracking will become a commonplace feature of how we interact with computers, and particularly games.
Eye trackers provide computers with a user’s gaze position in real time by tracking the position of their pupils. The trackers can either be worn directly on the user’s face, like glasses, or placed in front of them, beneath a computer monitor for example.
Eye trackers are usually composed of cameras and infrared lights to illuminate the eyes. Although infrared light is invisible to the human eye, the cameras can use it to generate a grayscale image in which the pupil is easily recognizable. From the position of the pupil in the image, the eye tracker’s software can work out where the user’s gaze is directed – whether that’s on a computer screen or looking out into the world.
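That last step typically relies on a calibration: the user fixates a few known points on screen while the tracker records the pupil center. Here is a minimal sketch of such a mapping, a simplification I'm assuming for illustration (real trackers use richer models and corneal reflections), fitting an affine map by least squares:

```python
import numpy as np

def fit_gaze_mapping(pupil_xy, screen_xy):
    """Fit an affine map from pupil image coordinates to screen coordinates.

    pupil_xy:  (n, 2) pupil centers recorded during calibration.
    screen_xy: (n, 2) known on-screen points the user was asked to look at.
    Returns a (3, 2) matrix M such that [px, py, 1] @ M ~= [sx, sy].
    """
    ones = np.ones((len(pupil_xy), 1))
    A = np.hstack([pupil_xy, ones])  # (n, 3) design matrix
    M, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return M

def gaze_point(M, pupil):
    """Map one pupil center to an estimated on-screen gaze position."""
    px, py = pupil
    return np.array([px, py, 1.0]) @ M

# Calibration: the user fixates, say, the four corners and the center of the
# screen while the tracker records the pupil center in its infrared image.
```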
But what’s the use? Well, our eyes can reveal a lot about a person’s intentions, thoughts and actions, as they are good indicators of what we’re interested in. In our interactions with others we often subconsciously pick up on cues that the eyes give away. So it’s possible to gather this unconscious information and use it to better understand a user’s thinking, interests and habits, or to enhance the interaction between them and the computer they’re using.
It’s difficult to deny that humans began as Homo sapiens, an evolutionary offshoot of the primates. Nevertheless, for most of what is properly called “human history” (that is, the history starting with the invention of writing), most of Homo sapiens have not qualified as “human”—and not simply because they were too young or too disabled.
In sociology, we routinely invoke a trinity of shame—race, class, and gender—to characterize the gap that remains between the normal existence of Homo sapiens and the normative ideal of full humanity. Much of the history of social science can be understood as either directly or indirectly aimed at extending the attribution of humanity to as much of Homo sapiens as possible. It’s for this reason that the welfare state is reasonably touted as social science’s great contribution to politics in the modern era. But perhaps membership in Homo sapiens is neither sufficient nor even necessary to qualify a being as “human.” What happens then?
Nuclear power has long been a contentious topic. It generates huge amounts of electricity with zero carbon emissions, and thus is held up as a solution to global energy woes. But it also entails several risks, including weapons development, meltdown, and the hazards of disposing of its waste products.
But those risks and benefits all pertain to a very specific kind of nuclear energy: nuclear fission of uranium or plutonium isotopes. There’s another kind of nuclear energy that’s been waiting in the wings for decades – and it may just demand a recalibration of our thoughts on nuclear power.
Nuclear fission using thorium is easily within our reach, and, compared with conventional nuclear energy, the risks are considerably lower.
We humans like to think ourselves pretty advanced – and with no other technology-bearing beings to compare ourselves to, our back-patting doesn’t have to take context into account. After all, we harnessed fire, invented stone tools and the wheel, developed agriculture and writing, built cities, and learned to use metals.
Then, a mere few moments ago from the perspective of cosmic time, we advanced even more rapidly, developing telescopes and steam power; discovering gravity and electromagnetism and the forces that hold the nuclei of atoms together.
Meanwhile, the age of electricity was transforming human civilization. You could light up a building at night, speak with somebody in another city, or ride in a vehicle that needed no horse to pull it, and humans were very proud of themselves for achieving all of this. In fact, by the year 1899, purportedly, these developments prompted U.S. patent office commissioner Charles H. Duell to remark, “Everything that can be invented has been invented.”
We really have come a long way from the cave, but how far can we still go? Is there a limit to our technological progress? Put another way, if Duell was dead wrong in the year 1899, might his words be prophetic for the year 2099, or 2199? And what does that mean for humanity’s distant future?
In 1971—16 years after Einstein’s death—the definitive experiment to test Einstein’s relativity was finally carried out. It required not a rocket launch but eight round-the-world plane tickets that cost the United States Naval Observatory, funded by taxpayers, a total of $7,600.
The brainchild of Joseph Hafele (Washington University in St. Louis) and Richard Keating (United States Naval Observatory) was “Mr. Clocks”: atomic clocks that flew as passengers on four round-the-world flights. (Since the Mr. Clocks were quite large, they were required to purchase two tickets per flight. The accompanying humans, however, took up only one seat each as they sat next to their attention-getting companions.)
The Mr. Clocks had all been synchronized with the atomic clock standards at the Naval Observatory before flight. They were, in effect, the “twins” (or quadruplets, in this case) from Einstein’s famous twin paradox, wherein one twin leaves Earth and travels nearly at the speed of light. Upon returning home, the traveling twin finds that she is much younger than her earthbound counterpart.
In fact, a twin traveling at 80 percent the speed of light on a round-trip journey to the Sun’s nearest stellar neighbor, Proxima Centauri, would arrive home fully four years younger than her sister. Although it was impossible to make the Mr. Clocks travel at any decent percentage of the speed of light for such a long time, physicists could get them going at jet speeds—about 300 meters (0.2 mile) per second, or a millionth the speed of light—for a couple of days. In addition, they could get the Mr. Clocks out of Earth’s gravitational pit by about ten kilometers (six miles) relative to sea level. And with the accuracy that the Mr. Clocks were known to be capable of, the time differences should be easy to measure.
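For the record, the four-year figure checks out (my arithmetic, using the standard distance of about 4.2 light-years to Proxima Centauri):

\[
\Delta t = \frac{2d}{v} = \frac{2 \times 4.2\,\text{ly}}{0.8c} \approx 10.6\,\text{years}, \qquad
\Delta \tau = \frac{\Delta t}{\gamma} = \Delta t \sqrt{1 - \frac{v^2}{c^2}} = 10.6 \times 0.6 \approx 6.4\,\text{years},
\]

so the traveling twin comes home about \(10.6 - 6.4 \approx 4.2\) years younger than her earthbound sister.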
This article was originally published on The Conversation.
One of the problems with using passwords to prove identity is that passwords that are easy to remember are also easy for an attacker to guess, and vice versa.
Nevertheless, passwords are cheap to implement and well understood, so despite the mounting evidence that they are often not very secure, until something better comes along they are likely to remain the main mechanism for proving identity.
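To put numbers on that trade-off (my illustration, not the article's), password strength is often expressed as entropy: the base-2 logarithm of the number of equally likely possibilities an attacker must search.

```python
import math

def entropy_bits(pool_size, length):
    """Entropy of a password drawn uniformly at random: log2(pool_size ** length)."""
    return length * math.log2(pool_size)

# A memorable all-lowercase 6-letter password vs. a random 10-character
# mix of upper/lower case letters and digits:
print(entropy_bits(26, 6))             # ~28 bits: easy to remember, easy to guess
print(entropy_bits(26 + 26 + 10, 10))  # ~60 bits: hard to guess, hard to remember
# Human-chosen passwords are far from uniform, so these are upper bounds.
```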
But maybe something better has come along. In research published in PeerJ, Rob Jenkins from the University of York and colleagues propose Facelock, a new system based on the psychology of face recognition. But how does it stack up against existing authentication systems?
This article was originally published on The Conversation.
After years of trying, it looks like a chatbot has finally passed the Turing Test. Eugene Goostman, a computer program posing as a 13-year-old Ukrainian boy, managed to convince 33% of judges that he was a human after having a series of brief conversations with them.
Most people misunderstand the Turing test, though. When Alan Turing wrote his famous paper on computing machinery and intelligence, the idea that machines could think in any way was totally alien to most people. Thinking – and hence intelligence – could only occur in human minds.
Turing’s point was that we do not need to think about what is inside a system to judge whether it behaves intelligently. In his paper he explores how broadly a clever interlocutor can test the mind on the other side of a conversation by talking about anything from maths to chess, politics to puns, Shakespeare’s poetry or childhood memories. In order to reliably imitate a human, the machine needs to be flexible and knowledgeable: for all practical purposes, intelligent.
The problem is that many people see the test as a measurement of a machine’s ability to think. They miss that Turing was treating the test as a thought experiment: actually doing it might not reveal very useful information, while philosophizing about it does tell us interesting things about intelligence and the way we see machines.
Excerpted from You Are Here by Hiawatha Bray
These days new smartphone apps all seem to want the same thing from us—our latitude and longitude. According to a 2012 report from the Pew Research Center’s Internet and American Life Project, three-quarters of America’s smartphone owners use their devices to retrieve information related to their location—driving directions, dining suggestions, weather updates, the nearest ATM. Such location data is a boon to advertisers, who use information on our movements to discern our habits and interests, and then target ads to us.
With location-aware smartphones, advertisers can transcend the merely local. They can begin beaming us hyperlocal advertising, tailored not just to the city, but to a particular city block. The idea is called “geofencing,” an unfortunate name choice that evokes the ankle bracelets sometimes worn by accused criminals under constant surveillance. The earliest such devices fenced in the user by transmitting a radio signal to a box connected to his home telephone line. If the suspect left the building, the radio signal would fade, and the box would place an automated phone call to the cops.
With the addition of GPS and cellular technology, later versions of ankle bracelet technology allowed a greater measure of mobility. A judge might grant a criminal suspect permission to go to her job, her church, and her local supermarket, with each approved location plugged into the court’s computer system. Data from the ankle-strapped GPS could confirm that the suspect was staying out of mischief or send a warning to police when she paid an unauthorized visit to the local dive bar.
Geofencing also has uses for the law abiding. A company called Life360 uses it to help parents keep tabs on their kids. The service homes in on location data from a child’s phone and sends a digital message whenever the kid arrives at home or at school—and whenever he leaves. Stroll off campus at ten in the morning, and the parents instantly know. As of late 2012, Life360 had signed up about 25 million users.
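At its core, a geofence like Life360’s is a simple geometry test repeated on every location update. Here is a minimal sketch (hypothetical function names, not Life360’s actual code) of a circular fence that flags enter and leave events:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def fence_event(prev_inside, lat, lon, fence_lat, fence_lon, radius_m):
    """Return ('enter' | 'leave' | None, now_inside) for one location update."""
    now_inside = haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m
    if now_inside and not prev_inside:
        return "enter", now_inside
    if prev_inside and not now_inside:
        return "leave", now_inside
    return None, now_inside

# e.g. a 150 m fence around a school: each GPS fix updates the state, and an
# "enter" or "leave" event would trigger the notification to parents.
```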
It’s long been known that blind people are able to compensate for their loss of sight by using other senses, relying on sound and touch to help them “see” the world. Neuroimaging studies have backed this up, showing that in blind people brain regions devoted to sight become rewired to process touch and sound as visual information.
Now, in the age of Google Glass, smartphones and self-driving cars, new technology offers ever more advanced ways of substituting one sensory experience for another. These exciting new devices can give blind people a form of sight in ways never before thought possible.
One approach is to use sound as a stand-in for vision. In a study published in Current Biology, neuroscientists at the Hebrew University of Jerusalem used a “sensory substitution device” dubbed “the vOICe” (Oh, I See!) to enable congenitally blind patients to see using sound. The device translates visual images into brief bursts of music, which the participants then learn to decode.
Over a series of training sessions they learn, for example, that a short, loud synthesizer sound signifies a vertical line, while a longer burst equates to a horizontal one. Ascending and descending tones reflect the corresponding directions, and pitch and volume relay details about elevation and brightness. Layering these sound qualities and playing several in sequence (each burst lasts about one second) thus gradually builds an image as simple as a basic shape or as complex as a landscape.
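To make that mapping concrete, here is a toy sketch (my own simplification, not the vOICe’s actual algorithm) that scans a grayscale image left to right, one column per one-second burst, with pitch tracking elevation and volume tracking brightness:

```python
import numpy as np

SAMPLE_RATE = 44_100       # audio samples per second
BURST_SECONDS = 1.0        # each column becomes roughly a one-second burst
F_LOW, F_HIGH = 200, 2000  # pitch range in Hz, mapped to row height

def column_to_tone(column):
    """Turn one image column (values 0..1, row 0 = top) into an audio burst.

    Each bright pixel contributes a sine tone: higher rows map to higher
    pitch (elevation), and pixel brightness sets the tone's volume.
    """
    t = np.linspace(0, BURST_SECONDS, int(SAMPLE_RATE * BURST_SECONDS), endpoint=False)
    tone = np.zeros_like(t)
    n_rows = len(column)
    for row, brightness in enumerate(column):
        if brightness > 0.1:  # skip near-black pixels
            height = 1 - row / n_rows  # 1 = top of image, 0 = bottom
            freq = F_LOW + height * (F_HIGH - F_LOW)
            tone += brightness * np.sin(2 * np.pi * freq * t)
    return tone / max(np.abs(tone).max(), 1e-9)  # normalize volume

def image_to_soundscape(image):
    """Scan left to right: play the columns in sequence to 'draw' the image."""
    return np.concatenate([column_to_tone(image[:, c]) for c in range(image.shape[1])])

# A vertical line (one bright column) gives a single short, rich burst;
# a horizontal line (one bright row) gives a long, steady pitch; and a
# diagonal produces an ascending or descending sweep.
```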
The concept has tried-and-true analogs in the animal world, says Dr. Amir Amedi, the lead researcher on the study. “The idea is to replace information from a missing sense by using input from a different sense. It’s just like bats and dolphins use sounds and echolocation to ‘see’ using their ears.”