Technology enhanced with artificial intelligence is all around us. You might have a robot vacuum cleaner ready to leap into action to clean up your kitchen floor. Maybe you've already asked Siri or Google—two assistants built on artificial-intelligence technology—for help today. The continual enhancement of AI and its increased presence in our world speak to achievements in science and engineering that have tremendous potential to improve our lives.
Or destroy us.
At least, that’s the central theme in the new Avengers: Age of Ultron movie with headliner Ultron serving as exemplar for AI gone bad. It’s a timely theme, given some high-profile AI concerns lately. But is it something we should be worried about?
Facebook is watching you, collecting data on your every interaction and feeding it to their data scientists, who are hungry for correlations. But you know that, and you accept it as the price to live in the modern world (you probably even know that Facebook is manipulating you).
And Facebook’s data-science team is particularly interested in your romantic life. They’ve been watching you hook up and break up and, according to a recent presentation by Facebook employee Carlos Diuk, they’ve noticed a few things about you.
But, keep this in mind: these findings are the result of private and proprietary number-crunching, circumventing the normal procedures that let scientists call their output “science.” More on that in a minute.
So without further ado, six things Facebook thinks it knows about your love life:
1. Matchmakers have more friends than the people they’re introducing—73 percent more. (Matchmakers are people who introduce two of their friends, who later become a couple.) And those friends are more disconnected. Matchmakers’ networks include lots of people who aren’t friends with each other. The way I choose to interpret this: matchmakers have to diversify their interactions, so as not to overwhelm any single one with their aggressive extroversion and statements about who would be perrrrfect for whom.
Try to picture a time machine.
You probably envisioned a tricked-out DeLorean or, perhaps, a blue, spinning police box, right? But today, time travel isn’t so much about fast cars or alien technology as it is about tweaking our perception of reality. In fact, if you’re reading this on a tablet, you’re holding a time machine of sorts in your hands right now.
Of course, your iPad won’t actually transport you back in time, but it can serve as a window into another world. Imagine visiting the Parthenon, for example, and when you point your iPad toward the crumbled structure, you see the majestic building, but as it was thousands of years ago. You can even walk toward and around the structure, and so long as you’re peering through the tablet, it’s as if you were walking through the past.
This immersive experience, called augmented reality, has captivated archaeologist Stuart Eve, who is trying to change the way we learn history through the five senses. He’s working on augmented-reality technology that not only visually recreates ancient ruins, but also gives you a sense of what they smelled and sounded like.
Think you’re good at classic arcade games such as Space Invaders, Breakout and Pong? Think again.
In a groundbreaking paper published yesterday in Nature, a team of researchers led by DeepMind co-founder Demis Hassabis reported developing a deep neural network that was able to learn to play such games at an expert level.
What makes this achievement all the more impressive is that the program was not given any background knowledge about the games. It just had access to the score and the pixels on the screen.
It didn’t know about bats, balls, lasers or any of the other things we humans need to know about in order to play the games.
But by playing lots and lots of games many times over, the computer learned first how to play, and then how to play well.
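To get a feel for how a program can learn a game from nothing but a score, here is a deliberately tiny sketch of the same family of techniques. The Nature result used a deep neural network reading raw pixels; this toy uses plain tabular Q-learning on a made-up five-position “corridor” game, and every name and number in it is invented for illustration.

```python
import random

# Toy illustration of the learning setup described above: the agent sees
# only a state and a score (reward), never the rules. This is plain
# tabular Q-learning on a made-up "corridor" game, not the deep network
# from the Nature paper.

N_STATES = 5          # positions 0..4; reaching position 4 ends the game
ACTIONS = [-1, +1]    # step left or right

def step(state, action):
    """The 'emulator': return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=300, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Learn action values purely from played episodes."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Explore occasionally, otherwise take the best-known action.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            # Q-learning update: nudge toward reward + discounted future value.
            best_next = max(q[(nxt, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

q = train()
# The greedy policy, learned from the score alone, should step right everywhere.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
```

Swap the toy `step` function for an Atari emulator and the lookup table for a convolutional network, and you have, in spirit, the system the paper describes.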
Eye tracking devices sound a lot more like expensive pieces of scientific research equipment than joysticks – yet if the recent announcements about the latest Assassin’s Creed game are anything to go by, eye tracking will become a commonplace feature of how we interact with computers, and particularly games.
Eye trackers provide computers with a user’s gaze position in real time by tracking the position of their pupils. The trackers can either be worn directly on the user’s face, like glasses, or placed in front of them, such as beneath a computer monitor for example.
Eye trackers are usually composed of cameras and infrared lights to illuminate the eyes. Although infrared light is invisible to the human eye, the cameras can use it to generate a grayscale image in which the pupil is easily recognizable. From the position of the pupil in the image, the eye tracker’s software can work out where the user’s gaze is directed – whether that’s on a computer screen or looking out into the world.
But what’s the use? Well, our eyes can reveal a lot about our intentions, thoughts and actions, as they are good indicators of what we’re interested in. In our interactions with others we often subconsciously pick up on cues that the eyes give away. So it’s possible to gather this unconscious information and use it to better understand what a user is thinking, their interests and habits, or to enhance the interaction between them and the computer they’re using.
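To make the pupil-finding step concrete, here is a minimal sketch in Python. It fakes a tiny grayscale infrared frame as a 2-D list, thresholds it, and takes the centroid of the dark pixels as the pupil position. Real trackers use calibrated cameras, corneal reflections and far more robust fitting; the sizes, threshold and function names here are purely illustrative.

```python
# Minimal sketch of pupil detection in an infrared image: the pupil
# shows up as a compact dark blob against a brighter background, so a
# crude threshold-and-centroid pass can locate it. Purely illustrative.

def pupil_centroid(frame, threshold=50):
    """Return the (row, col) centroid of pixels darker than `threshold`."""
    dark = [(r, c)
            for r, row in enumerate(frame)
            for c, value in enumerate(row)
            if value < threshold]
    if not dark:
        return None  # no pupil-like region in this frame
    mean_row = sum(r for r, _ in dark) / len(dark)
    mean_col = sum(c for _, c in dark) / len(dark)
    return mean_row, mean_col

# Synthetic 8x8 frame: bright background (200) with a dark 2x2 "pupil".
frame = [[200] * 8 for _ in range(8)]
for r in (3, 4):
    for c in (5, 6):
        frame[r][c] = 10

print(pupil_centroid(frame))  # → (3.5, 5.5)
```

A real tracker then maps that pupil position, via a per-user calibration step, to a gaze point on the screen.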
It’s difficult to deny that humans began as Homo sapiens, an evolutionary offshoot of the primates. Nevertheless, for most of what is properly called “human history” (that is, the history starting with the invention of writing), most of Homo sapiens have not qualified as “human”—and not simply because they were too young or too disabled.
In sociology, we routinely invoke a trinity of shame—race, class, and gender—to characterize the gap that remains between the normal existence of Homo sapiens and the normative ideal of full humanity. Much of the history of social science can be understood as either directly or indirectly aimed at extending the attribution of humanity to as much of Homo sapiens as possible. It’s for this reason that the welfare state is reasonably touted as social science’s great contribution to politics in the modern era. But perhaps membership in Homo sapiens is neither sufficient nor even necessary to qualify a being as “human.” What happens then?
Nuclear power has long been a contentious topic. It generates huge amounts of electricity with zero carbon emissions, and thus is held up as a solution to global energy woes. But it also entails several risks, including weapons development, meltdown, and the hazards of disposing of its waste products.
But those risks and benefits all pertain to a very specific kind of nuclear energy: nuclear fission of uranium or plutonium isotopes. There’s another kind of nuclear energy that’s been waiting in the wings for decades – and it may just demand a recalibration of our thoughts on nuclear power.
Nuclear fission using thorium is easily within our reach, and, compared with conventional nuclear energy, the risks are considerably lower.
Updated 9/16/14 10:15am: Clarified calculations and added footnote
We humans like to think ourselves pretty advanced – and with no other technology-bearing beings to compare ourselves to, our back-patting doesn’t have to take context into account. After all, we harnessed fire, invented stone tools and the wheel, developed agriculture and writing, built cities, and learned to use metals.
Then, a mere few moments ago from the perspective of cosmic time, we advanced even more rapidly, developing telescopes and steam power; discovering gravity and electromagnetism and the forces that hold the nuclei of atoms together.
Meanwhile, the age of electricity was transforming human civilization. You could light up a building at night, speak with somebody in another city, or ride in a vehicle that needed no horse to pull it, and humans were very proud of themselves for achieving all of this. In fact, by the year 1899, purportedly, these developments prompted U.S. patent office commissioner Charles H. Duell to remark, “Everything that can be invented has been invented.”
We really have come a long way from the cave, but how far can we still go? Is there a limit to our technological progress? Put another way, if Duell was dead wrong in the year 1899, might his words be prophetic for the year 2099, or 2199? And what does that mean for humanity’s distant future?
In 1971—16 years after Einstein’s death—the definitive experiment to test Einstein’s relativity was finally carried out. It required not a rocket launch but eight round-the-world plane tickets that cost the United States Naval Observatory, funded by taxpayers, a total of $7,600.
The brainchild of Joseph Hafele (Washington University in St. Louis) and Richard Keating (United States Naval Observatory) was a set of “Mr. Clocks”: passengers on four round-the-world flights. (Since the Mr. Clocks were quite large, they were required to purchase two tickets per flight. The accompanying humans, however, took up only one seat each as they sat next to their attention-getting companions.)
The Mr. Clocks had all been synchronized with the atomic clock standards at the Naval Observatory before flight. They were, in effect, the “twins” (or quadruplets, in this case) from Einstein’s famous twin paradox, wherein one twin leaves Earth and travels nearly at the speed of light. Upon returning home, the traveling twin finds that she is much younger than her earthbound counterpart.
In fact, a twin traveling at 80 percent the speed of light on a round-trip journey to the Sun’s nearest stellar neighbor, Proxima Centauri, would arrive home fully four years younger than her sister. Although it was impossible to make the Mr. Clocks travel at any decent percentage of the speed of light for such a long time, physicists could get them going at jet speeds—about 300 meters (0.2 mile) per second, or a millionth the speed of light—for a couple of days. In addition, they could get the Mr. Clocks out of Earth’s gravitational pit by about ten kilometers (six miles) relative to sea level. And with the accuracy that the Mr. Clocks were known to be capable of, the time differences should be easy to measure.
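Using the figures quoted above, a back-of-envelope calculation shows why those time differences should indeed be easy to measure. This sketch ignores Earth's rotation (the reason the real eastward and westward flights gave different results) and uses the standard low-speed and weak-field approximations; it is a rough estimate, not the experiment's actual analysis.

```python
# Back-of-envelope version of the effects the Mr. Clocks measured, using
# the figures quoted above: jet speed ~300 m/s, ~10 km altitude, roughly
# two days aloft. Earth's rotation is deliberately ignored here.

C = 2.998e8   # speed of light, m/s
G = 9.81      # surface gravity, m/s^2

def kinematic_shift(v, t):
    """Seconds lost by a clock moving at speed v for time t (negative).
    Low-speed expansion of special-relativistic time dilation: -v^2/(2c^2) * t."""
    return -0.5 * (v / C) ** 2 * t

def gravitational_shift(h, t):
    """Seconds gained by a clock at altitude h for time t (positive).
    Weak-field approximation: g*h/c^2 * t."""
    return G * h / C ** 2 * t

t = 2 * 24 * 3600                        # two days, in seconds
kin = kinematic_shift(300.0, t)          # roughly -86 nanoseconds
grav = gravitational_shift(10_000.0, t)  # roughly +189 nanoseconds
print(f"net predicted offset: {(kin + grav) * 1e9:+.0f} nanoseconds")
```

Both effects are around a part in a trillion or less, yet a net shift of about a hundred nanoseconds over two days was comfortably within the resolution of the era's cesium clocks.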
This article was originally published on The Conversation.
One of the problems with using passwords to prove identity is that passwords that are easy to remember are also easy for an attacker to guess, and vice versa.
Nevertheless, passwords are cheap to implement and well understood, so despite the mounting evidence that they are often not very secure, until something better comes along they are likely to remain the main mechanism for proving identity.
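The memorability/guessability trade-off can be put in rough numbers: the size of the search space an attacker must cover, expressed in bits. In this sketch the 94-symbol alphabet is printable ASCII, and the 20,000-word dictionary size is an assumed round figure, not a measured one.

```python
import math

# Rough entropy comparison for the trade-off described above. A password
# chosen uniformly from an alphabet of size A with length L gives
# L * log2(A) bits; a single common word gives only log2(dictionary size).

def random_password_bits(length, alphabet_size=94):
    """Entropy, in bits, of a uniformly random password of the given length."""
    return length * math.log2(alphabet_size)

def dictionary_word_bits(dictionary_size=20_000):
    """Entropy, in bits, of picking one common word at random."""
    return math.log2(dictionary_size)

print(f"8-char random password: {random_password_bits(8):.0f} bits")  # ~52
print(f"one common word:        {dictionary_word_bits():.0f} bits")   # ~14
```

And real users don't choose uniformly at random, so typical memorable passwords are weaker still than the dictionary-word figure suggests.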
But maybe something better has come along. In research published in PeerJ, Rob Jenkins from the University of York and colleagues propose a new system based on the psychology of face recognition called Facelock. But how does it stack up against existing authentication systems?