Nuclear fusion has long been considered the “holy grail” of energy research. It represents a nearly limitless source of energy that is clean, safe and self-sustaining. Ever since its existence was first theorized in the 1920s by English physicist Arthur Eddington, nuclear fusion has captured the imaginations of scientists and science-fiction writers alike.
Fusion, at its core, is a simple concept. Take two hydrogen isotopes and smash them together with overwhelming force. The two atoms overcome their natural repulsion and fuse, yielding a reaction that produces an enormous amount of energy.
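Where does that enormous energy come from? From the tiny mass difference between the fuel and the products (E = mc²). As a back-of-the-envelope illustration for the deuterium-tritium reaction most reactor designs target, using standard published particle masses:

```python
# Deuterium-tritium fusion: D + T -> He-4 + n
# The energy released equals the mass defect times c^2 (E = mc^2).
U_TO_MEV = 931.494        # energy equivalent of 1 atomic mass unit, in MeV

masses = {                # particle masses in atomic mass units (u)
    "D": 2.014102,        # deuterium
    "T": 3.016049,        # tritium
    "He4": 4.002602,      # helium-4
    "n": 1.008665,        # neutron
}

mass_defect = (masses["D"] + masses["T"]) - (masses["He4"] + masses["n"])
energy_mev = mass_defect * U_TO_MEV
print(f"Energy per D-T fusion: {energy_mev:.1f} MeV")  # ~17.6 MeV
```

Roughly 17.6 MeV per reaction: millions of times more energy per unit of fuel mass than any chemical reaction, which is why fusion is worth the trouble.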
But a big payoff requires an equally large investment, and for decades researchers have wrestled with the problem of heating the hydrogen fuel and holding onto it as it reaches temperatures in excess of 150 million degrees Fahrenheit. To date, the most successful fusion experiments have heated plasma to over 900 million degrees Fahrenheit and confined a plasma for three and a half minutes, although not at the same time, and in different reactors.
The most recent advancements have come from Germany, where the Wendelstein 7-X reactor recently came online with a successful test run reaching almost 180 million degrees, and China, where the EAST reactor sustained a fusion plasma for 102 seconds, although at lower temperatures.
Still, even with these steps forward, researchers have been saying for decades that a working fusion reactor is still 30 years away. Even as scientists take steps toward their holy grail, it becomes ever clearer that we don’t yet know what we don’t know.
If you use a car to get around, every time you get behind the wheel you’re confronted with a choice: how will you navigate to your destination? Whether it’s a trip you take every day, such as from home to work, or to someplace you haven’t been before, you need to decide on a route.
Transportation research has traditionally assumed that drivers are very rational and choose the optimal route that minimizes travel time. Traffic prediction models are based on this seemingly reasonable assumption. Planners use these models in their efforts to keep traffic flowing freely – when they evaluate a change to a road network, for instance, or the impact of a new carpool lane. In order for traffic models to be reliable, they must do a good job reproducing user behavior. But there’s little empirical support for the assumption at their core – that drivers will pick the optimal route.
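The “optimal route” assumption can be made concrete: traffic models typically treat the road network as a weighted graph and assume each driver takes the travel-time-minimizing path, which can be computed with Dijkstra’s algorithm. A minimal sketch (the road network and travel times here are hypothetical):

```python
import heapq

def shortest_travel_time(graph, start, goal):
    """Dijkstra's algorithm: find the route minimizing total travel time.

    graph: dict mapping node -> list of (neighbor, minutes) edges.
    Returns (total_minutes, route) or (None, None) if goal is unreachable.
    """
    queue = [(0, start, [start])]  # (elapsed minutes, node, route so far)
    seen = set()
    while queue:
        minutes, node, route = heapq.heappop(queue)
        if node == goal:
            return minutes, route
        if node in seen:
            continue
        seen.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (minutes + cost, neighbor, route + [neighbor]))
    return None, None

# Hypothetical commute network; edge weights are travel times in minutes.
roads = {
    "home":    [("highway", 10), ("main_st", 7)],
    "highway": [("work", 12)],
    "main_st": [("bridge", 9)],
    "bridge":  [("work", 4)],
}
print(shortest_travel_time(roads, "home", "work"))
# (20, ['home', 'main_st', 'bridge', 'work'])
```

The model predicts every driver takes the 20-minute route through Main Street; the empirical question is whether real drivers actually do.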
Go is a two-player board game that originated in China more than 2,500 years ago. The rules are simple, but Go is widely considered the most difficult strategy game to master. For artificial intelligence researchers, building an algorithm that could take down a Go world champion represents the holy grail of achievements.
Well, consider the holy grail found. A team led by Google DeepMind researchers David Silver and Demis Hassabis designed an algorithm, called AlphaGo, which in October 2015 handily defeated three-time European Go champion Fan Hui five games to zero. And as a side note, AlphaGo won 494 out of 495 games played against existing Go computer programs prior to its match with Hui — AlphaGo even spotted inferior programs four free moves.
“It’s fair to say that this is five to 10 years ahead of what people were expecting, even experts in the field,” Hassabis said in a news conference Tuesday.
Last fall, DARPA announced a major success in its Restoring Active Memory (RAM) program. Researchers implanted targeted electrical arrays in the brains of a few dozen volunteers — specifically in brain areas involved in memory.
The researchers found a way to read out neural “key codes” associated with specific memories, and then fed those codes back into the volunteers’ brains as they tried to recall lists of items or directions to places. While the results are still preliminary, DARPA claims that the RAM technique has already achieved “promising results” in improving memory retrieval.
Intriguing as this implant is, it’s only the latest in an ongoing series of neurological techniques and gizmos designed to boost and sharpen memory. The effects and implications of these systems raise questions worth considering.
The world’s most powerful gene-editing tool, CRISPR-Cas9, gives humans the ability to swap out sections of the genome with less money and time than ever before. That’s a lot of power, and with great power comes great responsibility.
But right now, most of the world doesn’t have regulations about what scientists — and someday, hobbyists — can and can’t do to the double helix. In China, scientists have used CRISPR-Cas9 to modify human embryos. And that has left the rest of the world a little nervous.
Watch a fly land on the kitchen table, and the first thing it does is clean itself, very, very carefully. Although we can’t see it, the animal’s surface is covered with dust, pollen and even insidious mites that could burrow into its body if not removed.
Staying clean can be a matter of life and death. All animals, including us humans, take cleaning just as seriously. Each year, we spend the equivalent of an entire day bathing, and another two weeks cleaning our houses. Cleaning may be as fundamental to life as eating, breathing and mating.
If you’ve ever tried to hold a conversation with a chatbot like CleverBot, you know how quickly the conversation turns to nonsense, no matter how hard you try to keep it together.
But now, a research team led by Bruno Golosio, assistant professor of applied physics at Università di Sassari in Italy, has taken a significant step toward improving human-to-computer conversation. Golosio and colleagues built an artificial neural network, called ANNABELL, that aims to emulate the large-scale structure of human working memory in the brain — and its ability to hold a conversation is eerily human-like.
From Picasso’s “The Young Ladies of Avignon” to Munch’s “The Scream,” what is it about certain paintings that arrests viewers’ attention and cements them in the canon of art history as iconic works?
In many cases, it’s because the artist incorporated a technique, form or style that had never been used before. They exhibited a creative and innovative flair that would go on to be mimicked by artists for years to come.
Throughout human history, experts have often highlighted these artistic innovations, using them to judge a painting’s relative worth. But can a painting’s level of creativity be quantified by artificial intelligence (AI)?
At Rutgers’ Art and Artificial Intelligence Laboratory, my colleagues and I proposed a novel algorithm that assessed the creativity of any given painting, while taking into account the painting’s context within the scope of art history.
In the end, we found that, when presented with a large collection of works, the algorithm can successfully highlight paintings that art historians consider masterpieces of the medium.
The results show that humans are no longer the only judges of creativity. Computers can perform the same task – and may even be more objective.
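To give a flavor of how such a score might work, here is a deliberately simplified sketch — not the Rutgers team’s actual network-based algorithm, and with entirely hypothetical similarity values. The intuition: a painting is creative when it is unlike the works that came before it (original) yet echoed by the works that came after it (influential).

```python
def creativity_score(index, sim):
    """Toy creativity score for the painting at position `index` in a
    chronological list. sim[i][j] in [0, 1] is the visual similarity
    between paintings i and j. High score = unlike earlier works
    (original) but similar to later works (influential)."""
    earlier = [sim[index][j] for j in range(index)]
    later = [sim[index][j] for j in range(index + 1, len(sim))]
    originality = 1 - sum(earlier) / len(earlier) if earlier else 1.0
    influence = sum(later) / len(later) if later else 0.0
    return originality * influence

# Three hypothetical paintings in chronological order.
sim = [
    [1.0, 0.2, 0.3],  # painting 0
    [0.2, 1.0, 0.9],  # painting 1: unlike 0, strongly echoed by 2
    [0.3, 0.9, 1.0],  # painting 2: derivative of 1
]
scores = [creativity_score(i, sim) for i in range(3)]
print(scores)  # painting 1 scores highest
```

The real algorithm frames this as inference over a network of influence among thousands of paintings, but the originality-plus-influence trade-off is the core idea.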
Nine years ago, Joshua Robinson was approached by his then-adviser with news of a discovery that would end up transforming his career, and much of materials science. “I saw this crazy talk about 2-D graphite,” he recalls his adviser saying.
The adviser was referring, of course, to graphene, the first truly two-dimensional material: only a single atom thick. Back in 2006, the physics community was just beginning to wrap its mind around how a 2-D material could even exist.
Fast forward to 2015. The realization that materials can be thinned down to the absolute limit of a single atom is spreading, both throughout the world and across the periodic table. Researchers are learning that 2-D isn’t just for the carbon atoms of graphene. Different elemental combinations can lead to fascinating new science and applications.
Robinson is now associate director for Pennsylvania State University’s Center for Two-Dimensional and Layered Materials, a center with 20 faculty and over 50 students dedicated to uncovering the fundamental properties of this new zoo of 2-D materials. It is one of many such centers around the world. And as scientists continue to create new 2-D materials, there’s a palpable frenzy to characterize their surprising electronic, optical and mechanical properties.
The excitement stems from the fact that materials shaved down to only a few atoms act very differently from their so-called “bulk” or 3-D version. Quantum effects begin to take hold as the electrons in the material are squeezed into that impossibly thin layer.
And, being flexible, 2-D materials could bring those unique electrical properties to all sorts of new applications – from bendable touch screens to wearable sensors.
We make a huge number of decisions every day. When it comes to eating, for example, we make 200 more decisions than we’re consciously aware of every day. How is this possible? Because, as Daniel Kahneman has explained, while we’d like to think our decisions are rational, in fact many are driven by gut feel and intuition. The ability to reach a decision based on what we know and what we expect is an inherently human characteristic.
The problem we face now is that we have too many decisions to make every day, leading to decision fatigue – we find the act of making our own decisions exhausting, even more so than simply deliberating between different options or being told by others what to do.
Why not allow technology to ease the burden of decision-making? The latest smart technologies are designed to monitor and learn from our behavior, physical performance, work productivity levels and energy use. This is what has been called Era Three of Automation – when machine intelligence becomes faster and more reliable than humans at making decisions.