A week hasn’t even passed since the inauguration, but television news is saturated with the flurry of activity from President Donald Trump’s administration. Trump, via Twitter, promised to launch an investigation into illegal voting and threatened to “send in the Feds” if Chicago police can’t fix the “carnage.” And that was just between Tuesday and Wednesday.
This heightened scrutiny compelled the Internet Archive, a repository of everything posted on the web, to launch its Trump Archive in early January. You may have digitally time-traveled with the Internet Archive’s Wayback Machine, or checked out its free books, movies and software. The Trump Archive, which draws content from the Internet Archive’s TV News Archive, includes more than 520 hours of televised Trump speeches, interviews, debates and other broadcasts dating back to 2009. It will continue to grow.
Automated financial trading machines can make complex decisions in a thousandth of a second. A human being making a choice – however simple – can never be faster than about one-fifth of a second. Our reaction times are not only slow but also remarkably variable, ranging over hundreds of milliseconds.
Is this because our brains are poorly designed, prone to random uncertainty – or “noise” in the electronic jargon? Measured in the laboratory, even the neurons of a fly are both fast and precise in their responses to external events, down to a few milliseconds. The sloppiness of our reaction times looks less like an accident than a built-in feature. The brain deliberately procrastinates, even if we ask it to do otherwise.
When we talk of the history of computers, most of us will refer to the evolution of the modern digital desktop PC, charting the decades-long developments by the likes of Apple and Microsoft. What many don’t consider, however, is that computers have been around much longer. In fact, they date back millennia, to a time when they were analogue creations.
Today, the world’s oldest known “computer” is the Antikythera mechanism, a severely corroded bronze artifact which was found at the beginning of the 20th century, in the remains of a shipwreck near the Mediterranean island of Antikythera. It wasn’t until the 1970s that the importance of the Antikythera mechanism was recognized, when radiography revealed that the device is in fact a complex mechanism of at least 30 gear wheels.
While working as a professor in the sensory-motor systems lab at the Swiss Federal Institute of Technology in Zurich (ETH), Robert Riener noticed a need for assistive devices that would better meet the challenge of helping people with daily life. He knew there were solutions, but that it would require motivating developers to rise to the challenge.
So, Riener created Cybathlon, the first cyborg Olympics where teams from all over the world will participate in races on Oct. 8 in Zurich that will test how well their devices perform routine tasks. Teams will compete in six different categories that will push their assistive devices to the limit on courses developed carefully over three years by physicians, developers and the people who use the technology. Eighty teams have signed up so far.
Riener wants the event to emphasize how important it is for man and machine to work together—so participants will be called pilots rather than athletes, reflecting the role of the assistive technology.
“The goal is to push the development in the direction of technology that is capable of performing day-to-day tasks. And that way, there will be an improvement in the future life of the person using the device,” says Riener.
Here’s a look at events that will be featured in the first cyborg Olympics.
Brain-Computer Interface Race
A woman sits at a computer while wearing a cap that has several electrodes attached to her head, wires cascading down her back. She’s playing a video game, but instead of using her hands, she’s using only her thoughts to drive a brain-computer interface system.
During the Cybathlon, participants with complete or severely impaired motor function will use their thoughts to control an avatar in a racing video game. The winner will be the first to complete the race, maneuvering an avatar over obstacles and accelerating to the finish line. An algorithm will help determine which team’s interface performed the best. Brain-computer interface devices are a key technology that will allow people to control future prostheses with their minds.
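Under the hood, most BCI racing systems classify short windows of EEG into discrete game commands. As a rough single-channel illustration (the teams’ actual pipelines aren’t described here, and the bands, sampling rate and decision rule below are invented for the sketch), a decoder might compare spectral band power:

```python
import numpy as np

def band_power(signal, fs, low, high):
    # Average spectral power of the signal in the [low, high] Hz band.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def decode_command(eeg_window, fs=256):
    # Hypothetical rule: compare 8-12 Hz alpha power against
    # 13-30 Hz beta power and map the result to a game command.
    alpha = band_power(eeg_window, fs, 8, 12)
    beta = band_power(eeg_window, fs, 13, 30)
    return "accelerate" if beta > alpha else "coast"
```

Real competition systems use many electrodes, spatial filtering and trained classifiers, but the pipeline (window the signal, extract features, map to a command) is the same.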
Functional Electrical Stimulation Bike Race
Functional Electrical Stimulation (FES) is a technique that sends electrical impulses to paralyzed individuals’ muscles to trigger movement. FES can help build muscle mass, increase blood circulation and improve cardiovascular health. At Cybathlon, paralyzed bike racers will rely on FES to complete about five laps around a racetrack, equaling about 2,200 feet — first to the finish wins. Electrodes will deliver electrical stimulation to their muscles, giving them the leg-power to pedal their bikes. The pilots can control how much current they send to their muscles, so balancing speed and stamina will be key to winning the race.
Generally, electrodes are placed on a person’s skin, but one team—the Center for Advanced Platform Technology from Cleveland—will surgically implant them closer to nerves, where they can reach more muscle fibers, reduce muscle fatigue and increase precision. Members of Team Cleveland spent two decades developing implants that allow a person with paraplegia to stand, perform leg lifts and take steps. For Cybathlon, they’ll adapt their system for bike riding.
Powered Arm Prosthesis Race
The powered arm prosthesis race will show just how important performing basic daily tasks is to Riener. Pilots with arm amputations will need to carry a tray of breakfast items, for example, and then prepare a meal by opening a jar of jam, slicing bread and buttering it — tasks that are easy to take for granted. Pinning clothing on a clothesline and assembling a puzzle whose pieces each require a different type of grip are also challenges in this event.
The M.A.S.S. Impact team from Simon Fraser University in Canada created a prosthetic hand with a unique design that uses sensors and algorithms to recognize grip patterns, letting users control the bionic hand in small, precise movements. The system also generates computer models to improve function over time. Last year, organizers held a Cybathlon rehearsal, and Riener was especially impressed by OPRA Osseointegratio, a Swedish team that designed a surgically implanted hand controlled by a person voluntarily contracting his muscles. The technology is currently in human trials, and the team’s pilot is the first recipient.
Powered Leg Prosthesis Race
Designing prostheses for lower limbs presents an entirely different set of challenges. Riener hopes to see prosthetic legs at the Cybathlon that can handle uneven terrain, which has been a challenge in the past. During the leg prosthesis race, pilots will compete on parallel tracks through obstacle courses laden with beams, stones, stairs and slopes. Right now, only the most advanced prostheses can handle these challenges — many are heavy and aren’t powerful enough.
Team Össur will bring four different prosthetic legs to the competition. Riener says this team in particular is making incredible advancements in the field. He’s particularly impressed with their commercially available motorized knee prosthesis, which he says is more robust and reliable than many past devices. The team is also entering a powered leg prosthesis, an upgrade to the powered knee that is still at the prototype stage; it uses motorized joints to help achieve a natural gait.
Powered Exoskeleton Race
Exoskeletons are worn around the legs to help those with paraplegia walk or even climb stairs. While they’ve been used by physiotherapists in hospitals to improve the health of patients with paralyzed legs, Riener says many designs are still bulky and difficult to use on a daily basis. There are about six companies worldwide with exoskeletons on the market, and more prototypes are being developed in research labs.
The Cybathlon exoskeleton event will include tasks that are particularly difficult for people using this technology to accomplish, such as stepping over stones and walking up a slope.
“With these challenges, we’re hoping to see more lifelike exoskeletons with more movability,” says Riener.
Powered Wheelchair Race
Those who use wheelchairs encounter challenges that other people might take for granted. Riener is excited to see how powered wheelchairs are evolving, getting smaller and more capable—in some cases even climbing stairs.
“At Cybathlon, they will have to fit beneath a table, go up a steep ramp, open a door and then close it again, and go down a steep ramp,” says Riener.
Scewo, a team from ETH Zurich, developed a wheelchair that balances on two wheels like a Segway and can use a chain to climb up stairs or steep ramps.
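Balancing on two wheels is the classic inverted-pendulum control problem. A minimal sketch of the idea (the dynamics and gains below are toy values for illustration, not Scewo’s actual control law) uses proportional-derivative feedback on the tilt angle:

```python
def simulate_balance(theta0=0.2, steps=2000, dt=0.005, kp=80.0, kd=12.0):
    """Toy inverted-pendulum loop: PD feedback on the tilt angle keeps
    the chair upright. Gains and dynamics are illustrative only."""
    g, l = 9.81, 0.5               # gravity, effective pendulum length (m)
    theta, omega = theta0, 0.0     # tilt angle (rad) and angular velocity
    for _ in range(steps):
        torque = -(kp * theta + kd * omega)   # corrective wheel torque
        alpha = (g / l) * theta + torque      # gravity tips it, torque rights it
        omega += alpha * dt                   # simple Euler integration
        theta += omega * dt
    return theta                              # final tilt, near zero if stable
```

Pick the gains too low and gravity wins; a real controller also has to handle motor limits, sensor noise and, in Scewo’s case, the transition to stair-climbing mode.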
While some of the teams are entering technology that is already on the market, Riener is especially excited to see new innovations that have been created from scratch, specifically for Cybathlon.
“It’s exciting to reach a large audience to talk about issues related to people with disabilities,” he says.
Go is a two-player board game that originated in China more than 2,500 years ago. The rules are simple, but Go is widely considered the most difficult strategy game to master. For artificial intelligence researchers, building an algorithm that could take down a Go world champion represents the holy grail of achievements.
Well, consider the holy grail found. A team of researchers led by Google DeepMind’s David Silver and Demis Hassabis designed an algorithm, called AlphaGo, which in October 2015 handily defeated three-time European Go champion Fan Hui five games to zero. And as a side note, AlphaGo won 494 out of 495 games played against existing Go computer programs prior to its match with Hui — AlphaGo even spotted inferior programs four free moves.
“It’s fair to say that this is five to 10 years ahead of what people were expecting, even experts in the field,” Hassabis said in a news conference Tuesday.
If you’ve ever tried to hold a conversation with a chatbot like CleverBot, you know how quickly the conversation turns to nonsense, no matter how hard you try to keep it together.
But now, a research team led by Bruno Golosio, assistant professor of applied physics at Università di Sassari in Italy, has taken a significant step toward improving human-to-computer conversation. Golosio and colleagues built an artificial neural network, called ANNABELL, that aims to emulate the large-scale structure of human working memory in the brain — and its ability to hold a conversation is eerily human-like.
From Picasso’s “The Young Ladies of Avignon” to Munch’s “The Scream,” what is it about certain paintings that arrests viewers’ attention and cements them in the canon of art history as iconic works?
In many cases, it’s because the artist incorporated a technique, form or style that had never been used before. They exhibited a creative and innovative flair that would go on to be mimicked by artists for years to come.
Throughout human history, experts have often highlighted these artistic innovations, using them to judge a painting’s relative worth. But can a painting’s level of creativity be quantified by Artificial Intelligence (AI)?
At Rutgers’ Art and Artificial Intelligence Laboratory, my colleagues and I proposed a novel algorithm that assessed the creativity of any given painting, while taking into account the painting’s context within the scope of art history.
In the end, we found that, when given a large collection of works, the algorithm can successfully highlight paintings that art historians consider masterpieces of the medium.
The results show that humans are no longer the only judges of creativity. Computers can perform the same task – and may even be more objective.
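The approach treats art history as a network of dated, visually similar paintings. The toy score below captures only the core intuition (original relative to the past, influential on the future); the similarity values are made up, and the published algorithm used a far more elaborate network-inference formulation:

```python
import numpy as np

def creativity_scores(dates, similarity):
    # similarity[i][j]: visual similarity between paintings i and j, in [0, 1].
    # A painting scores high if it looks unlike earlier works (originality)
    # yet is echoed by later ones (influence).
    n = len(dates)
    scores = []
    for i in range(n):
        earlier = [similarity[i][j] for j in range(n) if dates[j] < dates[i]]
        later = [similarity[i][j] for j in range(n) if dates[j] > dates[i]]
        originality = 1.0 - (np.mean(earlier) if earlier else 0.0)
        influence = np.mean(later) if later else 0.0
        scores.append(originality + influence)
    return scores
```

On a tiny hypothetical corpus of three dated works, the middle painting that breaks with its predecessor but is imitated by its successor comes out on top, which is exactly the pattern the real system rewards at scale.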
We make a huge number of decisions every day. When it comes to eating alone, for example, we make more than 200 decisions a day — far more than we’re consciously aware of. How is this possible? Because, as Daniel Kahneman has explained, while we’d like to think our decisions are rational, many are in fact driven by gut feeling and intuition. The ability to reach a decision based on what we know and what we expect is an inherently human characteristic.
The problem we face now is that we have too many decisions to make every day, leading to decision fatigue: we find the act of making our own decisions exhausting, even more so than deliberating over different options or being told by others what to do.
Why not allow technology to ease the burden of decision-making? The latest smart technologies are designed to monitor and learn from our behavior, physical performance, work productivity levels and energy use. This is what has been called Era Three of Automation – when machine intelligence becomes faster and more reliable than humans at making decisions.
Think you’re good at classic arcade games such as Space Invaders, Breakout and Pong? Think again.
In a groundbreaking paper published yesterday in Nature, a team of researchers led by DeepMind co-founder Demis Hassabis reported developing a deep neural network that was able to learn to play such games at an expert level.
What makes this achievement all the more impressive is that the program was not given any background knowledge about the games. It just had access to the score and the pixels on the screen.
It didn’t know about bats, balls, lasers or any of the other things we humans need to know about in order to play the games.
But by playing lots and lots of games many times over, the computer learned first how to play, and then how to play well.
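DeepMind’s agent was a deep Q-network, but the learning rule at its heart is the classic Q-learning update, which a table-based version makes easy to see. The five-state “corridor” game below is invented for illustration; DQN replaces the table with a convolutional network reading raw pixels and the score:

```python
import random

def q_learning(n_states, n_actions, step, episodes=300,
               alpha=0.1, gamma=0.9, epsilon=0.5):
    # Q[s][a] estimates total future reward for taking action a in state s.
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: explore sometimes, otherwise act greedily
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r, done = step(s, a)
            # Bellman target: reward now plus discounted best future value
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

def corridor_step(s, a):
    # Toy game: five states in a row; action 1 moves right, 0 moves left.
    # Reaching the rightmost state ends the episode with reward 1.
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4
```

After training, the greedy policy heads right from every state. The agent was never told that “right” leads to the goal — just like the Atari agent was never told what a ball or a laser is.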
Eye tracking devices sound a lot more like expensive pieces of scientific research equipment than joysticks – yet if recent announcements about the latest Assassin’s Creed game are anything to go by, eye tracking will become a commonplace feature of how we interact with computers, and particularly games.
Eye trackers provide computers with a user’s gaze position in real time by tracking the position of their pupils. The trackers can either be worn directly on the user’s face, like glasses, or placed in front of them, for example beneath a computer monitor.
Eye trackers are usually composed of cameras and infrared lights that illuminate the eyes. Although infrared light is invisible to the human eye, the cameras can use it to generate a grayscale image in which the pupil is easily recognizable. From the position of the pupil in the image, the eye tracker’s software can work out where the user’s gaze is directed – whether that’s on a computer screen or out into the world.
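Finding the dark pupil in an infrared image can be surprisingly simple in principle. The sketch below is deliberately crude (real trackers fit an ellipse to the pupil boundary and use corneal reflections; the threshold value is arbitrary): threshold the image and take the centroid of the dark pixels.

```python
import numpy as np

def pupil_center(gray, threshold=50):
    # In an infrared eye image the pupil is the darkest region, so take
    # the centroid of all pixels darker than the threshold.
    ys, xs = np.nonzero(gray < threshold)
    if xs.size == 0:
        return None  # no dark blob found in this frame
    return float(xs.mean()), float(ys.mean())
```

Mapping that (x, y) image position to a point on the screen then requires a per-user calibration, which is why eye trackers ask you to look at a few dots before use.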
But what’s the use? Well, our eyes can reveal a lot about a person’s intentions, thoughts and actions, as they are good indicators of what we’re interested in. In our interactions with others we often subconsciously pick up on cues that the eyes give away. So it’s possible to gather this unconscious information and use it in order to get a better understanding of what the user is thinking, their interests and habits, or to enhance the interaction between them and the computer they’re using.