
A Sneak Peek at the First Cyborg Olympics

By Caroline Barlott | June 22, 2016 7:30 am

OPRA Osseointegratio, a Swedish team, designed a surgically implanted prosthetic limb that is now in human trials. (Credit: ETH Zurich)

While working as a professor in the sensory-motor systems lab at the Swiss Federal Institute of Technology in Zurich (ETH), Robert Riener noticed a need for assistive devices that would do a better job of helping people with daily life. He knew solutions were possible, but that building them would require motivating developers to rise to the challenge.

So, Riener created Cybathlon, the first cyborg Olympics where teams from all over the world will participate in races on Oct. 8 in Zurich that will test how well their devices perform routine tasks. Teams will compete in six different categories that will push their assistive devices to the limit on courses developed carefully over three years by physicians, developers and the people who use the technology. Eighty teams have signed up so far.

Riener wants the event to emphasize how important it is for man and machine to work together—so participants will be called pilots rather than athletes, reflecting the role of the assistive technology.

“The goal is to push the development in the direction of technology that is capable of performing day-to-day tasks. And that way, there will be an improvement in the future life of the person using the device,” says Riener.

Here’s a look at events that will be featured in the first cyborg Olympics.

Brain-Computer Interface Race


A pilot plays a video game with a brain-computer interface during a Cybathlon rehearsal last year. (Credit: ETH Zurich)

A woman sits at a computer while wearing a cap with several electrodes attached to her head, wires cascading down her back. She’s playing a video game, but instead of using her hands, she’s using only her thoughts to drive a brain-computer interface system.

During the Cybathlon, participants with completely or severely impaired motor function will use their thoughts to control an avatar in a racing video game. The winner will be the first to complete the race, maneuvering an avatar over obstacles and accelerating to the finish line. An algorithm will help determine which team’s interface performed the best. Brain-computer interfaces are a key technology that will allow people to control future prostheses with their minds.
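To give a rough sense of how such an interface works, here is a minimal sketch: it turns a one-second window of (synthetic) EEG samples into one of a few avatar commands by comparing power in two frequency bands. The bands, thresholds and command names are illustrative assumptions, not the pipeline of any Cybathlon team.

```python
import numpy as np

# Toy sketch: map a 1-second window of EEG samples to a game command.
# The band definitions, thresholds and command names are illustrative,
# not those of any Cybathlon team.

SAMPLE_RATE = 256  # samples per second (assumed)

def bandpower(window, low_hz, high_hz):
    """Average spectral power of one EEG channel in a frequency band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / SAMPLE_RATE)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return spectrum[mask].mean()

def decode_command(window):
    """Very rough motor-imagery decoder: compare mu-band (8-12 Hz) power
    against beta-band (13-30 Hz) power and emit an avatar command."""
    mu = bandpower(window, 8, 12)
    beta = bandpower(window, 13, 30)
    if mu > 2 * beta:
        return "jump"      # e.g. imagined foot movement
    elif beta > 2 * mu:
        return "slide"     # e.g. imagined hand movement
    return "coast"         # no clear intent detected

# Usage with synthetic data standing in for a real EEG channel:
rng = np.random.default_rng(0)
fake_window = rng.normal(size=SAMPLE_RATE)
print(decode_command(fake_window))
```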

Functional Electrical Stimulation Bike Race


Electrodes placed on the outside of the legs stimulate the muscles, giving racers the leg-power they need during a bike race. (Credit: ETH Zurich)

Functional Electrical Stimulation (FES) is a technique that sends electrical impulses to paralyzed individuals’ muscles to trigger movement. FES can help build muscle mass, increase blood circulation and improve cardiovascular health. At Cybathlon, paralyzed bike racers will rely on FES to complete about five laps around a racetrack, equaling about 2,200 feet — first to the finish wins. Electrodes will deliver electrical stimulation to their muscles, giving them the leg-power to pedal their bikes. The pilots can actually control how much current they send to their muscles, so balancing speed and stamina will be key to winning the race.
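The speed-versus-stamina trade-off can be illustrated with a toy simulation: the rider picks a stimulation current, higher current produces more pedal force but also faster fatigue, and too much current means the muscles give out before the roughly 2,200 feet are covered. Every constant below is invented for the example; real FES systems are tuned to the individual rider.

```python
# Toy simulation of the speed-versus-stamina trade-off described above.
# Every constant is invented for illustration; real FES systems are tuned
# to the individual rider and muscle group.

def simulate_race(current_ma, distance_ft=2200):
    """Seconds needed to cover the course at a fixed stimulation current,
    or None if the muscles fatigue completely before the finish."""
    fatigue = 0.0          # 0 = fresh, 1 = fully fatigued
    covered, t = 0.0, 0
    while covered < distance_ft:
        if fatigue >= 1.0:
            return None                               # muscles gave out
        force = current_ma * (1.0 - fatigue)          # fatigued muscle gives less force
        covered += 0.5 * force                        # crude force-to-speed conversion (ft/s)
        fatigue += 0.000002 * current_ma ** 2         # fatigue grows faster at high current
        t += 1
    return t

for ma in (20, 40, 60, 80):
    result = simulate_race(ma)
    print(f"{ma} mA:", f"{result} s" if result else "muscles fatigued before the finish")
```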

Generally, electrodes are placed on a person’s skin, but one team—the Center for Advanced Platform Technology from Cleveland—will surgically implant them closer to nerves, where they can reach more fibers, reduce muscle fatigue and increase precision. Over the course of two decades, members of Team Cleveland developed implants that allow a person with paraplegia to stand, perform leg lifts and take steps. For Cybathlon, they’ll adapt their system for bike riding.

Powered Arm Prosthesis Race


The prosthetic arm from the M.A.S.S. Impact team. (Credit: ETH Zurich)

The powered arm prosthesis race will show just how important performing basic, daily tasks is to Riener. Pilots with arm amputations will need to carry a tray of breakfast items, for example, and then prepare a meal by opening a jar of jam, slicing bread and putting butter on the bread — tasks that are easy to take for granted. Pinning clothing on a clothesline and putting together a puzzle with pieces that each require a different type of grip are also challenges in this event.

A prosthetic hand created by the M.A.S.S. Impact team from Simon Fraser University in Canada features a unique design that uses sensors and algorithms to recognize a grip pattern, and users can control the bionic hand in small, precise movements. The system also generates computer models to improve function over time. Last year, organizers held a Cybathlon rehearsal, and Riener was especially impressed by OPRA Osseointegratio, a Swedish team that designed a surgically implanted hand controlled by a person voluntarily contracting his muscles. The technology is currently in human trials, and the team’s pilot is the first recipient.
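As a loose illustration of grip-pattern recognition, and not the M.A.S.S. Impact team's actual algorithm, the sketch below classifies a small vector of residual-limb sensor readings by finding the closest match among calibrated grip templates. The sensor channels, calibration values and grip names are all invented for the example.

```python
import numpy as np

# Illustrative sketch only: a nearest-centroid classifier that maps a few
# residual-limb sensor readings to a grip pattern. The sensor channels,
# calibration values and grip names are invented for the example.

CALIBRATION = {                      # mean sensor vector recorded per grip
    "power grip": np.array([0.9, 0.8, 0.2]),
    "pinch grip": np.array([0.2, 0.7, 0.9]),
    "open hand":  np.array([0.1, 0.1, 0.1]),
}

def classify_grip(reading):
    """Pick the grip whose calibrated pattern is closest to the reading."""
    distances = {grip: np.linalg.norm(reading - centre)
                 for grip, centre in CALIBRATION.items()}
    return min(distances, key=distances.get)

print(classify_grip(np.array([0.85, 0.75, 0.25])))  # -> "power grip"
```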

Powered Leg Prosthesis Race


Designing prostheses for lower limbs presents an entirely different set of challenges. Riener hopes to see prosthetic legs at the Cybathlon that can handle uneven terrain, which has been a challenge in the past. During the leg prosthesis race, pilots will compete on parallel tracks through obstacle courses laden with beams, stones, stairs and slopes. Right now, only the most advanced prostheses can handle these challenges — many are heavy and aren’t powerful enough.

Team Össur will bring four different prosthetic legs to the competition. Riener says this team in particular is making incredible advancements in the field. He’s particularly impressed with their commercially available motorized knee prosthesis, as he says it’s more robust and reliable than many past devices. The team is also entering a powered leg prosthesis that is an upgrade to the powered knee and is still at the prototype stage; it uses motorized joints to help achieve a natural gait.

Powered Exoskeleton Race

(Credit: ETH Zurich)

Exoskeletons are worn around the legs to help those with paraplegia walk or even climb stairs. While they’ve been used by physiotherapists in hospitals to improve the health of patients with paralyzed legs, Riener says many designs are still bulky and difficult to use on a daily basis. About six companies worldwide have exoskeletons on the market, and more prototypes are being developed in research labs.

The Cybathlon exoskeleton event will include tasks that are particularly difficult for people using this technology to accomplish, such as stepping over stones and walking up a slope.

“With these challenges, we’re hoping to see more lifelike exoskeletons with more movability,” says Riener.

Powered Wheelchair Race

(Credit: ETH Zurich)

Those who use wheelchairs encounter challenges that other people might take for granted. Riener is excited to see how powered wheelchairs are evolving, getting smaller and more capable—in some cases even climbing stairs.

“At Cybathlon, they will have to fit beneath a table, go up a steep ramp, open a door and then close it again, and go down a steep ramp,” says Riener.

Scewo, a team from ETH Zurich, developed a wheelchair that balances on two wheels like a Segway and can use a chain to climb up stairs or steep ramps.

While some of the teams are entering technology that is already on the market, Riener is especially excited to see new innovations that have been created from scratch, specifically for Cybathlon.

“It’s exciting to reach a large audience to talk about issues related to people with disabilities,” he says.

Artificial Intelligence Just Mastered Go, But One Game Still Gives AI Trouble

By Carl Engelking | January 27, 2016 12:54 pm

(Credit: Saran Poroong/Shutterstock)

Go is a two-player board game that originated in China more than 2,500 years ago. The rules are simple, but Go is widely considered the most difficult strategy game to master. For artificial intelligence researchers, building an algorithm that could take down a Go world champion represents the holy grail of achievements.

Well, consider the holy grail found. A team of researchers led by Google DeepMind researchers David Silver and Demis Hassabis designed an algorithm, called AlphaGo, which in October 2015 handily defeated back-to-back-to-back European Go champion Fan Hui five games to zero. And as a side note, AlphaGo won 494 out of 495 games played against existing Go computer programs prior to its match with Hui — AlphaGo even spotted inferior programs four free moves.

“It’s fair to say that this is five to 10 years ahead of what people were expecting, even experts in the field,” Hassabis said in a news conference Tuesday.


Human-Like Neural Networks Make Computers Better Conversationalists

By Ben Thomas | November 11, 2015 2:00 pm

HAL 9000, depicted as a glowing red “eye,” was the frighteningly charismatic computer antagonist in Stanley Kubrick’s 1968 movie “2001: A Space Odyssey.” (Credit: Screengrab from YouTube)

If you’ve ever tried to hold a conversation with a chatbot like CleverBot, you know how quickly the conversation turns to nonsense, no matter how hard you try to keep it together.

But now, a research team led by Bruno Golosio, assistant professor of applied physics at Università di Sassari in Italy, has taken a significant step toward improving human-to-computer conversation. Golosio and colleagues built an artificial neural network, called ANNABELL, that aims to emulate the large-scale structure of human working memory in the brain — and its ability to hold a conversation is eerily human-like.


Can You Teach Creativity to a Computer?

By Ahmed Elgammal, Rutgers University | July 30, 2015 2:25 pm


From Picasso’s “The Young Ladies of Avignon” to Munch’s “The Scream,” what was it about some paintings that arrested people’s attention upon viewing them, that cemented them in the canon of art history as iconic works?

In many cases, it’s because the artist incorporated a technique, form or style that had never been used before. They exhibited a creative and innovative flair that would go on to be mimicked by artists for years to come.

Throughout human history, experts have often highlighted these artistic innovations, using them to judge a painting’s relative worth. But can a painting’s level of creativity be quantified by Artificial Intelligence (AI)?

At Rutgers’ Art and Artificial Intelligence Laboratory, my colleagues and I proposed a novel algorithm that assesses the creativity of any given painting while taking into account the painting’s context within the scope of art history.

In the end, we found that, when presented with a large collection of works, the algorithm can successfully highlight paintings that art historians consider masterpieces of the medium.

The results show that humans are no longer the only judges of creativity. Computers can perform the same task – and may even be more objective.
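A toy version of the intuition can make this concrete. It is only a sketch of the general idea (a creative painting differs from what came before while resembling what came after) and not the published algorithm itself; the paintings, dates and feature vectors below are invented.

```python
import numpy as np

# A toy illustration of the idea, not the Rutgers algorithm itself: treat a
# painting as creative when its (made-up) visual-feature vector sits far from
# the paintings that came before it yet close to the paintings that came after.

paintings = {                       # year of creation, invented feature vector
    "work_a": (1880, np.array([0.2, 0.1, 0.1])),
    "work_b": (1907, np.array([0.9, 0.8, 0.1])),   # introduces a new style
    "work_c": (1920, np.array([0.8, 0.9, 0.2])),
    "work_d": (1935, np.array([0.9, 0.7, 0.3])),
}

def mean_distance(feats, others):
    """Average feature distance to a group of paintings (0 if the group is empty)."""
    return float(np.mean([np.linalg.norm(feats - f) for f in others])) if others else 0.0

def creativity(name):
    year, feats = paintings[name]
    before = [f for _n, (y, f) in paintings.items() if y < year]
    after  = [f for _n, (y, f) in paintings.items() if y > year]
    # high when unlike predecessors (novel) and like successors (influential)
    return mean_distance(feats, before) - mean_distance(feats, after)

for name in paintings:
    print(name, round(creativity(name), 2))   # work_b scores highest in this toy set
```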



Why the Data Deluge Leaves Us Struggling to Make Up Our Minds

By Rikke Duus and Mike Cooray | July 16, 2015 5:04 pm


We make a huge number of decisions every day. When it comes to eating, for example, we make 200 more decisions than we’re consciously aware of every day. How is this possible? Because, as Daniel Kahneman has explained, while we’d like to think our decisions are rational, in fact many are driven by gut feel and intuition. The ability to reach a decision based on what we know and what we expect is an inherently human characteristic.

The problem we face now is that we have too many decisions to make every day, leading to decision fatigue – we find the act of making our own decisions exhausting, even more so than simply deliberating between different options or being told by others what to do.

Why not allow technology to ease the burden of decision-making? The latest smart technologies are designed to monitor and learn from our behavior, physical performance, work productivity levels and energy use. This is what has been called Era Three of Automation – when machine intelligence becomes faster and more reliable than humans at making decisions.



Google’s Artificial Intelligence Masters Classic Atari Video Games

By Toby Walsh, NICTA | February 26, 2015 5:04 pm


Think you’re good at classic arcade games such as Space Invaders, Breakout and Pong? Think again.

In a groundbreaking paper published yesterday in Nature, a team of researchers led by DeepMind co-founder Demis Hassabis reported developing a deep neural network that was able to learn to play such games at an expert level.

What makes this achievement all the more impressive is that the program was not given any background knowledge about the games. It just had access to the score and the pixels on the screen.

It didn’t know about bats, balls, lasers or any of the other things we humans need to know about in order to play the games.

But by playing lots and lots of games many times over, the computer learned first how to play, and then how to play well.
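A stripped-down sketch can show the core idea of learning from nothing but a score signal. It is not DeepMind's deep network, just tabular Q-learning on a made-up five-position "game" in which moving right eventually pays off; the agent starts knowing nothing and improves purely through trial, error and reward.

```python
import random

# Minimal sketch of learning from reward alone (not DeepMind's deep network).
# The agent sees only a state and a score change, and gradually learns which
# action earns more reward. Toy game: from each of 5 positions, "right"
# eventually reaches the goal and pays off, "left" does not.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2       # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(5) for a in ("left", "right")}

def step(state, action):
    """Toy environment: returns (next_state, reward)."""
    nxt = min(state + 1, 4) if action == "right" else max(state - 1, 0)
    return nxt, (1.0 if nxt == 4 else 0.0)   # score only comes from reaching the end

for episode in range(500):                   # "playing lots and lots of games"
    state = 0
    for _ in range(20):
        if random.random() < EPSILON:        # sometimes explore a random move
            action = random.choice(("left", "right"))
        else:                                # otherwise play the best known move
            action = max(("left", "right"), key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ("left", "right"))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if reward:
            break

# Best learned action per position; "right" should dominate before the goal.
print({s: max(("left", "right"), key=lambda a: Q[(s, a)]) for s in range(5)})
```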



Eye Tracking Is Coming Soon to a Computer Near You

By Melodie Vidal, Lancaster University | February 23, 2015 10:51 am


Eye tracking devices sound a lot more like expensive pieces of scientific research equipment than joysticks – yet if the recent announcements about the latest Assassin’s Creed game are anything to go by, eye tracking will become a commonplace feature of how we interact with computers, and particularly games.

Eye trackers provide computers with a user’s gaze position in real time by tracking the position of their pupils. The trackers can either be worn directly on the user’s face, like glasses, or placed in front of them, beneath a computer monitor for example.

Eye trackers are usually composed of cameras and infrared lights that illuminate the eyes. Although infrared light is invisible to the human eye, the cameras can use it to generate a grayscale image in which the pupil is easily recognizable. From the position of the pupil in the image, the eye tracker’s software can work out where the user’s gaze is directed – whether that’s on a computer screen or out into the world.
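Here is a rough sketch of that pipeline with made-up numbers: locate the pupil as the darkest blob in a grayscale infrared frame, then map its camera coordinates to screen coordinates using a simple calibration fitted from a few known gaze points. Real trackers add corneal-reflection geometry and far more robust detection.

```python
import numpy as np

# Rough sketch of the pupil-to-gaze pipeline described above; all numbers
# are made up for illustration.

def find_pupil(frame, threshold=40):
    """Centroid of the dark pixels (the pupil absorbs the infrared light)."""
    ys, xs = np.nonzero(frame < threshold)
    return np.array([xs.mean(), ys.mean()])

def fit_calibration(pupil_points, screen_points):
    """Least-squares affine map from pupil position to screen position."""
    A = np.hstack([pupil_points, np.ones((len(pupil_points), 1))])
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs

def gaze(frame, coeffs):
    x, y = find_pupil(frame)
    return np.array([x, y, 1.0]) @ coeffs

# Synthetic frame: bright background with a dark "pupil" near (120, 80)
frame = np.full((240, 320), 200, dtype=np.uint8)
frame[75:86, 115:126] = 10

# Calibration from three known fixations (pupil position -> screen pixel)
pupil_pts  = np.array([[100.0, 70.0], [150.0, 70.0], [120.0, 100.0]])
screen_pts = np.array([[200.0, 150.0], [1700.0, 150.0], [800.0, 1000.0]])
coeffs = fit_calibration(pupil_pts, screen_pts)

print(gaze(frame, coeffs))   # estimated on-screen gaze position
```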

But what’s the use? Well, our eyes can reveal a lot about a person’s intentions, thoughts and actions, as they are good indicators of what we’re interested in. In our interactions with others we often subconsciously pick up on cues that the eyes give away. So it’s possible to gather this unconscious information and use it in order to get a better understanding of what the user is thinking, their interests and habits, or to enhance the interaction between them and the computer they’re using.



Turing Test-Beating Bot Reveals More About Humans Than Computers

By Anders Sandberg, University of Oxford | June 10, 2014 2:28 pm


This article was originally published on The Conversation.

After years of trying, it looks like a chatbot has finally passed the Turing Test. Eugene Goostman, a computer program posing as a 13-year-old Ukrainian boy, managed to convince 33% of judges that he was a human after having a series of brief conversations with them.

Most people misunderstand the Turing test, though. When Alan Turing wrote his famous paper on computing intelligence, the idea that machines could think in any way was totally alien to most people. Thinking – and hence intelligence – could only occur in human minds.

Turing’s point was that we do not need to think about what is inside a system to judge whether it behaves intelligently. In his paper he explores how broadly a clever interlocutor can test the mind on the other side of a conversation by talking about anything from maths to chess, politics to puns, Shakespeare’s poetry or childhood memories. In order to reliably imitate a human, the machine needs to be flexible and knowledgeable: for all practical purposes, intelligent.

The problem is that many people see the test as a measurement of a machine’s ability to think. They miss that Turing was treating the test as a thought experiment: actually doing it might not reveal very useful information, while philosophizing about it does tell us interesting things about intelligence and the way we see machines.



Is the Purpose of Sleep to Let Our Brains “Defragment,” Like a Hard Drive?

By Neuroskeptic | May 14, 2012 12:42 pm

Neuroskeptic is a neuroscientist who takes a skeptical look at his own field and beyond at the Neuroskeptic blog.


Why do we sleep? We spend a third of our lives doing so, and all known animals with a nervous system either sleep, or show some kind of related behaviour. But scientists still don’t know what the point of it is.

There are plenty of theories. Some researchers argue that sleep has no specific function, but rather serves as evolution’s way of keeping us inactive, to save energy and keep us safely tucked away at those times of day when there’s not much point being awake. On this view, sleep is like hibernation in bears, or even autumn leaf fall in trees.

But others argue that sleep has a restorative function—something about animal biology means that we need sleep to survive. This seems like common sense. Going without sleep feels bad, after all, and prolonged sleep deprivation is used as a form of torture. We also know that in severe cases it can lead to mental disturbances, hallucinations and, in some laboratory animals, eventually death.

Waking up after a good night’s sleep, you feel restored, and many studies have shown the benefits of sleep for learning, memory, and cognition. Yet if sleep is beneficial, what is the mechanism?

Recently, some neuroscientists have proposed that the function of sleep is to reorganize connections and “prune” synapses—the connections between brain cells. Last year, one group of researchers, led by Gordon Wang of Stanford University, reviewed the evidence for this idea in a paper called “Synaptic plasticity in sleep: learning, homeostasis and disease.”

The basic idea is this:

While you’re awake, your brain is forming memories. Memory formation involves a process called long-term potentiation (LTP), which is essentially the strengthening of synaptic connections between nerve cells. We also know that learning can actually cause neurons to sprout entirely new synapses.

Yet this poses a problem for the brain. If LTP and synapse formation is constantly strengthening our synapses, and we are learning all our lives, might the synapses eventually reach a limit? Couldn’t they “max out,” so that they could never get any stronger?

Worse, most of the synapses that strengthen during memory are based on glutamate. Glutamate is dangerous. It’s the most common neurotransmitter in the brain, and it’s also a popular flavouring: “MSG”, monosodium glutamate. But in the brain, too much of it is toxic.
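A toy simulation can illustrate the saturation worry, though it is a numerical cartoon rather than a biological model: synaptic weights are nudged upward by daytime learning and capped at a ceiling. Without a nightly downscaling step they pile up at the maximum; scaling them down during "sleep" leaves headroom for new learning.

```python
import numpy as np

# Toy numerical illustration of the saturation worry (not a biological model):
# weights are nudged up by daytime learning and capped at a maximum. With no
# "sleep" downscaling they pile up at the ceiling; nightly downscaling keeps
# them well below it, leaving room for new learning.

rng = np.random.default_rng(1)
MAX_W = 1.0

def simulate(days, sleep_scaling):
    weights = rng.uniform(0.1, 0.3, size=10)        # ten synapses
    for _ in range(days):
        learning = rng.uniform(0.0, 0.05, size=10)  # daytime potentiation (LTP)
        weights = np.minimum(weights + learning, MAX_W)
        if sleep_scaling:
            weights *= 0.9                          # nightly downscaling
    return weights

print("no sleep scaling:", simulate(200, False).round(2))   # nearly all at 1.0
print("with scaling:   ", simulate(200, True).round(2))     # weights keep headroom
```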



Bio-Info-Tech: The Cyborg Baby of Cheap Genomes and Cloud Data

By Razib Khan | March 8, 2012 9:00 am

By now you may have heard about Oxford Nanopore’s new whole-genome sequencing technology, which has the promise of taking the enterprise of sequencing an individual’s genome out of the basic science laboratory, and out to the consumer mass market. From what I gather the hype is not just vaporware; it’s a foretaste of what’s to come. But at the end of the day, this particular device is not the important point in any case. Do you know which firm popularized television? Probably not. When technology goes mainstream, it ceases to be buzzworthy. Rather, it becomes seamlessly integrated into our lives and disappears into the fabric of our daily background humdrum. The banality of what was innovation is a testament to its success. We’re on the cusp of the age when genomics becomes banal, and cutting-edge science becomes everyday utility.

Granted, the short-term impact of mass personal genomics is still going to be exceedingly technical. Scientific genealogy nuts will purchase the latest software, and argue over the esoteric aspects of “coverage” (the redundancy of the sequence data, which correlates with accuracy) and the necessity of supplementing the genome with the epigenome. Physicians and other health professionals will add genomic information to the arsenal of their diagnostic toolkit, and an alphabet soup of new genome-related terms will wash over you as you visit a doctor’s office. Your genome is not you, but it certainly informs who you are. Your individual genome will become ever more important to your health care.
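For a sense of what those coverage numbers mean, here is a back-of-the-envelope sketch using the standard Lander-Waterman approximation: average coverage is the number of reads times read length divided by genome length, and the fraction of the genome left entirely unread is roughly e raised to minus the coverage. The read counts and lengths below are only an example.

```python
import math

# Quick illustration of what "coverage" means. If reads land roughly at
# random, average coverage = (number of reads x read length) / genome length,
# and by the Lander-Waterman model the fraction of bases never covered at all
# is about e^(-coverage). The numbers below are just an example.

genome_length = 3_200_000_000          # ~ human genome, in bases
read_length   = 10_000                 # long nanopore-style reads (illustrative)
num_reads     = 9_600_000

coverage = num_reads * read_length / genome_length
uncovered_fraction = math.exp(-coverage)

print(f"average coverage: {coverage:.0f}x")
print(f"expected fraction of genome with zero reads: {uncovered_fraction:.1e}")
```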

