Flashing Lights and Sounds Turn Rats Into Gamblers

By Nathaniel Scharping | January 22, 2016 5:14 pm

(Credit: Michael Kappel/Flickr)

Casinos are exciting places. Glitzy facades, riotous color schemes and waitresses in skimpy outfits all combine to assault the senses. The action never stops, and people often lose track of time — not to mention their money.

In a new study, scientists at the University of British Columbia demonstrated there's a method to the madness: They found that flashing lights and sounds encouraged rats to take risks in a simple gambling game. The sense-pleasing fanfare associated with striking it big at a casino, researchers say, plays a key role in convincing us to keep pressing our luck. Read More

CATEGORIZED UNDER: Mind & Brain, top posts

Venus Flytraps ‘Count’ to 5 Before Dissolving Prey

By Nathaniel Scharping | January 22, 2016 11:54 am

A Venus flytrap captures a fly. (Credit: Patila/Shutterstock)

For a Venus flytrap, enjoying a meal is as easy as counting to five.

As the insectivorous hunters of the plant world, Venus flytraps have evolved some radical features to attract and ensnare meals. They waft a sweet, fruity aroma into the air to lure hungry flies looking for a meal. They’ve grown touch-sensitive hairs on the inside of their “mouths” to tell them when it’s time to bite down. Now, it appears they also initiate a countdown of sorts as their captured prey struggles.

Venus flytraps, according to research published Thursday in the journal Current Biology, rely on a counting mechanism to determine whether something they’ve captured is worth eating or not. Read More
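The “counting mechanism” the study describes amounts to a threshold counter on touch stimuli: per the researchers, roughly two triggers of the sensory hairs in quick succession snap the trap shut, while around five switch on digestion. A minimal sketch of that logic in Python (the function name and exact behavior labels are illustrative, not from the paper):

```python
# Toy threshold counter for a Venus flytrap: touches of the trigger hairs
# are tallied, and different counts unlock different responses.
# Thresholds follow the study's report: ~2 touches close the trap,
# ~5 switch on digestion. This is an illustrative sketch, not the
# authors' model.

def flytrap_response(touches):
    if touches >= 5:
        return "digest"  # prey keeps struggling: release digestive enzymes
    if touches >= 2:
        return "snap"    # likely live prey: close the trap
    return "wait"        # could be a raindrop or debris: do nothing
```

One touch alone does nothing, which keeps the trap from wasting energy on false alarms.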

CATEGORIZED UNDER: Living World, top posts
MORE ABOUT: plants

Teaching Robots to Be More Than Simple Servants

By Nathaniel Scharping | January 21, 2016 2:24 pm

Madeline Gannon poses with her upgraded robot arm. (Credit: Madlab)

Advancements in robotics technology are forcing us to change our perception of what a robot is. From smart cars to interactive Segways, more powerful computer programs are giving machines the ability to act alongside us, rather than simply for us.

Madeline Gannon, whose research studio Madlab focuses on human-computer interactions, is using new programs to give one of the most basic robots the ability to transcend the boundaries of its creation.

By combining innovative software with motion capture technology, she’s reinvented the robotic arm — the workhorse on assembly lines. Gannon designed software that allows a robotic arm to observe and parrot human movements. She envisions robots less as servants and more as collaborators, and her work is a big step toward that goal. Instead of performing repetitive, point-by-point tasks, Gannon wants robots to use information from their movement to build a logical framework for understanding and anticipating our actions.

Discover spoke with Gannon about the limitations of current robots, teaching robots new tricks, and being a robotic artist.


Discover: You originally started studying architecture?

Gannon: I’m doing my PhD in computational design at Carnegie Mellon University, and it’s housed in the School of Architecture. It’s basically anything that has to do with design and anything that has to do with a computer. So I’ve been working for many years now on inventing better ways to communicate with machines that can make things. And industrial robots are some of the most incredible machines for making things with. They’re so adaptable, and so robust that it’s just really fantastic to work with them.

But the problem is that they’re very hard to use and they’re very dangerous to use. So that’s sort of what led me to develop this control software that’s just a little more intuitive and that helps keep you safe when you’re working in really close-quarters contact with this machine.

Madeline Gannon plays with her motion-capture-equipped robot. (Credit: Madlab)

Are there many parallels to architecture in how you design a machine?

G: I guess the way that I work is largely trans-disciplinary. So I’m playing a computer scientist and a roboticist, but the questions that I ask with this technology are really informed by architecture. So Quipt is really about how a person and kinetic objects are interacting in space.

A lot of what I’m trying to get it to do is to act like how people would work together in space. And I think that’s just a completely different approach than if you were working inside a robotics department, where they might be doing path planning and optimization.

It seems like Quipt and the Robo.op database you created are kind of the breakthroughs. Are you still working on them?

G: Quipt grew out of Robo.op as an open-source way of communicating with these machines that’s a little easier than their usual workflow. Usually when you create a program for an industrial robot, you have to teach it points: you use a joystick to move it into place and record that point, then move it to a new place and record that point, and you build up the motion that way.


The ABB IRB 6700 industrial robot Gannon works with. (Credit: Madlab)

That’s very iterative and it takes a lot of training to get that right. And then at the same time, those programs usually run on a robot for a long period of time, the same simple task over and over, 24 hours a day, 7 days a week. And it doesn’t know anything else; it’s just the task that has been programmed on it.

You mentioned that your library is open source. Have you seen many outside people work with you?

G: There’s been some interest. One of the challenges of working with industrial robots is that every brand of robots has their own programming language, so the only people that can work with me on this project have to own the same brand robot, which is a serious limitation and very unfortunate.

What are you working on right now?

G: I’m working on developing Quipt a little further. The video you wrote about earlier, that’s really looking at a basic proof-of-concept idea that we can program some spatial behaviors into a robot so it can work with us as if it were another human on a task. So what I’m working on now is developing task-specific behaviors.

So, the first thing is being able to put a film camera on the end of the robot and record different camera moves for someone like a director. I’m building out the different ways a director would work with a human camera operator, and figuring out how to let the director communicate with the robot in a way that’s native to how they already practice their craft.


The robot peers back at us. (Credit: Madlab)

You talk about robot helpers as an extension of ourselves. Is this software aimed at using robots as tools or could it be applied to robots that think and move for themselves in the future?

G: I like the idea of the robot as a collaborator. And there could be some kind of hierarchy there where maybe it’s an apprentice or a helper.

There’s enough access to technology now that we can understand how a person completes a task in a shared space with a robot, and we should be able to codify that task in a way that the robot doesn’t need to mimic them, but it can know and help out in completing that task.

Where else do you see your technology being implemented?

G: I’m really excited for when you take this machine out of a static controlled setting like a factory and into live, dynamic environments. Another scenario would be on a construction site.

There’s already a level of danger involved and people are aware of their environments. But you can bring this robot and have it help a mason move larger amounts of materials quicker, or help a plumber bend a pipe.

These machines are often used in factory settings for spot-welding chassis. You could bring this robot onto a construction site for the same purpose as in a factory, but it will need an awareness of the people around it and the environment around it.

What are some of the biggest challenges you had to overcome while creating this technology?

G: I’m asking this machine to do exactly what its designed not to do. So that was about the biggest challenge to overcome. The motion capture system is really about capturing high-fidelity movement with sub-millimeter precision, while the robot is really about repeating a predefined static task over and over again.

So it’s not generally used for live control, or for being able to change its mind about where it’s going next. So that was a big hurdle to try to overcome, to get the machine to respond quickly to what I’m doing.


Gannon holds a motion capture marker up to the robot’s “eye,” allowing it to follow her movements. (Credit: Madlab)

It kind of sounds like the robot has to learn.

G: Yeah, I don’t have any machine-learning algorithms implemented yet in here, but I think that that is the direction that development is going to go towards. You can imagine, if you are a master craftsman and you are working with an apprentice, over time the apprentice learns the nuances and body gestures of the master, and they become a better team as they work longer together.

I envision that we can build those kinds of behaviors between a person and an industrial robot.

Where do you see human-robot interactions going, say, within the next ten years?

G: It seems that there’s a division into two camps. One camp is sort of artificial intelligence, where these machines are teaching themselves how to do automation tasks, so sort of replacing and optimizing human labor.

And then the other camp is complementary but perhaps opposite, where it’s just finding a better way to interface these machines with people and to extend and expand and augment our abilities with these same machines instead of replacing us.

I really liked the artwork on your website, where the robot would trace a pattern you drew on your skin and then recreate it. Is that something you’re still working on?

G: Yeah, that’s something that definitely still in production. The original product, Tactum, lets you design and customize 3-D models directly on your body that can then get sent to a 3-D printer. And because the underlying model is based on the form of your body, it will automatically fit you.

One of the motivations behind building this software for working safely with industrial robots is that I’d like to be able to design on my body and have the robot fabricate and 3-D print on my body, instead of sending the design away to another machine — sort of closing the loop between design and fabrication.


Tactum allows users to draw wearable designs right on their skin. (Credit: Madlab)

Are there any collaborators outside of the world of robotics that you’d like to bring into your work? 

G: The development of Quipt was sponsored by Autodesk and Pier 9, and they’ve been incredibly supportive of my research and our collaboration. It’s interesting that Google is now getting into the space of industrial robotics, and I believe Apple is as well.

It’s an interesting time to be working with these machines — they’ve been around for around half a century, but they’re only now being explored outside of manufacturing settings. And that to me is really exciting. I’m happy that bigger companies and industries are seeing the potential and bringing these things out into the wild.

CATEGORIZED UNDER: Technology, top posts
MORE ABOUT: robots

Planet Nine: A New Addition to the Solar System?

By Nathaniel Scharping | January 20, 2016 4:21 pm

An artist’s rendering of what Planet Nine may look like, with the Sun in the distance. (Credit: Caltech/R. Hurt/IPAC)

Elementary science courses have provided a pretty firm grasp of what the solar system looks like.

The current map details eight planets and one former planet. But our planetary neighborhood is much larger and more populated than we might think, with dwarf planets, trojans and maybe even super-Earths orbiting far beyond Neptune. Now, new evidence may provide the most solar-system-shaking revelation in well over a century. Read More

CATEGORIZED UNDER: Space & Physics, top posts
MORE ABOUT: solar system

Prehistoric Massacre Is Earliest Evidence of Organized Warfare

By Kiona Smith-Strickland | January 20, 2016 2:42 pm

The skull of a man, found lying prone in the lagoon’s sediments. The skull has multiple lesions consistent with wounds from a blunt implement, such as a club. (Credit: Marta Mirazón Lahr, enhanced by Fabio Lahr)

Archaeologists have found the oldest evidence yet of organized conflict, which they say is a precursor to modern warfare.

Historians don’t agree on when humans first started waging real, organized warfare against each other. One school of thought says that war is a product of agriculture, with its stored resources and new ideas about ownership, but others argue that the capacity for war is buried much deeper in our evolutionary history.

Now, the 10,000-year-old victims of a prehistoric battle in Kenya may provide evidence that war is a much older human habit than previously thought. A team of archaeologists led by Marta Mirazón Lahr discovered the fossilized skeletons of at least 21 adults and 5 children lying partially exposed at a site called Nataruk, a few miles from the present-day shores of Lake Turkana, in Kenya. Read More

CATEGORIZED UNDER: Living World, top posts

Home Is Where the Spiders and Beetles Are

By Nathaniel Scharping | January 19, 2016 1:24 pm

(Credit: Andrew Kerr/Shutterstock)

Home is where the heart is. And it’s also where the beetles, spiders, ants and flies are.

A study published Tuesday in the journal PeerJ by researchers at North Carolina State University found that we share our houses with over 500 species of arthropods — you know, insects, spiders and other creepy crawlies. In the first study of its kind, the researchers visited 50 homes near Raleigh, North Carolina, and searched high and low for multi-legged cohabitants, ending up with more than 10,000 samples.

Read More

CATEGORIZED UNDER: Living World, top posts
MORE ABOUT: animals

Tardigrades, Frozen for 30 Years, Spring Back to Life

By Nathaniel Scharping | January 18, 2016 3:58 pm

An artist’s rendering of a tardigrade. (Credit: Sebastian Kaulitzki/Shutterstock)

You can freeze them, burn them, dry them out or even blast them into space, but humble tardigrades can survive it all.

As a demonstration of tardigrade power, a new experiment has shown that even locking the critters in a block of ice for three decades fails to deliver the ultimate knockout.

Japanese researchers successfully brought two tardigrades — often called “water bears” for their claws and head shape — back to life after 30 years in the deep freeze. A separate team of Japanese researchers with the 24th Japanese Antarctic Research Expedition discovered the eight-legged, microscopic pair of animals back in 1983 in a frozen sample of moss, which was kept below freezing until the present day. Read More

CATEGORIZED UNDER: Living World, top posts
MORE ABOUT: unusual organisms

Behold! The First Zinnia to Bloom in Space

By Carl Engelking | January 18, 2016 12:45 pm

An orange zinnia bloomed in zero gravity. (Credit: NASA/Scott Kelly)

For people on solid ground back in the United States, spring is nothing more than a pinprick of light at the end of a long, cold tunnel we call winter. But aboard the International Space Station, by all appearances, spring has sprung.

Marathon U.S. astronaut Scott Kelly over the weekend tweeted images of the first flower to grow from a seed and subsequently bloom in space. The 13-petaled orange zinnia brought a little color to the otherwise aesthetically sterile surroundings of the ISS.

That’s one small step for Heliantheae, one giant leap for Plantae. Read More

CATEGORIZED UNDER: Space & Physics, top posts

Dissolvable Devices Keep Tabs on the Brain

By Janine Anderson | January 18, 2016 11:16 am

A bioresorbable sensor. (Credit: J. Rogers, University of Illinois)

It was a pretty big “what if.”

What if you could design an implantable medical device that was durable, reliable and accurate, but also dissolved into completely harmless materials when its work was done? The two concepts are a bit incompatible, but John Rogers and his team were up to the challenge of bringing them together.

Rogers, a professor of materials science, engineering and bioengineering at the University of Illinois at Urbana-Champaign, designed novel electronic sensors that collect crucial health data following a brain injury or surgery, and completely dissolve into cerebrospinal fluid when they’re no longer needed. Read More

Mammoth Autopsy Puts Humans in Arctic Thousands of Years Earlier

By Nathaniel Scharping | January 15, 2016 3:40 pm

A member of the Russian Academy of Science team excavates the mammoth found in northern Siberia. (Credit: Pitulko et al./Science)

It was clear that the mammoth died violently.

A hole in the skull, cuts to the scapula and ribs, a broken mandible and a battered tusk all point to a struggle to the death with one or more adversaries.

This particular mammoth, discovered 70 degrees north of the equator in northern Siberia and radiocarbon dated to around 45,000 years ago, could provide evidence that humans were active in the region 10,000 years earlier than previously thought. While humans had reached parts of Siberia as part of their exodus from Africa by this time, they were thought to have stopped around 55 degrees north, about as far north as Denmark. Read More

CATEGORIZED UNDER: Living World, top posts