Tag: computers

Turing Test-Beating Bot Reveals More About Humans Than Computers

By Anders Sandberg, University of Oxford | June 10, 2014 2:28 pm


This article was originally published on The Conversation.

After years of trying, it looks like a chatbot has finally passed the Turing Test. Eugene Goostman, a computer program posing as a 13-year-old Ukrainian boy, managed to convince 33% of judges that he was a human after a series of brief conversations with them.

Most people misunderstand the Turing test, though. When Alan Turing wrote his famous paper on computing machinery and intelligence, the idea that machines could think in any way was totally alien to most people. Thinking – and hence intelligence – could only occur in human minds.

Turing’s point was that we do not need to think about what is inside a system to judge whether it behaves intelligently. In his paper he explores how broadly a clever interlocutor can test the mind on the other side of a conversation by talking about anything from maths to chess, politics to puns, Shakespeare’s poetry or childhood memories. In order to reliably imitate a human, the machine needs to be flexible and knowledgeable: for all practical purposes, intelligent.
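
In code terms, the imitation game is just a protocol: a judge converses with hidden respondents and guesses which one is human. Here is a minimal, hypothetical sketch of that protocol in Python; the judge and respondent functions are stand-ins, not any real chatbot:

    import random

    def imitation_game(judges, human_reply, machine_reply, questions):
        """Toy imitation game: each judge quizzes two hidden respondents
        (one human, one machine) and guesses which label is the human.
        Returns the fraction of judges the machine fooled."""
        fooled = 0
        for judge in judges:
            labels = {"A": human_reply, "B": machine_reply}
            if random.random() < 0.5:  # hide who is behind which label
                labels = {"A": machine_reply, "B": human_reply}
            transcripts = {name: [reply(q) for q in questions]
                           for name, reply in labels.items()}
            guess = judge(transcripts)  # the label the judge calls human
            if labels[guess] is machine_reply:
                fooled += 1
        return fooled / len(judges)

    # Eugene Goostman's claimed pass: a fooled fraction of 0.33,
    # above the 30% figure Turing once speculated about.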

The problem is that many people see the test as a measurement of a machine’s ability to think. They miss that Turing was treating the test as a thought experiment: actually doing it might not reveal very useful information, while philosophizing about it does tell us interesting things about intelligence and the way we see machines.

Read More

CATEGORIZED UNDER: Technology, Top Posts

Is the Purpose of Sleep to Let Our Brains “Defragment,” Like a Hard Drive?

By Neuroskeptic | May 14, 2012 12:42 pm

Neuroskeptic is a neuroscientist who takes a skeptical look at his own field and beyond at the Neuroskeptic blog.


Why do we sleep? We spend a third of our lives doing so, and all known animals with a nervous system either sleep, or show some kind of related behaviour. But scientists still don’t know what the point of it is.

There are plenty of theories. Some researchers argue that sleep has no specific function, but rather serves as evolution’s way of keeping us inactive, to save energy and keep us safely tucked away at those times of day when there’s not much point being awake. On this view, sleep is like hibernation in bears, or even autumn leaf fall in trees.

But others argue that sleep has a restorative function—something about animal biology means that we need sleep to survive. This seems like common sense. Going without sleep feels bad, after all, and prolonged sleep deprivation is used as a form of torture. We also know that in severe cases it can lead to mental disturbances, hallucinations and, in some laboratory animals, eventually death.

Waking up after a good night’s sleep, you feel restored, and many studies have shown the benefits of sleep for learning, memory, and cognition. Yet if sleep is beneficial, what is the mechanism?

Recently, some neuroscientists have proposed that the function of sleep is to reorganize connections and “prune” synapses—the connections between brain cells. Last year, one group of researchers, led by Gordon Wang of Stanford University, reviewed the evidence for this idea in a paper called “Synaptic plasticity in sleep: learning, homeostasis and disease.”

The basic idea, as illustrated in their paper, is this:

While you’re awake, your brain is forming memories. Memory formation involves a process called long-term potentiation (LTP), which is essentially the strengthening of synaptic connections between nerve cells. We also know that learning can actually cause neurons to sprout entirely new synapses.

Yet this poses a problem for the brain. If LTP and synapse formation are constantly strengthening our synapses, and we are learning all our lives, might the synapses eventually reach a limit? Couldn’t they “max out,” so that they could never get any stronger?
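
A toy model makes the worry, and the proposed fix, concrete. In this sketch (all numbers are made up for illustration), waking learning keeps potentiating random synapses until many pin at a ceiling; a multiplicative “sleep” downscaling step, one version of the pruning idea, restores headroom while preserving the relative pattern of strengths:

    import random

    CEILING = 1.0       # assumed cap on synaptic strength
    LTP_STEP = 0.05     # strengthening per learning event (arbitrary)
    SLEEP_SCALE = 0.6   # downscaling factor during sleep (assumed)

    weights = [random.uniform(0.1, 0.5) for _ in range(10)]

    def wake(weights, events=200):
        """Potentiate randomly chosen synapses, capped at the ceiling."""
        for _ in range(events):
            i = random.randrange(len(weights))
            weights[i] = min(CEILING, weights[i] + LTP_STEP)

    def sleep(weights):
        """Scale every synapse down, keeping their relative order intact."""
        for i in range(len(weights)):
            weights[i] *= SLEEP_SCALE

    wake(weights)
    print("saturated after waking:", sum(w >= CEILING for w in weights))
    sleep(weights)
    print("saturated after sleep: ", sum(w >= CEILING for w in weights))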

Worse, most of the synapses that strengthen during memory formation are based on glutamate. Glutamate is dangerous. It’s the most common neurotransmitter in the brain, and it’s also a popular flavouring: “MSG”, monosodium glutamate. But in the brain, too much of it is toxic.

Read More

CATEGORIZED UNDER: Mind & Brain, Top Posts

Bio-Info-Tech: The Cyborg Baby of Cheap Genomes and Cloud Data

By Razib Khan | March 8, 2012 9:00 am

By now you may have heard about Oxford Nanopore’s new whole-genome sequencing technology, which has the promise of taking the enterprise of sequencing an individual’s genome out of the basic science laboratory, and out to the consumer mass market. From what I gather, the hype is warranted: this is not vaporware, but a foretaste of what’s to come. But at the end of the day, this particular device is not the important point in any case. Do you know which firm popularized television? Probably not. When technology goes mainstream, it ceases to be buzzworthy. Rather, it becomes seamlessly integrated into our lives and disappears into the fabric of our daily background humdrum. The banality of what was innovation is a testament to its success. We’re on the cusp of the age when genomics becomes banal, and cutting-edge science becomes everyday utility.

Granted, the short-term impact of mass personal genomics is still going to be exceedingly technical. Scientific genealogy nuts will purchase the latest software, and argue over the esoteric aspects of “coverage” (the redundancy of the sequence data, which correlates with accuracy) and the necessity of supplementing the genome with the epigenome. Physicians and other health professionals will add genomic information to the arsenal of their diagnostic toolkit, and an alphabet soup of new genome-related terms will wash over you as you visit a doctor’s office. Your genome is not you, but it certainly informs who you are. Your individual genome will become ever more important to your health care.
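
Coverage itself is simple arithmetic: total bases sequenced divided by genome length. A minimal illustration (the numbers are ballpark figures, not a vendor’s spec):

    def coverage(total_bases_sequenced, genome_length):
        """Average sequencing depth: how many times each base is read."""
        return total_bases_sequenced / genome_length

    # A human genome is roughly 3.2 billion base pairs; sequencing
    # 96 billion bases gives about 30x coverage, a common target
    # for making accurate calls.
    print(coverage(96e9, 3.2e9))  # -> 30.0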

Read More

CATEGORIZED UNDER: Technology, Top Posts

I, Robopsychologist, Part 2: Where Human Brains Far Surpass Computers

By Andrea Kuszewski | February 9, 2012 10:08 am

Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski

Before you read this post, please see “I, Robopsychologist, Part 1: Why Robots Need Psychologists.”

A current trend in AI research involves attempts to replicate a human learning system at the neuronal level—beginning with a single functioning synapse, then an entire neuron, the ultimate goal being a complete replication of the human brain. This is basically the traditional reductionist perspective: break the problem down into small pieces and analyze them, and then build a model of the whole as a combination of many small pieces. There are neuroscientists working on these AI problems—replicating and studying one neuron under one condition—and that is useful for some things. But to replicate a single neuron and its function at one snapshot in time is not helping us understand or replicate human learning on a broad scale for use in the natural environment.
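
For a sense of what the “one neuron under one condition” starting point looks like, here is a leaky integrate-and-fire neuron, one of the standard textbook models; this is a generic sketch, not any particular project’s code:

    def leaky_integrate_and_fire(inputs, tau=20.0, threshold=1.0, dt=1.0):
        """Minimal single-neuron model: the membrane voltage leaks toward
        rest, integrates input current, and spikes at threshold."""
        v, spike_times = 0.0, []
        for t, current in enumerate(inputs):
            v += dt * (-v / tau + current)  # leak term plus input drive
            if v >= threshold:
                spike_times.append(t)       # fire...
                v = 0.0                     # ...and reset
        return spike_times

    # Constant drive produces regular spiking -- one neuron, one condition.
    print(leaky_integrate_and_fire([0.08] * 100))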

We are quite some ways off from reaching the goal of building something structurally similar to the human brain, and even further from having one that actually thinks like one. Which leads me to the obvious question: What’s the purpose of pouring all that effort into replicating a human-like brain in a machine, if it doesn’t ultimately function like a real brain?

If we’re trying to create AI that mimics humans, both in behavior and learning, then, when teaching such systems, we need to consider how humans actually learn—specifically, how they learn best. Therefore, it would make sense that you’d want people on your team who are experts in human behavior and learning. So in this way, the field of psychology is pretty important to the successful development of strong AI, or AGI (artificial general intelligence): intelligence systems that think and act the way humans do. (I will be using the term AI, but I am generally referring to strong AI.)

Basing an AI system on the function of a single neuron is like designing an entire highway system based on the function of a car engine, rather than the behavior of a population of cars and their drivers in the context of a city. Psychologists are experts at the context. They study how the brain works in practice—in multiple environments, over variable conditions, and how it develops and changes over a lifespan.

The brain is actually not like a computer; it doesn’t always follow the rules. Sometimes not following the rules is the best course of action, given a specific context. The brain can act in unpredictable, yet ultimately serendipitous ways. Sometimes the brain develops “mental shortcuts,” or automated patterns of behavior, or makes intuitive leaps of reason. Human brain processes often involve error, which also happens to be a very necessary element of creativity, innovation, and human learning in general. Take away the errors, remove serendipitous learning, discount intuition, and you remove any chance of true creative cognition. In essence, when it gets too rule-driven and perfect, it ceases to function like a real human brain.
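
Machine learning has a version of this point: a learner that never deviates from its current best rule can get stuck, while one that occasionally “errs” at random keeps discovering better options. A toy epsilon-greedy sketch, with made-up payoffs:

    import random

    payoffs = {"familiar": 1.0, "novel": 3.0}    # hypothetical rewards
    estimates = {"familiar": 1.0, "novel": 0.0}  # learner starts out biased

    def choose(epsilon):
        """Mostly follow the current rule; sometimes err on purpose."""
        if random.random() < epsilon:
            return random.choice(list(payoffs))   # serendipitous "mistake"
        return max(estimates, key=estimates.get)  # strict rule-following

    for _ in range(500):
        action = choose(epsilon=0.1)
        reward = payoffs[action]
        # Nudge the estimate toward what was actually experienced.
        estimates[action] += 0.1 * (reward - estimates[action])

    # With epsilon=0 the "novel" option would never be discovered.
    print(estimates)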

To get a computer that thinks like a person, we have to consider some of the key strengths of human thinking and use psychology to figure out how to foster similar thinking in computers.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts

I, Robopsychologist, Part 1: Why Robots Need Psychologists

By Andrea Kuszewski | February 7, 2012 1:38 pm

Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski.

“My brain is not like a computer.”

The day those words were spoken to me marked a significant milestone for both me and the 6-year-old who uttered them. The words themselves may not seem that profound (and some may actually disagree), but that simple sentence represented months of therapy, hours upon hours of teaching, all for the hope that someday, a phrase like that would be spoken at precisely the right time. When he said that to me, he was showing me that the light had been turned on, the fire ignited. And he was letting me know that he realized this fact himself. Why was this a big deal?

I began my career as a behavior therapist, treating children on the autism spectrum. My specialty was Asperger syndrome, or high-functioning autism. This 6-year-old boy, whom I’ll call David, was a client of mine whom I’d been treating for about a year at that time. His mom had read a book that had recently come out, The Curious Incident of the Dog in the Night-Time, and told me how much David resembled the main character in the book (who had autism), in regard to his thinking and processing style. The main character said, “My brain is like a computer.”

David heard his mom telling me this, and that quickly became one of his favorite memes. He would say things like “I need input” or “Answer not in the database” or simply “You have reached an error,” when he didn’t know the answer to a question. He truly did think like a computer at that point in time—he memorized questions, formulas, and the subsequent list of acceptable responses. He had developed some extensive social algorithms for human interactions, and when they failed, he went into a complete emotional meltdown.

My job was to change this. To make him less like a computer, to break him out of that rigid mindset. He operated purely on an input-output framework, and if a situation presented itself that wasn’t in the database of his brain, it was rejected, returning a 404 error.
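
In programming terms, the mindset the therapy was working to loosen was a pure lookup table. A hypothetical sketch of that rigidity (the questions and canned answers are invented):

    responses = {
        "what is 2 + 2?": "4",
        "what color is the sky?": "blue",
    }

    def rigid_reply(question):
        """Pure input-output: anything outside the database is an error."""
        return responses.get(question.lower(), "You have reached an error")

    print(rigid_reply("What is 2 + 2?"))         # -> 4
    print(rigid_reply("How do you feel today?")) # -> You have reached an error
    # A flexible mind, unlike this table, improvises when the lookup fails.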

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts

Think It To Do It: The Problem With Touchscreens—and Hands

By Kyle Munkittrick | November 29, 2011 5:17 pm

Tablets and touchscreen smartphones make it feel like we’re living in the future. But they’re the technology of the present. So what should we be anticipating for the future of interfaces?

Bret Victor has a solid grip on interface design. And he has a beef with touchscreens as the archetype of the Interface of the Future. He argues that poking at and sliding around pictures under glass is not really the greatest way to do things. Why? Because that just uses a finger! Victor is a fan of hands. They can grab, twist, flick, feel, manipulate, and hold things. Hands get two thumbs up from Victor.

As a result, Victor argues that any interface that neglects hands neglects human beings. Tools of the future need to be hand-friendly and take advantage of the wonderful functions hands can perform. His entire article, “A Brief Rant on the Future of Interfaces,” is a glorious read and deserves your attention. One of the best parts is his simple but profound explanation of what a tool does: “A tool addresses human needs by amplifying human capabilities.”

There is, as I see it, one tiny problem with Victor’s vision: hands are tools themselves. They translate brain signals into physical action. Hands are, as Victor shows, super good at that translation. His argument is based on the idea that we should take as much advantage as possible of the amazing tools that hands already are. I disagree.

Read More

CATEGORIZED UNDER: Technology, Top Posts

Later Terminator: We’re Nowhere Near Artificial Brains

By Mark Changizi | November 16, 2011 1:43 pm

I can feel it in the air, so thick I can taste it. Can you? It’s the we’re-going-to-build-an-artificial-brain-at-any-moment feeling. It’s exuded into the atmosphere from news media plumes (“IBM Aims to Build Artificial Human Brain Within 10 Years”) and science-fiction movie fountains…and also from science research itself, including projects like Blue Brain and IBM’s SyNAPSE. For example, here’s a recent press release about the latter:

Today, IBM (NYSE: IBM) researchers unveiled a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition.

Now, I’m as romantic as the next scientist (as evidence, see my earlier post on science monk Carl Sagan), but even I carry around a jug of cold water for cases like this. Here are four flavors of chilled water to help clear the palate.

The Worm in the Pass

In the story about the Spartans at the Battle of Thermopylae, 300 soldiers prevent a million-man army from making its way through a narrow mountain pass. In neuroscience it is the roughly 300 neurons of the roundworm C. elegans that stand in the way of our understanding the huge collections of neurons found in our or any mammal’s brain.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts