This Computer Knows When “Literally” Isn’t Literal

By Carl Engelking | August 5, 2014 12:33 pm

There’s been a literal firestorm in recent years over the proper meaning of “literally,” including the uproar when its non-literal, opposite meaning was added to respected dictionaries.

Language is funny that way. We say things that are utterly false, but we seem to understand what the other person means, regardless. Intrigued by this quirk in communication, researchers built the first computational model that can predict humans’ interpretations of hyperbolic statements. (Literally.)

Modeling Exaggeration

Separating literal from figurative speech is actually quite complicated. A proper interpretation of a statement depends on shared knowledge between speaker and listener, the ease of communication, and knowledge of the speaker’s intentions. Humans make these judgments in an instant, but computational models aren’t nearly as adept at identifying non-literal speech.

Researchers from Stanford and MIT set out to create a program that could. They began by asking 340 individuals, recruited through Amazon’s Mechanical Turk, to judge whether each of a series of statements was literal or hyperbolic. The statements described the prices of an electric kettle, a watch and a laptop. For example, “The laptop cost ten thousand dollars.”

The results seemed intuitive: A statement claiming the kettle cost $10,000 was viewed as hyperbolic, but a price tag of $50 was interpreted as a literal statement. Interestingly, when the number was precise, like $51 or $1,001, participants were more likely to view those statements as literal. In other words, round numbers led to fuzzy interpretations.

Complicated Speech

The researchers then used this data to build a computational model that took into account a) how close the stated price was to a reasonable price, b) whether the number given was precise or round, and c) how big the number was (higher prices being deemed more likely to be exaggerations).
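
To make those three cues concrete, here is a minimal sketch in Python of a heuristic scorer built on them. The paper’s actual model is probabilistic and fit to the participants’ judgments, so the function names, weights and threshold below are illustrative assumptions, not the researchers’ code.

```python
def is_round(price: int) -> bool:
    # Round numbers (e.g. 50, 1,000, 10,000) read as "fuzzy"; precise ones
    # (e.g. 51, 1,001) read as literal.
    return price % 10 == 0


def hyperbole_score(stated_price: float, typical_price: float) -> float:
    # Combine the three cues described above:
    # a) distance from a reasonable price, b) roundness, c) sheer magnitude.
    distance = abs(stated_price - typical_price) / typical_price  # cue (a)
    roundness = 1.0 if is_round(int(stated_price)) else 0.0       # cue (b)
    magnitude = stated_price / typical_price                      # cue (c)
    # Illustrative weights; the real model fits its parameters to human judgments.
    return 0.5 * distance + 0.3 * roundness + 0.2 * magnitude


def interpret(stated_price: float, typical_price: float, threshold: float = 2.0) -> str:
    # Above the threshold we call the statement hyperbolic; below it, literal.
    return "hyperbolic" if hyperbole_score(stated_price, typical_price) > threshold else "literal"


if __name__ == "__main__":
    typical_kettle_price = 50.0
    for claim in (50, 51, 1000, 1001, 10000):
        print(f'"The kettle cost ${claim}" -> {interpret(claim, typical_kettle_price)}')
```

In this toy version, precise figures like $51 or $1,001 simply earn a lower score than their round neighbors, which matches the direction of the roundness effect the participants showed.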

When the researchers applied the model to the statements evaluated by human participants, they found that it closely matched human judgments of hyperbole. They published the results of their study this week in Proceedings of the National Academy of Sciences.

The team says next they’d like to tackle the linguistics behind other figures of speech, such as irony and metaphor. As to applications of their “literally” model, the researchers don’t specify. But we have one suggestion: Maybe give the grammarians a hard-earned day off and let the bots police the English language for a while. They’ll have a field day, literally.

CATEGORIZED UNDER: Technology, top posts
MORE ABOUT: computers, language
  • Kangamangus

    The computer learns to be wrong like people. Neat.

    • 23skidoo

      All computers have the capability of making mistakes since they were programmed by humans.
