Using deep learning to detect sarcasm

Would you trust an AI to recognize irony in social media posts?

From comedy shows to casual conversations, sarcasm is a commonplace part of our lives, but it remains an elusive form of communication for AI.

New research — funded in part by DARPA — indicates that might be changing.

The challenge: For better or worse, intelligence agencies around the world scan social media for possible threats to national security. Natural language understanding has come a long way in the past decade — seen most clearly in the capabilities of OpenAI’s GPT-3 — but detecting sarcasm remains notoriously difficult.

Sarcasm might seem straightforward to us, but humans are far more skilled at inferring tone and meaning than AI is. The nuance inherent in sarcasm, such as saying something ironically that you don’t literally mean, can produce false positives when a statement is taken out of context or stripped of its tone of voice, even for humans.

For computer models designed to detect genuine threats, it can be impossible to discern intent.

The opportunity: If algorithms had a better grasp of sarcasm, including the ability to learn about new trends and slang, they could differentiate jokes from real threats. 

The development: In a new paper, University of Central Florida researchers outlined a method for training neural networks to understand human sarcasm. It involves detecting specific word combinations that function as indicators of sarcasm in a social media post — even without further context.

These neural networks were trained on a variety of datasets, from social media platforms like Twitter and Reddit to headlines from The Onion. This allowed researchers Ramya Akula and Ivan Garibay to identify relationships between words and punctuation that could indicate a sarcastic tone.

“For instance, words such as ‘just’, ‘again’, ‘totally’, ‘!’ … are the words in the sentence that hint at sarcasm and, as expected, these receive higher attention than others,” they write.

What Akula and Garibay propose is that this “self-attention architecture” can be an effective method for training neural networks to weight some words more heavily than others, based on the words that appear around them.

“Attention is a mechanism to discover patterns in the input that are crucial for solving the given task,” Garibay told Defense One.

In AI, attention refers to the way that models can be programmed to weight certain aspects of the data relative to others. The human brain automatically makes these adjustments all the time — if you’re hungry, you’ll notice food; if you’re trying to find your phone, you’ll pay extra attention to shapes that look like your phone.

In a “self-attention architecture,” the researchers essentially tell the model to apply this technique in a more granular way, instructing it to weight certain words within a sequence and then to identify patterns that occur among those words.

“In deep learning, self-attention is an attention mechanism for sequences, which helps learn the task-specific relationship between different elements of a given sequence,” Garibay said.
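To make that concrete, here is a minimal sketch of scaled dot-product self-attention, the basic operation this family of architectures builds on. Everything in it (the five-token post, the four-dimensional random embeddings, the shared query/key/value matrices) is a toy assumption for illustration, not the researchers’ actual multi-head model.

```python
# A minimal sketch of scaled dot-product self-attention over one short
# post. The embeddings are random toys, not learned vectors, so the
# weights below only illustrate the mechanics of the computation.
import numpy as np

tokens = ["oh", "great", "another", "monday", "!"]
rng = np.random.default_rng(0)
X = rng.normal(size=(len(tokens), 4))   # toy 4-dim token embeddings

# A real model learns separate query/key/value projections of X;
# reusing X for all three keeps the sketch short.
Q, K, V = X, X, X
scores = Q @ K.T / np.sqrt(X.shape[1])              # pairwise relevance
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
contextualized = weights @ V                        # weighted mix of values

# Column sums: how much attention each token receives overall. In a
# trained sarcasm model, cue tokens like "totally" or "!" would score high.
for token, w in zip(tokens, weights.sum(axis=0)):
    print(f"{token:>8}: {w:.2f}")
```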

Oh, a sarcasm detector. Oh, that’s a real useful invention: Like the ill-fated sarcasm detector in The Simpsons, most attempts to use AI to identify sarcasm haven’t had much success.

Some attempts focus on hand-selecting keywords, which limits how widely such a system can be used in practice (for example, with new slang terms that researchers might not be aware of). Other approaches using neural networks have been more effective, but they suffer from the black box problem: researchers have no way to understand how a model came to a given conclusion.
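To see why the first approach is brittle, consider a toy keyword detector; the cue list below is entirely hypothetical, and any slang outside it goes undetected:

```python
# A toy keyword-based detector. The cue list is entirely hypothetical;
# the point is that anything outside it slips straight through.
SARCASM_CUES = {"totally", "sure", "obviously", "great"}

def keyword_detector(post: str) -> bool:
    # Flag the post if any word, minus trailing punctuation, is a known cue.
    return any(w.strip("!.,").lower() in SARCASM_CUES for w in post.split())

print(keyword_detector("Oh sure, that'll totally work"))  # True
print(keyword_detector("This plan is bussin, no cap"))    # False: unseen slang
```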

Akula and Garibay claim that their self-attention architecture sets itself apart by combining the capabilities of neural networks with explainable AI. Being able to understand how and why an AI came to a particular conclusion is particularly important with a topic like sarcasm, which is constantly evolving and can vary widely from person to person.
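That interpretability comes from the attention weights themselves. As a rough sketch, PyTorch’s off-the-shelf multi-head attention layer hands those weights back alongside its output; the layer sizes and random input below are placeholder assumptions, not the configuration from the paper.

```python
# A hedged sketch of reading attention weights out of PyTorch's built-in
# multi-head attention layer. Layer sizes and the random input are
# placeholder assumptions, not the paper's configuration.
import torch

embed_dim, num_heads, seq_len = 16, 4, 5
attn = torch.nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

x = torch.randn(1, seq_len, embed_dim)   # one post as 5 token embeddings
out, attn_weights = attn(x, x, x)        # self-attention: query = key = value

# attn_weights has shape (batch, seq_len, seq_len): row i shows how much
# token i attends to each other token. Inspecting these weights is what
# lets researchers point to the words driving a prediction.
print(attn_weights.shape)                # torch.Size([1, 5, 5])
```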

Intent matters: Still, using AI to classify people’s habits, tendencies, or intent comes with challenges and demands deep ethical consideration. Just as biased training data has exacerbated racial bias in law enforcement and medicine, ambiguity and limited data could likewise trip up a sarcasm detector.

What one culture or group identifies as sarcasm can differ widely from another, and it is incumbent upon researchers and funders to ensure that rigorous research and inclusion have been incorporated throughout the development process. 

This issue is particularly important given the uses that sarcasm AI is potentially being developed for. Miscategorizing a joke as a genuine threat could put people’s lives and liberty in jeopardy.

