“That’s right, I just love that I forgot my sunglasses and the sun is now shining right in my eyes.”
Most people would recognize this statement as sarcasm, but social media analysis tools don’t. Tone is notoriously hard to convey online, and a new University of Guelph model has the potential to change that.
Most organizations use some form of social media analysis or mining to better understand their audience and track engagement with their content. In social media mining, artificial intelligence or machine learning models comb through social media posts to detect patterns that can help make an organization’s content more appealing.
With sarcasm, however, no often means yes, and yes often means no – making this mining process more difficult.
Social media is a prominent marketing tool, with customers engaging with organizations directly and indirectly through comments, messages, likes and more. So how can social media mining programs be improved to better understand the nuances of human language, including sarcasm? More importantly, how can they tell what is sarcasm and what isn’t?
That’s what Dr. Fattane Zarrinkalam, a professor in the School of Engineering at the College of Engineering and Physical Sciences, wanted to learn. With researchers at the Ferdowsi University of Mashhad in Iran, she set out to develop a new model for teaching social media mining programs about the art of sarcasm. Their work was recently published in Knowledge-Based Systems.
Sarcasm 101: False positives
When specific words or phrases like “love,” “best,” or “I’m so glad” are used sarcastically on social media, the natural language processors built into a social media mining program still register these statements as positive.
Zarrinkalam pointed to an online review of a cellphone that read “it’s such a great phone, it doesn’t last longer than 20 minutes.”
“The mining program is misled because the phrase ‘such a great phone’ marks the review as positive. But it’s actually a negative review. As a result, you lose a lot of key information.”
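The failure is easy to reproduce. Below is a minimal, hypothetical sketch of a lexicon-based sentiment scorer of the kind many mining tools build on; the word lists and scoring rule here are illustrative stand-ins, not the actual tools examined in the study.

# Toy lexicon-based sentiment scorer, for illustration only.
# It counts polarity words with no notion of context or sarcasm.
POSITIVE = {"love", "best", "great", "glad", "amazing"}
NEGATIVE = {"horrible", "bad", "worst", "broken", "hate"}

def naive_sentiment(text):
    # Normalize each word: trim surrounding punctuation, lowercase.
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

review = "It's such a great phone, it doesn't last longer than 20 minutes."
print(naive_sentiment(review))  # prints "positive" -- a false positive

Because the scorer only tallies polarity words, the sarcastic “great” outweighs the literal complaint about battery life, and the negative review is logged as positive.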
Results from social media mining tools often feed an organization’s recommender system – a program that suggests products to users – which essentially tells the organization what it should market.
The cellphone, for example, may still be advertised on social media as “the latest phone to have,” even though its reviews were misread as positive, skewing and complicating an organization’s social media analytics and, eventually, its overall marketing plans.
The model Zarrinkalam and her colleagues created essentially teaches sarcasm to the natural language processing programs. If the program comes across a sarcastic post, Zarrinkalam’s model will generate a non-sarcastic post with the same meaning as the original. The earlier example now becomes: “It’s a horrible phone; it doesn’t last longer than 20 minutes.”
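The published model learns this behaviour from data; purely as a rough illustration of the detect-then-interpret idea, here is a hypothetical rule-based sketch, with hand-picked cue words and antonym swaps standing in for what the real system learns.

# Illustrative detect-then-interpret pipeline. The rules below are
# stand-ins for exposition, not the study's actual learned model.
ANTONYMS = {"great": "horrible", "love": "hate", "best": "worst"}

def looks_sarcastic(text):
    # Stand-in detector: praise words co-occurring with a complaint cue.
    lowered = text.lower()
    praise = any(word in lowered for word in ANTONYMS)
    complaint = any(cue in lowered for cue in ("doesn't", "not ", "only"))
    return praise and complaint

def interpret(text):
    # Rewrite the sarcastic post as a literal one with the same meaning.
    out = text
    for word, antonym in ANTONYMS.items():
        out = out.replace(word, antonym)
    return out

post = "It's such a great phone, it doesn't last longer than 20 minutes."
if looks_sarcastic(post):
    print(interpret(post))
# prints: It's such a horrible phone, it doesn't last longer than 20 minutes.

In the real system both steps are learned rather than hand-coded, which is what lets it handle sarcasm it has never seen before.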
As a result, organizations may be able to avoid marketing and social media missteps and obtain more accurate sentiment analyses and engagement predictions, such as the expected number of likes or shares.
“This is a very challenging problem,” said Zarrinkalam. “Other studies have been able to label posts as sarcastic, but we’ve been able to both identify and interpret them. The novelty of our work is the detection and interpretation of these posts to improve an organization’s overall and long-term marketing strategies.”
Zarrinkalam and her fellow researchers also hope to train the model to recognize irony, satire and rhetorical questions.
Contact:
Dr. Fattane Zarrinkalam
fzarrink@uoguelph.ca