I Gotta Feeling

So tell me, how do you feel after reading this blog?

25.10.2020  |  by Peter van der Putten  |  Reverb Channel

We’re going to discuss how understanding the emotions a story can evoke can be used and abused, and whether it is even possible to predict those emotions using artificial intelligence. So you may be intrigued, curious and a bit amazed. Or perhaps you are spooked: in the hunt for clicks, news algorithms are already trying to reverse engineer our behavior and rationality. Are they now also after our personal emotions?

Image: Mary Ponomareva

Science fiction? Maybe not. In 2018, the New York Times data science team published a blog post on ‘Project Feels’, a project to understand and predict a reader’s emotional response to any piece of content, and how it relates to engagement. This is a less ‘far out’ use case than you may think: sentiment analysis is a widely studied area in natural language processing.

A group of 1,200 volunteer readers each received a number of articles, and for each one they wrote down the emotion they felt after reading it, ranging from boredom and hate to interest, love, happiness, fear, hope and sadness.

The resulting data set was then used to train machine learning models that predict the likelihood of each emotion being evoked by a given article. The researchers faced a number of challenges. For example, how to separate the volunteers who took the job of labeling the documents with their emotions seriously from those who just did a quick and dirty job. Also, some articles, such as those on political issues, by definition generate more controversy and hence a different emotional response for different readers.
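To make that concrete, here is a minimal sketch of what such a model could look like, assuming a simple TF-IDF plus logistic regression setup. The Times did not publish their exact pipeline, and the variable names and example data below are hypothetical.

```python
# A minimal sketch of an emotion classifier, NOT the actual Project Feels
# pipeline: TF-IDF features feeding a multi-class logistic regression.
# `texts` and `labels` are hypothetical stand-ins for the labelled articles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["Markets rallied after the announcement ...",
         "Floods displaced thousands of residents ..."]
labels = ["hope", "sadness"]  # one volunteer-assigned emotion per article

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), max_features=20000),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# The fitted model outputs a likelihood per emotion for any new article.
probs = model.predict_proba(["A new vaccine shows promising results."])
for emotion, p in zip(model.classes_, probs[0]):
    print(f"{emotion}: {p:.2f}")
```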

The researchers were very open and transparent about the commercial motivations behind the project. For example, they tested whether banner ads placed next to content performed better when predicted emotions were available, and they created a ‘data product’ that allows advertisers to understand what type of emotional content certain anonymized customer segments tend to read.

But it does make you wonder whether these kinds of methods could not also be applied to uses more directly geared toward decreasing polarization and broadening the perspective on certain news topics. Or maybe they could be used as some form of antidote against too much doomscrolling in these times of Covid and Black Lives Matter.

On the flip side, adversaries could use this kind of emotional AI for purposes far more malicious than pushing marketing adverts. And both news and emotional expression are clearly subjective areas, so it is important to understand the limitations and biases of such an approach.

So we decided to set up a small-scale experiment ourselves, based on a sample of 500 texts from our Reverb Channel corpus of millions of articles gathered from public Dutch news sources. The articles were labelled by five volunteers from different backgrounds, using 12 emotions such as anger, compassion, fear, interest, pleasure and sadness.

Even though we chose not to have articles labelled by multiple raters, it was clear from the distributions of emotions that there were both similarities and major differences across labellers. We checked the performance of the AI on articles unseen by the models during training, and the best models could predict the ‘correct’ emotion for just 4 out of 10 articles. This accuracy varied widely across the different emotions.
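For reference, this kind of held-out evaluation can be sketched as below. Again, `texts` and `labels` are hypothetical stand-ins for the 500 labelled articles, and this is an illustration rather than our exact setup.

```python
# Sketch of a held-out evaluation with a per-emotion breakdown.
# `texts` and `labels` stand in for the 500 articles and their emotions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
preds = model.predict(X_test)

print(f"overall accuracy: {accuracy_score(y_test, preds):.2f}")
# The per-emotion precision and recall are where the wide variation shows up.
print(classification_report(y_test, preds, zero_division=0))
```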

We also delved deeper into what keywords the AI was focusing on to recognize certain emotional categories. Some words made sense, such as ‘environment’ for ‘compassion’, ‘war’ for ‘fear’, ‘emergency relief’ for ‘joy’ and ‘criticism’ for ‘sadness’, but this relationship was a lot less clear for many other words. This suggests these models may be picking up on accidental correlations that existed in this sample.
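One simple way to do this kind of inspection, for a linear model like the one sketched above, is to look at the largest per-emotion coefficients over the TF-IDF vocabulary. This assumes `model` is the fitted pipeline from the previous sketch; it is one possible probe, not necessarily the method we used.

```python
# Sketch of keyword inspection for the linear pipeline above: the largest
# coefficients per emotion point at the words the model leans on most.
import numpy as np

vectorizer = model.named_steps["tfidfvectorizer"]
classifier = model.named_steps["logisticregression"]
feature_names = vectorizer.get_feature_names_out()

for i, emotion in enumerate(classifier.classes_):
    top = np.argsort(classifier.coef_[i])[-5:][::-1]  # five strongest cues
    print(emotion, "->", [feature_names[j] for j in top])
```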

Likewise, some of the keywords were associated with racial, religious or national backgrounds, even though we ensured our labellers came from a variety of cultural backgrounds. Were these accidental artefacts resulting from limited data, or a reflection of real biases in either the news or the labellers? Even though this was a small-scale experiment, it showed that predicting emotional response is a difficult and thorny task, and it gave us a deeper hands-on understanding of the issues.

But apart from the question of whether we can predict emotion, there is the question, assuming it would work, of how this could be used for both good and bad. For example, a good use could be to investigate in what emotional context controversial topics such as politics and ethical issues are written about across different news sources. Or, more practically, to provide news search algorithms that more transparently show what the emotional mix is in search results around a certain topic, and that allow readers to get content recommendations from an explicitly diverse set of emotional angles and perspectives.
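As a toy illustration of that ‘emotional mix’ idea, and assuming a fitted classifier like the one sketched earlier, one could aggregate predicted emotions over a result set. Here `results_for` is a hypothetical search function, not an existing API.

```python
# Toy sketch of an 'emotional mix' readout for search results, assuming a
# fitted classifier `model` as sketched earlier. `results_for` is a
# hypothetical function returning article texts for a query.
from collections import Counter

def emotional_mix(model, articles):
    """Share of each predicted emotion across a list of article texts."""
    counts = Counter(model.predict(articles))
    total = sum(counts.values())
    return {emotion: n / total for emotion, n in counts.items()}

# e.g. emotional_mix(model, results_for("climate policy"))
# might return something like {'fear': 0.4, 'hope': 0.3, 'interest': 0.3}
```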

If we want to prevent the nastier applications of this technology, one approach is to research these approaches ourselves, much like white hat hackers do in cryptology, instead of leaving them to more malicious agents in psy-ops, fake news and propaganda farms. For example, how could they produce content that instills fear, polarization and division, by generating emotionally laden content or by targeting specific segments of vulnerable readers and influencers?

So let us know: how do you feel after reading this blog post?

The prime researcher on the emotion classification study was Kirandeep Kaur, with supervision from Peter van der Putten (both LIACS, Leiden University) and support from Jasper Schelling at ACED.

About the Author

Peter van der Putten

Collaborators

Kiran Kaur, Researcher
Maarten van Hees, Data Engineer
Mary Ponomareva, Graphic Design
Peter van der Putten, Supervisor