[Image: Detail of a high rise in Montreal. Photo by Phil Deforges, https://unsplash.com/photos/ow1mML1sOi0]

Misreading our Emotions: The Troubles with Emotion Recognition Technology

The business of emotion recognition is a lucrative one—but is it based on an unsound premise?

Unravelling the nature of emotional expression

Is the expression of emotion universal across humans? In 1967, American psychologist Paul Ekman sought to answer this question. To do so, Ekman brought a set of flashcards to the isolated peoples of Papua New Guinea to test whether they recognized a fixed set of core expressions, including anger, sadness, fear, and joy. Ekman asked a select group of Fore people to make up a story about what was happening to the person in each flashcard. Due to the language barrier, he likened the process to “pulling teeth,” but Ekman did get his stories, and they seemed to correspond to his understanding of emotional expression. When shown a photo of a person expressing sadness, for example, the Fore described him as having just learned that his son had died. In another case, a story about a dangerous wild pig was attributed to a person expressing fear.

Ekman’s studies were seen as ground-breaking, and he remains one of the most cited psychologists of the twentieth century. In 2019, however, neuroscientist Lisa Feldman Barrett conducted a systematic review of the scientific literature on the subject and concluded that there is no reliable evidence that an individual’s emotional state can be accurately inferred from facial expressions. It is on this shaky ground that the nascent multi-billion-dollar emotion recognition technology (ERT) industry has emerged. The technology is developing rapidly and is increasingly becoming part of the core infrastructure of many platforms, including many built by Big Tech. Despite its growing presence in our lives, it is rarely at the forefront of public discourse about artificial intelligence (AI). So, what exactly is ERT?


The business of recognizing emotions

Just as it sounds, ERT is a type of AI that attempts to identify the emotional state of humans. Using biometric data (data relating to body measurements and calculations based on human characteristics), ERT assigns emotional states to people based on facial expressions, bodily cues, and eye movement. The idea of an automated form of emotion recognition is as intriguing as it is lucrative. Advocates point to its potential uses across many fields: in healthcare, to prioritize care; in business, to develop marketing techniques; in law enforcement, to detect and reduce crime; in employment, to monitor employees; and in education, to cater to student needs. Critics, however, argue that its potential harms are likely to far outweigh its benefits. While the possible issues raised are numerous, most fall within three broad categories of concern: privacy, accuracy, and control.
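To make the idea concrete, the short sketch below shows what an ERT query can look like in practice, using Amazon Rekognition’s face-analysis endpoint (one of the commercial services discussed further down). It is a minimal sketch, not a definitive implementation: the image file name is a placeholder, AWS credentials are assumed to be configured locally, and the output is simply a ranked list of emotion labels with confidence scores, not a measurement of what the person actually feels.

    import boto3  # AWS SDK for Python; assumes credentials and a region are configured

    # Hypothetical example image; any local face photo would do.
    client = boto3.client("rekognition")

    with open("face_photo.jpg", "rb") as f:
        response = client.detect_faces(
            Image={"Bytes": f.read()},
            Attributes=["ALL"],  # "ALL" asks the service to include predicted emotions
        )

    # Each detected face comes back with a list of emotion labels
    # (e.g. HAPPY, SAD, ANGRY, FEAR) and confidence scores.
    for face in response["FaceDetails"]:
        for emotion in face["Emotions"]:
            print(emotion["Type"], round(emotion["Confidence"], 1))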


Emotion recognition technology: how bad could it be?

In 2021, UN High Commissioner for Human Rights Michelle Bachelet warned of the looming threat ERT poses to privacy and associated rights. As with any AI system, ERT can facilitate and exacerbate privacy intrusions by incentivizing the mass collection and use of personal data. ERT is especially controversial because it relies on biometric information, which is treated as sensitive personal information under a number of existing privacy regimes, including the EU’s GDPR, Quebec’s Bill 64, and Canada’s PIPEDA. The purpose of ERT is uniquely intrusive in that the technology works by making inferences about an individual’s internal state. Activists therefore argue that ERT represents a profound intrusion on privacy, one that undermines freedom of thought. In some contexts, ERT algorithms could be used to infer specific categories of sensitive personal data, including political opinions or health information. That information could then be used to deny an individual essential services or support, such as healthcare or employment. Consider, for example, an insurance company using ERT to detect signs of certain neurological disorders and then denying coverage for those pre-existing conditions. These inherent risks must be weighed against the technology’s potential benefits.


The inaccuracy of ERT

Due to the subjective nature of emotional expression, ERT is particularly prone to producing inaccurate results. The expression of emotions may vary by culture, context, and individual. According to a 2011 study on cultural diversity in expression, East Asians and Western Caucasians differ in which facial features they associate with an angry or happy face. In a 2001 study on reactions to violent films, Japanese subjects were far more likely than their American counterparts to alter their reactions when an authority figure was in the room. It is therefore difficult to equate a given expression with a specific emotional state universally. ERT is also susceptible to bias. Data scientist Cathy O’Neil has pointed out that algorithms are built on past practices and are often used to automate the status quo, including its biases. Consider, for example, an algorithm used in a male-dominated field that had been trained primarily on male features. If ERT were used to recruit new employees, it might struggle to read women’s emotional responses. Such bias is not mere speculation. A recent audit of ERT within three services (Amazon Rekognition, Face++ and Microsoft) found stark racial disparities; each was more likely to assign negative emotions to Black subjects. The subjectivity of expression and the bias of algorithms call ERT’s accuracy into question: if it cannot promise correct results, is the technology still valuable?
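As a purely illustrative sketch of the kind of disparity check such an audit performs, the snippet below compares how often each demographic group is assigned a negative emotion label. All data here are hypothetical placeholders, not results from any real service or study.

    from collections import defaultdict

    # Hypothetical predictions: (group label, top emotion assigned by an ERT service).
    # Real audits, like the one described above, use large labelled photo sets.
    predictions = [
        ("group_a", "ANGRY"), ("group_a", "HAPPY"), ("group_a", "SAD"),
        ("group_b", "HAPPY"), ("group_b", "CALM"), ("group_b", "HAPPY"),
    ]

    NEGATIVE = {"ANGRY", "SAD", "FEAR", "DISGUSTED"}

    counts = defaultdict(lambda: [0, 0])  # group -> [negative count, total count]
    for group, emotion in predictions:
        counts[group][1] += 1
        if emotion in NEGATIVE:
            counts[group][0] += 1

    # A large gap between groups in this rate is the disparity such an audit flags.
    for group, (neg, total) in counts.items():
        print(group, f"negative-emotion rate: {neg / total:.0%}")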


ERT as a form of public surveillance

For public safety and law enforcement, ERT offers governments an intriguing tool for threat prevention. The mass public surveillance that ERT would require to detect threats effectively, however, risks reorienting our society towards totalitarian authority and strict public control. This risk is heightened in times of war, uncertainty, or political unrest, when the public is more willing to cede freedom in exchange for security. Citing public safety concerns, governments could implement ERT to recognize and prevent violence and crime. Government adoption of the technology is still in its early stages and remains controversial. iBorderCtrl, a smart border control project that uses ERT to produce a “risk score” for travellers seeking entry, has been tested in Hungary, Latvia, and Greece. It is not difficult to imagine a world where ERT is commonplace in airports and other areas of heightened security. Governments could also use the technology to suppress public behaviour that is normally protected in a democratic society, such as protest. In countries where government surveillance is the norm, many of the fears over ERT have already been realized. In 2020, a Dutch company sold ERT to public security bodies in China. The technology has reportedly been used by the government to tighten control over the already heavily monitored Uyghur people of Xinjiang. The risk that ERT poses to personal freedoms and democratic values remains a glaring issue, and one that calls for a government response.


The uncertain future of emotion recognition technology

Despite the myriad concerns raised by experts, current data and privacy legislation does not sufficiently address the risks posed by ERT. From its unsubstantiated scientific premise to its reliance on biometric data, ERT is uniquely dangerous. Some critics of the technology, including a coalition of civil society organizations and the European Disability Forum, have called for its complete prohibition. Short of a complete ban, the results of ERT should at least be treated with a healthy degree of skepticism until a scientific consensus emerges that facial expressions can reliably reveal emotion.

But even in a world where technology could accurately reveal our emotions, would such an outcome be desirable? As we have seen, the concerns surrounding ERT are not limited to those that could be fixed with ‘better technology’ or ‘more data’. Any technology that purports to detect our internal state is bound to infringe on our privacy to some degree. As with any fast-moving emerging technology, however, our grasp of its consequences is limited by our current understanding. It is therefore of paramount importance that experts, legislators, and the public engage in open, transparent, and candid dialogue about the future of this technology. A clear and comprehensive regulatory framework for ERT is necessary to safeguard democratic freedoms for future generations. Some advocates have suggested that governments require the same level of scientific rigour in emotion recognition as is demanded in the development of medicines, to guard against unproven applications. Such stringent oversight of the technology’s development, combined with strict regulation of the use of biometric data, could help mitigate ERT’s most harmful potential outcomes.
