Your face says it all: Understanding human social communication
There are over 7 billion people on Earth, and every face – from the eyes and nose to the mouth and cheeks – says something unique about the person. The face conveys many different emotions, making it one of the most powerful tools for social interaction. But just how unique are these facial features when we express how we’re feeling? Researchers on the EU-funded FACESYNTAX project reveal that our emotional expressions are as diverse, rich and complex as our faces. The findings were published in the journal ‘Current Biology’.
The face as a social communication tool
Using a cutting-edge computer graphics platform, the research team showed 100 participants various computer-generated faces and asked them to record the emotions that these faces expressed. Participants had to choose from six traditional emotion categories: happy, surprise, fear, disgust, anger and sad. They also rated each facial expression as negative or positive and judged how activated the face looked (calm and content versus delighted or excited). The researchers then determined which facial movements (e.g. raised brow, wrinkled nose, gaping mouth) convey both broad information (e.g. positivity, negativity) and a particular emotion category (e.g. happy, sad), and which convey only one type of information. For example, a raised brow simultaneously conveys two different types of information: broad information (negativity) and a specific emotion category (sad).

Facial expressions are composed of combinations of individual facial movements called action units (AUs), ranging from closing the eyes to dropping the jaw. AUs that convey both kinds of information at once are considered multiplexed signals. Of the 42 AUs examined, 26 were classified as multiplexed.
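The idea of a multiplexed signal can be illustrated with a minimal, purely hypothetical sketch in Python: each AU is tagged with the broad information and the specific emotion categories it conveys, and an AU counts as multiplexed only when it carries both. The AU names and mappings below are placeholders chosen for illustration (apart from the raised-brow example taken from the article), not the study’s actual data or method.

```python
# Illustrative sketch only: flags facial action units (AUs) as "multiplexed"
# when they convey both broad affective information (e.g. negativity) and a
# specific emotion category (e.g. sad). The mappings are placeholders, except
# the raised-brow example mentioned in the article.
from dataclasses import dataclass, field

@dataclass
class ActionUnit:
    name: str
    broad_info: set = field(default_factory=set)   # e.g. {"negative"}
    categories: set = field(default_factory=set)   # e.g. {"sad"}

def is_multiplexed(au: ActionUnit) -> bool:
    """An AU is multiplexed if it signals both broad and category-specific information."""
    return bool(au.broad_info) and bool(au.categories)

# Hypothetical examples for illustration.
aus = [
    ActionUnit("raised brow", {"negative"}, {"sad"}),   # example from the article
    ActionUnit("wrinkled nose", {"negative"}, set()),   # placeholder: broad info only
    ActionUnit("gaping mouth", set(), {"surprise"}),    # placeholder: category only
]

print([au.name for au in aus if is_multiplexed(au)])    # ['raised brow']
```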
Facial expressions communicate complex, dynamic emotions
“This research addresses the fundamental question of how facial expressions achieve the complex signalling task of communicating emotion messages,” explained corresponding author Prof. Rachael Jack of the Institute of Neuroscience & Psychology at the University of Glasgow, the project coordinator, in a news release. “Using computer generated faces combined with subjective human perceptual responses and novel analytical tools, we show that facial expressions can communicate complex combinations of emotion information via multiplexed facial signals.”

The results have implications for globalisation and cultural integration, where social communication through virtual means plays a key role in society. The findings could also prove useful for future technology. “These results advance our fundamental understanding of the system of human communication with direct implications for the design of socially interactive AI, including social robots and digital avatars to enhance their social signalling capabilities,” concludes Prof. Jack.

The overall aim of FACESYNTAX (Computing the Face Syntax of Social Communication) is to deliver “the first formal generative model of human face signalling within and across cultures,” as noted on CORDIS. The model will lay the groundwork for a theoretical framework that brings together existing social face perception theories. The project ends in August 2024.

For more information, please see: FACESYNTAX project web page
Keywords
FACESYNTAX, face, facial expression, facial movement, facial signal, emotion, social communication