Computing the Face Syntax of Social Communication

Periodic Reporting for period 3 - FACESYNTAX (Computing the Face Syntax of Social Communication)

Reporting period: 2021-09-01 to 2023-02-28

Problem being addressed. A powerful tool for human social interaction is the face: a complex dynamic system that can generate a vast number of facial expressions and elicit myriad social judgments. Although facial signals are used daily to communicate with many others, a formal understanding of the "language" of facial signals is currently lacking.

Importance. Facial signals influence others’ social perceptions and behaviors, including whom we trust, like, approach, or avoid, with substantial downstream consequences. It is therefore critical to understand which facial signals drive these perceptions, including bias and misinterpretation, particularly in the context of globalization, cultural integration, and a digital economy with socially interactive digital agents.

Overall objectives. FACESYNTAX aims to address this knowledge gap by delivering the first formal generative model of human face signalling within and across cultures, with a two-fold impact. First, our model will form the basis of a new theoretical framework that unites existing theories. Second, our model will be transferred to digital agents to improve their social signalling capabilities, including signalling accuracy, a broader repertoire of more nuanced social signals, and more culturally diverse facial signals.
Work completed in each work package (WP).

WP2. Tools to model facial signals: (1) a generative model of face shape/complexion (3D-captured faces of varying ethnicities and ages); (2) visemes (mouth shapes for speech/vocalizations); (3) blushing/pallor (involuntary cues). A sketch of one possible construction of (1) follows.
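The report does not detail the implementation of the shape/complexion model; as a minimal sketch, assuming a standard PCA-based morphable-model construction over registered 3D scans (all data, sizes, and names below are hypothetical stand-ins):

```python
# Minimal sketch of a PCA-based generative model of 3D face shape.
# The project's actual platform is not described in this report;
# this illustrates one standard construction with stand-in data.
import numpy as np
from sklearn.decomposition import PCA

n_faces, n_vertices = 200, 5000                  # hypothetical scan database
scans = np.random.rand(n_faces, n_vertices * 3)  # each row: flattened 3D vertices

pca = PCA(n_components=50).fit(scans)

def generate_face(rng=np.random.default_rng(0)):
    """Sample a new face: draw component scores scaled by each component's
    variance, then map back to vertex space."""
    scores = rng.standard_normal(pca.n_components_) * np.sqrt(pca.explained_variance_)
    return pca.inverse_transform(scores[None, :]).reshape(n_vertices, 3)

new_face = generate_face()   # (5000, 3) vertex coordinates of a novel face
```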

WP3. Modelled face features of social perception:
TRAITS (dominance/competence/trustworthiness/warmth). Two latent feature spaces (ability, intent) structure the face features that drive these trait judgments (see the sketch after this list). Directly impacts theory and digital agents. Published as a peer-reviewed conference abstract (Hensel et al., 2020. J. Vision) and in social robotics conference proceedings (Hensel et al., 2020. Proc. 20th ACM International Conference on Intelligent Virtual Agents). Manuscript to be submitted to a broad audience journal.
CLASS. Face features driving social class perceptions project onto social traits (competence/warmth/trustworthiness), corresponding to social stereotypes. Presented at the Society for Personality and Social Psychology annual convention, 2020/21. Manuscript to be submitted to a broad audience journal.
ATTRACTIVENESS. Face features of beauty vary across cultures (East Asian, Western), challenging strict universality; they are distinct from averageness and sexual dimorphism, and show both cross-cultural similarities and cultural/individual specificities. Highlights diversity in social perception; impacts digital agents. Published in a broad audience journal (Zhan et al., 2021. Current Biology) and in social robotics conference proceedings (Zhan et al., Proc. 20th ACM International Conference on Intelligent Virtual Agents).
EMOTION. Facial expressions represent emotions as category-dimensional multiplexed signals. This unites current theories and forms the basis of a new framework that can characterize such complexities. Published as a peer-reviewed conference abstract (Liu et al., 2020. J. Vision), in social robotics conference proceedings (Liu et al., 2020. Proc. 20th ACM International Conference on Intelligent Virtual Agents), and in a broad audience journal (Liu et al., in press. Current Biology).
OPTIMAL SIGNALS. Distinct facial movements drive emotion recognition accuracy, with direct impact for digital agents. Presented at the Vision Sciences Society annual conference, 2020; published as a peer-reviewed abstract (Querci et al., 2020. J. Vision).
CONVERSATIONAL. Cross-cultural similarities/differences in conversational facial signals that impact communication; relevant to virtual agents. Published in social robotics conference proceedings (Chen et al., 2020. Proc. 20th ACM International Conference on Intelligent Virtual Agents). Manuscript to be submitted to a broad audience journal.
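The TRAITS result above reports two latent feature spaces (ability, intent) underlying trait judgments. A minimal sketch of how such latent structure can be recovered from rating data, assuming a hypothetical faces-by-traits rating matrix (the report does not describe the actual analysis pipeline):

```python
# Sketch: recovering latent dimensions from social trait ratings.
# The ratings matrix is a random stand-in for real human judgments.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

traits = ["dominance", "competence", "trustworthiness", "warmth"]
ratings = np.random.rand(300, len(traits))     # 300 hypothetical faces

z = StandardScaler().fit_transform(ratings)    # standardize each trait
pca = PCA(n_components=2).fit(z)

# Loadings show how each trait projects onto the two latent dimensions
# (per the report: an ability-like axis and an intent-like axis).
for dim, loadings in enumerate(pca.components_):
    print(f"dim {dim}:", dict(zip(traits, loadings.round(2))))
```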

WP 1/3/4. Cross-cultural similarities/differences in facial expression signalling structure. Four latent cross-cultural expressive patterns convey broad dimensions (valence, arousal), while culture-specific accents refine these basic messages. This compositional account forms the basis of a new theoretical framework and impacts digital agents (see the sketch below). Manuscript to be submitted to a broad audience journal (e.g. Psych. Review).
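The report does not name the method used to extract these latent expressive patterns; one standard choice for non-negative facial movement (Action Unit, AU) activations is non-negative matrix factorization, sketched here with stand-in data:

```python
# Sketch: extracting latent expressive patterns from AU activation data
# with non-negative matrix factorization (NMF); data are stand-ins.
import numpy as np
from sklearn.decomposition import NMF

au_data = np.random.rand(1000, 42)   # hypothetical: 1000 expressions x 42 AUs

# Factorize into 4 latent patterns, matching the four cross-cultural
# patterns reported above: au_data ~= weights @ patterns.
model = NMF(n_components=4, init="nndsvda", random_state=0, max_iter=500)
weights = model.fit_transform(au_data)   # (1000, 4) pattern usage per expression
patterns = model.components_             # (4, 42) AU composition of each pattern
```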

WP4.
INTENSITY. Emotion category and intensity signals are temporally decoupled, with cross-cultural similarities/differences. Published as a peer-reviewed conference abstract (Chen et al., 2020. J. Vision); presented at international conferences (Vision Sciences Society, Society for Affective Science). Manuscript to be submitted to a broad audience journal.
LATENT OPTIMAL SIGNALS. Specific subsets of facial movements are necessary and sufficient for emotion categorization. Work in progress.
ICONIC SIGNALS. Expansion/contraction facial movements represent basic messages (valence, arousal). Presented at international conferences (Vision Sciences Society, Society for Affective Science).
BRAIN IMAGING. Spatio-temporal modulation of brain activity by task, showing where (temporal/parietal lobes) and when (~270 ms/380 ms/750 ms post-stimulus) facial movements are represented as emotion signals (see the decoding sketch after this list). Presented at international conferences (Society for Affective Science 2021; Vision Sciences Society 2021). Ongoing work now includes an emotion categorization task and action unit (AU) combinations.
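A common way to obtain such "when" estimates is time-resolved decoding: train a classifier on the sensor data at each post-stimulus time point and look for accuracy peaks. A minimal sketch with hypothetical data shapes (the report does not specify its analysis):

```python
# Sketch: time-resolved decoding of emotion category from MEG/EEG epochs.
# Shapes and data are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

n_trials, n_sensors, n_times = 240, 64, 100
epochs = np.random.randn(n_trials, n_sensors, n_times)   # sensor x time epochs
labels = np.random.randint(0, 4, n_trials)               # 4 emotion categories

# Decode separately at each time point; accuracy peaks indicate when
# emotion information (e.g. ~270/380/750 ms) is represented.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    epochs[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
```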

WP5. Publications in social robotics conference proceedings show the relevance of this work for digital agents, e.g. (1) the perceptual impact of textural cues such as forehead furrows (Chen et al., 2020); (2) variance in pain facial expressions (best paper award, IEEE International Conference on Robotic Applications, Workshop on Robot-assisted Systems for Medical Training).
WP2 (ongoing/planned). Development of face generation platforms:
EYE MOVEMENTS (location in 3D space, speed, pupil dilation, iris coloration, blinks). Combine face models with gaze variations and examine the impact on social perception, particularly across cultures; model the gaze behaviors of different messages (e.g. disgust, disbelief). We anticipate that such interactions will modulate social perception.
TRANSIENT COLORATION (blushing/pallor). Examine the impact of involuntary cues on social perception (e.g. trustworthiness/competence). We anticipate that these involuntary cues will override voluntary cues (i.e. face shape/expressions), with cultural variance.
HEAD MOVEMENTS (all directions, speed, amplitude). Model the head movements of social messages (e.g. agreement/greetings). We anticipate that certain head movements will convey basic information (e.g. negation), that facial expressions (e.g. nose wrinkling) will refine messages, and that specific dynamics (e.g. fast/slow, high/low amplitude) will convey further information (e.g. threat/intensity).
VISEMES. Combine facial signals with speech/vocalizations. We anticipate that facial movements will impact auditory comprehension (e.g. stressing a word).

WP3. Combine facial expressions (e.g. of social traits/emotions) with face shapes/complexions (e.g. different ages, ethnicities, genders) and examine their relative contributions to social perception. We anticipate that face identities will modulate facial expression interpretation, particularly for involuntary cues.

WP4. Examine the syntactical structure of face signals. We anticipate that facial movements will be structured by culture and by broad social information. Develop an MEG pilot to examine how the brain represents dynamic facial movements.

WP5. Transfer models to digital agents (see the sketch below). We anticipate increases in (a) recognition performance in other cultures, (b) realism and human-likeness, and (c) global marketability.
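The report does not describe the transfer mechanism; one plausible route is mapping modelled Action Unit (AU) activation time courses onto a digital agent's blendshape animation, sketched below. The AU-to-blendshape table and function are entirely hypothetical:

```python
# Sketch: mapping modelled AU activation curves onto a digital agent's
# blendshape keyframes. Names and the mapping table are hypothetical.
import numpy as np

AU_TO_BLENDSHAPE = {
    "AU6_cheek_raiser": "cheekSquint",
    "AU9_nose_wrinkler": "noseSneer",
    "AU12_lip_corner_puller": "mouthSmile",
}

def au_curve_to_keyframes(au_name, activations, fps=30):
    """Convert one AU's activation time series (values in [0, 1]) into
    (time_in_seconds, blendshape_weight) keyframes for an animation rig."""
    target = AU_TO_BLENDSHAPE[au_name]
    keys = [(t / fps, float(w)) for t, w in enumerate(activations)]
    return target, keys

# Example: a smile that ramps up and saturates over 1.5 seconds.
curve = np.clip(np.linspace(0.0, 1.2, 45), 0.0, 1.0)
target, keys = au_curve_to_keyframes("AU12_lip_corner_puller", curve)
```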
Figure: Four cross-cultural latent expressive patterns that structure a wide range of facial expressions.