Perception-driven facial expression synthesis

Hui Yu, O. Garrod, P. Schyns

    Research output: Contribution to journal › Article › peer-review

    Abstract

    We propose a novel platform to flexibly synthesize any meaningful facial expression without actor performance data for that expression. Using techniques from computer graphics, we synthesized random dynamic facial-expression animations, controlling the synthesis by parametrically modulating Action Units (AUs) taken from the Facial Action Coding System (FACS). We presented these animations to human observers and instructed them to categorize each one as one of six possible facial expressions. Using techniques from human psychophysics, we modeled each observer's internal representation of these expressions by extracting the perceptually relevant expression parameters from the random noise. We validated these models of facial expressions with naive observers.
    Original language: English
    Pages (from-to): 152-162
    Number of pages: 11
    Journal: Computers & Graphics
    Volume: 36
    Issue number: 3
    DOIs
    Publication status: Published - May 2012
