Facial expression synthesis based on denoising diffusion probabilistic model

Chayanon Sub-r-pa, Ming Zhong Fan, Hui Yu, Rung Ching Chen*

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    Facial expression synthesis has been receiving increasing attention in computer vision research and applications. The state-of-the-art latent diffusion model (LDM) can generate high-quality images from text prompts. However, when editing the facial expression of an existing image, the model can over-edit and remove identity features from the original image. In this study, we build a facial expression synthesis pipeline to edit an original image with different expressions: anger, disgust, contempt, fear, happiness, sadness, surprise, and neutral. Our pipeline consists of facial segmentation to extract the area to be edited, a denoising diffusion probabilistic model (DDPM) with text embedding to generate and control the output expression, and image compositing to merge the generated region back into the original image. In this paper, we experiment with and analyze the potential of DDPM under our method.
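    The abstract outlines a three-stage flow: face segmentation, text-conditioned diffusion, and compositing the result back into the source image. The sketch below is purely illustrative and is not the authors' code: it assumes a Hugging Face diffusers inpainting pipeline as a stand-in for the paper's DDPM, and `segment_face_region` is a hypothetical helper standing in for the paper's face-segmentation step.

```python
# Illustrative sketch of the segmentation -> diffusion -> compositing flow.
# NOT the paper's implementation: the authors use a DDPM with text
# embedding; a diffusers inpainting pipeline is used here as a stand-in.
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline


def segment_face_region(image: Image.Image) -> Image.Image:
    """Hypothetical helper: return a binary mask (white = editable facial
    area). The paper performs this with a face-segmentation model."""
    raise NotImplementedError("plug in a face-segmentation model here")


def synthesize_expression(image_path: str, expression: str) -> Image.Image:
    image = Image.open(image_path).convert("RGB").resize((512, 512))

    # 1) Facial segmentation: confine the edit to the face region.
    mask = segment_face_region(image).resize((512, 512))

    # 2) Text-conditioned diffusion: the prompt selects the target
    #    expression (anger, disgust, contempt, fear, happiness,
    #    sadness, surprise, or neutral).
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",
        torch_dtype=torch.float16,
    ).to("cuda")
    edited = pipe(
        prompt=f"a photo of a person with a {expression} expression",
        image=image,
        mask_image=mask,
    ).images[0]

    # 3) Compositing: paste only the generated region back, so pixels
    #    outside the mask keep the original identity.
    out = np.array(image).copy()
    m = np.array(mask.convert("L")) > 127
    out[m] = np.array(edited)[m]
    return Image.fromarray(out)
```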

    Original language: English
    Title of host publication: IET International Conference on Engineering Technologies and Applications (ICETA 2023)
    Publisher: Institution of Engineering and Technology
    Pages: 107-108
    Number of pages: 2
    ISBN (Print): 9781839539404
    Publication status: Published - 6 Mar 2024
    Event: 2023 IET International Conference on Engineering Technologies and Applications, ICETA 2023 - Yunlin, Taiwan, Province of China
    Duration: 21 Oct 2023 – 23 Oct 2023

    Conference

    Conference: 2023 IET International Conference on Engineering Technologies and Applications, ICETA 2023
    Country/Territory: Taiwan, Province of China
    City: Yunlin
    Period: 21/10/23 – 23/10/23

    Keywords

    • diffusion model
    • face segmentation
    • facial expression synthesis
    • generative models
    • text embedding
