Dynamic Facial Expression Reconstruction from Upper Half-face Data

Project Details


3D face reconstruction is a crucial component of many virtual reality (VR) applications. The most commonly used VR device, the wearable VR headset, can capture only the upper half of the face. Reconstructing the whole 3D face from images captured by a VR headset is quite challenging, because information about the lower half of the face is entirely missing. In this project, we aimed to reconstruct the whole 3D face, including detailed facial expressions, from a single upper-face image.

Key findings

We first reconstructed the whole neutral face by fitting the upper face to a group of pre-defined face templates, which can be solved by a simple optimization process. With the estimated whole neutral 3D face, we then estimated the lower-face expressions by modelling facial action movements according to the subject's emotion. The emotion itself is detected from cues in the upper face via a carefully designed classification method.
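The template-fitting step above can be sketched as a linear least-squares problem: express the observed upper-face geometry as a weighted combination of the templates' upper-face regions, then apply the recovered weights to the full templates. This is only an illustrative sketch of that idea; the array shapes, template counts, and the plain unconstrained least-squares solver are assumptions, not the project's actual formulation.

```python
import numpy as np

# Illustrative sketch: fit observed upper-face vertices to a linear
# combination of K pre-defined face templates via least squares.
# All sizes and names below are assumptions for demonstration.

rng = np.random.default_rng(0)
n_upper = 120   # assumed number of upper-face vertices
n_full = 300    # assumed number of whole-face vertices
K = 10          # assumed number of face templates

# Full-face templates, flattened to vectors of xyz coordinates: (3*n_full, K)
templates_full = rng.normal(size=(3 * n_full, K))
# The rows corresponding to the upper-face region: (3*n_upper, K)
templates_upper = templates_full[: 3 * n_upper, :]

# Synthetic "observed" upper face, generated from ground-truth weights
true_w = rng.dirichlet(np.ones(K))
observed_upper = templates_upper @ true_w

# Solve min_w || T_upper @ w - observed_upper ||^2 for blending weights w
w, *_ = np.linalg.lstsq(templates_upper, observed_upper, rcond=None)

# Apply the same weights to the full templates to obtain the whole face
reconstructed_full = templates_full @ w
```

In practice such fits often add regularization or constraints (e.g. non-negative weights summing to one) to keep the reconstruction plausible; the unconstrained solver here is the simplest version of the idea.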
Effective start/end date: 1/09/18 – 31/12/18
