Face frontalization for facial expression recognition in the wild

  • Yiming Wang

Student thesis: Doctoral Thesis


Automatic machine analysis of facial expressions has attracted increasing attention and has been widely applied in domains such as animation, multimedia and security. Sensing and understanding facial behaviour is a fundamental requirement of human-machine interaction systems. As computing has become more powerful and ubiquitous, there is an urgent demand for facial expression recognition systems that are designed for real-world conditions and generalize across populations. The work proposed in this thesis addresses four main challenges of facial expression recognition in the wild: 1) identity bias, which refers to the fact that facial features are highly discriminative in terms of identity but difficult to distinguish in terms of expression; 2) head pose variations; 3) occlusions; and 4) the irregularity of spontaneous expressions. Inspired by the success of existing research and benchmarks for facial expression recognition under controlled conditions, where identity bias is the only challenge and the other three problems show little or no variation, we propose to normalize facial images captured in unconstrained situations into lab-like conditions through spatial face normalization and texture reconstruction based on face frontalization. The goal is a powerful and flexible system that improves facial expression recognition performance under unconstrained real-world conditions.
We introduce a novel Facial Expression-Aware face Frontalization (FEAF) method based on a spatial normalization strategy. In contrast to most existing methods, which address only one or a few of these challenges, all four are considered jointly. To tackle identity bias and the irregularity of expressions, we present a multi-template model that normalizes shape variations by deliberately designing multiple frontal shape templates containing meaningful expressions, so as to fit the varied shapes of facial expressions. Every face is thus aligned to one of a group of shared templates, regardless of how ambiguous the expression is or whose face it is. Shape normalization then maps the facial shape to a normalized frontal emotional template, resolving head-pose variations. Finally, we employ face frontalization techniques to reconstruct facial appearances, removing occlusions by maintaining an additional error matrix that absorbs the sparse errors they cause. The reconstructed faces are strictly in frontal view. Given these reconstructed faces, commonly used feature extraction methods and machine learning techniques can be applied to recognize facial emotional states. State-of-the-art performance is achieved on the task of static facial expression recognition in the wild, and we further demonstrate the superior performance of our models on the task of interpersonal relation prediction.
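The occlusion-handling step described above, where an additional error matrix restores sparse errors, can be sketched roughly as a robust appearance reconstruction that alternates between fitting appearance coefficients against a frontal basis and soft-thresholding the residual. This is a minimal illustrative sketch, not the thesis's exact algorithm: the basis `D`, the soft-thresholding scheme and all parameter names below are assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Shrink entries toward zero; small residuals (likely clean pixels) vanish,
    # large residuals (likely occluded pixels) survive into the error term.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def reconstruct_with_error_matrix(x, D, tau=0.5, n_iters=100):
    """Illustrative sketch: fit x ~ D @ a + e, where e is a sparse error
    term absorbing occluded pixels. D is an assumed basis of frontal
    appearances (p pixels x k components); a are its coefficients."""
    e = np.zeros_like(x)
    pinv = np.linalg.pinv(D)
    for _ in range(n_iters):
        a = pinv @ (x - e)                   # least-squares fit to the clean part
        e = soft_threshold(x - D @ a, tau)   # sparse residual = occlusion estimate
    return D @ a, e                          # reconstructed face, occlusion errors
```

The alternation mirrors the intuition in the text: the reconstruction explains the unoccluded appearance, while the error matrix soaks up the few large deviations that occlusions introduce.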
To capture more subtle facial expression cues and further improve the recognition rate, we propose an extended FEAF approach for dynamic facial expression analysis based on accurate shape processing. In contrast to the majority of existing dynamic methods, which focus on time alignment, this method addresses head pose variations and occlusions through regression-based spatial alignment that estimates an accurate frontal view of the facial shape given a non-frontal face. We develop a cascade regression model to learn the pair-wise relationship between a non-frontal facial shape and its frontal counterpart. Unlike static FEAF, this method captures subtle facial muscle changes in an image sequence and can therefore be used for dynamic facial expression recognition. Superior performance is achieved on several public datasets under both lab and unconstrained conditions.
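The cascade regression idea, learning the pair-wise mapping from a non-frontal landmark shape to its frontal counterpart, might be sketched as a stack of ridge regressors that repeatedly refine the current shape estimate. All names, the stage count, and the training setup below are hypothetical placeholders, not the thesis's actual model:

```python
import numpy as np

def train_cascade(S_nonfrontal, S_frontal, n_stages=3, ridge=1e-3):
    """Sketch of a cascade of ridge regressors. Shapes are
    (n_samples, 2 * n_landmarks) arrays of flattened (x, y) coordinates.
    Each stage regresses the remaining offset toward the frontal target."""
    stages = []
    S_cur = S_nonfrontal.copy()
    for _ in range(n_stages):
        X = np.hstack([S_cur, np.ones((len(S_cur), 1))])  # append a bias term
        Y = S_frontal - S_cur                             # residual to the target
        W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
        S_cur = S_cur + X @ W                             # refine the estimate
        stages.append(W)
    return stages

def apply_cascade(s, stages):
    # Apply the learned stages to one non-frontal shape vector.
    s_cur = s.copy()
    for W in stages:
        x = np.append(s_cur, 1.0)
        s_cur = s_cur + x @ W
    return s_cur
```

Applied frame by frame to the landmarks of an image sequence, such a cascade would produce the frontal shape estimates that the dynamic analysis then operates on.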
Date of Award: Sept 2018
Original language: English
Supervisors: Hui Yu (Supervisor), Brett Stevens (Supervisor) & Neil Dansey (Supervisor)