Linearly augmented real-time 4D expressional face capture

Shu Zhang, Hui Yu*, Ting Wang, Junyu Dong, Tuan Pham

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



Personalised 3D face creation has long been a hot topic in the computer vision community. Many methods have been proposed, including statistical models, non-rigid registration and high-end depth acquisition equipment. However, in practical applications, these existing methods still have their own limitations. For example, the performance of statistical-model-based methods depends heavily on the generality of the pre-trained statistical model; non-rigid registration-based methods are sensitive to the quality of the input data; high-end equipment-based methods are difficult to popularise due to expensive equipment costs; and deep learning-based methods perform well only if proper training data are provided for the target domain, and require a GPU for better performance. To this end, this paper presents an adaptive template-augmented method that automatically obtains personalised 4D facial modelling using only a consumer-grade device. The noisy data from such a cheap device are handled well. The whole process consists of a series of linear solutions and can be achieved in real-time for online processing based only on CPU computation on a laptop. The proposed method requires no constraints or complex operations, and no additional time-consuming pre- or post-processing is needed for personalisation. Comparisons against several existing methods demonstrate the superiority of the proposed method.
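The abstract's claim of real-time CPU performance rests on the process being "a series of linear solutions". As a hypothetical illustration (the paper's actual formulation, variable names and dimensions are not given here), per-frame expression capture is commonly posed as a linear least-squares fit of blendshape weights to observed, noisy facial displacements, which has a closed-form solve and needs no iterative optimisation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not taken from the paper.
n_vertices = 100          # mesh vertices (coordinates flattened to 3N)
n_blendshapes = 10        # size of the expression basis

# Blendshape basis: each column holds one expression's per-vertex displacement.
B = rng.standard_normal((3 * n_vertices, n_blendshapes))

# Synthesise a noisy "observed" frame from ground-truth weights, mimicking
# the noisy data a consumer-grade depth device would produce.
w_true = rng.uniform(0.0, 1.0, n_blendshapes)
observed = B @ w_true + 0.01 * rng.standard_normal(3 * n_vertices)

# Single linear least-squares solve per frame: cheap enough for real-time
# CPU processing, since no GPU or iterative solver is involved.
w_est, *_ = np.linalg.lstsq(B, observed, rcond=None)
```

With a well-conditioned basis, the recovered weights `w_est` stay close to `w_true` despite the added noise, which is why a purely linear pipeline can remain robust to cheap-sensor data.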
Original language: English
Pages (from-to): 331-343
Number of pages: 13
Journal: Information Sciences
Early online date: 11 Sept 2020
Publication status: Published - 4 Feb 2021


  • UKRI
  • EP/N025849/1
  • linear
  • personalised
  • 3D expressional face
  • 4D face
  • CPU computation
  • real-time


