Abstract
The 3-D morphable model (3DMM), with its parametric representation of facial geometry and appearance, has been widely beneficial to 3-D face-related tasks. However, previous 3-D face reconstruction methods have limited power to represent facial expressions, owing to unbalanced training data distributions and insufficient ground-truth 3-D shapes. In this article, we propose a novel framework that learns personalized shapes so that the reconstructed model fits the corresponding face image well. Specifically, we augment the dataset following several principles to balance the distribution of facial shapes and expressions. A mesh editing method is presented as the expression synthesizer to generate additional face images with varied expressions. In addition, we improve pose estimation accuracy by converting the projection parameters into Euler angles. Finally, a weighted sampling method is proposed to make training more robust: the offset between the base face model and the ground-truth face model defines the sampling probability of each vertex. Experiments on several challenging benchmarks demonstrate that our method achieves state-of-the-art performance.
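The pose conversion and weighted vertex sampling steps summarized above can be sketched concretely. The snippet below is a minimal, illustrative Python sketch and not the authors' implementation: `rotation_to_euler` assumes a standard Z-Y-X rotation-matrix decomposition (the abstract does not specify the exact projection parameterization), and `vertex_sampling_probs` assumes the per-vertex offset between the base and ground-truth meshes is simply normalized into a probability distribution; all function names, array shapes, and the sample size in the usage comment are assumptions.

```python
import numpy as np

def rotation_to_euler(R):
    """Convert a 3x3 rotation matrix to (pitch, yaw, roll) in radians.

    Assumes the common R = Rz(roll) @ Ry(yaw) @ Rx(pitch) convention;
    the paper's exact angle convention is not given in the abstract.
    """
    yaw = np.arcsin(-np.clip(R[2, 0], -1.0, 1.0))
    pitch = np.arctan2(R[2, 1], R[2, 2])
    roll = np.arctan2(R[1, 0], R[0, 0])
    return pitch, yaw, roll

def vertex_sampling_probs(base_vertices, gt_vertices):
    """Turn per-vertex offsets between the base and ground-truth meshes
    into sampling probabilities, so vertices that deviate more from the
    base model are sampled more often during training.

    base_vertices, gt_vertices: (N, 3) arrays of corresponding vertices.
    """
    offsets = np.linalg.norm(gt_vertices - base_vertices, axis=1)  # per-vertex offset
    offsets = offsets + 1e-12                                      # avoid an all-zero distribution
    return offsets / offsets.sum()                                 # normalize to probabilities

# Hypothetical usage: draw a subset of vertices for a per-vertex loss term.
# probs = vertex_sampling_probs(base_mesh, gt_mesh)
# idx = np.random.choice(len(probs), size=1024, replace=False, p=probs)
```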
| Original language | English |
|---|---|
| Number of pages | 10 |
| Journal | IEEE Transactions on Cybernetics |
| Early online date | 17 Feb 2023 |
| DOIs | |
| Publication status | Early online - 17 Feb 2023 |
Keywords
- 3-D dense face alignment
- 3-D face reconstruction
- computational modeling
- data models
- expression synthesis
- face recognition
- faces
- facial manipulation
- image reconstruction
- shape
- solid modeling