Abstract
This paper presents an approach for reproducing optimal 3-D facial expressions based on blendshape regression. It aims to improve the fidelity of facial expressions while maintaining the efficiency of the blendshape method, which is necessary for applications such as human–machine interaction and avatars. The method optimizes a given facial expression using action units (AUs) based on the facial action coding system, recorded from human faces. To help capture facial movements for the target face, an intermediate model space is generated in which the target and source AUs share the same mesh topology and vertex count. The optimization is conducted interactively in the intermediate model space by adjusting a regulating parameter. The optimized facial expression model is then transferred back to the target facial model to produce the final facial expression. We demonstrate that, given a sketched facial expression with rough vertex positions indicating the intended expression, the proposed method approaches the sketch by automatically selecting blendshapes with corresponding weights. The sketched expression model is thus approximated through AUs representing true muscle movements, which improves the fidelity of facial expressions.
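The abstract describes fitting blendshape (AU) weights to a sketched expression under a regulating parameter. The paper's exact objective is not given here, but the standard formulation is a regularized least-squares fit of AU displacement vectors to the sketched vertex offsets. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function name, matrix shapes, and the ridge-style regularizer are assumptions for the example.

```python
import numpy as np

def fit_blendshape_weights(au_basis, target_offsets, reg=0.1):
    """Fit blendshape weights so a weighted sum of AU displacement
    vectors approximates the sketched expression's vertex offsets.

    au_basis       : (3V, K) matrix; column k stacks AU k's per-vertex
                     displacements from the neutral face. (Hypothetical
                     layout for illustration.)
    target_offsets : (3V,) vector of sketched-minus-neutral offsets.
    reg            : regulating parameter; larger values trade fit
                     accuracy for smaller, more plausible weights.
    """
    k = au_basis.shape[1]
    # Ridge-regularized normal equations: (B^T B + reg*I) w = B^T d
    lhs = au_basis.T @ au_basis + reg * np.eye(k)
    rhs = au_basis.T @ target_offsets
    w = np.linalg.solve(lhs, rhs)
    # Blendshape weights are conventionally clamped to [0, 1]
    return np.clip(w, 0.0, 1.0)

# Example: 4 AUs over a 100-vertex mesh, random stand-in data
rng = np.random.default_rng(0)
B = rng.standard_normal((300, 4))
d = B @ np.array([0.8, 0.0, 0.3, 0.5])
print(fit_blendshape_weights(B, d, reg=0.01))
```

Adjusting `reg` interactively mirrors the abstract's description of tuning the regulating parameter until the optimized expression balances closeness to the sketch against plausible AU activations.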
| Original language | English |
| --- | --- |
| Pages (from-to) | 386-394 |
| Journal | IEEE Transactions on Human-Machine Systems |
| Volume | 44 |
| Issue number | 3 |
| Early online date | 14 May 2014 |
| DOIs | |
| Publication status | Published - Jun 2014 |