Abstract
It is difficult to annotate large-scale facial expression datasets because annotator subjectivity and the ambiguity of facial expressions lead to inconsistent labels. Moreover, current studies show limitations when addressing facial expression differences arising from the gender gap. Therefore, this article proposes a self-cure network with a two-stage method (SCN-TSM) that prevents deep networks from over-fitting ambiguous images. First, based on SCN-TSM, a two-stage training scheme is designed that takes full advantage of gender information. Furthermore, a self-attention mechanism highlights the essential images and weights each sample under a ranking regularization. Finally, a relabeling module modifies the labels of samples with inconsistent labels. Extensive experiments on public datasets validate the effectiveness of our method.
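The abstract mentions two mechanisms in the self-cure family: weighting each sample by an attention score under a ranking regularization, and relabeling samples whose predicted class clearly outscores the given (possibly noisy) label. The sketch below illustrates how such a step could look; it is not the authors' implementation, and the function name, the `beta`/`margin` values, and the batch layout are all illustrative assumptions.

```python
import numpy as np

def self_cure_step(logits, attention_weights, labels, beta=0.7, margin=0.2):
    """Sketch of a self-cure-style training step (illustrative only).

    logits: (N, C) classifier outputs for a batch of N face images.
    attention_weights: (N,) per-sample importance scores in [0, 1].
    labels: (N,) possibly noisy integer labels.
    Returns the rank-regularization loss and the (possibly) relabeled labels.
    """
    # Softmax probabilities per sample (numerically stabilized).
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)

    # Rank regularization: split the batch into high- and low-attention
    # groups and require their mean weights to differ by at least `margin`.
    order = np.argsort(-attention_weights)
    k = int(beta * len(attention_weights))
    high_mean = attention_weights[order[:k]].mean()
    low_mean = attention_weights[order[k:]].mean()
    rank_loss = max(0.0, margin - (high_mean - low_mean))

    # Relabeling: if the top predicted class outscores the given label
    # by more than `margin`, trust the prediction and flip the label.
    new_labels = labels.copy()
    for i in range(len(labels)):
        top = int(probs[i].argmax())
        if probs[i, top] - probs[i, labels[i]] > margin:
            new_labels[i] = top
    return rank_loss, new_labels
```

In this sketch, low-attention samples become candidates for relabeling because their given labels contribute less to the loss, which is the intuition behind suppressing ambiguous images during training.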
| Original language | English |
| --- | --- |
| Title of host publication | 2021 27th International Conference on Mechatronics and Machine Vision in Practice, M2VIP 2021 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 163-168 |
| Number of pages | 6 |
| ISBN (Electronic) | 9781665431538 |
| ISBN (Print) | 9781665431545 |
| DOIs | |
| Publication status | Published - 7 Jan 2022 |
| Event | 2021 27th International Conference on Mechatronics and Machine Vision in Practice, M2VIP 2021 - Shanghai, China. Duration: 26 Nov 2021 → 28 Nov 2021 |
Conference
| Conference | 2021 27th International Conference on Mechatronics and Machine Vision in Practice, M2VIP 2021 |
| --- | --- |
| Country/Territory | China |
| City | Shanghai |
| Period | 26/11/21 → 28/11/21 |