
Accurate and robust eye center localization via fully convolutional networks

Research output: Contribution to journal › Article

Eye center localization is one of the most crucial and basic requirements for human-computer interaction applications such as eye gaze estimation and eye tracking. A large body of work has addressed this topic in recent years, but accuracy still needs to improve because of appearance challenges such as high variability in shape, lighting conditions, viewing angles, and possible occlusions. To address these problems and limitations, we propose a novel approach to eye center localization with a fully convolutional network (FCN), an end-to-end, pixels-to-pixels network that can locate the eye center accurately. The key idea is to carry the FCN over from the object semantic segmentation task to the eye center localization task, since eye center localization can be regarded as a special semantic segmentation problem. We adapt a contemporary FCN into a shallow structure with a large-kernel convolutional block and transfer its performance from semantic segmentation to eye center localization by fine-tuning. Extensive experiments show that the proposed method outperforms state-of-the-art methods in both accuracy and reliability of eye center localization. It achieves a large performance improvement on the most challenging database and thus provides a promising solution for demanding applications.
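The abstract does not spell out the exact architecture, so the following is only a minimal PyTorch sketch of the general idea: a shallow FCN with a large-kernel convolutional block that emits a per-pixel heatmap, with the eye center taken as the heatmap argmax. The class and function names (ShallowFCN, locate_eye_center), the layer widths, and the 15x15 kernel size are illustrative assumptions, not the paper's reported design.

    import torch
    import torch.nn as nn

    class ShallowFCN(nn.Module):
        # Hypothetical sketch: shallow FCN producing a single-channel
        # eye-center probability map at the input resolution.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            )
            # Large-kernel block: widens the receptive field without
            # adding depth (kernel size 15 is an assumption).
            self.large_kernel = nn.Sequential(
                nn.Conv2d(64, 64, kernel_size=15, padding=7), nn.ReLU(inplace=True),
            )
            # 1x1 conv scores each pixel (the "pixels-to-pixels" output).
            self.score = nn.Conv2d(64, 1, kernel_size=1)

        def forward(self, x):
            h = self.features(x)
            h = self.large_kernel(h)
            return self.score(h)  # same spatial size as the input

    def locate_eye_center(heatmap):
        # Predicted eye center = (row, col) of the heatmap maximum.
        b, _, h, w = heatmap.shape
        idx = heatmap.view(b, -1).argmax(dim=1)
        return torch.stack((idx // w, idx % w), dim=1)

    # Usage: a 1x3x64x64 eye patch in, one (y, x) coordinate out.
    patch = torch.randn(1, 3, 64, 64)
    center = locate_eye_center(ShallowFCN()(patch))

In this framing, fine-tuning a segmentation-pretrained backbone corresponds to initializing the feature layers from a segmentation model and retraining the scoring head on eye-center heatmaps.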
Original language: English
Pages (from-to): 1127-1138
Number of pages: 12
Journal: IEEE/CAA Journal of Automatica Sinica
Volume: 6
Issue number: 5
DOIs
Publication status: Published - 3 Sep 2019

Documents

  • Accurate and Robust Eye Center Localization via Fully Convolutional Networks_pp

Rights statement: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    Accepted author manuscript (Post-print), 1.24 MB, PDF document

