Deep selective feature learning for action recognition

Ziqiang Li, Yongxin Ge, Jinyuan Feng, Xiaolei Qin, Jiaruo Yu, Hui Yu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


The soft-attention mechanism has attracted considerable interest in recent years due to its ability to capture the most discriminative image features for understanding actions. However, soft-attention tends to focus on fine-grained parts of images and to ignore global information, which can lead to entirely incorrect classification results. To address this issue, we propose a novel deep selective feature learning network (DSFNet), which automatically learns feature maps carrying both fine-grained and global information. Specifically, DSFNet is designed to learn to adjust its actions for feature map selection by maximizing the cumulative discounted reward. Moreover, DSFNet is an easy-to-use extension of state-of-the-art base architectures for multiple tasks. Extensive experiments show that the proposed method achieves superior performance on two standard action recognition benchmarks covering still images (PPMI) and videos (HMDB51).
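The abstract states that DSFNet learns its feature-map selection policy by maximizing the cumulative discounted reward. As a point of reference only (the paper's actual reward design and training procedure are not given here), the quantity being maximized is the standard reinforcement-learning return G_t = Σ_k γ^k r_{t+k}, which can be computed per step as in this minimal sketch:

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute the cumulative discounted return G_t for each step t.

    Illustrative only, not the authors' code: `rewards` is the per-step
    reward sequence a selection policy receives, and `gamma` is the
    discount factor weighting future rewards.
    """
    returns = []
    g = 0.0
    # Accumulate from the final step backward: G_t = r_t + gamma * G_{t+1}.
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))
```

For example, `discounted_returns([1.0, 0.0, 1.0], gamma=0.5)` yields `[1.25, 0.5, 1.0]`: each entry is the reward at that step plus the discounted value of everything after it.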

Original language: English
Title of host publication: 2020 IEEE International Conference on Multimedia and Expo, ICME 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 978-1-7281-1331-9
ISBN (Print): 978-1-7281-1332-6
Publication status: Published - 6 Jul 2020
Event: 2020 IEEE International Conference on Multimedia and Expo - London, United Kingdom
Duration: 6 Jul 2020 - 10 Jul 2020

Publication series

Name: Proceedings - IEEE International Conference on Multimedia and Expo
ISSN (Print): 1945-7871
ISSN (Electronic): 1945-788X


Conference: 2020 IEEE International Conference on Multimedia and Expo
Abbreviated title: ICME 2020
Country/Territory: United Kingdom


  • Action recognition
  • Feature selection
  • Reinforcement learning


