An efficient skeleton-based action recognition approach with view transformation

Tianyu Ma, Jiahui Yu, Hongwei Gao, Zhaojie Ju*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recently, human action recognition has made satisfactory progress on most daily activities. The emergence of depth sensors has made it more convenient to obtain human skeleton information from videos. Using joint points mitigates many problems in action recognition, such as lighting changes and background noise. However, the variable viewpoint remains one of the difficulties affecting action recognition, and it is still under-explored in most existing skeleton-based works. In addition, training a complex deep model increases hardware complexity and cost. The proposed model uses a view adaptive asymmetric convolutional network (VA-ACN) to extract skeleton features from the skeleton data and to transform the original body perspective into a new, more observable viewpoint. A ResNet50-based backbone is improved for high-performance classification. Compared with recent state-of-the-art models, the proposed model improves feature extraction and classification performance without extra computation cost, saving training time while improving action recognition under certain conditions. Experimental results show that the proposed model outperforms state-of-the-art methods.
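The view transformation described in the abstract is learned inside the VA-ACN, which is not reproduced here. A minimal sketch of the underlying idea — re-expressing a skeleton in a canonical, view-independent frame — can be written as a fixed geometric normalization: translate the skeleton so a root joint sits at the origin, then rotate it so the root-to-spine vector aligns with the vertical axis. The joint indices `root` and `spine` are illustrative assumptions, not the indexing used in the paper.

```python
import numpy as np

def view_normalize(skeleton, root=0, spine=1):
    """Map a skeleton to a canonical viewpoint (illustrative sketch,
    not the paper's learned VA-ACN transform).

    skeleton: (J, 3) array of 3D joint coordinates.
    Returns the skeleton translated so joint `root` is at the origin
    and rotated so the root-to-`spine` vector points along +y.
    """
    centered = skeleton - skeleton[root]        # remove global translation
    v = centered[spine]
    v = v / np.linalg.norm(v)                   # unit spine direction
    y = np.array([0.0, 1.0, 0.0])               # target "up" direction
    axis = np.cross(v, y)                       # rotation axis v x y
    s = np.linalg.norm(axis)                    # sin of rotation angle
    c = float(np.dot(v, y))                     # cos of rotation angle
    if s < 1e-8:                                # already (anti-)aligned
        return centered if c > 0 else -centered
    axis = axis / s
    # Rodrigues' rotation formula: R = I + sin(t) K + (1 - cos(t)) K^2
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + s * K + (1.0 - c) * (K @ K)
    return centered @ R.T                       # rotate every joint
```

Applying this per frame removes camera placement as a nuisance factor before feature extraction; the paper's contribution is to make an equivalent re-observation point learnable rather than fixed.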

Original language: English
Title of host publication: 2021 27th International Conference on Mechatronics and Machine Vision in Practice, M2VIP 2021
Publisher: Institute of Electrical and Electronics Engineers
Pages: 475-479
Number of pages: 5
ISBN (Electronic): 9781665431538
ISBN (Print): 9781665431545
DOIs
Publication status: Published - 7 Jan 2022
Event: 2021 27th International Conference on Mechatronics and Machine Vision in Practice, M2VIP 2021 - Shanghai, China
Duration: 26 Nov 2021 - 28 Nov 2021

Conference

Conference: 2021 27th International Conference on Mechatronics and Machine Vision in Practice, M2VIP 2021
Country/Territory: China
City: Shanghai
Period: 26/11/21 - 28/11/21

