Research on gesture recognition of smart data fusion features in the IoT

Chong Tan, Ying Sun, Gongfa Li*, Guozhang Jiang, Disi Chen, Honghai Liu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the rapid development of Internet of Things technology, interaction between people and things has become increasingly frequent. Using simple gestures instead of complex operations to interact with machines, together with the fusion of smart data feature information, has gradually become a research hotspot. Considering that the depth image from the Kinect sensor lacks color information and is sensitive to the choice of depth threshold, this paper proposes a gesture segmentation method based on the fusion of color information and depth information. To preserve the complete information of the segmented image, a gesture feature extraction method based on the fusion of Hu invariant moments and HOG features is proposed, and by determining the optimal weight parameters, the global and local features are effectively fused. Finally, an SVM classifier is used to classify and recognize the gestures. The experimental results show that the proposed fusion-feature method achieves a higher gesture recognition rate and better robustness than traditional methods.
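The pipeline described in the abstract (color/depth segmentation, Hu-moment and HOG feature fusion, SVM classification) can be illustrated with a minimal sketch, assuming OpenCV, scikit-image, and scikit-learn as stand-ins for the authors' implementation. The skin-color thresholds, depth range, HOG parameters, and fusion weight `alpha` below are illustrative placeholders, not values reported in the paper.

```python
import numpy as np
import cv2
from skimage.feature import hog
from sklearn.svm import SVC

def segment_hand(color_bgr, depth_mm, depth_range=(500, 900)):
    """Fuse a skin-color mask (YCrCb space) with a depth-threshold mask
    to segment the hand region (illustrative thresholds, not the paper's)."""
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    near = (depth_mm >= depth_range[0]) & (depth_mm <= depth_range[1])
    near = near.astype(np.uint8) * 255
    return cv2.bitwise_and(skin, near)

def fused_features(mask, alpha=0.5):
    """Concatenate weighted global (Hu moments) and local (HOG) features."""
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)  # log scale for numerical stability
    patch = cv2.resize(mask, (64, 64))
    hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    return np.concatenate([alpha * hu, (1.0 - alpha) * hog_vec])

# Training and prediction with an SVM classifier:
# X = np.stack([fused_features(m) for m in training_masks]); y = labels
# clf = SVC(kernel='rbf', C=10.0).fit(X, y)
# pred = clf.predict(fused_features(test_mask).reshape(1, -1))
```

The weight `alpha` mirrors the paper's idea of balancing a global shape descriptor (Hu moments) against local gradient structure (HOG); the optimal weighting would be determined experimentally, as the abstract indicates.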

Original language: English
Journal: Neural Computing and Applications
Early online date: 25 Jan 2019
DOIs
Publication status: Early online - 25 Jan 2019

Keywords

  • Fusion features
  • Gesture recognition
  • Hu moment
  • Smart data aggregation
  • SVM
