Human in-hand motion recognition based on multi-modal perception information fusion

Yaxu Xue, Yadong Yu, Kaiyang Yin, Pengfei Li, Shuangxi Xie, Zhaojie Ju

Research output: Contribution to journal › Article › peer-review

Abstract

This paper proposes a human in-hand motion (HIM) recognition system based on multi-modal perception information fusion, which observes the state information between the object and the hand across ten customized HIM manipulations in order to recognize complex HIMs. First, ten HIM sets are designed according to the characteristics of HIM capture, and finger-trajectory, contact-force, and electromyographic signals are acquired synchronously through a multi-modal data acquisition platform. Second, motion segmentation is performed with a threshold segmentation method, the multi-modal signals are preprocessed by Empirical Mode Decomposition (EMD), and multi-modal signal features are extracted with the Maximum Lyapunov Exponent (MLE); a detailed non-linear data analysis is then carried out. The results are analyzed and discussed in detail, covering Random Forest (RF) recognition of HIMs and comparisons of motion recognition rates across different subjects, different perception sensors, and different machine learning methods. The experimental results show that the proposed multi-modal perception information based HIM recognition system can effectively recognize ten different HIMs, with an accuracy of 93.72%.
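The pipeline described in the abstract (segmentation, feature extraction, RF classification) can be sketched as follows. This is an illustrative toy sketch only, not the authors' implementation: the data are synthetic stand-ins for the acquisition platform's trajectory/force/EMG channels, `segment_motion` is a minimal amplitude-threshold segmenter, and `lyapunov_proxy` is a deliberately simplified divergence statistic standing in for a proper MLE estimate (the EMD preprocessing step is omitted entirely).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def segment_motion(signal, threshold=0.1):
    """Toy threshold segmentation: keep samples whose amplitude exceeds the threshold."""
    return signal[np.abs(signal) > threshold]

def lyapunov_proxy(signal, lag=1):
    """Rough stand-in for an MLE feature: mean log divergence of
    neighbouring samples (illustrative only, not Rosenstein's method)."""
    diffs = np.abs(np.diff(signal, n=lag))
    diffs = diffs[diffs > 0]
    return float(np.mean(np.log(diffs))) if diffs.size else 0.0

def extract_features(trial):
    """trial: (n_samples, n_channels) array of fused trajectory/force/EMG data."""
    return np.array([lyapunov_proxy(segment_motion(trial[:, ch]))
                     for ch in range(trial.shape[1])])

# Synthetic stand-in for the ten HIM classes (real data would come from
# the multi-modal acquisition platform described in the paper).
rng = np.random.default_rng(0)
n_trials, n_samples, n_channels, n_classes = 200, 256, 3, 10
y = rng.integers(0, n_classes, n_trials)
X = np.stack([extract_features(
        rng.normal(scale=1.0 + 0.2 * y[i], size=(n_samples, n_channels)))
    for i in range(n_trials)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"toy accuracy: {clf.score(X_test, y_test):.2f}")
```

The feature vector here is one scalar per channel; in practice the paper fuses EMD-decomposed multi-modal signals and richer MLE features, which is what drives the reported 93.72% accuracy.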
Original language: English
Pages (from-to): 6793-6805
Number of pages: 13
Journal: IEEE Sensors Journal
Volume: 22
Issue number: 7
Early online date: 4 Feb 2022
DOIs
Publication status: Published - 1 Apr 2022

Keywords

  • multi-modal information
  • human in-hand motion
  • empirical mode decomposition
  • maximum Lyapunov exponent
  • random forest
