TY - JOUR
T1 - A hybrid sensor integrating surface electromyography and photoplethysmography for gesture recognition
AU - Gao, Yan
AU - Fang, Yinfeng
AU - Zhang, Congyi
AU - Zhou, Dalin
AU - Ju, Zhaojie
N1 - Publisher Copyright:
© 2001-2012 IEEE.
PY - 2025/9/9
Y1 - 2025/9/9
AB - This study develops a wireless, distributed, and synchronized multimodal sensor system that integrates surface electromyography (sEMG) and photoplethysmography (PPG) signals to address the limitations of unimodal sensing in gesture recognition. The proposed sensor nodes enable synchronous acquisition of sEMG and PPG signals, which are subsequently used for classifying 15 commonly used gestures via machine learning (ML) and deep learning algorithms. Experimental results demonstrate that multimodal fusion significantly enhances recognition performance, improving classification accuracy from 85.3% (sEMG-only) to 95.3% when both modalities are combined. In particular, the recognition accuracy for the “wrist flexion” gesture reaches 99.2%. Additionally, we conducted signal-to-noise ratio (SNR) analysis and experiments under varying muscle contraction intensities, providing preliminary evidence that PPG signals are responsive to muscle activity changes. Correlation analysis and principal component analysis (PCA) further verified the complementary nature of sEMG and PPG signals. To explore the intermodal dependency, we also performed transfer entropy analysis and temporal alignment evaluation. With its low cost and compact size, the proposed system exhibits promising application potential in clinical rehabilitation and human-machine interaction scenarios, such as gesture recognition and muscle activity assessment. Compared to other sEMG-based multimodal systems, the integration of PPG offers a novel perspective toward meeting practical clinical demands.
KW - Deep learning
KW - Gesture recognition
KW - Multimodal fusion sensor
KW - Photoplethysmography
KW - Surface electromyography
UR - https://www.scopus.com/pages/publications/105016016109
U2 - 10.1109/JSEN.2025.3605063
DO - 10.1109/JSEN.2025.3605063
M3 - Article
AN - SCOPUS:105016016109
SN - 1530-437X
JO - IEEE Sensors Journal
JF - IEEE Sensors Journal
ER -