Human Hand Gestures Understanding via A-mode Ultrasound Modality

  • Donghan Liu

Student thesis: Doctoral Thesis


Recognizing hand gestures is inherently challenging due to the intricate complexities involved, such as variations in force, subtle muscle changes, and diverse hand movements. Traditional single-modality recognition systems tend to excel on specific features but often fall short of comprehensively capturing the full spectrum of hand gestures. While multimodal systems can address these issues, they come with their own drawbacks. For instance, synchronizing multimodal signals can be challenging, and different signals require different data processing, which can significantly increase computational demands. Additionally, at present, multimodal devices tend to be more costly for users and less portable. Against this backdrop, this thesis investigates the potential of A-mode Ultrasound (AUS) signals for hand gesture recognition. It aims to propose novel algorithms for AUS-sensing-based gesture recognition, broaden the portfolio of recognizable hand gestures in real-life scenarios, and establish a benchmark for limb motor function rehabilitation and replacement. Our vision emphasizes the capability of AUS signals to recognize a range of gestures, supported by evaluation protocol design and benchmark establishment. In particular, the absence of temporal modelling and the lack of robustness in cross-user application are addressed to mitigate the inherent limitations of AUS in deciphering dynamic gestures.
Firstly, motivated by kinematics and real-life gesture mapping, we addressed the gaps between existing AUS datasets and desired ones, categorizing gestures into static, dynamic, and in-hand manipulation for data collection. Following the refined data-gathering protocol, a new dataset was established and evaluated with machine learning models, demonstrating its potential in relevant fields.
Secondly, a deep learning-based dynamic hand gesture recognition framework was designed to further utilise AUS signals. In particular, two concrete implementations were proposed: a Long Short-Term Memory-based (LSTM) solution that models the temporal correlations between AUS signal frames, and a Convolutional Neural Network-based (CNN) solution that stacks the frames of a dynamic action as image channels. The proposed framework was evaluated and outperformed the state of the art. This contribution fills the gap in dynamic hand gesture recognition with AUS signals and offers a new avenue for real-life gesture recognition.
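The two input layouts described above can be illustrated with a minimal sketch. The transducer count, samples per A-scan, and frame count below are illustrative assumptions, not the thesis's actual acquisition parameters: an LSTM consumes one feature vector per time step, while a CNN treats the stacked frames as image channels.

```python
import numpy as np

# Hypothetical setup (illustrative, not the thesis's actual hardware):
# 8 ultrasound transducers, 1000 samples per A-scan, and a dynamic
# gesture recorded as a sequence of T frames.
n_transducers, n_samples, T = 8, 1000, 20
frames = np.random.rand(T, n_transducers, n_samples)

# LSTM-style layout: one flattened feature vector per time step,
# giving a (T, features) sequence the recurrent model can step through.
lstm_input = frames.reshape(T, n_transducers * n_samples)

# CNN-style layout: treat each (n_transducers x n_samples) frame as a
# 2-D "image" and stack the T frames along the channel axis.
cnn_input = frames  # already channels-first: (T, n_transducers, n_samples)

print(lstm_input.shape)  # (20, 8000)
print(cnn_input.shape)   # (20, 8, 1000)
```

The key design difference is where the temporal dimension lives: the LSTM keeps it as an explicit sequence axis, while the CNN folds it into channels so standard 2-D convolutions can mix information across frames.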
Thirdly, a fine-tuning transfer learning-based algorithm was proposed to improve cross-user performance. The algorithm accommodates the need for rapid adaptation to new users and for error correction for existing users. Significantly improved recognition accuracy was observed, indicating the effectiveness of the proposed fine-tuning transfer learning method. Our approach meets the requirement of rapid adaptation for new users and enhances the generalization capacity of AUS signal-based hand gesture recognition methods.
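The fine-tuning idea can be sketched in a few lines, under heavy simplification: a backbone trained on source users is frozen, and only a lightweight classifier head is refit on a handful of calibration samples from the new user. All names, dimensions, and the ridge-regression head below are illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained backbone: a fixed random projection standing in
# for the feature extractor learned from existing (source) users.
n_features, n_hidden, n_classes = 64, 32, 5
W_base = rng.normal(size=(n_features, n_hidden))  # frozen during fine-tuning

def features(x):
    # Frozen backbone: shared representation, not updated for the new user
    return np.tanh(x @ W_base)

# A small calibration set from the new user (small, as rapid adaptation
# requires); synthetic data here purely for illustration.
X_cal = rng.normal(size=(25, n_features))
y_cal = rng.integers(0, n_classes, size=25)
Y_onehot = np.eye(n_classes)[y_cal]

# Fine-tune only the head, via ridge-regularised least squares
H = features(X_cal)
W_head = np.linalg.solve(H.T @ H + 0.1 * np.eye(n_hidden), H.T @ Y_onehot)

def predict(x):
    return np.argmax(features(x) @ W_head, axis=1)

acc = (predict(X_cal) == y_cal).mean()
print(f"calibration accuracy: {acc:.2f}")
```

Because only the head is refit, adaptation needs far less data and compute than retraining the whole model, which is what makes rapid per-user calibration feasible; in a real deep-learning pipeline the same pattern appears as freezing early layers and fine-tuning the final ones.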
Finally, the limitations of the research and potential directions for future investigation were summarized. Specifically, the importance of developing new algorithms or novel network architectures for real-time gesture recognition was highlighted. By innovating in this direction, we can expect to significantly reduce latency and facilitate more seamless user interactions.
Date of Award: 19 Oct 2023
Original language: English
Awarding Institution
  • University of Portsmouth
Supervisors: Honghai Liu, Ivan Jordanov & Rinat Khusainov
