TY - JOUR
T1 - Dual-hand detection for human-robot interaction by a parallel network based on hand detection and body pose estimation
AU - Gao, Qing
AU - Liu, Jinguo
AU - Ju, Zhaojie
AU - Zhang, Xin
PY - 2019/2/15
Y1 - 2019/2/15
N2 - In this study, a parallel network based on hand detection and body pose estimation is proposed to detect and distinguish a human's right and left hands. The network is applied to hand-gesture-based human-robot interaction (HRI). The method fully exploits both hand feature information and the hand's place within the human body structure. One channel of the network uses a ResNet-Inception Single Shot MultiBox Detector to extract hand features for hand detection. The other channel first estimates the human body pose and then infers the positions of the left and right hands from the forward kinematic tree of the human skeleton. The results of the two channels are then fused; in the fusion module, the body structure is used to correct the hand detection results and to distinguish the right hand from the left. Experimental results verify that the parallel deep neural network improves hand detection accuracy and reliably distinguishes between the right and left hands. The method is also applied to hand-gesture-based interaction between astronauts and an astronaut assistant robot, and it proves well suited to this HRI system.
AB - In this study, a parallel network based on hand detection and body pose estimation is proposed to detect and distinguish a human's right and left hands. The network is applied to hand-gesture-based human-robot interaction (HRI). The method fully exploits both hand feature information and the hand's place within the human body structure. One channel of the network uses a ResNet-Inception Single Shot MultiBox Detector to extract hand features for hand detection. The other channel first estimates the human body pose and then infers the positions of the left and right hands from the forward kinematic tree of the human skeleton. The results of the two channels are then fused; in the fusion module, the body structure is used to correct the hand detection results and to distinguish the right hand from the left. Experimental results verify that the parallel deep neural network improves hand detection accuracy and reliably distinguishes between the right and left hands. The method is also applied to hand-gesture-based interaction between astronauts and an astronaut assistant robot, and it proves well suited to this HRI system.
UR - https://ieeexplore.ieee.org/document/8643076
U2 - 10.1109/TIE.2019.2898624
DO - 10.1109/TIE.2019.2898624
M3 - Article
SN - 0278-0046
VL - 66
SP - 9663
EP - 9672
JO - IEEE Transactions on Industrial Electronics
JF - IEEE Transactions on Industrial Electronics
IS - 12
ER -