Abstract
Dexterous robot teleoperation enables remote access to specialized therapies, addressing geographic barriers in healthcare. However, traditional methods suffer from high computational complexity and local-minima issues, rely on expensive motion capture devices, and yield inaccurate results for unseen motions. We propose a novel vision-guided motion retargeting approach that uses only an RGB camera, offering a simple, low-cost solution. Our method introduces a graph encoder network to produce reasonable initial encodings and then optimizes in the latent space, reducing complexity and avoiding entrapment in local minima. It transforms retargeting into latent code optimization, enabling truly end-to-end automated retargeting without manual rule design. Even with limited data or unseen motions, our approach achieves high-precision retargeting through iterative latent-space optimization, mitigating the reliance on massive datasets. Through simulated experiments retargeting human demonstrations onto a bi-manual robot, we validate our method's effectiveness in reproducing motions, demonstrating the feasibility and accuracy of our approach for human-robot motion retargeting.
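To make the core idea concrete, the sketch below illustrates retargeting as latent code optimization: a trained encoder provides an initial latent code, and that code (not the network weights) is refined iteratively against a retargeting objective. This is a minimal, hypothetical PyTorch sketch, not the paper's implementation; `encoder`, `decoder`, and `retarget_loss` are assumed stand-ins for the graph encoder, motion decoder, and loss described in the abstract.

```python
import torch

def retarget_via_latent_optimization(encoder, decoder, human_motion,
                                     retarget_loss, steps=200, lr=1e-2):
    # 1) Encode the observed human motion to obtain a reasonable initial latent code.
    with torch.no_grad():
        z = encoder(human_motion)

    # 2) Optimize the latent code itself rather than the network weights,
    #    so the decoder's learned motion prior constrains the search.
    z = z.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        robot_motion = decoder(z)                      # candidate robot trajectory
        loss = retarget_loss(robot_motion, human_motion)
        loss.backward()
        optimizer.step()

    return decoder(z).detach()                         # final retargeted motion
```

Optimizing in latent space in this way is what allows the approach to handle unseen motions: the initial encoding only needs to be roughly correct, and the iterative refinement closes the remaining gap.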
Original language | English |
---|---|
Title of host publication | 2024 17th International Convention on Rehabilitation Engineering and Assistive Technology, i-CREATe 2024 and World Rehabilitation Robot Convention, WRRC 2024 - Proceedings |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Number of pages | 6 |
ISBN (Electronic) | 9798350355154 |
DOIs | |
Publication status | Published - 11 Dec 2024 |
Event | 17th International Convention on Rehabilitation Engineering and Assistive Technology, i-CREATe 2024 - Shanghai, China. Duration: 23 Aug 2024 → 26 Aug 2024
Conference
Conference | 17th International Convention on Rehabilitation Engineering and Assistive Technology, i-CREATe 2024 |
---|---|
Country/Territory | China |
City | Shanghai |
Period | 23/08/24 → 26/08/24 |