Goal-directed Humanoid Robot Motion Learning

Project: Research

Description

Robots such as the ASIMO, iCub and NAO humanoids have moved out of industrial settings and closer into our daily lives, raising the questions of how to interact with them and how to control them to execute a particular task. However, it is not possible to program all potential tasks for a humanoid robot in advance, so there is a need for solutions that enable users to quickly and intuitively interact with robots and teach them new motion skills. Learning from human demonstrations is one of the most promising state-of-the-art approaches to teaching robots new action skills. Unlike teleoperation-based methods, it has the following main advantages: a) it gives users, especially non-experts, a natural and efficient way to interact directly with robots; b) it endows robots with the ability to grow their skills and adapt online; c) the learnt skills can potentially be adapted to human living environments.

This project aims to investigate novel goal-directed online learning methods and to build a human-robot interactive demonstration system that enables people to interact with and teach humanoid robots more easily, using natural body gestures. The methods may include, but are not limited to, probabilistic graphical models (e.g. dynamic Gaussian Mixture Models [1-2]) and reinforcement learning [3-4]. Humanoid robots will use such methods to learn human action skills and effectively achieve action goals through a purpose-built interface that is inexpensive, person-independent, easy to use, and requires no wearable equipment. The project is expected to substantially advance the state of the art in human-robot interaction through robot self-learning and self-adaptation, and to open new applications in social robotics, service robotics, healthcare robotics, and beyond.
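To make the GMM-based learning-from-demonstration idea concrete, here is a minimal sketch of the common pattern of fitting a Gaussian Mixture Model to a demonstrated trajectory and reproducing it by Gaussian Mixture Regression (conditioning on time). This is an illustrative assumption, not the project's actual implementation: the synthetic sine-wave "demonstration", the number of components, and the `gmr` helper are all hypothetical, and scikit-learn's `GaussianMixture` stands in for whatever dynamic GMM variant the project uses.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical 1-DoF demonstration: (time, joint angle) pairs
# sampled from a noisy sine-shaped motion.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
angle = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
demo = np.column_stack([t, angle])

# Fit a joint GMM over (time, angle); the component count is a guess.
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(demo)

def gmr(gmm, t_query):
    """Gaussian Mixture Regression: condition the joint GMM on time
    to predict the expected joint angle at t_query."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for t_query, from the time marginal
    # (the 1/sqrt(2*pi) constant cancels in the normalisation below).
    resp = np.array([
        w * np.exp(-0.5 * (t_query - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
        for w, m, c in zip(weights, means, covs)
    ])
    resp /= resp.sum()
    # Per-component conditional mean of angle given time.
    cond = [m[1] + c[1, 0] / c[0, 0] * (t_query - m[0])
            for m, c in zip(means, covs)]
    return float(np.dot(resp, cond))

# Reproduce the motion at an arbitrary time, e.g. a quarter period.
pred = gmr(gmm, 0.25)
```

The regression step is what lets the robot generalise: the learnt model can be queried at any time (or, with extra conditioning variables, at any goal position), rather than merely replaying recorded samples.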
Status: Finished
Effective start/end date: 1/09/15 – 1/09/18
ID: 4241644