Abstract

Endowing robots with skills mastered by human experts has become a promising research domain in recent years, driven by the widespread deployment of robots. This thesis introduces an advanced human-to-robot skill transfer method built on a novel probabilistic framework. To validate the method, a dedicated human skill dataset of humanoid motions drawn from everyday life is created for skill transfer experiments.
Before proposing the human-to-robot skill transfer framework, I conduct a comprehensive review of the robot skill learning domain. The literature review systematically considers the three major problems, skill perception, skill representation and skill learning, together with their solution methods. We start from the core model for learning motion trajectories, namely the probabilistic mixture model, which is widely applied in skill learning from multiple human demonstrations.
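As a minimal illustration of this baseline model (not the thesis's CGMM), the sketch below fits a standard Gaussian Mixture Model to pooled (time, state) points from several simulated demonstrations. The demonstration signal, number of components and use of scikit-learn are all assumptions made for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical demonstrations: noisy copies of a 1-D reaching profile.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
demos = [np.sin(np.pi * t) + 0.02 * rng.standard_normal(t.size) for _ in range(5)]

# Pool all demonstrations as (time, state) points in the time-state space.
data = np.column_stack([np.tile(t, len(demos)), np.concatenate(demos)])

# Fit a mixture of full-covariance Gaussians to the pooled points.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(data)

print(gmm.means_.shape)  # (4, 2): one (time, state) mean per component
```

Each component then summarises a local segment of the demonstrations, which is the property the curved variant proposed in this thesis improves on for non-linear trajectories.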
The Curved Gaussian Mixture Model (CGMM) is proposed as a novel variant of the original probabilistic mixture model, the Gaussian Mixture Model (GMM), in which the component Gaussians have curved principal axes. These curvatures enhance performance in modelling non-linear data, such as motion trajectories in the time-state space. Parameter estimation algorithms based on a fuzzy objective function are proposed to learn the CGMM both in batch and incrementally. Empirical results discussed in the thesis demonstrate the superiority of the CGMM over other state-of-the-art methods in data fitting.
Curved Gaussian Mixture Regression (CGMR) is a specialised regression method for the CGMM developed in this thesis. Within the human-to-robot skill transfer framework, a human motion skill is learned by the proposed CGMM and regressed with the CGMR. The combination of CGMM and CGMR outperforms other types of motion trajectory encoding and retrieval algorithms. Additionally, to learn the skill model sequentially from a large amount of demonstration data, I propose a lifelong skill learning framework using the fuzzy incremental learning algorithm.
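The regression step can be sketched with the classic (non-curved) Gaussian Mixture Regression baseline, rather than the thesis's CGMR: a 2-D (time, state) mixture is conditioned on time, and the expected state is the responsibility-weighted blend of each component's conditional mean. The data and function names below are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a (time, state) GMM to simulated demonstrations, as before.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
demos = [np.sin(np.pi * t) + 0.02 * rng.standard_normal(t.size) for _ in range(5)]
data = np.column_stack([np.tile(t, len(demos)), np.concatenate(demos)])
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(data)

def gaussian_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def gmr_predict(gmm, x):
    """Condition the fitted 2-D mixture on time x, return expected state."""
    # Responsibility of each component given the input (time) dimension only.
    h = np.array([w * gaussian_pdf(x, m[0], c[0, 0])
                  for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_)])
    h /= h.sum()
    # Per-component conditional mean of the state given the time.
    cond = np.array([m[1] + c[1, 0] / c[0, 0] * (x - m[0])
                     for m, c in zip(gmm.means_, gmm.covariances_)])
    return float(h @ cond)

print(gmr_predict(gmm, 0.5))  # should lie near sin(pi * 0.5) = 1.0
```

Querying `gmr_predict` over a grid of times yields a smooth reference trajectory, which is the role the regressed CGMR trajectory plays in the transfer framework.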
A Sawyer robot, a typical redundant industrial manipulator, is used to reproduce the learned humanoid skills in this thesis. Once multiple demonstrations of a human skill are perceived by the proposed virtual-reality-based perception system, a CGMM is trained on the perceived data through the lifelong learning process. A trajectory regressed with the CGMR is then taken as a reference in Cartesian space to reproduce the joint movements of the robot. To obtain joint movements that result in anthropomorphic motions in Cartesian space, an inverse kinematics method with morphologic constraints is proposed. Finally, skill perception, learning and reproduction are tested under the human-to-robot skill transfer framework with a Sawyer robot.
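The thesis's morphologic constraints are not detailed in this abstract, so as a generic illustration only, the sketch below resolves redundancy with damped least-squares inverse kinematics plus a nullspace term that biases the joints toward a preferred posture. The planar 3-link arm, link lengths, rest posture and gains are all hypothetical.

```python
import numpy as np

L = np.array([0.4, 0.3, 0.2])        # hypothetical link lengths (m)
q_rest = np.array([0.3, 0.5, 0.3])   # illustrative "preferred" posture (rad)

def fk(q):
    """Planar forward kinematics: end-effector (x, y) position."""
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q):
    """2x3 positional Jacobian of the planar 3-link arm."""
    a = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(a[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(a[i:]))
    return J

def ik(target, q, iters=200, damping=1e-2, k_posture=0.1):
    for _ in range(iters):
        e = target - fk(q)
        J = jacobian(q)
        # Damped least-squares step toward the Cartesian target.
        dq_task = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), e)
        # Nullspace term nudging the redundant joints toward q_rest
        # without (to first order) disturbing the end-effector.
        N = np.eye(3) - np.linalg.pinv(J) @ J
        q = q + dq_task + N @ (k_posture * (q_rest - q))
    return q

q = ik(np.array([0.5, 0.4]), np.zeros(3))
print(np.round(fk(q), 3))  # should land near the target [0.5, 0.4]
```

In the same spirit, an anthropomorphic constraint can be encoded as the secondary objective pulled in through the nullspace, leaving the Cartesian reference trajectory tracking to the primary task.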
Date of Award: Dec 2020
Supervisors: Zhaojie Ju, Honghai Liu & Ivan Jordanov