Robot learning from demonstrations: skill adaptation for an industrial collaborative robotic arm

Student thesis: Doctoral Thesis

Abstract

Human beings have an exceptional ability to imitate the behaviour of others and can comfortably modify their actions to adapt to new situations. Endowing a robotic system with such a remarkable skill is vital but challenging because of the immense variation and complexity of human activities, the highly dynamic nature of human environments, and the differences in embodiment between a human and a robot. In this regard, this thesis addresses how an industrial robotic arm can acquire task skills from an instructor more efficiently by enhancing the embodiment mapping from a human arm to a robotic arm, and how a robot's ability to generalise learned skills to different conditions and new situations can be improved.
Firstly, this thesis proposes a novel human-to-robot arm mapping strategy that enables a collaborative industrial robotic arm, such as the Sawyer robot, to fully exploit its high number of degrees of freedom when performing tasks. The approach also minimises the disparity between the motion taught by the human arm and the motion reproduced by the industrial robotic arm. Experimental results show that the joint torques generated using this approach are lower than those generated using conventional methods. Comparing the proposed approach with the benchmarks on the path lengths of the demonstrations further shows that the minimum path length of the trajectory generated by the proposed approach is slightly lower than those of the other three methods. Finally, the total time spent on a demonstration is shorter with the proposed approach than with the benchmarks, both individually and as a group.
Secondly, the thesis proposes a multilayer approach to the challenges of robot learning from demonstration, specifically in industrial robotic arm operational zones where obstacles can obscure the actual intent of the demonstrated trajectory. This method enables a robot to learn a task model from noisy demonstrations. A Gaussian mixture model is used to encode the motion trajectories, which are then projected back into the original feature space to retrieve a smooth generalised trajectory and its associated variability. To adapt to unstructured scenes at minimal computational cost, the retrieved trajectory is decomposed into its Gaussian components, and a potential field force is applied to adjust only the components under the influence of the obstacle. The root mean square error of the task reproduction is computed and compared with existing benchmark methods to verify that the method meets expectations. The time taken to accomplish a set goal shows that the proposed approach outperforms the benchmark methods, and the approach also requires less computational time to construct the desired trajectory around an obstacle.
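The pipeline described above can be sketched in a few lines of numpy. This is an illustrative toy, not the thesis's implementation: the mixture parameters are specified by hand (standing in for a model fitted to noisy demonstrations, e.g. with an EM-based library), and the obstacle position, influence radius, and repulsion gain are arbitrary placeholders.

```python
import numpy as np

# Hand-specified 3-component GMM over (time, x, y), standing in for one
# fitted to noisy demonstrations; its means trace an arc through (1.0, 0.5).
weights = np.array([1 / 3, 1 / 3, 1 / 3])
means = np.array([[0.17, 0.5, 0.17],
                  [0.50, 1.0, 0.50],
                  [0.83, 0.5, 0.83]])
covs = np.stack([np.diag([0.03, 0.02, 0.03])] * 3)

def gmr(t_query):
    """Gaussian mixture regression: condition position (x, y) on time t
    to retrieve a smooth generalised trajectory from the mixture."""
    traj = []
    for tq in t_query:
        # Responsibility of each component for this time step.
        h = weights * np.exp(-0.5 * (tq - means[:, 0]) ** 2 / covs[:, 0, 0])
        h /= h.sum()
        # Conditional mean of each component, blended by responsibility.
        cond = (means[:, 1:]
                + covs[:, 1:, 0] / covs[:, 0:1, 0]
                * (tq - means[:, 0])[:, None])
        traj.append(h @ cond)
    return np.array(traj)

t = np.linspace(0, 1, 100)
traj = gmr(t)

# Potential-field adjustment: repel only the part of the trajectory that
# lies inside the obstacle's influence radius; the rest is left untouched.
obstacle, radius, gain = np.array([1.0, 0.5]), 0.2, 0.05
d = np.linalg.norm(traj - obstacle, axis=1)
near = d < radius
push = (traj[near] - obstacle) / d[near, None]   # unit vectors away from obstacle
traj[near] += gain * (radius - d[near, None]) / radius * push
```

Adjusting only the affected region, rather than re-solving the whole trajectory, is what keeps the computational cost low when the scene changes.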
Thirdly, given that the robotic arm must cope with task and environmental constraints, the thesis further proposes a framework that enables a robot to traverse a scene filled with moving obstacles. The proposed model estimates the speed of a moving obstacle at every time step and effectively keeps a distance from the obstacle to avert a collision. Experimental results validate the robustness of the presented method in dynamic obstacle avoidance cases. Thus, unlike many other approaches, the proposed approach enables a robotic arm to satisfy the task constraints while avoiding collisions with moving obstacles of different shapes and sizes.
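The idea of estimating the obstacle's speed at every step and keeping a distance from its predicted position can be sketched as follows. This is a minimal 2D illustration under assumed dynamics, not the thesis's controller: the finite-difference velocity estimate, the attractive/repulsive gains, the safe distance, and the speed cap are all placeholder choices.

```python
import numpy as np

def estimate_velocity(prev_pos, curr_pos, dt):
    """Finite-difference estimate of the obstacle's velocity this step."""
    return (curr_pos - prev_pos) / dt

def avoid_step(ee, goal, obs, obs_vel, dt, safe_dist=0.2, k_att=1.0, k_rep=0.5):
    """One control step: attract the end effector toward the goal and
    repel it from the obstacle's predicted next position."""
    obs_pred = obs + obs_vel * dt            # where the obstacle will be next
    step = k_att * (goal - ee)               # attractive field toward the goal
    diff = ee - obs_pred
    d = np.linalg.norm(diff)
    if 1e-9 < d < safe_dist:                 # repulsion only inside the safe zone
        step += k_rep * (1.0 / d - 1.0 / safe_dist) * diff / d ** 2
    speed = np.linalg.norm(step)
    if speed > 1.0:                          # cap the commanded speed
        step /= speed
    return ee + dt * step

# Toy rollout: the end effector heads for the goal while an obstacle
# crosses its path from below at a constant velocity.
dt, ee, goal = 0.05, np.array([0.0, 0.0]), np.array([1.0, 0.0])
obs_prev, obs = np.array([0.5, -0.30]), np.array([0.5, -0.28])
min_clearance = np.inf
for _ in range(200):
    vel = estimate_velocity(obs_prev, obs, dt)
    ee = avoid_step(ee, goal, obs, vel, dt)
    min_clearance = min(min_clearance, np.linalg.norm(ee - obs))
    obs_prev, obs = obs, obs + vel * dt      # obstacle keeps moving upward
```

Because the repulsion is computed against the obstacle's predicted position rather than its current one, the arm reacts to where the obstacle is heading, which is what lets it maintain clearance from a moving body rather than merely a static one.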
Date of Award: Sep 2021
Original language: English
Supervisors: Zhaojie Ju (Supervisor), Mohamed Bader-El-Den (Supervisor) & Honghai Liu (Supervisor)
