TY - JOUR
T1 - Finite-time tracking control for serial manipulators using reinforcement learning-based active disturbance rejection
AU - Wang, Haiyan
AU - Spanogianopoulos, Sotirios
AU - Li, Baojiang
AU - Wang, Xichao
N1 - Publisher Copyright:
© ICROS, KIEE and Springer 2025.
PY - 2025/09/09
Y1 - 2025/09/09
N2 - Serial manipulators play a critical role in various applications, where accurate trajectory tracking is essential. This work presents a finite-time tracking control approach based on reinforcement learning (RL) and active disturbance rejection control (ADRC) for serial manipulators with unknown, bounded uncertainties. The control structure is built upon the ADRC framework, in which system uncertainties and external disturbances are collectively treated as a total disturbance and estimated by an extended state observer (ESO). To enhance disturbance estimation, an actor-critic RL agent is incorporated into the ESO, where the actor neural network models the total disturbance, and the critic neural network evaluates the trajectory tracking cost. Through the interaction between the actor and critic networks, a precise disturbance estimate is achieved. Additionally, to ensure fast and accurate trajectory tracking, a finite-time controller based on a non-singular fast terminal sliding mode is introduced, replacing the state error feedback controller in ADRC. The control stability is analyzed using Lyapunov theory. Simulation results show that the proposed control method offers superior tracking performance, along with enhanced uncertainty suppression and robustness.
KW - Active disturbance rejection control
KW - finite-time control
KW - reinforcement learning
KW - uncertain serial manipulators
UR - https://www.scopus.com/pages/publications/105015701054
U2 - 10.1007/s12555-024-0151-6
DO - 10.1007/s12555-024-0151-6
M3 - Article
AN - SCOPUS:105015701054
SN - 1598-6446
VL - 23
SP - 2759
EP - 2768
JO - International Journal of Control, Automation and Systems
JF - International Journal of Control, Automation and Systems
IS - 9
ER -