UCL Discovery

Actor-Critic Reinforcement Learning for Control With Stability Guarantee

Han, M; Zhang, L; Wang, J; Pan, W; (2020) Actor-Critic Reinforcement Learning for Control With Stability Guarantee. IEEE Robotics and Automation Letters , 5 (4) pp. 6217-6224. 10.1109/LRA.2020.3011351. Green open access

Text: 2004.14288 (1).pdf - Accepted Version. Download (5MB).

Abstract

Reinforcement Learning (RL) and its integration with deep learning have achieved impressive performance in various robotic control tasks, ranging from motion planning and navigation to end-to-end visual manipulation. However, model-free RL trained solely on data offers no stability guarantee. From a control-theoretic perspective, stability is the most important property of any control system, since it is closely tied to the safety, robustness, and reliability of robotic systems. In this letter, we propose an actor-critic RL framework for control that guarantees closed-loop stability by employing the classic Lyapunov method from control theory. First, a data-based stability theorem is proposed for stochastic nonlinear systems modeled as Markov decision processes. We then show that this stability condition can be exploited as the critic in actor-critic RL to learn a controller/policy. Finally, the effectiveness of our approach is evaluated on several well-known 3-dimensional robot control tasks and a synthetic-biology gene-network tracking task across three popular physics simulation platforms. As an empirical evaluation of the advantage of stability, we show that the learned policies enable the systems to recover to the equilibrium or to way-points when perturbed, within limits, by uncertainties such as system parameter variations and external disturbances.

Type: Article
Title: Actor-Critic Reinforcement Learning for Control With Stability Guarantee
Open access status: An open access version is available from UCL Discovery
DOI: 10.1109/LRA.2020.3011351
Publisher version: https://doi.org/10.1109/LRA.2020.3011351
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: Reinforcement learning, stability, Lyapunov's method
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10116270
