TY  - INPR
N1  - © 2024 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.
PB  - Institute of Electrical and Electronics Engineers
A1  - De Lellis, Francesco
A1  - Coraggio, Marco
A1  - Russo, Giovanni
A1  - Musolesi, Mirco
A1  - di Bernardo, Mario
JF  - IEEE Transactions on Control Systems Technology
KW  - Computational control
KW  - deep reinforcement learning (RL)
KW  - learning-based control
KW  - policy validation
KW  - reward shaping
Y1  - 2024/05/17/
TI  - Guaranteeing Control Requirements via Reward Shaping in Reinforcement Learning
AV  - public
N2  - In addressing control problems such as regulation and tracking through reinforcement learning (RL), it is often required to guarantee that the acquired policy meets essential performance and stability criteria, such as a desired settling time and steady-state error, before deployment. Motivated by this, we present a set of results and a systematic reward-shaping procedure that: 1) ensures the optimal policy generates trajectories that align with specified control requirements and 2) allows one to assess whether any given policy satisfies them. We validate our approach through comprehensive numerical experiments conducted in two representative environments from OpenAI Gym: the Pendulum swing-up problem and the Lunar Lander. Using both tabular and deep RL methods, our experiments consistently confirm the efficacy of the proposed framework in ensuring that policies adhere to the prescribed control requirements.
ID  - discovery10194559
UR  - http://dx.doi.org/10.1109/tcst.2024.3393210
EP  - 12
ER  -