UCL Discovery

Chance constrained policy optimization for process control and optimization

Petsagkourakis, Panagiotis; Sandoval, Ilya Orson; Bradford, Eric; Galvanin, Federico; Zhang, Dongda; Rio-Chanona, Ehecatl Antonio del; (2022) Chance constrained policy optimization for process control and optimization. Journal of Process Control , 111 pp. 35-45. 10.1016/j.jprocont.2022.01.003. Green open access

Full text: 2008.00030v2.pdf (download, 3MB)

Abstract

Chemical process optimization and control are affected by (1) plant-model mismatch, (2) process disturbances, and (3) constraints for safe operation. Reinforcement learning by policy optimization is a natural way to address these challenges: it handles stochasticity and plant-model mismatch, and it accounts directly for the effect of future uncertainty and its feedback in a proper closed-loop manner, all without the need for an inner optimization loop. One of the main reasons reinforcement learning has not been adopted for industrial processes (or almost any engineering application) is that it lacks a framework for handling safety-critical constraints. Current policy optimization algorithms rely on difficult-to-tune penalty parameters, fail to reliably satisfy state constraints, or offer guarantees only in expectation. We propose a chance constrained policy optimization (CCPO) algorithm that guarantees the satisfaction of joint chance constraints with high probability, which is crucial for safety-critical tasks. This is achieved by introducing constraint tightening (backoffs), which are computed simultaneously with the feedback policy. The backoffs are adjusted with Bayesian optimization using the empirical cumulative distribution function of the probabilistic constraints, and are therefore self-tuned. The result is a general methodology that can be embedded into existing policy optimization algorithms to enable them to satisfy joint chance constraints with high probability. We present case studies that analyse the performance of the proposed approach.
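The backoff-tuning idea described in the abstract can be sketched in a few lines: tighten each constraint by a backoff, simulate many closed-loop trajectories under the feedback policy, and use the empirical distribution of the worst-case constraint value to check whether the joint chance constraint holds with the desired probability. The sketch below is a minimal illustration only, not the paper's implementation: the scalar dynamics, the proportional policy, and the use of bisection in place of Bayesian optimization are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(backoff, n_steps=20):
    """One closed-loop trajectory of a toy scalar system
    x_{k+1} = 0.9 x_k + u_k + w_k with a hypothetical proportional
    policy steering toward the tightened constraint boundary."""
    x, worst = 1.0, -np.inf
    limit = 1.0 - backoff          # tightened (backed-off) version of x <= 1
    for _ in range(n_steps):
        u = 0.5 * (limit - x)      # stand-in feedback policy (assumption)
        w = rng.normal(0.0, 0.05)  # process disturbance
        x = 0.9 * x + u + w
        worst = max(worst, x)      # worst-case constraint value over the run
    return worst

def empirical_violation_prob(backoff, n_mc=500):
    """Monte Carlo estimate, via the empirical distribution of the
    worst-case state, of the joint chance of violating x <= 1."""
    worsts = np.array([rollout(backoff) for _ in range(n_mc)])
    return float(np.mean(worsts > 1.0))

# Self-tune the backoff so the joint violation probability stays below
# a target level. The paper adjusts backoffs with Bayesian optimization;
# bisection is a simpler stand-in for this one-dimensional illustration.
target = 0.05                      # allow at most 5% joint violation
lo, hi = 0.0, 0.5
for _ in range(20):
    mid = 0.5 * (lo + hi)
    if empirical_violation_prob(mid) > target:
        lo = mid                   # too many violations: tighten more
    else:
        hi = mid                   # feasible: try a smaller backoff

print(f"tuned backoff ~ {hi:.3f}, "
      f"violation prob ~ {empirical_violation_prob(hi):.3f}")
```

The key point the sketch shares with the paper is that the backoff is not hand-tuned: it is chosen from the empirical distribution of closed-loop constraint values, so the tightening adapts to whatever uncertainty the simulated trajectories actually exhibit.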

Type: Article
Title: Chance constrained policy optimization for process control and optimization
Open access status: An open access version is available from UCL Discovery
DOI: 10.1016/j.jprocont.2022.01.003
Publisher version: https://doi.org/10.1016/j.jprocont.2022.01.003
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
Keywords: Policy search, Reinforcement Learning, Data-driven process control, Chance constraints, Bayesian Optimization
UCL classification: UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Chemical Engineering
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL
URI: https://discovery.ucl.ac.uk/id/eprint/10143173
