UCL Discovery

Towards Inverse Reinforcement Learning for Limit Order Book Dynamics

Roa Vicens, J; Chtourou, C; Filos, A; Rullan, F; Gal, Y; Silva, R; (2019) Towards Inverse Reinforcement Learning for Limit Order Book Dynamics. arXiv. DOI: 10.48550/arXiv.1906.04813. Green open access

Text: Towards Inverse Reinforcement Learning for Limit Order Book Dynamics.pdf - Published Version
Download (434kB)

Abstract

Multi-agent learning is a promising method to simulate aggregate competitive behaviour in finance. Learning expert agents’ reward functions from their external demonstrations is therefore particularly relevant for the subsequent design of realistic agent-based simulations. Inverse Reinforcement Learning (IRL) aims to acquire such reward functions through inference, allowing the resulting policy to generalize to states not observed in the past. This paper investigates whether IRL can infer such rewards from agents acting within a real financial stochastic environment: the limit order book (LOB). We introduce a simple one-level LOB in which the interactions of a number of stochastic agents and an expert trading agent are modelled as a Markov decision process. We consider two cases for the expert’s reward: either a simple linear function of state features, or a complex, more realistic non-linear function. Given the expert agent’s demonstrations, we attempt to discover their strategy by modelling their latent reward function using linear and Gaussian process (GP) regressors from previous literature, as well as our own approach based on Bayesian neural networks (BNN). While all three methods can learn the linear case, only the GP-based method and our proposed BNN method are able to recover the non-linear reward. Our BNN IRL algorithm outperforms the other two approaches as the number of samples increases. These results illustrate that complex behaviours, induced by non-linear reward functions amid agent-based stochastic scenarios, can be deduced through inference, encouraging the use of inverse reinforcement learning for opponent modelling in multi-agent systems.
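For concreteness, below is a minimal, illustrative sketch (Python/NumPy) of the kind of latent-reward model the abstract describes: a small neural network over LOB state features whose posterior is approximated with Monte Carlo dropout, one standard way to realise a Bayesian neural network. Everything in it is an assumption for illustration, not the authors' implementation: the two hypothetical state features, the network size, and especially the squared-error training signal, which stands in for a full IRL objective (a real IRL loop would derive the reward gradient from expert demonstrations rather than from ground-truth reward targets).

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class DropoutRewardNet:
    """Scalar reward r(s) over LOB state features. Dropout is kept on at
    prediction time, so repeated stochastic forward passes act as
    approximate posterior samples (Monte Carlo dropout)."""

    def __init__(self, n_features, n_hidden=32, p_drop=0.1, lr=1e-2):
        self.W1 = rng.normal(0.0, 1.0 / np.sqrt(n_features), (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), n_hidden)
        self.b2 = 0.0
        self.p, self.lr = p_drop, lr

    def _forward(self, X):
        pre = X @ self.W1 + self.b1                       # (N, H) pre-activations
        mask = (rng.random(pre.shape) >= self.p) / (1.0 - self.p)
        h = relu(pre) * mask                              # dropout always active
        return h @ self.w2 + self.b2, pre, mask, h

    def sgd_step(self, X, y):
        """One squared-error step towards targets y -- a stand-in for the
        IRL gradient, used here only to make the sketch self-contained."""
        out, pre, mask, h = self._forward(X)
        g = 2.0 * (out - y) / len(y)                      # dLoss/dOut, shape (N,)
        g_pre = np.outer(g, self.w2) * (pre > 0) * mask   # backprop to layer 1
        self.w2 -= self.lr * h.T @ g
        self.b2 -= self.lr * g.sum()
        self.W1 -= self.lr * X.T @ g_pre
        self.b1 -= self.lr * g_pre.sum(axis=0)
        return float(np.mean((out - y) ** 2))

    def predict(self, X, n_draws=100):
        """MC-dropout mean and standard deviation of the inferred reward."""
        draws = np.stack([self._forward(X)[0] for _ in range(n_draws)])
        return draws.mean(axis=0), draws.std(axis=0)

# Toy stand-in for a non-linear expert reward over two hypothetical
# state features (e.g. inventory and best-level order imbalance).
X = rng.normal(size=(2000, 2))
y_true = np.tanh(3.0 * X[:, 0]) * X[:, 1]

net = DropoutRewardNet(n_features=2)
for step in range(3000):
    idx = rng.integers(0, len(X), 64)
    loss = net.sgd_step(X[idx], y_true[idx])
print("final batch loss:", round(loss, 4))

mean_r, std_r = net.predict(X[:5])
print("posterior mean:", np.round(mean_r, 3))
print("posterior std :", np.round(std_r, 3))

The spread of the stochastic forward passes in predict serves as an uncertainty estimate over the inferred reward, which is the practical appeal of a BNN-based reward model: it can indicate states where the recovered reward function should not be trusted.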

Type: Article
Title: Towards Inverse Reinforcement Learning for Limit Order Book Dynamics
Event: International Conference on Machine Learning - 2019
Location: Long Beach, California
Dates: 10 June 2019 - 15 June 2019
Open access status: An open access version is available from UCL Discovery
DOI: 10.48550/arXiv.1906.04813
Publisher version: https://doi.org/10.48550/arXiv.1906.04813
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Maths and Physical Sciences
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Maths and Physical Sciences > Dept of Statistical Science
URI: https://discovery.ucl.ac.uk/id/eprint/10076266
Downloads since deposit: 15
