Lim, YS; Gorse, D; (2018) Reinforcement learning for high-frequency market making. In: ESANN 2018 - Proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. (pp. 521-526). ESANN: Bruges, Belgium.
Text: RLforHFMM.pdf - Accepted Version (886kB)
Abstract
In this paper we present the first practical application of reinforcement learning to optimal market making in high-frequency trading. States, actions, and reward formulations unique to high-frequency market making are proposed, including a novel use of the CARA utility as a terminal reward for improving learning. We show that the optimal policy trained using Q-learning outperforms state-of-the-art market making algorithms. Finally, we analyse the optimal reinforcement learning policies, and the influence of the CARA utility from a trading perspective.
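As a rough illustration of the terminal-reward idea described in the abstract, the sketch below shows a generic tabular Q-learning loop in which a CARA (exponential) utility of accumulated wealth, U(w) = (1 - exp(-γw))/γ, replaces the per-step PnL reward at episode end. The environment interface, state/action encoding, and parameter values are illustrative assumptions and are not taken from the paper.

```python
import math
import random
from collections import defaultdict

def cara_utility(wealth, gamma=0.1):
    # CARA (exponential) utility of terminal wealth; gamma is the
    # risk-aversion coefficient (value chosen here is an assumption).
    return (1.0 - math.exp(-gamma * wealth)) / gamma

def q_learning(env, episodes=1000, alpha=0.1, discount=1.0, epsilon=0.1):
    # Tabular Q-learning with a CARA utility applied as the terminal reward.
    # `env` is assumed (hypothetically) to expose reset() -> state,
    # step(action) -> (next_state, pnl_increment, done), and a list of
    # discrete `actions` (e.g. bid/ask quote placements).
    Q = defaultdict(float)
    for _ in range(episodes):
        state, wealth, done = env.reset(), 0.0, False
        while not done:
            # epsilon-greedy selection over the discrete action set
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, pnl, done = env.step(action)
            wealth += pnl
            # per-step reward is the PnL increment; at episode end the
            # CARA utility of accumulated wealth is used instead
            reward = cara_utility(wealth) if done else pnl
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + discount * best_next - Q[(state, action)])
            state = next_state
    return Q
```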
| Type: | Proceedings paper |
|---|---|
| Title: | Reinforcement learning for high-frequency market making |
| Event: | European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning |
| ISBN-13: | 9782875870476 |
| Open access status: | An open access version is available from UCL Discovery |
| Publisher version: | https://www.esann.org/ |
| Language: | English |
| Additional information: | This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions. |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10116730 |