Kisiel, Damian;
Gorse, Denise;
(2023)
Portfolio Transformer for Attention-Based Asset Allocation.
In: Rutkowski, Leszek and Scherer, Rafał and Korytkowski, Marcin and Pedrycz, Witold and Tadeusiewicz, Ryszard and Zurada, Jacek M. (eds.)
ICAISC 2022: Artificial Intelligence and Soft Computing, Part I.
(pp. 61-71).
Springer Nature Switzerland: Cham, Switzerland.
Abstract
Traditional approaches to financial asset allocation start with returns forecasting followed by an optimization stage that decides the optimal asset weights. Any errors made during the forecasting step reduce the accuracy of the asset weightings, and hence the profitability of the overall portfolio. The Portfolio Transformer (PT) network, introduced here, circumvents the need to predict asset returns and instead directly optimizes the Sharpe ratio, a risk-adjusted performance metric widely used in practice. The PT is a novel end-to-end portfolio optimization framework, inspired by the numerous successes of attention mechanisms in natural language processing. With its full encoder-decoder architecture, specialized time encoding layers, and gating components, the PT has a high capacity to learn long-term dependencies among portfolio assets and hence can adapt more quickly to changing market conditions such as the COVID-19 pandemic. To demonstrate its robustness, the PT is compared against other algorithms, including the current LSTM-based state of the art, on three different datasets, with results showing that it offers the best risk-adjusted performance.
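The abstract's central idea is that the network is trained end-to-end against the Sharpe ratio itself rather than against a returns forecast. A minimal sketch of that objective is below; the function name, toy data, and annualization factor are illustrative assumptions, not details taken from the paper (the PT learns time-varying weights, whereas this sketch scores a fixed allocation):

```python
import numpy as np

def sharpe_ratio(weights, asset_returns, eps=1e-8):
    """Annualized Sharpe ratio of a fixed-weight portfolio.

    asset_returns: (T, N) array of per-period simple returns.
    weights:       (N,) portfolio weights summing to 1.
    eps guards against division by zero when returns are constant.
    """
    port = asset_returns @ weights                 # (T,) portfolio returns
    return np.sqrt(252) * port.mean() / (port.std() + eps)

# Toy usage with randomly generated daily returns for two assets.
rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.01, size=(252, 2))
w = np.array([0.5, 0.5])
print(sharpe_ratio(w, rets))
```

In an end-to-end setup of the kind the paper describes, the negative of such a quantity would serve as the training loss, so gradient descent on the network's weight outputs directly maximizes risk-adjusted performance.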
| Type: | Proceedings paper |
|---|---|
| Title: | Portfolio Transformer for Attention-Based Asset Allocation |
| Event: | 21st International Conference on Artificial Intelligence and Soft Computing (ICAISC) |
| Location: | Zakopane, Poland |
| Dates: | 19 Jun 2022 - 23 Jun 2022 |
| ISBN-13: | 978-3-031-23491-0 |
| Open access status: | An open access version is available from UCL Discovery |
| DOI: | 10.1007/978-3-031-23492-7_6 |
| Language: | English |
| Additional information: | This version is the author-accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions. |
| Keywords: | Transformers, Deep Learning, Portfolio Optimization |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10172666 |