UCL Discovery

Semi-Counterfactual Risk Minimization Via Neural Networks

Aminian, Gholamali; Vega, Roberto; Rivasplata, Omar; Toni, Laura; Rodrigues, Miguel; (2022) Semi-Counterfactual Risk Minimization Via Neural Networks. In: Proceedings of the 15th European Workshop on Reinforcement Learning (EWRL 2022). (pp. 1-20). European Workshop on Reinforcement Learning. Green open access.

PDF: 2209.07148v2.pdf - Accepted Version (364kB)

Abstract

Counterfactual risk minimization is a framework for offline policy optimization with logged data, where each sample point consists of a context, an action, a propensity score, and a reward. In this work, we build on this framework and propose a learning method for settings where the rewards for some samples are not observed, so that the logged data consists of a subset of samples with unknown rewards and a subset of samples with known rewards. This setting arises in many application domains, including advertising and healthcare. Although reward feedback is missing for some samples, it is still possible to leverage the unknown-reward samples in order to minimize the risk; we refer to this setting as semi-counterfactual risk minimization. To approach this kind of learning problem, we derive new upper bounds on the true risk under the inverse propensity score estimator. We then build upon these bounds to propose a regularized counterfactual risk minimization method, where the regularization term is based on the logged unknown-reward dataset only; hence it is reward-independent. We also propose another algorithm based on generating pseudo-rewards for the logged unknown-reward dataset. Experimental results with neural networks and benchmark datasets indicate that these algorithms can leverage the logged unknown-reward dataset in addition to the logged known-reward dataset.
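The inverse propensity score (IPS) estimator that the abstract's risk bounds build on can be sketched as follows. This is an illustrative, minimal sketch of the standard IPS estimate from logged bandit feedback, not the paper's implementation; all function and variable names here are hypothetical.

```python
import numpy as np

def ips_risk(rewards, propensities, target_probs):
    """Inverse propensity score (IPS) estimate of the risk of a target
    policy from logged bandit feedback.

    Each logged sample carries the observed reward r_i, the logging
    policy's propensity p_i = pi0(a_i | x_i), and the target policy's
    probability pi(a_i | x_i) for the logged action. The importance
    weight pi/pi0 corrects for the mismatch between the two policies.
    (Names and the loss convention below are illustrative assumptions.)
    """
    rewards = np.asarray(rewards, dtype=float)
    propensities = np.asarray(propensities, dtype=float)
    target_probs = np.asarray(target_probs, dtype=float)
    weights = target_probs / propensities
    # Treat risk as expected negative reward under the target policy.
    return float(np.mean(-rewards * weights))

# Toy example: four logged known-reward samples.
rewards = [1.0, 0.0, 1.0, 0.5]
propensities = [0.5, 0.25, 0.5, 0.8]
target_probs = [0.6, 0.1, 0.9, 0.4]
print(ips_risk(rewards, propensities, target_probs))  # -0.8125
```

Samples with unknown rewards cannot enter this sum directly, which is why the paper instead uses them in a reward-independent regularizer or assigns them pseudo-rewards.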

Type: Proceedings paper
Title: Semi-Counterfactual Risk Minimization Via Neural Networks
Event: EWRL 2022 - European Workshop on Reinforcement Learning 15 (2022)
Open access status: An open access version is available from UCL Discovery
Publisher version: https://ewrl.wordpress.com/ewrl15-2022/
Language: English
Additional information: © 2022 the Authors. Original content in this paper is licensed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) Licence (https://creativecommons.org/licenses/by/4.0/).
Keywords: Offline policy optimization, Counterfactual Risk Minimization, Unknown reward
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Maths and Physical Sciences
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Electronic and Electrical Eng
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Maths and Physical Sciences > Dept of Statistical Science
URI: https://discovery.ucl.ac.uk/id/eprint/10165989
