
λOpt: Learn to Regularize Recommender Models in Finer Levels

Chen, Yihong; Chen, Bei; He, Xiangnan; Gao, Chen; Li, Yong; Lou, Jian-Guang; Wang, Yue; (2019) λOpt: Learn to Regularize Recommender Models in Finer Levels. In: KDD '19: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. (pp. 978-986). ACM (Association for Computing Machinery): New York, NY, United States.

regularize_recommender_finer_levels.pdf - Accepted Version (PDF, 2MB)

Abstract

Recommendation models mainly deal with categorical variables, such as user/item IDs and attributes. Besides the high-cardinality issue, the interactions among such categorical variables are usually long-tailed, with the head made up of highly frequent values and a long tail of rare ones. This phenomenon results in data sparsity, making it essential to regularize the models to ensure generalization. The common practice is to employ grid search to manually tune regularization hyperparameters based on the validation data. However, searching the whole candidate space requires non-trivial effort and substantial computational resources; even so, it may not yield the optimal choice, since different parameters may require different regularization strengths. In this paper, we propose a hyperparameter optimization method, λOpt, which automatically and adaptively enforces regularization during training. Specifically, it updates the regularization coefficients based on model performance on the validation data. With λOpt, the notorious tuning of regularization hyperparameters can be avoided; more importantly, it allows fine-grained regularization (i.e., each parameter can have an individualized regularization coefficient), leading to better-generalized models. We show how to employ λOpt on matrix factorization, a classical model that is representative of a large family of recommender models. Extensive experiments on two public benchmarks demonstrate the superiority of our method in boosting the performance of top-K recommendation.
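To make the idea concrete, below is a minimal, hypothetical NumPy sketch of validation-driven, fine-grained regularization for matrix factorization. It is not the paper's exact λOpt algorithm: it adapts a per-user and per-item L2 coefficient with a simple one-step lookahead hypergradient of the validation loss, whereas the paper derives its update for the actual recommender training setup. The toy data, names, and hyperparameter values are all illustrative assumptions.

```python
# Hypothetical sketch: per-user/per-item L2 coefficients adapted from the
# validation loss via a one-step lookahead hypergradient. An illustration
# of the idea behind lambdaOpt, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 50, 40, 8

# Ground-truth embeddings used only to generate toy rating data.
P_true = rng.standard_normal((n_users, dim)) / np.sqrt(dim)
Q_true = rng.standard_normal((n_items, dim)) / np.sqrt(dim)

def make_split(n):
    u = rng.integers(0, n_users, n)
    i = rng.integers(0, n_items, n)
    r = np.sum(P_true[u] * Q_true[i], axis=1) + 0.1 * rng.standard_normal(n)
    return u, i, r

train, valid = make_split(2000), make_split(500)

P = 0.1 * rng.standard_normal((n_users, dim))  # user embeddings
Q = 0.1 * rng.standard_normal((n_items, dim))  # item embeddings
lam_u = np.full(n_users, 1e-2)                 # per-user regularization
lam_i = np.full(n_items, 1e-2)                 # per-item regularization
eta, eta_lam = 0.1, 0.5                        # assumed learning rates

def mse_grads(data, P, Q):
    """Gradients of the mean squared error w.r.t. the embedding tables."""
    u, i, r = data
    err = np.sum(P[u] * Q[i], axis=1) - r
    gP, gQ = np.zeros_like(P), np.zeros_like(Q)
    np.add.at(gP, u, 2.0 * err[:, None] * Q[i] / len(r))
    np.add.at(gQ, i, 2.0 * err[:, None] * P[u] / len(r))
    return gP, gQ

for step in range(300):
    # 1) Lookahead SGD step on the L2-regularized training loss.
    gP, gQ = mse_grads(train, P, Q)
    P1 = P - eta * (gP + 2.0 * lam_u[:, None] * P)
    Q1 = Q - eta * (gQ + 2.0 * lam_i[:, None] * Q)

    # 2) Hypergradient of the validation loss w.r.t. each coefficient:
    #    dP1/dlam_u = -2*eta*P, so dL_val/dlam_u = <grad_val(P1), -2*eta*P>.
    gVP, gVQ = mse_grads(valid, P1, Q1)
    h_u = np.sum(gVP * (-2.0 * eta * P), axis=1)
    h_i = np.sum(gVQ * (-2.0 * eta * Q), axis=1)

    # 3) Adapt the fine-grained coefficients; keep them nonnegative.
    lam_u = np.maximum(lam_u - eta_lam * h_u, 0.0)
    lam_i = np.maximum(lam_i - eta_lam * h_i, 0.0)

    # 4) Commit the model update.
    P, Q = P1, Q1

u, i, r = valid
print("validation MSE:", np.mean((np.sum(P[u] * Q[i], axis=1) - r) ** 2))
print("lambda ranges:", lam_u.min(), lam_u.max(), lam_i.min(), lam_i.max())
```

The alternating schedule (one model step on the training data, then one coefficient step driven by the validation loss) is what allows rare and frequent users or items to end up with different regularization strengths, with no grid search over a single shared coefficient.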

Type: Proceedings paper
Title: λOpt: Learn to Regularize Recommender Models in Finer Levels
Event: KDD '19: The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
ISBN-13: 9781450362016
Open access status: An open access version is available from UCL Discovery
DOI: 10.1145/3292500.3330880
Publisher version: https://doi.org/10.1145/3292500.3330880
Language: English
Additional information: This version is the author-accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: matrix factorization, regularization hyperparameter, top-k recommendation.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10211293