UCL Discovery

Taking advantage of sparsity in multi-task learning

Lounici, K; Tsybakov, AB; Pontil, M; Van De Geer, SA (2009) Taking advantage of sparsity in multi-task learning. In: COLT 2009 - The 22nd Conference on Learning Theory.

Full text: PDF, 207KB (requires a PDF viewer such as GSview, Xpdf or Adobe Acrobat Reader)

Abstract

We study the problem of estimating multiple linear regression equations for the purposes of both prediction and variable selection. Following recent work on multi-task learning [1], we assume that the sparsity patterns of the regression vectors are included in the same set of small cardinality. This assumption leads us to consider the Group Lasso as a candidate estimation method. We show that this estimator enjoys sparsity oracle inequalities and favourable variable selection properties. The results hold under a restricted eigenvalue condition and a coherence condition on the design matrix, which naturally extend recent work in [3, 19]. In particular, in the multi-task learning scenario, in which the number of tasks can grow, we are able to completely remove the effect of the number of predictor variables from the bounds. Finally, we show how our results can be extended to more general noise distributions, of which we require only that the variance be finite.
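
For orientation, one common way to write the multi-task Group Lasso criterion referred to in the abstract is sketched below in LaTeX. The notation (T tasks, n observations per task, p predictors, per-task design matrices X_t and response vectors y_t) is illustrative and may not match the paper's own conventions or normalisation.

\hat{B} \in \arg\min_{B = (\beta_1, \dots, \beta_T) \in \mathbb{R}^{p \times T}}
  \; \frac{1}{nT} \sum_{t=1}^{T} \| y_t - X_t \beta_t \|_2^2
  \; + \; 2\lambda \sum_{j=1}^{p} \| b^{j} \|_2

Here b^{j} denotes the j-th row of B, i.e. the coefficients of predictor j across all T tasks, and \lambda > 0 is a regularisation parameter. The mixed l1/l2 penalty drives entire rows of B to zero, so the estimated regression vectors share a common sparsity pattern, which is the structural assumption described in the abstract.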

Type: Proceedings paper
Title: Taking advantage of sparsity in multi-task learning
Open access status: An open access version is available from UCL Discovery
UCL classification: UCL > School of BEAMS > Faculty of Engineering Science > Computer Science
