UCL Discovery

Distributed variance regularized Multitask Learning

Donini, M; Martinez-Rego, D; Goodson, M; Shawe-Taylor, J; Pontil, M; (2016) Distributed variance regularized Multitask Learning. In: 2016 International Joint Conference on Neural Networks (IJCNN) (pp. 3101-3109). IEEE.

Text: Shawe-Taylor_paper.pdf - Accepted Version (343kB)

Abstract

Past research on Multitask Learning (MTL) has focused mainly on devising adequate regularizers and less on their scalability. In this paper, we present a method to scale up MTL methods which penalize the variance of the task weight vectors. The method builds upon the alternating direction method of multipliers to decouple the variance regularizer. It can be efficiently implemented by a distributed algorithm, in which the tasks are first solved independently and subsequently corrected to pool information from other tasks. We show that the method works well in practice and converges in a few distributed iterations. Furthermore, we empirically observe that the number of iterations is nearly independent of the number of tasks, yielding a computational gain of O(T) over standard solvers. We also present experiments on a large URL classification dataset, which is challenging both in terms of volume of data points and dimensionality. Our results confirm that MTL can obtain superior performance over either learning a common model or independent task learning.
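The objective described in the abstract can be illustrated with a minimal sketch. This is not the paper's ADMM algorithm; it is a simplified alternating scheme, written for this page, for the variance-regularized multitask least-squares objective min over w_1..w_T of sum_t ||X_t w_t - y_t||^2 + lam * sum_t ||w_t - w_bar||^2. It mirrors the structure the abstract describes: each task is solved independently against the current mean weight vector w_bar, then w_bar is recomputed to pool information across tasks. Treating w_bar as fixed within each sweep is an approximation (in the exact objective, w_bar depends on all the w_t simultaneously); the function name and interface are hypothetical.

```python
import numpy as np

def variance_regularized_mtl(Xs, ys, lam=1.0, n_iters=20):
    """Simplified alternating sketch (NOT the paper's ADMM method) for
        min_{w_1..w_T}  sum_t ||X_t w_t - y_t||^2 + lam * sum_t ||w_t - w_bar||^2.

    Per sweep: each task solves a ridge-like problem centered at the shared
    mean w_bar (independent, hence trivially distributable), then w_bar is
    recomputed -- the 'correction' step that pools information across tasks.
    """
    d = Xs[0].shape[1]          # feature dimension, shared across tasks
    T = len(Xs)                 # number of tasks
    W = np.zeros((T, d))        # one weight vector per task
    w_bar = np.zeros(d)         # shared mean weight vector
    for _ in range(n_iters):
        for t, (X, y) in enumerate(zip(Xs, ys)):
            # Task-local solve: (X^T X + lam I) w_t = X^T y + lam * w_bar
            A = X.T @ X + lam * np.eye(d)
            b = X.T @ y + lam * w_bar
            W[t] = np.linalg.solve(A, b)
        w_bar = W.mean(axis=0)  # pooling step: recompute the task mean
    return W, w_bar
```

Increasing lam pulls the task solutions toward one another (the cross-task variance of the learned weight vectors shrinks), while lam near zero recovers independent task learning.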

Type: Proceedings paper
Title: Distributed variance regularized Multitask Learning
Event: 2016 International Joint Conference on Neural Networks (IJCNN)
ISBN-13: 9781509006199
Open access status: An open access version is available from UCL Discovery
DOI: 10.1109/IJCNN.2016.7727594
Publisher version: https://doi.org/10.1109/IJCNN.2016.7727594
Language: English
Additional information: © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Keywords: Convex programming; distributed algorithms; learning (artificial intelligence); pattern classification; vectors; MTL methods; URL classification dataset; alternating direction method of multipliers; common model learning; data point volume; distributed algorithm; distributed variance regularized multitask learning; independent task learning; task weight vector variance penalization; Convergence; Linear programming; Mathematical model; Optimization; Scalability; Support vector machines; Training
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/1535951
Downloads since deposit: 274