A new transfer learning framework with application to model-agnostic multi-task learning

  • Gupta, Sunil1
  • Rana, Santu1
  • Saha, Budhaditya1
  • Phung, Dinh1
  • Venkatesh, Svetha1
  • 1 Deakin University, Centre for Pattern Recognition and Data Analytics (PRaDA), Geelong Waurn Ponds Campus, Waurn Ponds, VIC, Australia
Published Article
Knowledge and Information Systems
Publication Date
Feb 19, 2016
DOI: 10.1007/s10115-016-0926-z
Springer Nature


Learning from a small number of examples is a challenging problem in machine learning. An effective way to improve performance is to exploit knowledge from other related tasks. Multi-task learning (MTL) is one such useful paradigm, which aims to improve performance by jointly modeling multiple related tasks. Although numerous classification and regression models exist in the machine learning literature, most MTL models are built around ridge or logistic regression. A few works propose multi-task extensions of techniques such as support vector machines and Gaussian processes. However, all of these MTL models are tied to specific classification or regression algorithms, and there is no single MTL algorithm that can be used at a meta level with any given learning algorithm. Addressing this problem, we propose a generic, model-agnostic joint modeling framework that can take any classification or regression algorithm of a practitioner's choice (standard or custom-built) and build its MTL variant. The key observation that drives our framework is that, due to the small number of examples, the estimates of task parameters are usually poor, and we show that this leads to an under-estimation of the task relatedness between any two tasks with high probability. We derive an algorithm that brings the tasks closer to their true relatedness by improving the estimates of the task parameters. This is achieved by appropriately sharing data across tasks. We provide the detailed theoretical underpinnings of the algorithm. Through experiments with both synthetic and real datasets, we demonstrate that the multi-task variants of several classifiers/regressors (logistic regression, support vector machine, K-nearest neighbor, random forest, ridge regression, support vector regression) convincingly outperform their single-task counterparts. 
We also show that the proposed model performs comparably to or better than many state-of-the-art MTL and transfer learning baselines.
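The core idea described above — estimate each task's parameters separately, measure relatedness between tasks from those estimates, then improve the estimates by sharing data across related tasks — can be illustrated with a small sketch. This is not the authors' actual algorithm: the base learner (closed-form ridge regression), the relatedness measure (cosine similarity of parameter estimates), and the fixed similarity threshold are all illustrative assumptions; the framework in the paper is model-agnostic and derives the sharing rule theoretically.

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    # Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def mtl_by_data_sharing(tasks, lam=1.0, sim_threshold=0.5):
    """Hypothetical model-agnostic MTL sketch.

    tasks: list of (X, y) pairs, one per task.
    Returns one improved parameter vector per task.
    """
    # Step 1: single-task estimates (typically noisy when samples are few)
    ws = [fit_ridge(X, y, lam) for X, y in tasks]

    # Step 2: task relatedness as cosine similarity of parameter estimates
    # (an assumed proxy; with few examples this under-estimates relatedness)
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    # Step 3: retrain each task on its own data plus data from related tasks
    ws_mtl = []
    for t, (Xt, yt) in enumerate(tasks):
        X_pool, y_pool = [Xt], [yt]
        for s, (Xs, ys) in enumerate(tasks):
            if s != t and cos(ws[t], ws[s]) > sim_threshold:
                X_pool.append(Xs)
                y_pool.append(ys)
        ws_mtl.append(fit_ridge(np.vstack(X_pool), np.concatenate(y_pool), lam))
    return ws_mtl
```

Because the base learner appears only through `fit_ridge`, it could be swapped for any classifier or regressor with a notion of parameter (or prediction) similarity — which is the model-agnostic aspect the abstract emphasizes.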


