Article Information
Abstract
English
Multi-task Learning (MTL) algorithms aim to improve the performance of several related learning tasks by sharing information among them. One particularly successful instance of multi-task learning is its adaptation to the support vector machine (SVM). Recent advances in large-margin learning have shown that large-margin solutions may be misled by the spread of the data and preferentially separate classes along large-spread directions. In this paper, we propose a novel formulation for multi-task learning by extending the recently published relative margin machine algorithm to the multi-task learning paradigm; the relative margin machine is itself an extension of the support vector machine for single-task learning. The objective of our algorithm is to obtain a different predictor for each task while taking into account both the fact that the tasks are related and the spread of the data. We test the proposed method experimentally on real data. The experiments show that the proposed method outperforms both existing multi-task learning with SVM and single-task learning with SVM.
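For background, the single-task relative margin machine mentioned above can be sketched as the following optimization problem (a paraphrase of Shivaswamy and Jebara's formulation; the symbol B for the spread-bounding parameter is this sketch's notation, not necessarily the paper's):

```latex
\min_{w,\,b,\,\xi}\;\; \frac{1}{2}\lVert w\rVert^{2} \;+\; C\sum_{i=1}^{n}\xi_{i}
\quad \text{s.t.} \quad
y_{i}\bigl(w^{\top}x_{i} + b\bigr) \;\ge\; 1 - \xi_{i},\qquad
\xi_{i} \;\ge\; 0,\qquad
\bigl\lvert w^{\top}x_{i} + b \bigr\rvert \;\le\; B,\qquad i = 1,\dots,n.
```

The constraint |w^T x_i + b| ≤ B bounds the spread of the projected data, which is what keeps the solution from being dominated by large-spread directions; as B → ∞ the problem reduces to the standard soft-margin SVM.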
Table of Contents
1. Introduction
2. Relative Margin Machine
3. Relative Margin Multi-Task Learning (RMMTL)
3.1. Linear Relative Margin Multi-Task Learning
3.2. Nonlinear Relative Margin Multi-Task Learning
4. Experiments
4.1. Dermatology Dataset
4.2. Isolet Dataset
4.3. Monk Dataset
4.4. Radar Landmine Detection Dataset
5. Conclusions and Discussion
Acknowledgments
References