Multi-Task Support Vector Machine for Data Classification

Abstract

Multi-task learning (MTL) algorithms aim to improve the performance of several related learning tasks by sharing information among them. One particularly successful instance of multi-task learning is its adaptation to the support vector machine (SVM). Recent advances in large-margin learning have shown that such solutions may be misled by the spread of the data and preferentially separate classes along large-spread directions. In this paper, we propose a novel formulation for multi-task learning by extending the recently published relative margin machine algorithm to the multi-task learning paradigm. The new method is an extension of the support vector machine for single-task learning. The objective of our algorithm is to obtain a different predictor for each task while accounting both for the fact that the tasks are related and for the spread of the data. We evaluate the proposed method experimentally on real data. The experiments show that it performs better than existing multi-task learning with SVM and single-task learning with SVM.
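
As a minimal sketch of the formulation outlined above (the notation and the shared-plus-offset coupling below are assumptions on our part, following the standard relative margin machine and regularized multi-task SVM formulations, not necessarily the paper's exact model): the relative margin machine keeps the SVM large-margin constraints but additionally bounds the spread of the projected data by a constant B, and a common multi-task device writes each task's weight vector as a shared component plus a task-specific offset.

% Single-task relative margin machine (assumed standard form):
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_{i}\xi_i
\quad\text{s.t.}\quad y_i\big(w^\top x_i + b\big) \ge 1 - \xi_i,\ \
\big|w^\top x_i + b\big| \le B,\ \ \xi_i \ge 0

% A plausible multi-task coupling (assumed, shared part plus per-task offset):
w_t = w_0 + v_t,\qquad
\min_{w_0,\,\{v_t\},\,b_t,\,\xi}\ \tfrac{\lambda_0}{2}\|w_0\|^2
 + \tfrac{\lambda_1}{2}\sum_{t=1}^{T}\|v_t\|^2 + C\sum_{t,i}\xi_{ti}
\quad\text{s.t.}\quad y_{ti}\big(w_t^\top x_{ti} + b_t\big) \ge 1 - \xi_{ti},\ \
\big|w_t^\top x_{ti} + b_t\big| \le B,\ \ \xi_{ti} \ge 0

Under such a coupling, the ratio of the regularization weights controls how close the per-task predictors stay to the shared one, while the bound B plays the same spread-limiting role for every task as in the single-task relative margin machine.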

Table of Contents

Abstract
 1. Introduction
 2. Relative Margin Machine
 3. Relative Margin Multi-Task Learning (RMMTL)
  3.1. Linear Relative Margin Multi-Task Learning
  3.2. Nonlinear Relative Margin Multi-Task Learning
 4. Experiments
  4.1. Dermatology Dataset
  4.2. Isolet Dataset
  4.3. Monk Dataset
  4.4. Radar Landmine Detection Dataset
 5. Conclusions and Discussion
 Acknowledgments
 References

Author Information

  • Yunyan Song College of Science, Tianjin University of Technology, Tianjin, 300384, China
  • Wenxin Zhu College of Basic Science, Tianjin Agricultural University, Tianjin, 300384, China
