Weighted Fast Adaptation Prior on Meta-Learning

Abstract

As architectures in deep learning approaches become deeper, the need for data grows substantially. In real-world problems, obtaining large amounts of data in some disciplines is very costly. Therefore, learning from limited data has become a very appealing area in recent years. Meta-learning offers a new perspective on learning a model under this limitation. Meta-SGD, a state-of-the-art model built on a meta-learning framework, was proposed with the key idea of learning a hyperparameter, namely the learning rate of the fast adaptation stage, in the outer update. However, this learning rate is usually set to be very small; consequently, the SGD objective gives only a small improvement to the weight parameters. In other words, the prior becomes the key to obtaining good adaptation. Because meta-learning approaches aim to learn with a single gradient step in the inner update, performance may suffer, especially if the prior is far from the expected one; conversely, a prior close to the expectation makes adapting the model very effective. For this reason, we propose adding a weight term to decrease, or in some conditions increase, the effect of this prior. Experiments on few-shot learning show that emphasizing or weakening the prior can yield better performance than using its original value.
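To make the idea concrete, below is a minimal sketch of one place such a weight term could enter a Meta-SGD-style inner update. The abstract does not state the exact formulation, so the multiplicative placement of the weight w on the prior, and the names weighted_fast_adaptation, theta, alpha, and grad_fn, are illustrative assumptions rather than the paper's definition; with w = 1 the step reduces to the standard Meta-SGD inner update theta - alpha * grad(theta).

    import numpy as np

    def weighted_fast_adaptation(theta, grad_fn, alpha, w):
        # theta:   meta-learned prior (initial parameters)
        # grad_fn: callable returning the task-loss gradient at given parameters
        # alpha:   per-parameter learning rate, learned in the outer update (Meta-SGD)
        # w:       hypothetical weight on the prior; w > 1 emphasizes it,
        #          w < 1 weakens it (w = 1 recovers the plain Meta-SGD step)
        weighted_prior = w * theta
        # single-gradient-step fast adaptation in the inner update
        return weighted_prior - alpha * grad_fn(weighted_prior)

    # Usage on a toy quadratic task loss L(x) = 0.5 * ||x - target||^2
    theta = np.array([0.5, -1.0])   # prior from meta-training (illustrative values)
    alpha = np.array([0.1, 0.1])    # learned per-parameter step sizes
    target = np.array([1.0, 1.0])
    grad = lambda x: x - target     # gradient of the quadratic loss
    adapted = weighted_fast_adaptation(theta, grad, alpha, w=1.2)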

Table of Contents

Abstract
1. Introduction
2. Related Work
2.1 Meta-Learning
2.2 Meta-SGD
3. Proposed Method
4. Experiment and Result
5. Conclusion
Acknowledgement
References

Author Information

  • Tintrim Dwi Ary Widhianingsih PhD Student, Department of Computer Engineering, Dongseo University, Busan, Korea
  • Dae-Ki Kang Professor, Department of Computer Engineering, Dongseo University, Busan, Korea
