Abstract
Although recent advances in deep learning have produced satisfactory results on large-scale data, these methods still perform poorly on few-shot classification tasks. Training a strong model such as a deep convolutional neural network depends heavily on a huge dataset, and the number of labeled classes can be extremely large. The cost of human annotation and the scarcity of data within each class have drastically limited the capability of current image classification models. In contrast, humans excel at learning and recognizing new, unseen classes from only a small set of labeled examples. Few-shot learning aims to train a classification model with limited labeled samples so that it can recognize new classes that were never seen during training. In this paper, we increase the backbone depth of the embedding network in order to learn the intra-class variation. By increasing the network depth of the embedding module, we are able to achieve competitive performance due to the minimized intra-class variation.
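The few-shot setting described above is commonly evaluated with N-way K-shot episodes: each episode samples N unseen classes, K labeled support examples per class, and a set of query examples to classify. As a rough illustration (the function name, parameters, and data layout here are our own assumptions, not the paper's code), such an episode can be sampled like this:

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=15, seed=None):
    """Sample one N-way K-shot episode from a labeled dataset.

    dataset: list of (example, label) pairs.
    Returns (support, query): support holds n_way * k_shot labeled
    examples; query holds n_way * q_queries examples to classify.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    # Choose N classes for this episode, then split each class's
    # examples into K support shots and Q query items.
    classes = rng.sample(sorted(by_class), n_way)
    support, query = [], []
    for y in classes:
        picks = rng.sample(by_class[y], k_shot + q_queries)
        support += [(x, y) for x in picks[:k_shot]]
        query += [(x, y) for x in picks[k_shot:]]
    return support, query
```

During training, the embedding network maps both support and query examples into a feature space, and queries are classified by comparison against the embedded support set; a deeper backbone, as proposed here, aims to tighten each class's cluster in that space.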
Table of Contents
1. Introduction
2. Related Work
3. Proposed Method
3.1 Problem Definition
3.2 Network Architecture
4. Experiment Result
4.1 Evaluation on Omniglot Dataset
4.2 Evaluation on mini-Imagenet Dataset
5. Conclusion
Acknowledgement
References