
Poster Session II

Fine-Tuning Pre-Trained Deep Learning Models for Multiclass Grayscale Images Classification

Abstract (English)

Transfer learning significantly improves the performance of a deep learning model on challenging datasets. However, pre-trained models impose certain architectural constraints. For example, state-of-the-art pre-trained models expect an input image with three color channels because of the wide availability of color images. Yet there are domains, e.g., medical applications, where grayscale images are produced and models are required to perform certain tasks on them. Therefore, in this work we propose an approach to run pre-trained models on grayscale images while benefiting from transfer learning for a multiclass classification task. We used the MobileNetV2 pre-trained model to classify the CIFAR datasets. We compared our results with a conventional method in which the grayscale image is stacked to form a pseudo-color image. Our analysis shows that the proposed method reduces the computational time per epoch while improving the accuracy of the model.
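
The abstract does not detail how the pre-trained network is adapted to single-channel input, so the sketch below is only illustrative. It shows the conventional pseudo-color baseline described above (stacking the grayscale image into three channels) alongside one common way of accepting grayscale directly, namely replacing MobileNetV2's first convolution and averaging its pre-trained RGB filters. The torchvision-based code and the filter-averaging initialization are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Conventional baseline from the abstract: stack the grayscale image
    # into a three-channel pseudo-color image and feed the model unchanged.
    gray = torch.rand(8, 1, 32, 32)          # a batch of CIFAR-sized grayscale images
    pseudo_rgb = gray.repeat(1, 3, 1, 1)     # shape (8, 3, 32, 32)

    # Assumed single-channel adaptation (not necessarily the authors' method):
    # swap MobileNetV2's first convolution for a 1-channel version and
    # initialize it with the average of the pre-trained RGB filters.
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
    old_conv = model.features[0][0]          # Conv2d(3, 32, kernel_size=3, stride=2)
    new_conv = nn.Conv2d(1, old_conv.out_channels,
                         kernel_size=old_conv.kernel_size,
                         stride=old_conv.stride,
                         padding=old_conv.padding,
                         bias=old_conv.bias is not None)
    with torch.no_grad():
        new_conv.weight.copy_(old_conv.weight.mean(dim=1, keepdim=True))
    model.features[0][0] = new_conv

    # Replace the classifier head for the 10 CIFAR classes before fine-tuning.
    model.classifier[1] = nn.Linear(model.last_channel, 10)

    logits = model(gray)                     # forward pass directly on grayscale input
    print(logits.shape)                      # torch.Size([8, 10])

Under this assumption, the network consumes one channel instead of three replicated copies of the same data, which is one plausible source of the reported reduction in per-epoch computation time.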

Contents

Abstract
I. INTRODUCTION
II. METHODS
III. RESULTS AND DISCUSSION
IV. CONCLUSION
ACKNOWLEDGMENT
REFERENCES

Author Information

  • Ijaz Ahmad, Department of Computer Engineering, Chosun University
  • Seokjoo Shin, Department of Computer Engineering, Chosun University
