A comparison of methods to reduce overfitting in neural networks

Abstract

A common problem in neural network training is overfitting: the model fits the specifics of the training data too closely and generalizes poorly. In this paper, several methods for avoiding overfitting are compared: regularization, dropout, varying the amount of training data, and varying the type of neural network. Comparative experiments on these methods are provided, evaluated by test accuracy. We found that using more training data outperforms the regularization and dropout methods. Moreover, deep convolutional neural networks outperform both multi-layer neural networks and simple convolutional neural networks.
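As a minimal sketch of two of the techniques the paper compares (not taken from the paper itself, and with illustratively chosen hyperparameters), L2 regularization adds a penalty on the squared weights to the loss, while inverted dropout randomly zeroes activations during training and rescales the survivors so the expected activation is unchanged:

```python
import numpy as np

def l2_penalty(weights, lam):
    """L2 regularization term added to the loss: lam * sum of squared weights."""
    return lam * sum(np.sum(w ** 2) for w in weights)

def dropout(activations, rate, rng):
    """Inverted dropout: zero each unit with probability `rate`,
    scale survivors by 1/(1 - rate) so the expected value is preserved."""
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

rng = np.random.default_rng(0)

# Penalty for two all-ones weight arrays: lam * (4 + 2)
w = [np.ones((2, 2)), np.ones((2,))]
penalty = l2_penalty(w, 0.01)

# With rate 0.5, surviving units are scaled to 1 / (1 - 0.5) = 2x
a = np.ones(1000)
dropped = dropout(a, 0.5, rng)
```

At test time, dropout is simply disabled; the inverted scaling during training is what makes that possible without adjusting the weights afterward.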

Table of Contents

Abstract
1. Introduction
2. Over-fitting in Supervised Training
3. Methods to avoid neural network overfitting
3.1 Use Regularization
3.2 Use Dropout
3.3 Use Different number of data
3.4 Use Different types of Neural Network
4. Empirical Results and Observation
5. Conclusion
Acknowledgement
References

Author Information

  • Ho-Chan Kim Professor, Department of Electrical Engineering, Jeju National University, Korea
  • Min-Jae Kang Professor, Department of Electronic Engineering, Jeju National University, Korea

