Source Information
International Artificial Intelligence Society (formerly The Institute of Internet, Broadcasting and Communication)
The International Journal of Advanced Smart Convergence
Volume 9 Number 2
2020.06
pp.173-178
Citations: 0 (Source: NAVER Academic)
Abstract (English)
A common problem in neural network training is overfitting: the model fits the idiosyncrasies of the training data too closely and generalizes poorly. In this paper, several methods for avoiding overfitting were compared: regularization, dropout, varying the amount of training data, and different types of neural networks. Comparative experiments with these methods were conducted to evaluate test accuracy. We found that using more training data is more effective than the regularization and dropout methods. Moreover, deep convolutional neural networks outperform both multi-layer neural networks and simple convolutional neural networks.
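As a rough illustration (the paper does not publish code, so this is a sketch under assumed conventions, not the authors' implementation), the first two techniques compared in Section 3 can be expressed in plain NumPy: an L2 penalty added to the training loss, and an inverted-dropout mask applied to hidden activations during training only.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_penalty(weights, lam=1e-3):
    """L2 regularization term to be added to the training loss.

    `lam` (the regularization strength) is a hypothetical choice here;
    the paper compares regularization but does not fix a value."""
    return lam * sum(np.sum(w ** 2) for w in weights)

def dropout(activations, rate=0.5, training=True):
    """Inverted dropout: zero a fraction `rate` of units during training
    and rescale the survivors, so no rescaling is needed at test time."""
    if not training:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

# Toy hidden-layer activations: surviving units are scaled from 1.0 to 2.0.
h = np.ones((4, 8))
h_train = dropout(h, rate=0.5, training=True)
h_test = dropout(h, rate=0.5, training=False)
print(l2_penalty([np.ones((2, 2))], lam=0.1))  # 0.1 * 4 unit weights = 0.4
```

At inference (`training=False`) the activations pass through unchanged, which is the standard inverted-dropout convention.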
Table of Contents
Abstract
1. Introduction
2. Over-fitting in Supervised Training
3. Methods to avoid neural network overfitting
3.1 Use Regularization
3.2 Use Dropout
3.3 Use Different number of data
3.4 Use Different types of Neural Network
4. Empirical Results and Observation
5. Conclusion
Acknowledgement
References
Author Information
References
