Source Information
Security Engineering Research Support Center (IJMUE)
International Journal of Multimedia and Ubiquitous Engineering
Vol.9 No.9
2014.09
pp.151-158
Citations: 0 (Source: Naver Academic Information)
Abstract
English
This paper investigates a gradient descent algorithm with a penalty term for a recurrent neural network. The penalty considered here is a term proportional to the norm of the weights; its primary role in the method is to control the magnitude of the weights. After proving that all of the weights remain automatically bounded during the iteration process, we also present some deterministic convergence results for this learning method, showing that the gradient of the error function goes to zero (weak convergence) and the weight sequence goes to a fixed point (strong convergence), respectively. A numerical example is provided to support the theoretical analysis.
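For intuition, the following is a minimal NumPy sketch of the idea described in the abstract: gradient descent with an L2 weight penalty for a small Elman-style recurrent network, where each update follows w ← w − η(∇E(w) + λw), so the λw term continually shrinks the weights and keeps their norm bounded. This is an illustrative reconstruction, not the paper's exact formulation; the network architecture, learning rate η, penalty coefficient λ, and toy data below are all assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's exact setup) of gradient descent with an
# L2 weight penalty for an Elman-style recurrent network. The penalty term
# (lambda/2)*||w||^2 contributes lambda*w to the gradient, which shrinks the
# weights at every step. Sizes, eta, lam, and the toy data are assumptions.

rng = np.random.default_rng(0)

n_in, n_hid = 1, 4          # input and hidden sizes (assumed)
eta, lam = 0.05, 1e-3       # learning rate and penalty coefficient (assumed)

W_in = rng.normal(scale=0.5, size=(n_hid, n_in))    # input -> hidden
W_rec = rng.normal(scale=0.5, size=(n_hid, n_hid))  # hidden -> hidden
w_out = rng.normal(scale=0.5, size=n_hid)           # hidden -> output

def forward(xs):
    """Run the recurrent net over a sequence; return hidden states and output."""
    h = np.zeros(n_hid)
    hs = [h]
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
        hs.append(h)
    return hs, w_out @ h

# Toy task (illustrative): predict the mean of a short random sequence.
seqs = [rng.normal(size=(5, n_in)) for _ in range(20)]
targets = [s.mean() for s in seqs]

for epoch in range(200):
    for xs, y in zip(seqs, targets):
        hs, y_hat = forward(xs)
        err = y_hat - y

        # Backpropagation through time for the squared error (1/2)*err^2.
        g_out = err * hs[-1]
        g_in = np.zeros_like(W_in)
        g_rec = np.zeros_like(W_rec)
        delta = err * w_out * (1.0 - hs[-1] ** 2)  # dE/d(pre-activation)
        for t in range(len(xs) - 1, -1, -1):
            g_in += np.outer(delta, xs[t])
            g_rec += np.outer(delta, hs[t])
            delta = (W_rec.T @ delta) * (1.0 - hs[t] ** 2)

        # Penalized update: error gradient plus lambda * weight (weight decay).
        W_in -= eta * (g_in + lam * W_in)
        W_rec -= eta * (g_rec + lam * W_rec)
        w_out -= eta * (g_out + lam * w_out)
```

Note that the penalized update is equivalent to weight decay: even where the error gradient is small, the λw term pulls the weights toward the origin, which is the mechanism behind the automatic boundedness claimed in the abstract.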
Table of Contents
Abstract
1. Introduction
2. Network Structure and Learning Method with Penalty
3. Main results
4. Proofs
5. Numerical Experiments
References
Author Information
References
