
Convergence of Gradient Descent Algorithm with Penalty Term For Recurrent Neural Networks

Abstract (English)

This paper investigates a gradient descent algorithm with a penalty term for a recurrent neural network. The penalty considered here is a term proportional to the norm of the weights, and its primary role in the method is to control the magnitude of the weights. After proving that all of the weights remain automatically bounded during the iteration process, we present deterministic convergence results for this learning method, showing that the gradient of the error function goes to zero (weak convergence) and that the weight sequence converges to a fixed point (strong convergence). A numerical example is provided to support the theoretical analysis.
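The learning method described in the abstract amounts to batch gradient descent on an error function augmented with a weight-norm penalty, i.e. minimizing E(w) + λ‖w‖². The sketch below is a minimal illustration of that idea only, not the paper's exact formulation: the Elman-style network, the toy sine-prediction task, and the values of the learning rate eta and penalty coefficient lam are assumptions made for the example.

```python
# Minimal sketch: batch gradient descent with an L2 weight-norm penalty
# for a small Elman-style recurrent network (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict the next value of a sine sequence.
T = 50
xs = np.sin(np.linspace(0, 4 * np.pi, T + 1))
inputs, targets = xs[:-1], xs[1:]

n_in, n_hid, n_out = 1, 8, 1
W_in = rng.normal(scale=0.1, size=(n_hid, n_in))    # input -> hidden
W_rec = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden -> hidden (recurrent)
W_out = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output

eta, lam = 0.05, 1e-3  # learning rate and penalty coefficient (assumed values)

def forward(W_in, W_rec, W_out):
    """Run the network over the whole sequence; return hidden states and outputs."""
    h = np.zeros(n_hid)
    hs, ys = [h], []
    for t in range(T):
        h = np.tanh(W_in @ inputs[t:t + 1] + W_rec @ h)
        hs.append(h)
        ys.append(W_out @ h)
    return hs, np.array(ys).ravel()

for epoch in range(200):
    hs, ys = forward(W_in, W_rec, W_out)
    err = ys - targets  # dE/dy for E(w) = 0.5 * sum((y - target)^2)

    # Backpropagation through time for the plain error term E(w).
    gW_in = np.zeros_like(W_in)
    gW_rec = np.zeros_like(W_rec)
    gW_out = np.zeros_like(W_out)
    dh_next = np.zeros(n_hid)
    for t in reversed(range(T)):
        gW_out += np.outer([err[t]], hs[t + 1])
        dh = err[t] * W_out.ravel() + dh_next
        dz = dh * (1.0 - hs[t + 1] ** 2)          # tanh derivative
        gW_in += np.outer(dz, inputs[t:t + 1])
        gW_rec += np.outer(dz, hs[t])
        dh_next = W_rec.T @ dz

    # Gradient of the penalized error E(w) + lam * ||w||^2 adds 2 * lam * w.
    W_in -= eta * (gW_in + 2 * lam * W_in)
    W_rec -= eta * (gW_rec + 2 * lam * W_rec)
    W_out -= eta * (gW_out + 2 * lam * W_out)

_, ys = forward(W_in, W_rec, W_out)
print("final mean squared error:", np.mean((ys - targets) ** 2))
```

The penalty enters only through the extra 2·lam·w term in each update, which is what keeps the weight norms from growing without bound during training.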

Table of Contents

Abstract
 1. Introduction
 2. Network Structure and Learning Method with Penalty
 3. Main results
 4. Proofs
 5. Numerical Experiments
 References

Author Information

  • Xiaoshuai Ding College of Education, Tibet University for Nationalities, Xianyang 712082, China
  • Kuaini Wang College of Science, China Agricultural University, Beijing 100083, China
