
Learning Errors by Radial Basis Function Neural Networks and Regularization Networks

Abstract

Regularization theory provides a sound framework for solving supervised learning problems. However, there is a gap between the theoretical results and the practical suitability of regularization networks (RN). Radial basis function networks (RBF), which can be seen as a special case of regularization networks, offer a rich selection of learning algorithms. In this work we study the relationship between RN and RBF and show that, to a certain degree, the theoretical estimates for RN hold for a concrete RBF network applied to real-world data. This leads to several recommendations for choosing the number of units in an RBF network.
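The kind of RBF training the abstract refers to can be illustrated with a short sketch. The code below is an illustration under assumptions, not the paper's implementation: it places Gaussian centers with k-means, sets a single width from the mean inter-center distance, and solves a ridge-regularized least-squares problem for the output weights; the regularization term is what links this RBF training scheme to the regularization-network view. The helper names fit_rbf and predict_rbf, the width heuristic, and the parameter lam are all hypothetical choices made for this example.

```python
# A minimal sketch (assumed, not the paper's method) of a Gaussian RBF network
# trained in two stages: centers via k-means, output weights via ridge-regularized
# least squares.
import numpy as np
from sklearn.cluster import KMeans


def fit_rbf(X, y, n_units, lam=1e-3, seed=0):
    """Fit a Gaussian RBF network with `n_units` hidden units."""
    centers = KMeans(n_clusters=n_units, n_init=10, random_state=seed).fit(X).cluster_centers_
    # Width heuristic: mean distance between distinct centers (an assumption for this sketch).
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    width = dists[dists > 0].mean() if n_units > 1 else 1.0
    # Design matrix of Gaussian basis functions evaluated at the training points.
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) ** 2 / (2 * width ** 2))
    # Ridge-regularized output weights: (Phi^T Phi + lam I) w = Phi^T y.
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_units), Phi.T @ y)
    return centers, width, w


def predict_rbf(X, centers, width, w):
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) ** 2 / (2 * width ** 2))
    return Phi @ w


# Toy usage: watch the training error as the number of units grows.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
for k in (5, 10, 20):
    c, s, w = fit_rbf(X, y, n_units=k)
    err = np.mean((predict_rbf(X, c, s, w) - y) ** 2)
    print(f"{k:3d} units: training MSE = {err:.4f}")
```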

Contents

Abstract
 1 Introduction
 2 Approximation via regularization network
 3 RBF neural networks
 4 Error estimates
 5 Conclusion
 References

Author Information

  • Youngkon Lee, e-Business Department, Korea Polytechnic University
