
The Estimating Method of Statistical Language Models Perplexity and Chinese Entropy

Full Text Information

Yangsen Zhang, Shiwen Yu

Cited by: 0 (data provided by NAVER Academic)

Abstract (English)

This article gives a quantified account of perplexity as a measure for evaluating language models, based on the concept of information entropy: the smaller the entropy of the language as estimated by a language model, the more precise that model is; and an interpolated model built from two (n-1)-gram models is better than its (n-1)-gram component models, but not better than an n-gram model. We also explore methods for estimating the entropy of Chinese using statistical language models.
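For reference, the evaluation criterion summarized above rests on the standard relation between a model's per-word cross-entropy and its perplexity; the following is the textbook formulation, not a formula quoted from the paper. For a test text w_1 ... w_N and a model m,

H(m) = -\frac{1}{N}\sum_{i=1}^{N}\log_2 m(w_i \mid w_1 \dots w_{i-1}), \qquad PP(m) = 2^{H(m)},

so a lower estimated entropy (equivalently, a lower perplexity) means the model assigns higher probability to the observed text, i.e. it is a more precise model of the language.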

Table of Contents

Abstract
 1. Introduction
 2. Performance Appraising of Language Models Based on Entropy
 3. Performance Appraising of Language Models Based on Perplexity
 4. Estimating Chinese Entropy Based on Statistical Language Models
  4.1 A Method to Estimate the Entropy of Chinese
  4.2 Estimation of the Entropy of Chinese Using Some Statistical Models
  4.3 Performance Comparison of Several Chinese Statistical Models
 5. Conclusion
 References

Author Information

  • Yangsen Zhang Institute of Computational Linguistics, Peking University
  • Shiwen Yu Institute of Computational Linguistics, Peking University

