Abstract
This article gives a quantitative account, in terms of information entropy, of perplexity as a
measure for evaluating language models: the smaller the entropy of the language as estimated
by a language model, the more accurate that model is; an interpolated model built from two
(n-1)-gram models outperforms its (n-1)-gram component models, though not an n-gram model.
We also explore methods for estimating the entropy of Chinese using statistical language models.
Table of Contents
Abstract
1. Introduction
2. Performance Appraising of Language Models Based on Entropy
3. Performance Appraising of Language Models Based on Perplexity
4. Estimating Chinese Entropy Based on Statistical Language Models
4.1 A Method to Estimate the Entropy of Chinese
4.2 Estimation of the Entropy of Chinese Using Some Statistical Models
4.3 Performance Comparison of Several Chinese Statistical Models
5. Conclusion
References