Employing Latent Dirichlet Allocation Model for Topic Extraction of Chinese Text

Abstract (English)

A hidden topic model for Chinese text, whose semantics are complex, is urgently needed, since China has played an increasingly significant role in the rapid development of globalization in recent years. This paper details the basic process of extracting latent Chinese topics by presenting a Chinese topic extraction scheme based on the Latent Dirichlet Allocation (LDA) model. The approach was then applied to CCL, an authoritative Chinese corpus, to extract topics for its nine categories. Rigorous empirical analysis shows that LDA achieves a considerably higher average precision rate than three other comparable Chinese topic extraction techniques; however, its average recall rate is worse than that of KNN and almost the same as that of the PLSI model. Moreover, the recall and precision rates of LDA-CH are worse than those of LDA-EH. Therefore, the LDA model should be improved to adapt to the distinctive features of Chinese words, so as to make it better suited to Chinese topic extraction.
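The sketch below illustrates the general kind of pipeline the abstract describes, not the paper's own implementation: Chinese word segmentation followed by LDA topic extraction. It assumes the jieba segmenter and gensim's LdaModel; the toy documents, number of topics, and other parameters are illustrative placeholders rather than the CCL data or the observed parameter values reported in the paper.

```python
# Minimal sketch of LDA-based Chinese topic extraction, assuming jieba + gensim.
# The documents and parameters below are hypothetical, not the paper's CCL setup.
import jieba
from gensim import corpora
from gensim.models import LdaModel

# Toy documents standing in for one corpus category.
raw_docs = [
    "人工智能技术在金融领域的应用越来越广泛",
    "股票市场的波动受到宏观经济政策的影响",
    "深度学习模型可以用于文本主题抽取",
]

# Segment each document into words; Chinese text has no whitespace delimiters,
# so segmentation is a required preprocessing step before topic modeling.
tokenized = [[w for w in jieba.cut(doc) if len(w) > 1] for doc in raw_docs]

# Build the word-to-id dictionary and the bag-of-words corpus.
dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(doc) for doc in tokenized]

# Train LDA; num_topics and passes are placeholder values for illustration only.
lda = LdaModel(corpus=bow_corpus, id2word=dictionary,
               num_topics=2, passes=10, random_state=0)

# Inspect the top words of each extracted topic.
for topic_id, words in lda.show_topics(num_topics=2, num_words=5, formatted=False):
    print(topic_id, [w for w, _ in words])
```

In practice, the quality of the extracted topics depends heavily on the segmentation step, which is one reason the abstract argues that LDA should be adapted to the distinctive features of Chinese words.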

Table of Contents

Abstract
 1. Introduction
 2. Literature Review
  2.1. Research of Topic Extraction for English Text
  2.2. Research of Topic Extraction for Chinese Text
 3. Methodology
  3.1. LDA Model
  3.2. Topic Extraction Model for Chinese Text Based on LDA
 4. Experiments
  4.1. Evaluation Merits
  4.2. The Value of Observed Parameters
  4.3. Other Compared Topic Modeling Techniques
 5. Results
  5.1. Comparison with Other Techniques
  5.2. Comparison between LDA-CH and LDA-EH
 6. Conclusions and Discussion
 Acknowledgements
 References

Author Information

  • Qihua Liu, School of Information Technology, Jiangxi University of Finance and Economics
