Original Article Information
Abstract
English
Many text classification approaches rely on statistical term measures or synsets to implement document representation. Such representations ignore the lexical semantic content and relations of terms, losing distilled mutual information. This work proposes a synthetic document representation method, a WordNet-based hybrid VSM, to solve this problem. The method constructs a data structure of semantic-element information that characterizes lexical semantic content and supports disambiguation of word stems. Using this structure as a template, a lexical semantic vector composed of lexical semantic content is built in the lexical semantic space of the corpus, and lexical semantic relations are marked on the vector. This vector is then concatenated with a special-term vector to form the eigenvector of the hybrid VSM. Applying the NWKNN algorithm to the Reuters-21578 corpus and an adjusted version of it, experiments show that the eigenvector achieves a better F1 measure than document representations based on TF-IDF.
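The hybrid representation described above can be illustrated with a minimal sketch. This is not the authors' implementation: the sense inventory below is a hypothetical stand-in for WordNet synset lookups, and the scoring is simplified, but it shows the key step of concatenating a TF-IDF term vector with a lexical semantic vector to form the hybrid eigenvector.

```python
import math
from collections import Counter

# Toy corpus; each document is a list of word stems.
corpus = [
    ["bank", "loan", "interest"],
    ["river", "bank", "water"],
    ["loan", "rate", "interest"],
]

vocab = sorted({t for doc in corpus for t in doc})

def tf_idf_vector(doc, corpus, vocab):
    """Standard TF-IDF term vector over a fixed vocabulary."""
    counts = Counter(doc)
    n_docs = len(corpus)
    vec = []
    for term in vocab:
        tf = counts[term] / len(doc)
        df = sum(1 for d in corpus if term in d)
        idf = math.log(n_docs / df) if df else 0.0
        vec.append(tf * idf)
    return vec

# Hypothetical sense inventory standing in for WordNet synsets.
SENSES = {
    "bank": ["financial_institution", "river_side"],
    "loan": ["financial_transaction"],
    "interest": ["financial_gain"],
    "river": ["body_of_water"],
    "water": ["body_of_water"],
    "rate": ["financial_gain"],
}

# The lexical semantic space: all semantic elements in the inventory.
sense_space = sorted({s for ss in SENSES.values() for s in ss})

def semantic_vector(doc):
    """Count the semantic elements activated by the document's stems."""
    counts = Counter(s for t in doc for s in SENSES.get(t, []))
    return [counts[s] for s in sense_space]

def hybrid_eigenvector(doc):
    """Concatenate the term vector with the lexical semantic vector."""
    return tf_idf_vector(doc, corpus, vocab) + semantic_vector(doc)

vec = hybrid_eigenvector(corpus[0])
print(len(vec))  # dimension = len(vocab) + len(sense_space)
```

In the paper's pipeline a classifier such as NWKNN would then operate on these concatenated eigenvectors instead of plain TF-IDF vectors.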
Table of Contents
1. Introduction
2. Related Work
3. Proposed Program
3.1 The Motivation and Theoretical Analysis
3.2 Hybrid VSM of Text Corpus
3.3 Algorithm NWKNN
4. Experiment and Result
4.1 Experiment Setup
4.2 The Results
5. Conclusion
References