Abstract
Deep learning techniques such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) show superior performance in text classification compared to traditional approaches such as Support Vector Machines (SVMs) and Naïve Bayes classifiers. When CNNs are used for text classification, word embedding or character embedding is the step that transforms words or characters into fixed-size vectors before feeding them into the convolutional layers. In this paper, we propose a parallel word-level and character-level embedding approach in CNNs for text classification. The proposed approach can capture word-level and character-level patterns concurrently. To show the usefulness of the proposed approach, we perform experiments with two English and three Korean text datasets. The experimental results show that character-level embedding works better for Korean, while word-level embedding performs well for English. The results also reveal that the proposed approach provides better performance than traditional CNNs with word-level or character-level embedding alone, in both Korean and English documents. From a more detailed investigation, we find that the proposed approach tends to perform better than the traditional embedding approaches when the amount of data is relatively small.
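To make the parallel-branch idea concrete, the following is a minimal PyTorch sketch of a CNN with separate word-level and character-level embedding branches whose pooled features are concatenated before classification. All names, vocabulary sizes, embedding dimensions, and kernel sizes here are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class ParallelEmbeddingCNN(nn.Module):
    """Sketch: parallel word- and character-level embedding branches,
    each convolved and max-pooled over time, then concatenated.
    Hyperparameters below are placeholders, not the paper's settings."""

    def __init__(self, word_vocab=20000, char_vocab=100,
                 word_dim=300, char_dim=50,
                 num_filters=100, num_classes=2):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim, padding_idx=0)
        self.char_emb = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        # One convolution per branch; kernel sizes are arbitrary examples.
        self.word_conv = nn.Conv1d(word_dim, num_filters, kernel_size=3)
        self.char_conv = nn.Conv1d(char_dim, num_filters, kernel_size=5)
        self.fc = nn.Linear(2 * num_filters, num_classes)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, word_seq_len); char_ids: (batch, char_seq_len)
        w = self.word_emb(word_ids).transpose(1, 2)  # (batch, word_dim, seq)
        c = self.char_emb(char_ids).transpose(1, 2)  # (batch, char_dim, seq)
        # Convolve each branch, then max-pool over the time dimension.
        w = torch.relu(self.word_conv(w)).max(dim=2).values
        c = torch.relu(self.char_conv(c)).max(dim=2).values
        # Concatenating the two views lets the classifier use word-level
        # and character-level patterns concurrently, as the abstract describes.
        return self.fc(torch.cat([w, c], dim=1))

model = ParallelEmbeddingCNN()
logits = model(torch.randint(1, 20000, (4, 50)),   # 4 docs, 50 word ids each
               torch.randint(1, 100, (4, 200)))    # 4 docs, 200 char ids each
print(logits.shape)  # torch.Size([4, 2])
```

A single-embedding baseline would keep only one branch; the parallel design simply widens the feature vector fed to the classifier rather than changing either branch's convolution.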
Table of Contents
Ⅰ. Introduction
Ⅱ. Related Work
2.1. Text Classification
2.2. Deep Learning in Text Mining
2.3. Convolutional Neural Networks (CNNs) for Text Classification
Ⅲ. Proposed Approach
3.1. Hyperparameters Configuration
3.2. Word Vector and Character Vector
3.3. Regularization and Normalization
Ⅳ. Experimental Design and Datasets
4.1. Comparing Models
4.2. Datasets
Ⅴ. Results and Discussion
5.1. Complementary Effect
5.2. Size Effect
5.3. Possibility of Improvement through Hyperparameter and Embedding Optimization
5.4. Implications
Ⅵ. Conclusion and Future Work
Acknowledgements