Abstract (English)
In this study, we propose an Automated Writing Evaluation (AWE) system for Korean learner texts based on a multi-view model. To address the linguistic complexity of Korean, the model represents the input text with multiple n-gram views and combines the outputs of base models trained on these features in a higher-level meta-model that makes the final prediction. The system outperformed transformer-based AWE models, achieving an average accuracy of 83.5% and an average F1 score above 82% across the evaluation datasets. It also maintained consistent performance across all proficiency levels and was notably more robust on unseen data. In addition, the system improves the interpretability of automatic grading by providing a confidence score for each prediction, linguistic feature profiles derived from PCA, and the n-gram tokens that contributed to the rating. This study is expected to help teachers evaluate learners' writing more efficiently and accurately.
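The multi-view architecture described in the abstract can be read as a stacking ensemble: several base models, each trained on a different n-gram representation of the same text, feed their outputs to a higher-level meta-model. The sketch below illustrates this idea with scikit-learn; the view definitions, n-gram ranges, and choice of logistic regression are illustrative assumptions, not the paper's actual configuration.

# A minimal sketch of multi-view stacking for essay grading.
# All names, n-gram ranges, and model choices are assumptions for illustration.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier

# Each "view" represents the input text with a different n-gram scheme.
views = [
    ("char_2_4", TfidfVectorizer(analyzer="char", ngram_range=(2, 4))),
    ("word_1_1", TfidfVectorizer(analyzer="word", ngram_range=(1, 1))),
    ("word_1_2", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
]

# One base model per view; the meta-model combines their outputs.
base_models = [
    (name, make_pipeline(vec, LogisticRegression(max_iter=1000)))
    for name, vec in views
]
model = StackingClassifier(
    estimators=base_models,
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",  # meta-model sees class probabilities
)

# texts: list of learner essays; grades: proficiency labels
# model.fit(texts, grades)
# proba = model.predict_proba(texts)
# The maximum predicted probability can serve as a per-prediction
# confidence score of the kind the abstract describes.

Character-level n-gram views are a natural fit for Korean, whose agglutinative morphology makes fixed word-level tokenization lossy; combining several granularities is one plausible way to address the "linguistic complexity" the abstract mentions.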
Table of Contents
1. Introduction
2. Related Work
2.1. Development of Automated Writing Evaluation (AWE) Systems
2.2. Feature-Based Evaluation Models
2.3. End-to-End Deep Learning-Based AWE Models
3. Methodology
3.1. Data
3.2. Architecture of the Evaluation System
3.3. Evaluation Metrics
4. Performance Evaluation Results
4.1. Performance of the Prediction Models
4.2. Characteristics of Texts by Grade: PCA Modeling
4.3. Example Predictions for Input Texts
5. Conclusion
References
