earticle


A Study on the Potential and Limitations of Automated Interpreting Assessment: Focusing on an MTQE Application Case

Article Information

Automated Interpreting Assessment: Leveraging MTQE to Evaluate Interpreting Accuracy

최문선

Cited by: 0 (source: NAVER Academic)

Abstract

(English)

This study investigated the potential benefits and limitations of automated interpreting assessment by reviewing various approaches, including machine translation (MT) quality metrics and quality estimation (QE), followed by an experimental application of MTQE to a small-scale dataset of English-Korean consecutive interpretations by three interpreting students. CometKiwi, one of the state-of-the-art MTQE methods that show strong evaluation performance without requiring reference translations, was employed to compare automated evaluation with human evaluation. The findings reveal that automated evaluations exhibited a strong correlation with human evaluations and achieved complete agreement in ranking interpreting outputs, confirming the feasibility of QE-based automated evaluation for interpretations. At both the text and segment levels, higher-quality interpreting outputs showed greater alignment between human and automated evaluations, while lower-quality outputs tended to receive relatively higher scores from the QE model, highlighting discrepancies with human evaluations. It was noted that CometKiwi returned scores above zero for uninterpreted segments, possibly overestimating the final overall quality scores. The study suggests that automated evaluation scores could serve as useful resources for interpreting educators and students when an independent reference source is needed to support human evaluation.

Table of Contents


1. Introduction
2. Approaches to Automated Interpreting Assessment
2.1. MT Quality Evaluation Metrics
2.2. MT Quality Estimation (MTQE)
3. The Study
3.1. Materials
3.2. Methods
4. Results and Discussion
4.1. Agreement with Human Evaluation
4.2. Inter-rater Disagreement and Automated Evaluation
4.3. Automated Evaluation of Source-Text Omissions
5. Conclusion
References

Author Information

  • Choi, Moonsun (최문선), Ewha Womans University
