This study aimed to examine the differences in translation time and quality between human translation (HT), machine translation post-editing (MTPE), and ChatGPT post-editing (ChatGPT PE). A quasi-experimental design was employed, involving 30 junior participants majoring in English at a Chinese university. A 532-word Chinese-to-English translation task was assigned to the three groups, and translation speed and quality were assessed using the Multidimensional Quality Metrics (MQM) and the Dynamic Quality Framework (DQF). The results demonstrated that post-editing could produce translation quality comparable to human translation while achieving faster translation speeds. ChatGPT PE outputs exhibited the highest number of terminology errors but the fewest accuracy errors, and mean differences in two of the accuracy dimensions were observed among the three groups. Integrating post-editing into language teaching would combine the strengths of machine translation with human expertise, yielding high-quality translations on par with human translation. To optimize post-editing outcomes, it is crucial to develop and teach error-detection techniques and to avoid excessive reliance on AI technology.