Abstract
With the advancement of large language models, it has become increasingly important to assess their capabilities and limitations in order to identify areas where human expertise remains essential. This study compares the performance of GPT-4 and 13 translation students in correcting accuracy errors in Korean-to-English financial texts, with the aim of informing post-editing pedagogy in the era of generative AI. The analysis highlights GPT-4’s strong potential as a post-editing tool for domain-specific informational texts: it outperformed the students in correcting omissions, grammatical errors, and terminological errors. Its performance was inconsistent, however, in resolving lexical ambiguity, where the students often outperformed it. As for syntactic errors, both GPT-4 and the students corrected easily identifiable errors well, but neither handled syntactic ambiguity effectively. These findings underscore the need to reconsider priorities in post-editing training and call for further investigation to better understand GPT-4’s strengths and limitations.
Table of Contents
1. Introduction
2. Literature Review
3. Materials and Methods
3.1 Materials
3.2 Methods
4. Post-edited results by GPT and student translators
4.1 Untranslated errors
4.2 Lexical errors
4.3 Syntactic errors
5. Conclusion
References
