
Writing performance assessment is gaining momentum in English education in Korea. Writing performance tests, however, are prone to subjective rating, which may undermine the reliability of scores. Previous research has recommended more rigorous rater training and clearer scoring criteria. Rater training, however, is costly and demanding for essay tests, which require the understanding and application of multiple criteria with little guidance from benchmark responses. This study therefore examined translation as an alternative writing task that may yield greater score reliability as well as more accurate measurement of writing proficiency. Three raters scored the responses of 20 Korean college students to an English-to-Korean translation test and an English essay test, relying solely on the rubrics. The results showed that inter-rater reliability was higher for the translation test than for the essay test. In addition, the agreement between two of the raters reached a statistically significant level for the translation test. The implications of these findings are discussed in relation to the viability of implementing translation tests in secondary schools. Suggestions are also offered for further study on ways to strengthen the advantages of both essay and translation tests while compensating for their weaknesses.
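As a minimal sketch of the kind of inter-rater reliability comparison described above, the following computes pairwise Pearson correlations among rater score vectors. The rater names and scores here are entirely hypothetical illustrations (the abstract does not report the actual scores or the specific reliability statistic used); the study used 20 responses per test.

```python
from itertools import combinations

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length score lists
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical rubric scores (0-10) from three raters for five responses;
# in the study, each rater scored 20 responses on each test.
scores = {
    "rater1": [7, 5, 8, 6, 9],
    "rater2": [6, 5, 8, 7, 9],
    "rater3": [8, 4, 7, 6, 8],
}

# Report every pairwise correlation among the three raters
for (r1, x), (r2, y) in combinations(scores.items(), 2):
    print(f"{r1} vs {r2}: r = {pearson(x, y):.2f}")
```

Higher pairwise correlations across all rater pairs on one test (here, hypothetically, the translation test) would indicate the greater inter-rater reliability the study reports; a significance test on each coefficient would correspond to the agreement reaching a statistically significant level.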