In this study, ChatGPT was asked to judge how likely it was that a given sentence had been written by a native speaker of English. A total of 2,000 prompt sentences were provided: 1,000 from an English native-speaker corpus (LOCNESS) and 1,000 from a Korean EFL learner corpus (GLC). ChatGPT judged the sentences from the native corpus to be more nativelike than those from the nonnative corpus. The classification accuracy (the proportion of native sentences classified as native and of nonnative sentences classified as nonnative) was 62.75%. Most of the responses were simple phrases, such as “very likely” or “not very likely,” but in some cases ChatGPT also provided reasons for a judgment or suggestions for correction. ChatGPT’s answers indicate that it treats grammaticality and plausibility as the criteria for determining the nativelikeness of a sentence, consistent with the literature. Following the initial testing, ChatGPT was fine-tuned on 1,600 sentences from the prompt data and tested on the remaining 400 sentences. After fine-tuning, the classification accuracy rose to 96.75%, showing that fine-tuning can substantially enhance ChatGPT’s accuracy. Finally, the study briefly discusses the characteristics of the answers provided by GPT-4 in comparison with those provided by GPT-3.5.
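
For readers interested in replicating the setup, the procedure described above can be sketched with the public OpenAI API. The paper does not specify the exact prompt wording, model identifier, decoding settings, or fine-tuning calls, so everything in the sketch below (the prompt text, the `gpt-3.5-turbo` model name, the `judge_nativelikeness` helper, and the JSONL example format) is an illustrative assumption rather than the authors’ actual code.

```python
# A minimal sketch of the judgment prompt, assuming the OpenAI Python
# SDK (>= 1.0) and the chat completions endpoint. Prompt wording and
# model name are assumptions, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_nativelikeness(sentence: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the model how likely it is that `sentence` was written by a
    native speaker of English; return the model's free-text judgment."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "user",
                "content": (
                    "How likely is it that the following sentence was "
                    f"written by a native speaker of English?\n\n{sentence}"
                ),
            }
        ],
        temperature=0,  # deterministic judgments across repeated runs
    )
    return response.choices[0].message.content

# Fine-tuning data for the chat models is serialized as JSONL, e.g.:
# {"messages": [{"role": "user", "content": "How likely ... <sentence>"},
#               {"role": "assistant", "content": "very likely"}]}
# after which a job can be launched with:
# client.fine_tuning.jobs.create(training_file=file_id, model="gpt-3.5-turbo")
```

Under this assumed setup, the 2,000 sentences would be passed through the helper one at a time for the initial test, and the 1,600 training sentences would be converted to the JSONL format above before launching the fine-tuning job.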