Second language acquisition (SLA) research has extensively examined cross-linguistic transfer, that is, how the linguistic structure of a native language affects the acquisition of a second language. This transfer effect can be either positive, facilitating acquisition, or negative, impeding it. In this paper, we employ transfer learning as a methodology for analyzing how grammatical structure is encoded in neural language models. Specifically, we pre-train a Transformer-based language model, BabyRoberta, on Korean as the first language (L1), and then fine-tune it on English as a second language (L2). We evaluate the fine-tuned model with the BLiMP test suite (Warstadt et al., 2020), a widely used benchmark for measuring the syntactic ability of neural language models. This allows us to provide insights into how neural language models represent abstract syntactic structures of English while incorporating the structural inductive biases acquired from Korean.
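As an illustration of the pipeline described above, the sketch below pre-trains a small RoBERTa-style masked language model on a Korean corpus (L1), continues training on an English corpus (L2), and then scores a BLiMP-style minimal pair. It is a minimal sketch under stated assumptions: the model configuration, corpus file names, hyperparameters, and the reuse of the roberta-base tokenizer are illustrative placeholders, not the authors' actual setup, and the minimal-pair judgment uses a pseudo-log-likelihood comparison, one common way to score masked language models on BLiMP.

```python
# Minimal sketch: L1 (Korean) pre-training -> L2 (English) fine-tuning -> BLiMP-style evaluation.
# Model sizes, corpus paths, and hyperparameters are illustrative assumptions.
import torch
from transformers import (RobertaConfig, RobertaForMaskedLM, RobertaTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

# Placeholder tokenizer; a real replication would train a tokenizer on the L1/L2 corpora.
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

# Small RoBERTa configuration in the spirit of BabyRoberta (layer/width values are assumptions).
config = RobertaConfig(vocab_size=tokenizer.vocab_size, hidden_size=256,
                       num_hidden_layers=8, num_attention_heads=8, intermediate_size=1024)
model = RobertaForMaskedLM(config)

def mlm_train(model, text_file, output_dir):
    """Masked-language-model training on a plain-text corpus."""
    dataset = load_dataset("text", data_files=text_file)["train"]
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
        batched=True, remove_columns=["text"])
    collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=1,
                             per_device_train_batch_size=32)
    Trainer(model=model, args=args, train_dataset=dataset, data_collator=collator).train()

# Step 1: pre-train on Korean (L1); Step 2: fine-tune on English (L2).
mlm_train(model, "korean_corpus.txt", "l1_korean")    # hypothetical corpus file
mlm_train(model, "english_corpus.txt", "l2_english")  # hypothetical corpus file

def pll_score(sentence):
    """Pseudo-log-likelihood: mask each token in turn and sum its log-probability."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip <s> and </s>
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# Step 3: BLiMP-style evaluation on a minimal pair -- the model is credited
# when it assigns the higher score to the grammatical sentence.
good, bad = "The cats sleep.", "The cats sleeps."
print(pll_score(good) > pll_score(bad))
```

Aggregating such pairwise judgments over the BLiMP paradigms yields per-phenomenon accuracies, which can then be compared against a model trained on English alone to isolate the effect of the Korean-induced inductive biases.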