In recent years, the growing capabilities of neural (-network) language models (NLMs) have prompted examination of how they represent syntactic structure. To assess the linguistic knowledge that NLMs acquire, researchers have leveraged the traditional syntactic priming paradigm from psycholinguistics to investigate the potential of NLMs to learn abstract structural information. In this study, we focused on the extent to which an L2 NLM is sensitive to syntactic priming. Following Sinclair et al. (2022), we adopted a novel metric with which we controlled various linguistic factors. Using this metric, we implemented an L2 NLM trained on an L2 corpus and explored which factors influence the strength of its priming effects. We found that the L2 NLM, too, is sensitive to various linguistic factors, but that it displays irregular syntactic priming performance across experiments with different types of controlled materials.
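For reference, the priming-effect metric adopted from Sinclair et al. (2022) can be sketched as follows; the notation here is a paraphrase rather than the paper's exact formulation:

$$
\mathrm{PE}(t) \;=\; \log P_{\theta}\!\left(t \mid p_{\text{same}}\right) \;-\; \log P_{\theta}\!\left(t \mid p_{\text{diff}}\right)
$$

where $t$ is the target sentence, $p_{\text{same}}$ is a prime sentence sharing the target's syntactic structure, $p_{\text{diff}}$ is a prime with the alternative structure, and $P_{\theta}$ is the probability the NLM assigns to the target given the prime as context. A positive $\mathrm{PE}$ indicates that the model is sensitive to syntactic priming, since the congruent prime raises the target's probability.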