Recent studies have shown that recurrent neural language models (LMs) can understand sentences involving filler-gap dependencies (Chowdhury & Zamparelli, 2018; Wilcox et al., 2018, 2019). However, such behavior does not necessarily show that these models encode the underlying constraints that govern filler-gap acceptability. In this vein, significant issues remain about the extent to which LMs acquire specific linguistic constructions and whether these models represent abstract properties of syntax. In this paper, following the lead of Bhattacharya and van Schijndel (2020), we further test whether an L2 neural LM can learn the abstract syntactic constraints that have been claimed to govern the behavior of filler-gap constructions. To this end, we train an L2 neural LM on a corpus of English textbooks published in Korea over the last two decades, and then test the representational overlap between disparate filler-gap constructions using the syntactic priming paradigm. Unlike previous studies of L1 neural LMs, we do not find sufficient evidence that the L2 neural LM learns a general representation of filler-gap dependencies or of their shared underlying constraints.
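To make the priming-as-adaptation logic concrete, the sketch below illustrates one common way such tests are operationalized: fine-tune ("adapt") a causal LM on prime sentences from one filler-gap construction and check whether surprisal drops on a target sentence from a different construction, which would suggest a shared abstract representation. This is a minimal illustration, not the authors' implementation: the off-the-shelf `gpt2` checkpoint stands in for the paper's L2 LM (which is not distributed here), and the prime/target sentences are hypothetical examples.

```python
# Minimal sketch of a syntactic-priming (adaptation) test with a causal LM.
# Assumptions: HuggingFace `transformers` + PyTorch; `gpt2` is a placeholder
# for the L2 LM trained on Korean English-textbook data.
import math

import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder checkpoint, not the paper's model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def sentence_surprisal(text: str) -> float:
    """Total surprisal of a sentence in bits under the current model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    n_scored = enc["input_ids"].size(1) - 1  # first token is not predicted
    return out.loss.item() * n_scored / math.log(2)


def adapt(primes: list[str], lr: float = 1e-5, epochs: int = 1) -> None:
    """Fine-tune the model on prime sentences (the 'adaptation' step)."""
    optimizer = AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for prime in primes:
            enc = tokenizer(prime, return_tensors="pt")
            loss = model(**enc, labels=enc["input_ids"]).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    model.eval()


# Hypothetical prime/target pair: adapt on a wh-question (one filler-gap
# construction) and measure surprisal change on a relative clause (another).
target = "I know the book that the author wrote last year."
before = sentence_surprisal(target)
adapt(["What did the editor say the author wrote last year?"])
after = sentence_surprisal(target)
print(f"target surprisal before: {before:.2f} bits, after: {after:.2f} bits")
```

A reliable drop in target surprisal after adaptation, relative to matched control primes, is the kind of representational overlap the paradigm probes for; the abstract reports that such evidence was not found for the L2 LM.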