Article Information
Abstract (English)
This article concerns the so-called unbound reflexive pronouns in English, that is, self-forms that lack any sentence-internal antecedent and thus run counter to the classic Binding Principle A (Chomsky, 1981). To investigate the distributional properties of English unbound reflexives empirically, the present study draws on the BYU corpora, including COCA, COHA, and GloWbE, to collect relevant data, and feeds the collected data into BERT, a deep learning language model for natural language processing, to measure how surprising the unbound reflexive forms are in various types of contexts compared with their pronominal counterparts. Remarkably, the results replicate the findings and claims of existing theoretical and corpus studies on the distribution of unbound reflexives in English. This suggests that deep learning techniques can be productively used to explore syntactic phenomena in human languages.
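The abstract describes comparing how surprising an unbound reflexive is, relative to its pronominal counterpart, in a given context under BERT. The sketch below is not the authors' released code; it is a minimal illustration of one common way to obtain such scores, assuming a masked-language-model setup with the Hugging Face transformers library, the bert-base-uncased checkpoint, and surprisal defined as the negative log probability of the filler at the [MASK] position. The example sentence is invented for illustration.

```python
# Minimal sketch: masked-LM surprisal of a reflexive vs. a pronoun in one context.
# Model choice, example sentence, and surprisal definition are assumptions, not the paper's code.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def surprisal(context_with_mask: str, filler: str) -> float:
    """Return -log2 P(filler | context) at the [MASK] slot."""
    inputs = tokenizer(context_with_mask, return_tensors="pt")
    # Locate the masked position in the tokenized input.
    mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    log_probs = torch.log_softmax(logits[0, mask_index], dim=-1)
    filler_id = tokenizer.convert_tokens_to_ids(filler)
    # Convert natural log to bits.
    return -(log_probs[0, filler_id] / torch.log(torch.tensor(2.0))).item()

# Contrast an unbound reflexive with its pronominal counterpart in the same frame.
sentence = "As for [MASK], the decision was already made."
print("himself:", surprisal(sentence, "himself"))
print("him:    ", surprisal(sentence, "him"))
```

Lower surprisal for the reflexive than for the pronoun in a given corpus-attested frame would indicate that the model treats the unbound reflexive as the more expected form in that context.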
Table of Contents
1. Introduction
2. Background
2.1. Theoretical Analyses
2.2. Corpus Analyses
2.3. Deep Learning Language Models and Reflexives
2.4. Focus of the Study
3. Method
3.1. Data Collection
3.2. Online Tagging
3.3. Data Cleaning
3.4. Deep Learning Computation
4. Results
4.1. COCA Analysis
4.2. COHA Analysis
4.3. GloWbE Analysis
5. Discussion
6. Conclusion
References
