Abstract
This study analyzes cultural biases in major large language models from the United States, South Korea, and China (GPT-4, CLOVA X, and Qwen1.5) through story generation tasks using culture-specific names. Morphological analysis of the generated stories revealed that all models exhibited certain cultural biases. GPT-4 did not show negative biases toward Korean and Chinese cultures but tended to prefer traditional and rural settings when describing these cultures. In contrast, CLOVA X and Qwen1.5, which are specialized for their respective national languages, portrayed their own cultures in modern and positive terms while using a relatively higher proportion of negative adjectives and unrealistic settings when describing Western contexts. These findings are significant because they go beyond the conventional focus on biases in Western-centric models toward non-Western contexts: they newly reveal that East Asian-based models can also exhibit similar biases when representing Western cultures. This research suggests that current language models have fundamental limitations in achieving cultural neutrality and highlights balanced learning and the reflection of diverse cultural contexts as crucial challenges in language model development.
Table of Contents
1. Introduction
2. Theoretical Background
3. Experimental Methodology
3.1. Data
3.2. Models
3.3. Prompts
3.4. Generation Process
4. Experimental Results
4.1. Adjective Analysis
4.2. Common Noun Analysis
5. Conclusion
References
Appendix
