
With the recent development of artificial intelligence (AI) technology, ethical issues have arisen at various levels. This study examined how persuasion effects differ depending on the level of chatbot customization in an unethical situation in which an AI chatbot user directs offensive language at the chatbot. Because people who feel that their freedom is threatened by a persuasive message tend to react against it psychologically, the study was grounded in psychological reactance theory (PRT). Accordingly, in an unethical situation arising during AI chatbot use, the study examined the persuasion effect of the customization level together with the mediating effect operating through psychological reactance. In addition, to examine how persuasion effects vary with the AI chatbot's message framing, the moderating effect of message framing and the moderated mediation effect were tested. An online experiment with a 2 (customization level: low vs. high) × 2 (message framing: positive vs. negative) between-subjects factorial design was conducted, and participants (N = 382) were randomly assigned to one of the four conditions. The results showed that the effect of the customization level on the behavioral intention to stop using offensive language toward the AI and on the intention to continue using the AI chatbot was mediated by psychological reactance (anger). This mediation effect was not moderated by message framing. The conclusion presents the implications and limitations of the study based on these findings.


The development of artificial intelligence (AI) technology has raised a number of ethical concerns. The present study examined how the persuasion effect differed depending upon the level of customization of an AI chatbot in an unethical situation in which the user directed offensive language at the AI chatbot. According to studies on customization, users who were exposed to customized media or devices that transmitted messages were more likely to evaluate the media or devices, as well as the messages themselves, positively and to be persuaded by the messages than users who were exposed to non-customized media or devices. It has also been found that people experience psychological reactance, an unpleasant motivational arousal that emerges when they perceive a threat to or loss of their free behaviors. Previous studies have indicated that inducing people to change their attitudes or behaviors through persuasive messages can pose a critical threat to their freedom. Following the claims of customization research and psychological reactance theory, the present study investigated the role of psychological reactance caused by the AI chatbot's message asking the user to stop using offensive language, in relation to the level of customization of the AI chatbot. In addition, message framing may play a moderating role in the association between the level of customization of the AI chatbot and users' psychological reactance. The present study divided message framing into two frames: a positive frame and a negative frame of the message asking the user to stop using offensive language. Integrating these three theoretical approaches, the present study tested the mediating role of psychological reactance (anger and negative cognition) and the moderating role of message framing in the association between the level of customization and the dependent variables, namely the intention to stop using offensive language and the intention to continue using the AI chatbot. An online experiment employed a 2 (customization level: low vs. high) × 2 (message framing: positive vs. negative) between-subjects design. A total of 382 participants were randomly assigned to one of the four conditions. Results showed no direct effect of the level of customization on the outcome variables, but an indirect effect was found. That is, the effect of the level of customization on both behavioral intentions to stop using offensive language and to continue using the AI chatbot was mediated by psychological reactance (anger). Specifically, a high level of customization reduced anger, which in turn negatively influenced behavioral intentions. However, the mediation effect was not moderated by message framing. Implications and limitations of the study findings are also discussed.