Abstract
Generative AI chatbots, while powerful, often produce hallucinations, risking user aversion. This theoretical paper (Work in Progress) investigates why users might continue engaging with faulty generative AI. Drawing on CASA (Computers Are Social Actors), attribution, and expectation confirmation theories, I propose a model exploring how chatbot social cues, perceived errors (hallucinations), and user control influence willingness to engage, mediated by social presence and expectation confirmation. To test this framework, I plan a 2 (Social Cues) x 2 (Hallucination) x 2 (Controllability) online experiment using simulated chatbot interactions. Participants, randomly assigned to one of the conditions, will experience the simulated interaction and report their willingness to continue engaging. This research seeks to explain user persistence with imperfect AI, offering insights for human-AI interaction theory and for practical chatbot design that mitigates the impact of failures.
Table of Contents
Introduction
Theoretical framework
Generative AI chatbots and hallucinations
The influence of social cues and presence on expectation confirmation
Social cues, social presence and willingness to engage
Perceived error
Perceived autonomy
Future work
References
