When Generative AI Fails: Taming of Shrewish AI (Work in Progress)

Abstract

Generative AI chatbots, while powerful, often produce hallucinations, risking user aversion. This theoretical paper (Work in Progress) investigates why users might continue engaging with faulty generative AI. Drawing on CASA (Computers Are Social Actors), attribution theory, and expectation confirmation theory, I propose a model exploring how chatbot social cues, perceived errors (hallucinations), and user control influence willingness to engage, mediated by social presence and expectation confirmation. To test this framework, I plan a 2 (Social Cues) × 2 (Hallucination) × 2 (Controllability) online experiment using simulated chatbot interactions. Participants, randomly assigned to conditions, will experience a simulated interaction and report their willingness to continue engaging. This research seeks to explain user persistence with imperfect AI, offering insights for human-AI interaction theory and practical chatbot design to mitigate the impact of failures.
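The planned 2 × 2 × 2 between-subjects design yields eight experimental cells. As a minimal sketch of the assignment procedure (the factor level labels and the balanced-block scheme are my assumptions; the abstract specifies only the factors), participants could be randomized like this:

```python
import itertools
import random

# The three binary factors from the proposed 2 x 2 x 2 design.
# Level labels are illustrative, not taken from the paper.
FACTORS = {
    "social_cues": ["low", "high"],
    "hallucination": ["absent", "present"],
    "controllability": ["low", "high"],
}

# All 8 experimental cells (the full factorial crossing).
CELLS = list(itertools.product(*FACTORS.values()))


def assign_participants(n, seed=None):
    """Randomly assign n participants to the 8 cells in balanced blocks.

    Each block of 8 consecutive participants covers every cell exactly
    once, in shuffled order, keeping cell sizes as equal as possible.
    """
    rng = random.Random(seed)
    assignments = []
    block = []
    for _ in range(n):
        if not block:            # refill a freshly shuffled block of all cells
            block = CELLS.copy()
            rng.shuffle(block)
        assignments.append(dict(zip(FACTORS, block.pop())))
    return assignments
```

Balanced-block (blocked) randomization is one common choice here; simple unrestricted randomization would also be valid but can leave cells unevenly filled at small sample sizes.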

Table of Contents

Abstract
Introduction
Theoretical framework
Generative AI chatbots and hallucinations
The influence of social cues and presence on expectation confirmation
Social cues, social presence and willingness to engage
Perceived error
Perceived autonomy
Future work
References

Author Information

  • 전현준, Ph.D. candidate, Department of Business Administration, Yonsei University
