Abstract
Despite the widespread use of artificial intelligence (AI) in mobile healthcare apps, the lack of transparency in AI algorithms hinders their effectiveness by preventing users from understanding the reasons behind AI-based information provision. To address this challenge, various types of explainable AI (XAI) have been adopted to offer transparent explanations of AI. Despite significant debate surrounding AI intervention, limited research has examined whether and how different XAI types affect user behavior differently. In this study, we conducted a randomized field experiment to investigate the effectiveness of three XAI algorithms in promoting users' health behavior: 1) feature importance, 2) feature attribution, and 3) counterfactual explanation. Drawing on self-regulated learning theory, we hypothesize that XAI focusing on counterfactual explanation increases strategic planning and outcome expectancy, resulting in better self-regulation behavior. Our findings indicate that counterfactual explanation significantly improves users' action planning behavior, leading to a 16.5% increase in workout duration and a 3.49% increase in health records compared to the control group. These results are most salient for users with a high level of AI susceptibility due to age, goal weight loss, and AI outcome. Our findings shed light on the potential of algorithmic explanations to improve the effectiveness of AI interventions in the healthcare industry, with practical implications for designing more transparent and user-friendly healthcare apps.
Table of Contents
Introduction
Related Work
Randomized Field Experiment
Institutional Background
AI-based Model Development
Experimental Design and Process
Data and Empirical Approach
Results
Discussion
References