Abstract (English)
The rapid growth of generative AI has raised concerns about content authenticity on user-generated platforms, particularly in online reviews. This study proposes an interpretable, feature-based machine learning approach to detecting AI-generated reviews, with a focus on transparency and efficiency. The approach integrates linguistic feature analysis (LIWC), textual pattern recognition (TF-IDF), and Large Language Model (LLM)-based interpretation, and applies Random Forest and XGBoost classifiers to achieve robust predictive performance. SHAP value analysis was used to enhance interpretability by identifying the key linguistic and structural patterns that distinguish AI-generated content from human-written reviews. The findings reveal that AI-generated reviews tend to exhibit structured grammar, formulaic conclusions, exaggerated sentiment, and broader aspect coverage, in contrast to the more nuanced and informal style of human reviews. This study contributes to the field by offering (1) an effective feature-based detection framework, (2) empirical validation of linguistic distinctions between AI-generated and human-written content, and (3) practical guidance for developing lightweight, trustworthy AI-content detection tools.
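
As a rough, non-authoritative sketch of the pipeline outlined in the abstract (not the authors' implementation), the Python snippet below pairs TF-IDF features with an XGBoost classifier and SHAP-based interpretation. The file name reviews.csv, the column names text and label, and all hyperparameters are illustrative assumptions; the paper's LIWC features would be appended to the feature matrix in the same way.

# Minimal sketch of a feature-based AI-review detector: TF-IDF features,
# an XGBoost classifier, and SHAP values for interpretability.
# Assumes a CSV "reviews.csv" with columns "text" and "label"
# (1 = AI-generated, 0 = human-written); these names are hypothetical.
import pandas as pd
import shap
import xgboost as xgb
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

reviews = pd.read_csv("reviews.csv")

# Textual pattern features: TF-IDF over word unigrams and bigrams.
vectorizer = TfidfVectorizer(max_features=5000, ngram_range=(1, 2))
X = vectorizer.fit_transform(reviews["text"])
y = reviews["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Gradient-boosted tree classifier (a Random Forest would be a drop-in swap).
clf = xgb.XGBClassifier(n_estimators=300, max_depth=6, eval_metric="logloss")
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))

# SHAP values indicate which terms push a review toward the AI-generated class.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)  # 2-D array for a binary task
shap.summary_plot(
    shap_values,
    X_test.toarray(),
    feature_names=vectorizer.get_feature_names_out(),
)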
Table of Contents
Introduction
Research Background
Analysis and Results
Linguistic Features Analysis
Detection Models and XAI
Textual Pattern Analysis (TF-IDF)
LLM-Based Linguistic Insights
Conclusion and Discussion
References
