Article Information
AI-Driven Educational Innovation: A Framework for Learning Support and Automated Assessment
Abstract
English
As higher education digitizes, large language model (LLM) writing aids and automated essay scoring can lighten instructors' workloads while giving students instant, personalized feedback. Their value, however, remains unclear in humanities courses that demand deep critical inquiry, such as Japanese culture and literature. This study outlines a three-semester mixed-methods design that integrates LLM tools into an authentic course and tests (1) gains in students' writing quality, (2) the reliability and acceptance of AI scoring versus human raters, and (3) links between the intensity of AI use and achievement. Roughly 700 undergraduates will be assigned to randomized treatment or control sections each term. Data sources include pre-/post-essay scores on a 10-item rubric, AI-human concordance indices, Likert-scale surveys on usability and fairness, granular usage logs, and end-of-term focus-group interviews. Interviews capture learners' experiences, strategies, and concerns about AI adoption, and logs track function choice and revision cycles. Hierarchical linear models, gain-score ANCOVAs, Cohen's κ, Bland–Altman plots, and structural equation modeling will be used to analyze nested effects, scoring validity, and attitudinal constructs. Expected results include usage-dependent improvements in writing, strong AI-human agreement when scoring explanations are transparent, and positive achievement gains for students who employ AI strategically. By detailing instruments, procedures, and analyses, the study offers a replicable template for evaluating AI as a cognitive partner and instructional aid in humanities settings, informing sustainable AI-enhanced teaching, assessment, and curriculum design.
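The AI-human concordance index named in the abstract, Cohen's κ, can be illustrated with a minimal sketch. This is not the study's analysis code; the rater scores below are hypothetical, and a real analysis would use the 10-item rubric scores described above.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items on nominal/ordinal categories."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items on which both raters gave the same score.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal score distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    pe = sum((counts_a[c] / n) * (counts_b[c] / n)
             for c in set(counts_a) | set(counts_b))
    return (po - pe) / (1 - pe)

# Hypothetical rubric scores (1-5) from a human rater and an AI scorer.
human = [3, 4, 4, 2, 5, 3, 4, 2, 3, 4]
ai    = [3, 4, 3, 2, 5, 3, 4, 2, 4, 4]
kappa = cohens_kappa(human, ai)  # ≈ 0.714, conventionally "substantial" agreement
```

For the continuous-score comparison, the same pairs of ratings would feed the Bland–Altman plot (mean of each pair against their difference), which the study pairs with κ to assess scoring validity.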
Table of Contents
2. Research Objectives and Design Direction
3. Review of Prior Research
3.1 AI-Based Learning Support Tools
3.2 Reliability and Acceptance of Automated Assessment
3.3 Relationship between AI Use and Learning Achievement
4. Research Methods
4.1 Overview of the Research Design
4.2 Research Subjects and Sampling
4.3 Measurement Tools and Data Collection
4.4 Data Analysis Methods
5. Expected Results and Discussion
6. Conclusion
References
