Article Information
A Study on the Development of Rubrics Using AI Platforms for Assessing University Students’ English Speaking Proficiency
Abstract
The purpose of this study was to develop and apply AI platform-based assessment criteria to ensure consistent and objective evaluation of university students' English speaking proficiency. It also aimed to verify the reliability and validity of AI-generated evaluations by comparing them with instructor assessments. To this end, the study examined fifteen AI-assisted English interview test responses produced by five students at the beginning, middle, and end of the semester. The collected data were analyzed using rubrics developed from the TOEFL iBT speaking scoring criteria, and AI platforms such as TurBoScribe, Coh-Metrix, and Grammarly were used to measure four key aspects: fluency, variety, coherence, and accuracy. The findings revealed that instructor and AI evaluations were highly consistent for fluency, variety, and accuracy, whereas significant differences emerged for coherence. These results suggest that instrumental support from AI platforms can substantially help maintain consistency and objectivity in evaluating English speaking proficiency. Based on these findings, the study also proposes practical educational implications for integrating AI platforms into future English language classrooms.
Table of Contents
II. Theoretical Background
1. Assessment Criteria for English Speaking Proficiency
2. The Potential of AI Tools in English Speaking Assessment
III. Research Methods
1. Subjects of Analysis
2. Data Collection
3. Data Analysis
IV. Results
1. Development of AI Platform-Based Assessment Criteria for Evaluating English Speaking Proficiency
2. Comparative Analysis of Instructor and AI Platform-Based Evaluations of English Speaking Proficiency
V. Conclusion and Suggestions
Works Cited
Abstract
