ALGORITHMIC FAIRNESS AND PEDAGOGICAL LEGITIMACY IN AI SCORING SYSTEMS: PERSPECTIVES FROM UNIVERSITY ENGLISH WRITING IN CHINA
Abstract
As artificial intelligence (AI) technologies become increasingly integrated into higher education, AI-based writing scoring systems are gaining traction as tools for evaluating student performance efficiently. While these systems offer potential benefits such as speed and consistency, they also raise significant concerns about fairness, transparency, and the evolving role of teachers in the assessment process. This qualitative case study investigates how university students and English writing instructors in China perceive the use of AI scoring systems in academic writing courses. Drawing on semi-structured interviews with 13 participants, the study identifies four major themes: perceived algorithmic bias and rigidity, lack of transparency in score generation, tensions between teacher authority and AI judgment, and institutional gaps in policy and support. The findings reveal that, despite some operational advantages, AI scoring systems are often viewed as pedagogically misaligned and ethically ambiguous. The study underscores the need for more robust governance mechanisms, teacher training, and transparency standards to ensure the responsible use of AI in educational assessment. It contributes to ongoing discussions on educational fairness, teacher agency, and the ethical implementation of digital technologies in classroom settings.

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.