News Source
EXCERPT:
A new academic study has found that artificial intelligence systems used to evaluate student writing may respond differently depending on how a student’s identity is presented, suggesting bias in automated educational tools.
The research, titled “Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback,” was published in March by a team from Stanford University. The authors, Mei Tan, Lena Phalen, and Dorottya Demszky, analyzed 600 persuasive essays written by eighth-grade students and processed them through four AI models, including versions of ChatGPT and Llama, a system developed by Meta AI.
The essays responded to prompts ranging from whether schools should mandate community service to speculative questions such as whether aliens built a structure on Mars. The researchers then resubmitted the same essays with added descriptors indicating the writer’s race, gender, motivation level, or learning ability.
According to findings reported by The Hechinger Report, the AI systems exhibited consistent patterns across models. Essays attributed to Black students were more likely to receive praise and encouragement, with feedback that sometimes highlighted themes of leadership or personal strength. One example of such feedback read: “Your personal story is powerful! Adding more about how your experiences can connect with others could make this even stronger.”
