Students Trust AI Grading When They See How It Works
Researchers explored what makes students feel confident when an AI assigns grades that impact their futures.
A controlled experiment involved 240 college students, each using an AI grading tool that varied along three key dimensions (see the sketch after this list):
- Transparency – How much information the AI revealed.
- Framing – The language used to describe its decisions.
- Agency – Whether students could influence the outcome.
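The summary doesn't specify how many levels each dimension had or how students were assigned, so the following is a minimal sketch assuming a hypothetical two-level, fully crossed, between-subjects design; every name in it (the level labels, `Condition`, `assign_participants`) is illustrative, not taken from the study.

```python
from dataclasses import dataclass
from itertools import product
import random

# Hypothetical two-level encoding of each manipulated dimension.
TRANSPARENCY = ["opaque", "explains_reasoning"]   # how much the AI reveals
FRAMING = ["neutral", "fairness_language"]        # language describing decisions
AGENCY = ["none", "can_challenge"]                # whether students can influence outcomes

@dataclass(frozen=True)
class Condition:
    transparency: str
    framing: str
    agency: str

# Full factorial crossing: 2 x 2 x 2 = 8 cells.
CONDITIONS = [Condition(*combo) for combo in product(TRANSPARENCY, FRAMING, AGENCY)]

def assign_participants(n: int, seed: int = 0) -> dict[int, Condition]:
    """Randomly assign n participants to cells, balanced across the design."""
    rng = random.Random(seed)
    # Repeat the cell list to cover n participants, then shuffle so the
    # assignment order is random while cell sizes stay equal.
    cells = (CONDITIONS * -(-n // len(CONDITIONS)))[:n]
    rng.shuffle(cells)
    return dict(enumerate(cells))

assignments = assign_participants(240)  # under this assumption: 30 students per cell
```

Under the two-level assumption, the study's 240 participants split evenly into 30 students per condition.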
Key Findings
Explanation Builds Trust
When the AI explained its reasoning, students trusted it more.
Control Restores Confidence
Limited transparency could be offset by giving students the ability to tweak or challenge results.
Framing Matters Less
The way the AI spoke about fairness had minimal impact once students saw the tool in action. They prioritized visible steps and controls over abstract promises of justice.
Pillars of Trust
Trust was strongest when clear explanations were paired with real control. Students who understood the process and could act on it were most likely to accept AI-generated grades.
Practical Implications
Transparency + Agency = Trust
Designers of educational AI tools should prioritize both transparent processes and user agency so that students feel informed and empowered (see the interface sketch below).
Beyond Accuracy
Even highly accurate systems can fail to gain trust if the process feels opaque or unresponsive.
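To make the transparency-plus-agency implication concrete, here is a minimal sketch of what such a grading interface might look like. The `GradingService` class, its methods, and the rubric text are all hypothetical, not something described in the study: the grade carries per-criterion reasoning (transparency), and a challenge method flags the result for human review rather than finalizing it silently (agency).

```python
from dataclasses import dataclass, field

@dataclass
class GradeResult:
    score: float
    # Transparency: per-criterion reasoning travels with the grade,
    # not just a final number.
    explanation: list[str] = field(default_factory=list)
    under_review: bool = False

class GradingService:
    """Hypothetical interface; method names are illustrative only."""

    def grade(self, submission: str) -> GradeResult:
        # Placeholder scoring; a real system would invoke a model here.
        result = GradeResult(score=0.82)
        result.explanation.append("Thesis stated clearly in the opening paragraph (rubric item 1).")
        result.explanation.append("Two of three required sources cited (rubric item 3).")
        return result

    def challenge(self, result: GradeResult, rubric_item: str, note: str) -> GradeResult:
        # Agency: contesting a specific rubric item routes the grade to
        # human review instead of leaving the student without recourse.
        result.explanation.append(f"Challenged {rubric_item}: {note}")
        result.under_review = True
        return result

service = GradingService()
result = service.grade("essay text...")
result = service.challenge(result, "rubric item 3", "My third source is cited in the footnotes.")
```

The design choice mirrors the findings: the explanation list makes the process visible, and the challenge path gives students something to act on, rather than relying on accuracy alone to earn trust.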