Peer feedback is widely used in active learning environments because it helps students develop valuable skills such as evaluative judgment and communication. However, its effectiveness depends largely on whether students can provide meaningful feedback.
In many classrooms, this is where challenges emerge. Students may provide comments that are brief, overly positive, or lacking clear suggestions for improvement. While well-intentioned, such feedback rarely helps peers understand how to refine their work.
Several factors contribute to this gap.
Providing thoughtful feedback requires time and cognitive effort. Students must analyze a peer’s work, identify strengths and areas for improvement, and translate those observations into clear and constructive suggestions.
Unclear purpose can also reduce the quality of peer feedback. When students do not fully understand why the activity matters, they may treat it as an administrative task rather than a learning opportunity.
Social dynamics can further complicate the process. Students may hesitate to provide critical feedback if they worry about how their comments might affect their relationships with peers.
Given these challenges, educators are beginning to explore whether artificial intelligence (AI) tools can help students produce more meaningful and constructive peer feedback.
One recent study explored whether generative AI could help improve the quality of peer feedback provided by students. The research was presented during our webinar “AI-Driven Peer Feedback: Transforming Quality and Efficiency.”
The study involved 129 third-year Doctor of Pharmacy (PharmD) students, who were randomly assigned to one of two groups.
One group provided self-generated feedback, meaning participants wrote feedback independently without using AI. The other group used AI-assisted feedback, where generative AI prompts helped structure and draft their responses.
Both groups followed the TGAP framework, which organizes feedback into three components: Task, Gap, and Action. Students first identify what their peers did well (Task), then highlight areas that could be improved (Gap), and finally suggest concrete steps their peers could take moving forward (Action).
Those in the AI-assisted group used prompts aligned with this framework but were still required to add their own observations about their peers’ work.
The results revealed clear differences in feedback quality between the two groups.
Participants in the AI-assisted group were significantly more likely to provide specific and actionable feedback. For example, 61.3% of students in the AI-assisted group gave specific comments on what their peers did well, compared to 37% in the self-generated group.
When identifying areas for improvement, 55% of AI-assisted students provided specific suggestions, compared to 13.6% in the self-generated group. Similarly, 72.8% of students in the AI group offered clear guidance on how their peers could improve, compared to 22.8% of those who generated feedback on their own.
Students' perceptions of the feedback they received were also largely positive: only 3% of students in the AI-assisted group reported negative feelings about the feedback they received, compared to 12.7% in the self-generated group.
The findings suggest that AI does not replace students’ thinking when giving peer feedback. Instead, it supports the feedback process in three ways.
One key factor is structure. Frameworks such as TGAP guide students to focus on specific elements of feedback: identifying what was done well, recognizing areas for improvement, and suggesting possible next steps.
When AI prompts are aligned with this structure, they help students organize their thoughts and ensure that their feedback addresses these key components.
AI may also help reduce cognitive load during the feedback process. Turning observations into clear, constructive comments can be challenging, particularly for students who are still developing evaluative judgment.
AI-assisted prompts can help students translate their ideas into more coherent feedback, making it easier to articulate both strengths and areas for improvement.
Finally, structured AI prompts may help reduce some of the social pressure students experience when giving feedback. When feedback is framed around specific criteria and guided questions, the focus shifts toward the task or performance rather than the individual.
This makes it easier for students to provide constructive suggestions while maintaining positive peer relationships.
Peer feedback remains an important part of collaborative learning, but it is also a skill that students must learn and practice. As the study suggests, generative AI can help support that learning process by guiding students to produce feedback that is more specific, structured, and actionable.
Rather than replacing student judgment, AI acts as a scaffold that helps students translate their observations into clearer and more constructive comments. When used thoughtfully, it addresses some of the common barriers that make peer feedback challenging in the first place.
If you’re interested in learning more about this research and how AI can enhance the quality and efficiency of peer feedback, access the full webinar recording of “AI-Driven Peer Feedback: Transforming Quality and Efficiency” below.