If you have been following our blog, you may already know about the various peer evaluation methods used in Team-Based Learning classes. The question then is: which method should you implement in your own course?
If you’re still unfamiliar with the different methods available, we recommend referring to this resource: Peer Evaluation In Team-Based Learning: The Definitive Guide.
To decide which method is right for you, you will have to consider several factors, such as your institution’s culture, the course’s goals, and your own goals as an instructor.
We'll discuss some advantages and disadvantages of each method so you can better judge which one is most aligned with your goals.
Pros

The main advantage of this method is that it requires students to differentiate grading amongst their peers, forcing students to give greater thought to how each individual has contributed to the team.
Cons

Differentiated grading, however, is a double-edged sword. Students may experience a sense of unfairness if they feel their teammates have contributed equally but are nonetheless forced to assign different scores.
Another disadvantage stems from the same problem: students may try to game the system by colluding on the scores they give each other so that, on average, they all receive the same score. One last drawback is that some students start to see the exercise as a zero-sum game, giving low scores to some peers in order to give higher scores to others.
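As a concrete illustration of the differentiation requirement, here is a minimal sketch assuming a hypothetical scheme in which each rater distributes an average of 10 points per teammate and cannot give everyone the same score; the exact point totals and rules vary by implementation:

```python
def validate_differentiated_scores(scores):
    """Validate one rater's scores under a hypothetical differentiated
    scheme: scores must not all be identical, and must total an
    average of 10 points per teammate. Real implementations vary."""
    if len(set(scores)) < 2:
        return False, "all scores identical: differentiation is required"
    if sum(scores) != 10 * len(scores):
        return False, "scores must average 10 points per teammate"
    return True, "ok"

# A rater who differentiates (e.g. 11, 9, 10) passes; one who gives
# everyone 10 is rejected and must reconsider individual contributions.
```

A check like this is what forces the extra reflection described above, and also what creates the collusion and zero-sum incentives.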
The main advantage of this approach is that students are not required to give different scores to their teammates. Thus, if they believe that all members have contributed equally, they are not restricted in their ability to give the same scores. This method is often seen as fairer by students.
The main disadvantage of this method is that, given the way in which scores are calculated, students sometimes underestimate the impact that the score they give can have on their peers, resulting in scores that excessively harm or benefit their peers’ grades.
Another disadvantage comes from the fact that students are allowed to give the same score to all the team members. Depending on the cohort, this might lead to grade inflation as students know that they can easily coordinate to get an equal score.
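To make the impact concern above concrete, here is a minimal sketch assuming a hypothetical scheme in which the average peer score a student receives, relative to a 10-point baseline, scales the team portion of their grade; the actual calculation in your course may differ:

```python
def adjusted_team_grade(team_grade, scores_received, baseline=10.0):
    """Hypothetical Fink-style adjustment: the average peer score a
    student receives, divided by a baseline, scales the team portion
    of their grade. One low rating can move the result noticeably."""
    multiplier = (sum(scores_received) / len(scores_received)) / baseline
    return team_grade * multiplier
```

Under these assumptions, with three raters, a single score of 4 instead of 10 drops a 90-point team grade to 72 — a larger swing than students often expect when they submit their ratings.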
The Koles method is more quantitatively and qualitatively thorough than the other methods. On the qualitative side, there is an emphasis on providing good feedback, because a student’s “peer score” combines what others say about them with the quality of the feedback they themselves provide. This has two benefits: students get better at giving feedback, and, if peer evaluation is conducted in a formative context, students have the opportunity to act on their peers’ comments and improve.
On the quantitative side, the Koles method asks students to rate their peers on several competencies within three areas: cooperative learning, self-directed learning, and interpersonal skills. Students rate how often each peer demonstrates a particular competency on a four-point scale: never, sometimes, often, or always.
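The quantitative side can be summarized with a short sketch; the numeric mapping of the four-point scale to 1–4 is an assumption made here for illustration:

```python
# Assumed numeric mapping of the four-point scale (1 = never ... 4 = always).
SCALE = {"never": 1, "sometimes": 2, "often": 3, "always": 4}

def area_averages(ratings):
    """Average a peer's competency ratings within each of the three
    areas (cooperative learning, self-directed learning,
    interpersonal skills)."""
    return {
        area: sum(SCALE[label] for label in labels) / len(labels)
        for area, labels in ratings.items()
    }
```

For example, a peer rated "often" and "always" on the cooperative learning competencies would average 3.5 in that area, making strengths and weaknesses easy to compare across areas.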
This method has the same disadvantage as Fink’s method: students are not required to discriminate among their peers, which might lead to grade inflation. Another drawback is the greater effort required from instructors to review and analyze students’ submissions, a direct result of the Koles method being more thorough.
Similar to the Koles method, this approach is more quantitatively and qualitatively thorough than other methods. Students rate their teammates on 12 criteria, allocating between 1 and 5 points for each criterion. Usually 1 is considered too little, 5 is considered too much, and 3 is regarded as the ideal score.
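A small sketch of how these ratings might be summarized, flagging the criteria that deviate from the ideal score of 3; the criterion names below are hypothetical:

```python
IDEAL = 3  # 1 = too little, 5 = too much, 3 = ideal

def deviations_from_ideal(criterion_ratings):
    """Report which of a teammate's criterion ratings (1-5) deviate
    from the ideal of 3, and in which direction. Purely formative:
    no grade is adjusted. The criterion names are hypothetical."""
    return {
        name: score - IDEAL
        for name, score in criterion_ratings.items()
        if score != IDEAL
    }
```

A positive deviation signals "too much" of a behavior (e.g. dominating discussion) and a negative one "too little", which gives students directional feedback without touching their grades.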
An additional benefit is that this method is not used to adjust students’ grades. This can be useful in highly competitive environments where students are especially concerned about letting other students impact their grades.
However, although a benefit in some contexts, not adjusting students’ grades can also be a drawback, as it does not allow faculty to take peer evaluation into account when grading their course. This method also shares the disadvantages of the Koles method: students are not required to discriminate among their peers, which might lead to grade inflation, and instructors must spend more effort reviewing and analyzing students’ submissions.
Regardless of the peer evaluation method you choose, it should be able to address three scenarios (Sibley, 2014):
A student who is constantly unprepared. This student should not be able to benefit from the work of others, and their grade should be adjusted accordingly.
A student who is well prepared. The peer evaluation should reward this student.
A student who dominates the team conversation and bullies teammates. Peer evaluation must reward only healthy behaviors and provide feedback to counteract negative ones.
Maybe you’re just starting with peer evaluation and want to choose a method for the first time. Perhaps the approach you are currently using is generating pushback from your students. Whatever your situation, we hope this post helps you make a better-informed decision about changes to your peer evaluation process.
Remember that although these are the most popular methods, you are not limited to using the methods discussed here. Your context should be the number one factor you consider when deciding on a peer evaluation method. If these methods do not fit your requirements, try combining them.
1. Sibley, Jim & Ostafichuk, Pete (2014). Getting Started With Team-Based Learning. 1st edition. [ebook] (p. 155). Stylus Publishing.
2. Goedde, Rick & Sibley, Jim. Approaches to Peer Evaluation: Pro’s and Con’s of Various Methods. [PDF]. Available at: http://learntbl.org/wp-content/uploads/2014/06/Poster_TBL_peer_Feb2011-22nd.pdf [Accessed 04/06/2018]
3. Levine, R.E., 2012. Peer evaluation in team-based learning. Team-Based Learning for Health Professions Education: A Guide to Using Small Groups to Improve Learning, pp.103-116.