Hi,

 

In peer-only mode, students can receive different numbers of scores, which complicates the algorithm.  In the worst case scenario, Student A scores B, C and D… and no one else in the team submits.  B, C and D have scores and A does not.  The compensatory scores help answer the question of ‘What grade would you give A?’ in a straightforward way (for the algorithm, at least).
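To make the worst case concrete, here's a minimal sketch of the arithmetic (this is not the WebPA source code, and the "give every teammate a neutral mid-scale mark" rule is purely my invented stand-in for whatever compensatory rule is actually used; the names and marks are made up too):

```python
# Peer-only mode: A scores B, C and D, but B, C and D submit nothing,
# so A ends up with no received marks at all.
# scores[assessor][assessee] = mark given by assessor to assessee.
scores = {
    "A": {"B": 4, "C": 3, "D": 5},   # A scores everyone else
    # B, C and D did not submit
}

students = ["A", "B", "C", "D"]

# Collect the marks each student actually received.
received = {s: [] for s in students}
for assessor, given in scores.items():
    for assessee, mark in given.items():
        received[assessee].append(mark)

# Without intervention, A's list is empty and total/count would divide
# by zero.  Hypothetical compensatory rule: pretend each non-submitter
# gave every teammate a neutral mid-scale mark.
NEUTRAL = 3
submitters = set(scores)
for assessor in students:
    if assessor not in submitters:
        for assessee in students:
            if assessee != assessor:
                received[assessee].append(NEUTRAL)

for s in students:
    print(s, sum(received[s]) / len(received[s]))
```

With the gaps filled in, every student has the same number of marks and the simple total/count calculation goes through.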

 

The students in self-and-peer mode all receive the same number of scores, as every student scores everyone in the group, including themselves.  The algorithm can just chew through what it has and produce a grade.  Ignoring normalisation, and other critically important bits, the algorithm basically calculates total-score/number-of-scores for each student.
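The self-and-peer case described above can be sketched in a few lines (again, a toy illustration with invented names and marks, ignoring normalisation and the other important bits, not the actual WebPA implementation):

```python
# Self-and-peer mode: every student scores every group member,
# themselves included, so everyone receives the same number of marks
# and the grade is simply total-score / number-of-scores.
scores = {
    "A": {"A": 4, "B": 4, "C": 3, "D": 5},
    "B": {"A": 3, "B": 5, "C": 3, "D": 4},
    "C": {"A": 4, "B": 4, "C": 4, "D": 4},
    "D": {"A": 5, "B": 3, "C": 3, "D": 5},
}

students = list(scores)
grades = {
    s: sum(given[s] for given in scores.values()) / len(scores)
    for s in students
}

for s in students:
    print(s, grades[s])
```

Because the score matrix is always complete in this mode, there's nothing to compensate for: the algorithm just chews through what it has.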

 

Speaking as a non-academic who’s not trying to teach students anything, I think self-and-peer marking is better as it gives the student the chance to reflect on their own performance and examine where they could do better.  Whether they take that chance is up to them.

 

Speaking as a coder, self-and-peer makes my life easier too! :)

 

 

Which leads me to a brief digression on student behaviour, and how it changes…

 

Non-submission is not a problem for the algorithm, but research shows students typically score themselves higher than their peers, so if there’s only one submission, you’d expect the submitter to receive a higher grade than their peers, on average.  Of course, it’s entirely possible that the single submitter actually contributed more than their peers, and if no one else bothers to do their WebPA perhaps it’s more than possible, which is why it’s important students submit their own views!

 

Going back to the side issue in your original email: students who are new to peer-moderated marking often mistakenly think that giving 5 out of 5 is going to get them all a great grade, when it simply means they all worked the same and get the same group grade.  In later years, as they get wise to how it works, the scores get more varied – perhaps with a higher prevalence of trying to ‘game’ the system, backstab each other, and/or renege on deals to give high marks to each other.

 

By their final year, students seem to come out the other side, and are now worried that their peers’ opinions might affect their final degree classification.  Then they’re much keener on trained, experienced, impartial academics doing the grading.

 

Others on this group can probably give more examples and anecdotes around student activities during their peer assessments, and information on how differences in cultural background affect how cohorts approach an assessment and the judging of others’ contributions. I think I’ve digressed enough now!

 

Paul

 

 

 

From: WebPA [mailto:[log in to unmask]] On Behalf Of [log in to unmask]
Subject: Re: Problems with WebPA results

 

I see, thank you Paul.

 

Just thinking this through logically though: if non-responders are not “compensated” in this way when using “self-and-peer” mode, why wouldn't the same problem occur in that case as well?