Hi

I have been playing around with the WebPA algorithm in a spreadsheet this
afternoon.  I would welcome some thoughts/advice please.

You can see the example here:

https://spreadsheets.google.com/pub?key=0AlaZW47BQdRVdDlTWHpMU3JXNDRMMEhYb3lsaWE5b1E&hl=en_GB&single=true&gid=3&output=html

I completely understand how the current algorithm works - this is not my
issue.  The problem is illustrated when you compare the marks for
Groups A and B.

1. You'll see that both groups received the same group mark (74%).
2. The individual marks are modified by the peer assessment.
3. However, I am really unhappy that one student in Group B ends up with a mark of 97%, while the top mark in Group A ends up as 82%.
4. Overall I deemed that the appropriate mark for both presentations was 74%, but the calculation ends up awarding one student in one group (which received exactly the same group mark as the other) a much higher final mark.
5. To me, this does not seem fair.  Group B's work was not any better than Group A's but, because of a quirk of the system (actually because one student performed particularly poorly in Group B), the other students in Group B mostly end up with much higher marks.

I have always felt uncomfortable about this when using this type of
algorithm in the past.  The individual mark distribution works fine
*within* groups but not when one compares *between* groups.
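
To make the quirk concrete, here is a rough Python sketch of the
calculation as I understand it.  The peer scores are invented for
illustration (they are not the ones in my spreadsheet), but the effect
is the same: one weak score in Group B inflates everyone else's factor.

    # Sketch of the WebPA-style calculation as I understand it:
    # a student's factor is their share of the group's total
    # peer-assessment points times the group size, and the final
    # mark is that factor times the group mark.
    def webpa_marks(peer_totals, group_mark):
        total = sum(peer_totals.values())
        n = len(peer_totals)
        return {student: round(points / total * n * group_mark, 1)
                for student, points in peer_totals.items()}

    group_mark = 74

    # Group A: contributions rated fairly evenly
    group_a = {"A1": 20, "A2": 22, "A3": 19, "A4": 21}

    # Group B: one low-rated student drags the others' factors up
    group_b = {"B1": 24, "B2": 23, "B3": 24, "B4": 9}

    print(webpa_marks(group_a, group_mark))
    # {'A1': 72.2, 'A2': 79.4, 'A3': 68.6, 'A4': 75.8}
    print(webpa_marks(group_b, group_mark))
    # {'B1': 88.8, 'B2': 85.1, 'B3': 88.8, 'B4': 33.3}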

Would there be a way of, say, capping the top mark at the group mark,
with all the other marks then distributed relative to that?
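
Something along these lines, perhaps (again only a sketch, building on
the webpa_marks function above): work out the marks as normal, then if
the top mark exceeds the group mark, rescale the whole group so the top
mark equals it.

    # Sketch of a possible capped variant: rescale so the highest
    # individual mark never exceeds the group mark, keeping the
    # relative spacing of the marks below it.
    def capped_webpa_marks(peer_totals, group_mark):
        raw = webpa_marks(peer_totals, group_mark)
        top = max(raw.values())
        scale = group_mark / top if top > group_mark else 1.0
        return {student: round(mark * scale, 1)
                for student, mark in raw.items()}

    print(capped_webpa_marks(group_b, group_mark))
    # the top marks in Group B come back as 74.0 and the rest
    # scale down in proportion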

Thanks

Simon