Hi Simon
I am one of the people responsible for designing the algorithm.
From my perspective it is easy to justify the higher weightings and higher marks in your scenario: this is how the system is meant to work, not a quirk in it.
If both groups scored an overall average of 74 and everyone in one group performed equally, then they should all receive the same mark. In group B, however, one student did much less work, yet the group performed equally well overall. It follows that there was more work for the rest of the team and someone else had to put in more to compensate, so it is valid that one team member received a higher score.
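As a toy illustration in Python (my own numbers, not the ones from your spreadsheet):

    # Both groups average 74, but in group B one member's shortfall is
    # picked up by another, so the individual marks spread apart.
    group_a = [74, 74, 74, 74]      # equal effort -> equal marks
    group_b = [97, 78, 78, 78, 39]  # one weak member, one compensating
    for marks in (group_a, group_b):
        print(sum(marks) / len(marks))  # both print 74.0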
I think your question is more about mark distribution, which also depends on your own marking criteria and assessment, independent of WebPA. You can change the distribution of the marks by altering the peer-assessed weighting. Peter Willmot and I provide a rationale for having a weighting in this paper:
https://dspace.lboro.ac.uk/dspace-jspui/bitstream/2134/6490/1/Willmott-Crawford-2007.pdf
This is based on using a 50% weighting for peer assessment, which shows a good match to Steven Hanson's data (55%).
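In simplified pseudocode the blend works like this (a sketch only; the variable names are mine, and the 0.5 default is the 50% weighting discussed above):

    def final_mark(group_mark, webpa_score, weighting=0.5):
        """Blend the raw group mark with the peer-moderated mark.

        A WebPA score of 1.0 represents an exactly average contributor,
        who therefore keeps the plain group mark.
        """
        return (1 - weighting) * group_mark + weighting * webpa_score * group_mark

    print(final_mark(74, 1.0))  # 74.0 - an average contributor keeps the group mark
    print(final_mark(74, 1.2))  # 81.4 - a stronger contributor is lifted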
We also found, by validating marks through 4th-year mentors, that the extremes were representative of student effort and achievement.
Following your second e-mail: all the scores are normalised, so if there are 4 students in the group each will allocate 1/4 of the marks, for 5 students each will allocate 1/5 of the marks, and so on. The more people in the group who are not pulling their weight, the more work is required of the key person and the higher their weighting can become. This is intentionally how the algorithm is designed.
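A minimal sketch of that normalisation (illustrative Python, not the actual WebPA source):

    def webpa_scores(ratings):
        """ratings[assessor][member] -> raw score awarded by that assessor."""
        members = {m for row in ratings.values() for m in row}
        scores = {m: 0.0 for m in members}
        for row in ratings.values():
            total = sum(row.values())          # each assessor allocates 1 in total
            for member, raw in row.items():
                scores[member] += raw / total  # the fraction given to this member
        return scores

    # Four students rating everyone equally: each assessor allocates 1/4 to
    # every member, so each member's score is 4 * (1/4) = 1.0 (the average).
    equal = {a: {m: 3 for m in "ABCD"} for a in "ABCD"}
    print(webpa_scores(equal))  # every member scores 1.0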
Best regards
Adam
________________
Dr Adam Crawford
engCETL Manager
Loughborough University
www.engcetl.ac.uk
-----Original Message-----
From: WebPA Project [mailto:[log in to unmask]] On Behalf Of Simon Brookes
Sent: 15 November 2010 14:28
To: [log in to unmask]
Subject: Algorithm Question Again
There is another problem with the WebPA algorithm (I think!). According
to the "worked example of the scoring algorithm" in WebPA Help, the
final individual WebPA score a student gets is a function of the number
of students in the group - the combined fractions awarded by each
student for each individual. So, the more students within each group,
the higher the scores will be!
This problem is most exaggerated in a scenario where one student
performs very poorly compared with the rest of the group or vice versa -
a common occurrence in my experience.
See my worked example showing the difference in final marks for groups
of students with 5 and 4 members respectively. The group marks were the
same (74) but look at the effect on the final mark.
https://spreadsheets.google.com/pub?key=0AlaZW47BQdRVdHZCY0FoOGNRUXNTR1dGdUswR2lrZGc&single=true&gid=3&output=html
Do the designers of the algorithm have any comments?
Cheers
Simon
>>> Neil A Gordon <[log in to unmask]> 11/11/2010 16:37 >>>
Hi Simon
I share your reservations - and usually end up moderating such cases,
and similarly when the algorithm ends up allocating a mark of 100% to
individual members of a team.
Some colleagues use the option to alter the proportion of how much is allocated by the algorithm, e.g. 50% from the original mark, 50% through webpa - although in my view that introduces another artificial allocation (so within a team, a student who was allocated zero by the algorithm would still get 50% of the overall team mark).
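(For example, with a 74% group mark and 50/50 weighting, a student with a WebPA score of zero would still get 0.5 × 74 = 37, i.e. half the team mark.)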
It seems to me that tutor moderation is the safest bet rather than
relying fully on the output from the algorithm.
A cap on the top mark could be one solution - although I'm not aware of that functionality at the moment - beyond using the proportion option to restrict the available variability.
What do others think?
Neil
-----Original Message-----
From: WebPA Project [mailto:[log in to unmask]] On Behalf Of Simon Brookes
Sent: 11 November 2010 16:04
To: [log in to unmask]
Subject: Algorithm Question
Hi
I have been playing around with the WebPA algorithm in a spreadsheet
this afternoon. I would welcome some thoughts/advice please.
You can see the example here:
https://spreadsheets.google.com/pub?key=0AlaZW47BQdRVdDlTWHpMU3JXNDRMMEhYb3lsaWE5b1E&hl=en_GB&single=true&gid=3&output=html
I completely understand how the current algorithm works - this is not my issue. The problem I have is illustrated if you compare the marks for Groups A and B.
1. You'll see that both groups received the same group mark (74%).
2. The individual marks are modified by the peer assessment.
3. However, I am really unhappy that one student in Group B ends up with a mark of 97% while the top mark in Group A ends up as 82%.
4. Overall I deemed that the appropriate mark for both presentations was 74%, but the calculation ends up awarding one student in one group (which received exactly the same group mark as the other) a much higher final mark.
5. To me, this does not seem fair. Group B's work was not any better than Group A's but, because of a quirk of the system (actually because one student performed particularly poorly in Group B), these students mostly end up with much higher marks.
I have always felt uncomfortable about this when using this type of algorithm in the past. The individual mark distribution works fine within groups, but not when one compares between groups.
Would there be a way of, say, capping the top mark at the group mark, with all other marks distributed relative to that?
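Something like this hypothetical Python sketch, perhaps (not an existing WebPA feature, just to illustrate the idea):

    # Scale everyone in a group so the top mark equals the group mark.
    def cap_at_group_mark(marks, group_mark):
        top = max(marks)
        if top <= group_mark:
            return marks
        return [m * group_mark / top for m in marks]

    print(cap_at_group_mark([97, 78, 78, 78, 39], 74))
    # the 97 becomes 74.0 and the others scale by the same factor (74/97)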
Thanks
Simon