Hi Simon,
This is something I'm currently looking at too. I've got data going back
several years which I'm about to analyse in detail to see if there are
any statistical differences in marking, with the aim of identifying an
optimal group size. E.g. depending on group size, do marking
differences make *that* much difference?
Also, I'm looking at the algorithm, as already discussed, to see if
there is an optimal way to present the scores. I'm wary of mark
discrepancies too, and also of the risk that students can get more than
100%. Other assessment processes do the same (e.g. Derek Rowntree's
model) and I think it's not always best just to give, say, 100% if the
mark is over 100%. Ideally, the algorithm should hold for all group
sizes, marks and conditions. What might be nice is an algorithm that
produced results which were always within a pre-determined margin -
e.g. if the group mark is 50%, each group member would end up with a
mark between 45% and 55%.
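A minimal Python sketch of that margin idea (the function and parameter
names are mine, purely illustrative):

def clamp_to_margin(marks, group_mark, margin=5):
    # Clamp each peer-adjusted mark to within +/- margin of the group mark.
    lo, hi = group_mark - margin, group_mark + margin
    return [min(max(m, lo), hi) for m in marks]

print(clamp_to_margin([37, 48, 52, 63], group_mark=50))  # -> [45, 48, 52, 55]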
I should add that I didn't design this algorithm, but it is very similar
to one by Lawrence Li that I've used, which itself has evolved over the
last decade or so. [Li, L. K. Y. (2001). Some refinements on peer
assessment of group projects. Assessment & Evaluation in Higher
Education, 26(1), 5-18.]
There are two issues, therefore: is the current algorithm fair enough to
smooth out group differences in marking in a way that students consider
'appropriate' (I'm avoiding using the word 'fair'); and is the algorithm
actually reflecting the students' efforts based on their marking? I'm
looking at this by considering the marking, evaluating how they are
actually working in a group (observation etc.) and triangulating this
with what they consider 'fair' after the marks are awarded. For example,
is the mark that important if the students are happy with the ranking?
Are they bothered if a group member scored X%, as long as they are happy
with their own mark? Etc.
Regards
Paul
-----Original Message-----
From: WebPA Project [mailto:[log in to unmask]] On Behalf Of Simon
Brookes
Sent: 15 November 2010 14:28
To: [log in to unmask]
Subject: Algorithm Question Again
There is another problem with the WebPA algorithm (I think!). According
to the "worked example of the scoring algorithm" in WebPA Help, the
final individual WebPA score a student gets is a function of the number
of students in the group - it is the sum of the fractions awarded by
each student to each individual. So, the more students within each
group, the higher the scores will be!
This problem is most exaggerated in a scenario where one student
performs very poorly compared with the rest of the group or vice versa -
a common occurrence in my experience.
See my worked example showing the difference in final marks for groups
of students with 5 and 4 members respectively. The group marks were the
same (74) but look at the effect on the final mark.
https://spreadsheets.google.com/pub?key=0AlaZW47BQdRVdHZCY0FoOGNRUXNTR1dGdUswR2lrZGc&single=true&gid=3&output=html
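For reference, here is a minimal Python sketch of the fractional scoring
as the Help page describes it (the rating matrices below are
illustrative, not the data in my spreadsheet):

def webpa_scores(ratings):
    # ratings[i][j] = score assessor i awards to group member j.
    # Each assessor's row is normalised to fractions summing to 1;
    # a member's WebPA score is the sum of the fractions they receive.
    n = len(ratings)
    scores = [0.0] * n
    for row in ratings:
        total = sum(row)
        for j, r in enumerate(row):
            scores[j] += r / total
    return scores  # the scores sum to n, so individuals can exceed 1.0

group_mark = 74

# Five members, the last rated zero by everyone, vs. four members rated equally.
uneven = [[4, 4, 4, 4, 0]] * 5
even = [[4, 4, 4, 4]] * 4

for label, ratings in (("one poor performer:", uneven), ("all equal:", even)):
    print(label, [round(s * group_mark, 1) for s in webpa_scores(ratings)])
# one poor performer: [92.5, 92.5, 92.5, 92.5, 0.0]
# all equal: [74.0, 74.0, 74.0, 74.0]

The WebPA scores always sum to the number of assessors, which is why one
very weak member pushes everyone else in that group above the group mark.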
Do the designers of the algorithm have any comments?
Cheers
Simon
>>> Neil A Gordon <[log in to unmask]> 11/11/2010 16:37 >>>
Hi Simon
I share your reservations - and usually end up moderating such cases,
and similarly when the algorithm ends up allocating a mark of 100% to
individual members of a team.
Some colleagues use the option to alter the proportion of how much is
allocated by the algorithm, e.g. 50% from the original mark, 50% through
WebPA - although in my view that introduces another artificial
allocation (so within a team, a student who was allocated zero by the
algorithm would still get 50% of the overall team mark).
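A quick sketch of that proportion option as I understand it - a simple
weighted average (the function and parameter names are mine):

def final_mark(group_mark, webpa_score, weighting=0.5):
    # 'weighting' is the share of the mark allocated via the algorithm;
    # the remainder comes straight from the raw group mark.
    return webpa_score * group_mark * weighting + group_mark * (1 - weighting)

print(final_mark(74, 0.0))   # 37.0 - a zero-rated student still gets half the team mark
print(final_mark(74, 1.25))  # 83.25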
It seems to me that tutor moderation is the safest bet rather than
relying fully on the output from the algorithm.
A cap on the top mark could be one solution - although I'm not aware of
that functionality at the moment - beyond using the proportion option to
restrict the available variability.
What do others think?
Neil
-----Original Message-----
From: WebPA Project [mailto:[log in to unmask]] On Behalf Of Simon
Brookes
Sent: 11 November 2010 16:04
To: [log in to unmask]
Subject: Algorithm Question
Hi
I have been playing around with the WebPA algorithm in a spreadsheet
this afternoon. I would welcome some thoughts/advice please.
You can see the example here:
https://spreadsheets.google.com/pub?key=0AlaZW47BQdRVdDlTWHpMU3JXNDRMMEhYb3lsaWE5b1E&hl=en_GB&single=true&gid=3&output=html
I completely understand how the current algorithm works - this is not my
issue. The problem I have is illustrated if you compare the marks for
Groups A and B.
1. You'll see that both groups received the same group mark (74%).
2. The individual marks are modified by the peer assessment.
3. However, I am really unhappy that one student in Group B ends up with
a mark of 97% while the top mark in Group A ends up as 82%.
4. Overall I deemed that the appropriate mark for both presentations was
74%, but the calculation ends up awarding one student in one group
(which received exactly the same group mark as the other) a much higher
final mark.
5. To me, this does not seem fair. Group B's work was not any better
than Group A's but, because of a quirk of the system (actually because
one student performed particularly poorly in Group B), these students
mostly end up with much higher marks.
I have always felt uncomfortable about this when using this type of
algorithm in the past. The individual mark distribution works fine
within groups but not when one compares between groups.
Would there be a way of, say, capping the top mark at the group mark,
with all other marks then distributed relative to that?
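Something like this rescaling, perhaps (a rough sketch of my
interpretation, assuming the peer-adjusted marks have already been
computed):

def cap_at_group_mark(adjusted_marks, group_mark):
    # Rescale so the top mark equals the group mark, preserving the
    # relative distribution within the group.
    top = max(adjusted_marks)
    return [round(m * group_mark / top, 1) for m in adjusted_marks]

print(cap_at_group_mark([97.0, 80.0, 74.0, 60.0], 74))
# -> [74.0, 61.0, 56.5, 45.8]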
Thanks
Simon