Hi Paul,

I really appreciate your detailed and clear reply. I have just received a similar example from another instructor, and it is consistent with your suggestion as to the likely problem. In this group:


  *   4 out of 6 students did not submit ratings
  *   self-assessment was not enabled
  *   as a result, students received only 1 or 2 separate ratings for each question
  *   even though most ratings were 4 or 5, the WebPA scores were very low; one student received two 5/5 ratings but a WebPA score of just 0.66
  *   non-responders received higher WebPA scores than responders.

I have passed on these details to our WebPA admin and will ask him what version we are running.

Many thanks,
Tim.

------------------------------------------------------------------
Timothy Allen
eLearning Consultant & Educational Designer
School of Mining Engineering UNSW
E:   [log in to unmask]

On 18 Nov 2016, at 10:22 PM, Paul Newman <[log in to unmask]> wrote:

Hi Tim,

I had a look at this, and there are really two issues at play.  The second one is the worry - your WebPA version looks to be broken somehow!


Firstly, the easy one...

Student E gets a lower grade than students B, C and D, because of that slightly lower mark of 4 that he gave student A.

The scores are normalised per question, so B, C and D were essentially giving everyone in their group 5/20 as a fractional score: the mark they gave divided by the total they gave for that criterion.  B, C and D are implying everyone put in equal effort.

Student E gave A just 4 marks, so only gave a total of 19 for the criterion.  That means B, C and D received 5/19 each, which is a slightly higher fractional score!  E implies with his scoring that B, C and D did more work than A, and had to put in that bit more effort to carry him.  As it's peer-only marking (no scoring yourself), E has no opportunity to say that he put in more effort too.

End result, E gets a slightly lower score overall, and therefore a lower grade, than B, C and D.
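
If it helps to see the arithmetic, here is a quick sketch of that normalisation in Python (my own illustration, not WebPA's actual PHP):

    # The example above: B, C and D give everyone 5; E gives A a 4 and
    # everyone else a 5; the fifth student (A) submits nothing.
    marks = {
        "B": {"A": 5, "C": 5, "D": 5, "E": 5},
        "C": {"A": 5, "B": 5, "D": 5, "E": 5},
        "D": {"A": 5, "B": 5, "C": 5, "E": 5},
        "E": {"A": 4, "B": 5, "C": 5, "D": 5},
    }

    # Each fractional score is the mark given divided by the total that
    # rater awarded for the criterion (20 for B, C and D; 19 for E).
    received = {s: 0.0 for s in "ABCDE"}
    for awards in marks.values():
        total = sum(awards.values())
        for ratee, mark in awards.items():
            received[ratee] += mark / total

    for student in "ABCDE":
        print(student, round(received[student], 4))
    # A 0.9605  (0.25 + 0.25 + 0.25 + 4/19)
    # B 0.7632  (0.25 + 0.25 + 5/19), likewise C and D
    # C 0.7632
    # D 0.7632
    # E 0.75    (0.25 + 0.25 + 0.25), just below B, C and D

E's total lands just below B, C and D's, and that small gap is the whole difference in grade.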


Now the bad news...

As far as I can see, Student A is the only one in Group 7 (your second example) who actually gets the correct grade!  (98.03% before penalties)
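
That 98.03% is what drops out of your 50% PA weighting, as far as I can tell: half the group mark is fixed and half is scaled by the student's WebPA score. A rough sketch, assuming a group mark of 100 (the formula is my reading of the algorithm, not copied from the code):

    # Assumed inputs: a group mark of 100, the 50% PA weighting from
    # your settings, and A's fractional total from the sketch above.
    group_mark = 100.0
    pa_weighting = 0.5
    webpa_score = 0.75 + 4 / 19   # = 0.9605...

    final = group_mark * (1 - pa_weighting) + group_mark * pa_weighting * webpa_score
    print(round(final, 2))        # -> 98.03 (before the 20% penalty)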

There's clearly something wrong with the way WebPA is calculating the peer-only grades in your example, but it's not something I could replicate through the software - I faked both groups' assessment submissions in our local copy of WebPA OS and got different results.

The Group 5 simulation came out the same as yours: everyone got the same grade (before penalties).

The Group 7 simulation came out as I expected, but was not the same as yours.  As mentioned above, Student A got a slightly lower WebPA score, but after rounding it still came out to 100%, the same as B, C and D.  Student A's final grade was 98.03% (before penalties).

The closest I could get to your anomalous grading results was to intentionally corrupt the assessment: have the students submit in peer-only mode, then force it to use the self-and-peer grading algorithm.  The results weren't identical - Group 5 was broken too in my example - but the spread of results for Group 7 was similar, and Student A received a much higher grade than the others.

Running the grading algorithm in this mode doesn't compensate for Student A not submitting, so the other students 'miss out' because they received lower total scores - which is similar to your guess as to the cause.
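
You can see the mechanism in the earlier sketch: A is rated by four people while everyone else is rated by only three, so if nothing scales the totals back up, the non-submitter always tops the table. Continuing with the same marks and received values:

    # Count how many raters scored each student, then see who ends up
    # with the biggest raw (uncompensated) total.
    raters = {s: sum(1 for awards in marks.values() if s in awards)
              for s in "ABCDE"}
    print(raters)                           # {'A': 4, 'B': 3, 'C': 3, 'D': 3, 'E': 3}
    print(max(received, key=received.get))  # 'A', the non-submitter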

I don't know if your copy of WebPA is broken or if the data has been mangled somehow, but the calculation for grades is not working correctly.

I tested without the WebPA 2.0 LTI additions here - what version are you running?

--
Paul Newman
------------------------------------------------------
Senior PHP Developer
Learning Technology & Digital Innovation Group, IT Services
Loughborough University
------------------------------------------------------



From: [log in to unmask]
Subject: Problems with WebPA results

Hi everyone,

Every now and then I have had reports of unusual results turning up in WebPA. I have decided to post a specific example here to see if anyone can shed some light on the issue.

What is happening is that some groups are getting what look like abnormal WebPA scores when compared with other groups with almost identical peer ratings. In the example below, we have two groups of 5 students in which one student did not submit any ratings. The self-assessment option was not enabled, and a 50% PA weighting and a 20% non-completion penalty were set.

[..]