Hi Richard,
I'd be interested in helping out, particularly as my current PhD has been looking at some of what you're interested in. I think the questions would need some revision to build in a holistic view of the assessment, as the scores themselves don't show the full picture. You'd also need to be careful with the word 'objective': human beings are by nature subjective, so the definition of 'objective' would need to be set in a particular context.
Regards
Paul
-----Original Message-----
From: Richard Craggs [mailto:[log in to unmask]]
Sent: 12 July 2018 17:35
To: [log in to unmask]
Subject: Research into moderating peer assessment and collusion
Hi all,
I'm considering doing a study into how teachers or tools can moderate peer assessments of students. I'd like to know if anyone is interested in this idea.
My research questions are:
1. Is it possible for human beings to objectively agree about whether the results of peer assessment are 'suspicious' (e.g. either a student is unfairly treating another, or collusion has taken place) by looking at the ratings students give each other?
2. Can numerical analysis of peer assessments provide indications of suspicious behaviour which match what humans identify?
3. Can data from the version control systems used by student groups be used to automatically validate peer assessment and thus identify cases where students have been treated unfairly?
My plans for answering these are:
1. Get groups of humans to review peer assessments from WebPA and label them using some scheme such as "normal, suspicious-collusion, suspicious-victimisation". We can then use inter-rater reliability metrics to measure the degree to which this is an objective task (see the first sketch after this list).
2. Use agreement measures (like inter-rater reliability) to measure how closely peers' ratings of each group member align within a group. Too much or too little agreement between students may indicate suspicious behaviour. We can then compare the results of this with the human judgements from (1) (see the second sketch below).
3. I have logs from version control systems that tell me how many lines of code each student contributed to a group project and how many commits they made over what time periods. It will be possible to test whether any of these metrics (number of days active, number of lines contributed, etc.) correlates with the outputs of peer assessment. If so, version control statistics could be used to automatically flag issues in peer assessment (see the third sketch below).
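
To make (1) concrete, here is a rough Python sketch of the kind of inter-rater reliability calculation I have in mind (Fleiss' kappa, hand-rolled to keep it self-contained; the labels and example data are made up purely for illustration):

# Sketch for question 1: can human reviewers agree on labels for
# peer-assessment results? Fleiss' kappa measures agreement beyond chance.
from collections import Counter

LABELS = ["normal", "suspicious-collusion", "suspicious-victimisation"]

def fleiss_kappa(ratings):
    """ratings: one list of labels per reviewed item, one label per reviewer.
    Every item must be labelled by the same number of reviewers."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    counts = [Counter(item) for item in ratings]  # label counts per item
    # Per-item observed agreement
    p_i = [(sum(c * c for c in cnt.values()) - n_raters)
           / (n_raters * (n_raters - 1)) for cnt in counts]
    p_bar = sum(p_i) / n_items
    # Marginal proportion of each label across all judgements
    p_j = [sum(cnt[label] for cnt in counts) / (n_items * n_raters)
           for label in LABELS]
    p_e = sum(p * p for p in p_j)  # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

# Three reviewers label four WebPA result sets (illustrative data only).
example = [
    ["normal", "normal", "normal"],
    ["suspicious-collusion", "suspicious-collusion", "normal"],
    ["suspicious-victimisation"] * 3,
    ["normal", "suspicious-collusion", "normal"],
]
print(f"Fleiss' kappa = {fleiss_kappa(example):.2f}")

A kappa close to 1 would suggest the labelling task is reasonably objective; a value near 0 would mean reviewers agree little beyond chance.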
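For (2), a much simpler first proxy than a formal inter-rater statistic would be to look at the spread of the scores each group member receives from their peers. A rough sketch, where the thresholds and example scores are just illustrative guesses:

# Sketch for question 2: flag groups whose peer ratings agree
# suspiciously closely (possibly pre-agreed scores) or hardly at all.
from statistics import mean, pstdev

def group_spread(ratings):
    """ratings: dict mapping each rated student to the list of scores
    their peers gave them. Returns the mean per-student spread."""
    return mean(pstdev(scores) for scores in ratings.values())

def flag(ratings, low=0.2, high=1.5):
    spread = group_spread(ratings)
    if spread < low:
        return "suspicious-collusion"  # everyone gave near-identical scores
    if spread > high:
        return "suspicious"            # ratings disagree a lot; needs a human look
    return "normal"

# Illustrative ratings for one group (scores out of 5); every member received
# exactly the same scores, which could mean marks agreed in advance.
group = {
    "alice": [4, 4, 4],
    "bob":   [4, 4, 4],
    "carol": [4, 4, 4],
}
print(flag(group))  # -> suspicious-collusion

The interesting part will then be comparing flags like these against the human labels from (1).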
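And for (3), the correlation test itself could be as simple as the following (the field names and numbers are invented, the real version control logs will obviously look different, and this assumes SciPy is available):

# Sketch for question 3: does repository activity track peer-assessment
# scores? Spearman rank correlation, since neither variable is likely normal.
from scipy.stats import spearmanr

# Illustrative per-student data: lines contributed, days active, WebPA score.
students = [
    {"lines": 1200, "days_active": 14, "webpa_score": 1.15},
    {"lines":  800, "days_active": 10, "webpa_score": 1.05},
    {"lines":  400, "days_active":  6, "webpa_score": 0.95},
    {"lines":  150, "days_active":  3, "webpa_score": 0.70},
    {"lines":   50, "days_active":  2, "webpa_score": 1.10},  # high score, little code
]

scores = [s["webpa_score"] for s in students]
for metric in ("lines", "days_active"):
    values = [s[metric] for s in students]
    rho, p = spearmanr(values, scores)
    print(f"{metric}: Spearman rho = {rho:.2f} (p = {p:.2f})")

# Students whose peer score and repository activity point in opposite
# directions could then be flagged automatically for manual review.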
Let me know if you are interested
(n.b. I'm using "peer assessment" to refer to what WebPA does, which is to allow students to rate the contribution and quality of their peers in group work)
########################################################################
To unsubscribe from the WEBPA list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=WEBPA&A=1
########################################################################