Dear all
How about re-reading this well-known paper that discusses the
reliability of assessment?
Elton, L. and Johnston, B. (2002). Assessment in universities: a
critical review of research. York: LTSN Generic Centre.
Others?
Brown, S. (2004). Assessment for Learning. Learning and Teaching in
Higher Education, Issue 1, pp. 81-89.
Gibbs, G. and Simpson, C. (2004). Conditions under which assessment
supports students' learning. Learning and Teaching in Higher Education,
Issue 1, pp. 3-31.
Regards
Liam
From: Joelle Fanghanel <Joelle.Fanghanel@TVU.AC.UK>
Sent by: "Online forum for SEDA, the Staff & Educational Development
Association" <[log in to unmask]>
Date: 18/08/2009 14:57
To: [log in to unmask]
Subject: Re: guild of markers
Please respond to: Joelle Fanghanel <Joelle.Fanghanel@TVU.AC.UK>
It is difficult not to remember and celebrate the wonderful work of
Peter Knight on assessment and the measurement of performance, and his
views on the Bell Curve....
Joëlle
-----Original Message-----
From: Online forum for SEDA, the Staff & Educational Development
Association [mailto:[log in to unmask]] On Behalf Of Strivens, Janet
Sent: 18 August 2009 14:28
To: [log in to unmask]
Subject: FW: guild of markers
I've been reading the comments on this request with interest. Teresa,
you don't say what subject or type of assessment you are interested in
(though I assume from your job title you're primarily interested in
discursive writing). As Robert and David both hint, I think the subject
matters, in that there are disciplinary cultures which influence
assessors' understanding of 'standards'. You would perhaps expect these
to operate most strongly in small departments with low staff turnover.
How do you define reliability? In departments which mark essays and use
a percentage scale, I would argue that they are in fact using quite a
limited number of levels of discrimination: at most a thirteen-point
scale corresponding to high, average and low within each degree
class. When staff in these departments say their marking agrees, what
they mean is that they differ by no more than a few percentage points
and never by a whole degree class. With a different form of assessment
this would not count as reliability, but I think in an essay-based
assessment it's reasonable. I'm wondering what precisely is meant in
the literature Robert refers to. Certainly it's my experience that in
subjects with a lot of discursive writing tasks, the level of broad
agreement between first and second markers is surprisingly high. When
they disagree, it's usually easy to see that they are privileging
different criteria - ranking them differently in terms of importance.
Such ranking usually balances out in undergraduate marking, but
sometimes not at Masters level.
I'm currently completing an in-depth study of the relation between
conceptions of the subject and controversies about assessment in two
departments, English and Maths, which includes student perspectives.
Happy to share findings in progress if you are interested. Look forward
to your paper!
Janet Strivens
Centre for Lifelong Learning: Educational Development Division,
The University of Liverpool,
128 Mt Pleasant,
Liverpool L69 3GW
Tel: 0151 794 1167 (office)
07939 521554 (mobile)
> > -----Original Message-----
> > From: Online forum for SEDA, the Staff & Educational Development
> > Association [mailto:[log in to unmask]] On Behalf Of Teresa
> McConlogue
> > Sent: 14 August 2009 16:17
> > To: [log in to unmask]
> > Subject: guild of markers
> >
> > Dear All
> >
> > I am currently writing a paper on peer assessment. One of the issues
> > that has emerged is subjectivity in marking. I've been reading
> > studies about explicit marking criteria and tacit knowledge and I've
> > come across the idea of a 'guild of markers' (Sadler) and also
> > 'community of assessors'. However, I can find no research studies
> > into the reliability of marking of a 'guild' or 'community'. I know
> > tutors claim that working together in a small team, they often
> > assign very similar grades to work, hence the assumption is that
> > they have managed through discussion to exchange tacit knowledge
> > about standards.
> > However, I would like to see some research evidence. It may be that
> > tutors use safe marking practices and a small range of marks, so
> > it's not surprising that their marks are similar.
> > Is anyone aware of any studies that either dispute or support the
> > idea of a guild of markers? I would be very grateful if you could
> > send me some references.
> >
> > Many thanks
> >
> > Teresa McConlogue
> >
> > --
> > Dr. Teresa McConlogue
> > Thinking Writing Advisor
> > Queen Mary, University of London
> > Mile End Road
> > London
> > E1 4NS
> >
> > Direct Telephone: 020 7882 2834