I am working with the English Regions Cycling Development Team, which
has been assessing all Local Transport Plans and Annual Progress Reviews
using a fairly complex scoring system. They are using four different
scoring sheets, each covering a different aspect of the assessment.
Each sheet has a maximum possible score, though scores are awarded in
different ways - e.g. one sheet can only include positive marks whilst
another allows negative points too. Another difference is the degree to
which scores for a
particular question are subjective or prescribed, though I think each
sheet has only one or the other. There will be a total of about 140
sets of scores (one per highway authority, more-or-less), which are
being assessed by 10 different people (though they have not all
assessed the same number, owing to regional variations).
We are going to be producing a spreadsheet of all the different
questions that can be scored, and looking at these by region and by
scorer. We know that some of the scorers are more lenient and others
tougher. What we are wondering is whether there is any kind of tool
that can help pinpoint and iron out the variable subjective scoring
that results from joint work like this - and also to work out what
weightings would be needed to balance out the differences between
individual scorers.
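[Editor's note: not a pointer to a specific off-the-shelf tool, but a
minimal sketch of one common way to even out lenient and tough scorers:
standardise each scorer's marks to z-scores, so every scorer ends up
with the same mean and spread. The scorer names and marks below are
invented purely for illustration.]

```python
from statistics import mean, pstdev

# Invented example data: each scorer's raw marks for the authorities
# they assessed. Scorer A is systematically tougher than scorer B.
raw_scores = {
    "scorer_a": [62, 70, 58, 66],
    "scorer_b": [80, 88, 76, 84],
}

def standardise(scores):
    """Convert one scorer's marks to z-scores (mean 0, spread 1),
    removing that scorer's systematic leniency or toughness."""
    m, s = mean(scores), pstdev(scores)
    return [(x - m) / s for x in scores]

adjusted = {who: standardise(marks) for who, marks in raw_scores.items()}
```

In this made-up example the two scorers rank their authorities
identically but differ by a constant leniency offset, so after
standardising, their adjusted scores coincide; the z-scores themselves
are one way of reading off the "weightings" needed per scorer.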
I know it sounds like an old-fashioned maths question (how long does it
take four men to dig a hole 6 feet wide ....), but any answers will be
very gratefully received.
Paul
--
Paul Rosen
Science & Technology Studies Unit
Department of Sociology
University of York
Heslington, York
YO10 5DD.
UK
Tel. 01904 - 434743 Mobile. 07968 - 707738
Fax. 01904 - 434702
Email: [log in to unmask] Web: http://www.york.ac.uk/org/satsu/