As far as I can see, discrimination and facility indices were settled
on as the standard analysis tools at a time when computers were not
commonly used to make the necessary calculations.  Although they are
simple to calculate, they give no indication of the statistical
significance of the numbers they produce, and I've known groups of
academics argue over questions with extreme values even though the
number of students who responded was barely above the recommended
minimum.
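For anyone unfamiliar with the two indices, here is a minimal sketch of how they are conventionally calculated (the function names and the 27% upper/lower split are my own illustrative choices, not taken from any particular system):

```python
def facility(responses):
    """Facility index: the proportion of students answering the item
    correctly. `responses` is a list of 0/1 item scores."""
    return sum(responses) / len(responses)

def discrimination(responses, totals, fraction=0.27):
    """Upper-lower discrimination index: facility among the top-scoring
    students minus facility among the bottom-scoring students, where
    `totals` holds each student's overall test score.  Taking the top
    and bottom 27% is the conventional choice."""
    order = sorted(range(len(totals)), key=lambda i: totals[i])
    k = max(1, int(len(totals) * fraction))
    lower = [responses[i] for i in order[:k]]
    upper = [responses[i] for i in order[-k:]]
    return facility(upper) - facility(lower)

# Ten students; the item is answered correctly mostly by high scorers.
item = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
tot  = [12, 15, 18, 20, 22, 30, 34, 38, 40, 45]
print(facility(item))            # 0.6
print(discrimination(item, tot)) # 1.0
```

Note that neither number says anything about how likely such a value is to arise by chance with so few students, which is exactly the problem described above.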

My personal preference is to use ANOVA or correlation (depending on the
scoring scheme for the question) to analyse class performance.  A
correlation coefficient is as useful as, for example, a discrimination
index, but it comes with a significance level, so you know when to take
it with a pinch of salt.
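As a sketch of the correlation approach for a dichotomously scored question: the point-biserial correlation between item score and test total plays the same role as a discrimination index, and the associated t statistic gives the significance.  This is a pure-stdlib illustration of my own; in practice `scipy.stats.pointbiserialr` returns the coefficient and p-value directly.

```python
import math

def point_biserial(item, totals):
    """Pearson correlation between a 0/1 item score and the test total
    (equivalent to the point-biserial coefficient)."""
    n = len(item)
    mean_i = sum(item) / n
    mean_t = sum(totals) / n
    cov = sum((a - mean_i) * (b - mean_t) for a, b in zip(item, totals))
    var_i = sum((a - mean_i) ** 2 for a in item)
    var_t = sum((b - mean_t) ** 2 for b in totals)
    return cov / math.sqrt(var_i * var_t)

def t_statistic(r, n):
    """t = r * sqrt((n-2) / (1-r^2)); compare against Student's t with
    n-2 degrees of freedom to obtain the significance level."""
    return r * math.sqrt((n - 2) / (1 - r * r))

# Only four students: the correlation looks impressive...
item, totals = [0, 0, 1, 1], [1, 2, 3, 4]
r = point_biserial(item, totals)   # ~0.894
t = t_statistic(r, len(item))      # ~2.83
```

The point of the example: with n = 4 the critical t value (2 degrees of freedom, 5% two-tailed) is about 4.30, so even r ≈ 0.89 is not significant — precisely the "pinch of salt" that a raw discrimination index cannot supply.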

I can give some details of this if people think it will be of interest
and I would welcome criticism.

Jon Maber

David Davies wrote:
> Hi Carole
> I'll tell you what we do. For now, forget XML completely. It's not relevant
> to this discussion.
> We've got a web-based MCQ system that pulls MCQs out of a database for
> delivery via the web and pokes the students' responses back in. It's no
> different from any other MCQ system that uses server-side processing,
> including all the commercial systems.
> Because the results are gathered in a results database, the system
> automatically calculates both the facility and the discrimination. The
> students receive their marks and the question owner(s) receive the stats
> describing the usefulness of their questions. We've spent a bit of time
> developing a tidy web interface to this so that the tutor or lead teacher
> can log in and monitor the performance of their questions at their
> convenience.
> Individual questions are coded according to level of difficulty such that
> there are golden questions that define core learning, standard questions
> that have been used before and so we have an idea as to their facility and
> discrimination, and new questions that have been deemed suitable for
> inclusion, but have not been tested 'in the field'.
> This utility is for us one of the chief benefits of using computer-based
> assessment.
> Cheers,
> David