Hi Alex
>Some tracking of other performance factors such as time taken, order of
>answering questions and changes made would probably be relatively easy to
>implement. I agree that averaging a student's performance out over all tests
>taken would increase one's confidence that final scores were quite
>accurate.
Yes, I believe so as well.
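As an illustration, the tracking described above (time taken, answer order, changes made) could be little more than a timestamped event log per question. This is a minimal hypothetical sketch, not any particular testing system's design; all names are invented:

```python
import time

class AttemptLog:
    """Hypothetical per-question event tracking for an online test."""
    def __init__(self):
        self.events = []  # (question_id, answer, timestamp)

    def record(self, question_id, answer):
        self.events.append((question_id, answer, time.time()))

    def answer_order(self):
        # Order in which questions were first answered.
        seen, order = set(), []
        for qid, _, _ in self.events:
            if qid not in seen:
                seen.add(qid)
                order.append(qid)
        return order

    def changes(self, question_id):
        # Number of times an answer was revised after the first attempt.
        n = sum(1 for qid, _, _ in self.events if qid == question_id)
        return max(0, n - 1)

log = AttemptLog()
log.record("Q2", "a")
log.record("Q1", "c")
log.record("Q2", "b")  # candidate changed their mind on Q2
print(log.answer_order())  # ['Q2', 'Q1']
print(log.changes("Q2"))   # 1
```

Elapsed time per question falls out of the same data by differencing the timestamps.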
>The people being tested should be made aware of the extra criteria though,
>so that they can be extra careful to avoid the tiles and the booze the
>night before.
Surely - simple good practice at any level or vocational setting.
>I also wonder whether some kind of random drug testing for
>cocaine for example may be worth adding when emphasising performance
>factors (only half joking ;-)) rather than simply allowing enough time for
>all the questions to be answered by a weak student.
But all exams are time-limited, are they not - online or on paper? I am not
sure what you mean by "allowing enough time for all the questions to be
answered by a weak student". If they are weak, then they will fail to
answer correctly, or at all.
>How to compensate for the fact that some people are more nervous or
>irritable (over-aroused for optimal performance) in a test situation than
>others and that such testing may simply exacerbate their clumsiness, while
>they may have performed superlatively when not being tested is another
>problem that would make me uneasy about implementing such a scheme
>personally.
Again, I cannot quite see your objection. Everyone is nervous for an exam -
online or on paper. Ticking the incorrect box on paper is surely just as
likely as clicking an incorrect box on screen - and in both cases, it can (or
should) be correctable. Recording corrections may be interpreted
as "incorrect or unsure knowledge", which is why I suggest that inference is
all that can be gained, and that the cumulative results of formative and
summative assessment are required to be as sure as possible of actual
knowledge.
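Combining formative and summative results could be as simple as a weighted mean of the averaged marks. A minimal sketch, with a purely illustrative weighting (nothing in this thread prescribes one):

```python
def combined_score(formative, summative, summative_weight=0.6):
    """Weighted combination of averaged formative and summative marks.
    The 0.6 summative weight is purely illustrative."""
    f = sum(formative) / len(formative)  # mean formative mark
    s = sum(summative) / len(summative)  # mean summative mark
    return (1 - summative_weight) * f + summative_weight * s

# Both streams average 70, so the combined mark is 70 regardless of weight.
print(combined_score([60, 70, 80], [65, 75]))
```

Averaging over several assessments, as discussed above, dampens the effect of any single off-day or lucky guess.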
>Regarding hard or easy questions, it may also be that some are about
>aspects of the subject domain that students find more interesting, put more
>effort into and consequently performed better on rather than the question
>itself.
Clearly, a student who studies a topic is more likely to answer correctly
than a student who does not study a topic. Surely, this is what we are
testing for - subject knowledge from a prescribed course of learning?
>The alternative answers to any question could also have a bearing
>on how easy it is to distinguish the correct answer from among them.
Absolutely - and this is a conventional aspect of MCQ design.