At 11:06 AM 5/7/98 -0300, Robert Dawson wrote:
>Jeff Rasmussen wrote:
>
>> Just the other day I was chatting with a cognoscente in the field of
>> computerized evaluation of essay exams. If I understood him correctly, he
>> told me that the inter-rater reliability of human judges of the quality of
>> essay exams (of the type that now appear on SAT exams) was around .70 and
>> that the multiple correlation between a human and a computer-generated
>> grade was also around .70. Some of the predictors were essay length, use
>> of keywords/synonyms and vocabulary. I asked him for a paper, and will
>> report back on it--I suspect my summary above is incorrect due to
>> faulty memory.
>>
>> Obviously, there are lots of questions about testmanship, ability to
>> distinguish "word salad" answers from well-organized ones & such, but the
>> initial results were intriguing.
>
Robert,
Thanks for the comments below. I don't know enough about this issue to
say whether it is a good or bad idea. But, nevertheless, just a few
observations.
Just because the computer does the analysis doesn't mean it's some
mindless random process. The keywords are generated by humans, and it is
reasonable to expect that they covary with the quality of the answer. If
the question were "How do you grow corn?", a good answer would be more
likely to contain keywords such as water, soil, and fertilizer than would a
poor answer. Indeed, one way of conceptualizing the matter is that instead
of asking the student:
"Explain how you would grow corn"
we were to ask the student:
"List some of the keywords associated with growing corn"
Answers to these two questions would correlate with each other and with
your Platonic ideal (which, btw, is a concept rejected not just by
cynics... but that's a different matter).
As for your a, b & c below, the ability of students to generate
appropriate keywords in a slurry of gibberish would covary with their
knowledge of the topic. For example:
Student A: "The corn is like water for we see the soil and fertilizer bugs
must be sprayed rototill"
probably knows more than
Student B: "The corn is like monkey for we see the pencil and ring phone
must be taken vacuumed"
Or as Shakespeare said (more or less): "it takes a wise man to play the fool."
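To make the point concrete, the keyword idea can be sketched in a few lines of Python. The keyword list and the bag-of-words scoring rule here are my own illustrative assumptions, not anything from the actual grading program, which no doubt also weighs essay length, synonyms, and vocabulary:

```python
# Hypothetical keyword list for "How do you grow corn?" -- an assumption
# for illustration, not the real program's keyword set.
CORN_KEYWORDS = {"water", "soil", "fertilizer", "sprayed", "rototill"}

def keyword_score(answer: str, keywords: set) -> int:
    """Count how many expected keywords appear in the answer."""
    words = {w.strip(".,!?").lower() for w in answer.split()}
    return len(words & keywords)

student_a = ("The corn is like water for we see the soil and fertilizer "
             "bugs must be sprayed rototill")
student_b = ("The corn is like monkey for we see the pencil and ring phone "
             "must be taken vacuumed")

print(keyword_score(student_a, CORN_KEYWORDS))  # 5 -- knows the vocabulary
print(keyword_score(student_b, CORN_KEYWORDS))  # 0 -- pure word salad
```

Even this crude counter ranks Student A's slurry of gibberish above Student B's, which is the covariance-with-knowledge point above.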
As for the threat of deans & lawyers (oh my!), I doubt that either can
distinguish between sense and gibberish so the student's complaints would
fall on deaf ears. (As an aside: Two lawyers are talking. The first says
"Hey, you're lying to me!" The second says "Yes I am, but hear me out.")
obsolescently yours,
JR
> Inter-rater reliability seems like a red herring here, or at best a
>necessary but not sufficient condition for an acceptable grading scheme.
>As a wild counterexample, suppose that we have two schemes. In one, Grader
>A grades all the papers. In the second, the *same* grader rolls a die for
>each paper; if he rolls 2,3,4,5,6 he grades the paper but on rolling a 1
>he uses the "throw-them-down-the-stairs" technique. The correlations would
>be similar.
>
> Two human graders may frequently disagree; but I would hazard a guess that
>each of them correlates much more strongly with the Platonic ideal grade
>[or, for the cynics who don't believe in that, the average over many
>graders] than the program would.
>
> As another example: There may be a similarly strong correlation between
>students' first-year grades and second-year grades. Would it be ethical to
>save trouble by doing no evaluation in second year and just giving the
>first-year grade again?
>
> A few more thoughts for anybody who seriously thinks that this sort of
>thing would work:
>
> (a) Scenario 1. Student who is unhappy about getting (say) B+ rather than
>A deliberately submits an essay which is semantically gibberish but
>syntactically correct and using the right keywords. Upon getting even a
>passing grade he takes it to the dean and demands that the professor who is
>using stochastic grading techniques be disciplined.
>
> (b) Scenario 2. Word gets out that the essays are graded by machine.
>Bright student figures out, as a challenge, what the program looks for, and
>circulates a page on "How to Write A Relevant Studies 1000 Essay." This
>cannot be held to be cheating, as the other students are still writing the
>essay themselves; no disciplinary committee would rule that students are
>forbidden to discuss what the professor "wants" to see in an essay.
>
> (c) Word gets out. Student fails course & comes back with a lawyer. 'Nuff
>said.
>
> Does anybody reading this list *really* consider that competence in their
>own discipline, even at the undergraduate level, can be operationalized in
>terms of a computer keyword search? "Use the right buzzwords in the right
>places and you know all about X"?
>
> If so, I have another Modest Proposal. Since the evaluation program has
>determined [or been told] what constitutes a good essay, let us add in an
>ELIZA-type routine that will produce the essays as well, in whatever number
>may be desired. It will then be competent in Subject X, and the students
>who would have written the essays [and the professors who would have
>graded^H^H^H^H^H^H handed them to the secretary for scanning] will be
>obsolete.
>
> -Robert Dawson
>
>
.....................................
'  Jeff Rasmussen, PhD              '
'  Indiana University Indianapolis  '
'  402 North Blackford              '
'  Indianapolis, IN 46202           '
'                                   '
'  Quantitative Software:           '
'  http://psychology.iupui.edu/fb   '
.....................................