John Beech wrote:
>
> What has been disappointing in the last week has been the lack of
> contributions on the original debate - the 'rationale' for
> 'all-or-nothing' approaches to the admissibility of plagiarism evidence.
>
Oh, a SUBSTANTIVE issue ;o)
I took the report in the THES with the usual medicinal dose of salt; I have
never found them very reliable outside their job advert section (and maybe
the book reviews), so I'm not sure precisely what the situation was. But I
would make a few distinctions:
- "suspicion triggered" testing: examiner thinks parts of an essay are
plagiarised, and uses a tool such as Turnitin to verify the suspicion:
totally unproblematic. Analogous situation: reasonable suspicion
triggers issue of a search warrant, or for a DNA probe. As the analogy
shows, I'd say this raises if at all even less issues than a blanket
probe of all essays
- "random searches of samples", e.g. every 5th essay, or a randomly
chosen different course every year:
At least as unproblematic as checking all. Analogy: random testing for
drink driving around Christmas. Advantage of that strategy: minimises
work while maximises deterrence. We may feel that for very intrusive
investigations (search of one's home, permission to entrapment, etc),
this is unsuitable and reasonable suspicion should be requested (On
random testing of virtue see Supreme Court of Canada in 1991 (R v.
Barnes) But putting an essay through a piece of software is not of that
kind, so no issue here.
- "non-random searches of samples along administrative lines" E.g a
degree program leaves it to individual course organisers if they use the
system, some always do, some always don't:
That's in effect what I did when I tested JISC for my institution, I
asked my courses if they would volunteer. Students tend to be very
unhappy about this, I'm unsure if with good reasons.I suppose the issue
for me would be less one of procedural fairness or admissibility of
evidence but of quality control: students with identical degrees for the
same course from the same institution would have been examined to
potentially very different standards, depending on the courses they
took. As a "tested" student, I may feel that the "untested" students
dilute the value of my degree. But this only means it is problematic as
a policy, not that evidence created this way is problematic.
- "non random searches of samples triggered by external parameters"
think of racial profiling for stop and searches. Only ever Chinese,
Greek,German, African etc students are put through the system because
the examiner "knows" that they are prone to plagiarise.
Problematic for all sorts of reasons. Leads to self-perpetuating myths
(Since more members of group tested, more are caught, which then fuels
the next cycle)Removes deterrence from "non-targeted groups". So
definitely not an acceptable policy. BUT even then, you may wonder if
the fact that the evidence was obtained in a problematic way should make
it inadmissible. Say a tutor is found out doing this despite the
official policy. If this were a full fledged legal issue, different
legal systems would answer this differently, which shows people have
very strong differing intuitions on this. In the US, there is a strong
nexus between illegally obtained evidence and inadmissible evidence: By
making the evidence inadmissible, you take away the temptation to
collect it in an inappropriate way. In continental European systems,
there is no such clear nexus: lots of illegally obtained evidence is
admissible, as long as it is reliable. So in Germany, you still would
kick out the student for cheating, but also the tutor (for racism)
Burkhard