
> On Mar 4, 2017, at 10:09 PM, Keith Russell <[log in to unmask]> wrote:
> 
> The second type is where they have gone over the muddle and normalised it somewhat. Some of the software will take you through the text line by line, offering variations as you go.

GS:
(Other than what seems like a cynical attitude on the part of the student/writer, I wonder how this differs from writing drafts that are marked by the subject instructor and/or a writing instructor, then revised.)

KR:
> Because Turnitin is set up at my uni to allow multiple resubmission, students can then make further changes, unseen by me, that reduce the percentage to around 15 or less. Very few lecturers look at 15%.

GS:
I have to shake my head when I hear people talking about Turnitin percentages as if that were a measure of anything important. That’s not just a failure of software. It’s a conceptual failure of the people setting "standards."

KR:
> Even when the paragraph blocks are identical in conceptual content, Turnitin will not show up the similarity if just a few words are moved around.

GS:
Part of the problem here is that people stress the need for originality and conflate that with issues of plagiarism. It would be difficult, but it is possible to write a non-plagiarized piece without a bit of "original" verbiage. No need to move words around. Just write "As Keith Russell tells us. . ." and append a footnote at the end. What's up with all of this obsession with making things conceptually identical yet superficially dissimilar?
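Keith's observation, that moving just a few words around hides the similarity, is exactly what you would expect if the matcher compares overlapping runs of words. Turnitin's actual algorithm is proprietary, so the following is only an illustrative sketch of generic n-gram overlap matching, not a description of any real product; the sentences and the `ngrams`/`overlap` helpers are invented for the example.

```python
# Illustrative sketch: why light rewording collapses an n-gram overlap score.
# NOT Turnitin's actual method (which is proprietary); just the generic idea
# behind comparing texts via shared runs of consecutive words.

def ngrams(text, n=3):
    """Return the set of word n-grams (runs of n consecutive words) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b, n=3):
    """Fraction of a's n-grams that also appear in b."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga) if ga else 0.0

original = "students can resubmit their work and reduce the similarity score each time"
reworked = "each time the similarity score is reduced as students resubmit their work"

print(overlap(original, original))  # identical text scores 1.0
print(overlap(original, reworked))  # same idea, reordered words: 0.2
```

The two sentences say the same thing, yet only two of ten trigrams survive the reordering. Conceptual similarity is untouched; the surface score collapses.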

> On Mar 4, 2017, at 11:17 PM, Terence Love <[log in to unmask]> wrote:
> 
> It's now more than 5 years since computerised essay marking systems became
> mainstream and more accurate than human essay markers. Giving the AI an
> ability to test for similarity of essays for sale is likely to be
> straightforward.

GS:
Turnitin and such do compare. Your point presumes an essay for sale on the open market to multiple customers (and thus available for comparison). If someone is writing a custom essay for sale, there is nothing to compare it against.

TL:
> An alternative (or addition) is to make the assessment for each student
> unique. This is the main anti-plagiarism method of doctoral assessment.
> Teachers/ lecturers have traditionally avoided this approach to make their
> life easier.  Now there are ways that minimise or remove such educator
> workload.

GS:
Because the purpose of a university is, of course, to minimize the work of faculty.

TL:
> Unfortunately, it doesn't remove the assessment problems of university
> insistence on bell curve outcomes.

GS:
I suppose declaring percentages of student failure in advance makes more sense than fretting over the percentage of repeated words and phrases as a measure of anything (other than the frequency of repeated words and phrases).


I don’t know if some context--who are these people writing stuff to be read by machines, and why are the machines interested in reading it?--would help me understand, but all of this seems to be an effort to automate missing the point. Would larger computers and larger datasets allow us to miss the point more massively?

I confess that I have never used Turnitin or similar software (other than Googling passages of papers) so maybe I’m not understanding something about their value.


Gunnar

Gunnar Swanson
East Carolina University 
graphic design program

http://www.ecu.edu/cs-cfac/soad/graphic/index.cfm
[log in to unmask]

Gunnar Swanson Design Office
1901 East 6th Street
Greenville NC 27858
USA

http://www.gunnarswanson.com
[log in to unmask]
+1 252 258-7006


-----------------------------------------------------------------
PhD-Design mailing list  <[log in to unmask]>
Discussion of PhD studies and related research in Design
Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------