[Apologies for cross-posting]

 

Hi all,

 

It is our pleasure to invite you to the SoLAR webinar "Online and automated exam proctoring: the arguments and the evidence".

 

Time and date: 6pm 27 April 2022 CDT / 9am 28 April 2022 AEST

Location: Zoom (meeting URL provided in the registration email)

 

To register, go to https://www.eventbrite.ca/e/online-and-automated-exam-proctoring-the-arguments-and-the-evidence-registration-293560857427

 

(Also, make sure you follow SoLAR's Eventbrite page to get updates on future events.)

 

We look forward to seeing you at the webinar!

 

Kind regards,

Isabel Hilliger

 

Society for Learning Analytics Research (SoLAR)

https://solaresearch.org/

 

Online and automated exam proctoring: the arguments and the evidence


 

About this event

 

The emergence of online exam proctoring (aka remote invigilation) in higher education may be seen as a function of multiple interacting drivers, including:

 

·       the rise of online learning

·       emergency exam measures required by the pandemic

·       cloud computing and the increasing availability of data for training machine learning classifiers

·       university assessment regimes

·       rising concerns around student cheating

·       accountability pressures from accrediting bodies

 

Commercial proctoring services claiming to automate the detection of potential cheating are among the most complex forms of AI deployed at scale in higher education, requiring various combinations of image, video and keystroke analysis, depending on the service. Moreover, due to the pandemic, they were introduced in great haste at many institutions in order to allow students to graduate, with far less time for informed deliberation than would normally be expected. Consequently, this form of automation has proven highly controversial: protests at some universities have led to withdrawal of the services, and research is beginning to clarify the ethical issues and produce new empirical evidence.

 

Numerous institutions, however, are satisfied that the services they procured met the emergency need and are continuing with them, which would make this one of the ‘new normal’ legacies of the pandemic. Critics ask whether this should become ‘business as usual’. Regardless of one’s views, the rapid introduction of such complex automation merits ongoing critical reflection.

 

SoLAR is therefore delighted to host this panel, which brings together expertise from multiple quarters to explore a range of questions, the arguments around them, and what the evidence is telling us, such as...

·       This is just exams and invigilation in new clothes, right? They’re not perfect, but universities aren’t about to drop them anytime soon, so let’s all get on with it…

·       Are there quite distinct approaches to the delivery of such services that we can now articulate, to help people understand the choices they need to make?

·       What ethical issues do we now recognise that were perhaps poorly understood two years ago, or that we simply couldn’t afford to engage with during the emergency, but which we must address now?

·       What evidence is there about the effectiveness of remote proctoring, whether automated or human-powered, at reducing rates of cheating?

·       What answers are there to the question, “Should we trust the AI?” Are we now over (yet another) AI hype curve, and ready for a reality check on what “human-AI teaming” looks like for online proctoring to function sustainably and ethically?

·       What (new?) alternatives to exams are there for universities to deliver trustworthy verification of student ability, and what are the trade-offs?

·       Who might be better or worse off as a result of the introduction of proctoring?

 

Our panel brings rich experience from the front lines of practice, business and academia:

 

Phillip Dawson is a Professor and the Associate Director of the Centre for Research in Assessment and Digital Learning, Deakin University. Phill researches assessment in higher education, focusing on feedback and cheating, predominantly in digital learning contexts. His 2021 book “Defending Assessment Security in a Digital World” explores how cheating is changing and what educators can do about it.

 

Jarrod Morgan is an inspiring entrepreneur, award-winning business leader, keynote speaker, and chief strategist for the world’s leading online testing company. Jarrod founded ProctorU in 2008 and in 2020 led the company through its merger and evolution into Meazure Learning. In his role as chief strategy officer, he is a frequent speaker for the Online Learning Consortium (OLC), the Association of Test Publishers (ATP), Educause, and many others. He has appeared on PBS and the Today Show, has been covered by The Wall Street Journal and The New York Times, and is a columnist with Fast Company through its Executive Board program.

 

Jeannie Paterson is Professor of Law and Co-Director of the Centre for AI and Digital Ethics, University of Melbourne. She teaches and researches in the fields of consumer protection law, consumer credit and banking law, and AI and the law. Jeannie’s research covers three interrelated themes: the relationship between moral norms, ethical standards and law; protection for consumers experiencing vulnerability; and regulatory design for emerging technologies that are fair, safe, reliable and accountable. She recently co-authored “Good Proctor or ‘Big Brother’? Ethics of Online Exam Supervision Technologies”.

 

Lesley Sefcik is a Senior Lecturer and Academic Integrity Advisor at Curtin University. She provides university-wide teaching and advice, and conducts academic research, in the field of academic integrity. She is a Homeward Bound Fellow and a Senior Fellow of the Higher Education Academy. Dr. Sefcik’s professional background is in Assessment and Quality Learning within the domain of Learning and Teaching. Her current projects include the development, implementation and management of remote invigilation for online assessment, and academic integrity-related programs for students and staff at Curtin. She co-authored “An examination of student user experience (UX) and perceptions of remote invigilation during online assessment”.

 

(Chair) Simon Buckingham Shum is Professor of Learning Informatics and Director of the Connected Intelligence Centre, University of Technology Sydney, where his team researches, deploys and evaluates Learning Analytics/AI-enabled ed-tech tools. He has helped to develop Learning Analytics as an academic field over the last decade and has served two terms as SoLAR Vice-President. His background in ergonomics and human-computer interaction draws his attention to how the human and the technical must be co-designed to create sustainable work practices. He recently coordinated the UTS “EdTech Ethics” Deliberative Democracy Consultation, in which online exam proctoring was one of the examples examined by students and staff.

 


