Dear Mark, thank you for your very helpful thoughts on learning
objectives. I'm working on a paper for Museums and the Web 07 that
tries to deal with these issues. I would be happy to take critical
comments on the draft if you have time to read it.
Regards,
Stephen
-----Original Message-----
From: Museums Computer Group [mailto:[log in to unmask]] On Behalf Of
Mark Elsom-Cook
Sent: 04 December 2006 08:40
To: [log in to unmask]
Subject: Re: Learning design
Evaluating whether your software/website etc. achieves certain learning
objectives is a thorny issue that has been knocking around the
Computer-Based Learning / Instructional Design communities for at least
30 years. There are loads of PhDs about the issues involved.
What it mostly comes down to is:
1) specifying Learning Objectives (and normally, how the software will
satisfy those objectives) is a standard part of good design and should
be applied to any educational software/site
2) Any piece of software will normally 'contribute to' (rather than
completely satisfy) those objectives - particularly if you deal with
large-scale ones like National Curriculum level
3) It is generally difficult to do any meaningful evaluation of
whether the users have achieved certain objectives. This is partly
because there are so many variables to control, and partly because many
of the effects of educational software may only be apparent years later
in different contexts. This is particularly the case if you are coming
from a Constructivist perspective (which I believe most museum
activities must be, almost by definition), where the exploratory nature
increases the variation in what an individual might derive from the
experience
The overall effect of the above is that true educational evaluation
of the software is rarely even attempted and people generally simplify
to one of:
a) specifying objectives so trivial that they are measurable/testable
even though they may be meaningless (e.g. user spends at least 2 minutes
on page 3, user completes exercise 'A', user answers 3 out of 5
questions correctly). This also often goes with a drift towards a more
behaviourist bias in the software design - because it is easier to find
things to measure
b) using the '8 out of 10 owners said their cats prefer it' approach.
This method abandons any hope of doing something meaningful and
evaluates people's opinions of the site instead. The most common
version is to get a number of teachers, let them use the software, and
give them a structured questionnaire asking whether they think
learners would benefit from using it. This has the advantage that it
almost always gives positive results (especially if you ask the right
questions) but tells you almost nothing. It's the same trick that
cosmetics manufacturers use: they sell various creams with '93% of
women asked said they felt a difference' type adverts/studies
precisely because no genuine test would give a positive result
I'm all for evaluating any aspect of software/sites that we can - and
formative stuff during design is preferred - but the bottom line is that
meaningful educational evaluation is a hard problem and most people
finesse it.
Mark Elsom-Cook
Learning Technology Services Ltd
http://www.ltslimited.co.uk
Stephen C Brown wrote:
>Dear colleagues, have you built learning activities into your Museum
>web site? Was it important for you to be able to demonstrate the
>effectiveness of your designs? Did you test the designs during
>development, and/or after they were completed? Are you willing to
>share some of your experiences? I am writing a paper for Museums and
>the Web 2007 that explores how we can predict what learning is likely
>to result from particular web site designs and how important it is to
>do so. My thesis is that we need to formatively test designs during
>development to ensure they deliver what they are supposed to, and this
>applies as much to learning activities as to any other part of the
>site. We cannot test if we don't know what the design is intended to
>achieve. So if I'm right, then specifying learning outcomes is
>essential when we are designing websites that encourage and support
>learning. But I could be wrong, which is why I'm keen to hear from you.
>
>To make it easier for you to reply there is a small questionnaire at
>http://146.227.82.82:8080/opinio/s?s=102 It's only 10 questions and
>most of them have multiple choice answers.
>
>Many thanks for your help, Stephen
>
>Professor Stephen Brown
>Director, Knowledge Media Design
>De Montfort University
>Portland 2.3a
>The Gateway
>Leicester LE1 9BH
>UK
>
>Tel +44 (0)116 257 7173
>Fax +44(0) 116 250 6101
>mob +44 (0)7989 948230
>http://kmd.dmu.ac.uk
>
>Director, Aria
>http://aria.dmu.ac.uk
>
>
>
>
>**************************************************
>For mcg information and to manage your subscription to the list, visit
>the website at http://www.museumscomputergroup.org.uk
>**************************************************
--
Mark Elsom-Cook
[log in to unmask]