On 17/11/10 11:01, K Fearon wrote:
> This is a timely discussion as we're currently looking at how we might
> measure the success of the student portal we're about to start
> developing, with the intention of collecting comparison data before we
> start the project to compare with post-launch results. We're looking at
> quality and satisfaction measures rather than value for money so it
> would be really useful to consider how to measure VFM. (Any pointers to
> resources, however basic, would be really helpful here.)
It is tempting to measure what is easiest to measure, rather than
starting from your goals, or, even better, a theory of what is good.
Back in 1998 I went to a meeting in Milan on evaluating computer
supported co-operative work. Someone from the MITRE Corporation had
counted every number they could find in their systems, and came to the
meeting asking "what can I do with these numbers?". Not surprisingly, we
couldn't answer that. Instead, I showed how, starting from a theory
(Garrison's Theory of Critical Thinking), it is possible to design
measurement techniques that allow us to compare the effects on
learning of face-to-face and on-line discussions (see Newman, D. R.,
Johnson, C., Webb, B. and Cochrane, C. (1997), Evaluating the quality of
learning in computer supported co-operative learning. Journal of the
American Society for Information Science, 48: 484–495).
Now for each project, it is not too hard to imagine an ideal outcome.
For example, what sort of change in student behaviour might result from
a student portal? How well would it help students in particular
situations (situational relevance)? How much are they hindered by the
design? Then look for theories and/or research techniques that have
attempted to evaluate these desired outcomes, changes and processes.
Dave Newman