Hello
I will probably regret this hasty response, especially given that the 
main reason my family can eat is a tiny, tiny portion of those millions 
spent on evaluation. (I'd be surprised if it was much more than a 
million in total each year for the museum sector, though. Most 
contracts for evaluation are between £1k and £10k, so it may even be 
much less.)

I think there is much room for improvement in the way the sector 
approaches evaluation. One place to start is to revamp the old MLA 
Generic Learning Outcomes and Generic Social Outcomes into a much more 
sophisticated-but-simple tool based on the Theory of Change, one that 
integrates learning and social outcomes and incorporates digital.

However, I don't think this is the main requirement. Evaluations make 
less impact than they should because of the nature of project funding: 
the closed endpoint and the fixed outcomes.

The closed endpoint: What is the point of a summative evaluation if that 
really is the end and there is nobody in post to enact any 
recommendations for change? The main problem is that the summative 
evaluation has to be done early enough for some actions to be taken, 
yet that is too early to prove any kind of medium- to long-term impact. 
I'm keen, as an evaluator, to act as a critical friend who helps 
institutions increase the impact and longevity of their initiatives 
from the start, throughout, and beyond the conclusion of the initiating 
phase. Mostly the briefs don't allow me to work like this, usually 
because the funder requires the evaluation report to prove that the 
funds were spent as planned.

The fixed outcomes: Funders require projects to state too rigidly what 
the outcomes will be. Evaluation processes can be designed to pick up 
on unpredicted outcomes, but these aren't seen as very important, or 
there is no mechanism in the project for building on them.

I hope this is helpful testimony!
Best wishes
Bridget




On 06/06/2012 12:51, Mia wrote:
> There's an interesting post called 'Why evaluation doesn't measure up'
> on the Museums Association site
> http://www.museumsassociation.org/museums-journal/comment/01062012-why-evaluation-doesnt-measure-up
> or http://bit.ly/L9FlQz where they say:
>
> "No one seems to have done the sums, but UK museums probably spend
> millions on evaluation each year. Given that, it’s disappointing how
> little impact evaluation appears to have, even within the institution
> that commissioned it."
>
> and:
>
> "Summative evaluations are expected to achieve the impossible: to help
> museums learn from failure, while proving the project met all its
> objectives. Is it time to rethink how the sector approaches
> evaluation?"
>
> I'm curious to know what others think.  Are they right?  Or are they
> missing something?
>
> Cheers, Mia
>
> --------------------------------------------
> http://openobjects.org.uk/
> http://twitter.com/mia_out
>

****************************************************************
       website:  http://museumscomputergroup.org.uk/
       Twitter:  http://www.twitter.com/ukmcg
      Facebook:  http://www.facebook.com/museumscomputergroup
 [un]subscribe:  http://museumscomputergroup.org.uk/email-list/
****************************************************************