I'd love to be able to comment on that post, but the MA seem to miss the point of providing easy-to-access community activity (see also: job listings behind a password)… Anyway:
I think Bridget's point about funding is the key one, and I make it often. Cultural heritage funding is almost always geared towards a big up-front capital spend and then, following launch, no money whatsoever. Marketing (or, more usually, the lack of it) fits into this too.
IMO the most effective digital projects are all about making incremental, audience-focused, "learn as you go" decisions. Almost no-one knows what they want until they've launched it and seen how audiences react. Pre-launch user testing gets you some way along this line, but projects are never perfect on the first go.
I don't know how or whether we can solve this, but ideally there would be a capital budget, then a project iteration budget, as well as an operating budget. Oh, and a gin and tonic budget too...
In related news:
1) I'm always slightly worried about the clients who want Google Analytics training but then tell me they have absolutely no resource to actually DO anything with all that data…
2) Google Analytics now does Content Experiments (http://bit.ly/NhcsXm) - there's a rough sketch of how it works after this list
3) If you haven't seen this video of the Basecamp homepage being iterated, it's ace: http://bit.ly/LzR9fD
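
For anyone who hasn't poked at Content Experiments yet: it's A/B testing of page variants, driven by a small client-side script. Here's a minimal sketch in TypeScript of what the redirect logic looks like - the experiment ID and the variant URLs below are placeholders I've made up, so check Google's docs for the current snippet:

    // Google's client library is loaded first, roughly like this:
    // <script src="//www.google-analytics.com/cx/api.js?experiment=YOUR_EXPERIMENT_ID"></script>

    // cxApi is provided globally by that script; declared here so TypeScript compiles.
    declare const cxApi: { chooseVariation(): number };

    // Ask GA which variation this visitor should see (0 means the original page).
    const variation: number = cxApi.chooseVariation();

    // Hypothetical variant URLs - substitute your own pages.
    const variantUrls: string[] = ["/home", "/home-variant-b"];
    if (variation > 0 && variantUrls[variation]) {
      window.location.replace(variantUrls[variation]);
    }

The tooling for cheap iteration is there, in other words; what's usually missing (per the above) is the budget and staff time to act on what it tells you.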
Yours, iteratively
Mike
On Wednesday, 6 June 2012 at 13:36, Bridget McKenzie wrote:
> Hello
> I will probably regret this hasty response, especially given that the
> main reason my family can eat is a tiny tiny portion of those millions
> spent on evaluation. (I'd be surprised if it was much more than a
> million in total each year for the museum sector, though. Most contracts
> for evaluation are between £1k and £10k, so it may even be much less.)
>
> I think there is much room for improvement in the way the sector
> approaches evaluation. One place to start is to revamp the old MLA
> Generic Learning Outcomes and Generic Social Outcomes into a much more
> sophisticated-but-simple tool based on the Theory of Change which
> integrates learning and social outcomes, and incorporates digital.
>
> However, I don't think this is the main requirement. Evaluations make
> less impact than they should because of the nature of project funding:
> the closed endpoint and the fixed outcomes.
>
> The closed endpoint: What is the point of a summative evaluation if that
> really is the end and there is nobody in post to enact any
> recommendations for change? The main problem is that the summative
> evaluation has to be done within a window of time that allows some
> actions to be taken, yet is too early to prove any kind of medium- to
> long-term impact. I'm keen, as an evaluator, to act as a critical friend
> who helps institutions increase the impact and longevity of their
> initiatives from the start, throughout, and beyond the conclusion of the
> initiating phase. Mostly the briefs don't allow me to act like this, and
> this is usually because the evaluation report is required by the funder
> to prove that the funds were spent as planned.
>
> The fixed outcomes: Funders require projects to state in too fixed a way
> what the outcomes will be. Evaluation processes can be designed to pick
> up on unpredicted outcomes but they aren't seen as very important, or
> there is no mechanism in the project for those unpredicted outcomes to
> be built on.
>
> I hope this is helpful testimony!
> Best wishes
> Bridget
>
>
>
>
> On 06/06/2012 12:51, Mia wrote:
> > There's an interesting post called 'Why evaluation doesn't measure up'
> > on the Museums Association site
> > http://www.museumsassociation.org/museums-journal/comment/01062012-why-evaluation-doesnt-measure-up
> > or http://bit.ly/L9FlQz where they say:
> >
> > "No one seems to have done the sums, but UK museums probably spend
> > millions on evaluation each year. Given that, it’s disappointing how
> > little impact evaluation appears to have, even within the institution
> > that commissioned it."
> >
> > and:
> >
> > "Summative evaluations are expected to achieve the impossible: to help
> > museums learn from failure, while proving the project met all its
> > objectives. Is it time to rethink how the sector approaches
> > evaluation?"
> >
> > I'm curious to know what others think. Are they right? Or are they
> > missing something?
> >
> > Cheers, Mia
> >
> > --------------------------------------------
> > http://openobjects.org.uk/
> > http://twitter.com/mia_out
> >
****************************************************************
website: http://museumscomputergroup.org.uk/
Twitter: http://www.twitter.com/ukmcg
Facebook: http://www.facebook.com/museumscomputergroup
[un]subscribe: http://museumscomputergroup.org.uk/email-list/
****************************************************************