Hi all,
This is a really important thread, and many thanks for the information and references so far.
It seems like an opportune moment, in light of Mia's comments about big aggregators, to mention the Europeana Inside project, which the Collections Trust is coordinating in partnership with 12 of the main Collections Management System providers. For those of you who are interested, the project website is http://www.europeana-inside.eu and there's a short video setting out the aims of the project at http://www.europeana-inside.eu/about/index.html.
The aim of Europeana Inside is to reduce or remove the barriers to opening up collections by part-automating the workflow of delivering collections metadata to third parties directly from collections systems. The particular barriers we aim to address include:
- Mapping - the complexity of mapping and re-mapping information structures and schemata to meet the requirements of different channels and end-points
- Selection - the selection and management of which parts of your collections metadata you want to share, and which you don't
- Licensing - the wrapping of collections metadata in appropriate licensing terms for its intended destination (Europeana in the first instance, but our aim is to make the functionality modular and adaptable to different channels and destinations)
- Round-tripping - investigating the challenges associated with bringing enhanced metadata back into local systems from third-party platforms such as Europeana
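To illustrate the kind of mapping overhead the first two barriers describe, here's a minimal sketch (not the project's actual implementation - the field names and mapping are invented for illustration) of a declarative field map from a local record structure to Dublin Core-style elements, where the selection of what to share falls out of what the map includes:

```python
# Hypothetical sketch of schema mapping for a single destination channel.
# Field names are invented; real systems map between SPECTRUM, LIDO, EDM, etc.
# Keeping one mapping table per end-point means re-mapping for a new
# destination is a data change, not a code change.
DC_MAP = {
    "object_name": "dc:title",
    "brief_description": "dc:description",
    "production_date": "dc:date",
    "maker": "dc:creator",
}

def map_record(local_record, field_map):
    """Translate a local record dict into target-schema elements,
    silently omitting any local field the destination does not want."""
    return {target: local_record[source]
            for source, target in field_map.items()
            if source in local_record}

record = {"object_name": "Tea bowl", "maker": "Unknown",
          "internal_valuation": "5000"}  # not in the map, so never shared
print(map_record(record, DC_MAP))
# {'dc:title': 'Tea bowl', 'dc:creator': 'Unknown'}
```

Note that selection here is implicit in the mapping table: anything not listed (like the internal valuation) simply never leaves the local system.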
Critically, what we're trying to create with Europeana Inside is not a thing (although there will be an open-source implementation for people to experiment with) but an end-user experience that integrates the selection, management and sharing of metadata into the same context as the management, documentation and interpretation of the collections. Hence, a key outcome is that most of the functionality will be embedded into whatever system you are using to manage your collection as an integrated part of the same workflow.
As we know from Z39.50 through OAI-PMH and into the brave new world of the API, people can use the same sausage machine to create very different sausages. I have more or less given up on getting people to standardise the structure, the semantics or the identifiers (because of the sheer complexity of the material, information, knowledge and formats our sector deals with), so the problem we're seeking to address is minimising the overhead and maximising the repeatability of the mapping and distribution part of the supply chain.
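As a concrete example of one such sausage machine: the OAI-PMH envelope and oai_dc namespaces below are as defined by the protocol, but the record content is invented, and this is only a sketch of the consuming end of the pipeline:

```python
# Minimal sketch: extracting Dublin Core fields from an OAI-PMH
# ListRecords response. The sample record is invented; the OAI-PMH
# and oai_dc namespace URIs are the standard ones from the protocol.
import xml.etree.ElementTree as ET

SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Tea bowl</dc:title>
          <dc:creator>Unknown</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
      "dc": "http://purl.org/dc/elements/1.1/"}

def titles(xml_text):
    """Pull every dc:title out of a harvested response."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.findall(".//dc:title", NS)]

print(titles(SAMPLE))
# ['Tea bowl']
```

The harvesting side is standardised; it's what each supplier puts inside the metadata element (and how consistently) that makes the sausages differ.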
If anyone is interested in finding out more about Europeana Inside, or how you can get involved, you can register for the newsletter at http://www.europeana-inside.eu/newsletter/index.html.
I hope this is a useful contribution. If you're interested in my view on the other main themes in Collections Management over the past year (and looking ahead to 2013), I've done a round-up of the year at http://www.collectionslink.org.uk/blog/1575-2012-the-year-that-was.
All best, and have a great Christmas.
Nick
-----Original Message-----
From: Museums Computer Group [mailto:[log in to unmask]] On Behalf Of Mia
Sent: 20 December 2012 13:35
To: [log in to unmask]
Subject: Re: Low cost collections management solutions
I think a critical mass is more likely to occur around standards used by big aggregators like Europeana (EDM) or the Digital Public Library of America (DPLA) (an extension of Dublin Core?), though of course aggregation standards are often set up for the lowest common denominator in a set of institutions sharing data, so they're not always suitable for fine-grained use.
For further background, there's some discussion at MW2011 on using OpenSearch for exchanging data between collection management systems, recorded at http://museum-api.pbworks.com/w/page/38978974/MW2011%20Opensearch%20unconference%20session. One session involved senior museum staff and representatives of a number of vendors; the other was a general unconference session.
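For readers unfamiliar with OpenSearch, its core convention is a URL template that clients fill in - `{searchTerms}` is required, and parameters marked with `?` are optional. A minimal sketch (the endpoint URL here is invented):

```python
# Sketch of the OpenSearch URL-template convention: a search endpoint
# advertises a template, and clients substitute the placeholders.
# {searchTerms} is the query; {startPage?} is optional per the spec.
# The example-museum.org endpoint is hypothetical.
import urllib.parse

TEMPLATE = ("http://example-museum.org/search?"
            "q={searchTerms}&page={startPage?}&format=rss")

def fill_template(template, search_terms, start_page=1):
    """Substitute placeholders, percent-encoding the query terms."""
    url = template.replace("{searchTerms}",
                           urllib.parse.quote(search_terms))
    return url.replace("{startPage?}", str(start_page))

print(fill_template(TEMPLATE, "tea bowl"))
# http://example-museum.org/search?q=tea%20bowl&page=1&format=rss
```

The appeal for exchanging data between collection management systems is that a client only needs the template, not per-vendor API knowledge.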
More background on the DPLA's implementation plans:
http://dp.la/dev/wiki/Technical_Overview#Schemas_and_Metadata and DPLA Metadata Schema v2
https://docs.google.com/document/d/1naSx3dsKkFDefX-rs0nU1GZGSlihniDwCNu-HjK65MM/edit?pli=1
The work Richard mentions around persistent URLs and published term lists
is a real sign of progress. (Is there a simultaneous conversation with
Documentation managers around implementing these shared URLs for controlled
vocabularies?)
And to extend Richard's list of reference URLs, a bit of a glossary for this acronym soup to help the ordinary reader follow along...
Europeana Data Model (EDM) http://joinup.ec.europa.eu/asset/edm/description
Lightweight Information Describing Objects (LIDO): What is LIDO?
http://network.icom.museum/cidoc/working-groups/data-harvesting-and-interchange/what-is-lido/
And I assume MUA is the Modes User Group!
http://www.modes.org.uk/about-modes/
Cheers, Mia
--------------------------------------------
http://openobjects.org.uk/
http://twitter.com/mia_out
On 20 December 2012 09:22, James Grimster <[log in to unmask]> wrote:
> Thanks, Mia
>
> Is it possible to get a critical mass around a standards-based
> interchange? A different API for each CollMS is fine as long as your
> audience is connecting to a single source. What happens when you've
> got multiple CollMSs, or multiple data sources like archives and
> archaeology in the mix?
> With a federated approach, combining relevancy scoring across searches
> is challenging - ref Ade's presentation on JISC WW1 at MCG.
>
> So surely future-proofing middleware must sit in the, er, middle of
> all this?
> Talking to Paul from Vernon/e-Hive/DigitalNZ: there are lots of
> Solr-based indexers with various, but similar, RSS-style search
> responses as APIs on top (as per Europeana).
> If the 'common' interchange in the UK were, for example, Collections
> Trust's SPECTRUM XML interchange ....
>
> --
> James Grimster
> www.orangeleaf.com
>
>
> On 19 Dec 2012, at 23:43, Mia wrote:
>
> > What a great thread! I'd agree with what Nick Poole and James
> > Grimster said above, and...
> >
> > On 30 August 2012 11:51, Richard Light <[log in to unmask]>
> wrote:
> >
> >>
> >> We probably need to give more thought to engineering the pipework
> >> through which our information flows. It probably won't be too long
> >> before a typical cultural heritage institution is storing its core
> >> information in three or more places (collections management system,
> >> image management system, blog/UGC/social media repository), and
> >> needing to meld and deliver that information to a variety of
> >> platforms and audiences. Writing each interface by hand simply
> >> won't scale.
> >>
> >>
> > I think I've spent too long away from writing code, because making
> > that actually sounds like it'd be fun (as well as useful). The trick
> > would be making it enough like core business - perhaps as part of a
> > digital preservation or collecting strategy - to justify the
> > resources. Each institution has different needs, but there are
> > already a number of WordPress plugins (for example) that deal with
> > collections APIs or repositories, so you might get a critical mass
> > of tools around Nick's first 'middleware' option. The middleware
> > option also seems slightly more future-proof and realistic than a
> > grand unified system that does everything, but then I'm probably
> > biased by the Unix philosophy of writing programs that do one thing
> > and do it well, or the more recent 'do the simplest thing possible'.
> >
> > Peigi - this thread started quite a while ago, but if you missed the
> > start, the archives are available via the JISCMail site. Nick's post
> > at http://bit.ly/SVbjcm sums up why you'd want to use a specialist
> > Collections Management System (as you are); tools like WordPress are
> > just a way of creating a user-friendly public-facing interface to
> > them (to build on what Mike said).
> >
> > Tehmina - Neatline is very cool but still very beta (though there's
> > a new version coming out soon). As it's based on Omeka,
> > documentation/library science experience really helps people get
> > their heads around the underlying record model. I've got an instance
> > set up that I'd be happy to send logins for if you (or anyone else)
> > wants to try it out beyond the sandboxes Omeka and Neatline provide.
> > I've also written something on my experiences teaching Neatline at
> > http://openobjects.blogspot.co.uk/2012/11/reflections-on-teaching-neatline.html
> >
> > Cheers, Mia
> >
> > ****************************************************************
> > website: http://museumscomputergroup.org.uk/
> > Twitter: http://www.twitter.com/ukmcg
> > Facebook: http://www.facebook.com/museumscomputergroup
> > [un]subscribe: http://museumscomputergroup.org.uk/email-list/
> > ****************************************************************
>