Just some examples, in no particular order, that I’m aware of and that come immediately to mind (from a library background):

There are a couple of experiments/projects using images from the Google Cultural Institute - see https://artsexperiments.withgoogle.com/#/introduction

Ben O’Steen at the BL wrote the Mechanical Curator, which extracted images from digitised books and found some degree of similarity between those images: http://blogs.bl.uk/digital-scholarship/2013/10/peeking-behind-the-curtain-of-the-mechanical-curator.html
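
For anyone curious how that kind of image similarity can be done, here’s a minimal sketch using a perceptual ‘difference hash’ in Python - purely illustrative, and not necessarily the approach the Mechanical Curator itself took (the filenames are made up):

    from PIL import Image

    def dhash(path, size=8):
        # Shrink to a (size+1) x size greyscale image, then record whether
        # each pixel is brighter than its right-hand neighbour.
        img = Image.open(path).convert("L").resize((size + 1, size))
        px = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = px[row * (size + 1) + col]
                right = px[row * (size + 1) + col + 1]
                bits = (bits << 1) | (left > right)
        return bits

    def hamming(a, b):
        # Count differing bits; a small distance suggests similar images.
        return bin(a ^ b).count("1")

    # e.g. hamming(dhash("scan_a.png"), dhash("scan_b.png")) < 10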

Mario Klingemann (who is also involved in some of the Google Cultural Institute work) has worked on grouping (for the purposes of classification) images from the British Library’s one-million-image Flickr data set. You can see him present on this at https://www.youtube.com/watch?v=6wglRwBbg48; for the highlights, there are notes on the presentation by Dr James Baker at https://gist.github.com/drjwbaker/b85ba5d85c95eb040ed3. His work focussed on finding ‘similar’ images, which could then be manually tagged.
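
As a rough illustration of grouping images so that clusters can be hand-tagged, here’s a hedged sketch using k-means over crude thumbnail features with scikit-learn - Klingemann’s actual features and method may well differ, and the filenames are hypothetical:

    import numpy as np
    from PIL import Image
    from sklearn.cluster import KMeans

    def features(path, size=16):
        # Flatten a 16x16 greyscale thumbnail into a 256-dimension vector.
        img = Image.open(path).convert("L").resize((size, size))
        return np.asarray(img, dtype=float).ravel() / 255.0

    paths = ["img1.jpg", "img2.jpg", "img3.jpg", "img4.jpg"]  # hypothetical
    X = np.stack([features(p) for p in paths])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    # Images sharing a label form one candidate group for manual tagging.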

There was another project on automated tagging of the BL images, although a bit more experimental (a student project demonstrating the use of convolutional neural networks): SherlockNet, https://github.com/ludazhao/SherlockNet
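
For a flavour of how CNN-based tagging works, here’s a small sketch that scores an image against a pretrained network’s label set using PyTorch/torchvision. SherlockNet trained its own network on the BL images, so this is just the general shape of the idea (the filename is made up):

    import torch
    from PIL import Image
    from torchvision import models, transforms

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()
    prep = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    img = prep(Image.open("bl_flickr_image.jpg").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)
    values, indices = probs.topk(5)  # top 5 labels from the pretrained set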

The Digital Music Lab project did analysis of music recordings and also created some linked data from the related metadata (although I’m not sure that’s still live): http://dml.city.ac.uk/explore-the-dml-web-interface/
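
On the linked data side, the general pattern of turning string-valued metadata into RDF looks something like this sketch with rdflib and the Music Ontology - the URIs and title are hypothetical, and the DML project’s actual data model may differ:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DC, RDF

    g = Graph()
    MO = Namespace("http://purl.org/ontology/mo/")  # Music Ontology
    rec = URIRef("http://example.org/recording/1")  # hypothetical URI
    g.add((rec, RDF.type, MO.Signal))
    g.add((rec, DC.title, Literal("Example recording")))
    print(g.serialize(format="turtle"))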

This article from a project based at UCL covers a couple of examples of extracting data from digitised books and their metadata: https://academic.oup.com/dsh/article/doi/10.1093/llc/fqx020/3789810/Enabling-complex-analysis-of-large-scale-digital

The Bodleian Ballads ImageBrowse software is designed for researching ballad illustrations: it allows browsing of images automatically extracted from the printed text, and finding reuse of the same woodblock for illustration across publications.
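
Detecting reuse of the same woodblock is essentially an image-matching problem. As a generic illustration (not necessarily what ImageBrowse actually does), one could count strong local-feature matches between two extracted illustrations with OpenCV’s ORB:

    import cv2

    def woodblock_match_score(path_a, path_b):
        # Count strong feature matches between two illustrations; a high
        # count suggests the same woodblock may have been reused.
        orb = cv2.ORB_create()
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
        _, des_a = orb.detectAndCompute(img_a, None)
        _, des_b = orb.detectAndCompute(img_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_a, des_b)
        return sum(1 for m in matches if m.distance < 40)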


 
Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: [log in to unmask]
Telephone: 0121 288 6936

> On 8 Sep 2017, at 10:39, Richard Light <[log in to unmask]> wrote:
> 
> Hi,
> 
> Can anyone point me to working examples where information is automatically extracted (or enhanced) from cultural heritage-related resources?  This might be feature recognition in images; picking out named entities from full text; converting string-value structured data to Linked Data URLs; voice recognition for audio/video; etc.
> 
> Thanks,
> 
> Richard
> -- 
> Richard Light


****************************************************************
       website:  http://museumscomputergroup.org.uk/
       Twitter:  http://www.twitter.com/ukmcg
      Facebook:  http://www.facebook.com/museumscomputergroup
 [un]subscribe:  http://museumscomputergroup.org.uk/email-list/
****************************************************************