***Apologies for cross-posting***

Dear all,

Hope someone can help with this!

As well as articles, conference proceedings, etc., we are also currently hosting individual blog entries on our repository.  For an example, here are the entries for our American Politics and Policy blog: http://eprints.lse.ac.uk/view/sets/coll_USAPP_Blog.html

The current process is fairly ad hoc, with us relying on blog authors and editors sending us PDFs of blog entries.  Obviously this can be hit and miss, depending on whether the blog has a full-time editor (some do).  We then add metadata and host the PDF along with a URL to the original post. This does generate traffic back to the blog itself, which is nice.  The metadata stage can take a while as we publish a large number of blogs at LSE.  We haven't set up any RSS feeds or anything like that to collect the blogs in a proactive/automated way.
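For context on question 2 below, the sort of automation we have in mind would look something like the sketch that follows: a short Python script (not something we currently run) that reads a blog's RSS/Atom feed with the feedparser library and maps each entry to the fields we currently key in by hand. The feed URL and field names are purely illustrative.

import feedparser

FEED_URL = "https://blogs.example.ac.uk/usapp/feed/"  # hypothetical feed address

def harvest(feed_url):
    # Pull the feed and turn each post into a simple metadata record
    feed = feedparser.parse(feed_url)
    records = []
    for entry in feed.entries:
        records.append({
            "title": entry.get("title", ""),
            "creators": entry.get("author", ""),
            "date": entry.get("published", entry.get("updated", "")),
            "official_url": entry.get("link", ""),  # link back to the original post
            "abstract": entry.get("summary", ""),
        })
    return records

if __name__ == "__main__":
    for record in harvest(FEED_URL):
        print(record["date"], "-", record["title"])

The deposit step itself (e.g. via SWORD into the repository) is deliberately left out, as that is exactly the part we would like to hear how others handle.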

So, our question is threefold:


1. Is anyone else collecting blog content on their repositories?

2. If so, are you collecting automatically through a feed to ensure you're getting everything?

3. If the answer to 2 is 'yes', do you also automate any part of the metadata process to save time?

Basically, we'd love to know if anyone else out there is collecting blog content, and how you do it.

Any information gratefully received!

Best wishes,

Nancy


Nancy Graham
Research Support and Academic Liaison Manager
Academic Services Group |Library Services
London School of Economics and Political Science
10 Portugal Street, London WC2A 2HD
Tel: 020 7955 7946 | Email: [log in to unmask]

Find out about our NEW Research Data Management Service at http://www.lse.ac.uk/library/usingTheLibrary/academicSupport/RDM/home.aspx