Hello All

Is there any mileage in a discussion, sharing of experience, or other 
action about Key Performance Indicators (KPIs)? These will probably vary 
somewhat in detail from one institution to another, but I wonder if 
there are some general principles, or examples of good practice, that 
could be shared. I'm not sure how many sites are already producing, or 
thinking of producing, KPIs.

Regards
Marion Tattersall
Linda Butt said the following on 03/07/2009 08:01:
>
> Hello Cliff,
>
>  
>
> Many thanks for giving such a clear explanation of the recent 
> discussions. I am one of the new party goers, and had indeed been 
> getting all the right notes, but not necessarily in the right order...
>
> I guess that in the end it is up to all of us, as consumers, to 
> "persuade" our suppliers to provide statistics in the format which we 
> require. I wonder whether this might be a theme for next year's UKSG? I 
> know that this subject has been covered many times, and am very well 
> aware of the excellent work done by Peter Shepherd and the rest of you 
> who have pioneered and taken it forward. However, if we all start 
> pushing in the same direction at the same time, the mountain might 
> just move.
>
> For those of you who remember the inimitable Wolfie Smith and the 
> Tooting Popular Front, "Power to the People"
>
>  
>
> Regards
>
>  
>
> Linda
>
>  
>
>  
>
> *Linda Butt*
>
> *Senior Assistant Librarian (Acquisitions and Archives)*
>
> Kimberlin Library, De Montfort University, The Gateway, Leicester, LE1 
> 9BH
>
> 0116 2506392   [log in to unmask] <mailto:[log in to unmask]>
>
>  
>
>  
>
>  
>
> *From:* [log in to unmask] 
> [mailto:[log in to unmask]] *On Behalf Of *Cliff Spencer
> *Sent:* 02 July 2009 17:38
> *To:* [log in to unmask]
> *Subject:* [lib-stats] Standards and data quality
>
>  
>
> I've had many emails from new members of this list asking if I/We 
> could clarify some of the recent issues, because they have no idea 
> what's going on. You folks are right to complain of course, nothing 
> more annoying than being invited to the party and then playing the 
> wrong music, so here goes, and apologies to the veterans who might 
> want to hit the delete button now.
>
>  
>
>  
>
> Thanks to Peter Shepherd for the heads-up on the latest SUSHI. My last 
> word for now (unless I actually get it to work!)
>
>  
>
> Most of us still manually download stats from publishers' web sites on 
> a monthly, quarterly, or yearly basis using an admin login and 
> password. You may have to do this for many vendors (I have about 70 
> different logins). This is boring, time-consuming and costly. The 
> SUSHI tool will automate this: a "client" program (called a harvester), 
> which sits on a machine at work, pulls the data for you. This can 
> be any old machine -- it doesn't have to be a server -- it just needs 
> internet access. I've found it best to have a dedicated machine (my 
> old laptop) because there is lots of fiddling which could screw up 
> your other work! Once set up, you can schedule the downloads (overnight 
> is best because some are big files) to go into your in-house holding 
> service -- Excel, an SQL database or whatever -- ready for analysis. If 
> you have an ERM then SUSHI might be built into it, saving lots of work. 
> The stats are in what's called XML format, meaning they are a lot more 
> flexible for re-use than, say, the more common .csv format. XML files 
> will be available to download even if you aren't using SUSHI, so you 
> might want to check this out.
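> As a very rough sketch of why the XML reports are easier to re-use, here 
> is how you might pull title/month/count values out of a COUNTER-style 
> report using nothing but Python's standard library. (The element and 
> attribute names below are simplified illustrations, not the exact 
> COUNTER schema.)

```python
# Minimal sketch: extracting usage rows from a COUNTER-style XML report.
# The element and attribute names are simplified for illustration; the
# real COUNTER schema is more verbose.
import xml.etree.ElementTree as ET

report = """
<Report>
  <Journal title="Journal of Examples" issn="1234-5678">
    <Usage month="2009-01" count="42"/>
    <Usage month="2009-02" count="37"/>
  </Journal>
</Report>
"""

root = ET.fromstring(report)
rows = []
for journal in root.findall("Journal"):
    for usage in journal.findall("Usage"):
        rows.append((journal.get("issn"),
                     usage.get("month"),
                     int(usage.get("count"))))

print(rows)
```

> Because each value is tagged, you don't have to guess which column is 
> which, as you would with a .csv export.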
>
>  
>
> OK -- this will only work IF all the standards are observed by all 
> parties. There is a whole standard devoted to SUSHI, called an XML 
> Schema, which is really for techies. Project COUNTER 
> <http://www.projectcounter.org/> sets the rules (none of what we do 
> would have much meaning without COUNTER): e.g. Journal Report 1 (R2), 
> relating to titles and months, must be structured exactly right for 
> SUSHI to work.
>
> Michiel (below) is saying that during testing there was a problem 
> matching up the number of titles in the "standard" manual report with 
> those in the SUSHI automated service (weird, since both are based on the 
> same data, although held in a different file format) -- so which data 
> set is correct, you might ask?
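> A quick way to see where two versions of the "same" report disagree is 
> to compare the sets of ISSNs each one contains -- a hypothetical sketch, 
> with invented data:

```python
# Sanity check between a manually downloaded report and a SUSHI-harvested
# one: compare the sets of ISSNs each contains. The ISSNs are invented.
manual = {"1234-5678", "2345-6789", "3456-7890"}
sushi = {"1234-5678", "2345-6789"}

missing_from_sushi = manual - sushi   # titles only in the manual report
extra_in_sushi = sushi - manual       # titles only in the SUSHI report

print(sorted(missing_from_sushi))
```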
>
>  
>
> This anomaly sounds annoyingly familiar to many of us.
>
> You may well have to combine data from publishers, aggregators, and 
> subscription agents to get a true reflection of usage (SUSHI can't 
> help here), but these stakeholders often report different numbers of 
> titles (not to mention variations in spelling and notation!). This 
> makes automating the process of combining data quite tricky, since 
> computers can't (yet) rationalise. It's best to use the ISSN field 
> (instead of "title") when combining, but this has limitations if you 
> are using Excel. More on this if needed.
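> As a rough sketch of the ISSN-matching idea (invented data -- note the 
> two sources spell the title differently, but the ISSN still lines up):

```python
# Combining download counts from two sources keyed on ISSN rather than
# title. The figures and ISSN are invented for illustration.
publisher = {"1234-5678": {"title": "J. of Examples", "downloads": 120}}
aggregator = {"1234-5678": {"title": "Journal of Examples", "downloads": 35}}

combined = {}
for source in (publisher, aggregator):
    for issn, row in source.items():
        # Matching on ISSN sidesteps the title-spelling problem entirely.
        combined[issn] = combined.get(issn, 0) + row["downloads"]

print(combined)
```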
>
>  
>
> I think Judy's recent mail pretty much highlights what happens when 
> standards get sloppy -- especially taxing if you are uploading into a 
> third party collection service, when you really don't want to do any 
> more work on the original files. Ironically, those of us (the majority) 
> who are still using manual systems are not as inconvenienced, as we 
> don't have strict validation issues, but the first one here is a real 
> pain for all.
>
>  
>
> *"Atypon has all the Counter reports in one spreadsheet, so I have to 
> go in and remove all of them that I don't want so it will load and 
> validate properly at Serials Solutions."*
>
>  
>
> Atypon are certainly aware of this, but it is unclear whether this 
> violates the COUNTER code. Is this allowed, Peter -- JR1, JR1a, JR3 
> and so on, all in one file? (I can't see how this would ever work for 
> SUSHI.)
>
>  
>
> Lots of questions also about the real point of all this -- reporting 
> and analysing the stats -- i.e. how can descriptive stats show your 
> successes and failures, and how do they relate to, and impact on, the 
> KPIs? More soon!
>
>  
>
> Cliff.
>
>  
>
> ===
>
>  
>
>
> *From:* [log in to unmask] 
> [mailto:[log in to unmask]] *On Behalf Of *Michiel Tibboel
> *Sent:* 01 July 2009 15:20
> *To:* [log in to unmask]
> *Subject:* RE: [lib-stats] SUSHI
>
>  
>
> Hi Annette,
>
>  
>
> We found that the total number of titles in the downloaded usage 
> reports and the usage reports collected through SUSHI (for the same 
> platform) did not always match. We have addressed this with the 
> publishers during testing of the SUSHI webservice to ensure that these 
> incidents were fixed and we continue to monitor this. Our team always 
> checks the data that we collect for our customers to ensure we are 
> looking at the correct data.
>
>  
>
> thanks
>
> Michiel
>
>  
>
> *From:* [log in to unmask] [mailto:[log in to unmask]] *On 
> Behalf Of *Annette Bailey
> *Sent:* Wednesday, July 01, 2009 3:53 PM
> *To:* [log in to unmask]
> *Subject:* Re: [lib-stats] SUSHI
>
> Michiel Tibboel wrote:
>
> "Our team has compared the usage reports retrieved via SUSHI 
> webservice with the usage reports available at the publisher 
> websites. They found that the usage statistics/ publications did not 
> always match, so our team is monitoring this for our customers and 
> working closely with publishers."
>
> Could you please elaborate on what you mean by the usage statistics 
> not matching?
>
> Thank you,
> Annette
> -- 
> Annette Bailey
> Digital Assets Librarian
> Newman Library
> Virginia Tech University Libraries
> Blacksburg, Virginia
> PH: (540) 231-9266
>
> On Wed, Jul 1, 2009 at 7:36 AM, Michiel Tibboel <[log in to unmask] 
> <mailto:[log in to unmask]>> wrote:
>
> Hi all,
>
>  
>
> For our Swets services SwetsWise Selection Support (price per use and 
> usage statistics) and ScholarlyStats (usage statistics) our team 
> is working with publishers to set up collection of usage reports 
> through SUSHI webservice. Our team has compared the usage reports 
> retrieved via SUSHI webservice with the usage reports available at the 
> publisher websites. They found that the usage statistics/ publications 
> did not always match, so our team is monitoring this for our customers 
> and working closely with publishers.
>
>  
>
> The implementation of SUSHI webservices by publishers is encouraged 
> and ScholarlyStats can provide a testing environment for publishers. 
> ScholarlyStats is committed to finding the most cost effective way to 
> collect and consolidate usage reports for libraries. We are tracking 
> the SUSHI implementations with publishers.
>
>  
>
> ScholarlyStats indeed also provides customers with a SUSHI webservice 
> to harvest their Consolidated Journal Report 1, with the usage 
> statistics for all publishers/platforms in one report, directly into 
> SwetsWise Selection Support, Innovative ERM or Thomson JUR. 
>
>  
>
> Publishers reading this list that are interested in testing their 
> SUSHI webservice with ScholarlyStats can contact me directly.
>
>  
>
> thanks
>
> *Michiel Tibboel*
> Product Manager 
>
> *Swets*
>
>  
>
> *From:* [log in to unmask] 
> <mailto:[log in to unmask]> 
> [mailto:[log in to unmask] 
> <mailto:[log in to unmask]>] *On Behalf Of *Cliff Spencer
> *Sent:* Tuesday, June 30, 2009 3:16 PM
>
>
> *To:* [log in to unmask] <mailto:[log in to unmask]>
>
> *Subject:* [lib-stats] SUSHI
>
> Hi Leslie,
>
>  
>
> ScholarlyStats do still collect manually (probably at big cost), but 
> it's inconceivable that they won't use SUSHI once it is established. 
> Most publishers who are COUNTER compliant will also need to be SUSHI 
> compliant within a year.
>
>  
>
> I've been fiddling around with the (free) code to try and develop my 
> own client, but my coding is not very good, and there are lots of 
> issues even for professional programmers -- see 
> http://www.niso.org/workrooms/sushi
>
>  
>
> So now I'm looking at Tom Barker's free code:
>
>  
>
> "The University of Pennsylvania recently put together a client to 
> harvest SUSHI1.6/COUNTER3.0 data.  We have decided to release it to 
> the Sushi community under the Apache 2 License.
>
>  
>
> The project is here: https://labs.library.upenn.edu/SushiToolkitDocs/site/
>
> We also have a web interface built on the toolkit to create simple 
> spreadsheet reports here: 
> https://labs.library.upenn.edu/SushiWebClient/SushiCall     
>
>  
>
> If you want to pay I've heard good reports of 
> http://www.scholarlyiq.com/ which looks as though it can provide a 
> flexible solution.
>
>  
>
> We have no plans to buy an ERM so hope to download into a data 
> warehouse and play around with the XML reports.
>
>  
>
> I'm much more interested in the analysis to be honest, and putting 
> reports into a secure web site which will show the metrics for our 
> subscriptions. This is all very new and time is needed for 
> development, all against a background of fiscal restraint. (No cash!)
>
>  
>
> BW.
>
>  
>
> C.
>
>  
>
> ===
>
> *From:* [log in to unmask] 
> <mailto:[log in to unmask]> 
> [mailto:[log in to unmask] 
> <mailto:[log in to unmask]>] *On Behalf Of *Leslie O'Brien
> *Sent:* 22 June 2009 21:39
> *To:* [log in to unmask] <mailto:[log in to unmask]>
> *Subject:* Re: [lib-stats] RE: Products for usage stats reports
>
>  
>
> Hi, Cliff.  Thanks for your insights and for sharing your workflow.  
> One question--do you know what SUSHI client service you will be 
> getting?  We are able to harvest our reports from Scholarly Stats via 
> SUSHI using our III ERM, but I'm thinking that Scholarly Stats had to 
> collect the reports manually?  I wonder how many of the providers are 
> Release 3 COUNTER Code/SUSHI-compliant.
>
> Leslie O'Brien
> Virginia Tech
>
>  
>


-- 
______________________________________________

Marion Tattersall
Research Development Librarian
Academic Services, University Library
Room L07, Octagon
The University of Sheffield
Sheffield S10 2TN

phone: 0114 2227281    fax: 0114 2227290
_________________________________________

Normally available 0900-1600 Mon-Thurs