Agreeing with Richard: about two years ago a few of us conducted a detailed log analysis of browsing behaviour in Renardus.

Renardus, while it existed, provided integrated searching and browsing access to
quality-controlled web resources from major individual subject gateway services. Its main
navigation feature was subject browsing through the Dewey Decimal Classification (DDC),
based on mapping classes of resources from the distributed gateways onto the DDC structure.
Among the more surprising results were the hugely dominant share of browsing
activity, the good use of browsing support features such as the graphical fish-eye overviews,
rather long and varied navigation sequences, and extensive hierarchical directory-style
browsing through the large DDC system.

And more, just based on log analysis. Of course, a triangulation of methods is always best, but this is an example of how much
log analysis can do...

Koch, T., Golub, K., and Ardö, A. (2006). Users' browsing behaviour in a DDC-based web service: a log analysis. Cataloging &
Classification Quarterly, 42(3/4), 163-186. Manuscript available at http://homes.ukoln.ac.uk/~kg249/publ/RenardusFinal.pdf

Cheers,
Kora


--------------------

[snip]

Richard Light wrote:

 > They don't, and of their very nature cannot, tell us anything useful about the informational or educational effectiveness of 
our museum web sites, nor about the enjoyment they might or might not bring.

I think they can, but you need to drill down to the page level and look
at things like the search terms people used to get to our sites (what
did they expect to find?), the search terms they used on our sites (what
can't they find in our navigation or information architecture?), or
dwell times on pages (which content do people stick around to read,
which objects have 'visual velcro'), etc.  But this takes more effort
than skimming over diagrams and total counts, and I don't know of many
(any?) peers with the time to do this.

(More on visual velcro at http://www.aam-us.org/pubs/visualvelcro.cfm)
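As a rough sketch of the kind of page-level digging described above, the snippet below estimates per-page dwell times from a web server access log in Combined Log Format. It is illustrative only: the regex, the use of IP address as a crude session key, and the log layout are assumptions, not how any particular museum site's analytics actually work.

```python
import re
from datetime import datetime
from collections import defaultdict

# Minimal Combined Log Format matcher (illustrative; real logs vary).
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) [^"]*"'
)

def dwell_times(lines):
    """Estimate per-page dwell time as the gap between consecutive
    requests from the same visitor (IP as a crude session key)."""
    sessions = defaultdict(list)
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group('ts'), '%d/%b/%Y:%H:%M:%S %z')
        sessions[m.group('ip')].append((ts, m.group('path')))

    per_page = defaultdict(list)
    for hits in sessions.values():
        hits.sort()
        # Dwell time on a page = time until the visitor's next request;
        # the last page in each session is unknowable from the log alone.
        for (t1, page), (t2, _) in zip(hits, hits[1:]):
            per_page[page].append((t2 - t1).total_seconds())

    return {page: sum(v) / len(v) for page, v in per_page.items()}
```

Note the built-in caveat: the final page of every session has no dwell time at all, which is one reason raw web stats need the kind of careful interpretation Mia describes rather than a skim of totals.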

 > Surely it's much better to acknowledge this, and use other mechanisms such as surveys to get the "right" information, than to 
squeeze unreliable conclusions out of the "wrong" information by ever-more-sophisticated analysis.

As the joke goes, 87% of statistics are meaningless/made up.

There was a useful distinction between diagnostic and reporting
statistics at the London hub workshop on Monday that can help when
thinking about the usefulness or otherwise of web stats.

cheers, Mia

-- 
Dr. Koraljka Golub, UKOLN
http://www.ukoln.ac.uk/ukoln/staff/k.golub/

**************************************************
For mcg information and to manage your subscription to the list, visit the website at http://www.museumscomputergroup.org.uk
**************************************************