Hi,

If comparisons are intended to be on a rating scale across the levels, this might not measure what is wanted. One option would be to use AHP (the Analytic Hierarchy Process). It gives each respondent a method to compare each element against every other on a 19-point scale (-9 to +9). Once all the maths is done, it produces a priority listing of the elements for each respondent, and also a consistency value, i.e. if too much priority is placed on some elements, a bias is identified. The software used is called Expert Choice, but I must warn you (if you had not guessed already!) that the maths in this is really horrible, and AHP is not part of the SPSS suite of programs. The software itself is straightforward, however, as is putting the numbers in and getting the results out.

By the way, the comparisons could be (using your six elements below: 1 for Lecture Notes and Handouts in VLE, 2 for PowerPoint, etc.):

"In terms of interactivity, which of these pairs of elements is more important?" (learner engagement would be a second set of questions)

1 v 2 (choose -9 to 0 to +9)
1 v 3 (choose -9 to 0 to +9)
1 v 4 (choose -9 to 0 to +9)
1 v 5 (choose -9 to 0 to +9)
1 v 6 (choose -9 to 0 to +9)
2 v 3 (choose -9 to 0 to +9)
2 v 4 (choose -9 to 0 to +9)
2 v 5 (choose -9 to 0 to +9)
2 v 6 (choose -9 to 0 to +9)
3 v 4 (choose -9 to 0 to +9)
3 v 5
etc. etc. etc.

(e.g. -8 in 1 v 2 indicates 1 is much, much better than 2; +8 indicates 2 is much, much better than 1)

The EC software processes the comparisons and outputs the 6 elements in a priority list, with the priorities adding up to a total of 1. It also gives a consistency value between 0 and 1. An acceptable consistency value is less than 0.1; anything above that (>0.1 to 1.0) suggests a bias is being introduced, which warrants further review.

Hope this helps.

Dr. John Beaumont-Kerridge
Principal Teaching Fellow
University of Luton Business School
[log in to unmask]

On Fri, 7 Jan 2005 12:26:01 +0000, Niall Watts <[log in to unmask]> wrote:

>My apologies.
>This is somewhat off topic but I thought someone on the listserv might
>have come across something like ...
>
>I am looking for a scale that could be used to evaluate eLearning materials in terms
>of their interactivity and learner engagement. Something like:
>
>Level 1 - Lecture Notes and Handouts in VLE
>Level 2 - PowerPoint with multimedia or limited interactivity
>Level 3 - Simple Quiz
>Level 4 - Discussion Board
>Level 5 - Simulations & Case Studies
>
>These levels and titles are indicative only.
>
>Does anyone know of such a scale?
>
>Many thanks
>
>Niall Watts
>Educational Technology Officer
>Audio Visual Centre
>University College Dublin
>Belfield
>Dublin 4
>Ireland
>
>T. + 353-1-716 7035
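
P.S. For anyone curious about the "horrible maths" Expert Choice is doing under the hood, the core AHP calculation can be sketched in a few lines of Python. This is a rough illustration only, not the EC implementation: the priority weights come from the principal eigenvector of the pairwise-comparison matrix (here via power iteration), and the consistency ratio is CI/RI, with CI = (lambda_max - n)/(n - 1). The mapping from the questionnaire's -9..+9 answers onto Saaty's 1-9 reciprocal scale is my assumption.

```python
# Sketch of the AHP maths (not Expert Choice's actual code).
# a[i][j] holds how much more important element i is than element j
# on Saaty's 1-9 scale, with a[j][i] = 1/a[i][j].
# (Assumed mapping: an answer of -8 on "1 v 2" gives a[0][1] = 8;
# +8 gives a[0][1] = 1/8.)

def ahp_priorities(a, iters=100):
    """Priority weights = principal eigenvector, via power iteration."""
    n = len(a)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(a[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]          # renormalise so weights sum to 1
    # Estimate lambda_max from A.w = lambda_max * w
    v = [sum(a[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] / w[i] for i in range(n)) / n
    return w, lam

# Saaty's random consistency indices for matrices of size 3..6
RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}

def consistency_ratio(a):
    """CR = CI / RI; values under 0.1 are conventionally acceptable."""
    n = len(a)
    _, lam = ahp_priorities(a)
    ci = (lam - n) / (n - 1)
    return ci / RANDOM_INDEX[n]

# A perfectly consistent 3-element example: element 1 is twice as
# important as element 2 and four times as important as element 3.
A = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
weights, _ = ahp_priorities(A)
print([round(x, 3) for x in weights])   # -> [0.571, 0.286, 0.143]
print(round(consistency_ratio(A), 3))   # -> 0.0 (consistent judgements)
```

With six elements and real respondents, the judgements are rarely perfectly consistent, which is exactly what the CR > 0.1 check is there to catch.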