APOLOGIES FOR CROSS POSTING - (PLEASE DON'T REPLY TO THIS POSTING THROUGH
THE ADMIN-STUDENT MAILBASE - USE ADMIN-PLANNING INSTEAD)

Exploring 'Value-Added' Performance Indicators in Higher Education


On 9 May, a Conference on 'Exploring Value-Added Performance Indicators' was
held at the University of Central Lancashire's Preston Campus, organised by
the Planning & Performance Review Office.

Speakers included Leslie Wagner, Vice-Chancellor of Leeds Metropolitan
University, and John Thompson from HEFCE, together with speakers from Durham
University School of Education, the University of London Institute of
Education and Lancashire County Council - all with long experience of
value-added performance indicators in schools.

The Conference generated some useful discussion, and it was agreed that the
issues raised should be explored further on the admin-planning mailbase. As
a first contribution, I thought it would be useful to summarise the key
points of the Conference.

There was some discussion of what was meant by 'value-added'. John Thompson
usefully reminded the Conference that the Treasury thought of 'value-added'
in economic and monetary terms, ie the contribution of higher education to
the economy. However, most speakers looked at 'value-added' measures in
terms of comparative performance - taking an input and an output value for a
cohort of individuals, determining the regression line linking the two
values, and measuring 'value-added' as the difference between actual
performance and the regression line. As Leslie Wagner pointed out, it would
not be technically impossible to devise such a measure for HE based on 'A'
Level entry scores and degree classifications. HEFCE had already produced
equally complex performance indicators. All that was required was the
political will.
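For anyone who wants to see the mechanics, the regression-based measure
described above can be sketched in a few lines of code. This is only an
illustration of the general technique, not any official method, and the
entry scores and degree marks below are made-up numbers.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

def value_added(entry_scores, degree_scores):
    """Residual for each individual: actual output minus the output
    predicted by the cohort's regression line."""
    slope, intercept = fit_line(entry_scores, degree_scores)
    return [y - (slope * x + intercept)
            for x, y in zip(entry_scores, degree_scores)]

# Illustrative cohort: 'A' Level points in, final degree marks out
# (invented figures, purely for demonstration).
entry = [12, 18, 24, 30, 16, 22]
marks = [55, 60, 66, 68, 62, 63]
residuals = value_added(entry, marks)
# A positive residual means the individual did better than the
# regression line predicted from their entry score.
```

In practice the measure would be aggregated (for example, averaging the
residuals for a department or institution), and, as the speakers noted,
the coarseness of degree classifications is what makes the output side
problematic.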

Other speakers, however, thought that the technical deficiencies of such a
measure could undermine its value. The two main concerns were:
 - the lack of a common standard for awarding degrees (though, as Leslie
Wagner pointed out, this is a dangerous line of argument to adopt);
 - more significantly, the broad banding of degree classifications - the
fact that the majority of students received a second class degree could make
the results statistically meaningless.

The compulsory education sector has been using such performance measures
for well over 10 years, and they are now a valued (though often optional)
tool for management and educational development.  They have become
increasingly sophisticated over the years and can take various factors such
as gender and social deprivation into account.  Advanced statistical
methods such as multi-level modelling are sometimes used.  The results are
not published, and the speakers involved with the compulsory sector felt
this was an important factor in establishing their credibility with
schools.  Experience also highlighted that there could be significant
differences in performance at subject level even within the same school
(and great variations from one year to the next).  A key advantage in the
compulsory sector is the existence of a national curriculum and standard
assessment instruments (Key Stage tests, GCSE, NFER CAT, YELLIS, etc).  The
introduction of Curriculum 2000 - the reform of A levels and GNVQs - is
likely to make this more relevant in the 16-18 sector.

It was clearly not straightforward to apply these lessons to higher
education, but it did seem possible to envisage 'value-added' performance
indicators playing a role, perhaps initially as an internal management tool
to help identify areas of good and weaker performance within an
institution. Using degree classifications as the output could be
problematic, but it might be possible to use the student's actual final
mark instead, modified if necessary into bandings of marks with more
categories than degree classifications.  I would be interested in hearing
from any other institutions that may have already considered this.

A simpler measure of institutional 'value added', based on continuation and
completion relative to entry qualification and subject base, is already
available in the HEFCE performance indicators published last December
(calculated by taking, in Table T5, the 'projected efficiency' from the
'benchmark' - I can supply a spreadsheet with this calculation if anyone
can't, or doesn't want to, do it themselves).
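For those without the spreadsheet, the calculation is just a difference of
two Table T5 columns per institution. The figures below are invented, and
the sign convention here (projected efficiency minus benchmark, so that a
positive figure means outperforming the benchmark) is my own reading -
flip the sign if the other convention is preferred.

```python
# Invented example figures standing in for two Table T5 columns -
# not real HEFCE data.
table_t5 = {
    # institution: (projected efficiency %, benchmark %)
    "Institution A": (84.0, 81.5),
    "Institution B": (78.0, 80.0),
}

# Simple 'value added': the gap between an institution's projected
# efficiency and its sector-adjusted benchmark.
value_added_t5 = {inst: round(projected - benchmark, 1)
                  for inst, (projected, benchmark) in table_t5.items()}
# Institution A comes out above its benchmark, Institution B below.
```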

Lee Elliott Major from the Guardian was able to join the Conference and will
be considering the issues raised from the point of view of the Guardian's
current subject-table value-added measure.  John O'Leary and Bernard
Kingston - the principal compilers of the Times league tables - were unable
to attend, but have reiterated to me that they would be interested in
including a value-added measure in their own league table, if one method
were to become widely acceptable within HE and beyond.

A number of staff from the University of North London published an article
several weeks ago (2 May) in the Guardian on the broader issue of social
inclusion, but with some relevance to 'value added'.  They have established
a mailbase list, 'value-added-he'.  You can download the article, browse
the archives, and join the discussion list at:
http://www.unl.ac.uk/mco/socialinclusion/

I will keep the admin-planning mailbase updated on progress.

Mike Milne-Picken
Head of Planning and Performance Review
University of Central Lancashire
[log in to unmask]
www.uclan.ac.uk/planning

