I would take it for granted that politicians have a very simple view of
statistics. This is also part of popular culture. And this is hardly
surprising in the light of the passive role played by statisticians in
appraising statistical evidence.
Statisticians don't challenge statistics for 'real income' for example.
They just produce the statistics and publish them - without regard for
their limitations. Of course all statistics should be critically
appraised. But publication is a necessary part of that appraisal. Isn't the
situation even worse if the statistics are acted upon but not published?
(I'll come back to that point!)
Neither the statisticians nor the government control the way the media use
statistics. Allan Reese points out that there was very little reader
reaction to the publication of league tables for schools. Editors are
guided by readers' reactions. With honourable exceptions - such as Harvey
Goldstein - there is little interest in, let alone protest about, the
publication of statistics likely to mislead. So why should editors publish
material on the limitations of statistics? Or even bother to ask members
of the GSS about the limitations?
The RSS did protest, to take another example, that the Claimant Count should
not be the headline count for unemployment. But that protest was misplaced.
Should statisticians, or the government, decide on the headlines? The
only effect I see is that the media are a bit more specific in referring to
the coverage of the Count than they used to be. The RSS has not followed up
its appraisal of unemployment statistics in any significant way in spite of
the large volume of evidence that has since become available as to exactly
how and why the Count of Claimants has been misleading.
It is unrealistic to expect that all statistics picked up by the media
should carry health warnings about how they can mislead. But where I
differ from Harvey and Allan is in believing that publication without health
warnings is probably a necessary preliminary to the development of health
warnings and to understanding of the factors that influence the statistics.
In the case of school performance, for example, one of the main variables
likely to be associated with variation in performance is the income level of
pupils' households. We do have a crude indicator of that income level: the
proportion of pupils receiving free school meals.
These FSM statistics for individual schools have an unusual status. In
England and Wales they are available for research but not available for
publication.
If FSM indicators explain a substantial proportion of the variation in
school performance, then we might be getting a little bit closer to
understanding the real-world situation. But without publication it is not
clear that understanding would be advanced.
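The kind of analysis I have in mind can be sketched very simply. The
following is a hypothetical illustration, not real FSM or performance data:
it assumes a linear relationship between a school's FSM proportion and its
performance score, generates synthetic figures on that assumption, and asks
what share of the variation (the R-squared) the FSM indicator accounts for.

```python
import numpy as np

# Hypothetical sketch: how much of the variation in school performance
# might be accounted for by the free-school-meals (FSM) proportion.
# All figures below are synthetic; the linear relationship is assumed.
rng = np.random.default_rng(0)

n_schools = 200
fsm = rng.uniform(0.0, 0.6, n_schools)       # proportion of pupils on FSM
noise = rng.normal(0.0, 5.0, n_schools)      # unexplained school effects
performance = 70.0 - 40.0 * fsm + noise      # assumed relationship

# Ordinary least squares fit of performance on the FSM proportion
slope, intercept = np.polyfit(fsm, performance, 1)
predicted = intercept + slope * fsm

# R-squared: share of the variance in performance accounted for by FSM
ss_res = np.sum((performance - predicted) ** 2)
ss_tot = np.sum((performance - performance.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(slope, r_squared)
```

With figures like these the FSM indicator accounts for well over half of the
variation between schools - which is exactly the sort of finding that could
not be checked or debated without publication of the school-level data.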
Ray Thomas, Social Sciences, Open University, Milton Keynes, MK7 6AA
Tel: 01908 679081 Fax 01098 550401 Email: [log in to unmask]
Post (home): 35 Passmore, Tinkers Bridge, Milton Keynes MK6 3DY
> The problem with the school league tables and the university ones is that
> those issuing them (or issuing the raw data upon which they are based) know
> full well what their limitations are, yet they refuse to tell people about
> these limitations in a way that will indeed create an informed public
> debate. Thus, there are many examples of ministers talking about the school
> league tables as if they were reliable indicators of educational quality.
> This is more than just 'crass' - it is hypocritical.