Hi Mark,
My book does not claim to provide a questionnaire that ‘measures’
information literacy as though it were some kind of universal IQ test.
The notion of ‘measurement’ I use is highly context-specific.
There are two important points that need to be made about the use of
checklists in IL evaluation, to address some of your reservations.
Firstly, the situatedness of learning is fully addressed by ACRL
through the "embedded" model of IL (although when it comes to defining
this model I prefer the ANZIIL framework, which clearly illustrates
the contextualisation of IL within a lifelong learning perspective and
defines the various IL elements, such as generic skills, IL skills,
and values and beliefs, as well as the wider disciplinary perspective;
see Andretta, Figure 2.2, p. 22 and Figure 3.1, p. 44).
The second point concerns the use of the diagnostic questionnaire I
refer to in my book. Here it must be stressed that the questionnaire
is not used in isolation but as the basis of individualised learning
profiles that show students which particular areas of IL they need to
concentrate on during the course of the IL programme. This formative
questionnaire is complemented by an assessed evaluation, by the
students, of their learning experience, particularly in terms of
developing independent learning skills (in line with the embedded
model of IL) through the articulation of tool literacy, critical
thinking and evaluative skills. Such flexibility enables the
contextualisation of IL at subject level (students' engagement with IL
activities is determined by the discipline they are studying) and at
learner level, through the customisable learning plan, or feedback,
generated by the questionnaire; this promotes motivation and
independent learning by encouraging students to take responsibility
for their learning from the outset.
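To give a rough sense of the profile-building step in code (the IL
areas, responses and threshold below are invented for illustration and
are not taken from the actual questionnaire), the mapping from answers
to targeted per-area feedback might be sketched like this:

    # Illustrative sketch only: areas, answers and pass mark are made up,
    # not the actual DASS diagnostic questionnaire.
    from collections import defaultdict

    # Each answered question is tagged with the IL area it probes:
    # (area, answered correctly?) pairs for one student.
    responses = [
        ("defining the information need", True),
        ("search strategies", False),
        ("evaluating sources", False),
        ("citing and plagiarism", True),
        ("search strategies", True),
    ]

    THRESHOLD = 0.6  # hypothetical pass mark per area

    def learning_profile(responses):
        """Turn raw answers into per-area scores and targeted feedback."""
        totals, correct = defaultdict(int), defaultdict(int)
        for area, ok in responses:
            totals[area] += 1
            correct[area] += ok
        profile = {}
        for area in totals:
            score = correct[area] / totals[area]
            advice = "concentrate on this area" if score < THRESHOLD else "secure"
            profile[area] = (score, advice)
        return profile

    for area, (score, advice) in learning_profile(responses).items():
        print(f"{area}: {score:.0%} - {advice}")

The point of the sketch is simply that the same questionnaire yields a
different plan for each student, which is what makes the feedback
individualised rather than a single pass/fail mark.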
You are absolutely right in arguing that IL takes on different guises
according to various factors, such as learners' level of IL and their
professional background; this is why the diagnostic
questionnaire/self-evaluation approach works well with first-year
students but not with postgraduates: “Ideally a combination of these
strategies (i.e. diagnostic testing, formative and summative feedback
etc.) should be used to test different aspects of IL skills...
practice at DASS has shown that diagnostic testing is a more effective
method of integration at undergraduate level of provision, whereas
formative and summative assessment are more appropriate at
postgraduate level” (ibid., p. 63).
I also agree with your preference for a constructivist model of IL,
and I would add the need to adopt IL as a framework for learning, not
just a set of skills to be ticked off and then forgotten. Feedback
from the students seems to point to a sense of empowerment as they
develop competences in information literacy and, ultimately,
independent learning, and embark upon a process of continuous
learning: “I know that my IL skills will be very useful for my
studies. As my subject knowledge expands I will probably need new
skills in information literacy. It is not possible to become
information literate within a few months. But I believe I will go on
developing my IL skills through my degree and at work” (ibid., p. 96).
All the best
Susie
From: "M.Hepworth" <[log in to unmask]>
Date: 29 January 2005 16:37:03 GMT
Subject: KPIs for Information Skills Delivery and testing information
literacy
Reply-To: "M.Hepworth" <[log in to unmask]>
Hi,
KPIs, as I understand them, come from the organisational domain; they
encapsulate the goals of the organisation and are measurable. An
application of this approach to libraries can be found at
http://www.library.qut.edu.au/pubspolicies/strategicplan_kpi_wallchart_2003_2006.pdf
However, although there is a section on delivering information
literacy, it is brief, and there is a great deal more on the broader
goals of the academic library in general - maybe that was what was
required.
Shifting the topic to a related area, I would be interested in how
people test or measure information literacy, and in their views on the
approaches taken.
One approach is the checklist.
The ACRL list of learning outcomes still seems a good base for
developing a checklist for measuring information literacy, in the
sense that it defines goals for information literacy. Knowing these,
one could measure how successful one had been at delivering skills,
i.e. test the trainee. The ACRL standards can be found at
http://www.ala.org/ala/acrl/acrlstandards/informationliteracycompetency.htm#stan
and could be converted into a checklist.
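To make that concrete, a checklist is essentially the list of outcomes
plus a tick per outcome. A toy sketch in Python (the standard and
outcome wording is paraphrased and the ticks are invented, so this is
illustrative only, not the ACRL text):

    # Hypothetical checklist: outcomes paraphrased, tick marks made up.
    acrl_checklist = {
        "Standard 1: determines the nature and extent of the information needed": [
            ("defines and articulates the need", True),
            ("identifies a variety of types and formats of sources", False),
        ],
        "Standard 2: accesses needed information effectively and efficiently": [
            ("selects the most appropriate investigative methods", True),
            ("constructs and implements effective search strategies", True),
        ],
    }

    # Coverage per standard: proportion of outcomes the trainee has shown.
    for standard, outcomes in acrl_checklist.items():
        ticked = sum(done for _, done in outcomes)
        print(f"{standard}: {ticked}/{len(outcomes)} outcomes ticked")

Of course, coverage figures like these only tell you what was ticked,
not whether the learning was situated - which is my reservation below.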
Susie Andretta's (2005) recent book 'Information Literacy: A
practitioner's guide', published in Oxford by Chandos Publishing,
provides a diagnostic questionnaire that can be used to 'measure' a
person's information literacy. A set of questions covering a topic
indicates the level of knowledge in a particular area or sub-set of
information literacy.
Although these tests are obviously useful, I have reservations about
the checklist approach and wonder what others think. Here is my view.
Neither the ACRL standards and outcomes nor Susie's test tackles the
situatedness of learning, because of their generic nature. In fact
they do not intend to, so this is not a criticism, but it does raise
the question of whether attention to measurement may have negative
implications, especially with regard to information literacy. Let me
explain. Information literacy, although following broad similarities,
has distinct characteristics in terms of knowledge, attitudes and
skills according to the roles, tasks, knowledge, learning objectives
and learning styles of different people. For example, it will not be
the same for an experimental scientist, a humanities scholar, a person
dealing with their own critical medical condition or a chief
executive. Nor will it be learnt and applied in the same way by a
novice, an expert, a holist, a serialist, a visualiser and so on.
Therefore, to assume it is generic disguises the fact that unless
learning is connected to a recognisable context, helps achieve
relevant aims and objectives (i.e. it is applied), and also relates to
the characteristics of the individual, the depth of learning is likely
to be shallow.
As mentioned above, KPIs encapsulate goals, and these goals will be
different in different contexts. Seeking a generic approach to
measuring whether information literacy training has been successful is
obviously necessary and useful, but I think we need to be wary of
taking too mechanistic and behaviourist a view of information
literacy. Checkboxes that 'measure' information literacy minimise the
importance of the individual, experiential, constructivist nature of
learning. People may learn 'correct' responses to information literacy
tests, but if the training has not related to their learning needs,
will they make sense of it, internalise it and be able to apply that
knowledge to different situations?
The checkbox approach also risks presenting a simplistic view of
information literacy, which may be useful but may also be
counterproductive (checkbox ticked, done that, now move on) in terms
of persuading other people of the need to devote sufficient time to
developing this knowledge and its associated skills. This is evident
in the many models of information literacy that ignore the
motivational issues of information literacy as well as the complexity
of the thinking skills associated with the information literacy
process, such as setting goals, conceptualising, deductive/inductive
reasoning, categorising, synthesising, critical reflection and so on.
Sorry to go on, but I am currently developing an information literacy
training course, in conjunction with a PhD student, which tries to
integrate knowledge about the learning process, information-seeking
behaviour and information literacy with the practical aspects of
delivery and assessment. It will be implemented in Tanzania in April,
hence it is uppermost in my mind.
What methods have been used to evaluate information literacy? How do
these relate to differences in the individual and the domain?
It would be an interesting and worthwhile project for someone to review
current practice for evaluating information literacy. Perhaps it has
already been done?
Best wishes,
Mark