Dear Alex,
Touching on your observation:
Echoing Gunnar: One problem with traditional design project
assessment, even assuming it's done well, is that it happens at the
end of a project, normally the end of a semester, all at once. Up to
that point a student may not know how they are doing. Due to the
nature of ID projects, the thing to be assessed often appears at the
last minute, and qualities like good scholarship, diligence and
teamwork aren't evaluated. Traditional design project assessment is
done badly, in my opinion, when it focusses on the result instead of
the design process.
This is what Boud discusses: the problem of constructing an
assessment process that is student-centred rather than
institution-centred - one that involves students in an ongoing way,
prompts reflection on their learning through each stage of the design
process, and asks them to self-evaluate their progress against
explicit criteria, based on industry standards, agreed jointly at the
beginning of the project. Formative assessment is then ongoing and
cumulative, so that students are aware of their progress and can ask
for assistance in improving areas identified as requiring work before
the final submission. Summative assessment may occur at staged
intervals, tied to exercises designed to introduce principles or
technical skills and check understanding of them, or to learning
journals designed to prompt reflection on learning. These may be
assessed during the project, or all at the end (if that is how it
still occurs). This means assessment has more and varied components
(conducted by the teacher with student and peer involvement) and is
less dependent on subjective (external) teacher evaluation and
institutional requirements for ranked student grades (even though
ranking is still required, the outcome should align more or less with
student expectations).
Biggs talks about constructive alignment of learning with assessment
criteria; Boud, however, takes this further than simple alignment -
suggesting that we need to really think through the process ourselves
as professionals and try to understand what it is that we are really
doing when we design, how we evaluate the process ourselves, and how
we recognise and articulate success in our own work. This becomes
the basis for developing assessment tools to assist students to
self-assess the very things you suggest are difficult to assess
(particularly in undergrad courses). The interesting thing is that
once you start to do this, a question occurs: what are we marking
when we mark, and does it correlate with our self-evaluation during
our own design processes? I.e., do we evaluate good scholarship,
diligence and team effort in our own professional work, and if we
don't, then why is it an assessment requirement in design education?
My own
observation is that it is tricky to construct a design project that
evaluates student progress in learning about learning about design
(which is presumably what university design education attempts to
do), while trying to evaluate the outcomes against industry standards
(which is traditionally what vocational education attempted to do).
And it gets trickier when student expectations are to learn 'how to
do it', not how to learn about how to approach doing it (sorry if
this is getting a little muddy).
In teaching graphic design, I embed assessment each week in a range
of ways that involve students in peer and self-assessment processes
that are informal, formative, and do not result in marks. This is an
attempt to create a regular assessment context that allows students
to see a range of responses to a staged weekly outcome and asks them
to articulate what is interesting and why, in relation to a specific
design principle, rather than to simply express a 'judgement' about
what is 'cool' or 'good', which may bear no relationship to the
actual assessment criteria.
The tricky thing is balancing the awarding of marks for process and
outcome in summative assessment - this sits uncomfortably with me, as
I am aware of how subjective it is. I am much happier providing
formative feedback for learning - identifying areas of strength and
areas for improvement - in verbal and written form. However, I have
designed a number of summative assessment tools that together account
for roughly half the subject mark (essentially evaluating design
process, thinking and critical self-reflection); the remainder is
awarded for an applied outcome (evaluating application of process in
a defined communication context such as a poster). Even in this
context, students decide the communication 'content' against which I
mark their outcome - this is a way of engaging them in the criteria:
if they define what is to be communicated, to whom, and with what
intended response, they are more likely to address these
successfully.
Like Paul, I do not provide 'final' summative assessment until a
'review' process occurs, in which the 'finished' project is presented
formally in class and the student receives explicit feedback on
strengths and suggested improvements. Another student records the
comments, so the presenting student can be fully engaged in the
discussion. They then have another week to amend their project (or
not) and submit the work, which is then marked. I think this
correlates better with graphic design practice, where the first
presentation is often amended before sign-off.
Anyway, in my experience, students appreciate being able to define
their own assessment context - it helps them understand how
assessment works in the institution, and how work is evaluated in the
profession. Role-playing and simulation (client and designer, client
and target market, designer and target market, target market and
sales rep, etc.) might help in this case, though I haven't yet had
the courage to try this myself!
I hope this is helpful, regards, teena
BIGGS, J.B. (1996) Enhancing teaching through constructive alignment,
Higher Education, 32, pp. 347-364.
--
Teena Clerke
PO Box 1090
Strawberry Hills NSW 2012
0414 502 648