PHD-DESIGN@JISCMAIL.AC.UK



Subject: Assessing the learning of undergraduate design students
From: teena clerke <[log in to unmask]>
Reply-To: teena clerke <[log in to unmask]>
Date: Tue, 18 Jul 2006 17:33:20 +1000


Dear Alex,

Touching on your observation:
Echoing Gunnar: One problem with traditional design project
assessment, assuming it's done well, is that it happens at the end of
a project, normally the end of a semester, all at once. Up to that
point a student may not know how they are doing. Due to the nature of
ID projects, the thing to be assessed often appears at the last
minute. Good scholarship and other qualities, like diligence and team
working, aren't evaluated. Traditional design project assessment is
done badly, in my opinion, when it focusses on the result instead of
the design process.

This is what Boud discusses: the problem of constructing an
assessment process that is student-centred rather than
institution-centred, that involves students in an ongoing way,
prompts reflection on their learning through each stage of the design
process, and asks them to evaluate their own progress against
explicit criteria based on industry standards, jointly agreed at the
beginning of the project. This means that formative assessment is
ongoing and cumulative, so that students are aware of their progress
and can ask for assistance in improving areas identified as requiring
work before the final submission. Summative assessment may occur at
staged intervals, tied to exercises designed to introduce principles
or technical skills and check understanding of them, or to learning
journals designed to prompt reflection on learning. These may be
assessed during the project or all at the end (if that is still the
way it occurs). Assessment then has more and varied components,
conducted by the teacher with student and peer involvement, and is
less dependent on subjective (external) teacher evaluation and on
institutional requirements for ranked student grades. Even though
ranking is still required, the outcome should align more or less with
student expectations.

Biggs talks about constructive alignment of learning with assessment
criteria; Boud, however, takes this further than simple alignment,
suggesting that we need to really think through the process ourselves
as professionals: to try to understand what it is that we are
actually doing when we design, how we evaluate the process ourselves,
and how we recognise and articulate success in our own work. This
becomes the basis for developing assessment tools that assist
students to self-assess the very things you suggest are difficult to
assess (particularly in undergrad courses). The interesting thing is,
once you start to do this, the question occurs: what are we marking
when we mark, and does it correlate to our self-evaluation during our
own design processes? That is, do we evaluate good scholarship,
diligence and team effort in our own professional work, and if we
don't, then why is it an assessment requirement in design education?
My own observation is that it is tricky to construct a design project
that evaluates student progress in learning how to learn about design
(which is presumably what university design education attempts to
do), while also evaluating the outcomes against industry standards
(which is traditionally what vocational education attempted to do).
And it gets trickier when student expectations are to learn 'how to
do it', not how to learn about how to approach doing it (sorry if
this is getting a little muddy).

In teaching graphic design, I embed assessment each week in a range
of ways that involve students in peer and self-assessment processes
that are informal and formative and do not result in marks. This is an
attempt to create a regular assessment context that allows students 
to see a range of responses to a staged weekly outcome and asks them 
to articulate what is interesting and why, in relation to a specific 
design principle, rather than to simply express a 'judgement' about 
what is 'cool' or 'good', which may bear no relationship to the 
actual assessment criteria.

The tricky thing is balancing the awarding of marks for process and
outcome in summative assessment. This sits uncomfortably with me, as
I am aware of how subjective it is; I am much happier providing
formative feedback for learning, identifying areas of strength and
areas for improvement, in verbal and written form. However, I have
designed a number of summative assessment tools that together account
for roughly half the subject mark (essentially evaluating design
process, thinking and critical self-reflection), with the remainder
awarded for an applied outcome (evaluating application of the process
in a defined communication context such as a poster). Even in this
context, students decide the communication 'content' against which I
mark their outcome. This is a way of engaging them in the criteria:
if they define what is to be communicated, to whom, and with what
intended response, they are more likely to address those criteria
successfully.

Like Paul, I do not provide 'final' summative assessment until a
'review' process has occurred, in which the 'finished' project is
presented formally in class and the student receives explicit
feedback on strengths and suggested improvements. Another student
records the comments, so the presenting student can be fully engaged
in the discussion. Students then have another week to amend their
project (or not) and submit the work, which is then marked. I think
this correlates better with graphic design practice, where the first
presentation is often amended before sign-off.

Anyway, in my experience, students appreciate being able to define 
their own assessment context - it helps them understand how 
assessment works in the institution, and how work is evaluated in the 
profession. Role-playing and simulation (client and designer, client 
and target market, designer and target market, target market and 
sales rep, etc.) might help in this case, though I haven't yet had 
the courage to try this myself!

I hope this is helpful. Regards, teena


BIGGS, J. B. (1996a) Enhancing teaching through constructive
alignment, Higher Education, 32, pp. 347-364.
-- 
Teena Clerke
PO Box 1090
Strawberry Hills NSW 2012
0414 502 648
