Dear Stephen,
My apologies for replying late to your post.
You mentioned competency-based assessment of design students. This offers accurate and reliable assessment of competence, rather than of quantity or quality.
Currently, I suggest that competency-based assessment offers the best way forward for evaluating design students. It resolves many problems of accuracy and subjectivity in assessment, facilitates accurate and transparent curriculum design, and helps clearly define curriculum resourcing and delivery.
The challenge of taking the competency-based path in design education assessment, however, is both deep and radical. It requires dropping almost ALL of the conventional attitudes and practices of assessing design students - especially the practice of making professional judgements about the 'quality' of students' design outputs.
Instead, it requires identifying a large number of competences defined so specifically that any and all competent assessors can instantly and uniformly confirm whether a student has demonstrated a specific competence (or not).
That is, there is zero subjective judgement, and no judgement of a partial percentage of any specific competence. The outcome of assessing the student's evidence is only either Yes (the student has demonstrated that specific competence) or No (the student has not yet demonstrated that specific competence).
Any percentages in grading at a larger scale for university purposes come from assessing how many competences the student has successfully demonstrated.
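By way of a minimal sketch (the competence names below are invented for illustration, not taken from any actual curriculum), the record-keeping and aggregation need be no more complicated than this:

    # Minimal sketch of competency-based aggregation.
    # Competence names are hypothetical; each is recorded only as
    # demonstrated (True) or not yet demonstrated (False).
    evidence = {
        "produce_orthographic_drawing": True,
        "conduct_user_interview": True,
        "write_design_brief": False,
        "build_cad_model": True,
    }

    demonstrated = sum(1 for ok in evidence.values() if ok)
    percentage = 100 * demonstrated / len(evidence)

    print(f"{demonstrated}/{len(evidence)} competences demonstrated = {percentage:.0f}%")
    # -> 3/4 competences demonstrated = 75%

The point is that the only judgement made at the level of each competence is the Yes/No one; the percentage is pure counting.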
To increase student learning and speed up marking, we have found that having students produce a commentary on their evidence helps.
(See Love, T. and Cooper, T. (2010). The Central Role of Commentary on Evidence in E-Portfolios. In N. Buzzetto-More (Ed.), The E-Portfolio Paradigm: Informing, Educating, Assessing and Managing with E-Portfolios (pp. 267-288). Santa Rosa, California: Informing Science Press. http://www.love.com.au/docs/2010/commentary.pdf )
I'm writing this wondering: how would it apply in a Korean context?
Warm regards,
Terry
==
Dr Terence Love
Director
Design Out Crime & CPTED Centre
Perth, Western Australia
[log in to unmask]
www.designoutcrime.org
+61 (0)4 3497 5848
==
ORCID 0000-0002-2436-7566
-----Original Message-----
From: [log in to unmask] [mailto:[log in to unmask]] On Behalf Of Stephen B Allard
Sent: Friday, 30 June 2017 11:38 PM
To: [log in to unmask]
Cc: Stephen B Allard <[log in to unmask]>
Subject: Re: About grading design projects: Evaluation on quantity or quality?
Dear Dennis and all...
I've just completed the grade rebuttal phase of my latest spring semester grading period, and have reflected on how my grading methods have changed relative to what has been queried and discussed in this thread over the past weeks. I have observed that there is much more going on with grades and grading than mere design project quality and quantity values, especially in the big data world we now occupy.
When I began teaching design years ago, my grading was primarily focused on project quality and relied on my more or less 'subjective' expertise and experience as a designer working in the field at that time to teach, give feedback and evaluate student work. Letter grades were used to communicate design project quality levels and skills improvement to students and to the university registrar departments that collect and record student grades. At the time, using current industry best practices with students was valued both by the university I was teaching at and by the students, who respected how I was developing my career as a designer outside of academia in the world of practice. 'Higher level design project quality' grades went to projects that came close to, or might pass muster in, a real-world design department or studio setting. Students whose projects were awarded a lower grade had not met the admittedly subjective standards that industry can apply to design work. Although this kind of grading is good for evaluating design work that might make it in the practice world, it does not serve students, professors, university departments or government ministries of education well. Since those early years of my grading, I have come to better understand the many ways in which, and reasons why, grades are valued by the different groups involved in the process of design education.
As I made deeper inroads into teaching design at a variety of universities, I began to include more quantity-based evaluation methods, partly to better address student expectations and their inquiries into my grading methods, and partly to satisfy departmental mandates for more measurable objectivity. I experimented with rubrics and their associated, somewhat arbitrary numerical values, but found that this more mathematical approach to explaining why one student had performed better than another was not improving the quality of design outcomes. I observed that students will shift how they value the process of grading design away from subjective design project quality towards a more quantitative, numerically based measurement system if they are given numerical values that explain their progress. To students, numbers help explain who is doing better and who is doing worse, but they do not serve to measure design project quality; in fact, they lower it quite substantially. I have learned that students are able to use number values and math to understand why each facet of their project does or does not meet the standard set by a rubric, but only relative to their classmates and 'their numbers'. University registrar offices likewise use number values and math to record grades and to understand where they stand statistically against competing university programs. They then share this statistical data with the marketing department and the education ministry, who know nothing about design quality and only value design student performance levels. This marks the beginning of a very problematic and growing sea change in design education when viewed through the lens of big data methods of measuring the quality of design student outcomes.
I am currently part of a national effort to measure the competence levels of design students in South Korea. It has involved very large investments in big-data-related software interfaces aimed at compiling performance-related grade data at a national level and turning student skill and performance levels into a language that various forms of artificial intelligence algorithm can process, so as to inform ministry of education officials on how well or poorly the nation's design schools are doing. Professors have become the data-input labor force of IT industry consultants who claim that they can use machine learning and AI to produce valid measurements of student competence and performance in design and other disciplines. As a result, many professors are forced to massage and manipulate the quantity and quality levels of student grades and performance in order to meet strict relative grading mandates set by the school and, in turn, by the national government. I have observed that this has an impact on design project quality. I now not only transfer letter-graded midterm and final project quality and quantity levels into numerical form, but also factor in attendance and participation measurements as part of the final grade equation. All of this data gets converted into numerical form that software algorithms can understand, and is then communicated to a variety of other databases that measure and report student performance statistics at the department, university and national government levels.
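To make the conversion concrete, here is a rough sketch of the kind of final grade equation I mean (the letter-to-number mapping and the weights are invented purely for illustration; they are not my department's actual figures):

    # Rough sketch only: hypothetical mapping and weights for converting
    # letter-graded project work plus attendance and participation into
    # the single numeric score the reporting software expects.
    letter_to_score = {"A": 95, "B": 85, "C": 75, "D": 65, "F": 50}

    def final_score(midterm_letter, final_letter, attendance_pct, participation_pct):
        return (0.30 * letter_to_score[midterm_letter]
                + 0.40 * letter_to_score[final_letter]
                + 0.15 * attendance_pct
                + 0.15 * participation_pct)

    print(final_score("B", "A", 90, 80))  # -> 89.0

It is this single number, stripped of any sense of the work itself, that travels up through the databases.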
Students are well aware of this increase in numerical, quantity-based measurement of performance levels of all kinds as they share and compare with their classmates at the end of the semester. As a result, they are in danger of losing a vast area of understanding of design project quality and its history as we all bow more and more towards satisfying the data-based methodology of measuring, recording and communicating student performance levels rather than the quality outcomes of their design efforts. As teachers and professors reduce student work to a language that can be understood by algorithms, which is then shared among other machine learning interfaces that spit out all manner of perspectives on the data set at hand, we are in danger of losing a very human-centered aspect of design and how it has been applied to better the quality of life for humanity.
Which raises the question... Are design and its outcomes meant to serve machines, or are they meant to serve humanity?
-----------------------------------------------------------------
PhD-Design mailing list <[log in to unmask]>
Discussion of PhD studies and related research in Design
Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------