Gunnar asked:
> Will you give some specific examples? It’s hard to know what you’re claiming in the abstract.
This is not so much a “claim” as an observation based on its repeated recurrence in our projects.
Here is one little example. It’s taken from an early project we undertook some time ago for the South Australian Government. We were asked to develop a water rates bill for a new rating system.
Prior to our involvement there had been a major ongoing media battle between the government and the public, aided and abetted by the government opposition, about the seeming unfairness of the new system. There was even a court case in which ratepayers alleged that the government was overcharging for water. The case was lost, but significantly, two of the judges in their written judgement seemed not to understand how the rating system worked.
One of the central concepts of the new rating system was an access rate, a fixed charge that covered the cost of the infrastructure of pipes from the mains and the meter to a property. Apart from that, users under the new system were expected to pay for all the water they used.
Previously, ratepayers received an annual allocation of water and paid for that. They were charged more only if they used more than their allocation. The additional charge was called an excess water rate. To discourage overuse of water, the water utility charged a high price for every extra amount that ratepayers used over their allocation.
I should add that the specifics of this type of billing project are very demanding. There are massive costs in reprogramming the software for bill production systems. It’s not a matter of redesigning a single document, but of creating a set of rules for generating a very large set of possibilities. Any small errors on our part, leading to customer queries or complaints, make the call centre switchboard light up like a Christmas tree. Misunderstandings that give rise to payment errors or late payments are likewise very costly. We had to tread very carefully. One of the most vexing problems for us was that we could not see anything in the design of the existing bill that would lead to such confusion. It was dull, but hardly an outstandingly poor example of its kind. Our preliminary diagnostic testing at the baseline measurement stage did not throw up any major issues.
I should add that the ‘testing’ we do is open-ended, unlike a lot of usability testing. It’s designed to capture the unexpected as well as the routine. It draws on appreciative dialogue research rather than psychological or usability testing. Indeed, one of the first questions we ask each other after such testing is “What struck you? Was there anything you did not expect?” Outliers are important.
Following this early work, we developed some prototype bills and explanations of the new, seemingly controversial rating system. But this was more of a tidying up than a radical new design. We then asked ratepayers to read the bills and the explanations we had prepared, and invited them to explain them to us in their own words.
Our epiphany, or if you like, moment of creativity, came when we noticed something about ratepayers’ pronunciation. A small proportion of the ratepayers pronounced the word ACCESS as EXCESS. Further probing made it clear that a proportion of ratepayers did not have the word ‘access’ in their vocabulary. The nearest word in their lexicon to do with water was the term ‘excess’. This was why many of the ratepayers were angry: they did not understand why they were having to pay excess water rates when they hadn’t used any water. The solution involved not using the word ‘access’ in any of our explanations. The new bill, looking slightly smarter, with some modest wording changes, was introduced. The switchboard did not light up. There were no reports in the press. The government opposition lost interest in the issue. And people paid their water rate bills on time.
This is a modest example, but it shows two things: the value of testing of this type, and the precise moment in a project when we made a creative leap in understanding the problem we were facing. The moment did not come in the studio, nor in any brainstorming or creative sessions, nor in our detailed discussions with stakeholders, but in the routine testing of prototypes.
This was an example from the early 1990s—a modest little light bulb went on—but one of many since the 1970s. It remains a recurrent feature of nearly every project of this type since then. I should add that the methods we developed then have been copied, with a number of designers claiming to have invented the methods themselves. I suppose we should be pleased, even flattered, by this. If it had been done in an academic context it would be called plagiarism and they would lose their jobs. Different worlds. Different standards of behaviour.
I hope this helps
David
--
blog: http://communication.org.au/blog/
web: http://communication.org.au/
Professor David Sless BA MSc FRSA
CEO • Communication Research Institute •
• helping people communicate with people •
Mobile: +61 (0)412 356 795
Phone: +61 (03) 9005 5903
Skype: davidsless
60 Park Street • Fitzroy North • Melbourne • Australia • 3068
-----------------------------------------------------------------
PhD-Design mailing list <[log in to unmask]>
Discussion of PhD studies and related research in Design
Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------