On 20 May 2014, at 2:32 am, Francois Nsenga <[log in to unmask]> wrote:
> David, please correct me if I am wrong, or else confirm my understanding
> that this latter connotation of 'evidence by default' is what you start
> with in your undertakings at Communication Research Institute, proving with
> "useful evidence" that actual communication instruments are faulty; or that
> the 'good' ones are yet non-existent.
We would have to tease out the differences in terminology to be absolutely sure that we are talking about the same things. But I think we agree.
Let me recast it into my own terms so that we can hopefully make some progress.
Ideally, in our work we start with the scoping stage. In that stage we would undertake some twenty separate investigations covering all the aspects that have been mentioned about this 'fuzzy' front end by Klaus, Ranjan, Elizabeth Pastor, IDEO, etc., plus a few investigations of our own design. These are all directed at investigating the 'problem space'. One of the emergent outcomes from this is a series of new ideas and possibilities. And, as Ranjan has pointed out, there is no evidence at that stage to suggest that any of these ideas or possibilities are worth pursuing.

Following this intensive incubation and inventive phase of scoping, we go back to our starting point and re-ask the question: "What is the problem space we are dealing with?" Sometimes this suggests a total redefinition of the problem and problem space. For example, in the medicines area, the problem may have nothing to do with the medicine information but with the medicine itself; it is the medicine that needs redesigning. In the case of the tax office, we sometimes think that a boundary shift in responsibilities might massively reduce the cost of tax collection.
All this work is recast into another set of questions. Given our potential redefinition of the problem, what is the current state of affairs? It is at this stage, which we call Baseline Measurement, that we try to measure, by collecting evidence, how well the world is currently coping with the problem as newly defined. This stage is, I believe, rarely if ever undertaken. I could go on at length about why this is a serious omission from most design projects. But I'm sure you can all join the dots.
It is only after this stage that we move into the familiar prototyping and iterative testing, etc., culminating in the monitoring stage. Properly done, we return to the beginning once monitoring shows a critical deterioration in performance, and start the process again. This sometimes happens as a result of a realignment of stakeholders and their interests, as Klaus suggests, but there are other factors that can result in a totally new problem space emerging.
The ones that intrigue me the most are those that emerge because of a shift in our languaging. But that's another issue.
I hope this clarifies our use of evidence. Are we still discussing the same issues?
David
--
blog: http://communication.org.au/blog/
web: http://communication.org.au
Professor David Sless BA MSc FRSA
CEO • Communication Research Institute •
• helping people communicate with people •
Mobile: +61 (0)412 356 795
Phone: +61 (0)3 9005 5903
Skype: davidsless
60 Park Street • Fitzroy North • Melbourne • Australia • 3068
-----------------------------------------------------------------
PhD-Design mailing list <[log in to unmask]>
Discussion of PhD studies and related research in Design
Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------