Hi David,
Your comment is interesting:
>>>>"It’s now become part of our standard practice in discovering faults in designs. In that context we do count the number of faults at each iteration and we look to see a reduction over the repeat testing and refinement. But we don’t have to do a statistical analysis to know whether or not we have eliminated faults or not. As I’ve said before on this list. it’s a bit like clinical practice in medicine where you look for symptoms of pathology and then apply a treatment. You then look to see if the symptoms disappear. "
But isn't it a trap to think that when we find fewer faults, the cause
of that effect is the design intervention? As Ali said, couldn't it be
chance? Or, under a systemic mindset, couldn't it be caused by another
concurrent factor?
I would like to hear more about this.
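To make the chance question concrete, here is a minimal sketch (the
fault counts are made up purely for illustration, and it assumes Python
with scipy installed; this is not David's method, just one way to pose
the question). It asks: if a first round of testing finds 12 faults and
a second round, after a redesign, finds 4, how surprising would that
drop be if the redesign had actually done nothing?

# Minimal sketch: is the drop in fault counts plausibly just chance?
# The counts are hypothetical, chosen only for illustration.
from scipy.stats import binomtest

faults_before = 12  # faults found before the redesign (made up)
faults_after = 4    # faults found after the redesign (made up)
total = faults_before + faults_after

# Null hypothesis: the redesign changed nothing, so each of the 16
# faults was equally likely to show up in either round (p = 0.5).
result = binomtest(faults_after, n=total, p=0.5, alternative="less")
print(f"one-sided p-value: {result.pvalue:.3f}")  # ~0.038

Even a small p-value here only rules out coin-flip chance between the
two rounds; it says nothing about a concurrent factor, which is my
systemic worry.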
Regards,
Ricardo Martins
2018-05-11 3:55 GMT-03:00, Ali Ilhan <[log in to unmask]>:
> Dear David,
>
> I agree, up to a point. :) Yes, you do not have to use statistics all the
> time. But statistics are a rite of passage for a good reason in those
> fields; I would not call it silly. (In my department, not only was
> learning stats a rite of passage, but a basic understanding of qual
> research methods was part of the whole package too. But again, it
> depends.) It depends on the questions that you ask.
>
> Discovering faults (I think we discussed this before) is a different
> animal. You just need a couple of examples to prove that something is
> not working. Discovering why something works, on the other hand, is very
> different, and there, I think, you need stats. (The types of clinical
> practice that you mention are always backed up with statistical studies
> later on; one or two successful cases may be the product of pure
> chance.) I am not, and will never be, a quantoid orthodox (although
> quantitative work is what I do most of the time). The question drives
> the method, not the other way around. That said, the scale in design is
> off-balance (I am specifically talking about the studio education
> literature), tilted not even towards qualitative research but towards
> anecdotal evidence. That, I cannot accept. I think any field (even
> economics and healthcare) needs a healthy mixture of qualitative and
> quantitative research methods. When I teach research methods to graduate
> students, I do not divide the course into qual and quant. I teach
> basics, data collection, and data analysis.
>
> Back to the topic: I still stand behind what I said. We need more
> long-term, longitudinal studies that use quantitative methods in design
> education research.
>
> All the best,
>
> ali
>
>
> On 11 May 2018 at 09:14, [log in to unmask] <
> [log in to unmask]> wrote:
>
>> Hi Ali,
>>
>> Thanks for your comment. I’ve been doing little studies of the type you
>> describe for a long time.
>>
>> They don’t have to be statistical to have validity. That is only
>> necessary if you want your findings to be representative of an entire
>> population and the differences you are looking for are quantifiable by
>> nature. Often we are interested in a qualitative difference between
>> using one approach rather than another, or between one type of outcome
>> and another.
>>
>> Unfortunately, prevailing research teaching in universities, in areas
>> like education, social science, psychology, and marketing, treats
>> statistical methods as a rite of passage. You have to use them to prove
>> you are a researcher. Silly stuff.
>>
>> One of the most important early papers I published about new methods in
>> design and design education had no statistics:
>> Sless, David. “Image Design and Modification: An Experimental Project in
>> Transforming.” Information Design Journal 1, no. 2 (1979): 74–80.
>>
>> It’s now become part of our standard practice in discovering faults in
>> designs. In that context we do count the number of faults at each
>> iteration and we look to see a reduction over the repeated testing and
>> refinement. But we don’t have to do a statistical analysis to know
>> whether or not we have eliminated faults. As I’ve said before on this
>> list, it’s a bit like clinical practice in medicine where you look for
>> symptoms of pathology and then apply a treatment. You then look to see
>> if the symptoms disappear.
>>
>> BTW, I was told anecdotally by a colleague, Clive Richards, that he used
>> to use the method in his own teaching. Might be worth a try.
>>
>> David
--
Ricardo Martins
Design Consultant
----
Curitiba - PR - Brasil
+55 41 8855 8007
-----------------------------------------------------------------
PhD-Design mailing list <[log in to unmask]>
Discussion of PhD studies and related research in Design
Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------