My own experience in dealing with health care researchers is that they are
open to other research methods, but will expect to see a controlled trial (or
a well-argued alternative) where possible when evaluating the effect of
an intervention.
We have, for instance, used user-testing methodology to incrementally improve
the design of information products we're producing for use by
health professionals in their work. We've performed these studies mainly to
aid our design process: to observe users' problems and give us ideas about
how to solve them through alterations in the design. In publishing we will
document data from these studies to show how we involved stakeholders and
users and to illustrate the issues they found critical, but we won't point
to these studies to document actual effect. For that, we need a more
convincing research design, such as a controlled trial, which we have also
carried out and will publish simultaneously.
Our design-related research has been carried out first and foremost to
support a design process; academic publication has been a secondary goal.
Following this order of priority can result in decisions made along the way
that reduce the quality of the data from a publication point of view. For one
thing, you might change the intervention between two rounds of testing, thus
making your data less rigorous (but resulting in a better design).
This kind of incremental process, using research methods to serve pragmatic
improvement goals, is not unique to the design field. It has a parallel in
the quality improvement area of health research, where the main goal is to
support a quality improvement process at a local level rather than to
publish. One example is the "Breakthrough Series" method. (See the Institute
for Healthcare Improvement, www.ihi.org, for a more in-depth description.)
Skeptics of this method still ask: OK, but have they documented the effect of
their work at the end of it all? Too many of these kinds of quality
improvement studies have not been organised in a way that provides reliable
documentation of the effect of the intervention over time. If you are going
to establish a cause-effect relation between an intervention and an
improvement, you need a study design that argues for this rigorously enough,
no matter what field you are in.
In many cases, such as when you are redesigning a hospital environment, a
controlled trial is certainly not feasible. But an interrupted time series
could be a method to explore effects: measuring the same outcomes several
times before implementing the change, then measuring several times after
implementation. If the intervention has had an effect, it should be visible
on the curve. This is a rigorous enough study design to satisfy managers who
are used to dealing with results from clinical trials, and one that is
gaining support in health research communities. I don't see why this method
should be controversial for design researchers, but it is definitely costly
and time-consuming.
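
For anyone who wants to see what that looks like in practice, here is a
minimal sketch of how such a before-and-after series is often analysed with
segmented regression, written in Python with statsmodels. The numbers are
invented purely for illustration and do not come from any of our studies.

import numpy as np
import statsmodels.api as sm

# Hypothetical data: 12 monthly measurements of an outcome before the change
# is introduced and 12 after (invented numbers, for illustration only).
outcome = np.array([52, 50, 53, 51, 49, 52, 50, 51, 53, 50, 52, 51,
                    58, 60, 59, 61, 63, 62, 64, 63, 65, 64, 66, 65],
                   dtype=float)

time = np.arange(len(outcome))          # overall time trend
after = (time >= 12).astype(float)      # 1.0 from the intervention point on
time_after = after * (time - 12)        # trend after the intervention

# Segmented regression: baseline trend, immediate level change, slope change.
X = sm.add_constant(np.column_stack([time, after, time_after]))
model = sm.OLS(outcome, X).fit()
print(model.summary())
# The coefficient on `after` estimates the jump in level at the intervention
# point; the coefficient on `time_after` estimates any change in the trend.

The same analysis could of course be done in R or even a spreadsheet; the
point is simply that a series with enough measurement points before and after
the change lets you argue for an effect more rigorously than a single
before/after comparison would.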
On the other hand, effect is not always the end question. We can, for
instance, test for the effect on knowledge through a controlled trial of an
information product, but we cannot see whether people actually use the
product in a real-life situation. For that we might need additional kinds of
data - observational studies or log data from websites. Then we're getting
into mixed methods, which Gavin Melles mentioned is gaining a lot of ground.
That makes sense to me.
I agree with Chris - it's important to tailor the research method to the
audience as well as to the research question. But health research hasn't been
standing still since clinical trials were introduced. More and more, health
institutions are speaking a "designer-like" language - with a focus on
end-user experience - and employing incremental quality improvement processes
that resemble design processes. There also seems to be a growing
understanding that health services exist within complex systems and can't
always be reliably studied exclusively through controlled trials. So there
is arguably room for many approaches to communicating with decision makers
in this field.