Dear All,
An article in the New York Times titled “Why Do So Many Studies Fail to Replicate?” discusses the issue of replication. Some of the issues in this article are potentially significant for design research. While I have often heard it said that there is no point in attempting to replicate design experiments, there are many cases in which this seems wrong. If some kinds of design research claim to have wide significance for the field, then some form of replication should be possible.
Over the past few years, there has been an important project in the field of psychology titled the Reproducibility Project. This article in the New York Times by Dr. Jay Van Bavel of New York University discusses the issues and challenges of the project — and the challenges of replication in any field that works with human beings rather than natural or physical phenomena.
Here are the first three paragraphs:
—snip—
Last year, a colleague asked me if I would send her the materials needed to try to replicate one of my published papers — that is, to rerun the study to see if its findings held up. “I’m not trying to attack you or anything,” she added apologetically.
I laughed. To a scientist, replication is like breathing. Successful replications strengthen findings. Failed replications root out false claims and help refine imprecise ones. Testing and retesting make science what it is.
But I understood why my colleague was being delicate. Around that time, the largest replication project in the history of psychology was underway. This initiative, called the Reproducibility Project, reran 100 studies published in prominent psychology journals.
—snip—
You will find the complete article at:
http://www.nytimes.com/2016/05/29/opinion/sunday/why-do-so-many-studies-fail-to-replicate.html
This op-ed piece is based on an article by Van Bavel, Peter Mende-Siedlecki, William J. Brady, and Diego A. Reinero in the Proceedings of the National Academy of Sciences titled "Contextual sensitivity in scientific reproducibility."
Here is the abstract:
—snip—
In recent years, scientists have paid increasing attention to reproducibility. For example, the Reproducibility Project, a large-scale replication attempt of 100 studies published in top psychology journals, found that only 39% could be unambiguously reproduced. There is a growing consensus among scientists that the lack of reproducibility in psychology and other fields stems from various methodological factors, including low statistical power, researcher degrees of freedom, and an emphasis on publishing surprising positive results. However, there is a contentious debate about the extent to which failures to reproduce certain results might also reflect contextual differences (often termed “hidden moderators”) between the original research and the replication attempt. Although psychologists have found extensive evidence that contextual factors alter behavior, some have argued that context is unlikely to influence the results of direct replications precisely because these studies use the same methods as those used in the original research. To help resolve this debate, we recoded the 100 original studies from the Reproducibility Project on the extent to which the research topic of each study was contextually sensitive. Results suggested that the contextual sensitivity of the research topic was associated with replication success, even after statistically adjusting for several methodological characteristics (e.g., statistical power, effect size). The association between contextual sensitivity and replication success did not differ across psychological subdisciplines. These results suggest that researchers, replicators, and consumers should be mindful of contextual factors that might influence a psychological process. We offer several guidelines for dealing with contextual sensitivity in reproducibility.
—snip—
It seems to me that contextual sensitivity would be an important factor in design research *if* we were to spend more time attempting to study and replicate published results. While this would not be significant for projects in design history or critical design, it would be important for interaction design, industrial design, and many other fields where what we study involves human beings — and how they perceive, use, understand, or interact with designed artifacts of many kinds.
You will find the original article in full at:
doi: 10.1073/pnas.1521897113
Best regards,
Ken
Ken Friedman, PhD, DSc (hc), FDRS | Editor-in-Chief | 设计 She Ji. The Journal of Design, Economics, and Innovation | Published by Tongji University in Cooperation with Elsevier | Launching in 2015
Chair Professor of Design Innovation Studies | College of Design and Innovation | Tongji University | Shanghai, China ||| University Distinguished Professor | Centre for Design Innovation | Swinburne University of Technology | Melbourne, Australia
--
-----------------------------------------------------------------
PhD-Design mailing list <[log in to unmask]>
Discussion of PhD studies and related research in Design
Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------