Hi, Peter,
Thanks for your reply. Been thinking about it. Once again, I’m
reproducing your full note because you raise so many valuable issues.
By and large I agree, and I’d say these issues deserve thought. I’m
going to concur on one specific point with a note that I’ve said much
the same, and I’m going to disagree on one point.
The journal article format known as the critical literature review is
not about individual learning, nor even about the role of
contextualization that a literature review serves in the PhD thesis. The
critical literature review and the parallel format of the bibliographic
essay that appears in book form involve concept mapping to advance the
knowledge of the field. I don’t think anyone has suggested that a
critical literature review is primarily about individual learning,
except incidentally, in that an author learns a great deal in writing
one. The critical literature review adds
to the body of knowledge of a field – that is why journals publish
them.
With Zotero, I’m going to disagree. Zotero does not do 80% of the
work. Zotero doesn’t do 10% -- not even the 10% that a serious
thematic bibliography does. Zotero compiles resource lists. They are
poorly formatted in every Zotero list I’ve seen, and there is no
apparent rationale for selection other than the enthusiasm of the
author. In that sense, Zotero is a bit like “My Favorite Things” from
The Sound of Music. (Remember Julie Andrews singing the film version of
the Rodgers and Hammerstein musical? “Raindrops on roses and whiskers
on kittens, bright copper kettles and warm woolen mittens, brown paper
packages tied up with strings, these are a few of my favorite
things.”) That is not even a bibliography, much less a concept map
with a description of the value these have for the design field.
Thanks for your proposal. I can see the value of an interpretive
collaborative review. But this is quite different to a wiki, or any of
the other collaborative tools floating about in conversations here.
If I read this correctly, the tool is an expert-level one where those
who participate must demonstrate skill, knowledge, and expertise to join
in. While this does not entirely solve the free-rider problem, it does
solve the competency problem.
Just as I disagree with you on Zotero, I disagree with Victor on the
idea that we’ll get good concept maps out of a wiki. The problem with
the repeated calls for doing this work on a wiki is that folks want the
wiki, but they don't want the work. They imagine that somehow a wiki or
Zotero or any of these other tools will magically yield something even
though no one actually does the work of writing skilled, competent
entries. The paragraphs, random notes, and odd thoughts that accumulate
in a wiki won't congeal into a concept map without rigor and
intelligence. This takes work that will not likely be forthcoming in any
project where those who lack skills wait for others to flesh out their
ideas with real thinking and writing. No serious researcher is likely to
take part in an open environment like a wiki or Zotero, not when the
participants are people they would not want to work with in seminars or
direct research collaborations.
Time is the most valuable resource I have. If I wouldn’t “spend”
time in seminars and research collaborations with someone, I won’t
spend time collaborating with them on a wiki. Wikipedia rises to a
reasonable level of mediocrity without taking the next step for
precisely this reason. Experts won’t invest time in a reference tool
where unskilled amateurs can revise away hours or days of careful
writing. The reason for the success of such open-access,
online references as the Stanford Encyclopedia of Philosophy is that
experts compile and edit it, review it, and work together carefully to
ensure continuing, updated improvements through expert-level
participation.
That seems to me to be the kind of thing you are aiming at with your
interpretive collaborative review. The medium seems a bit more
collaborative than the single-author articles in the Stanford
Encyclopedia of Philosophy, and the principle of expert-level
participation makes the collaborative investment worthwhile.
Best regards,
Ken
Professor Ken Friedman, PhD, DSc (hc), FDRS | University Distinguished
Professor | Dean, Faculty of Design | Swinburne University of Technology
| Melbourne, Australia | [log in to unmask] | Ph: +61
39214 6078 | Faculty
On Tue, 1 Nov 2011 12:15:40 -0400, Peter Jones | Redesign
<[log in to unmask]> wrote:
Ken - I appreciate the distinctions you make in your critique. I agree
that we have several different purposes for critical, bibliographic, and
narrative review of sources. Because the methods for producing these
formats and outputs are quite similar (annotated bibliographies with
summary, narrative, or multiple attributes), people often produce an
adequate artifact yet confound the purposes. I would say that if we
don’t teach good practice at the MDes level, those who pursue a PhD
will find this an especially difficult undertaking. We may teach
critiquing, but critical review writing and literature reviews are
pitiful in much of the design literature.
And I agree there’s a real need for disciplinary development and
conceptual mapping of literature and concepts to theoretical and
historical development. Developmental concept mapping through the
literature is a PhD level task. But the outcome of this work should not
be “just” individual learning. As I noted with respect to graduate
medicine, review articles are not only a primary means of practitioner
and advanced resident study, they are also a significant output of
fellows and faculty (and MD/PhD’s) who have requirements for
publishing, and are advancing their disciplines. I think we have some
parallels to medical education, but at the PhD level design is being
treated more like a social sciences PhD. I’m not convinced this is
the only or best model myself.
Medical professionals move into fellowships or PhD programs to pursue
advanced study or pure research. At that stage, but not in residency as
much, they are producing review articles. Residents in their research
rotation often work on ongoing research projects, but as PGY3 residents
they do not initiate research; they often join projects that are
mid-stream and have their literature base well established. Therefore,
they may have the opportunity to write review articles or produce
critical literature reviews, but it’s not that common in my
observations of US programs.
So if our purpose is to strengthen the research base of our field, the
tools you’ve indicated are ways to promote those purposes, of
course. I think there is room for different types of commitments in
developing the concepts from literature. One of them is a
research-based approach I’ve been developing with a Pharmacy professor
in U Toronto’s Knowledge Media Design Institute. The Interpretive
Collaborative Review is a process and a system (prototype) in search of
funding. I can appreciate why something simple like Zotero, which is
nicely articulated as a Web 2.0 design in many ways, achieves adoption.
Zotero meets 80% of the need while leaving the advanced features to
academics. The ICR is described as:
Collaborative Discovery of Information Significance: A Framework for
Making Sense of Healthcare Research
Peter Pennefather and Peter H. Jones. Laboratory for Collaborative
Diagnostics, Leslie Dan Faculty of Pharmacy, University of Toronto
We present a framework for collaborative sensemaking by a
problem-focused community using electronically accessible scientific
journal articles and other digital information artifacts. The framework
guides collective structured evaluations of the significance of
information sources associated with a given problem. The Interpretive
Collaborative Review (ICR) framework is designed as a social informatics
process. It is motivated by a need for researchers and practitioners to
ascertain a current, collective interpretation of electronically
accessible information and collectively generated propositions for
problem understanding in complex and rapidly developing domains.
Healthcare related information domains are used as an example where
there is a need to integrate information derived from biomedical
sciences, evidence-based measures of clinical outcomes, and health
systems socio-economic analysis.
The ICR framework establishes a conceptual model and a process for
explicit human assignment of reviews and scores to information sources
within an online dialogical environment, enabling collaborative
evaluation, discussion, and recording of significance relationships. At
least three necessary dimensions of significance relationships are
recognized and evaluated with respect to each source considered: 1)
match, 2) standing, and 3) authority. Match = claims in the source
(meaning); Standing = warranted linking of claim to evidence (agency);
Authority = evidence in source (power). These referents have both
objective data (associated with a publication) and subjective
interpretations.
Each dimension is further characterized by collective scoring for three
qualities of value in the source: 1) knowledge validity, 2) precedence,
and 3) maturity. The resulting matrix of scores, specific comments,
group editorial commentaries, and references are all woven into an
electronic sensemaking narrative publication designed to be indexed,
retrieved, and reviewed along with the associated corpus of prioritized
sources.
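As an aside, the scoring model described above (three significance
dimensions, each rated on three qualities, aggregated across reviewers)
can be sketched as a small data structure. This is only an illustrative
sketch, not the ICR prototype itself: the class names, the numeric
scale, and the mean-based aggregation are all assumptions for the sake
of the example.

```python
from dataclasses import dataclass, field

# Dimensions and qualities as named in the ICR abstract.
DIMENSIONS = ("match", "standing", "authority")
QUALITIES = ("knowledge_validity", "precedence", "maturity")


@dataclass
class SourceReview:
    """One reviewer's structured evaluation of a single source."""
    source_id: str
    reviewer: str
    # scores[dimension][quality] -> numeric rating (scale is assumed)
    scores: dict = field(default_factory=dict)
    comments: list = field(default_factory=list)

    def score(self, dimension: str, quality: str, value: int) -> None:
        """Record one cell of the dimension-by-quality matrix."""
        if dimension not in DIMENSIONS or quality not in QUALITIES:
            raise ValueError("unknown dimension or quality")
        self.scores.setdefault(dimension, {})[quality] = value


def aggregate(reviews, dimension, quality):
    """Mean score across reviewers for one cell; None if no one scored it."""
    values = [r.scores[dimension][quality]
              for r in reviews
              if quality in r.scores.get(dimension, {})]
    return sum(values) / len(values) if values else None
```

The resulting per-source matrix of aggregated cells, together with the
collected comments, is the kind of record the abstract describes being
woven into the sensemaking narrative publication.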
ICR makes a strong appeal for the dialogic construction of knowledge
about collective problems using intentional human assignment of scores
and reviews. We find that algorithmic relevancy scores are insufficient
when considering the significance of materials in the context of
collective problem solving. Human interpretation is needed to determine
the relevance of a given information source to a problem context and to
understand the range of equally valid perspectives in the recognition of
that relevance. The authenticity of a source’s authorship can only be
determined by another human being with contextual knowledge of the
problem domain and of human motivations and ethical sensibilities. The
credibility of a source to a problem situation represents another
interpretive context, as the perception of the credibility of the source
is a complex function of trust, expertise and of quality.
This is the ICR in summary, which serves some of the purposes we are
discussing. It will publish review results electronically, yet is also
compatible with peer review and with new forms of editorial review.
I am quite in agreement with your purpose to address the gaps in our
literatures and “to do the hard yards and actually write and develop
some of these tools.” I will just note that there’s a lot more
funding available to do this in medicine than in design!
Best, Peter
Peter Jones, Ph.D.
Associate Professor, Faculty of Design
Strategic Foresight and Innovation
OCAD University
http://DesignDialogues.com