Email discussion lists for the UK Education and Research communities

PHD-DESIGN Archives

PHD-DESIGN@JISCMAIL.AC.UK

PHD-DESIGN November 2011

Subject:

Re: texts

From:

Ken Friedman <[log in to unmask]>

Reply-To:

PhD-Design - This list is for discussion of PhD studies and related research in Design <[log in to unmask]>

Date:

Thu, 3 Nov 2011 19:58:11 +1100

Content-Type:

text/plain

Parts/Attachments:

text/plain (207 lines)

Hi, Peter,

Thanks for your reply. Been thinking about it. Once again, I’m
reproducing your full note because you raise so many valuable issues.

By and large I agree, and I’d say these issues deserve thought. I’m
going to concur on one specific point with a note that I’ve said much
the same, and I’m going to disagree on one point.

The journal article format known as the critical literature review is
not about individual learning, nor even about the role of
contextualization that a literature review serves in the PhD thesis. The
critical literature review and the parallel format of the bibliographic
essay that appears in book form involve concept mapping to advance the
knowledge of the field. I don’t think anyone has suggested that a
critical literature review is just about individual learning, nor that
it is about individual learning at all, except incidentally, as an
author learns a great deal in writing one. The critical literature review adds
to the body of knowledge of a field – that is why journals publish
them.

With Zotero, I’m going to disagree. Zotero does not do 80% of the
work. Zotero doesn’t do 10% -- not even the 10% that a serious
thematic bibliography does. Zotero compiles resource lists. They are
poorly formatted in every Zotero list I’ve seen, and there is no
apparent rationale for selection other than the enthusiasm of the
author. In that sense, Zotero is a bit like “My Favorite Things” in
the Sound of Music. (Remember Julie Andrews singing the film version of
the Rodgers and Hammerstein musical? “Raindrops on roses and whiskers
on kittens, bright copper kettles and warm woolen mittens, brown paper
packages tied up with strings, these are a few of my favorite
things.”) That is not even a bibliography, much less a concept map
with a description of the value these have for the design field.

Thanks for your proposal. I can see the value of an interpretive
collaborative review. But this is quite different to a wiki, or any of
the other collaborative tools floating about in conversations here.

If I read this correctly, the tool is an expert-level tool where those
who participate must demonstrate skill, knowledge, and expertise to join
in. While this does not entirely solve the free-rider problem, it does
solve the competency problem.

Just as I disagree with you on Zotero, I disagree with Victor on the
idea that we’ll get good concept maps out of a wiki. The problem with
the repeated calls for doing this work on a wiki is that folks want the
wiki, but they don't want the work. They imagine that somehow a wiki or
Zotero or any of these other tools will magically yield something even
though no one actually does the work of writing skilled, competent
entries. The paragraphs, random notes, and odd thoughts that accumulate
in a wiki won't congeal into a concept map without rigor and
intelligence. This takes work that will not likely be forthcoming in any
project where those who lack skills wait for others to flesh out their
ideas with real thinking and writing. No serious researcher is likely to
take part in an open environment like a wiki or Zotero, not when the
participants are people they would not want to work with in seminars or
direct research collaborations.

Time is the most valuable resource I have. If I wouldn’t “spend”
time in seminars and research collaborations with someone, I won’t
spend time collaborating with them on a wiki. Wikipedia rises to a
reasonable level of mediocrity without taking the next step for
precisely this reason. Experts won’t waste time on a reference tool
where unskilled amateurs can revise away hours or days of careful
writing. The reason for the success of an open-access online reference
such as the Stanford Encyclopedia of Philosophy is that experts compile
it, edit it, review it, and work together carefully to ensure
continuing, updated improvement through expert-level participation.

That seems to me to be the kind of thing you are aiming at with your
interpretive collaborative review. The medium seems a bit more
collaborative than the single-author articles in the Stanford
Encyclopedia of Philosophy, and the principle of expert-level
participation makes the collaborative investment worthwhile.

Best regards,

Ken

Professor Ken Friedman, PhD, DSc (hc), FDRS | University Distinguished
Professor | Dean, Faculty of Design | Swinburne University of Technology
| Melbourne, Australia | [log in to unmask] | Ph: +61
39214 6078 | Faculty


On Tue, 1 Nov 2011 12:15:40 -0400, Peter Jones | Redesign
<[log in to unmask]> wrote:

Ken - I appreciate the distinctions you make in your critique. I agree
that we have several different purposes for critical, bibliographic, and
narrative reviews of sources. Because the methods for producing these
formats and outputs are quite similar (bibliographies, annotated, with
summary, narrative, or multiple attributes), people often produce an
adequate artifact yet confound the purposes. I would say that if we
don’t teach good practice at the MDes level, those who pursue a PhD
will find this an especially difficult undertaking. We may teach
critiquing, but critical review writing and literature reviews are
pitiful in much of the design literature.

And I agree there’s a real need for disciplinary development and
conceptual mapping of literature and concepts to theoretical and
historical development. Developmental concept mapping through the
literature is a PhD level task. But the outcome of this work should not
be “just” individual learning. As I noted with respect to graduate
medicine, review articles are not only a primary means of practitioner
and advanced resident study; they are also a significant output of
fellows and faculty (and MD/PhD’s) who have requirements for
publishing, and are advancing their disciplines. I think we have some
parallels to medical education, but at the PhD level design is being
treated more like a social sciences PhD. I’m not convinced this is
the only or best model myself.

Medical professionals move into fellowships or PhD programs to pursue
advanced study or pure research. At that stage, though not as much in
residency, they produce review articles. Residents in their research
rotation often work on ongoing research projects, but as PGY3 residents
they do not initiate research, and they often join projects that are
mid-stream and have their literature base well established. Therefore,
they may have the opportunity to write review articles or produce
critical literature reviews, but in my observations of US programs it
is not that common.

So if our purpose is to strengthen the research base of our field, the
tools you’ve indicated are, of course, ways to promote those purposes.
I think there is room for different types of commitments in
developing the concepts from literature. One of them is a
research-based approach I’ve been developing with a Pharmacy professor
in U Toronto’s Knowledge Media Design Institute. The Interpretive
Collaborative Review is a process and a system (prototype) in search of
funding. I can appreciate why something simple like Zotero (which is
nicely articulated as a Web 2.0 design in many ways) achieves adoption.
Zotero meets 80% of the need while leaving the advanced features to
academics. The ICR is described as:

Collaborative Discovery of Information Significance: A Framework for
Making Sense of Healthcare Research

Peter Pennefather and Peter H. Jones. Laboratory for Collaborative
Diagnostics, Leslie Dan Faculty of Pharmacy, University of Toronto

We present a framework for collaborative sensemaking by a
problem-focused community using electronically accessible scientific
journal articles and other digital information artifacts. The framework
guides collective structured evaluations of the significance of
information sources associated with a given problem. The Interpretive
Collaborative Review (ICR) framework is designed as a social informatics
process. It is motivated by a need for researchers and practitioners to
ascertain a current, collective interpretation of electronically
accessible information and collectively generated propositions for
problem understanding in complex and rapidly developing domains.
Healthcare related information domains are used as an example where
there is a need to integrate information derived from biomedical
sciences, evidence-based measures of clinical outcomes, and health
systems socio-economic analysis.

The ICR framework establishes a conceptual model and a process for
explicit human assignment of reviews and scores to information sources
within an online dialogical environment, enabling collaborative
evaluation, discussion, and recording of significance relationships. At
least three necessary dimensions of significance relationships are
recognized and evaluated with respect to each source considered: 1)
match, 2) standing, and 3) authority. Match = Claims in the source
(meaning), Standing = Warranted linking of claim to evidence (agency),
and Authority = Evidence in source (power). These referents have both
objective data (associated with a publication) and subjective
interpretations.

Each dimension is further characterized by collective scoring for three
qualities of value in the source: 1) knowledge validity, 2) precedence,
and 3) maturity. The resulting matrix of scores, specific comments,
group editorial commentaries, and references are all woven into an
electronic sensemaking narrative publication designed to be indexed,
retrieved, and reviewed along with the associated corpus of prioritized
sources.
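[Editorial aside: the scoring model described above — three significance dimensions, each scored on three value qualities, aggregated across reviewers into a matrix — can be sketched as a small data structure. This is a hypothetical illustration, not the actual ICR prototype; the 1–5 score range, the averaging rule, and all names (`SourceReview`, `collective_matrix`) are assumptions for the sketch.]

```python
from dataclasses import dataclass, field
from statistics import mean

# The three significance dimensions and three value qualities named in
# the ICR description. Score range and aggregation rule are assumptions.
DIMENSIONS = ("match", "standing", "authority")
QUALITIES = ("knowledge_validity", "precedence", "maturity")


@dataclass
class SourceReview:
    """One reviewer's scores for one information source (hypothetical shape)."""
    source_id: str
    reviewer: str
    # scores[dimension][quality] -> integer score, e.g. 1 (low) to 5 (high)
    scores: dict = field(default_factory=dict)

    def score(self, dimension: str, quality: str, value: int) -> None:
        """Record one cell of the reviewer's 3x3 significance matrix."""
        if dimension not in DIMENSIONS or quality not in QUALITIES:
            raise ValueError(f"unknown dimension/quality: {dimension}/{quality}")
        self.scores.setdefault(dimension, {})[quality] = value


def collective_matrix(reviews):
    """Average each (dimension, quality) cell across all reviewers of a source,
    yielding the collective scoring matrix; cells nobody scored are omitted."""
    matrix = {}
    for d in DIMENSIONS:
        for q in QUALITIES:
            cell = [r.scores[d][q] for r in reviews if q in r.scores.get(d, {})]
            if cell:
                matrix[(d, q)] = mean(cell)
    return matrix
```

Under these assumptions, the "electronic sensemaking narrative" would then weave the resulting matrix together with reviewers' comments and the prioritized source list.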

ICR makes a strong appeal for the dialogic construction of knowledge
about collective problems using intentional human assignment of scores
and reviews. We find that algorithmic relevancy scores are insufficient
when considering the significance of materials in the context of
collective problem solving. Human interpretation is needed to determine
the relevance of a given information source to a problem context and to
understand the range of equally valid perspectives in the recognition of
that relevance. The authenticity of a source’s authorship can only be
determined by another human being with contextual knowledge of the
problem domain and of human motivations and ethical sensibilities. The
credibility of a source in a problem situation represents another
interpretive context, as the perceived credibility of the source is a
complex function of trust, expertise, and quality.

This is the ICR in summary, which serves some of the purposes we are
discussing. It will publish review results electronically, yet it is
also compatible with peer review and with new forms of editorial review.

I am quite in agreement with your purpose to address the gaps in our
literatures and “to do the hard yards and actually write and develop
some of these tools.” I will just note that there’s a lot more
funding available to do this in medicine than in design!

Best, Peter

Peter Jones, Ph.D.
Associate Professor, Faculty of Design
Strategic Foresight and Innovation

OCAD University
http://DesignDialogues.com
