Ken - Thanks, let me concisely refine these points and see if we have common ground. Email does not enable dialogue, and even when we try, the medium conforms us to a textual mode of communication. (And wikis are not much better: they are textual as well, and in my decade or so of experience with them, they are worse than email for deep exposition.)

We are covering several complex themes within a single email discussion - it may require iteration to clarify meaning and to respond to gaps in sensemaking. 

I completely support your objectives for the critical review and conceptual mapping in a domain. I agree the purpose is primarily for communicating with scholars about the development of research and understanding in a field. Perhaps I wasn't clear enough about "disciplinary development," but that's my point: scholars at the PhD level and faculty should be engaged in mapping their understanding of the significance of concepts and research findings. This advances or develops the discipline, and it is a practice trained at the PhD level and in dissertation work.

This practice should be started at the MDes/MA level in the form of annotated bibliographies, which could encourage a type of canonical structure such as you indicated (two emails back? ;) If we don't train for quality writing and critical reviewing at the Master's level, we are leaving the advanced practitioner with significant skill gaps.

My comments about Zotero are related to my own developmental challenges with the ICR (interpretive collaborative review). The most challenging problem in scholarly innovation is establishing a shared and agreeable infrastructure. We have to look at the entire Zotero product - not just the community website that Rosan pointed us to. Zotero is a full-featured reference manager available for free as a Firefox plugin. It is a classic disruptive innovation - I use the Zotero app now instead of EndNote or RefWorks. With the addition of the community site, the infrastructure is extended to enable the kinds of cooperative thinking tools that I want to build, such as the ICR.

We have designed the interpretive collaborative review (ICR) process as a theoretically sound way to cooperatively formulate critical reviews of information sources through a loosely-coupled group editorial process. The interpretive review is based on the consideration that different stakeholders in a shared concern will have unique perspectives and understandings about the significance of a source (a journal article or even a blog post). Based on Ross Ashby's principle of requisite variety, the idea is to create a "dialogic surround" that encompasses the problem area through the different perspectives associated with the type of review output. Not all the reviewers are experts, and their fields of knowledge and experience may differ widely. If you want a grounded clinical assessment of a new strain of malaria, you want not just infectious disease researchers but observers on the ground to participate. If you want to review the development of the multiple tracks of thinking in service design, you (as an editor) would recruit the perspectives you believe will enrich understanding.

The theoretical basis for this model draws on information science, social systems theory, and Dervin's sense-making methodology. As a dialogic design process it relies on requisite variety and requisite saliency to draw out information significance. That significance is articulated as capsule reviews across a fairly large corpus of articles; the corpus is distributed across a team of reviewers, who agree to select and critique a subset and then review and critique each other's reviews, almost like you and I are doing here in email.
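
To make the allocation and cross-review step concrete, here is a rough sketch in Python - purely illustrative; the function names and the round-robin assignment are my own shorthand for the idea, not how our toolset actually works (in practice reviewers self-select their subset):

    from itertools import cycle

    def allocate(corpus, reviewers, reviews_per_article=2):
        # Round-robin stand-in for distributing a corpus so that each
        # article receives capsule reviews from more than one perspective.
        pool = cycle(reviewers)
        return {article: [next(pool) for _ in range(reviews_per_article)]
                for article in corpus}

    def cross_review_pairs(assignments):
        # For each article, every assigned reviewer critiques the other
        # reviewers' capsule reviews of that same article.
        pairs = []
        for article, assigned in assignments.items():
            for critic in assigned:
                for author in assigned:
                    if critic != author:
                        pairs.append((article, critic, author))
        return pairs

    corpus = ["article-%d" % i for i in range(1, 6)]
    assignments = allocate(corpus, ["Ana", "Ben", "Chidi"])
    print(cross_review_pairs(assignments)[:4])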

We propose that information significance is a type of relevance (which we call pertinence) recognized by human perceivers, as opposed to keyword relevance based on statistical term matching in a corpus. I consider this a collective sensemaking process.

It is an interpretive review, not a purely "objective" approach. We ask reviewers to identify the warrants in the reviewed articles that support the claims the reviewer may be promoting. We want to make reviewer perspectives as explicit as possible, since in much of science and medicine these values and rationale are never otherwise expressed. 

The implementation of this idea is where infrastructure comes in. We have tried several times to build an effective "super-wiki" sufficient to produce a publishable ICR. The idea is that an editor interested in producing an ICR would use our online toolset to convene a shared review committee, collect a corpus of sources, then select perhaps 100 articles to allocate for review among the team. The toolset enables the review, making a transparent record of commentary and scores associated with measures of Match (pertinence), Authority (trust), and Standing (authenticity). Now you have a qualitative and quantitative record of the review that can be published online and searched with very rich metadata associated with the problem domain.
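
If it helps to picture the underlying record, here is a minimal sketch - the field names and 1-5 scales are my own assumptions for illustration, not a specification of the toolset:

    from dataclasses import dataclass, field

    @dataclass
    class Scores:
        # Hypothetical 1-5 scales; the actual measures would be set
        # by the editor convening the review.
        match: int      # pertinence to the shared concern
        authority: int  # trust in the source
        standing: int   # authenticity of the source

    @dataclass
    class CapsuleReview:
        source_id: str    # e.g. a DOI or URL
        reviewer: str
        perspective: str  # the reviewer's declared standpoint
        commentary: str   # the qualitative capsule review
        scores: Scores    # the quantitative record
        tags: dict = field(default_factory=dict)  # problem-domain metadata

    review = CapsuleReview(
        source_id="doi:10.0000/example",
        reviewer="reviewer-01",
        perspective="field observer",
        commentary="Grounds the claim in two sites; sampling is thin.",
        scores=Scores(match=4, authority=3, standing=4),
        tags={"domain": "service design", "method": "case study"},
    )
    print(review.scores.match, review.tags["domain"])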

So yes, I am interested in people willing to join us in making these reviews possible for a well-defined and motivating problem area. And more so, I continue to look for funding to build a review toolset that makes this easy for first-time users and participants.

Thanks, Peter

-----Original Message-----
From: Ken Friedman [mailto:[log in to unmask]] 

Hi, Peter,

Thanks for your reply. Been thinking about it. Once again, I’m reproducing your full note because you raise so many valuable issues.

By and large I agree, and I’d say these issues deserve thought. I’m going to concur on one specific point with a note that I’ve said much the same, and I’m going to disagree on one point.

The journal article format known as the critical literature review is not about individual learning, nor even about the role of contextualization that a literature review serves in the PhD thesis. The critical literature review and the parallel format of the bibliographic essay that appears in book form involve concept mapping to advance the knowledge of the field. I don't think anyone has suggested that a critical literature review is about individual learning at all, except incidentally, as an author learns a great deal in writing one. The critical literature review adds to the body of knowledge of a field – that is why journals publish them.

With Zotero, I’m going to disagree. Zotero does not do 80% of the work. Zotero doesn’t do 10% -- not even the 10% that a serious thematic bibliography does.  ... snipped>

Thanks for your proposal. I can see the value of an interpretive collaborative review. But this is quite different to a wiki, or any of the other collaborative tools floating about in conversations here.

If I read this correctly, the tool is an expert-level tool where those who participate must demonstrate skill, knowledge, and expertise to join in. While this does not entirely solve the free-rider problem, it does solve the competency problem.

Just as I disagree with you on Zotero, I disagree with Victor on the idea that we'll get good concept maps out of a wiki. The problem with the repeated calls for doing this work on a wiki is that folks want the wiki, but they don't want the work. They imagine that somehow a wiki or Zotero or any of these other tools will magically yield something even though no one actually does the work of writing skilled, competent entries. The paragraphs, random notes, and odd thoughts that accumulate in a wiki won't congeal into a concept map without rigor and intelligence. This takes work that will not likely be forthcoming in any project where those who lack skills wait for others to flesh out their ideas with real thinking and writing. No serious researcher is likely to take part in an open environment like a wiki or Zotero, not when the participants are people they would not want to work with in seminars or direct research collaborations.

Time is the most valuable resource I have. If I wouldn't "spend" time in seminars and research collaborations with someone, I won't spend time collaborating with them on a wiki. Wikipedia rises to a reasonable level of mediocrity without taking the next step for precisely this reason. Experts won't spend time, or waste it, on a reference tool where unskilled amateurs can revise away hours or days of careful writing. The reason for the success of such open-access, online references as the Stanford Encyclopedia of Philosophy is that experts compile and edit it, review it, and work together carefully to ensure continuing, updated improvements through expert-level participation.

That seems to me to be the kind of thing you are aiming at with your interpretive collaborative review. The medium seems a bit more collaborative than the single-author articles in the Stanford Encyclopedia of Philosophy, and the principle of expert-level participation makes the collaborative investment worthwhile.

Best regards,

Ken

Professor Ken Friedman, PhD, DSc (hc), FDRS | University Distinguished Professor | Dean, Faculty of Design | Swinburne University of Technology | Melbourne, Australia | [log in to unmask] | Ph: +61 3 9214 6078 | Faculty


On Tue, 1 Nov 2011 12:15:40 -0400, Peter Jones | Redesign <[log in to unmask]> wrote:

Best, Peter

Peter Jones, Ph.D.
Associate Professor, Faculty of Design
Strategic Foresight and Innovation

OCAD University
http://DesignDialogues.com