Friends,

Ranulph, Chris, and Catherine each make important points. I agree
with them -- conferences serve more purposes than simply publishing
or presenting one's own research. For my school, presenting at a
conference is an effective mechanism for determining who cares enough
about attending to do the work of submitting a presentation. It
enables us to fly the flag, and it therefore justifies the cost. It
is part of the publishing mechanism, but not a central part.
Nevertheless, conferences are vital for other reasons.

Cutting conference funding and curtailing conference attendance
would diminish the development of research networks and set back the
entire research agenda.

That said, I have largely shifted my attention from large
conferences to small, focused events where it is possible to "confer"
in dialogue with other participants rather than rush from room to
room finding out what the program contains. This past year I
co-chaired Wonderground, but I did not do it because I like big
conferences. I did it because I like working with Eduardo, Martim,
and Terry. My preference is for small, focused conferences, such as
the forthcoming conference on events and event structures at
Denmark's Design School.

Conferences -- even large ones -- nevertheless serve many purposes.
They give people a chance to meet and to survey the field. They
function as a kind of ecological wetland between research at home
and formal publication, and they give people a chance to gather
responses and gain insight into their own work and into the field.
In some places, conferences also serve as a training ground where
doctoral candidates and inexperienced or younger researchers begin
presenting their work. These purposes deserve discussion. Eliminating
conference support is yet another unfortunate outcome of the
audit-driven mentality that finds the reasons and purposes of
conferences too diffuse to measure in quantitative terms.

Can conferences be made more fruitful, so that they serve the field
better -- our field, any field? Yes. That is a thread in its own
right. Despite the problems that pop up in specific conferences, or
the uses to which some put them, the medium of the conference remains
as relevant to design research as it is to most fields.

Terry's note on the use of specific journals as a metric deserves
further comment. This kind of policy will prove to be a real problem
at the national level. In some fields, some schools establish lists
of target journals for tenure and promotion. Only these journals
count. This is especially the case in some North American
universities. Publishing in these journals does guarantee visibility,
but citation rankings track citations to and from journals in only
one database -- without including citations to or from books or
monographs -- so impact ratings do not measure true impact on a
field, only impact between and among journals in the database.

This gets sillier still when one realizes that journals have a
limited number of pages per year. This places a hard limit on the
number of articles that any journal can publish. In a field of
10,000 scholars served by a single journal that publishes, say, two
dozen articles a year, only a couple dozen scholars -- and only a
couple dozen schools -- will have any "impact" in a given year. A
single university can afford -- albeit mistakenly -- to pursue the
"top 5" journals strategy. An entire nation cannot, at least not if
it wishes to bring all its universities and research centers up,
rather than to use a proxy measure that will inevitably filter out
all but a handful of research universities, relegating the rest to
second-class or third-class ranking. It seems to me that Australia's
policy will have the effect of stratifying Australian universities,
reinforcing the dominant position of the "big eight" while reducing
the standing of the other 30 or so Australian universities. In
international terms, this will be a disaster, especially since
education is one of Australia's most profitable international
industries.

One can use journals in a metric system, but the metric must be
reasonable and appropriately broad to encourage research and to raise
the research capacity of the nation as a whole. This means using a
metric that genuinely covers the fields it measures and that allows
opportunity to all universities rather than to an elite few. The
elite will remain elite. The others deserve a chance to improve.

In Norway, the government uses a publishing metric to measure
research productivity. The government maintains a database of
research publishing channels. The database includes journals and
academic or scientific publishers, with all approved channels on
level 1 and a select few on level 2. A journal article is worth 1
point or 3 points respectively, a monograph 5 points or 8 points,
and a book chapter 0.7 points or 1 point. Each publication by a
faculty member earns a number of points, and each school tallies and
reports all its publishing points for each publishing year. The
ministry allocates basic research funds to research-based
universities, university colleges, and professional schools based on
the total number of points.
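
To make the arithmetic concrete, here is a minimal sketch in Python
of how such a tally works. The point values come from the
description above; the publication counts in the example are
invented for illustration, not real figures from any school.

    # Norwegian publication indicator, as described above.
    # Point values per publication kind: (level 1, level 2).
    POINTS = {
        "article": (1.0, 3.0),    # journal article
        "monograph": (5.0, 8.0),  # scholarly monograph
        "chapter": (0.7, 1.0),    # book chapter
    }

    def tally(publications):
        """Sum the points for a list of (kind, level) pairs."""
        return sum(POINTS[kind][level - 1]
                   for kind, level in publications)

    # Hypothetical one-year output for a small school: two level-1
    # articles, one level-2 article, one level-1 monograph, and
    # three level-1 book chapters.
    example = [("article", 1), ("article", 1), ("article", 2),
               ("monograph", 1),
               ("chapter", 1), ("chapter", 1), ("chapter", 1)]

    print(round(tally(example), 1))  # 2.0 + 3.0 + 5.0 + 2.1 = 12.1

Each school reports a total of this kind, and the ministry's
allocation follows from the totals.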

Our database shows 85 journals with the word "design" in the title.
We admittedly miss some design and design research journals that do
not use the word -- the new journal Artifact is an example. (Artifact
is registered, but it does not show up when searching for "design" as
a title word.) Nevertheless, the broad spectrum covers much of the
field. If you would like to see for yourself, go to this URL:

http://dbh.nsd.uib.no/kanaler/

No system is perfect, but a broad system works well enough. It
counts respectable journals while placing greater emphasis on a 
smaller selection of elite journals.

For those who wish to look further into citations and impact, I would 
suggest visiting the section of William Starbuck's web site titled 
"Citations of Journals Related to Business." Go to URL:

http://pages.stern.nyu.edu/~wstarbuc/

The focus is business and management journals, but the discussion is 
a lucid and useful analysis of issues and challenges that affect all 
fields, especially if the Howard regime controls your life and future 
as a scholar.

Andrew Van de Ven addresses many of these issues in an important new
book forthcoming from Oxford University Press, Engaged Scholarship:
A Guide for Organizational and Social Research.

Visit his personal web site to read chapter 1 and chapter 9 as PDF
downloads:

http://webpages.csom.umn.edu/smo/avandeven/AHVHOME.htm

Van de Ven discusses the problem of why so much research is not used, 
and how we can do better in fields of professional practice.

Gads. I seem to have too much to say these days, but Terry's question 
and the further contributions got me rolling.

If the issues of impact and fruitful use interest you, do, please, 
have a look at Starbuck and Van de Ven.

Yours,

Ken

-- 

Prof. Ken Friedman
Institute for Communication, Culture, and Language
Norwegian School of Management
Oslo

Center for Design Research
Denmark's Design School
Copenhagen

+47 46.41.06.76    Tel. NSM
+47 33.40.10.95    Tel. private

email: [log in to unmask]