Dear all,
Interesting thread.
Here are some observations from my position as Editor-in-Chief of Research in Engineering Design for over a decade, and as someone who has followed the impact factor (IF) topic even longer.
IF can be manipulated in different ways. One "creative" way I have verified myself: two journals "controlled" by the same editor cite each other's papers, thereby avoiding the self-citation problem.
The IF of a journal is determined by a small fraction of its papers that receive very high citation numbers. Most papers, even in high-IF journals, go unnoticed or are seldom cited. Consequently, one cannot infer from the IF of a journal the quality of the papers published in it.
In traditional journals, there is only some relationship between the journal's IF and the quality of its review system. You have to assess a journal through your own judgment of its content, its review process, or your acquaintance with the people involved. This statement is supported by my experience publishing in over 40 different journals.
The reliance on IF and other quantitative measures of research quality may result from decision-makers not being familiar with the variety of journals that exist today, or not wanting to judge for themselves. Using a number that somebody else calculates is easier.
Different topics in design have different chances of being cited; for example, change management in design is likely to be cited much more than design philosophy. Hence, design journals cannot be judged by their IF alone; their focus must also be considered.
Some papers are cited not because they should be, but because somebody else cited them, and this propagates further. Other papers are cited not because they provide supporting evidence, but because they contain a statement, even an unsupported one, that somebody wishes to use. Such practices further distort the already fragile status of the IF.
Bottom line:
As authors producing a product - a journal paper - you have many stakeholders: your audience, your promotion committee, your funding agency. Until you manage to change their perspective, work to address their concerns.
Know well why you submit a paper to a particular journal (e.g., it is considered the best; the best papers on the subject of my paper appear there; it is the journal of the important society; ...).
Since it is always possible to find a journal that will publish your research, make sure you publish quality research. I have also heard firsthand of cases where papers published by a candidate prevented him from getting tenure because their quality was mediocre.
Cheers,
Yoram
Professor Yoram Reich, FDRS, HF INCOSE-IL
Chair in Engineering Design and Systems Engineering
Faculty of Engineering, School of Mechanical Engineering
Head, Systems Engineering Research Initiative http://engineering.tau.ac.il/seri
Head, MSc (with thesis) program in Systems Engineering https://engineering.tau.ac.il/Engineering-Faculty-Systems-Engineering-M.Sc
Academic Head, Final Project Course, Mechanical Engineering https://www.tauengprojects.com/
Tel Aviv University, Tel Aviv 69978, Israel
T: +972-3-6407385, F: +972-3-6407617
[log in to unmask], http://www.eng.tau.ac.il/~yoram
Editor-in-Chief
Research In Engineering Design
http://www.springer.com/163
Co-editor: Leshomra, Sustainable and Ecological Communities in Israel, Resling, Tel Aviv, 2023.
tinyurl.com/ycyk92vf
Co-author: We Are Not Users: Dialogues, Diversity, and Design, MIT Press, 2020
https://mitpress.mit.edu/books/we-are-not-users
https://www.amazon.com/dp/026204336X/
-----Original Message-----
From: PhD-Design <[log in to unmask]> On Behalf Of Ken Friedman
Sent: Thursday, 6 April 2023 10:19
To: [log in to unmask]
Subject: Re: Design research journal rankings
Dear Ali,
Thanks for these. Assessments and rankings are a plague and a bother. There are many good reasons to question them — and to question any specific ranking system. Any system you can use is prone to being gamed. When that happens, the assessment may only measure some factors — and a gamed system may not even measure what it purports to measure.
My colleagues and I did that first journal ranking article as a matter of self-preservation in a system where the journals of all other disciplines were ranked in ways that helped the relevant schools and faculties while our journals were invisible. The more detailed article published in Design Studies continued the process.
Now, as a journal editor, I worry about rankings for two reasons. First, many people are required by their universities or by their departments to consider rankings when they place articles with journals. Second, even though we are funded as a not-for-profit journal by a university and a university press, our university has a strategic plan for us that requires us to meet certain ranking goals. Elsevier serves as our publisher, but we are in reality a diamond open access journal because Tongji University pays all author fees as well as paying Elsevier for other services. The university also supports an editorial office through the D&I Publishing Platform, and it pays for extensive design services that enable us to produce a well-designed, attractive journal in full colour without any extra charge to authors for colour plates, while authors with colour material are free to use as many illustrations as they wish. All of this costs a great deal — it comes with a price for the editorial team, and that price is our need to worry about rankings.
The entire set of league tables and rankings that plagues the university world now is compounded by many factors. Foremost among these is the fact that there are somewhere between 14,000 and 22,000 universities in the world today. (The number differs depending on who is counting.) That is many times more than the world's 2,000 to 3,000 universities in the 1950s — and fewer than 1,000 only a century before that. What universities are and what they do is heavily contested, but many want or require their staff to engage in research as the price of academic work. How do they know what research is good, valid, or worth doing? Often they don't. They may not even care. But they do care about representing the research done as good, valid, and worth doing. Finding metrics to make this representation is therefore significant.
For individual faculty members, the ability to contribute to university metrics is also significant. With the growth of the adjunct class and casual employment, and the dramatic growth of administration, there are fewer tenure track and full time faculty jobs than in the past. Those who aspire to such jobs are forced to demonstrate their worthiness by using the kinds of quantification that universities can measure.
All of this gives rise to many quandaries.
While I plan to read Juan Pablo Pardo-Guerra's book, The Quantified Scholar, my suspicion is that a good diagnosis of the problem doesn't lead to a solution in a situation where so many different groups and individuals have a stake in keeping the problem what it is today.
Yours,
Ken
Ken Friedman, Ph.D., D.Sc. (hc), FDRS | Editor-in-Chief | 设计 She Ji. The Journal of Design, Economics, and Innovation | Published by Tongji University in Cooperation with Elsevier | URL: http://www.journals.elsevier.com/she-ji-the-journal-of-design-economics-and-innovation/
Chair Professor of Design Innovation Studies | College of Design and Innovation | Tongji University | Shanghai, China | Email [log in to unmask] | Academia https://tongji.academia.edu/KenFriedman | D&I http://tjdi.tongji.edu.cn
—
Ali Ilhan wrote:
Date: Wed, 5 Apr 2023 07:48:39 -0400
From: Ali Ilhan <[log in to unmask]>
Subject: Re: Design research journal rankings
Dear all,
I have not come across a newer ranking article yet, but you may find the article below on why it is notoriously difficult to assess research/journals in the humanities (which I think can easily be applied to design):
https://www.mdpi.com/2078-2489/11/11/540
Now some tangents:
Are all the articles in predatory journals useless?
https://direct.mit.edu/qss/article/doi/10.1162/qss_a_00242/114726/Are-papers-published-in-predatory-journals
Preprint version:
https://osf.io/preprints/642ad/
and
This is an excellent book about the "ills" of quantifying research assessment (the book focuses on the British Research Excellence Framework):
https://cup.columbia.edu/book/the-quantified-scholar/9780231197816
“Combining interviews and original computational analyses, The Quantified Scholar provides a compelling account of how scores, metrics, and standardized research evaluations altered the incentives of scientists and administrators by rewarding forms of scholarship that were closer to established disciplinary canons. In doing so, research evaluations amplified publication hierarchies and long-standing forms of academic prestige to the detriment of diversity. Slowly but surely, they reshaped academic departments, the interests of scholars, the organization of disciplines, and the employment conditions of researchers.”
All the best,
Ali
-----------------------------------------------------------------
PhD-Design mailing list <[log in to unmask]> Discussion of PhD studies and related research in Design Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------