Dear Ken and all,
Thank you for your posts.
Extending the use of the term 'algorithm' in a loose manner can result in a kind of 'useless for research' terminology.
If we are not careful, 'algorithm' can all too easily be extended to cover any process that has some kind of aim.
The outcome is a definition on which 'everything is an algorithm', and in research terms the word 'algorithm' then defines nothing and becomes meaningless.
This extension of the use of 'algorithm' into 'everything is an algorithm' is as useless as 'everything is mathematics', 'everything is politics' or 'everything is design'.
The term 'algorithm' has a long tradition of scientific use as a formal technical definition. In research, 'algorithm' refers specifically to a mathematically defined procedure that identifies a unique solution to a problem via a single, unambiguously defined and precise mathematical process.
There are many other processes that do not have this mathematical aspect for identifying a solution from a problem via a process. One of them is the activity of 'design'.
It would seem helpful for researchers, and for making sound theory, if the mathematical term 'algorithm' is not ambiguously extended to include these other processes as well.
Otherwise, to play devil's advocate, if we want to extend the word 'algorithm' we should perhaps drop the term 'design', because 'all design is an algorithm'.
In research terms, therefore, I suggest it is probably not useful to call anything and everything an algorithm just because almost anything can be loosely associated with the term.
In other words, I suggest it's probably best to avoid regarding as algorithms such other processes and objects as recipes, management processes, navigation routers, almanacs and rules. To be precise, some of these, such as routing devices and almanacs, *may* depend on algorithms for their production.
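The distinction can be illustrated concretely. A navigation device is not itself an algorithm, but it typically depends on one, such as Dijkstra's shortest-path algorithm. A minimal sketch in Python, using a hypothetical road network whose node names and costs are invented purely for illustration:

```python
import heapq

def dijkstra(graph, start):
    """Dijkstra's shortest-path algorithm.

    graph: dict mapping each node to a list of (neighbour, edge_cost) pairs.
    Returns a dict of the cheapest known cost from `start` to each
    reachable node.
    """
    dist = {start: 0}
    heap = [(0, start)]          # priority queue of (cost-so-far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue             # stale queue entry; a cheaper path was found
        for nbr, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# A small hypothetical road network
roads = {
    'A': [('B', 5), ('C', 2)],
    'B': [('D', 1)],
    'C': [('B', 1), ('D', 7)],
    'D': [],
}
print(dijkstra(roads, 'A'))  # shortest costs: A=0, B=3, C=2, D=4
```

The point stands: the device (or almanac) is a designed artefact; the algorithm is the unambiguous mathematical procedure it may embody.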
Going back to the focus of Ali's original post, perhaps it is better to see the situation more broadly; that,
"Many human defined processes and systems can cause problems when they interact. "
This has four benefits:
1. It has been written and proven many times and is well justified in many fields.
2. It also applies in situations that do not involve algorithms, and it does not require tortuously extending the technical meaning of 'algorithm'.
3. Algorithms (in the conventional mathematically-defined sense) are only a very small part of the above overall problem, and the roles of algorithms can be specifically identified.
4. Most importantly, it brings the focus back onto the human decision-making that causes these problems through choices about the use of such mathematical algorithms.
I don't have an exact epistemologically justifiable definition of the term 'algorithm' and the definitions in the non-technical dictionaries are not formally specific in a research sense.
The technical definition is relatively clear that algorithm is an unambiguous mathematically defined specification of a process to find singular solutions to a class of problems.
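That conventional sense is easy to exhibit. Euclid's greatest-common-divisor procedure is a canonical example: a finite, unambiguous sequence of steps that yields a unique answer for every valid input pair. A minimal sketch in Python:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until b is zero. Every step is fully determined, the process is
    guaranteed to terminate, and the result is unique for each input."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

A recipe or a management process offers no comparable guarantee of unambiguity, termination, or a unique result, which is exactly the distinction at issue.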
This seems useful.
In contrast, extending the definition of algorithm to loosely mean ANY kind of prescribed process doesn't seem too helpful in terms of increasing research precision of theory-making.
Alongside all of this, of course is academic politics and personal promotion.... There is some kudos in some disciplines of being able to drop the word algorithm regardless of how meaninglessly it is used ....!
Best regards,
Terence
==
Dr Terence Love,
School of Design and Built Environment, Curtin University, Western Australia
CEO, Design Out Crime and CPTED Centre
PO Box 226, Quinns Rocks, Western Australia 6030
[log in to unmask]
[log in to unmask]
+61 (0)4 3497 5848
ORCID 0000-0002-2436-7566
==
-----Original Message-----
From: PhD-Design - This list is for discussion of PhD studies and related research in <[log in to unmask]> On Behalf Of Ken Friedman
Sent: Monday, 10 June 2019 2:45 AM
To: [log in to unmask]
Subject: AI Algorithms: More Thoughts
Dear All,
An article in today’s New York Times reminded me that the recent thread on AI algorithms ended in an unsatisfactory way.
The metaphor likening comments in the thread to "swimming and complaining the water is wet" didn't reflect the conversation in a responsible way. A more accurate reflection occurred in the comment that "algorithms are not inherently bad it's how they are used."
Human beings have used algorithms for millennia. Recipes, almanacs, routers in the early days of navigation, travel guides today. I pointed to these issues myself, writing "One possible outcome of bad AI is a world that begins to resemble historical empires in which overlapping jurisdictions and different sets of rules trapped people in cages from which there was no escape while making the ordinary business of life impossible for those who were subject to the rules. The British Raj in India comes to mind along with the late Ottoman Empire. In fact nearly any empire eventually fell prey to the problems we are apparently creating for ourselves by shaping a world around algorithms when we have no way to predict the outcome of different sets of rules and algorithms of differing strength as the algorithms interact.”
One problem occurs when algorithms designed for one purpose interact with systems designed for another purpose. That's how YouTube transformed innocent home videos into porn magnets.
Another problem occurs when the algorithms of different companies and government agencies reinforce the prejudices and problems of the societies in which those firms and agencies exist.
Today, however, we are also seeing something worse than unintended consequences. This is the purpose-built use of algorithms to create disinformation in news channels in an effort to destabilise democracies. That is the topic of the article in today’s NY Times.
> https://www.nytimes.com/interactive/2019/06/07/technology/ai-text-disinformation.html
It’s an interesting article with a section that allows you to test the effect of some algorithms in real time on the screen as you read. I recommend it.
No one is swimming to complain that the water is wet. That’s a silly approach to a serious problem. In today’s world, we are inevitably going to use different kinds of algorithms for much of our daily work, most of our information, and much of our entertainment.
What we should be doing is to be informed about the challenges and inflection points that we navigate in the world that human beings have designed. And — as citizens — we should be aware of the opportunities we have to influence the regulation of the information environment through voting and through stating our views to those whom we elect.
Yours,
Ken
Ken Friedman, Ph.D., D.Sc. (hc), FDRS | Editor-in-Chief | 设计 She Ji. The Journal of Design, Economics, and Innovation | Published by Tongji University in Cooperation with Elsevier | URL: http://www.journals.elsevier.com/she-ji-the-journal-of-design-economics-and-innovation/
Chair Professor of Design Innovation Studies | College of Design and Innovation | Tongji University | Shanghai, China ||| Email [log in to unmask] | Academia http://swinburne.academia.edu/KenFriedman | D&I http://tjdi.tongji.edu.cn
--
-----------------------------------------------------------------
PhD-Design mailing list <[log in to unmask]>
Discussion of PhD studies and related research in Design
Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------