Dear Glen, Ken and List,
I have a couple of questions in the vicinity of this topic which I have been mulling over for a while and have never really answered in any satisfactory way. The recent posts motivate me to raise them here.
Firstly, I am much in agreement with Ken, and don't want to add much other than to mention that the question of whether automata can design appears to be in essence precisely the same issue rehearsed in the 'Chinese Room' debate in AI and behavioural science domains.
Certainly I agree that:
> A machine cannot design any more than a machine can learn.
And I think I want to agree with:
> Design requires learning and conscious decision making.
Ok, now suppose I have some automaton which can accept a set of design criteria and comes up with a solution. At its most simple, this 'automaton' might just be an Excel spreadsheet set up using one single formula. I put in the width of a door frame and it tells me the size of the lintel I need. I am very happy that this spreadsheet is not designing.
Now I make it a bit more sophisticated. The relevant formulae tell me that the strength and stiffness of the beam increase linearly with the width, but with the square and the cube, respectively, of the height. I can write a program which trades off the height against the width, and even checks the result against a lintel catalogue database to recommend a standard production size. I think I can reasonably argue that the automaton is beginning to 'optimise'; in this case against cost, as bespoke lintels would be expensive. I can make it optimise on weight too, finding the lightest lintel that would do the job. It can even trade off weight against cost, if I can specify how I balance their importance. I probably can't do this with Excel's formula tools alone, but I reckon I could do it with a macro.
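For the curious, that lintel automaton can be sketched in a few lines of Python (a hypothetical sketch: the catalogue figures are invented, and I assume a rectangular section, so bending strength goes as w*h^2 and stiffness as w*h^3):

```python
# Hypothetical sketch of the lintel 'automaton': pick the lightest
# standard catalogue section that meets both requirements.
# For a rectangular section of width w and height h (mm):
#   bending strength ~ w * h**2   (proportional to the section modulus)
#   stiffness        ~ w * h**3   (proportional to the second moment of area)

# Invented catalogue: (width mm, height mm, mass kg/m) -- illustrative only.
CATALOGUE = [
    (100, 65, 10.0),
    (100, 100, 15.5),
    (140, 100, 21.0),
    (100, 140, 21.5),
    (140, 140, 30.0),
]

def lightest_lintel(min_strength, min_stiffness):
    """Return (mass, w, h) of the lightest section meeting both criteria."""
    candidates = [
        (mass, w, h)
        for (w, h, mass) in CATALOGUE
        if w * h**2 >= min_strength and w * h**3 >= min_stiffness
    ]
    # min() compares tuples element by element, so mass decides.
    return min(candidates, default=None)

# e.g. require strength >= 1.5e6 mm^3 and stiffness >= 2.0e8 mm^4
print(lightest_lintel(1.5e6, 2.0e8))  # -> (21.5, 100, 140)
```

The point of the sketch is only that 'optimising against a catalogue' is a one-liner once the criteria are pinned down as numbers.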
Moving beyond the capability of Excel, I can build a program that takes a lot more input about the whole house I'm building and comes up with not just the door lintel, but the whole design of a house. I can move beyond engineering maths: I input 'contemporary' as a style, it checks its database for the right sort of features, and it produces plans for a 'contemporary' house. It even comes up with a combination of features that surprises, maybe even pleases, me (and the lintels are strong enough too!). And yet I am happy it is not designing; it's looking in its database, coming up with a few combinations and doing a bit of optimising, but it isn't designing. Fine. It may even store my response to what it proposes and use this later to modify its future processing.
Now enough of this. Hey, I want my house *designed*.... so I delete the program and get myself a living, breathing, conscious architect and tell him I want a 'contemporary' house, that I don't want the doors to collapse, and a few other things too. So he uses the information he has access to (some in his memory, and some in catalogues), comes up with some combination of it, and optimises it against some criteria of cost and some other things. What he comes up with surprises, even pleases, me. And then I notice it is *exactly* the same as the plan my program produced.
Now the architect is conscious, so potentially he could be designing. But the program did it by optimising, right? Despite this, please, please tell me the architect was designing. If he wasn't, by association, my career is in tatters. All these years I've been calling myself a designer, when all I've been doing is optimising.
Ok, I've calmed down a little. Yes, the architect was designing because he was conscious, and the automaton wasn't because it isn't conscious. Am I bothered? Maybe not.
So, question one: why is a conscious designer designing, and an automaton not? Is it only an issue of definition?
Secondly, yes I also agree with:
> Despite this fact, however, it is the programmer who is
> ultimately responsible for the automated output.
What worries me is this: so we have a machine that, in a less rigorous use of language, 'designs' (in the sense that Deep Blue 'plays chess'). It accepts a set of input parameters in some form. (These could be "1.2 metres" or "contemporary", whatever.) These could be represented in some Euclidean n-space. The automaton produces output which specifies a design as a set of parameters. These too could be represented in another Euclidean n-space, and the bounds of acceptable designs may be two or more non-contiguous regions of this space.
Question two: can I ever be sure of the bounds of the acceptable solutions in that space without checking every point in that space? Even if I can move around the hyperplanes that form the boundaries of that space and check every point on their surfaces, can I ever know the volume isn't hollow? (i.e. that there isn't a wholly enclosed region of unacceptable solutions inside the region of acceptable solutions.)
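To make that 'hollow volume' worry concrete, here is a toy sketch (purely illustrative; the acceptability predicate is invented): a 2-D 'design space' whose acceptable region wholly encloses a pocket of unacceptable designs. Walking the outer boundary reveals nothing wrong; only an interior probe finds the hole.

```python
import math

# Invented acceptability predicate: designs within radius 2 of the origin
# are acceptable, EXCEPT a hidden pocket within radius 0.5 -- a region of
# unacceptable solutions wholly enclosed by acceptable ones.
def acceptable(x, y):
    r = math.hypot(x, y)
    return r <= 2.0 and r > 0.5

# Sample 100 points just inside the outer boundary (r = 1.9):
# every one of them checks out fine.
boundary_ok = all(
    acceptable(1.9 * math.cos(t), 1.9 * math.sin(t))
    for t in (i * 2 * math.pi / 100 for i in range(100))
)

print(boundary_ok)           # -> True: the boundary looks uniformly acceptable
print(acceptable(0.0, 0.0))  # -> False: yet the centre is a hole
```

In more dimensions, and with a predicate you cannot write down in closed form, there is no obvious way to rule such pockets out short of exhaustive checking.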
I am not a pure mathematician, but it appears that many (I am tempted to say all, but I won't) non-trivial design problems can, arguably, be characterised as what the mathematicians term 'NP problems': as far as anyone knows, their solutions cannot be found efficiently by deterministic methods, but a candidate solution, once found, can be checked deterministically and cheaply for validity.
So one possibility might be to pick points in the solution space at random and check them. If the universe of potential solutions is big, I might use knowledge to predict 'likely' areas, pick candidate solutions within those areas, and then check the ones I pick for validity. But.... oh dear!... my architect is losing consciousness again....
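That 'pick and check' strategy is easy to sketch (again hypothetical; the validity predicate and the 'likely' region are invented). The point is the NP-flavoured asymmetry: each check is cheap and deterministic, while finding acceptable points is left to knowledge-guided guessing:

```python
import random

# Invented validity check: cheap and deterministic, as with NP problems,
# even though we have no efficient deterministic way to *construct* a solution.
def valid(design):
    w, h = design  # section width and height, mm
    return w * h**2 >= 1.5e6 and w * h <= 20000  # strong enough, not too bulky

def sample_and_check(n_samples, region, seed=0):
    """Randomly pick points in a 'likely' region, keep those that check out."""
    rng = random.Random(seed)
    (w_lo, w_hi), (h_lo, h_hi) = region
    hits = []
    for _ in range(n_samples):
        design = (rng.uniform(w_lo, w_hi), rng.uniform(h_lo, h_hi))
        if valid(design):  # deterministic check of a guessed solution
            hits.append(design)
    return hits

# Knowledge says tall, moderately wide sections are 'likely'; guess there.
solutions = sample_and_check(1000, region=((80, 160), (100, 200)))
print(len(solutions), "acceptable designs found out of 1000 guesses")
```

Whether guided guessing followed by deterministic checking counts as 'designing' is, of course, exactly the question.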
And so will I if I don't have dinner.
Regards,
John Shackleton
Brunel Design
School of Engineering and Design
Brunel University
Uxbridge, Middlesex
UB8 3PH
UK
01895 266322