It is a common error to conflate computing and programming. It is true that computers are usually controlled by programs; however, you can program without a computer and also have a non-programmed computer. Turing originally posited the computer not as the electronic digital system we are familiar with today but as a symbolic logic system designed to resolve certain problems in mathematics, specifically issues around Gödel numbering and formal descriptive systems. Von Neumann, on the other hand, was concerned with specific problems in symbolic control systems.

The manner in which you describe a computer program, as a list of things to be done, in your last post is not really a good example of how a computer program works, nor of the fundamental concepts that underlie the theory of computability. Rather than bore the list with a description of computing theory, which is not my area anyway, I refer you to the Wikipedia entry on this topic. It is a reasonable outline that can be read in a few minutes. The introduction reads as follows:

A central question of computer science is to address the limits of computing devices by understanding the problems we can use computers to solve. Modern computing devices often seem to possess infinite capacity for calculation, and it's easy to imagine that, given enough time, we might use computers to solve any problem. However, it is possible to show clear limits to the ability of computers, even given arbitrarily vast computational resources, to solve even seemingly simple problems.

To explore these areas, computer scientists usually address the ability of a computer to answer the question: given a formal language and a string, is the string a member of that language? This is a somewhat esoteric way of asking this question, so an example is illuminating. We might define our language as the set of all strings of digits which represent a prime number.
To ask whether an input string is a member of this language is equivalent to asking whether the number represented by that input string is prime. Similarly, we define a language as the set of all palindromes, or the set of all strings consisting only of the letter 'a'. In these examples, it is easy to see that constructing a computer to solve one problem is easier in some cases than in others. But in what real sense is this observation true? Can we define a formal sense in which we can understand how hard a particular problem is to solve on a computer? It is the goal of computability theory to answer just this question.

Best

Simon

On 04.03.06 00:00, Andrew Bucksbarg wrote:
> I am sorry, the dynamics of power that seem to pool around technology
> makes me a little touchy. I was just curious to know what these
> "computational processes" actually are. I find it interesting that
> notions about computing (Wiener, von Neumann) come from thoughts
> about and the modeling of biological processes and human thought,
> cybernetics for instance. Isn't programming just articulating to the
> computer what you want it to do?
>
> ...go to the store to buy milk... if it's raining, wear a rain coat,
> otherwise take a jacket... wait on the corner for the light to turn
> green, if it doesn't scratch your head until it does, etc... and
> object oriented- go to the store to buy x... etc...
>
> and like you say, this is pretty comprehensible to many.

Simon Biggs
[log in to unmask]
http://www.littlepig.org.uk/

Professor of Digital Art, Sheffield Hallam University
http://www.shu.ac.uk/schools/cs/cri/adrc/research2/
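[Editor's illustration] The three example languages in the quoted passage (digit strings representing primes, palindromes, and strings consisting only of 'a') can all be phrased uniformly as membership tests: given a string, is it in the language? A minimal Python sketch, with function names chosen for illustration (they do not come from the original text):

```python
def is_prime_number_string(s: str) -> bool:
    """Language 1: strings of digits that represent a prime number."""
    if not s.isdigit():
        return False  # not even a string of digits
    n = int(s)
    if n < 2:
        return False
    i = 2
    while i * i <= n:  # trial division up to sqrt(n)
        if n % i == 0:
            return False
        i += 1
    return True


def is_palindrome(s: str) -> bool:
    """Language 2: strings that read the same forwards and backwards."""
    return s == s[::-1]


def is_all_a(s: str) -> bool:
    """Language 3: non-empty strings consisting only of the letter 'a'.

    Whether the empty string belongs is a matter of definition; it is
    excluded here.
    """
    return len(s) > 0 and set(s) == {"a"}
```

Deciding `is_all_a` needs only a single scan of the input, `is_palindrome` needs the whole string in view at once, and `is_prime_number_string` requires arithmetic on the value the string denotes, which is one informal way of seeing that some of these languages are harder to decide than others.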