notes-computer-programming-programmingLanguagesBook-programmingLanguagesChCsTheory101

Table of Contents for Programming Languages: a survey

Chapter : CS theory 101

Church-Turing thesis

todo defn turing machine, etc

What is computation?

There are actually three answers to this.

One answer is, "A computation is a formally specified mechanical procedure of symbol manipulation, for calculating the value of some mathematical function at a given input, which can be completed in a finite number of steps" (this is my own wording but I'm intending to capture a commonly held concept).

Beginning in the 1930s, a consensus formed that this informal, intuitive definition is captured by the mathematical notion of Turing computability (this consensus is called "the Church-Turing thesis"); so the second answer is "A computation is the sort of thing that a Turing machine does".
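To make that concrete, here is a minimal sketch of a Turing machine simulator in Python (my own illustrative code, not any standard formulation; the state names, the transition-table encoding, and the example machine are all invented for this example). The example machine increments a binary number written least-significant-bit first:

  def run_turing_machine(transitions, tape, state='start', blank='_', max_steps=10000):
      # transitions maps (state, symbol) -> (new_state, symbol_to_write, move),
      # where move is -1 (left) or +1 (right); the machine stops in state 'halt'.
      tape = dict(enumerate(tape))   # sparse tape: position -> symbol
      pos = 0
      for _ in range(max_steps):
          if state == 'halt':
              break
          state, symbol, move = transitions[(state, tape.get(pos, blank))]
          tape[pos] = symbol
          pos += move
      contents = ''.join(tape.get(i, blank) for i in range(min(tape), max(tape) + 1))
      return contents, state

  increment = {
      ('start', '1'): ('start', '0', +1),   # 1 plus carry: write 0, carry onward
      ('start', '0'): ('halt', '1', +1),    # 0 plus carry: write 1, done
      ('start', '_'): ('halt', '1', +1),    # carry past the end: new digit
  }

  print(run_turing_machine(increment, '111'))   # ('0001', 'halt'): 7 + 1 = 8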

A third answer is given by noticing that we now have machines that we call 'computers': "A computation is the sort of activity that a computer performs".

Today these three answers are usually taken to be equivalent, but I do not believe that to be the case. The first and second definitions do not really capture interactive or concurrent computation, in which the computer accepts input or emits output in the middle of the computation.

Tangent: Turing machines do not model interactive computation

There are two subtle differences between what Turing machines do and what real computers do. The first is that a Turing machine has an arbitrarily large amount of memory, while real computers have only finite memory. In my opinion, however, this difference is just a technical detail: it forces us to attach annoying qualifiers to statements, but it does not really call into question the idea that a Turing machine models what real computers do. A more important difference is that a Turing machine takes input, runs, and eventually either returns a result or hangs (never terminates). The Turing machine model is fundamentally 'batch mode'; it assumes that all input is present at the beginning, and that the only output comes at program termination. Today's real computers, however, can be run in an interactive mode in which inputs from the outside world arrive at various times, and outputs to the outside world are emitted at various times.
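A small Python sketch of the contrast (input() and print() stand in here, hypothetically, for arbitrary sensors and effectors):

  # Batch mode: all input is present at the start; one result at the end.
  def batch_sum_of_squares(xs):
      return sum(x * x for x in xs)

  # Interactive mode: inputs arrive, and outputs are emitted, during execution.
  def interactive_sum_of_squares():
      total = 0
      while True:
          line = input('number (or "done"): ')   # input consumed mid-run
          if line == 'done':
              return total
          total += int(line) ** 2
          print('running total:', total)         # output emitted mid-run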

In Turing's time, as far as I am aware, the purpose of the first computers was to do non-interactive mathematical calculations, and the purpose of programming languages was to specify a way to compute a mathematical function. Even if an early computer was given additional input after it was turned on, or continued to execute a program after emitting output, this was just an implementation detail; the goal was the (possibly repeated) evaluation of a mathematical function on various given inputs. The Turing machine model fits this perfectly (except for the finite-memory detail mentioned above).

Today, however, the purpose of computers is often to interact with users or with other computers, and programming languages must specify interactive behavior, by which I mean they must specify how the computer is to react to inputs given during execution, and they must specify the nature and timing of outputs to be emitted during execution.

The components of the Turing machine model obscure this distinction; the Turing machine has a 'tape', which is a linear array of memory cells, and one can easily imagine that, during execution, the contents of certain locations on this tape could be changed to represent incoming input, and the contents of certain other locations could be sent to the outside world as output; and indeed, this picture is very similar to how input and output are accomplished in real computers. However, the notion of Turing computability includes no concept of such ongoing, interactive input and output; this is an extension to that model.

This sort of extension, which seems minor in the Turing model, does not sit so well in the lambda calculus. How are inputs and outputs to be represented? One common answer is to designate some functions as inputs ("get_key") and some as outputs ("print"); each time an input function is evaluated, it might return different results; and each time an output function is evaluated, it might do more than just calculate its result value: it might have a "side effect" in the outside world. Great; but this makes no sense in the intended interpretation of the lambda calculus, in which the functions are really just mathematical functions; it invalidates a large number of formal results about the lambda calculus, which assume that any expression can always be substituted for any other expression that reduces to it; and it requires us to bring the details of the process of reduction into our model, because now the order and timing in which things are reduced matter, not just as a strategy for implementing a computation, but for the meaning of the computation itself.
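A Python sketch of why this is a problem (get_key here is a hypothetical input function, fed from a canned stream so the example is self-contained):

  keystrokes = iter('he')   # stands in for keys arriving from the outside world

  def get_key():
      # An 'input' function: two evaluations may return different results.
      return next(keystrokes)

  def square(x):
      # A mathematical function: square(2) can always be replaced by 4.
      return x * x

  print(square(2) == square(2))    # True: substituting equals for equals is safe
  print(get_key() == get_key())    # False ('h' vs 'e'): one call of get_key()
                                   # cannot be substituted for another, so the
                                   # order and timing of reduction carry meaning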

In other words, there is an 'impedance mismatch' between the early formalisms of computation and what computers actually do; this leads to complexities such as 'side effects'.

Closely related to this is concurrent computation. A model of concurrent computation can be decomposed into a model of the interactive computation of each computer, along with a model of how the computers are connected. There are currently a number of competing proposals for modeling concurrent computation.
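As a minimal sketch of that decomposition, one could model each 'computer' as a thread whose interactive computation consumes and emits values while running, and model the connection between them as a queue (Python below; the sentinel convention is invented for this example):

  import queue
  import threading

  def doubler(inbox, outbox):
      # One interactive 'computer': it consumes inputs and emits outputs
      # while running, rather than computing a single batch result.
      while True:
          x = inbox.get()
          if x is None:            # sentinel: no more input
              outbox.put(None)
              return
          outbox.put(2 * x)

  inbox, outbox = queue.Queue(), queue.Queue()
  threading.Thread(target=doubler, args=(inbox, outbox)).start()

  # The 'connection' between the two computations is the pair of queues;
  # this main computation feeds the other one input and reads its output
  # concurrently with its execution.
  for x in [1, 2, 3]:
      inbox.put(x)
  inbox.put(None)
  while True:
      y = outbox.get()
      if y is None:
          break
      print(y)                     # 2, 4, 6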

Programming languages for interactive computation

Taking a step back, a language that can specify any "interactive computation" is really a very general thing; because the inputs can be connected to sensors and the outputs to effectors, it is a language that can specify any sort of procedure/behavior that can be mechanically described and executed.

Summary

In summary:

Halting problem

Complexity

NP-complete problems

Amdahl's law