Blog » "Computation" considered harmful. "Value" not so hot either.
Posted on 13 Feb 2009 15:50
The term "computation" has at least seven different senses, all of which are commonly used in technical literature:
- a dynamic computing process
- the result of a computational process
- an algorithm that describes a computational process
- a formal description of an algorithm in a specific language
- a formal description of the semantics of a computational process; e.g. a lambda expression
- a dynamic process that converts a lambda expression to normal form
- a formal description of lambda conversion
One could probably come up with additional technical uses for the term. Even worse, "computation" is an ordinary word with well-understood if inexact semantics. Readers come to a technical text with a well-formed idea of what the word means, and thus no reason to think their notion is incorrect.
"Value" is even worse. It too carries multiple technical meanings in the literature, and it is even more common in ordinary language. Same for "evaluation".
These terms are ambiguous. They should be banned from technical discourse.
Usually this ambiguity is tolerable, at least for subject matter experts. But when it becomes a problem, it's a big problem. It cost me many exasperating hours of head-scratching when I decided to finally get to the bottom of monads and IO in Haskell.
The texts that flummoxed me are Moggi's paper on "notions of computation" and the standard idiom used to describe IO in Haskell.
Moggi (Notions of computation and monads, p. 3):
In particular, we identify the type A with the object of values (of type A) and obtain the object of computations (of type A) by applying an unary type-constructor T to A. We call T a notion of computation, since it abstracts away from the type of values computations may produce.
A typical example of how IO is described in Haskell (A History of Haskell: Being Lazy With Class, p. 25):
The key idea is to treat a value of type IO a as a “computation” that, when performed, might perform input and output before delivering a value of type a.
Obviously this language is "good enough", since Moggi's paper was very influential, and Haskell is very successful. But both passages immediately struck me as hazy and somewhat absurd. Try as I might, I could find no way to assign a precise meaning to them. It took a lot of research and a flurry of messages on the haskell-cafe mailing list to figure out why they didn't work for me and discover what the authors (probably) meant.
Needless to say, part of the problem was the ambiguity of the terms "computation" and "value". I came to the texts thinking that a value is either a "datum" like 3 or a function like sqrt. They're just static mathematical objects, so I could not conceive of how a value could "be" a computation, or could "be performed".
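To make the tension concrete, here is a small Haskell sketch of my own (not taken from the quoted texts): an IO action is a perfectly static, first-class value that can be bound to a name and duplicated, and nothing at all "happens" until the run-time system performs it.

```haskell
import Data.IORef

-- Build an IO action as a first-class value, duplicate it, then perform it.
runCopies :: IO Int
runCopies = do
  counter <- newIORef 0
  let bump = modifyIORef counter (+ 1)  -- an IO value; no effect yet
      actions = replicate 3 bump        -- a list of three IO values; still no effect
  sequence_ actions                     -- only now are the effects performed
  readIORef counter

main :: IO ()
main = runCopies >>= print  -- prints 3
```

Note that `bump` behaves exactly like `3` or `sqrt` in this respect: it is a value you can store in a list. The puzzle is what extra thing must be said about it so that "performing" it makes sense.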
The scales fell from my eyes when I came across the following passage buried in the Wikipedia article on Theory of Computation:
A computation consists of an initial lambda expression (or two if you want to separate the function and its input) plus a finite sequence of lambda terms, each deduced from the preceding term by one application of Beta reduction.
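That definition can be made concrete with a toy Haskell sketch (my own illustration, not from the Wikipedia article): lambda terms as a data type, one step of normal-order beta reduction, and a "computation" as the finite sequence of terms from the initial expression to normal form.

```haskell
-- A toy untyped lambda calculus with normal-order beta reduction.
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- Naive substitution: safe only when bound variable names are all
-- distinct (capture-avoiding renaming is elided for brevity).
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One normal-order beta-reduction step, if any redex remains.
step :: Term -> Maybe Term
step (App (Lam x b) a) = Just (subst x a b)
step (App f a)         = case step f of
                           Just f' -> Just (App f' a)
                           Nothing -> App f <$> step a
step (Lam x b)         = Lam x <$> step b
step (Var _)           = Nothing

-- A "computation" in Wikipedia's sense: the sequence of terms produced
-- by repeated beta reduction, ending at normal form.
computation :: Term -> [Term]
computation t = t : maybe [] computation (step t)
```

For example, `computation (App (Lam "x" (Var "x")) (Var "y"))` yields the two-term sequence `[App (Lam "x" (Var "x")) (Var "y"), Var "y"]` — the identity function applied to `y`, then its normal form.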
I was already familiar with the lambda calculus, but I had never thought of a lambda expression as a "computation". I still don't like it; a lambda expression is not a computation, it's a formal representation of a mathematical object (a value). Lambda conversion is a purely syntactic calculation that yields a different notational representation of the same value. The lambda calculus is a formal model of computation; the same computation could be represented in a different formalism. So it's just wrong to say that a computation is a sequence of lambda expressions.
Nonetheless, it's clear why one might think in these terms and use this language, and I think that is what's going on in the passages quoted above. My ideas of computation and value were based on the idea of a direct mapping from expression to mathematical object. Even though I was quite familiar with the notions of language and metalanguage, I thought of a metalanguage as serving mainly to clarify meaning; once you understand the semantics of the language, you can disregard the metalanguage and think directly in terms of abstract semantic objects (e.g. numbers and functions).
But in Haskell that's not good enough; to understand how lazy evaluation works, you need to keep the lambda calculus (or an equivalent) in mind, because denotation has to pass through lambda expressions in various states of reduction before arriving at normal form and then at semantic objects. You cannot disregard the lambda metalanguage, because the different forms lambda expressions may take constitute part of the semantic universe. Make it explicit and things become clearer. The above definition of Haskell IO becomes:
The key idea is to treat a value of type IO a as a “lambda expression” that, when converted to normal form, might perform input and output before delivering a value of type a.
Unfortunately, that's not good enough: how can a lambda expression in normal form "be performed" and "deliver" a value?
That is the subject of my next post, wherein I will discuss weaknesses in Haskell's informal metalanguage and suggest replacements for "computation", "value", and "evaluation".