Performance Economics


Posted on 21 February 2009

Another cooked-up performance "controversy" is bubbling on the haskell-cafe mailing list: is gcc a "better" compiler than ghc? The question hinges on the performance (speed) characteristics of a relatively small, simple chunk of code when run through the respective compilers. It seems gcc produces faster code in this isolated case. I can't follow all the technical details, but I do know that this kind of "performance" comparison is pointless and probably harmful.

Such comparisons miss the point because they misunderstand the notion of performance. Performance is not about time and space efficiency; it's about economic efficiency, and the speed of compiled code is only one factor in the economic equation. Your compiler can produce the fastest code in the world, but that is of little use if the code isn't correct. So any calculation of return on investment must factor in the "cost of correctness".

Assuming correct code and correct compilation, a compiler that produces faster code offers a better return on investment. But every programmer knows you cannot make such assumptions, so an investment decision based solely on speed benchmarks is a poor decision made on incomplete information.
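
To put rough numbers on that, here is a toy cost model in Haskell. The Project type and the figures are invented purely for illustration, not taken from any real project; the only point is that once the cost of correctness enters the denominator, the faster toolchain is not automatically the better investment.

```haskell
-- Purely illustrative cost model: the "return" is the value delivered,
-- and the investment includes not just writing the code but also the
-- cost of establishing correctness (testing, debugging, rework).
data Project = Project
  { valueDelivered  :: Double  -- value of the working system
  , developmentCost :: Double  -- writing the code
  , correctnessCost :: Double  -- testing, debugging, fixing
  }

roi :: Project -> Double
roi p = (valueDelivered p - investment) / investment
  where investment = developmentCost p + correctnessCost p

-- Two hypothetical toolchains delivering the same value: one produces
-- faster code but incurs a much higher cost of correctness.
fastButBuggy, slowerButChecked :: Project
fastButBuggy     = Project { valueDelivered = 100, developmentCost = 20, correctnessCost = 60 }
slowerButChecked = Project { valueDelivered = 100, developmentCost = 30, correctnessCost = 20 }

main :: IO ()
main = do
  print (roi fastButBuggy)      -- 0.25
  print (roi slowerButChecked)  -- 1.0
```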

In any real-world project of even modest complexity, this means a performance assessment of a toolset like C++ and gcc must account for all the effort involved in "proving" correctness. Since actual proof is not feasible, that means not only the cost of testing but also the cost of fixing bugs. Testing can never prove the absence of bugs, so all such code, even when published as production quality, is essentially experimental.

Hence functional languages have a major advantage in performance economics: the cost of correctness drops considerably. We can never prove that a piece of software correctly addresses a customer's problems, or that the problem specification is accurate, but functional languages go a long way towards guaranteeing correctness of the code itself. And if you've proven the correctness of the code, there's not much point in making major investments in testing what needn't be tested. Some testing or validation will always be required - e.g. design of the user experience - but most of the expense involved in establishing code correctness is incurred up front, in the effort it takes to prove correctness, which in principle should be far less than the expense of testing and fixing code written in imperative languages.
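
To make that concrete, here is the kind of proof purity buys, written out for a toy pair of functions. The names len and app are mine (they mirror the Prelude's length and ++), the argument is the standard induction on lists, and the QuickCheck property at the end is only a cheap sanity check that the statement proved is the statement intended.

```haskell
import Test.QuickCheck (quickCheck)

-- Definitions mirroring the Prelude, renamed to keep the example self-contained.
len :: [a] -> Int
len []     = 0
len (_:xs) = 1 + len xs

app :: [a] -> [a] -> [a]
app []     ys = ys
app (x:xs) ys = x : app xs ys

-- Because these functions are pure, the claim
--     len (app xs ys) == len xs + len ys
-- can be proved once and for all by induction on xs:
--
--   Base:  len (app [] ys) = len ys = 0 + len ys = len [] + len ys
--   Step:  len (app (x:xs) ys) = 1 + len (app xs ys)
--                              = 1 + (len xs + len ys)    -- induction hypothesis
--                              = len (x:xs) + len ys
--
-- A test can never replace the proof, but it catches mis-stated properties cheaply:
prop_lenApp :: [Int] -> [Int] -> Bool
prop_lenApp xs ys = len (app xs ys) == len xs + len ys

main :: IO ()
main = quickCheck prop_lenApp
```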

Moore's law is a law of economics, not a law of nature. Engineering is essentially about economics. I would bet there is a similar law waiting to be discovered about the economics of programming, since functional languages (along with logic languages and some other paradigms) promise to turn programming into something resembling real engineering. Haskell code has certain mathematical properties that the equivalent C/C++/Java/etc. code cannot hope to have, which means it can reliably be analyzed and transformed based on mathematical principles rather than heuristics. New methods of optimization are bound to arise that are unavailable to imperative language compilers.
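
GHC's rewrite rules are an existing, concrete instance of this. The sketch below uses a home-grown myMap so the rule is unambiguously ours rather than the Prelude's; real libraries ship whole families of such fusion rules, which the optimiser applies mechanically (when compiling with -O) precisely because the laws hold unconditionally for pure code.

```haskell
module Fusion where

-- A user-defined map, so the rewrite rule below is clearly ours and not
-- entangled with the Prelude's own fusion machinery.
myMap :: (a -> b) -> [a] -> [b]
myMap _ []     = []
myMap f (x:xs) = f x : myMap f xs

-- Because myMap is pure, the law  myMap f . myMap g = myMap (f . g)  holds
-- unconditionally, and GHC will apply it as a mechanical rewrite wherever
-- the left-hand side appears in optimised code:
{-# RULES
"myMap/myMap" forall f g xs. myMap f (myMap g xs) = myMap (f . g) xs
  #-}

doubleThenShow :: [Int] -> [String]
doubleThenShow xs = myMap show (myMap (* 2) xs)
-- After the rule fires, the two traversals fuse into one:
--   myMap (show . (* 2)) xs
```

Justifying the analogous rewrite on arbitrary C++ loops would require whole-program alias and effect analysis; here it is a one-line algebraic law.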

It's inevitable that some day functional language compilers will produce code that even the best hand-rolled assembler cannot match for speed and space efficiency. When that starts to happen and the economics of functional languages - or rather, of languages with some kind of support for provable correctness - become clearer, it wouldn't be surprising to see chip designers revive the idea of machines designed for graph reduction.

In any case, the point is that people who want to make the case for Haskell or other functional languages should not allow themselves to be suckered into discussions about performance isolated from other economic factors. Benchmark code should not use simple algorithms whose imperative implementations involve minimal side effects and can thus be reasoned about informally with little effort. Instead, benchmarks should involve a well-defined problem, as simple as possible, but one that requires the kind of non-trivial reasoning that complicates the life of an imperative programmer - something that makes the cost of correctness as much a part of the benchmark as space/time efficiency. And such benchmarks should always be called "performance economics benchmarks", never just "performance benchmarks".
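
For what it's worth, here is a minimal sketch of what such a harness might look like in Haskell. The naive isort, the input size and the crude CPU-time measurement are all placeholders of my own; the point is only that the correctness check and the timing get reported side by side.

```haskell
import Control.Exception (evaluate)
import Data.List (sort)
import System.CPUTime (getCPUTime)
import Test.QuickCheck (quickCheck)

-- The implementation under test: a deliberately naive insertion sort.
isort :: Ord a => [a] -> [a]
isort = foldr insert []
  where
    insert x []     = [x]
    insert x (y:ys) | x <= y    = x : y : ys
                    | otherwise = y : insert x ys

-- The specification: agreement with the reference sort on every input.
prop_matchesSpec :: [Int] -> Bool
prop_matchesSpec xs = isort xs == sort xs

main :: IO ()
main = do
  -- Correctness half of the benchmark.
  quickCheck prop_matchesSpec
  -- Speed half of the benchmark (worst-case input for insertion sort).
  start <- getCPUTime
  _ <- evaluate (sum (isort [5000, 4999 .. 1 :: Int]))
  end <- getCPUTime
  putStrLn $ "CPU time (picoseconds): " ++ show (end - start)
```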
