Red Echo

February 21, 2009

This rant, Why Should I Ever Use C Again, is a funny, irreverent, spot-on critique of a certain strain of micro-optimization-minded thinking about software performance. A sample:

Okay, C is fast. But C is not the only fast language. It’s not even the fastest. If speed was everything, we would be (1) on a race track (2) using assembly. The C-is-fast-so-use-C crowd doesn’t know there is a clan that says the same for assembly. But, in 1980, it was clear that _slower_ C made more sense than _faster_ assembly.

Computers are really good at automating repetitive work. Every decade or so it becomes clear that the time has come to delegate to the computers some entire category of task that programmers have spent years sweating over. When this happens, it’s not because the computer can suddenly do a better job than a clever human being could in a one-to-one comparison – it’s because the computer can do an adequate job a billion times a day without breaking a sweat. Computers keep getting faster, and for any problem there comes a point where the run-time cost of an automatic solution disappears against the development-time cost of human brainpower.

Programming language compilers themselves are an early example of this process in action. People take the notion of a high-level language for granted now, enough so that nobody uses the term “high-level language” anymore, but writing entire applications in assembly language was once routine. When I was beginning my programming career the need to think in terms of machine instructions was still common enough that every serious development tool offered some way to embed “raw” assembly instructions in your C or Pascal or Basic code. Nobody does that anymore, at all; you could still probably beat the compiler if you worked very hard at it, but it simply is not worth your time to bother.
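
For anyone who never saw this in the wild, here is a rough sketch of what that embedding looked like, using GCC’s extended inline-assembly syntax for x86 (the function itself is invented purely for illustration):

```c
#include <stdio.h>

/* A sketch, not a recommendation: hand-written assembly embedded in C,
   using GCC's extended inline-asm syntax. Invented for illustration. */
static int add_asm(int a, int b)
{
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
    int result;
    __asm__("addl %2, %0"        /* result = a + b, spelled out by hand */
            : "=r"(result)       /* output: any general-purpose register */
            : "0"(a), "r"(b));   /* inputs: a preloaded into the output register */
    return result;
#else
    return a + b;                /* the version nobody needs to beat anymore */
#endif
}

int main(void)
{
    printf("%d\n", add_asm(2, 3));   /* prints 5 */
    return 0;
}
```

The constraint syntax buys you nothing here: any modern compiler emits code at least this good from the plain-C fallback on its own.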

A similar change occurred with the rise of Java, which pushed the idea of automatic memory management into the mainstream. You youngsters may not believe this, but paying careful attention to the lifetime of every last scrap of data was once a normal part of programming practice. Nobody would design such a language now: memory management is complex, error-prone, and amenable to automatic analysis – therefore it makes no sense to waste any more human brainpower on the problem. We delegate that task to the computers, which will do it in a simple-minded but utterly reliable way, and move on to more interesting things.
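
To make the old discipline concrete, here is a minimal C sketch (the record type and its functions are invented for illustration). Every exit path has to account for every allocation; a garbage collector makes the entire second half of this code simply vanish:

```c
#include <stdlib.h>
#include <string.h>

/* Manual lifetime tracking, the old way. strdup() is POSIX. */
struct record {
    char *name;
    char *value;
};

struct record *record_create(const char *name, const char *value)
{
    struct record *r = malloc(sizeof *r);
    if (!r)
        return NULL;
    r->name = strdup(name);
    if (!r->name) {            /* partial failure: undo what succeeded */
        free(r);
        return NULL;
    }
    r->value = strdup(value);
    if (!r->value) {
        free(r->name);         /* forget either of these and you leak */
        free(r);
        return NULL;
    }
    return r;
}

void record_destroy(struct record *r)
{
    if (r) {
        free(r->name);         /* order and completeness are on you */
        free(r->value);
        free(r);
    }
}
```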

We’re about to see another of these shifts. As always, it’s going to start out looking like a Nerf toy. People will wonder: how can you get any real work done with something like that? Where’s the go-faster button? Where are the sharp edges? The answer, as always, is that when you have a button, you have to decide whether, and when, to press it. We will choose to give up another layer of fine-grained control in exchange for the opportunity to spend our brainpower solving more interesting problems.

This generational shift is being driven by the ubiquity of multi-processor computing. Everyone has a dual-core machine now, and in two years everyone will have a quad-core machine. Meanwhile, the really big problems are being solved on server farms with thousands of networked processors. The tools we use today leave control of concurrency largely in the hands of the programmer, so programmers have to solve the same set of concurrency design problems over and over. This situation is just begging for automation. In the next few years we will see a new level of abstraction come to prominence: we will choose to give up a few old familiar tools, and in exchange we will be able to delegate the concurrency problem to our new tools.
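
For a taste of what “in the hands of the programmer” means in practice, here is a minimal pthreads sketch in C. The shared counter is a stand-in for any shared state; every lock, thread count, and join below is a decision the programmer makes by hand, and will make again, slightly differently, on the next project:

```c
#include <pthread.h>
#include <stdio.h>

/* Hand-managed concurrency: the programmer owns the threads, the
   shared state, and the locking discipline. Compile with -pthread. */
static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);   /* forget this and the    */
        counter++;                           /* increments silently race */
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    printf("%ld\n", counter);   /* 400000, but only because of the lock */
    return 0;
}
```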

I think I know what the recipe for this new style will be, and I’m convinced enough to spend a lot of my free time building a working example, but the details are less important than the process. A new generation of candy-coated, dumbed-down, kid-stuff programming tools is about to come down the pike. They will look limited and faintly embarrassing, but get ready to jump on board anyway: those limitations are the key to the future.

3 Comments

  1. This is interesting stuff, Mars. Thanks for sharing what is going on in the realm of multi-processor computing. I think there is a similar push in the realm of web development as well. There are some masochists who are still spinning custom JS implementations, but the vast majority have moved from spending hours dealing with cross-browser compatibility to using a well-tested framework and actually building interesting applications. Looking forward to hearing more about the project that you are working on.

    Comment by Andrew — February 21, 2009 @ 11:50 am

  2. Hah! Youngsters. I remember when building an analog circuit was still faster than those new-fangled, fancy, computer thingies…. :)

    Comment by Bob K. — February 24, 2009 @ 9:13 am

  3. Bob — aren’t you rich beyond anyone’s dreams from the royalties for “fire” and “the wheel”? ;-)

    Comment by Aaron Ballman — February 24, 2009 @ 1:57 pm