Mar 18, 2016
An alternative worldview to 'modularity'

It's a common trope among programmers that a single computer contains enough bits that the number of states it can be in far exceeds the number of atoms in the universe. See, for example, this 3-minute segment from a very entertaining talk by Joe Armstrong, the creator of the Erlang programming language. Even if you focus on a single tiny program, say one that compiles down to a 1KB binary, that's 8192 bits, so it's one of 2^8192 possible programs of the same length. And 1KB is nothing these days; larger programs get exponentially larger spaces of possibility.

The conventional response to this explosion of possibilities is to observe that the possibilities stem from a lack of structure. 10 bits encode 2^10 = 1024 possibilities, but if you divide them up into two sub-systems of 5 bits each and test each independently, you only have to deal with 2 × 2^5 = 64 possibilities — a far smaller number. From here stem our conventional dictums to manage complexity by dividing systems up into modules, encapsulating internal details so modules can't poke inside each other, designing their interfaces to minimize the number of interactions between modules, avoiding state within modules, etc. Unfortunately, it's devilishly difficult to entirely hide state within an encapsulated module so that other modules can be truly oblivious to it. There seems to be a very definite possibility that the sorts of programs we humans need to help with our lives on this planet intrinsically require state.
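To make the arithmetic concrete, here's a toy sketch in Python (nothing from the original argument; the functions and the bit-counting task are made up for illustration). Treated as one opaque blob, a program over 10 input bits needs 2^10 = 1024 cases to check exhaustively; treated as two independent 5-bit halves, it needs only 2 × 2^5 = 64:

def monolithic(x):
    # x is a 10-bit integer; the 'spec' is: count its set bits, mod 3.
    return bin(x).count('1') % 3

def low_half(x):
    # hypothetical 5-bit sub-system: set bits in the low 5 bits
    return bin(x & 0b11111).count('1')

def high_half(x):
    # hypothetical 5-bit sub-system: set bits in the high 5 bits
    return bin(x >> 5).count('1')

def composed(x):
    return (low_half(x) + high_half(x)) % 3

# Treating the program as one 10-bit blob: 2^10 = 1024 cases to cover.
assert all(monolithic(x) == composed(x) for x in range(2**10))

# Treating it as two 5-bit modules: 2 * 2^5 = 64 cases to cover.
assert all(low_half(x) == bin(x).count('1') for x in range(2**5))
assert all(high_half(x << 5) == bin(x).count('1') for x in range(2**5))

The catch is the word 'independent': the moment the halves share state, you're back to reasoning about their combinations.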

So much for the conventional worldview. I'd like to jump off in a different direction from the phenomenon of state-space explosion in my first paragraph. I'll first observe that if we had to target a single point in such impossibly large spaces, we'd never be able to do so. What saves us is that for every desire we have in mind, every bit of automated behavior we conceive of that will help us in our lives, there are many many possible programs that would satisfy it. We're targeting not a single point, but a whole territory, a set of potential programs that we could have written (and might yet write in the future) that meet our need of the moment.

This is a really nice property, and it saves us in the short term. However, this boon comes curled up around a stinger. Even though you care only about whether your program is in this territory (correct) or not (buggy) and don't care precisely where in the territory your program is, your program only encodes the current point you're at. But human programs have a way of lasting a long time, and other people will make changes to your program. Every little change moves its location in the space of possible programs to some adjacent point. Make enough changes and you can end up pretty far from where you started. Are you still in the territory of correct programs? You have no way of knowing as long as you focus on just the contents of your program. It doesn't matter how much you tinker with the anatomy of your program, dividing and sub-dividing it into sub-systems and components. We need to start encoding the boundaries of the territory we care about.

Tests are the only way we yet have to try to encode territories. A test says, "I don't care what's inside your program, but does it work in this one situation?" A comprehensive set of tests is like a Fourier Transform, taking us away from the space of possible programs and into the space of possible situations our program may find itself in. Ideally, if we pass all tests we're confident that our program hasn't left the territory of correct programs. However, contemporary tests aren't quite the complete solution yet. It's easy today to write tests that fail in spurious ways even if your program remains correct. The map (tests) doesn't yet correspond well to the territory.
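Here's a hypothetical illustration of that kind of spurious failure (none of this is from the post; dedup and its variants are made-up names). Suppose all callers ever need is the distinct elements of a list, in any order. Both programs below sit inside the territory of correct programs, but the test only accepts one of them:

def dedup_keep_order(items):
    # Correct: distinct elements, preserving first-seen order.
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def dedup_any_order(items):
    # Also correct under the spec: distinct elements, in whatever order.
    return list(set(items))

def test_dedup(dedup):
    # Brittle: pins down an incidental choice (output order) that the
    # spec never promised.
    assert dedup([3, 1, 3, 2]) == [3, 1, 2]

test_dedup(dedup_keep_order)    # passes
# test_dedup(dedup_any_order)   # fails, even though this program is
#                               # just as correct as the other one

The second call fails without the program ever leaving the territory; the map disagrees with the territory.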

Modules work well for simple sub-systems without internal state. But remember the dictum to make our designs as simple as possible but no simpler. To the extent that our programs require state, stop tweaking modularity constraints endlessly and ineffectually. Instead, practice writing tests, and watch out for ways that the tests you write work only for this program and not for other correct programs. Work on ways to write more general tests rather than more general code.
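Sticking with the hypothetical dedup example from above, here's one sketch of what a 'more general test' can look like, using nothing but plain Python and the standard random module (again, not something from the post): instead of pinning down a few hand-picked outputs, state properties that every correct implementation must satisfy and check them across many inputs.

import random

def dedup(items):
    # One of many correct implementations: distinct elements, any order.
    return list(set(items))

def check_dedup_properties(trials=1000):
    for _ in range(trials):
        items = [random.randint(0, 9) for _ in range(random.randint(0, 20))]
        result = dedup(items)
        assert set(result) == set(items)        # nothing lost, nothing invented
        assert len(result) == len(set(result))  # no duplicates
        # Deliberately silent about order, timing and internal layout:
        # any program inside the territory passes.

check_dedup_properties()

Property-based testing tools like QuickCheck and Hypothesis automate this style, generating the inputs and shrinking failing cases for you.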

With apologies to Fred Brooks:

Show me a program's code but not its tests, and I shall continue to be mystified. Show me its tests, and I won't usually need its code; it'll be obvious.

(Thanks to Kinnard Hockenhull for the conversations that resulted in this post.)
