May 22, 2016
My mission

I'm working on ways to better convey the global structure of programs. The goal: use an open-source tool, get an idea for a simple tweak, fork the repo, orient yourself, and make the change you visualized -- all in a single afternoon. I'd like to live in such a world, wouldn't you?


Here's why I think this is a good goal to aim for. There's a curious blind spot in the way programmers tend to see the world. For all our constant attempts to make things better by creating new languages and tools, we tend not to notice the gravity-like tendency of projects to grow more complex, brittle and ossified over time. The pervasiveness of this phenomenon makes it hard to distinguish the real benefits of a new language or tool. Is your language really that much better than the ones that came before, or is it just temporarily better by having less historical baggage? It's a curious case of the cognitive bias called fundamental attribution error. Until we understand why projects grow more complex over time, and can change that trajectory for the long term, most ideas we come up with are merely temporary stop-gaps.

Projects ossify because they accumulate complexity over time. In my experience there are three major reasons for this accumulation:

A. Backwards compatibility considerations. Early mistakes in the design of an interface are often perpetuated indefinitely. Supporting them takes code. Projects that add many new features also accumulate many missteps. Over time the weight of these past adaptations starts to prevent future adaptation.

B. Churn in personnel. If a project lasts long enough, early contributors eventually leave and are replaced by new ones. The new ones have holes in their knowledge of the codebase: all the different facilities provided, and the reasons why design decisions were made just so. Peter Naur pointed out back in 1985 (in "Programming as Theory Building") the odd fact that no matter how much documentation we write, we can't seem to help newcomers understand our programs without talking to the original authors. In-person interactive conversations tend to be a precious resource; there are only so many of them newcomers can have before they need to start contributing to a project, and there's only so much bandwidth the old hands have to review changes for unnecessary complexity or over-engineering. Personnel churn is a lossy process; every generation of programmers on a project tends to know less about it, and to be less in control of it.

C. Vestigial features. Even after accounting for compatibility considerations, projects past a certain age often have features that can be removed. However, such simplification rarely happens because of the risk of regressions. We forget precisely why we did what we did, and that forces us to choose between reintroducing regressions or continuing to cargo-cult old solutions long after they've become unnecessary.

There may be other phenomena I haven't considered, but these three suffice to illuminate a crucial point: they're independent of most technology choices we tend to think about. A new language or tool will at best have a short-term effect unless we're able to keep the big picture of a codebase understandable over time as members join and leave it.

Constraints on a solution

In direct correspondence to the above list, I've nailed down the following primary design invariants:

A. Minimize compatibility constraints. We can't avoid users creating habits and muscle memory in the tools they use, but things are different when we're building libraries for other programmers. In that situation the user of our creation is another programmer with the ability to empathize with our situation if given the right context, and without our API deeply embedded in muscle memory.

B. Be friendly to outsiders. Many best practices we teach programmers today help insiders manage a project but hinder understanding in newcomers. In a strange new project straight-line code is usually easier to follow than lots of indirection and abstractions. Comments are of limited value because most comments explain local features, but fail to put them in a global context. Build systems that automate a lot of work in our specialized, industrial-strength setup turn out to be brittle when run for the first time on someone else's laptop.

C. Be rewrite-friendly. Rewrites are traditionally considered a bad idea, but debating whether they're good or bad misses the point. Software is the most malleable medium known to man, and we should do all we can to embrace the essential malleability that is its primary characteristic and boon. If rewriting is stressful and error-prone, perhaps we're doing software all wrong in the first place.

These can seem like difficult constraints to satisfy independently, let alone simultaneously, but it seems likely that conveying global structure lies on the road to them all. If a project is to remain easy to understand over the long term, it can't afford to carry historical baggage; it is by definition friendly to outsiders; and it has to have been constantly getting rewritten.

Having carved out these constraints, everything else is open to question. I'm willing to try global variables, large functions, even goto statements if I can trade understandability in the large for understandability in the small. Our instincts on the right way to manage complexity have failed to yield dividends, so it's time to go against our instincts.

Brass tacks

I'm a lot less certain about the solution than about the problem described in the previous sections, but I've been exploring some promising mechanisms in my current project, Mu:

  1. Deemphasize interfaces in favor of tests. Automated tests are great not just for avoiding regressions and encouraging a loosely-coupled architecture, but also for conveying the big picture of a project. They can support all three invariants above:

    • A. If a library and its users both have good tests, compatibility becomes a lot less important. It's not that hard to change how you call a library when you upgrade, if you can be confident that you'll be alerted to any breakage. It's better to guarantee that upgrades take an hour 100% of the time than that they take no time 80% of the time.

    • B. Instead of describing your architecture, point newcomers at core tests that they can instrument or step through in a debugger. That allows them to focus first on the concrete series of operations desired in a situation rather than how those operations happen to be organized in code.
    • C. Tests help rewrites. What makes rewrites unpopular isn't the work required for the rewrite, it's the uncertainty and stress at the prospect of introducing subtle regressions. Tests could eliminate that uncertainty and stress.

    However, modern tests aren't good enough for our purposes. I'm exploring expanding testing in two directions to help gain confidence that everything is well when all tests pass:

    1. Testable interfaces at all levels of the stack. I'd like to be able to check that my server can send email even when the disk is full, for example, or that my browser remains responsive even for really long webpages. I think the conventional wisdom about not testing I/O is just an artifact of having non-testable interfaces for I/O. For example, a more testable interface for printing to screen would be to require a pointer to a screen object, so that tests can print to a fake screen and check what was printed.
    2. White-box testing. Instead of checking that a function merely returns the expected value, allow it to log domain-specific facts that it discovered in the process, and encourage tests to check the log for specific features. This would allow testing more than just functional correctness — performance, fault tolerance, race conditions, etc. For example, you would be able to check that your search function doesn't double the number of lookups required when it searches through double the data. More details →
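To make the two directions above concrete, here's a tiny sketch in Python. Everything in it is invented for illustration — Mu's actual interfaces look nothing like this — but it shows both moves: I/O routed through a screen object that tests can fake, and a log of domain facts that a white-box test can inspect.

```python
class FakeScreen:
    """Stands in for a real terminal so tests can inspect output."""
    def __init__(self, width=80):
        self.width = width
        self.contents = []

    def print(self, text):
        self.contents.append(text)

trace_log = []  # white-box log of domain facts, not implementation details

def greet(screen, name):
    trace_log.append(("greeting", name))  # domain fact: someone was greeted
    screen.print("hello, " + name)

# Black-box check through the testable interface:
screen = FakeScreen()
greet(screen, "world")
assert screen.contents == ["hello, world"]

# White-box check against the log:
assert ("greeting", "world") in trace_log
```

The point of the log check is that it asserts a fact about the domain ("someone was greeted") rather than about the implementation, so it can survive a rewrite of greet().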
  2. Deemphasize abstractions in favor of traces. Traces are like logs, but restricted to the timeless properties of the domain that the program deduces, as opposed to details of the current implementation. Traces help support two of the three invariants above:

    • B. I think it might be more helpful for newcomers to follow a curated ‘curriculum’ of traces in sequence rather than grope blindly through the deep directory tree of a large application. For example, the repository for a text editor might guide users first to a trace of the events that happen between pressing a key and printing a character to screen. Such a trace would demarcate the major sub-systems of the codebase, and each line in it could point back at code, silently jumping past details like what the precise function boundaries happen to be at the moment.
    • C. Traces can also help veterans more easily manage and rewrite a system. Since traces can be associated with levels of detail (akin to log levels), it is possible to skim them more effectively than a flat listing or a step-by-step debugger. When tracking down bugs I've been finding it very helpful to use a ‘zoomable’ interface to bounce around the trace of a program in a random-access manner.
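Here's a toy sketch of what such a zoomable trace might look like (the labels and levels are invented; this isn't Mu's actual trace format). Each line carries a level of detail, so skimming at level 1 shows just the major beats between keypress and screen:

```python
trace = []  # each entry: (level_of_detail, label, message)

def emit(level, label, message):
    trace.append((level, label, message))

def handle_keypress(ch):
    emit(1, "event", "key pressed: " + ch)
    emit(2, "edit", "inserting " + ch + " into buffer")
    emit(3, "render", "repainting current line")
    emit(1, "screen", "printed: " + ch)

handle_keypress("a")

def skim(max_level):
    """Zoomed-out view of the trace: keep only coarse-grained lines."""
    return [(label, msg) for level, label, msg in trace if level <= max_level]

# Zoomed out: just the major events.
assert skim(1) == [("event", "key pressed: a"), ("screen", "printed: a")]
# Zooming in reveals the details.
assert len(skim(3)) == 4
```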
  3. Deemphasize modules in favor of layers. The key property of an application organized in a sequence of layers is that you can stop after any layer in the sequence and build a working app that runs and passes all its tests. It just has fewer features than the complete version. Layers help support one invariant:

    • B. Layers help make a program more approachable to outsiders by enabling them to first play with a simpler version of the program. It's a cleaned-up history of the codebase, focusing on just the core before gradually introducing more peripheral features. More details →
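As a toy illustration of that key property — again hypothetical; Mu's layers operate on source files rather than Python functions — each layer below adds one feature, and building with any prefix of the layers yields a working app that passes its own tests:

```python
def layer1_core(app):
    # The simplest complete app: a buffer you can insert into.
    app["commands"] = {"insert": lambda buf, ch: buf + ch}
    return app

def layer2_delete(app):
    # A later layer adds deletion without disturbing the core.
    app["commands"]["delete"] = lambda buf: buf[:-1]
    return app

LAYERS = [layer1_core, layer2_delete]

def build(n_layers):
    """Build the app from just the first n layers."""
    app = {}
    for layer in LAYERS[:n_layers]:
        app = layer(app)
    return app

# Stop after layer 1: fewer features, but still a working app.
v1 = build(1)
assert v1["commands"]["insert"]("abc", "d") == "abcd"

# The full build layers deletion on top.
v2 = build(2)
assert v2["commands"]["delete"]("abcd") == "abc"
```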

It's worth asking, if these techniques are so great why haven't they been tried before? I think it's because of a ‘drawback’ they all share: they're too powerful, and it's easy to shoot yourself in the foot. It takes taste to trace just domain-independent facts and not implementation details, to make white-box tests robust to radical changes, to organize an app into layers. But I think requiring taste is a good thing. A prime reason we forget important aspects of codebases is that we create rules and processes around them, and those who follow us have no reason to remember the reasons. If every generation is allowed to make its own mistakes the reasons will stay in the air and not get lost. Our software should reward curiosity.

Did any of this resonate? Check out the project, Mu! Try skimming its sources! Tell me what you think about any of this!

(An older version of this mission statement.)
