Oct 7, 2013
A new way to organize programs

If you're a programmer, this has happened to you. You've built or known a project that starts out with a well-defined purpose and a straightforward mapping from purpose to structure across a few sub-systems.
But it can't stand still; changes continue to roll in.
Some of the changes touch multiple sub-systems.
Each change is coherent in the mind of its maker. But once it's made, it diffuses into the system as a whole.
Soon the once-elegant design has turned into a patchwork quilt of changes. Requirements shift, parts get repurposed. Connections proliferate.
Veterans who watched this evolution can keep it mostly straight in their heads. But newcomers see only the patchwork quilt. They can't tell where one feature begins and another ends.

What if features could stay separate as they're added, so that newcomers could see a cleaned-up history of the process?

Solution

Start with a simple program.

// Includes
// End Includes

// Types
// End Types

// Globals
// End Globals

int main(int argc, char* argv[]) {
  if (argc > 1) {
    // Commandline Options
  }
  return 0;  // End Main
}

It doesn't do much; it's just a skeleton. But it's a legal program, and it builds and runs. Now features can insert code at well-defined points in this program:

:(after "Includes")
#include <list>
#include <string>

The first line above is a directive to find an existing line matching the pattern "Includes", and insert the following lines after it. You can insert arbitrary fragments at arbitrary points, even inside functions:

:(after "Commandline Options")
if (arg == "test")) {
  run_tests();
  return 0;
}

A simple tool will splice them into the right places:

int main(int argc, char* argv[]) {
  if (argc > 1) {
    // Commandline Options
    if (std::string(argv[1]) == "test") {
      run_tests();
      return 0;
    }
  }
  return 0;  // End Main
}
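
Here's one minimal sketch of how such a tool might work. It's only an illustration of the idea, not the actual tool in wart/literate, and it assumes the layer files are passed on the command line in order and that every directive is well-formed:

#include <fstream>
#include <iostream>
#include <list>
#include <string>
using namespace std;

int main(int argc, char* argv[]) {
  list<string> program;  // the combined program, one line per element
  for (int i = 1; i < argc; ++i) {  // process layer files in order
    ifstream in(argv[i]);
    list<string>::iterator cursor = program.end();  // where fragments get spliced in
    string line;
    while (getline(in, line)) {
      if (line.compare(0, 8, ":(after ") == 0) {
        // Extract the pattern between the double quotes.
        size_t open = line.find('"');
        string pattern = line.substr(open+1, line.rfind('"')-open-1);
        // Find the first existing line containing the pattern...
        cursor = program.begin();
        while (cursor != program.end() && cursor->find(pattern) == string::npos)
          ++cursor;
        // ...and move the cursor just past it.
        if (cursor != program.end()) ++cursor;
      }
      else {
        program.insert(cursor, line);  // splice this line in at the cursor
      }
    }
  }
  // Emit the combined program for the compiler.
  for (list<string>::iterator p = program.begin(); p != program.end(); ++p)
    cout << *p << '\n';
  return 0;
}

Each layer file is read line by line; a directive moves the insertion cursor, and every other line gets spliced in at the cursor. The real tool can be fancier, but this captures the essential mechanism: invoked with the layer files in order, it prints the combined program the compiler actually sees.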

With the help of directives, we can now organize a program as a series of self-contained layers, each containing all the code for a specific feature, regardless of where that code needs to go. A feature might be anything from a test harness to a garbage collector. Directives are almost too powerful, but used tastefully they make this decomposition natural.
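
For example, a hypothetical test-harness layer might hook into several of the skeleton's anchors at once. The file below is purely illustrative; its names and contents are made up for this example, not copied from wart:

:(after "Includes")
#include <iostream>

:(after "Globals")
long num_failures = 0;

void test_arithmetic() {
  if (1 + 1 != 2) ++num_failures;
}

void run_tests() {
  test_arithmetic();
  // More Tests
  std::cerr << num_failures << " failure(s)\n";
}

Notice that a layer can also introduce new anchors of its own, like "// More Tests" above, giving later layers a well-defined point to hook into.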

We aren't just reordering bits here. There's a new constraint that has no counterpart in current programming practice — remove a feature, and everything before it should build and pass its tests:

$ build_and_test_until 000organization
$ build_and_test_until 030
$ build_and_test_until 035
...

Being able to build simpler versions of a program is incredibly useful to newcomers. Playing with those versions helps build fluency with the project's global architecture. Newcomers can learn and test themselves in stages, starting with the simplest version and gradually introducing the complexities of peripheral features. Try it for yourself, and tell me what you think!

$ git clone http://github.com/akkartik/wart
$ cd wart/literate
$ make   # builds and runs tests; needs gcc and unix

This example is in C++, but the idea applies to any language. Free yourself of the constraints imposed by your compiler, and write code for humans to read.

Rationale and history

You might have noticed by now that directives are just a more readable way to express patches. This whole idea is inspired by version control. For some time now I've been telling people about a little hack I use for understanding strange codebases: I skim the git log until I find a tantalizing tag or commit message, like, say, "version 1". Invariably version 1 is far simpler than the current version. The core use cases haven't yet been swamped by an avalanche of features and exceptions. The skeleton of the architecture is more apparent. So I focus on this snapshot, and learn all I can. Once I understand the skeleton I can play changes forward and watch the codebase fill out in a controlled manner.

But relying on the log for this information is not ideal, because the log is immutable. Often a codebase might have had a major reorg around version 3, so that reading version 1 ends up being misleading, doing more harm than good. My new organization lays out this time axis explicitly in a coherent way, and makes it available to change. The sequence of features is available to be cleaned up as the architecture changes and the codebase evolves.

Codebases have three kinds of changes: bugfixes, new features and reorganizations. In my new organization bugfixes modify a single layer, new features are added in new layers, and reorganizations might make more radical changes, including changing the core skeleton. The hope is that we can now attempt more ambitious reorganizations than traditional refactoring, especially in combination with tracing tests. I'll elaborate on that interaction in the future.

comments

      
  • br7tt, 2013-10-07: This is really interesting. I've never seen an approach to modularizing code that takes time into account. The two kindred ideas that come to mind are Aspect-Oriented Programming and Ruby's Modules.

    What does the file structure for the layers look like? Is wart/literate the best place to go to read some code?

    Also, what do you think is the consequence of using anchors to specific points in the text rather than something more oriented towards the structure of the system?

        
    • Kartik Agaram, 2013-10-07: Thanks for the kind words! Yes, check out wart/literate, but with a caveat. I'm a lot less certain of the solution than I am of the problem, and there may well be a better option. I tend to be liberal; fewer restrictions mean less code, a more understandable solution, and less room for bugs. So I picked this patch-like syntax as the easiest thing that would work without AOP's restrictions on join points.

      (It was sharp of you to spot the connection to Ruby modules; I actually started wart wanting to take ruby's notion of open classes to functions in lisp: http://akkartik.name/blog/wart. It's one of my favorite ideas.)   

          
      • br7tt, 2013-10-07: It feels like you almost need a graphical programming environment to maximize (not readability) intelligibility (?). Files don't allow you to see the whole structure at once and how it evolves in time. There's also always the tension between being able to understand the code (genotype) and being able to understand the system that the code generates (phenotype).
            
        • Anonymous, 2013-10-08: br7tt, as poor Kartik has heard one too many times, I agree. We should be reasoning about executing programs, not source code.
              
          • Kartik Agaram, 2013-10-08: Hmm, are you thinking about *multiple* interacting programs? Just unit tests help reason about executing programs, right? Don't all programming languages help to varying degrees to reason about runtime? What would your ideal system look like? I think I've forgotten past context.. :/
            
        • Kartik Agaram, 2013-10-08: You know, I had a similar plan sometime last year. I got all excited about https://www.leapmotion.com, and I imagined a world where programmers would interact with their programs like life-size motor engines over 30-inch monitors. The 3D embodiment would help comprehension, I thought, but crucial was the ability to zoom in and out and see coherent maps of the codebase.

          Then I thought of layers this March, and left this idea behind :) Partly because I've always been skeptical of secondary incentives created by tools. The moment you create a tool that makes something easy, programmers will overload it and build software complex enough that it becomes a pain. The path to utopia passes through making them take some responsibility for their tools, IMO. So don't try to maximize intelligibility at a single point in time. If you think about the trajectory, I think text files are still a win because we don't yet know how to do graphics with a lot fewer dependencies. Perhaps if I could get on the stack that Alan Kay is building... http://www.vpri.org/pdf/tr2009016_steps09.pdf

      
  • David Barbour, 2013-10-09: You model a stream of transforms/rewrites on a codebase.

    Observations:

    * this would be a lot easier if performed upon a target language constructed for deep access and rewriting

    * similar manipulations apply to non-code artifacts, such as editing of documents, images, 3D models

    * updates must be extracted by the programmer's environment, from user actions or resulting diffs in state.

    * your directives are likely to be lossy, fragile, e.g. not tracking copy-and-paste actions or decision-making process that went into selecting code.

    There is some relationship to issues addressed in my recent manifesto:

    https://groups.google.com/forum/#!topic/reactive-demand/gazxhLLXscQ

    I'm talking about manipulating a personal environment, not a shared codebase. But I think a similar design is appropriate.   

      
  • Sae Hirak, 2013-10-12: I really like this idea. I'm going to experiment with adding it to Nulan and see what happens. I suspect hyper-static scope will get in the way, though. We'll see.   
        
    • Kartik Agaram, 2013-10-12: Thanks! Yeah, tell me how it goes. I too want to try it with a high-level language, and js would be a good candidate. I wouldn't be surprised if the metaprogramming features of a HLL obviate the need for my patch-based directive format.
      
  • Anonymous, 2013-11-08: Interesting, have been thinking about something similar to this for a while.   
        
    • Kartik Agaram, 2013-11-08: Thanks. I'd love to hear more about your ideas.
      
  • Anonymous, 2013-11-25: Reminds me of AOP and Desugaring.   
        
    • peterlund, 2014-11-24: And literate programming (tangle, weave).
      
  • ARaybold, 2016-07-09: As others have mentioned, this looks very much like Aspect-Oriented Programming, which uses the term cross-cutting concern for features that do not fit naturally into the program's hierarchical structure. It looks as though your approach would be prone to the same sorts of problems as AOP: 1) While a cross-cutting concern may be conceptually coherent, its implementation often cannot be understood without looking at the larger code in considerable detail, and, in particular, may affect and/or be affected by the side-effects of other concerns; 2) changes often cross-cut existing cross-cutting concerns; 3) changes are often best approached by first refactoring the existing code - this is an important special case of 2). AOP may not make any of these things worse, but they may tend to diminish its usefulness.
        
    • Kartik Agaram, 2016-07-09: I thought the others who mentioned AOP mostly did it as related work I should be aware of, but not to say it's similar enough to the point where AOP's drawbacks translate automatically. The big differences with AOP are that a) I can insert code anywhere I want, no need for specially blessed join points, and that b) AOP still continues to build just the final monolith, whereas I add constraints about what combinations of concerns are required to continue to work on an ongoing basis. I think these additions make this a completely different beast from AOP:

      1) I don't require each layer to be understandable just by reading it in isolation. I'm constantly building my program just up to a certain layer and then reading the generated code, or stepping through it in a debugger. The goal is to support interactivity in the learning process, or active reading if you like.

      2) Layers absolutely patch previous layers. That's where they get all their power from. I don't make the distinction between "base program" and "cross-cutting concern". If I did, everything but layer 1 would be cross-cutting concerns. That's near 100% of the program, since I've already shown my layer 1 above: it's an empty skeleton that does nothing.

      Layers are more powerful than cross-cutting concerns. They're powerful enough that they become a completely first-class unit of organization. As Alan Kay put it, "take the hardest and most profound thing you need to do, make it great, and then build every easier thing out of it."

      3) Since I don't require a "good architecture" before adding a new layer, because my mechanism for inserting code is just arbitrary patching, and since I constantly make sense of my program by looking at the "joined" version the compiler sees, development is absolutely not blocked on reorganizing.

      This doesn't mean there are no drawbacks. I tried to enumerate them in this post, and I'll do a follow-up at some point about lessons learned. But I feel confident that I am making new mistakes and not just repeating AOP's missteps.

      
  • qznc, 2017-05-30: OOP was supposed to fix this problem, although the target was not to make programs more readable, but to increase stability. Changing existing code always has the risk that you break it. The Open-Closed-Principle is an expression of this intention.

    However, in practice this leads to an over-engineered mess of factories and other design patterns, because developers try to support every possible hook.

        
    • Kartik Agaram, 2017-05-30: Absolutely. I suspect that with hindsight the analogy between software and lego may be one of the most damaging ideas in our history, turning us from programmers to line managers at some factory. Modifying code is what we *do*, and we should embrace that. Things become stable when requirements converge, not when you put them behind a plate of glass to prevent yourself and others from touching them. Not allowing ourselves to change some (most) of a codebase just makes our problems much harder, by forcing us to generalize simple solutions to anything that anybody anywhere may want to do. There's no way you'll end up at a standard connector. You end up with every possible hook, as you say.

      Worth reading is Martin Sústrik's essay on software as memes subject to natural selection.

      
  • Kartik Agaram, 2017-09-29: _"It is usually possible to learn about the principles of a system by watching it under construction.. In my experience complex systems have been built with layered abstractions.. As a system boots, older services organize themselves before the newer ones."_ http://cap-lore.com/Software/Layered.html   
  • Anonymous, 2019-12-11: Hi Kartik Agaram.

    Your profile image [1], although it looks like a small thumbnail, is a 5.1 MB PNG which takes years to load on slow connections.

    [1]: https://cdn.commento.io/api/commenter/photo?commenterHex=472ec38f958dc6b9b3a5498e52b635bf68c4e1645fdb8fd5b364409aa207b659   

        
    • Kartik Agaram, 2019-12-11: Thank you for the report! Looks like Commento isn't resizing the images I send them. I've opened a support request with them.   
    • Kartik Agaram, 2019-12-15: Should be fixed now.
      
  • Anonymous, 2022-02-06: https://gateway.pinata.cloud/ipfs/QmeVYAP75GAvY8Q8iSfMoWMGgTPjvRh2xcM7Zb6qEop2VZ?preview=1   
  • Anonymous, 2022-08-16: You're chafing for the ability to write your program top-down in a language that is designed in such a way that it is otherwise prohibited.

    Programmers familiar with C and Pascal are notorious for their advice to attempt to understand a program by starting at the bottom of the file (e.g. where main usually is) and working backwards. They then become so habituated to this way of working that it ends up being a sort of missing stairstep, not perceived as a deficiency in the programming system they are using. Some who are able to see a bit further end up writing their own idiosyncratic programming system, which they evangelize to the world under the label "literate programming".

        
    • Kartik Agaram, 2022-08-16: I think you're confusing top-down with putting main up on top. You can design a program top-down and still put main at the bottom.

      Regardless, the point of this article (and something that goes beyond literate programming) is that there shouldn't be a single top. Keeping multiple variants of a program in sync helps readers triangulate on the big picture.   

          
      • Anonymous, 2022-08-18: My contention is that comprehension benefits by being both conceptually and physically laid out top-down.

        > You can design a program top-down and still put main at the bottom.

        Sure, you *can*. What's the benefit? Doing so is counterproductive. Put the stuff people are expected to read first first.

        > the point of this article[...] is that there shouldn't be a single top

        Are you sure? Maybe your thinking eventually evolved away from what you wrote here, but this article doesn't come off that way. It's rather opposed to that, at least as written.   

            
        • Kartik Agaram, 2022-08-18: > Put the stuff people are expected to read first first.

          I totally agree. I just disagree that people are always expected to read main first. As a counter-example, I usually care much more about the data structures.

          > Are you sure? Maybe your thinking eventually evolved away from what you wrote here, but this article doesn't come off that way. It's rather opposed to that, at least as written.

          Certainly possible, but after a second look I'm not sure what you're seeing. Sure, the example has main up top. But that's just one layer. You can't have main in more than one layer.

          Around the same time I wrote this I also wrote http://akkartik.name/post/literate-programming where I said, "There's a pervasive mindset of top-down thinking, of starting from main, whether or not that's easiest to read."

          I've certainly used top-down in many of my layers. And I certainly don't advocate for the style of putting main at the bottom just to please a compiler. But it muddies the waters to conflate a design methodology with syntactic ordering. Neophyte readers are liable to think they're doing top-down design just by pushing main up top.

          And bottom-up does make sense sometimes. For example, here's this layer (again from around the time of this post) that starts tracking reference counts to cons cells. "Top down" doesn't feel meaningful for this layer.   

              
          • Anonymous, 2022-08-18: I think there's a mismatch in what we're talking about.

            It sounds like you're still conceptualizing the final form of the program text as if it were a traditional program and wrapping your approach around that. This preoccupation is what I'm referring to. As if there's this quaint approach to communicating how the traditional program evolved, but at the end of the day it's a fiction and the tangled C is still the *real* program.

            What I'm saying is forget that.

            We should care no more about the tangled version than the average programmer cares about the intermediate assembly that their compiler produces.

            To truly prioritize programs-for-humans-first-and-not-machines (the compiler for the target language itself being an abstract machine), then that means we should concern ourselves *only* with the input that gets fed into this programming system, for *that* is the program. (Again, just as contemporary programmers conceive of their C source as constituting their canonical program, and not the assembly that it produces. For all most programmers care, it would be fine for you to write a single-pass compiler to generate their object file(s) directly from the C if that were actually possible. Same thing here—no tangle, just input.)   

                
            • Kartik Agaram, 2022-08-18: I'm not yet completely following your thesis. Do you have some vision of the ideal input that gets fed into the programming system, or are you just alluding to my layered organization in this case? Or are you saying we shouldn't concern ourselves with these threadbare abstractions, if we're programming C we should focus on the code actually fed into the compiler? Or are you saying I should go in the opposite direction and make the abstraction of layers more high-level?

              Basically I'm wondering: What would you add/subtract? Or would you do something totally different?   

                  
              • Anonymous, 2022-08-18: I'm saying almost the inverse of "if we're programming C we should focus on the code actually fed into the compiler".

                If we're "programming" our colleagues' mental model with some gentle, thoughtful introduction that we've arranged for the benefit of comprehension, then we should use the same input when explaining to the computer what the program does. Abandon all notions that we're programming C. Forget that C exists, even. It should not be our goal to produce a C program. Let's not say, "here is our program, which is what the compiler and I-the-author operate on when we're doing work, but on the other hand here is a layered narrative to explain how we got to this point". Instead, throw out the former entirely and let there be only the latter. It's what our colleague references, it's what we feed into the compiler, and it's what we deal with ourselves when working on the program.   

                    
                • Kartik Agaram, 2022-08-18: I see, so building a whole new, more high-level language?   
                      
                  • Anonymous, 2022-08-18: Yes. Not being constrained by having to submit either to a language that prevents you from writing a program description in the best way possible OR to what amounts to a thin macro layer above such a system.

                    There should really only be two levels of concern: the concrete, extreme low-level (i.e. the machine) and then the high-level that you actually desire to use for communicating about the behavior and design rationale of the program. Since C isn't bedrock, don't make it an essential detour. (In a way, this is the progression that C++ and Objective-C made. They stopped trying to be glorified preprocessors and began aiming to be languages unto themselves, with varying levels of success at shedding their legacy. What would a language look like if it kept these design goals in mind?) Most programmers pay little attention to the first, lowest level. If there *is* to be an intermediate level (and it's not clear that there should be), then it should demand even less attention than that, not more.   

        
    • Kartik Agaram, 2022-08-18: Bumping up a bit, I find https://www.teamten.com/lawrence/programming/write-code-top-down.html a bit too draconian. I like top-down a lot of the time, but it would be nearly as stifling to have to put main at the top as it is at the bottom.

      This is my favorite defense of bottom-up programming: http://www.paulgraham.com/progbot.html

Comments gratefully appreciated. Please send them to me by any method of your choice and I'll include them here.
