Feb 12, 2014
Consensual Hells: From cognitive biases to institutional decay

I've just published the first of a series of guest posts over at ribbonfarm.com. Check it out.


* *
Oct 7, 2013
A new way to organize programs

If you're a programmer this has happened to you. You've built or known a project that starts out with a well-defined purpose and a straightforward mapping from purpose to structure in a few sub-systems.
But it can't stand still; changes continue to roll in.
Some of the changes touch multiple sub-systems.
Each change is coherent in the mind of its maker. But once it's made, it diffuses into the system as a whole.
Soon the once-elegant design has turned into a patchwork-quilt of changes. Requirements shift, parts get repurposed. Connections proliferate.
Veterans who watched this evolution can keep it mostly straight in their heads. But newcomers see only the patchwork quilt. They can't tell where one feature begins and another ends.

What if features could stay separate as they're added, so that newcomers could see a cleaned-up history of the process?

Solution Read more →

* *
Oct 6, 2013
How I describe my hobby to programmers

(I still haven't found a way to describe it to non-programmers.)

Wart is an experimental, dynamic, batshit-liberal language designed to eventually be used by small teams of hobbyists writing potentially long-lived software. The primary goal is to be easy for anyone to understand and modify. This goal goes for both code written in Wart, and the code implementing Wart.

$ git clone http://github.com/akkartik/wart
$ cd wart
$ git checkout c73dcd8d6  # state when I wrote this article
$ ./wart      # you'll need gcc and unix
ready! type in an expression, then hit enter twice. ctrl-d exits.
⇒ 2

Read more →

* *
Aug 13, 2013
The trouble with 'readability'

We programmers love to talk about the value of readability in software. But all our rhetoric, even if it were practiced with diligence, suffers from a giant blind spot.

Exhibit A

Here's Douglas Crockford on programming style. For the first half he explains why readability is important: because our brains do far more subconsciously than we tend to realize. The anecdotes are interesting and the presentation is engaging, but the premise is basically preaching to the choir. Of course I want my code to be readable. Sensei, show me how!

But when he gets to ‘how’, here is what we get: good names, comments and consistent indentation. Wait, what?! After all that discussion about how complex programs are, and how hard they are to understand, do we really expect to make a dent in global complexity with a few blunt, local rules? Does something not seem off?

Exhibit B

Here's a paean to the software quality of Doom 3. It starts out with this utterly promising ideal:

Local code should explain, or at least hint at, the overall system design.

Unfortunately we never hear about the 'overall system design' again. Instead we get: good names, comments and indentation, culminating in the author's ideal of beauty:

The two biggest things, for me at least, are stylistic indenting and maximum const-ness.

I think the fundamental reasons for the quality of Doom 3 have been missed. Observing superficial small-scale features will only take you so far in appreciating the large-scale beauty of a program.

Exhibit C

Kernighan and Pike's classic Practice of Programming mostly takes the code writer's side. For reading you're left again with guidelines in the small: names, comments and indentation.


I could go on and on. Every time the discussion turns to readability we skip almost unconsciously to style guides and whatnot. Local rules for a fundamentally global problem.

This blind spot is baked into the very phrase ‘readable code’. ‘Code’ isn't an amorphous thing that you manage by the pound. You can't make software clean simply by making all the ‘code’ in it more clean. What we really ought to be thinking about is readable programs. Functions aren't readable in isolation, at least not in the most important way. The biggest aid to a function's readability is to convey where it fits in the larger program.

Nowhere is this more apparent than with names. All the above articles and more emphasize the value of names. But they all focus on naming conventions and rules of thumb to evaluate the quality of a single name in isolation. In practice, a series of locally well-chosen names gradually ends up in overall cacophony. A program with a small harmonious vocabulary of names consistently used is hugely effective regardless of whether its types and variables are visibly distinguished. To put it another way, the readability of a program can be hugely enhanced by the names it doesn't use.

Part of the problem is that talking about local features is easy. It's easy to teach the difference between a good name and a bad name. Once taught, the reader has the satisfaction of going off to judge names all around him. It's far harder to show a globally coherent design without losing the reader.

Simple superficial rules can be applied to any program, but to learn from a well-designed program we must focus on what's unique to it, and not easily transferred to other programs in disparate domains. That again increases the odds of losing the reader.

But the largest problem is that we programmers are often looking at the world from a parochial perspective: “I built this awesome program, I understand everything about it, and people keep bugging me to accept their crappy changes.” Style guides and conventions are basically tools for the insiders of a software project. If you already understand the global picture it makes sense to focus on the local irritations. But you won't be around forever. Eventually one of these newcomers will take charge of this project, and they'll make a mess if you didn't talk to them enough about the big picture.

Lots of thought has gone into the small-scale best practices to help maintainers merge changes from others, but there's been no attempt at learning to communicate large-scale organization to newcomers. Perhaps this is a different skill entirely; if so, it needs a different name than ‘readability’.


  • Mark Ruzon, 2013-08-14: Communicating the meaning of large amounts of code can't be done in the code itself. It requires thorough documentation that has to be kept up to date as pieces change. There is no substitute for discipline and diligence.

    And I'm very parochial when it comes to my code. I admit it. To me the art and/or science of programming is as important as producing a great result.   

    • Kartik Agaram, 2013-08-14: "I'm very parochial when it comes to my code. I admit it. To me the art and/or science of programming is as important as producing a great result."

      Hmm, perhaps I came off harsher than I intended. Caring about the art of programming isn't parochial. It's what I'm doing here as well, I hope.

      I meant that it's counter-productive to focus on the creator's role and ignore that of future contributors. Surely the science of programming should care about the entire life cycle of a codebase rather than just who happens to be running the show at the moment?   

    • Kartik Agaram, 2013-08-14: "There is no substitute for discipline and diligence."

      I'm not trying to replace discipline and diligence (hence my use of the word at the start of the article). I'm arguing that we've been aiming for the wrong (incomplete) goal all along. Perhaps this explains why our results are so abysmal regardless of how hard we try.

  • Anonymous, 2013-08-31: I like the "readable programs not code" focus. But can we do even better? Computers are interactive. Why can't we ask useful questions about our programs, and get useful explanations? Why can't we obtain information about how a function is actually used? Why can't we animate its execution and how it touches objects or globals?

    I wonder if conversational programming or live programming might offer a better basis for understanding our code.

    But we may also need to simplify our languages. Use of callbacks, call/cc, shared state, etc. don't do much to enhance grokkability of our programs.   

    • Kartik Agaram, 2013-08-31: Thanks David! The solution I've been working on does indeed use interactivity. I hope to post it in the next couple of days.

      I fear, though, that you will be disappointed by my direction. I don't know how to automate the visualization of arbitrary programs like Bret Victor wants; the high-level concepts they need to operate with seem always to be quite domain-specific. So my approach is a baby step compared to RDP.   

      • Anonymous, 2021-03-17: How can I see your solution?
  • Kartik Agaram, 2014-06-17: "Ironically, the aspect of writing that gets the most attention is the one that is least important to good style, and that is the rules of correct usage. Can you split an infinitive? Can you use the so-called fused participle? There are literally (yes, "literally") hundreds of traditional usage issues like these, and many are worth following. But many are not, and in general they are not the first things to concentrate on when we think about how to improve writing. The first thing to do in writing well—before worrying about split infinitives—is to decide what kind of situation you imagine yourself to be in. What are you simulating when you write? That stance is the main thing that distinguishes clear vigorous writing from the mush we see in academese and medicalese and bureaucratese and corporatese." -- Steven Pinker, http://edge.org/conversation/writing-in-the-21st-century   
  • Kartik Agaram, 2016-09-19: Richard Gabriel puts it better than I ever could: http://akkartik.name/post/habitability


* *
Jun 9, 2013
A new way of testing

There's a combinatorial explosion at the heart of writing tests: the more coarse-grained the test, the more possible code paths to test, and the harder it gets to cover every corner case. In response, conventional wisdom is to test behavior at as fine a granularity as possible. The customary divide between 'unit' and 'integration' tests exists for this reason. Integration tests operate on the external interface to a program, while unit tests directly invoke different sub-components.

But such fine-grained tests have a limitation: they make it harder to move function boundaries around, whether it's splitting a helper out of its original call-site, or coalescing a helper function into its caller. Such transformations quickly outgrow the build/refactor partition that is at the heart of modern test-based development; you end up either creating functions without tests, or throwing away tests for functions that don't exist anymore, or manually stitching tests to a new call-site. All these operations are error-prone and stress-inducing. Does this function need to be test-driven from scratch? Am I losing something valuable in those obsolete tests? In practice, the emphasis on alternating phases of building (writing tests) and refactoring (holding tests unchanged) causes certain kinds of global reorganization to never happen. In the face of gradually shifting requirements and emphasis, codebases sink deeper and deeper into a locally optimum architecture that often has more to do with historical reasons than thoughtful design.

I've been experimenting with a new approach to keep the organization of code more fluid, and to keep tests from ossifying it. Rather than pass in specific inputs and make assertions on the outputs, I modify code to judiciously print to a trace and make assertions on the trace at the end of a run. As a result, tests no longer need to call fine-grained helpers directly.

An utterly contrived and simplistic code example and test:

int foo() { return 34; }
void test_foo() { check(foo() == 34); }

With traces, I would write this as:

int foo() {
  trace << "foo: 34";
  return 34;
}

void test_foo() {
  check_trace_contents("foo: 34");
}

The call to trace is conceptually just a print or logging statement. And the call to check_trace_contents ensures that the 'log' for the test contains a specific line of text:

foo: 34

That's the basic flow: create side-effects to check for rather than checking return values directly. At this point it probably seems utterly redundant. Here's a more realistic example, this time from my toy lisp interpreter. Before:

void test_eval_handles_body_keyword_synonym() {
  run("f <- (fn (a b ... body|do) body)");
  cell* result = eval("(f 2 :do 1 3)");
  // result should be (1 3)
  check(car(result) == new_num(1));
  check(car(cdr(result)) == new_num(3));
}


After:

void test_eval_handles_body_keyword_synonym() {
  run("f <- (fn (a b ... body|do) body)");
  run("(f 2 :do 1 3)");
  check_trace_contents("(1 3)");
}

(The code looks like this.)

This example shows the key benefit of this approach. Instead of calling eval directly, we're now calling the top-level run function. Since we only care about a side-effect we don't need access to the value returned by eval. If we refactored eval in the future we wouldn't need to change this function at all. We'd just need to ensure that we preserved the tracing to emit the result of evaluation somewhere in the program.

As I've gone through and 'tracified' all my tests, they've taken on a common structure: first I run some setup expressions to establish preconditions. Then I run the expression I want to test and inspect the trace. Sometimes I'm checking for something that the setup expressions could have emitted, and need to clear the trace to avoid contamination. Over time different parts of the program get namespaced with labels to avoid accidental conflict.

check_trace_contents("eval", "=> (1 3)");

This call now says, "look for this line only among lines in the trace tagged with the label eval." Other tests may run the same code but test other aspects of it, such as tokenization, or parsing. Labels allow me to verify behavior of different subsystems in an arbitrarily fine-grained manner without needing to know how to invoke them.

Other codebases will have a different common structure. They may call a different top-level than run, and may pass in inputs differently. But they'll all need labels to isolate design concerns.

The payoff of these changes: all my tests are now oblivious to internal details like tokenization, parsing and evaluation. The trace checks that the program correctly computed a specific fact, while remaining oblivious about how it was computed, whether synchronously or asynchronously, serially or in parallel, whether it was returned in a callback or a global, etc. The hypothesis is that this will make high-level reorganizations easier in future, and therefore more likely to occur.


As I program in this style, I've been keeping a list of anxieties, potentially-fatal objections to it:

  • Are the new tests more brittle? I've had a couple of spurious failures from subtly different whitespace, but they haven't taken long to diagnose. I've also been gradually growing a vocabulary of possible checks on the trace. Even though it's conceptually like logging, the trace doesn't have to be stored in a file on disk. It's a random-access in-memory structure that can be sliced and diced in various ways. I've already switched implementations a couple of times as I added labels to namespace different subsystems/concerns, and a notion of frames for distinguishing recursive calls.

  • Are we testing what we think we're testing? The trace adds a level of indirection, and it takes a little care to avoid false successes. So far it hasn't been more effort than conventional tests.

  • Will they lead to crappier architecture? Arguably the biggest benefit of TDD is that it makes functions more testable all across a large program. Tracing makes it possible to keep such interfaces crappier and more tangled. On the other hand, the complexities of flow control, concurrency and error management often cause interface complexity anyway. My weak sense so far is that tests are like training wheels for inexperienced designers. After some experience, I hope people will continue to design tasteful interfaces even if they aren't forced to do so by their tests.

  • Am I just reinventing mocks? I hope not, because I hate mocks. The big difference to my mind is that traces should output and verify domain-specific knowledge rather than implementation details, and that it's more convenient with traces to selectively check specific states in specific tests, without requiring a lot of setup in each test. Indeed, one way to view this whole approach is as test-specific assertions that can be easily turned on and off from one test to the next.

  • Avoiding side-effects is arguably the most valuable rule we know about good design. Could this whole approach be a dead-end simply because of its extreme use of side-effects? Arguably these side-effects are ok, because they don't break referential transparency. The trace is purely part of the test harness, something the program can be oblivious to in production runs.

The future

I'm going to monitor those worries, but I feel very optimistic about this idea. Traces could enable tests that have so far been hard to create: for performance, fault-tolerance, synchronization, and so on. Traces could be a unifying source of knowledge about a codebase. I've been experimenting with a collapsing interface for rendering traces that would help newcomers visualize a new codebase, or veterans more quickly connect errors to causes. More on these ideas anon.


  • Sae Hirak, 2013-06-15: This is a brilliant idea. The primary reason I don't write unit tests is because... well, unit tests tend to be larger than the code they're actually testing, so it takes a lot of effort to write them. That would be fine if the tests didn't change very often, but when refactoring your code, you have to change the tests!

    So now you not only have to deal with the effort of refactoring code, but *also* refactoring tests! I can't stand that constant overhead of extra effort. But your approach seems like it would allow me to change the unit tests much less frequently, thereby increasing the benefit/cost ratio, thereby making them actually worth it.

    I don't see any connection at all to mocks. The point of mocks is that if you have something that's stateful (like a database or whatever), you don't test the database directly, instead you create a fake database and test that instead.   

  • boxed, 2013-06-19: http://doctestjs.org/ has a mode of operation that is very similar, and it has really good pretty-printing and whitespace normalization code to handle those brittleness problems you talk about.

    One thing I try to do with my tests is assert completeness at the end. So for example, if you trace (in doctest.js parlance "print") three things: "a", {"b": 1} and 4, then if you assert "a", that object is popped from the pile of objects that have been traced. This way you can at the end do: assert len(traces) == 0. This is pretty cool in that you assert both the positive _and negative_. I use this type of thinking a lot.   
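    In the trace-harness style above, boxed's idea might look like this (a sketch with assumed names, not code from either project): each assertion consumes the matching line, and the test ends by asserting the trace is empty.

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <string>
    #include <vector>

    std::vector<std::string> Trace;

    void trace(const std::string& line) { Trace.push_back(line); }

    // Assert the line was traced, then consume it so it can't satisfy
    // a second assertion by accident.
    void check_and_pop(const std::string& expected) {
      auto it = std::find(Trace.begin(), Trace.end(), expected);
      assert(it != Trace.end());
      Trace.erase(it);
    }

    int main() {
      trace("a");
      trace("b: 1");
      check_and_pop("a");
      check_and_pop("b: 1");
      assert(Trace.empty());  // the negative: nothing unexpected was traced
    }
    ```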

  • David Barbour, 2013-10-10: I've been pursuing testing from the other side: by externalizing state (even if using linear types to ensure an exclusive writer), it becomes much easier to introduce tests as observers (semantic access to state). But I like the approach you describe here. Data driven approaches like this tend to be much more robust and extensible, i.e. can add a bunch more observers.

    The piece you're missing at the moment: generation of the trace should be automatic, or mostly so. Might be worthwhile to use a preprocessor, and maybe hot-comments, to auto-inject the trace.   

  • Kartik Agaram, 2014-01-31: Without traces, the right way to move function boundaries around is bottom-up: https://practicingruby.com/articles/refactoring-is-not-redesign   
  • Kartik Agaram, 2014-04-29: Perhaps I shouldn't worry about the effect of testing on design. Support from an unexpected source:

    "The design integrity of your system is far more important than being able to test it any particular layer. Stop obsessing about unit tests, embrace backfilling of tests when you're happy with the design, and strive for overall system clarity as your principle pursuit." -- David Heinemeier Hansson, http://david.heinemeierhansson.com/2014/test-induced-design-damage.html   

  • Anonymous, 2014-06-06: Interesting. You are basically creating an additional API, one that is used solely for testing (the trace output). The interesting challenge here is to prove whether this new API is more resistant to breakage because of refactoring when compared to the primary API.   
    • Anonymous, 2014-06-06: What I meant, there are multiple ways to accomplish the task even on the business logic level. For example, both of the following snippets are correct:

      void clean_up () {
           sweep_the_floor ();
           wash_the_dishes ();
      }

      void clean_up () {
           wash_the_dishes ();
           sweep_the_floor ();
      }

      Yet the trace describing what has happened would be different.

      To account for this kind of thing, the test would have to do some kind of normalisation of the trace...   

    • Kartik Agaram, 2014-06-06: Thanks! Traces would have to focus on domain-specific facts rather than details of incidental complexity. Hopefully that problem is more amenable to good taste. But yes, still an open question.


* *
Nov 26, 2012
Software libraries don't have to suck

When I said that libraries suck, I wasn't being precise.1 Libraries do lots of things well. They allow programmers to quickly prototype new ideas. They allow names to have multiple meanings based on context. They speed up incremental recompiles, they allow programs on a system to share code pages in RAM. Back in the desktop era, they were even units of commerce. All this is good.

What's not good is the expectation they all-too-frequently set with their users: go ahead, use me in production without understanding me. This expectation has ill-effects for both producers and consumers. Authors of libraries prematurely freeze their interfaces in a futile effort to spare their consumers inconvenience. Consumers of libraries have gotten trained to think that they can outsource parts of their craft to others, and that waiting for 'upstream' to fill some gap is better than hacking a solution yourself and risking a fork. Both of these are bad ideas.

To library authors

Interfaces aren't made in one big-bang moment. They evolve. You write code for one use case. Then maybe you find it works in another, and another. This organic process requires a lengthy gestation period.2 When we try to shortcut it, we end up with heavily-used interfaces that will never be fixed, even though everyone knows they are bad.

A prematurely frozen library doesn't just force people to live with it. People react to it by wrapping it in a cleaner interface. But then they prematurely freeze the new interface, and it starts accumulating warts and bolt-on features just like the old one. Now you have two interfaces. Was forking the existing interface really so much worse an alternative? How much smaller might each codebase in the world be without all the combinatorial explosion of interfaces wrapping other interfaces?

Just admit up-front that upgrades are non-trivial. This will help you maintain a sense of ownership for your interfaces, and make you more willing to gradually do away with the bad ideas.

More changes to the interface will put more pressure on your development process. Embrace that pressure. Help users engage with the development process. Focus on making it easier for users to learn about the implementation and the process of filing bugs.

Often the hardest part of filing a bug for your users is figuring out where to file it. What part of the stack is broken? No amount of black-box architecture astronomy will fix this problem for them. The only solution is to help them understand their system, at least in broad strokes. Start with your library.

Encourage users to fork you. "I'm not sure this is a good idea; why don't we create a fork as an A/B test?" is much more welcoming than "Your pull request was rejected." Publicize your forks, tell people about them, watch the conversation around them. They might change your mind.

Watch out for the warm fuzzies triggered by the word 'reuse'. A world of reuse is a world of promiscuity, with pieces of code connecting up wantonly with each other. Division of labor is a relationship not to be gotten into lightly. It requires knowing what guarantees you need, and what guarantees the counterparty provides. And you can't know what guarantees you need from a subsystem you don't understand.

There's a prisoner's dilemma here: libraries that over-promise will seem to get popular faster. But hold firm; these fashions are short-term. Build something that people will use long after Cucumber has been replaced with Zucchini.

To library users

Expect less. Know what libraries you rely on most, and take ownership for them. Take the trouble to understand how they work. Start pushing on their authors to make them easier to understand. Be more willing to hack on libraries to solve your own problems, even if it risks creating forks. If your solutions are not easily accepted upstream, don't be afraid to publish them yourselves. Just set expectations appropriately. If a library is too much trouble to understand, seek alternatives. Things you don't understand are the source of all technical debt. Try to build your own, for just the use-cases you care about. You might end up with something much simpler to maintain, something that fits better in your head.

(Thanks to Daniel Gackle; to David Barbour, Ray Dillinger and the rest of Lambda the Ultimate; and to Ross Angle, Evan R Murphy, Mark Katakowski, Zak Wilson, Alan Manuel Gloria and the rest of the Arc forum. All of them disagree with parts of this post, and it is better for it.)


1. And trying to distinguish between 'abstraction' and 'service' turned out to obfuscate more than it clarified, so I'm going to avoid those words.

2. Perhaps we need a different name for immature libraries (which are now the vast majority of all libraries). That allows users to set expectations about the level of churn in the interface, and frees up library writers to correct earlier missteps. Not enough people leave time for gestating interfaces, perhaps in analogy with how not enough people leave enough time for debugging.


  • johndurbinn, 2012-11-26: This is all completely wrong.   
    • Anonymous, 2012-11-27: Okay. Now, if you actually want to be helpful, tell us why.   
      • johndurbinn, 2012-11-27: Libraries are still needed even in the age of Google. I understand that you can find everything using the internet, but many older people rely on "old fashioned" technology like bound paper books, and buildings to house those books with indexes to locate those books. There's no reason to attack libraries for being out of date.   
        • Kartik Agaram, 2012-11-27: Ah, I apologize, I'm a programmer and I'm referring to software libraries, which are very different. Somebody else pointed this out as well. My girlfriend is a librarian, so I better fix this title before she sees it.. :)
  • Anonymous, 2012-11-27: No one's going to take the time to upgrade unless the replacement can be dropped in with little effort. It just doesn't happen in the real world.

    So you have two choices: backport all bug fixes to every version of your library where you made a breaking change or simply don't make breaking changes.   

    • Kartik Agaram, 2012-11-27: Lots of things didn't used to happen in the real world -- until they did.

      New ideas take time to percolate through and be acted on. And that's fortunate, because I'm not nearly confident enough about this to claim everybody should start doing this right this instant. Instead I'm experimenting with profligate forking in a little side project of mine: http://github.com/akkartik/wart.

  • rocketnia, 2012-11-27: Great points! Now I have some complementary additions....

    We already build software with such complex social dependencies that no one developer team can really afford to take ownership of it all. For instance, consider how expensive it has been to "seek alternatives" for entrenched consumer platforms like Windows, Flash, and von Neumann architectures. At some point, we mostly-isolated developers must find places to rest, and API documentation gives us something to believe in. The reassurance we get from this documentation may still be feeble, heavily dependent on social faith, but fortunately we continue to find objective ways to validate it.

    As you say, division of labor does require knowing what guarantees our components need and what guarantees external components can be expected to fulfill. However, sometimes we must be willing to impose requirements on systems we don't quite understand, since at least one of the systems we try to interact with is the outside world!   

    • Kartik Agaram, 2012-11-27: Complementary additions, are those like objections? ^_^

      I'm not trying to be purist about this -- especially since we don't understand most of even the software that's technically owned by us. I'm just asking that we think of the entire stack as under our ownership. When you find out about something broken in it, begin first by fixing it in your stack. Then worry about what to do next.

      Hey, it just occurred to me that I'm asking for an attitude of Kaizen [1] [2] towards our software stack.   

      • rocketnia, 2012-11-28: They're more like apologies than objections. :-p

        I find it hard to believe in a universal concept of "broken." Even if I trip on a rug, I don't want to pull it out from under someone else. ;)

        I'm afraid I have to settle into a culture—get with the program, so to speak—before I'm confident enough to derug it.   

        • Kartik Agaram, 2012-11-28: I see what you mean. Perhaps tuning things for yourself doesn't have to 'derug' anybody else?

          It's something so pervasive that we all take it for granted, this idea that we have to put the communal good above our own when it comes to these levels of abstraction. But individualism hath its merits.

  • Kartik Agaram, 2013-05-20: An opposing view: "Code you can reason about is better than code you can't. Rely on libraries written and tested by other smart people to reduce the insane quantity of stuff you have to understand. If you don't get how to test that your merge function is associative, commutative, and idempotent, maybe you shouldn't be writing your own CRDTs just yet. Implementing two-phase commit on top of your database may be a warning sign." http://aphyr.com/posts/286-call-me-maybe-final-thoughts   
  • Kartik Agaram, 2014-05-20: "Users of libraries ought to know something about what goes on inside." -- Donald Knuth, http://www.informit.com/articles/article.aspx?p=2213858&WT.mc_id=Author_Knuth_20Questions   
  • Kartik Agaram, 2015-10-04: "Reusing code without reevaluating its design decisions means we are prone to inherit all of its constraints as well. Often there can be a better, simpler way of doing things, but this requires asking why the code is the way it is. Otherwise, like the proverbial hoarder, we end up with piles and piles of stuff in the mistaken belief that since it was useful yesterday, it must still be useful today." -- http://www.tedunangst.com/flak/post/hoarding-and-reuse   
  • Kartik Agaram, 2016-03-03: "The difference is, are you abstracting away so that you truly can say “I don’t have to worry about this”? Or are you abstracting away because you’re aware of those guts, but want to focus your attention right now in this area. That is what we’re looking for." -- John Allspaw, Etsy CTO (http://thenewstack.io/etsy-cto-qa-need-software-engineers-not-developers)


* *
Nov 24, 2012
Comments in code: the more we write, the less we want to highlight

That's my immediate reaction watching these programmers argue about what color their comments should be when reading code. It seems those who write sparse comments want them to pop out of the screen, and those who comment more heavily like to provide a background hum of human commentary that's useful to read in certain contexts and otherwise easy to filter out.

Now that I think about it, this matches my experience. I've experienced good codebases commented both sparsely and heavily. The longer I spend with a sparsely-commented codebase, the more I cling to the comments it does have. They act as landmarks, concise reminders of invariants. However, as I grow familiar with a heavily-commented codebase I tend to skip past the comments. Code is non-linear and can be read in lots of ways, with lots of different questions in mind. Inevitably, narrative comments only answer some of those questions and are a drag the rest of the time.

I'm reminded of one of Lincoln's famous quotes, a foreshadowing of the CAP theorem. Comments can be either detailed or salient, never both.

Comments are versatile. Perhaps we need two kinds of comments that can be colored differently. Are there still other uses for them?


  • Hugo Schmitt, 2012-11-26: Yes, maybe that is the case. Docstrings are different from comments in Python, for example.   
  • Kartik Agaram, 2013-04-04: Perhaps the different kinds of comments are meant for readers with different levels of motivation? http://alistair.cockburn.us/Shu+Ha+Ri


* *
Nov 12, 2012
Software libraries suck

Here's why, in a sentence: they promise to be abstractions, but they end up becoming services. An abstraction frees you from thinking about its internals every time you use it. A service allows you to never learn its internals. A service is not an abstraction. It isn't 'abstracting' away the details. Somebody else is thinking about the details so you can remain ignorant.

Programmers manage abstraction boundaries, that's our stock in trade. Managing them requires bouncing around on both sides of them. If you restrict yourself to one side of an abstraction, you're limiting your growth as a programmer.1 You're chopping off your strength and potential, one lock of hair at a time, and sacrificing it on the altar of convenience.

A library you're ignorant of is a risk you're exposed to, a now-quiet frontier that may suddenly face assault from some bug when you're on a deadline and can least afford the distraction. Better to take a day or week now, when things are quiet, to save that hour of life-shortening stress when it really matters.

You don't have to give up the libraries you currently rely on. You just have to take the effort to enumerate them, locate them on your system, install the sources if necessary, and take ownership the next time your program dies within them, or uncovers a bug in them. Are these activities more time-consuming than not doing them? Of course. Consider them a long-term investment.

Just enumerating all the libraries you rely on others to provide can be eye-opening. Tot up all the open bugs in their trackers and you have a sense of your exposure to risks outside your control. In fact, forget the whole system. Just start with your Gemfile or node_modules. They're probably lowest-maturity and therefore highest risk.

Once you assess the amount of effort that should be going into each library you use, you may well wonder if all those libraries are worth the effort. And that's a useful insight as well. “Achievement unlocked: I've stopped adding dependencies willy-nilly.”

Update: Check out the sequel. Particularly if this post left you scratching your head about what I could possibly be going on about.

(This birth was midwifed by conversations with Ross Angle, Dan Grover, and Manuel Simoni.)


1. If you don't identify as a programmer, if that isn't your core strength, if you just program now and then because it's expedient, then treating libraries as services may make more sense. If a major issue pops up you'll need to find more expert help, but you knew that already.


  • David Barbour, 2012-11-13: Link to discussion on LtU   
  • John Cowan, 2012-11-13: As has been said elsewhere, all libraries aren't alike.  The OS is a library, but most people can and indeed must treat it as a service, not an abstraction; the same for glib and newlib and mscorlib.   
    • Kartik Agaram, 2012-11-13: Those libraries can indeed be treated as a service far more than, say, Ruby gems. But I don't understand why you say they *must* be treated as a service. More programmers knowing how their system works is always better, no? It's better for them because it empowers them, and it's better for us all because it distributes expertise more widely.

      (I flinched when you called the OS a library. That's the one place where current language is actually closer to referring to it as a set of services, and I'm loath to give that up. Even if trying to get people to refer to libc as a service is tilting at windmills.)   

      • John Cowan, 2012-11-13: Well, *must* is doubtless too strong.  But there is such a thing as knowing too much.  I'd rather not, on the whole, know how certain things are done inside the Linux Kernel, much less the Windows kernel; I might be just too revolted. I'd rather work from the API documentation instead, since I know (at least for Linux) that it's sound.

        "What the deuce is it to me?" [Holmes] interrupted [Watson] impatiently; "you say that we go round the sun. If we went round the moon it would not make a pennyworth of difference to me or to my work."   

        • Kartik Agaram, 2012-11-14: Oh look, a fellow Holmes fan! :)

          Perhaps living in SF is getting to me, but I find myself in the unfamiliar position of invoking a trite moral argument. As the world changes faster and faster and we're all bound together in more and more ways, it becomes our increasingly urgent civic duty to know how the meat is made in all walks of life. For a long time I kept my world at arm's length and said, somebody make it Just Work. But I'm starting to think that's unsustainable. We should all think about how the meat is made.

          Prioritize the areas around our professions first ('stewards of a profession' isn't just an empty phrase: http://plus.google.com/110440139189906861022/posts/3A2JaRfWTKT), and prioritize areas that tend to change more rapidly. On both counts, how the code is written is pretty high priority for me. And if the API is good it's usually not too revolting. You might think they're independent, but in practice it's hard to have a nice API around a crap implementation, and the decay in the two tends to track quite nicely.

          Otherwise I fear the tragedy of the commons will creep up through the cracks between our individual responsibilities and gobble us up: http://news.ycombinator.com/item?id=4361596. It's not just software that is vulnerable: http://akkartik.name/blog/2010-12-19-18-19-59-soc

  • Anonymous, 2012-11-27: I disagree with this. A non-buggy abstraction doesn't require you to learn its internals, and a buggy abstraction does require you to learn its internals.

    So you're basically saying that libraries suck because they have bugs, and when you encounter them, you are now having to debug other people's code?   

    • Kartik Agaram, 2012-11-27: It's not just bugs. Some misfeatures take a long time to come to light. C programmers shouldn't use strcpy() even though it has no bugs, because it's susceptible to buffer overflows. It took us 20 years to realize this. I fear we will _never_ be rid of it even though alternatives now exist and are portable, and there will always be software that uses it and bites us in the ass at the most inopportune moment.

      One of my favorite science fiction novels has this great story about a code archeologist digging into his system and finding, kilo-layers down, a little routine that counts the seconds from a time 30,000 or so years ago. They're talking about time(), but I suspect strcpy()'s there as well, somewhere in that parallel universe. I hope someday I will be able to build a copy of linux without strcpy().   

      • Anonymous, 2012-11-27: Since the susceptibility to buffer overruns is well documented (man strcpy) I wouldn't say it's hidden by using it. I don't need to know anything about the strcpy implementation to know that.

        strcpy() we are stuck with since you can't be compliant to the ISO standard without including it (though you can wean people away from it: gets() for example is nearly dead; that is partly due to documentation, partly due to lint tools warning on every use of it).

        Now one thing I have observed is that in a library you almost never want to use the function with the best name: since that was probably the first one written, it likely has misfeatures.

        For the most part though, I treat libraries like a black box until I run into problems with them. Doing anything else is a recipe for spending so much time worrying that I never get shit done. I once ran into a bug in malloc() and it was a pain to debug and fix (maybe 3 days). On the other hand if every time I had to study the source of malloc() for every system I've ever developed on, I would have wasted a lot more than 3 days of time doing so.   

        • Kartik Agaram, 2012-11-27: Yeah you shouldn't have to learn the source code of malloc() before you can use it. I am (tentatively and respectfully) suggesting that you could have recompiled libc with a reasonable strcpy() the first time you learned that it was a crappy idea. No reason for ANSI to hold sway over your server with its limited dependencies. No reason to give up control over _names_ in your system.

          This isn't practical, of course. Not in the world we live in. But I want to question if it is an ideal worth moving towards.

  • $3618952, 2012-12-19: That's an odd definition of abstraction... the whole point of abstractions is to generalise and hide details, to "remain ignorant" as you put it. So a service is an abstraction. But I'm not really interested in debating definitions.

    I agree with your premise on the risks of relying on third-party software. I should know, having to fight against the numerous bugs in wxWidgets for 3 years. It's a trade-off.   

  • Anonymous, 2013-02-14: I agree with the main point here and think this issue deserves far more attention than it gets. Abstractions should be conveniences. You should be able to bypass them if you know what you are doing. This is especially important for immature abstractions. Abstractions instead commonly serve as barriers that decrease productivity when you need to bypass them. Barriers are less of an issue, and may be helpful, when the abstraction is mature.

    I disagree with shaurz. The whole point of abstractions is convenience: to increase productivity without sacrificing anything in return. Hiding details, "remaining ignorant", separation of concerns, etc is NOT the whole point. That's the tail wagging the dog. Hiding details is not helpful if it results in loss of productivity in the long run.

    I understand the attraction of "information hiding" and its historical evolution. Programming theory originated with mathematics. In that field, from the point of view of using abstractions, there is essentially no practical difference between two equivalent functions "implemented" differently. They will both execute in zero time. With real computers, there may be important practical differences between two equivalent functions. As computer scientists, I think we may have taken our preference for mathematical purity a bit too far. Also, information hiding was perceived to be a necessary mechanism to prevent tight coupling. However, the problems of tight coupling can be mitigated in other ways. The issue that has stymied that realization for so long is that compiled, statically typed languages generally provide no possible way to loosely couple modules. And yet, we still use those for our key abstraction layers (operating systems interfaces and standard OS libraries). Maybe we should try solving that problem rather than fighting abstractions the same way for another 3 or 4 decades.   

    • Kartik Agaram, 2013-02-15: Thanks! You raise an interesting point about coupling. Codebases without information hiding are paradoxically _more_ likely to be loosely coupled than codebases that go through contortions to play to a fixed interface.

      Another way to look at it: designing and organizing software is hard enough to do right. Minimizing impedance mismatch between pieces gives us half a chance of focusing on the actual problem of loose coupling.

  • Kartik Agaram, 2013-04-22: Gregor Kiczales put this far better in '92: http://www2.parc.com/csl/groups/sda/publications/papers/Kiczales-IMSA92/for-web.pdf (especially the Summary at the end).


* *
Aug 1, 2012
Marx's engines of plenty

From "Red Plenty" by Francis Spufford:

The problem was that Marx had predicted the wrong revolution. He had said that socialism would come, not in backward agricultural Russia, but in the most developed and advanced industrial countries. Capitalism (he'd argued) created misery, but it also created progress, and the revolution that was going to liberate mankind from misery would only happen once capitalism had contributed all the progress that it could, and all the misery too. At that point the infrastructure for producing things would have attained a state of near-perfection. At the same time, the search for higher profits would have driven the wages of the working class down to near-destitution. It would be a world of wonderful machines and ragged humans. When the contradiction became unbearable, the workers would act. And paradise would quickly lie within their grasp, because Marx expected that the victorious socialists of the future would be able to pick up the whole completed apparatus of capitalism and carry it forward into the new society, still humming, still prodigally producing. There might be a need for a brief period of decisive government during the transition to the new world of plenty, but the 'dictatorship of the proletariat' Marx imagined was modelled on the 'dictatorships' of ancient Rome, when the republic would now and again draft some respected citizen to give orders in an emergency. The dictatorship of Cincinnatus lasted one day; then, having extracted the Roman army from the mess it was in, he went back to his plough. The dictatorship of the proletariat would presumably last a little longer, perhaps a few years. And of course there would also be an opportunity to improve on the sleek technology inherited from capitalism, now that society as a whole was pulling the levers of the engines of plenty. But it wouldn't take long. There'd be no need to build up productive capacity for the new world. Capitalism had already done that. 
Very soon it would no longer be necessary even to share out the rewards of work in proportion to how much work people did. All the 'springs of co-operative wealth' would flow abundantly, and anyone could have anything, or be anything. No wonder that Marx's pictures of the society to come were so vague: it was going to be an idyll, a rather soft-focus gentlemanly idyll, in which the inherited production lines whirring away in the background allowed the humans in the foreground to play, 'to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind…'

None of this was of the slightest use to the Marxists trying to run the economy of Russia after 1917. Not only had capitalist development not reached its climax of perfection and desperation in Russia; it had barely even begun. Russia had fewer railroads, fewer roads and less electricity than any other European power. Within living memory, the large majority of the population had been slaves. It became inescapably clear that, in Russia, socialism was going to have to do what Marx had never expected, and to carry out the task of development he'd seen as belonging strictly to capitalism. Socialism would have to mimic capitalism's ability to run an industrial revolution, to marshal investment, to build modern life.

But how?

There was in fact an international debate in the 1920s, partly prompted by the Bolsheviks' strange situation, over whether a state-run economy could really find substitutes for all of capitalism's working parts. No, said the Austrian economist Ludwig von Mises, it could not: in particular, it couldn't replace markets, and the market prices that made it possible to tell whether it was advantageous to produce any particular thing. Yes, it could, replied a gradually expanding group of socialist economists. A market was only a mathematical device for allocating goods to the highest bidder, and so a socialist state could easily equip itself with a replica marketplace, reduced entirely to maths. For a long time, the 'market socialists' were judged to have won the argument. The Bolsheviks, however, paid very little attention. Marx had not thought markets were very important — as far as he was concerned market prices just reflected the labour that had gone into products, plus some meaningless statistical fuzz. And the Bolsheviks were mining Marx's analysis of capitalism for hints to follow. They were not assembling an elegant mathematical version of capitalism. They were building a brutish, pragmatic simulacrum of what Marx and Engels had seen in the boom towns of the mid-nineteenth century, in Manchester when its sky was dark at noon with coal smoke. And they didn't easily do debate, either. In their hands, Marx's temporary Roman-style dictatorship had become permanent rule by the Party itself, never to be challenged, never to be questioned. There had been supposed to be a space preserved inside the Party for experiment and policy-making, but the police methods used on the rest of Russian society crept inexorably inward. The space for safe talk shrank till, with Stalin's victory over the last of his rivals, it closed altogether, and the apparatus of votes, committee reports and 'discussion journals' became purely ceremonious, a kind of fetish of departed civilisation.

Until 1928, the Soviet Union was a mixed economy. Industry was in the hands of the state but tailors' shops and private cafes were still open, and farms still belonged to the peasant families who'd received them when the Bolsheviks broke up the great estates. Investment for industry, therefore, had to come the slow way, by taxing the farmers; meanwhile the farmers' incomes made them dangerously independent, and food prices bounced disconcertingly up and down. Collectivisation saw to all these problems at once. It killed several million more people in the short term, and permanently dislocated the Soviet food supply; but forcing the whole country population into collective farms let the central government set the purchase price paid for crops, and so let it take as large a surplus for investment as it liked. In effect, all but a fraction of the proceeds of farming became suddenly available for industry.

Between them, these policies created a society that was utterly hierarchical. Metaphysically speaking, Russian workers owned the entire economy, with the Party acting as their proxy. But in practice, from 8.30 a.m. on Monday morning until 6 p.m. on Saturday night, they were expected simply to obey. At the very bottom of the heap came the prisoner-labourers of the Gulag. Stalin appears to have believed that, since according to Marx all value was created by labour, slave labour was a tremendous bargain. You got all that value, all that Arctic nickel mined and timber cut and rail track laid, for no wages, just a little millet soup. Then came the collective farmers, in theory free, effectively returned to the serfdom of their grandfathers. A decisive step above them, in turn, came the swelling army of factory workers, almost all recent escapees or refugees from the land. It was not an easy existence. Discipline at work was enforced through the criminal code. Arrive late three times in a row, and you were a 'saboteur'. Sentence: ten years. But from the factory workers on up, this was also a society in a state of very high mobility, with fairytale-rapid rises. You could start as a semi-literate rural apparatchik, be the mayor of a city at twenty-five, a minister of the state at thirty; and then, if you were unlucky or maladroit, a corpse at thirty-two, or maybe a prisoner in the nickel mines, having slid from the top of the Soviet ladder right back down its longest snake. But mishaps apart, life was pretty good at the top, with a dacha in the country, from whose verandah the favoured citizen could survey the new world growing down below.

And it did grow. Market economies, so far as they were 'designed' at all, were designed to match buyers and sellers. They grew, but only because the sellers might decide, from the eagerness of the buyers, to make a little more of what they were selling. Growth wasn't intrinsic. The planned economy, on the other hand, was explicitly and deliberately a ratchet, designed to effect a one-way passage from scarcity to plenty by stepping up output each year, every year, year after year. Nothing else mattered: not profit, not accidents, not the effect of the factories on the land or the air. The planned economy measured its success in terms of the amount of physical things it produced. Money was treated as secondary, merely a tool for accounting. Indeed, there was a philosophical issue involved here, a point on which it was important for Soviet planners to feel that they were keeping faith with Marx, even if in almost every other respect their post-revolutionary world parted company with his. Theirs was a system that generated use-values rather than exchange-values, tangible human benefits rather than the marketplace delusion of value turned independent and imperious. For a society to produce less than it could, because people could not 'afford' the extra production, was ridiculous. Instead of calculating Gross Domestic Product, the sum of all incomes earned, the USSR calculated Net Material Product, the country's total output of stuff — expressed, for convenience, in roubles.

This made it difficult to compare Soviet growth with growth elsewhere. After the Second World War, when the numbers coming out of the Soviet Union started to become more and more worryingly radiant, it became a major preoccupation of the newly-formed CIA to try to translate the official Soviet figures from NMP to GDP, discounting for propaganda, guessing at suitable weighting for the value of products in the Soviet environment, subtracting items 'double-counted' in the NMP, like the steel that appeared there once in its naked new-forged self, twice when panel-beaten into an automobile. The CIA figures were always lower than the glowing stats from Moscow. Yet they were still worrying enough to cause heart-searching among Western governments, and anxious editorialising in Western newspapers. For a while, in the late 1950s and the early 1960s, people in the West felt the same mesmerising disquiet over Soviet growth they were going to feel for Japanese growth in the 1970s and 1980s, and for Chinese and Indian growth from the 1990s on. Nor were they being deceived. Beneath several layers of varnish, the phenomenon was real. Since the fall of the Soviet Union, historians from both Russia and the West have recalculated the Soviet growth record one more time: and even using the most pessimistic of these newest estimates, all lower again than the Kremlin's numbers and the CIA's, the Soviet Union still shows up as growing faster than any country in the world except Japan. Officially it grew 10.1% a year; according to the CIA it grew 7% a year; now the estimates range upward from 5% a year. Still enough to squeak past West Germany, and to cruise past the US average of around 3.3%.

On the strength of this performance, Stalin's successors set about civilising their savage growth machine. Most of the prisoners were released from the labour camps. Collective farmers were allowed to earn incomes visible without a microscope, and eventually given old-age pensions. Workers' wages were raised, and the salaries of the elite were capped, creating a much more egalitarian spread of income. The stick of terror driving managers was discarded too; reporting a bad year's growth now meant only a lousy bonus. The work day shrank to eight hours, the work week to five days. The millions of families squeezed into juddering tsarist tenements were finally housed in brand-new suburbs. It was clear that another wave of investment was going to be needed, bigger if anything than the one before, to build the next generation of industries: plastics, artificial fibers, the just-emerging technologies of information. But it all seemed to be affordable now. The Soviet Union could give its populace some jam today, and reinvest for tomorrow, and pay the weapons bill of a super power, all at once. The Party could even afford to experiment with a little gingerly discussion; a little closely-monitored blowing of the dust off the abandoned mechanisms for talking about aims and objectives, priorities and possibilities.

And this was fortunate, because as it happened the board of USSR Inc. was in need of some expert advice. The growth figures were marvellous, amazing, outstanding — but there was something faintly disturbing about them, even in their rosiest versions. For each extra unit of output it gained, the Soviet Union was far more dependent than other countries on throwing in extra inputs: extra labour, extra raw materials, extra investment. This kind of 'extensive' growth (as opposed to the 'intensive' growth of rising productivity) came with built-in limits, and the Soviet economy was already nearing them. Whisper it quietly, but the capital productivity of the USSR was a disgrace. With a government that could choose what money meant, the Soviet Union already got less return for its investments than any of its capitalist rivals. Between 1950 and 1960, for instance, it had sunk 9.4% of extra capital a year into the economy, to earn only 5.8% a year more actual production. In effect, they were spraying Soviet industry with the money they had so painfully extracted from the populace, wasting more than a third of it in the process.


* *
Jul 4, 2012
In the enthusiasm of our rapid mechanical conquests we have overlooked some things. We have perhaps driven men into the service of the machine, instead of building machinery for the service of man. But could anything be more natural? So long as we were engaged in conquest, our spirit was the spirit of conquerors. The time has now come when we must be colonists, must make this house habitable which is still without character.


  • Kartik Agaram, 2012-08-16: "The future offers very little hope for those who expect that our new mechanical slaves will offer us a world in which we may rest from thinking. Help us they may, but at the cost of supreme demands upon our honesty and our intelligence." -- Norbert Wiener, "God and Golem, Inc.," 1964 http://www.etymonline.com/index.php?term=cybernetics


* *