Apr 9, 2014
Consensual Hells: The legibility tradeoff

My second guest post has just been published over at ribbonfarm.com, on the subject of building capture-resistant organizations. Tl;dr - constrain individual discretion at the top, and empower it on the fringes. But the direct ways to do that won't work. What will? Read on →


* *
Feb 12, 2014
Consensual Hells: From cognitive biases to institutional decay

I've just published the first of a series of guest posts over at ribbonfarm.com. Check it out.


* *
Oct 7, 2013
A new way to organize programs

If you're a programmer, this has happened to you. You've built or known a project that starts out with a well-defined purpose and a straightforward mapping from purpose to structure in a few sub-systems. But it can't stand still; changes continue to roll in. Some of the changes touch multiple sub-systems. Each change is coherent in the mind of its maker. But once it's made, it diffuses into the system as a whole. Soon the once-elegant design has turned into a patchwork quilt of changes. Requirements shift, parts get repurposed. Connections proliferate. Veterans who watched this evolution can keep it mostly straight in their heads. But newcomers see only the patchwork quilt. They can't tell where one feature begins and another ends.

What if features could stay separate as they're added, so that newcomers could see a cleaned-up history of the process?

Solution Read more →


* *
Oct 6, 2013
How I describe my hobby to programmers

(I still haven't found a way to describe it to non-programmers.)

Wart is an experimental, dynamic, batshit-liberal language designed for small teams of intrinsically-motivated programmers who want more mastery over their entire software stack. The primary goal is to be easy for anyone to understand and modify.

$ git clone http://github.com/akkartik/wart
$ cd wart
$ git checkout c73dcd8d6  # state when I wrote this article
$ ./wart      # you'll need gcc and unix
ready! type in an expression, then hit enter twice. ctrl-d exits.
1+1
⇒ 2

Read more →


* *
Aug 13, 2013
The trouble with 'readability'

We programmers love to talk about the value of readability in software. But all our rhetoric, even if it were practiced with diligence, suffers from a giant blind spot.

Exhibit A

Here's Douglas Crockford on programming style. For the first half he explains why readability is important: because our brains do far more subconsciously than we tend to realize. The anecdotes are interesting and the presentation is engaging, but the premise is basically preaching to the choir. Of course I want my code to be readable. Sensei, show me how!

But when he gets to ‘how’, here is what we get: good names, comments and consistent indentation. Wait, what?! After all that discussion about how complex programs are, and how hard they are to understand, do we really expect to make a dent in global complexity with a few blunt, local rules? Does something not seem off?

Exhibit B

Here's a paean to the software quality of Doom 3. It starts out with this utterly promising ideal:

Local code should explain, or at least hint at, the overall system design.

Unfortunately we never hear about the 'overall system design' again. Instead we get… good names, comments and indentation, culminating in the author's ideal of beauty:

The two biggest things, for me at least, are stylistic indenting and maximum const-ness.

I think the fundamental reasons for the quality of Doom 3 have been missed. Observing superficial small-scale features will only take you so far in appreciating the large-scale beauty of a program.

Exhibit C

Kernighan and Pike's classic The Practice of Programming mostly takes the code writer's part. For reading, you're again left with guidelines in the small: names, comments and indentation.

Discussion

I could go on and on. Every time the discussion turns to readability we skip almost unconsciously to style guides and whatnot. Local rules for a fundamentally global problem.

This blind spot is baked into the very phrase ‘readable code’. ‘Code’ isn't an amorphous thing that you manage by the pound. You can't make software clean simply by making all the ‘code’ in it more clean. What we really ought to be thinking about is readable programs. Functions aren't readable in isolation, at least not in the most important way. The biggest aid to a function's readability is to convey where it fits in the larger program.

Nowhere is this more apparent than with names. All the above articles and more emphasize the value of names. But they all focus on naming conventions and rules of thumb to evaluate the quality of a single name in isolation. In practice, a series of locally well-chosen names gradually ends up in overall cacophony. A program with a small harmonious vocabulary of names consistently used is hugely effective regardless of whether its types and variables are visibly distinguished. To put it another way, the readability of a program can be hugely enhanced by the names it doesn't use.
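
Here's a contrived illustration (the names are invented, not drawn from any of the articles above). Both fragments would pass a local naming review; only the second keeps a small vocabulary:

struct User {}; struct Account {}; struct Profile {};

// Locally fine, globally cacophonous: three verbs for the same idea.
User fetch_user(int id);
Account load_account(int id);
Profile get_profile(int id);

// A small, harmonious vocabulary: one verb, used consistently.
User load_user(int id);
Account load_account(int id);
Profile load_profile(int id);

The first fragment quietly raises questions the second never has to answer: is fetching different from loading? Is getting cheaper than either?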

Part of the problem is that talking about local features is easy. It's easy to teach the difference between a good name and a bad name. Once taught, the reader has the satisfaction of going off to judge names all around him. It's far harder to show a globally coherent design without losing the reader.

Simple superficial rules can be applied to any program, but to learn from a well-designed program we must focus on what's unique to it, and not easily transferred to other programs in disparate domains. That again increases the odds of losing the reader.

But the largest problem is that we programmers are often looking at the world from a parochial perspective: “I built this awesome program, I understand everything about it, and people keep bugging me to accept their crappy changes.” Style guides and conventions are basically tools for the insiders of a software project. If you already understand the global picture it makes sense to focus on the local irritations. But you won't be around forever. Eventually one of these newcomers will take charge of this project, and they'll make a mess if you didn't talk to them enough about the big picture.

Lots of thought has gone into the small-scale best practices to help maintainers merge changes from others, but there's been no attempt at learning to communicate large-scale organization to newcomers. Perhaps this is a different skill entirely; if so, it needs a different name than ‘readability’.


* *
Jun 9, 2013
A new way of testing

There's a combinatorial explosion at the heart of writing tests: the more coarse-grained the test, the more possible code paths to test, and the harder it gets to cover every corner case. In response, conventional wisdom is to test behavior at as fine a granularity as possible. The customary divide between 'unit' and 'integration' tests exists for this reason. Integration tests operate on the external interface to a program, while unit tests directly invoke different sub-components.

But such fine-grained tests have a limitation: they make it harder to move function boundaries around, whether it's splitting a helper out of its original call-site, or coalescing a helper function into its caller. Such transformations quickly outgrow the build/refactor partition that is at the heart of modern test-driven development; you end up either creating functions without tests, or throwing away tests for functions that don't exist anymore, or manually stitching tests to a new call-site. All these operations are error-prone and stress-inducing. Does this function need to be test-driven from scratch? Am I losing something valuable in those obsolete tests? In practice, the emphasis on alternating phases of building (writing tests) and refactoring (holding tests unchanged) causes certain kinds of global reorganization to never happen. In the face of gradually shifting requirements and emphasis, codebases sink deeper and deeper into a locally optimal architecture that often has more to do with historical reasons than thoughtful design.
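
Here's a contrived sketch of that coupling (the helper and test are invented for illustration):

#include <string>

void check(bool b);  // assume the test harness provides this, as in the examples below

// A helper that was extracted out of a larger parse() function.
int count_parens(const std::string& s) {
  int n = 0;
  for (size_t i = 0; i < s.size(); ++i)
    if (s[i] == '(' || s[i] == ')') ++n;
  return n;
}

// The unit test is pinned to the helper's exact boundary.
void test_count_parens() {
  check(count_parens("(a (b))") == 4);
}

// If count_parens is later folded back into parse(), this test has nothing
// left to call; it must be rewritten against parse() or thrown away.

The test isn't wrong; it's just welded to a boundary that ought to stay negotiable.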

I've been experimenting with a new approach to keep the organization of code more fluid, and to keep tests from ossifying it. Rather than pass in specific inputs and make assertions on the outputs, I modify code to judiciously print to a trace and make assertions on the trace at the end of a run. As a result, tests no longer need to call fine-grained helpers directly.

An utterly contrived and simplistic code example and test:

int foo() { return 34; }
void test_foo() { check(foo() == 34); }

With traces, I would write this as:

int foo() {
  trace << "foo: 34";
  return 34;
}
void test_foo() {
  foo();
  check_trace_contents("foo: 34");
}

The call to trace is conceptually just a print or logging statement. And the call to check_trace_contents ensures that the 'log' for the test contains a specific line of text:

foo: 34
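
Here's a minimal sketch of a harness that would support this. It's a deliberate simplification; the real implementation keeps more state, but the shape is the same:

#include <iostream>
#include <sstream>
#include <string>

std::ostringstream trace;  // one shared trace, accumulated over a test run

// Reset between tests so earlier output can't cause false passes.
void clear_trace() { trace.str(""); }

// Pass if the accumulated trace contains the expected text.
void check_trace_contents(const std::string& expected) {
  if (trace.str().find(expected) == std::string::npos)
    std::cerr << "trace does not contain: " << expected << '\n';
}

clear_trace matters once many tests share a process: stale lines from a previous test shouldn't satisfy a later check.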

That's the basic flow: create side-effects to check for rather than checking return values directly. At this point it probably seems utterly redundant. Here's a more realistic example, this time from my toy lisp interpreter. Before:

void test_eval_handles_body_keyword_synonym() {
  run("f <- (fn (a b ... body|do) body)");
  cell* result = eval("(f 2 :do 1 3)");
  // result should be (1 3)
  check(is_cons(result));
  check(car(result) == new_num(1));
  check(car(cdr(result)) == new_num(3));
}

After:

void test_eval_handles_body_keyword_synonym() {
  run("f <- (fn (a b ... body|do) body)");
  run("(f 2 :do 1 3)");
  check_trace_contents("(1 3)");
}

(The code looks like this.)

This example shows the key benefit of this approach. Instead of calling eval directly, we're now calling the top-level run function. Since we only care about a side-effect we don't need access to the value returned by eval. If we refactored eval in the future we wouldn't need to change this function at all. We'd just need to ensure that we preserved the tracing to emit the result of evaluation somewhere in the program.

As I've gone through and 'tracified' all my tests, they've taken on a common structure: first I run some preconditions. Then I run the expression I want to test and inspect the trace. Sometimes I'm checking for something that the setup expressions could have emitted and need to clear the trace to avoid contamination. Over time different parts of the program get namespaced with labels to avoid accidental conflict.

check_trace_contents("eval", "=> (1 3)");

This call now says, "look for this line only among lines in the trace tagged with the label eval." Other tests may run the same code but test other aspects of it, such as tokenization, or parsing. Labels allow me to verify behavior of different subsystems in an arbitrarily fine-grained manner without needing to know how to invoke them.
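
Here's one minimal way to support labels, again as a simplified sketch rather than the real implementation (trace_line and trace_lines are names invented for this sketch): store (label, contents) pairs instead of a flat stream, and have the two-argument check scan only matching lines.

#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Each trace line remembers which subsystem emitted it.
std::vector<std::pair<std::string, std::string> > trace_lines;

void trace_line(const std::string& label, const std::string& contents) {
  trace_lines.push_back(std::make_pair(label, contents));
}

// Look for `expected` only among lines tagged with `label`.
void check_trace_contents(const std::string& label, const std::string& expected) {
  for (size_t i = 0; i < trace_lines.size(); ++i)
    if (trace_lines[i].first == label &&
        trace_lines[i].second.find(expected) != std::string::npos)
      return;
  std::cerr << "no '" << label << "' line contains: " << expected << '\n';
}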

Other codebases will have a different common structure. They may call a different top-level than run, and may pass in inputs differently. But they'll all need labels to isolate design concerns.

The payoff of these changes: all my tests are now oblivious to internal details like tokenization, parsing and evaluation. The trace checks that the program correctly computed a specific fact, while remaining oblivious to how it was computed, whether synchronously or asynchronously, serially or in parallel, whether it was returned in a callback or a global, etc. The hypothesis is that this will make high-level reorganizations easier in future, and therefore more likely to occur.

Worries

As I program in this style, I've been keeping a list of anxieties, potentially-fatal objections to it:

  • Are the new tests more brittle? I've had a couple of spurious failures from subtly different whitespace, but they haven't taken long to diagnose. I've also been gradually growing a vocabulary of possible checks on the trace. Even though it's conceptually like logging, the trace doesn't have to be stored in a file on disk. It's a random-access in-memory structure that can be sliced and diced in various ways. I've already switched implementations a couple of times as I added labels to namespace different subsystems/concerns, and a notion of frames for distinguishing recursive calls.

  • Are we testing what we think we're testing? The trace adds a level of indirection, and it takes a little care to avoid false successes. So far it hasn't been more effort than conventional tests.

  • Will they lead to crappier architecture? Arguably the biggest benefit of TDD is that it makes functions more testable all across a large program. Tracing makes it possible to keep such interfaces crappier and more tangled. On the other hand, the complexities of flow control, concurrency and error management often cause interface complexity anyway. My weak sense so far is that tests are like training wheels for inexperienced designers. After some experience, I hope people will continue to design tasteful interfaces even if they aren't forced to do so by their tests.

  • Am I just reinventing mocks? I hope not, because I hate mocks. The big difference to my mind is that traces should output and verify domain-specific knowledge rather than implementation details, and that it's more convenient with traces to selectively check specific states in specific tests, without requiring a lot of setup in each test. Indeed, one way to view this whole approach is as test-specific assertions that can be easily turned on and off from one test to the next.

  • Avoiding side-effects is arguably the most valuable rule we know about good design. Could this whole approach be a dead-end simply because of its extreme use of side-effects? Arguably these side-effects are ok, because they don't break referential transparency. The trace is purely part of the test harness, something the program can be oblivious to in production runs.

The future

I'm going to monitor those worries, but I feel very optimistic about this idea. Traces could enable tests that have so far been hard to create: for performance, fault-tolerance, synchronization, and so on. Traces could be a unifying source of knowledge about a codebase. I've been experimenting with a collapsing interface for rendering traces that would help newcomers visualize a new codebase, or veterans more quickly connect errors to causes. More on these ideas anon.


* *
Nov 26, 2012
Software libraries don't have to suck

When I said that libraries suck, I wasn't being precise.1 Libraries do lots of things well. They allow programmers to quickly prototype new ideas. They allow names to have multiple meanings based on context. They speed up incremental recompiles. They allow programs on a system to share code pages in RAM. Back in the desktop era, they were even units of commerce. All this is good.

What's not good is the expectation they all-too-frequently set with their users: go ahead, use me in production without understanding me. This expectation has ill effects for both producers and consumers. Authors of libraries prematurely freeze their interfaces in a futile effort to spare their consumers inconvenience. Consumers of libraries have gotten trained to think that they can outsource parts of their craft to others, and that waiting for 'upstream' to fill some gap is better than hacking a solution themselves and risking a fork. Both of these are bad ideas.

To library authors

Interfaces aren't made in one big-bang moment. They evolve. You write code for one use case. Then maybe you find it works in another, and another. This organic process requires a lengthy gestation period.2 When we try to shortcut it, we end up with heavily-used interfaces that will never be fixed, even though everyone knows they are bad.

A prematurely frozen library doesn't just force people to live with it. People react to it by wrapping it in a cleaner interface. But then they prematurely freeze the new interface, and it starts accumulating warts and bolt-on features just like the old one. Now you have two interfaces. Was forking the existing interface really so much worse an alternative? How much smaller might each codebase in the world be without all the combinatorial explosion of interfaces wrapping other interfaces?

Just admit up-front that upgrades are non-trivial. This will help you maintain a sense of ownership for your interfaces, and make you more willing to gradually do away with the bad ideas.

More changes to the interface will put more pressure on your development process. Embrace that pressure. Help users engage with the development process. Focus on making it easier for users to learn about the implementation and the process of filing bugs.

Often the hardest part of filing a bug for your users is figuring out where to file it. What part of the stack is broken? No amount of black-box architecture astronomy will fix this problem for them. The only solution is to help them understand their system, at least in broad strokes. Start with your library.

Encourage users to fork you. "I'm not sure this is a good idea; why don't we create a fork as an A/B test?" is much more welcoming than "Your pull request was rejected." Publicize your forks, tell people about them, watch the conversation around them. They might change your mind.

Watch out for the warm fuzzies triggered by the word 'reuse'. A world of reuse is a world of promiscuity, with pieces of code connecting up wantonly with each other. Division of labor is a relationship not to be gotten into lightly. It requires knowing what guarantees you need, and what guarantees the counterparty provides. And you can't know what guarantees you need from a subsystem you don't understand.

There's a prisoner's dilemma here: libraries that over-promise will seem to get popular faster. But hold firm; these fashions are short-term. Build something that people will use long after Cucumber has been replaced with Zucchini.

To library users

Expect less. Know what libraries you rely on most, and take ownership for them. Take the trouble to understand how they work. Start pushing on their authors to make them easier to understand. Be more willing to hack on libraries to solve your own problems, even if it risks creating forks. If your solutions are not easily accepted upstream, don't be afraid to publish them yourselves. Just set expectations appropriately. If a library is too much trouble to understand, seek alternatives. Things you don't understand are the source of all technical debt. Try to build your own, for just the use-cases you care about. You might end up with something much simpler to maintain, something that fits better in your head.

(Thanks to Daniel Gackle; to David Barbour, Ray Dillinger and the rest of Lambda the Ultimate; and to Ross Angle, Evan R Murphy, Mark Katakowski, Zak Wilson, Alan Manuel Gloria and the rest of the Arc forum. All of them disagree with parts of this post, and it is better for it.)

notes

1. And trying to distinguish between 'abstraction' and 'service' turned out to obfuscate more than it clarified, so I'm going to avoid those words.

2. Perhaps we need a different name for immature libraries (which are now the vast majority of all libraries). That allows users to set expectations about the level of churn in the interface, and frees up library writers to correct earlier missteps. Not enough people leave time for gestating interfaces, perhaps in analogy with how not enough people leave enough time for debugging.


* *
Nov 24, 2012
Comments in code: the more we write, the less we want to highlight

That's my immediate reaction watching these programmers argue about what color their comments should be when reading code. It seems those who write sparse comments want them to pop out of the screen, and those who comment more heavily like to provide a background hum of human commentary that's useful to read in certain contexts and otherwise easy to filter out.

Now that I think about it, this matches my experience. I've experienced good codebases commented both sparsely and heavily. The longer I spend with a sparsely-commented codebase, the more I cling to the comments it does have. They act as landmarks, concise reminders of invariants. However, as I grow familiar with a heavily-commented codebase I tend to skip past the comments. Code is non-linear and can be read in lots of ways, with lots of different questions in mind. Inevitably, narrative comments only answer some of those questions and are a drag the rest of the time.

I'm reminded of one of Lincoln's famous quotes, a foreshadowing of the CAP theorem. Comments can be either detailed or salient, never both.

Comments are versatile. Perhaps we need two kinds of comments that can be colored differently. Are there still other uses for them?
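
For example, a codebase could reserve one marker for the salient landmarks and another for the background hum, so an editor can color them independently. (The markers here are arbitrary; this is just a sketch.)

// Salient landmark (rare; deserves a loud color):
// NOTE: invariant: `entries` stays sorted by key.

//: Background hum (plentiful; easy to dim or filter out):
//: We started with a linear scan and switched to binary search when
//: profiles showed lookups dominating startup time.

int lookup(int key);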


* *
Nov 12, 2012
Software libraries suck

Here's why, in a sentence: they promise to be abstractions, but they end up becoming services. An abstraction frees you from thinking about its internals every time you use it. A service allows you to never learn its internals. A service is not an abstraction. It isn't 'abstracting' away the details. Somebody else is thinking about the details so you can remain ignorant.

Programmers manage abstraction boundaries; that's our stock in trade. Managing them requires bouncing around on both sides of them. If you restrict yourself to one side of an abstraction, you're limiting your growth as a programmer.1 You're chopping off your strength and potential, one lock of hair at a time, and sacrificing it on the altar of convenience.

A library you're ignorant of is a risk you're exposed to, a now-quiet frontier that may suddenly face assault from some bug when you're on a deadline and can least afford the distraction. Better to take a day or week now, when things are quiet, to save that hour of life-shortening stress when it really matters.

You don't have to give up the libraries you currently rely on. You just have to take the effort to enumerate them, locate them on your system, install the sources if necessary, and take ownership the next time your program dies within them, or uncovers a bug in them. Are these activities more time-consuming than not doing them? Of course. Consider them a long-term investment.

Just enumerating all the libraries you rely on others to provide can be eye-opening. Tot up all the open bugs in their trackers and you have a sense of your exposure to risks outside your control. In fact, forget the whole system. Just start with your Gemfile or node_modules. They're probably lowest-maturity and therefore highest risk.

Once you assess the amount of effort that should be going into each library you use, you may well wonder if all those libraries are worth the effort. And that's a useful insight as well. “Achievement unlocked: I've stopped adding dependencies willy-nilly.”

Update: Check out the sequel.

(This birth was midwifed by conversations with Ross Angle, Dan Grover, and Manuel Simoni.)

notes

1. If you don't identify as a programmer, if that isn't your core strength, if you just program now and then because it's expedient, then treating libraries as services may make more sense. If a major issue pops up you'll need to find more expert help, but you knew that already.


* *
Aug 1, 2012
Marx's engines of plenty

From "Red Plenty" by Francis Spufford:

The problem was that Marx had predicted the wrong revolution. He had said that socialism would come, not in backward agricultural Russia, but in the most developed and advanced industrial countries. Capitalism (he'd argued) created misery, but it also created progress, and the revolution that was going to liberate mankind from misery would only happen once capitalism had contributed all the progress that it could, and all the misery too. At that point the infrastructure for producing things would have attained a state of near-perfection. At the same time, the search for higher profits would have driven the wages of the working class down to near-destitution. It would be a world of wonderful machines and ragged humans. When the contradiction became unbearable, the workers would act. And paradise would quickly lie within their grasp, because Marx expected that the victorious socialists of the future would be able to pick up the whole completed apparatus of capitalism and carry it forward into the new society, still humming, still prodigally producing. There might be a need for a brief period of decisive government during the transition to the new world of plenty, but the 'dictatorship of the proletariat' Marx imagined was modelled on the 'dictatorships' of ancient Rome, when the republic would now and again draft some respected citizen to give orders in an emergency. The dictatorship of Cincinnatus lasted one day; then, having extracted the Roman army from the mess it was in, he went back to his plough. The dictatorship of the proletariat would presumably last a little longer, perhaps a few years. And of course there would also be an opportunity to improve on the sleek technology inherited from capitalism, now that society as a whole was pulling the levers of the engines of plenty. But it wouldn't take long. There'd be no need to build up productive capacity for the new world. Capitalism had already done that. Very soon it would no longer be necessary even to share out the rewards of work in proportion to how much work people did. All the 'springs of co-operative wealth' would flow abundantly, and anyone could have anything, or be anything. No wonder that Marx's pictures of the society to come were so vague: it was going to be an idyll, a rather soft-focus gentlemanly idyll, in which the inherited production lines whirring away in the background allowed the humans in the foreground to play, 'to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind…'

None of this was of the slightest use to the Marxists trying to run the economy of Russia after 1917. Not only had capitalist development not reached its climax of perfection and desperation in Russia; it had barely even begun. Russia had fewer railroads, fewer roads and less electricity than any other European power. Within living memory, the large majority of the population had been slaves. It became inescapably clear that, in Russia, socialism was going to have to do what Marx had never expected, and to carry out the task of development he'd seen as belonging strictly to capitalism. Socialism would have to mimic capitalism's ability to run an industrial revolution, to marshal investment, to build modern life.

But how?

There was in fact an international debate in the 1920s, partly prompted by the Bolsheviks' strange situation, over whether a state-run economy could really find substitutes for all of capitalism's working parts. No, said the Austrian economist Ludwig von Mises, it could not: in particular, it couldn't replace markets, and the market prices that made it possible to tell whether it was advantageous to produce any particular thing. Yes, it could, replied a gradually expanding group of socialist economists. A market was only a mathematical device for allocating goods to the highest bidder, and so a socialist state could easily equip itself with a replica marketplace, reduced entirely to maths. For a long time, the 'market socialists' were judged to have won the argument. The Bolsheviks, however, paid very little attention. Marx had not thought markets were very important — as far as he was concerned market prices just reflected the labour that had gone into products, plus some meaningless statistical fuzz. And the Bolsheviks were mining Marx's analysis of capitalism for hints to follow. They were not assembling an elegant mathematical version of capitalism. They were building a brutish, pragmatic simulacrum of what Marx and Engels had seen in the boom towns of the mid-nineteenth century, in Manchester when its sky was dark at noon with coal smoke. And they didn't easily do debate, either. In their hands, Marx's temporary Roman-style dictatorship had become permanent rule by the Party itself, never to be challenged, never to be questioned. There had been supposed to be a space preserved inside the Party for experiment and policy-making, but the police methods used on the rest of Russian society crept inexorably inward. The space for safe talk shrank till, with Stalin's victory over the last of his rivals, it closed altogether, and the apparatus of votes, committee reports and 'discussion journals' became purely ceremonious, a kind of fetish of departed civilisation.

Until 1928, the Soviet Union was a mixed economy. Industry was in the hands of the state but tailors' shops and private cafes were still open, and farms still belonged to the peasant families who'd received them when the Bolsheviks broke up the great estates. Investment for industry, therefore, had to come the slow way, by taxing the farmers; meanwhile the farmers' incomes made them dangerously independent, and food prices bounced disconcertingly up and down. Collectivisation saw to all these problems at once. It killed several million more people in the short term, and permanently dislocated the Soviet food supply; but forcing the whole country population into collective farms let the central government set the purchase price paid for crops, and so let it take as large a surplus for investment as it liked. In effect, all but a fraction of the proceeds of farming became suddenly available for industry.

Between them, these policies created a society that was utterly hierarchical. Metaphysically speaking, Russian workers owned the entire economy, with the Party acting as their proxy. But in practice, from 8.30 a.m. on Monday morning until 6 p.m. on Saturday night, they were expected simply to obey. At the very bottom of the heap came the prisoner-labourers of the Gulag. Stalin appears to have believed that, since according to Marx all value was created by labour, slave labour was a tremendous bargain. You got all that value, all that Arctic nickel mined and timber cut and rail track laid, for no wages, just a little millet soup. Then came the collective farmers, in theory free, effectively returned to the serfdom of their grandfathers. A decisive step above them, in turn, came the swelling army of factory workers, almost all recent escapees or refugees from the land. It was not an easy existence. Discipline at work was enforced through the criminal code. Arrive late three times in a row, and you were a 'saboteur'. Sentence: ten years. But from the factory workers on up, this was also a society in a state of very high mobility, with fairytale-rapid rises. You could start a semi-literate rural apparatchik, be the mayor of a city at twenty-five, a minister of the state at thirty; and then, if you were unlucky or maladroit, a corpse at thirty-two, or maybe a prisoner in the nickel mines, having slid from the top of the Soviet ladder right back down its longest snake. But mishaps apart, life was pretty good at the top, with a dacha in the country, from whose verandah the favoured citizen could survey the new world growing down below.

And it did grow. Market economies, so far as they were 'designed' at all, were designed to match buyers and sellers. They grew, but only because the sellers might decide, from the eagerness of the buyers, to make a little more of what they were selling. Growth wasn't intrinsic. The planned economy, on the other hand, was explicitly and deliberately a ratchet, designed to effect a one-way passage from scarcity to plenty by stepping up output each year, every year, year after year. Nothing else mattered: not profit, not accidents, not the effect of the factories on the land or the air. The planned economy measured its success in terms of the amount of physical things it produced. Money was treated as secondary, merely a tool for accounting. Indeed, there was a philosophical issue involved here, a point on which it was important for Soviet planners to feel that they were keeping faith with Marx, even if in almost every other respect their post-revolutionary world parted company with his. Theirs was a system that generated use-values rather than exchange-values, tangible human benefits rather than the marketplace delusion of value turned independent and imperious. For a society to produce less than it could, because people could not 'afford' the extra production, was ridiculous. Instead of calculating Gross Domestic Product, the sum of all incomes earned, the USSR calculated Net Material Product, the country's total output of stuff — expressed, for convenience, in roubles.

This made it difficult to compare Soviet growth with growth elsewhere. After the Second World War, when the numbers coming out of the Soviet Union started to become more and more worryingly radiant, it became a major preoccupation of the newly-formed CIA to try to translate the official Soviet figures from NMP to GDP, discounting for propaganda, guessing at suitable weighting for the value of products in the Soviet environment, subtracting items 'double-counted' in the NMP, like the steel that appeared there once in its naked new-forged self, twice when panel-beaten into an automobile. The CIA figures were always lower than the glowing stats from Moscow. Yet they were still worrying enough to cause heart-searching among Western governments, and anxious editorialising in Western newspapers. For a while, in the late 1950s and the early 1960s, people in the West felt the same mesmerising disquiet over Soviet growth they were going to feel for Japanese growth in the 1970s and 1980s, and for Chinese and Indian growth from the 1990s on. Nor were they being deceived. Beneath several layers of varnish, the phenomenon was real. Since the fall of the Soviet Union, historians from both Russia and the West have recalculated the Soviet growth record one more time: and even using the most pessimistic of these newest estimates, all lower again than the Kremlin's numbers and the CIA's, the Soviet Union still shows up as growing faster than any country in the world except Japan. Officially it grew 10.1% a year; according to the CIA it grew 7% a year; now the estimates range upward from 5% a year. Still enough to squeak past West Germany, and to cruise past the US average of around 3.3%.

On the strength of this performance, Stalin's successors set about civilising their savage growth machine. Most of the prisoners were released from the labour camps. Collective farmers were allowed to earn incomes visible without a microscope, and eventually given old-age pensions. Workers' wages were raised, and the salaries of the elite were capped, creating a much more egalitarian spread of income. The stick of terror driving managers was discarded too; reporting a bad year's growth now meant only a lousy bonus. The work day shrank to eight hours, the work week to five days. The millions of families squeezed into juddering tsarist tenements were finally housed in brand-new suburbs. It was clear that another wave of investment was going to be needed, bigger if anything than the one before, to build the next generation of industries: plastics, artificial fibers, the just-emerging technologies of information. But it all seemed to be affordable now. The Soviet Union could give its populace some jam today, and reinvest for tomorrow, and pay the weapons bill of a super power, all at once. The Party could even afford to experiment with a little gingerly discussion; a little closely-monitored blowing of the dust off the abandoned mechanisms for talking about aims and objectives, priorities and possibilities.

And this was fortunate, because as it happened the board of USSR Inc. was in need of some expert advice. The growth figures were marvellous, amazing, outstanding — but there was something faintly disturbing about them, even in their rosiest versions. For each extra unit of output it gained, the Soviet Union was far more dependent than other countries on throwing in extra inputs: extra labour, extra raw materials, extra investment. This kind of 'extensive' growth (as opposed to the 'intensive' growth of rising productivity) came with built-in limits, and the Soviet economy was already nearing them. Whisper it quietly, but the capital productivity of the USSR was a disgrace. With a government that could choose what money meant, the Soviet Union already got less return for its investments than any of its capitalist rivals. Between 1950 and 1960, for instance, it had sunk 9.4% of extra capital a year into the economy, to earn only 5.8% a year more actual production. In effect, they were spraying Soviet industry with the money they had so painfully extracted from the populace, wasting more than a third of it in the process.
