Posts tagged with 'Programming'
Oct 6, 2013
How I describe my hobby to programmers

(I still haven't found a way to describe it to non-programmers.)

Wart is an experimental, dynamic, batshit-liberal language designed for small teams of intrinsically-motivated programmers who want more mastery over their entire software stack. The primary goal is to be easy for anyone to understand and modify.

$ git clone http://github.com/akkartik/wart
$ git checkout c73dcd8d6  # state when I wrote this article
$ cd wart
$ ./wart      # you'll need gcc and unix
ready! type in an expression, then hit enter twice. ctrl-d exits.
1+1
⇒ 2

Small core

The best way to be accessible is to minimize code. Wart is based on lisp because it seems the most powerful and extensible core for the lowest implementation cost. But that's just the core; read on.

(+ 1 1)
⇒ 2

'(1 2 3)      # lists look like function calls
⇒ (1 2 3)     # quoting prevents treating them like calls

(def (foo x)  # define functions using def
  (+ 1 x))
(foo 3)       # call functions using parens
⇒ 4

Clean syntax

The crown jewels of lisp are its macros, a massive tool for maximizing density in a readable way. Wart tries to question all of lisp's design without compromising the power of macros. It can guess parens from indentation to be more accessible to non-lispers.

if (odd? n)
  prn "odd"   # prn = print to screen

Used tastefully, taking out parens can make code markedly more readable. My heuristic is to skip parens when I want to highlight side-effects and control-flow, and to insert them when I want to indicate functional computation.

In order to provide macros, lisp uniformly uses prefix everywhere, including for arithmetic. Wart provides infix operators in an elegant way without compromising macros.

4 * 3         # same as (* 4 3)
⇒ 12

This has historically been hard to do because of the burden of providing precedence rules for nested expressions that are both intuitive and extensible. Wart's infix operators have only one hard-coded precedence rule: operators without whitespace are evaluated before operators with whitespace.

n * n-1       # n-1 binds tighter, so this parses as (* n (- n 1))

(The catch: you can't use infix characters like dashes in variable names, something lispers are used to.)

Infix is most useful in arithmetic-heavy code for graphics and so on. But it has uses beyond arithmetic. For example, sometimes the clearest way to describe a sub-computation is as a pipeline of operations:

(c -> downcase -> rot13 -> upcase)

There's a final convenience beyond indentation-sensitivity and infix: all function arguments can be referred to by name. Individual calls can decide to pass all their arguments in order, or to insert names to reorder or emphasize specific args.

4-3
⇒ 1

def (subtract a|from b)  # 'from' is an alias for the first arg
  a-b

subtract 4 3
⇒ 1
subtract 3 :from 4  # refer to 'a' with its alias
⇒ 1

Argument aliases can also be used for pattern matching, like haskell's as-patterns:

def (foo (a | (b c)))     # 'b' and 'c' name parts of list 'a'
  (list a b c)

(foo '(1 2))
⇒ ((1 2) 1 2)

Names

After succinctness, the biggest lever for laying out the big picture is a coherent system of names. It's not enough for names to be well-chosen in isolation; they also need to work well together. Wart tries to do this. In addition, it tries to maintain the hard-won coherence over time by allowing all names to be extended and overloaded in arbitrary ways. So you never have to define words like 'my_malloc' or 'append2' or 'queue_length'. Just override the existing name or tweak it for your special corner case.

def (len x) :case (isa queue x)
  ..

Such extensions to len can be made at any time, and might be arbitrarily far from each other. This makes it easier for all queue-related code to stay together, and so on.

Wart's primitives themselves pervasively use this technique. We first build def to define simple functions, and later add support for :case. The if primitive at first chooses between just two branches, and is later extended to multiple branches.

Organization

You should be able to do just about everything you want in Wart. However, Wart's internals are also intended to be extremely readable and hackable if you ever need to get into them. Its directory organization is designed to provide a high-level tour of the codebase. Just start reading at the first file and skim the first few files to orient yourself. You don't have to read each file to the end; important stuff is at the top and internal details further down. Start a new file with a numeric prefix, and it'll be loaded when Wart starts up. Reimplement a primitive from xxx.wart to xxx.cc for speed, and the new version will be picked up when you restart.

Tests

Wart is thoroughly unit tested.

$ ./wart test

The tests can be a valuable tool to interactively improve your understanding. Make a tweak, see if existing tests continue to pass. See a line you don't understand, or something that seems redundant? Change it, see if the tests pass.

Speed

Wart is currently orders of magnitude slower than anything you're used to. I've had some limited success using it for prototyping and then gradually rewriting key pieces in C. But even this currently only takes you so far. I'm playing with experimental primitives in the core interpreter that will make partial evaluation as easy as passing code through eval multiple times. But they're still in development.

Where I came from, where I'm going

Wart started with an observation: good programmers build clean and elegant programs, but programs grow less elegant over time. We start out clean, but invariably end up making a mess of things. Requirements shift over time, but reorganization is not convenient enough to keep organization in sync. Historical accidents don't get cleaned up. Newcomers don't have the comprehensive big-picture view of the original authors, and their changes gradually dilute the original design. As coherence decays, so does sense of ownership. The big picture becomes a neglected commons, too hard to defend and nobody's direct responsibility.

How to fight this tendency? Perhaps we can better preserve sense of ownership by minimizing constraints.

  • Make creating new files easy, so there are no constraints in how a file can be organized. This will hopefully improve the odds that file organization will continue to reflect reality as requirements shift and the architecture evolves.

  • Keep features self-contained in files, so a feature can be dropped simply by deleting its files, and the rest of the project will continue to build and pass its tests. (There's a new version of this project where I try to take this idea to the limit.)

  • Backwards compatibility is a huge source of constraints that can bleed ownership. What would a language ecosystem be like without any backwards-compatibility guarantees, super easy to change and fork promiscuously? Wart has no version numbers, and new versions and forks are free to change all semantics to their hearts' content. Instead of a spec or frozen interfaces, we rely on automated tests. Wart programs are expected to be as thoroughly tested as Wart itself, and to buy into the investment of hacking when upgrading. The hope is that upgrade effort might be higher than the best-case scenario in other languages, but the worst-case time will remain tightly bounded because clients of libraries will be more likely to understand how they're implemented, and so be more empowered.

The hypotheses are a) that making the global organization easy for outsiders to understand will help preserve the big picture, b) that making reorganizations more convenient by eliminating constraints will help keep the big picture in sync with reality, and c) that understandability and rewrite-friendliness are related in a virtuous cycle. Understandable codebases help outsiders contribute better organizations, and rewrite-friendly codebases help newcomers hone their understanding by attempting lots of reorganizations in sandboxes.

These hypotheses are all hard to verify; I don't expect the job to be done anytime soon. I'd love feedback from people unafraid to bounce between language and implementation. If that's you, check it out and tell me what you think. There are many more Wart code samples at Rosetta Code.

Credits

As an experiment in designing software, Wart was inspired by the work of Christopher Alexander, David Parnas, and Peter Naur. As a language, it was inspired by Arc. Discussions on the arc forum helped develop many of its features.


* *
Aug 13, 2013
The trouble with 'readability'

We programmers love to talk about the value of readability in software. But all our rhetoric, even if it were practiced with diligence, suffers from a giant blind spot.

Exhibit A

Here's Douglas Crockford on programming style. For the first half he explains why readability is important: because our brains do far more subconsciously than we tend to realize. The anecdotes are interesting and the presentation is engaging, but the premise is basically preaching to the choir. Of course I want my code to be readable. Sensei, show me how!

But when he gets to ‘how’, here is what we get: good names, comments and consistent indentation. Wait, what?! After all that discussion about how complex programs are, and how hard they are to understand, do we really expect to make a dent in global complexity with a few blunt, local rules? Does something not seem off?

Exhibit B

Here's a paean to the software quality of Doom 3. It starts out with this utterly promising ideal:

Local code should explain, or at least hint at, the overall system design.

Unfortunately we never hear about the 'overall system design' ever again. Instead we get.. good names, comments and indentation, culminating in the author's ideal of beauty:

The two biggest things, for me at least, are stylistic indenting and maximum const-ness.

I think the fundamental reasons for the quality of Doom 3 have been missed. Observing superficial small-scale features will only take you so far in appreciating the large-scale beauty of a program.

Exhibit C

Kernighan and Pike's classic Practice of Programming takes mostly the code writer's part. For reading you're left again with guidelines in the small: names, comments and indentation.

Discussion

I could go on and on. Every time the discussion turns to readability we skip almost unconsciously to style guides and whatnot. Local rules for a fundamentally global problem.

This blind spot is baked into the very phrase ‘readable code’. ‘Code’ isn't an amorphous thing that you manage by the pound. You can't make software clean simply by making all the ‘code’ in it more clean. What we really ought to be thinking about is readable programs. Functions aren't readable in isolation, at least not in the most important way. The biggest aid to a function's readability is to convey where it fits in the larger program.

Nowhere is this more apparent than with names. All the above articles and more emphasize the value of names. But they all focus on naming conventions and rules of thumb to evaluate the quality of a single name in isolation. In practice, a series of locally well-chosen names gradually ends up in overall cacophony. A program with a small harmonious vocabulary of names consistently used is hugely effective regardless of whether its types and variables are visibly distinguished. To put it another way, the readability of a program can be hugely enhanced by the names it doesn't use.

Part of the problem is that talking about local features is easy. It's easy to teach the difference between a good name and a bad name. Once taught, the reader has the satisfaction of going off to judge names all around him. It's far harder to show a globally coherent design without losing the reader.

Simple superficial rules can be applied to any program, but to learn from a well-designed program we must focus on what's unique to it, and not easily transferred to other programs in disparate domains. That again increases the odds of losing the reader.

But the largest problem is that we programmers are often looking at the world from a parochial perspective: “I built this awesome program, I understand everything about it, and people keep bugging me to accept their crappy changes.” Style guides and conventions are basically tools for the insiders of a software project. If you already understand the global picture it makes sense to focus on the local irritations. But you won't be around forever. Eventually one of these newcomers will take charge of this project, and they'll make a mess if you didn't talk to them enough about the big picture.

Lots of thought has gone into the small-scale best practices to help maintainers merge changes from others, but there's been no attempt at learning to communicate large-scale organization to newcomers. Perhaps this is a different skill entirely; if so, it needs a different name than ‘readability’.


* *
Jun 9, 2013
A new way of testing

There's a combinatorial explosion at the heart of writing tests: the more coarse-grained the test, the more possible code paths to test, and the harder it gets to cover every corner case. In response, conventional wisdom is to test behavior at as fine a granularity as possible. The customary divide between 'unit' and 'integration' tests exists for this reason. Integration tests operate on the external interface to a program, while unit tests directly invoke different sub-components.

But such fine-grained tests have a limitation: they make it harder to move function boundaries around, whether it's splitting a helper out of its original call-site, or coalescing a helper function into its caller. Such transformations quickly outgrow the build/refactor partition that is at the heart of modern test-based development; you end up either creating functions without tests, or throwing away tests for functions that don't exist anymore, or manually stitching tests to a new call-site. All these operations are error-prone and stress-inducing. Does this function need to be test-driven from scratch? Am I losing something valuable in those obsolete tests? In practice, the emphasis on alternating phases of building (writing tests) and refactoring (holding tests unchanged) causes certain kinds of global reorganization to never happen. In the face of gradually shifting requirements and emphasis, codebases sink deeper and deeper into a locally optimum architecture that often has more to do with historical reasons than thoughtful design.

I've been experimenting with a new approach to keep the organization of code more fluid, and to keep tests from ossifying it. Rather than pass in specific inputs and make assertions on the outputs, I modify code to judiciously print to a trace and make assertions on the trace at the end of a run. As a result, tests no longer need to call fine-grained helpers directly.

An utterly contrived and simplistic code example and test:

int foo() { return 34; }
void test_foo() { check(foo() == 34); }

With traces, I would write this as:

int foo() {
  trace << "foo: 34";
  return 34;
}
void test_foo() {
  foo();
  check_trace_contents("foo: 34");
}

The call to trace is conceptually just a print or logging statement. And the call to check_trace_contents ensures that the 'log' for the test contains a specific line of text:

foo: 34
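
To make the mechanics concrete, here's a minimal sketch of what such a harness could look like, in the spirit of the C++ tests above. The single global stream, the explicit newlines and the exact-line comparison are assumptions for illustration; the real harness is richer (it manages line termination, labels, and so on).

#include <iostream>
#include <sstream>
#include <string>

std::ostringstream trace;        // an in-memory log; every test starts it empty

// Pass if some line of the trace matches 'expected' exactly.
void check_trace_contents(const std::string& expected) {
  std::istringstream lines(trace.str());
  std::string line;
  while (std::getline(lines, line))
    if (line == expected) return;
  std::cerr << "F: trace doesn't contain '" << expected << "'\n";
}

int foo() {
  trace << "foo: 34" << '\n';    // here we terminate the line by hand
  return 34;
}

void test_foo() {
  trace.str("");                 // clear the trace before each test
  foo();
  check_trace_contents("foo: 34");
}

int main() { test_foo(); }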

That's the basic flow: create side-effects to check for rather than checking return values directly. At this point it probably seems utterly redundant. Here's a more realistic example, this time from my toy lisp interpreter. Before:

void test_eval_handles_body_keyword_synonym() {
  run("f <- (fn (a b ... body|do) body)");
  cell* result = eval("(f 2 :do 1 3)");
  // result should be (1 3)
  check(is_cons(result));
  check(car(result) == new_num(1));
  check(car(cdr(result)) == new_num(3));
}

After:

void test_eval_handles_body_keyword_synonym() {
  run("f <- (fn (a b ... body|do) body)");
  run("(f 2 :do 1 3)");
  check_trace_contents("(1 3)");
}

(The code looks like this.)

This example shows the key benefit of this approach. Instead of calling eval directly, we're now calling the top-level run function. Since we only care about a side-effect we don't need access to the value returned by eval. If we refactored eval in the future we wouldn't need to change this function at all. We'd just need to ensure that we preserved the tracing to emit the result of evaluation somewhere in the program.

As I've gone through and 'tracified' all my tests, they've taken on a common structure: first I run some expressions to set up preconditions. Then I run the expression I want to test and inspect the trace. Sometimes I'm checking for something that the setup expressions could have emitted, so I need to clear the trace after setup to avoid contamination. Over time different parts of the program get namespaced with labels to avoid accidental conflict.

check_trace_contents("eval", "=> (1 3)");

This call now says, "look for this line only among lines in the trace tagged with the label eval." Other tests may run the same code but test other aspects of it, such as tokenization, or parsing. Labels allow me to verify behavior of different subsystems in an arbitrarily fine-grained manner without needing to know how to invoke them.
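
Continuing the illustrative sketch from earlier, one way label-scoped checks might work is to store each trace line as a (label, contents) pair and have the two-argument check filter on the label. The representation here is an assumption, not the actual implementation:

#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Each trace line remembers which subsystem ('label') emitted it.
std::vector<std::pair<std::string, std::string>> trace_lines;

void trace(const std::string& label, const std::string& contents) {
  trace_lines.push_back(std::make_pair(label, contents));
}

// Pass if a line with this label matches 'expected', ignoring all other labels.
void check_trace_contents(const std::string& label, const std::string& expected) {
  for (size_t i = 0; i < trace_lines.size(); ++i)
    if (trace_lines[i].first == label && trace_lines[i].second == expected) return;
  std::cerr << "F: no line '" << expected << "' with label '" << label << "'\n";
}

int main() {
  trace("tokenize", "(");                    // other subsystems use their own labels
  trace("eval", "=> (1 3)");
  check_trace_contents("eval", "=> (1 3)");  // looks only at 'eval' lines
}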

Other codebases will have a different common structure. They may call a different top-level than run, and may pass in inputs differently. But they'll all need labels to isolate design concerns.

The payoff of these changes: all my tests are now oblivious to internal details like tokenization, parsing and evaluation. The trace checks that the program correctly computed a specific fact, while remaining oblivious to how it was computed, whether synchronously or asynchronously, serially or in parallel, whether it was returned in a callback or a global, etc. The hypothesis is that this will make high-level reorganizations easier in future, and therefore more likely to occur.

Worries

As I program in this style, I've been keeping a list of anxieties, potentially-fatal objections to it:

  • Are the new tests more brittle? I've had a couple of spurious failures from subtly different whitespace, but they haven't taken long to diagnose. I've also been gradually growing a vocabulary of possible checks on the trace. Even though it's conceptually like logging, the trace doesn't have to be stored in a file on disk. It's a random-access in-memory structure that can be sliced and diced in various ways. I've already switched implementations a couple of times as I added labels to namespace different subsystems/concerns, and a notion of frames for distinguishing recursive calls.

  • Are we testing what we think we're testing? The trace adds a level of indirection, and it takes a little care to avoid false successes. So far it hasn't been more effort than conventional tests.

  • Will they lead to crappier architecture? Arguably the biggest benefit of TDD is that it makes functions more testable all across a large program. Tracing makes it possible to keep such interfaces crappier and more tangled. On the other hand, the complexities of flow control, concurrency and error management often cause interface complexity anyway. My weak sense so far is that tests are like training wheels for inexperienced designers. After some experience, I hope people will continue to design tasteful interfaces even if they aren't forced to do so by their tests.

  • Am I just reinventing mocks? I hope not, because I hate mocks. The big difference to my mind is that traces should output and verify domain-specific knowledge rather than implementation details, and that it's more convenient with traces to selectively check specific states in specific tests, without requiring a lot of setup in each test. Indeed, one way to view this whole approach is as test-specific assertions that can be easily turned on and off from one test to the next.

  • Avoiding side-effects is arguably the most valuable rule we know about good design. Could this whole approach be a dead-end simply because of its extreme use of side-effects? Arguably these side-effects are ok, because they don't break referential transparency. The trace is purely part of the test harness, something the program can be oblivious to in production runs.

The future

I'm going to monitor those worries, but I feel very optimistic about this idea. Traces could enable tests that have so far been hard to create: for performance, fault-tolerance, synchronization, and so on. Traces could be a unifying source of knowledge about a codebase. I've been experimenting with a collapsing interface for rendering traces that would help newcomers visualize a new codebase, or veterans more quickly connect errors to causes. More on these ideas anon.


* *
Nov 26, 2012
Software libraries don't have to suck

When I said that libraries suck, I wasn't being precise.[1] Libraries do lots of things well. They allow programmers to quickly prototype new ideas. They allow names to have multiple meanings based on context. They speed up incremental recompiles, and they allow programs on a system to share code pages in RAM. Back in the desktop era, they were even units of commerce. All this is good.

What's not good is the expectation they all-too-frequently set with their users: go ahead, use me in production without understanding me. This expectation has ill effects for both producers and consumers. Authors of libraries prematurely freeze their interfaces in a futile effort to spare their consumers inconvenience. Consumers of libraries have been trained to think that they can outsource parts of their craft to others, and that waiting for 'upstream' to fill some gap is better than hacking a solution yourself and risking a fork. Both of these are bad ideas.

To library authors

Interfaces aren't made in one big-bang moment. They evolve. You write code for one use case. Then maybe you find it works in another, and another. This organic process requires a lengthy gestation period.[2] When we try to shortcut it, we end up with heavily-used interfaces that will never be fixed, even though everyone knows they are bad.

A prematurely frozen library doesn't just force people to live with it. People react to it by wrapping it in a cleaner interface. But then they prematurely freeze the new interface, and it starts accumulating warts and bolt-on features just like the old one. Now you have two interfaces. Was forking the existing interface really so much worse an alternative? How much smaller might each codebase in the world be without all the combinatorial explosion of interfaces wrapping other interfaces?

Just admit up-front that upgrades are non-trivial. This will help you maintain a sense of ownership for your interfaces, and make you more willing to gradually do away with the bad ideas.

More changes to the interface will put more pressure on your development process. Embrace that pressure. Help users engage with the development process. Focus on making it easier for users to learn about the implementation and about the process of filing bugs.

Often the hardest part of filing a bug for your users is figuring out where to file it. What part of the stack is broken? No amount of black-box architecture astronomy will fix this problem for them. The only solution is to help them understand their system, at least in broad strokes. Start with your library.

Encourage users to fork you. "I'm not sure this is a good idea; why don't we create a fork as an A/B test?" is much more welcoming than "Your pull request was rejected." Publicize your forks, tell people about them, watch the conversation around them. They might change your mind.

Watch out for the warm fuzzies triggered by the word 'reuse'. A world of reuse is a world of promiscuity, with pieces of code connecting up wantonly with each other. Division of labor is a relationship not to be gotten into lightly. It requires knowing what guarantees you need, and what guarantees the counterparty provides. And you can't know what guarantees you need from a subsystem you don't understand.

There's a prisoner's dilemma here: libraries that over-promise will seem to get popular faster. But hold firm; these fashions are short-term. Build something that people will use long after Cucumber has been replaced with Zucchini.

To library users

Expect less. Know what libraries you rely on most, and take ownership for them. Take the trouble to understand how they work. Start pushing on their authors to make them easier to understand. Be more willing to hack on libraries to solve your own problems, even if it risks creating forks. If your solutions are not easily accepted upstream, don't be afraid to publish them yourselves. Just set expectations appropriately. If a library is too much trouble to understand, seek alternatives. Things you don't understand are the source of all technical debt. Try to build your own, for just the use-cases you care about. You might end up with something much simpler to maintain, something that fits better in your head.

(Thanks to Daniel Gackle; to David Barbour, Ray Dillinger and the rest of Lambda the Ultimate; and to Ross Angle, Evan R Murphy, Mark Katakowski, Zak Wilson, Alan Manuel Gloria and the rest of the Arc forum. All of them disagree with parts of this post, and it is better for it.)

notes

1. And trying to distinguish between 'abstraction' and 'service' turned out to obfuscate more than it clarified, so I'm going to avoid those words.

2. Perhaps we need a different name for immature libraries (which are now the vast majority of all libraries). That allows users to set expectations about the level of churn in the interface, and frees up library writers to correct earlier missteps. Not enough people leave time for gestating interfaces, perhaps in analogy with how not enough people leave time for debugging.


* *
Nov 24, 2012
Comments in code: the more we write, the less we want to highlight

That's my immediate reaction watching these programmers argue about what color their comments should be when reading code. It seems those who write sparse comments want them to pop out of the screen, and those who comment more heavily like to provide a background hum of human commentary that's useful to read in certain contexts and otherwise easy to filter out.

Now that I think about it, this matches my experience. I've experienced good codebases commented both sparsely and heavily. The longer I spend with a sparsely-commented codebase, the more I cling to the comments it does have. They act as landmarks, concise reminders of invariants. However, as I grow familiar with a heavily-commented codebase I tend to skip past the comments. Code is non-linear and can be read in lots of ways, with lots of different questions in mind. Inevitably, narrative comments only answer some of those questions and are a drag the rest of the time.

I'm reminded of one of Lincoln's famous quotes, a foreshadowing of the CAP theorem. Comments can be either detailed or salient, never both.

Comments are versatile. Perhaps we need two kinds of comments that can be colored differently. Are there still other uses for them?


* *
Nov 12, 2012
Software libraries suck

Here's why, in a sentence: they promise to be abstractions, but they end up becoming services. An abstraction frees you from thinking about its internals every time you use it. A service allows you to never learn its internals. A service is not an abstraction. It isn't 'abstracting' away the details. Somebody else is thinking about the details so you can remain ignorant.

Programmers manage abstraction boundaries; that's our stock in trade. Managing them requires bouncing around on both sides of them. If you restrict yourself to one side of an abstraction, you're limiting your growth as a programmer.[1] You're chopping off your strength and potential, one lock of hair at a time, and sacrificing it on the altar of convenience.

A library you're ignorant of is a risk you're exposed to, a now-quiet frontier that may suddenly face assault from some bug when you're on a deadline and can least afford the distraction. Better to take a day or week now, when things are quiet, to save that hour of life-shortening stress when it really matters.

You don't have to give up the libraries you currently rely on. You just have to take the effort to enumerate them, locate them on your system, install the sources if necessary, and take ownership the next time your program dies within them, or uncovers a bug in them. Are these activities more time-consuming than not doing them? Of course. Consider them a long-term investment.

Just enumerating all the libraries you rely on others to provide can be eye-opening. Tot up all the open bugs in their trackers and you have a sense of your exposure to risks outside your control. In fact, forget the whole system. Just start with your Gemfile or node_modules. They're probably lowest-maturity and therefore highest risk.

Once you assess the amount of effort that should be going into each library you use, you may well wonder if all those libraries are worth the effort. And that's a useful insight as well. “Achievement unlocked: I've stopped adding dependencies willy-nilly.”

Update: Check out the sequel.

(This birth was midwifed by conversations with Ross Angle, Dan Grover, and Manuel Simoni.)

notes

1. If you don't identify as a programmer, if that isn't your core strength, if you just program now and then because it's expedient, then treating libraries as services may make more sense. If a major issue pops up you'll need to find more expert help, but you knew that already.


* *
Jun 18, 2012
How to use a profiler

All of us programmers have at some point tried to speed up a large program. We remember "measure before optimizing" and profile it, and end up (a few hours later) with something intimidating like this and.. what next? If you're like me, you scratch your head at the prospect of optimizing StringAppend, and the call-graph seems to tell you what you already know: Your program spends most of its time in the main loop, divided between the main subtasks.

I used to imagine the optimization process like this:

1. Run a profiler
2. Select a hot spot
3. ...
4. Speedup!

But the details were hazy. Especially in step 3. Michael Abrash was clearly doing a lot more than this. What was it?

Worse, I kept forgetting to use the profiler. I'd have a split-second idea and blunder around for hours before remembering the wisdom of "measure before optimizing." I was forgetting to measure because I was getting so little out of it, because I'd never learned to do it right.

After a lot of trial and error[1] in the last few months, I think I have a better sense of the process. Optimization is like science. You can't start with experiments. You have to start with a hypothesis. "My program is spending too much time in _." Fill in the blank, then translate the sentence into a question a profiler can answer: "I expect to see more time spent in function A than in B." Then run the profiler and check your results. Skip the low-level stuff; look for just A and B in the cumulative charts. Which takes more time? Is one much more of a bottleneck? Keep an eye out for a peer function that you hadn't considered, something that's a sibling of A and B in the call-graph, at about the same level of granularity.

Do this enough times and you gain an intuition of what your program is doing, and where it's spending its time.

When you do find a function at a certain level of granularity that seems to be taking too long, it's time to focus on what it does and how it works. This is what people mean when they say, "look for a better algorithm." Can the data structures be better organized from the perspective of this function? Is it being called needlessly? Can we prevent it being called too often? Can we specialize a simpler variant for 90% of the calls?
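
As an illustration of that last question, here's a sketch of specializing a simpler variant for the common case. The function and its 'common case' are hypothetical, stand-ins for whatever your profile actually flags:

#include <string>

// The general routine: a stand-in for the expensive implementation the
// profiler flagged (handles whitespace, signs, arbitrary lengths, ...).
int parse_int_general(const std::string& s) { return std::stoi(s); }

// Specialized variant for the case the profile says dominates: a single digit.
int parse_int(const std::string& s) {
  if (s.size() == 1 && s[0] >= '0' && s[0] <= '9')
    return s[0] - '0';            // fast path skips the general machinery
  return parse_int_general(s);    // everything else takes the general path
}

int main() { return parse_int("7") == 7 ? 0 : 1; }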

If none of that generates any ideas, then it's time to bite the bullet and drop down to a lower level.

But remember: optimization is about understanding your program. Begin there, and profiling and other tools will come to hand more naturally.


* *
Jul 14, 2011
Evolution of a rails programmer

Idiomatic rails action for registering a user if he doesn't exist:

After a year of programming in lisp, I find it most natural to write:

Is this overly concise/obfuscated? I like it because it concisely expresses the error case as an early exit; most of the space is devoted to the successful save, which is straight-line code without distracting branches. It's clearer that we either pick an existing user or create a new one. Form follows function.


* *
Aug 3, 2009
Tests are a technique to manage programmer anxiety about code. When I feel anxious about some aspect of my code I write a test case.

Does programming language affect level of anxiety? Definitely. Does programmer skill affect level of anxiety? Absolutely.

Writing tests becomes more important when you're part of a team. Your choices affect not just your anxiety but that of your teammates. That's why it's reasonable to be more dogmatic about TDD in a team.

A lot of 'getting better at TDD' is just getting better at listening to yourself. When I started programming the little anxieties would pile up until I'd painted myself into a corner. With experience I pay more attention to the little anxieties.

The secondary effect: after some time doing TDD I feel less anxious just knowing that I can write a test if I want. The benefit of the tests as an artifact is secondary to me; what they primarily do is keep me from getting stressed and giving up to go play poker.
me


* *
May 23, 2009
Realism in simulation

"The goal of simulation is not to simulate reality as closely as possible. With an accurate model you cannot find commonalities."
Tom Slee

