Are you sure?

Screenshot of Are you sure?

Today I'm releasing Are you sure?, a little Chrome extension that prompts you before you visit certain websites. I often get distracted by Reddit or Hacker News when I'm meant to be doing something useful, and this is my own particular take on that problem.

I think that in many cases we overreact to mild cognitive flaws and biases, unreasonably punishing our monkey brains that are already trying as hard as they can to keep up. While I respect the power and depth of extensions like StayFocusd, I can't help but feel that strict schedules, time limits, and having to write lines on a virtual blackboard to use your computer is more masochism than self-improvement.

For my needs, I mainly trust myself. If I really want to, there are a hundred ways I can get around a website blocker. And sometimes I really do need to visit Hacker News or Reddit for work. I don't need a strict taskmaster, I need a helpful friend; someone to ask "hey, man, is this really what you want to be doing?". Are you sure? is an extension from that school. It only prompts you once, just enough to interrupt your muscle memory and ask you to actively make a decision about whether to waste time or not.

After that, it's up to you.

To-do blocks

A pile of wooden blocks

I've been thinking about Bret Victor's video on The Humane Representation of Thought, and his earlier Rant on the Future of Interaction Design. Summarising intensely: we use representations (like graphs) so we can go from a domain that doesn't suit us (like reams of numbers) to domains that do suit us (like our highly optimised visual system). But Bret's point is we only use a very limited set of those domains. Most of what computers do is strictly visual and occasionally aural, with no tactile or kinaesthetic component: "Pictures Under Glass". In the transition from static tools like hammers to dynamic tools like computers, we've lost most of our sensory range.

Bret's direction is towards physical computer systems, or dynamic "smart" matter. That sounds hard, so I'm going to leave that to him. Good luck, buddy! In the meantime, there's nothing stopping us from trying to exploit these extra underused sensory dimensions today. After all, the graph was invented long before Excel. So before Smart Matter, we can think about some ways to use more Dumb Matter to get a few more axes into our models, or arrive at insights that come more easily when you have something you can grab and move around.

I decided to try a more humane equivalent of the classic to-do list, which I have creatively named to-do blocks. My foray into Humane Representations looked like spending an hour and a half in my shed with some scavenged bed slats and a circular saw. In the end, I walked away with a downy coat of sawdust and 14 blocks of wood: 2 large, 4 medium and 8 small. Each size is 4x the next, so you can build larger blocks out of smaller ones.

A pile of wooden blocks

Each block represents a task, so I write the task's name on it in pencil. I'm currently only using it for meaningful tasks ("take out the garbage" doesn't make the cut), and only things that I'm actually working on (sorry, Great American Novel). The goal is to have a physical representation of "my plate" that gives me a good physical intuition for the tasks and projects I'm doing.

I've already found a bunch of interesting ways to represent and manipulate my tasks that would be hard or impossible on paper:

Position
You can cover one task with another task if it has to be done first. Tasks that are close to each other are related: I've been putting things I want to work on less further away, but also roughly arranging left-to-right by due date.
Weight and size
Bigger tasks get bigger blocks, and you can stack smaller blocks on top of bigger ones for subtasks. The biggest blocks are actually kind of unwieldy, which I think is a good fit.
Texture and colour
I'm not currently doing this, but I could paint or coat blocks in something to distinguish different categories like work/personal.
Location
The blocks live on a separate desk at the moment, but I take the block I'm working on and put it next to my keyboard to remind me what I'm meant to be doing. I have the "website" one here right now.
Spatial awareness
I can easily tell how many things I'm doing, the average size of those things, how related or unrelated my tasks are, and so on. Temporarily rearranging blocks (eg, making a stack of things to do today) works well because I can easily remember where they went before.
Fun
It's really fun to mess around with the blocks, so it doesn't feel like a chore to keep track of my to-dos. If I can't decide what to work on, I juggle blocks until I drop one. You're it!

I'm amazed how many interesting ways to make use of the extra information have come up without really trying that hard. But beyond any of that, it's surprising just how much better it feels to be doing something with my hands, to feel shape and texture, and operate in 3d space for a change. Even if what I'm doing turns out to be no more useful than a regular to-do list, I'd be happy to keep it for the humane-factor alone.

What I'm wondering is, what else could you use blocks for?

When it all comes together

I've been working on a little JavaScript utility library called Catenary for the last couple of weeks. It's based on the concatenative (aka stack-based) paradigm, and particularly inspired by the Factor language. It started as me noodling around writing some Factor primitives in JS, but it's turned out to be a lot more fun than I expected.

It's not ready for the real world yet, but while working on it today I had one of those "a-ha!" moments. I think they're the most satisfying thing you can experience while working on an idea. You stumble around in the dark, working off nothing but feel, completely lost. But slowly you piece together shapes. Certain movements start to seem familiar. Suddenly, something clicks. This part is the same as that part. In fact, maybe they're all the same part. A-ha!

With catenary, you can write things like this (final syntax pending):

coffee> cat(2, 3, 4)
{ _stack: [ 2, 3, 4 ] }
coffee> cat(2, 3, 4).plus
{ _stack: [ 2, 7 ] }
coffee> cat(2, 3, 4).plus.times
{ _stack: [ 14 ] }

Basically, we use property access to perform operations on a data stack. You can also get data into and out of the stack using the special functions .cat and .$, like this:

coffee> cat(1, 2).cat(3)
{ _stack: [ 1, 2, 3 ] }
coffee> cat(1, 2).cat(3).plus
{ _stack: [ 1, 5 ] }
coffee> cat(1, 2).cat(3).plus.plus
{ _stack: [ 6 ] }
coffee> cat(1, 2).cat(3).plus.plus.$
6

So that's fun, but when you want to do anything higher-order you need a way to create those function chains without executing them immediately. For that, you can use a function-building style, which lets you build up a JavaScript-friendly function whose arguments become values on the stack, like this:

{ [Function] _funstack: [ [Function] ] }
coffee>, 4)
{ _stack: [ 7 ] }
{ [Function] _funstack: [ [Function], [Function] ] }
coffee>, 3, 4)
{ _stack: [ 14 ] }

The way it works is actually fairly simple: the function stack (aka _funstack) is built up with every property you access. When you're ready to call your function, it initialises an empty data stack and calls all of the funs in the funstack in order.
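If property access doing work sounds like voodoo, it's easy to replicate with a Proxy. Here's a toy version of the trick - my own sketch, not Catenary's actual implementation, with just two hard-coded operations:

```javascript
// Toy reimplementation of property-access stack operations (not Catenary's
// real code). Each named operation pops its arguments and pushes its result.
const ops = {
  plus: (stack) => stack.push(stack.pop() + stack.pop()),
  times: (stack) => stack.push(stack.pop() * stack.pop()),
};

function cat(...values) {
  const target = { _stack: values };
  const handler = {
    get(obj, name) {
      if (name in ops) {
        ops[name](obj._stack); // run the operation on the data stack
        return proxy; // return the same proxy so accesses chain
      }
      return obj[name]; // plain properties like _stack pass through
    },
  };
  const proxy = new Proxy(target, handler);
  return proxy;
}
```

With that, cat(2, 3, 4).plus.times._stack really does give you [14].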

My particular a-ha came today when I was trying to sort out how to do flow control. To call a function on the stack, you do this:

coffee> cat(1, 2,
{ _stack: [ 3 ] }

Which can be pretty awkward, especially if you want to chain a lot of function calls together. So I started working on the idea of a .fun operator, which would allow you to enter function-building style at any time. That is, you could refactor the above into this:

coffee> cat(1, 2).fun.exec(
{ _stack: [ 3 ] }

Under the hood, .fun works exactly the same as our original function-building style. It creates an internal funstack with cat.exec in it, which is then invoked when you call the returned function. The only difference is that instead of starting with an empty data stack, you use the one you had when .fun is called ([1, 2]). Another way to think about it is that .fun is a placeholder that gets filled in later with the argument ( when the function is called.

So what's the a-ha? Well, if you remember from before, .cat adds things to the stack after the stack is created, so cat(1,2,3) is equivalent to cat(1).cat(2).cat(3). And that's the exact same thing as .fun! It just adds things to the funstack as well as the data stack.

Which means all three of these are the same:

coffee> cat(1, 2, 3)
{ _stack: [ 1, 2, 3 ] }
coffee> cat(1, 2).cat(3)
{ _stack: [ 1, 2, 3 ] }
coffee> cat(1, 2).fun(3)
{ _stack: [ 1, 2, 3 ] }

In that last example, .fun(3) adds 3 to the stack, then calls its (empty) funstack.

That led to the realisation that there might be no need for a separate idea of function-building style and regular style at all. Every catenary can have a data stack and a funstack, and that means a much more elegant system.

In fact, I've had this vague uneasy feeling about data stacks and function stacks for a while. I keep catching glimpses of something that might mean I could drop the distinction entirely and just have one single glorious omnistack.

That's at least one more a-ha away though.


Screenshot of livewall

A while back I wrote a little Bash script called livewall to do live wallpaper on OS X. I'd heard that nature boosts productivity, and I thought I'd see if I could add a little bit of earthy soul to my sterile aluminium-and-glass work environment. Given that at least some of those results suggest just seeing a green rectangle would be sufficient, I may have somewhat overcomplicated the task.

I looked at a few commercial solutions, but none of them seemed that amazing. Most were more expensive than what I wanted and didn't seem to do very much other than just play video. They also had in-app purchases, so you would have to pay more once you installed them to access the full library of wallpapers. More importantly, though, I'd recently resolved to get my hands dirty more. Although I could certainly pay someone for software to do what I need, there's a certain rugged outdoorsman-type satisfaction in solving your own problems by the sweat of your brow - for certain fairly low standards of ruggedness.

Two breakthroughs made it fairly easy. The first is that my beloved VLC actually has a wallpaper mode, though you have to do a bit of massaging in recent OS X versions to make it work. The second is that YouTube actually has quite a lot of suitable nature videos.

Armed with livewall and my library of nature videos, I had a quite relaxing couple of hours working with a more animated Yosemite background. Then I went outside and saw the sunset, and I realised how woefully inadequate even a high-res display is when compared with the beauty of the real world.

Oh well, I tried. Besides, you can't be outside all the time.

Motion interpolation is (wonderful) voodoo

For a project I'm working on, I wanted to create a nice spinning Earth background video. After digging around for a while I discovered that NASA had exactly the sort of video I was looking for. The only problem is that it's only 8 seconds long. In my ideal world, it would spin slower and last longer, but you can't just pull more frames out of the air... or can you?

A technique called motion interpolation is able to do exactly that, with certain limitations. Most of those limitations aren't a problem for something as simple as a spinning globe, so my issue was just finding some software to do it for me. Many modern TVs actually have interpolation built in with brand-appropriate names like SmoothTrueMotionFlow, there seemed to be a few Windows video player plugins, and it seems to be a popular feature in high-end video suites. Unfortunately for me, I didn't want to hack my TV, install Windows or pay thousands of dollars. My search seemed doomed.

In the end, my saviour was a Swiss student named Simon Eugster, whose open-source slowmoVideo was created as a thesis project. It works by splitting the video into individual frames, then comparing each frame to nearby frames to calculate the optical flow - basically a measurement of how much each part of the video is moving. It then uses that information to create additional frames, sews the video back up, and there you go.
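In toy form, the second half of that process (the easy half) looks something like this. I'm using a one-dimensional "frame" and pretending the optical flow is already known - estimating it is the genuinely hard part that slowmoVideo does for you:

```javascript
// Toy 1-D motion interpolation. frame is an array of pixel values and
// flow[x] says how far the pixel at x moves between this frame and the
// next. We synthesize the in-between frame by moving each pixel half-way.
// Estimating the flow from real frames is the hard part, assumed solved here.
function midFrame(frame, flow) {
  const out = new Array(frame.length).fill(0);
  for (let x = 0; x < frame.length; x++) {
    const target = Math.round(x + flow[x] / 2); // half the motion
    if (target >= 0 && target < out.length) out[target] = frame[x];
  }
  return out;
}
```

A bright pixel moving two steps to the right ends up one step along in the synthesized middle frame, which is exactly the new information you can't get by blending or duplicating frames.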

It turned out that slowmoVideo was way more sophisticated than I needed. It doesn't just slow down video, it allows you to draw custom slow-down-speed-up curves so you can do the whole Hollywood fight scene thing. It can even add motion blur. I felt a bit lame just drawing one long straight line and turning off all the options, but, hey, it worked and I got my slow-mo Earth.

If I hadn't heard about motion interpolation, I'd still be thinking that slowing down a video without making it jerky or blurry was impossible. I guess this is just one more to add to the pile of impossible computer graphics tricks that also includes resizing by deleting boring parts of an image, removing motion blur, and doing all your image editing by just drawing a box and telling Photoshop to figure out the rest.

Whatever they're putting in the water at SIGGRAPH must be pretty strong.

Caniuse for the command line

Screenshot of caniuse command line tool

Today I'm releasing caniuse-cmd, a CLI I wrote for Caniuse. Caniuse is a great site that acts as a beacon of hope for those trapped in web developer purgatory by showing which features are supported by which browser versions. With the post-HTML5 world being all about "living standards" and the rapid pace of browser development, it's pretty tricky to keep track of whether your favourite feature still doesn't work in Mobile Safari.

I spend a fair bit of time on the command line, so for me this was a natural choice. The caniuse website is nice, but nothing beats the convenience of having the results appear instantly in your terminal. Plus, writing actual UIs is hard.

It lets you filter for specific browsers with --browser, and narrow down the versions you care about with --current, --future and --era. You can also get multiple results at once, and display them compactly with --oneline and --oneline-browser, which is pretty fun if you're into that kind of thing.

Screenshot of caniuse command line tool

The site's data ended up being really easy to access, because it's available as an npm module. That let me spend more time working on important things, like making pretty colours and finding the right unicode character for "partially supported".
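Most of the formatting work is mapping caniuse's short support codes onto symbols. The base codes ("y" for supported, "n" for not, "a" for partial, with flags like "x" for prefix-required) follow the site's data format; the symbols here are just my own picks for illustration, not necessarily what caniuse-cmd ships:

```javascript
// Sketch of mapping caniuse-style support codes to display symbols.
// Base codes: "y" = yes, "n" = no, "a" = partial, "p" = polyfill, "u" = unknown.
// A trailing "x" flag means the feature needs a vendor prefix.
// The symbols are arbitrary choices, not caniuse-cmd's actual output.
const SYMBOLS = { y: "✔", n: "✘", a: "◒", p: "✘", u: "?" };

function supportSymbol(code) {
  const parts = code.split(" "); // "a x" -> ["a", "x"]
  let symbol = SYMBOLS[parts[0]] || "?";
  if (parts.includes("x")) symbol += "ˣ"; // prefix required
  return symbol;
}
```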

If you have Node, you can install it by typing npm install -g caniuse-cmd.

Keeping a NOTES file

Screenshot of a NOTES file

Sometimes on a project, it feels like everything just flows. The ideas turn into code, the code works properly, it solves your problems and then you hit commit and go to the beach. Unfortunately, especially for more interesting projects, you tend to spend a lot more time being stuck. The key observation to blow this whole thing wide open is just around the next corner, but until then you just bang your head against it and feel dumb.

The best solution I've found for getting over that wall is to talk to someone about it. Unfortunately, using a real person is fairly limiting. Firstly, they have to be familiar with the project, or you have to explain the project to them in enough detail that your problem will make sense. Secondly, in most cases the person you use won't get that much out of it. You're talking for your own sake, not theirs, and ultimately that makes it unsustainable.

The state of the art, then, is Rubber Ducking. That's when you explain your problems to an inanimate object (like a rubber duck) instead of a person. You can assume your rubber duck has good domain knowledge (rubber ducks are notorious generalists), explain the problem whatever way best helps you, and have your epiphany safe in the knowledge that the rubber duck is really chill about just being a means to an end.

But there are a few issues with talking out your problems that even the mighty duck can't solve. Some people think better in writing than speech. Some problems don't translate well verbally, especially very structural ones. And worst of all, talking doesn't persist anywhere. I find myself running over the same ideas again later on and questioning my reasoning, or coming back to a project after some time away from it and not remembering what I'd decided.

My attempt to solve this, in the great unix tradition of all-caps files like TODO and README, is the NOTES file. When I get stuck, I just open up a NOTES file in my editor and start free-form writing about what I'm stuck on. Sometimes that looks more structural (lists of ideas, example syntax, comparing various API designs) and sometimes it looks more like a self-directed Q&A: "what about X issue?" "how do you solve that while still keeping Y?"

I've found this uniquely helpful. If it doesn't outright solve my problem, it often clarifies it to the point where I know what I need to do to solve it. What's more, I find the NOTES file acts as a great starting point the next time I come back to the project. The latest text makes great background reading so I can get immersed in it again quickly.

If you're stuck on something, I highly recommend giving a NOTES file a try. Worst case scenario, you've accidentally written some documentation.


This next one is a simple fun site I threw together at a WebRTC meetup. It's mostly just the code from the JSFeat Sobel Derivatives demo mashed up with Soundcloud, but I think the result ended up being pretty cool.

Everyone I've shown it to has immediately started waving their arms around like an idiot, which I think is an important activity to encourage.

The inside-out universe

There are a lot of different ideas that I like, but one particular theme seems to reappear time and time again. I've tried to put it into words a few times, and this is the closest I've come: it's about building a universe from the inside.

When you work with a system enough, you learn and internalise its rules and resources. Maybe the rules of physics, or the tools of Photoshop, or the irrefutable tao of Rails. You learn how to combine resources and manipulate rules to make and do whatever you want. Eventually you become so familiar with that system that it becomes second nature. You no longer think "I'm going to use Photoshop to resize this image"; that would be like saying "I'm going to use physics to go to the park". You just resize the image. You live in the universe of Photoshop as much as you live in the physical universe, and you're building things using the rules of that universe.

More recently, one of those things you can build is a new universe, with its very own set of rules. Of course, societies, languages, games and so on have existed for ages, but they're all built from people and behaviour and tend to be hard to design. But with a computer you can build a universe easily, a consistent universe that follows the rules you decide on.

But when we create a computer program, we have to step outside it and build our amazing new universe while living in the drab universe of squiggles in a code editor. What if it didn't have to be that way? What if you could create a universe-creating universe, where the tools you use to build it aren't accessed from the outside, but from the inside?

Distributed data pattern

Diagram of the distributed data pattern

I've recently been working on yet another distributed system, and I noticed a pattern that I've seen sometimes but that I wish I could see more of. I think it provides a useful lens for thinking about and designing distributed systems.

To start with, you have local data and remote data. You address data using a scope: some way of identifying a particular piece of data. Think URLs, database queries, document ids, that kind of thing.

On top of that, you have events. These are messages attached to a particular scope that you access using a pub/sub pattern.

Lastly, you have updates that are implemented in terms of events. That is to say, updates are a special kind of event to synchronise the data associated with that scope.

Here are some real systems that implement this pattern to varying degrees:

IRC: Channels are scopes. IRC protocol messages are events. JOIN/PART/NAMES are update events to synchronise user lists. TOPIC/MODE synchronise other channel state.
CouchDB: Database names and document ids or view functions are scopes. Replication (the _changes feed) implements update events. However, no other events are possible, so there's no way to send messages to other people watching the same document.
Message queues: Queues and topics are scopes. Events are implemented as publish/subscribe. However, there's no notion of persistent data attached to the scope.
Redis: Database numbers and keys are scopes. PUBLISH/SUBSCRIBE commands implement events, but they are not filtered by scope. The various update commands are not implemented as events, so there's no way to watch for database changes.
The web: URLs are scopes. You can use server-sent events or websockets to implement events, but they're not scoped and not pub/sub. There's also no connection between HTTP documents and those streams - they accept different urls and you have to manage that mapping manually.

Ultimately, most things I work on that don't implement this pattern end up needing it reimplemented in some way: implementing events on top of the database (as in Couch), building your own mapping between events and data (as in Redis), or just doing whatever and hoping it works out (as in the web).

Random survey app

Sketch of a random survey app

I was at a Less Wrong meetup the other day and the conversation got around to self-measurement, as it often does. It reminded me of this app idea I've had floating around for a while.

I've tried a few different methods for self measurement, but none of them have suited me particularly well. I mainly use spreadsheets, which are great and very flexible but hard to stay on top of. If I'm particularly busy I tend to forget about them, which is sadly the exact time when I'd most like to have good information.

I've also dabbled with automatic window tracking software like RescueTime and Time Sink but it's never really stuck. Time Sink is basically unmaintained now and didn't give me very useful information. RescueTime was great but I find the idea of every activity on my computer being stored in one NSA-friendly database supremely creepy.

A third way I've heard about is random sampling. Instead of having to actively manage your measurements or build them into a monitoring system, you just get a prompt at random times asking you quick questions. Over time, your responses form a complete picture of what you want to know.

Unfortunately, there don't seem to be any good solutions for this at the moment. Most of what's out there focuses specifically on one type of question, either mood or "what are you doing now?", but I'd like to ask custom questions. There's an iOS app called Reporter that seems pretty nice, but I'd really like something for Android. In particular, something that integrates with Android Wear would be super cool.

In my ideal world, someone else would build that for me, but I suspect I might end up doing it myself. Unfortunately, it'll still be pretty tricky to track my time while doing it.
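The sampling core itself is tiny. A sketch, with the waking-hours window and prompt count as arbitrary parameters (and the RNG injectable so it can be tested):

```javascript
// Sketch of the random-sampling core: pick `count` uniformly random
// prompt times (as fractional hours) inside a waking window, sorted so
// they can be handed to a scheduler. All parameters are arbitrary choices.
function samplePromptTimes(startHour, endHour, count, random = Math.random) {
  const times = [];
  for (let i = 0; i < count; i++) {
    times.push(startHour + random() * (endHour - startHour));
  }
  return times.sort((a, b) => a - b);
}
```

Uniform sampling is the simplest choice; a real app might prefer exponentially distributed gaps so you can't predict when the next prompt is coming.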


Screenshot of a timetable

One technique I've found pretty useful for keeping my days on track is using a timetable. That is, dividing my entire day up into calendar entries describing what I should be doing at any given moment. I haven't really seen much of the humble timetable since I left university, but I think there are a few qualities that make it useful for creative work.

A timetable might sound overbearing, but it's important to remember that, like any measurement or management tool, it's only as draconian as its consequences. In my use, I don't find much benefit in getting upset when something happens and throws the timetable out. Rather, I adjust the timetable so that it makes sense for the rest of the day and use it as an opportunity to think about whether that adjustment was a good thing.

In fact, I'd say that's the main benefit of having a timetable. Without one, your day can slip arbitrarily and you don't even notice. Someone calls and you lose fifteen minutes. You get sidetracked by some interesting website and half an hour's gone. Suddenly the day's over and you're surprised to discover you've done half as much as you expected. With one, you not only notice the slippage, you can correct it by making changes to the rest of the day.

The other main benefit is that it forces you to put a bit more effort into planning your day. My default plan looks roughly like "do things until I'm done", which has the benefit of simplicity but not much else. Explicitly blocking out the day makes it obvious when there aren't enough hours to do everything you were planning, particularly on days where other commitments intrude. It also makes it much clearer when you're not leaving yourself enough time for breaks, exercise or fun.

Basically the only downside is having to remember to put aside time to do it, which is the kind of problem you can solve with the judicious application of even more timetabling.

Chia Groot

Chia Groot

Today I did a fun little DIY project as a gift for a friend. I bought a baby Groot bobble head, cut its head open and filled it with dirt and chia seeds. The ultimate outcome should be a Groot chia pet, but I'll have to wait a couple weeks to see if it turns out.

I also put up step-by-step instructions in case anyone else wants to make one.


I had a fun idea a while back for a kind of extreme real world game: get a bunch of people together with a fixed amount of time and try to recreate civilisation from nothing out in the wilderness. I call it rebuilding.

The difficulty of the game mainly depends on how you define civilisation and how you define nothing. I think it would be reasonable for beginners to start with a knife and comfortable clothes, but if you're hardcore you could start with nothing but a loincloth. It would be tempting to cheat by starting with specific things that are impractical to make, like steel or a microscope, but I think that defeats the point.

Instead, the right place to cheat is by careful definition of the word civilisation. For a first attempt I think you would pass by just setting up some kind of functional dwelling + basic agriculture. Then you could move on to more advanced goals like building a non-human-powered vehicle, launching something 100 metres into the air, or isolating penicillin.

A good goal would have a lot of creative ways to approach it. The vehicle doesn't have to use a traditional engine; maybe it could be wind powered or run off firewood. There are a few different mechanical and chemical ways to make something go high up in the air. Penicillin... yeah, that one's pretty tough. You would also want to set up lots of intermediate goals as waypoints, partly to give you something to focus on on the way to the major goal, and partly so you have a string of other successes to look back on when you don't reach your chosen definition of civilisation. The game's meant to be hard, and I think it's fair if it takes a few attempts.

A big part of it would be very careful preparation. Having a plan, or even several plans, and having practiced some crucial skills early on would go a long way to making the task achievable. Picking the right location for the goal (or the right goal for the location, if there aren't many choices) would also be crucial. If something happens such that you need real-world tools or medical attention, that would qualify as "dying" in the game. Obviously you wouldn't want to have phones and cars around, so you'd need to have a plan for how to get someone out if something goes wrong.

Ultimately I think it's a game that's appealing as an exercise in realising how far we've come and getting in touch with some of the technological developments that we take for granted now. Being able to go back and, in some small way, relive the experience of being on the front line of civilisation would be an amazing thing to experience.

Feedback loops

Feedback loops

Something that I've really started to appreciate the importance of recently is feedback loops. Any time I'm doing a project there's a loop between my actions and their results. I come up with an idea, I start figuring out how to implement it, make a couple passes at it, get something I'm happy with, release it and then show it to people. If they like it, or don't, that's good information and goes back into the next cycle of the process.

But I've been beginning to realise that my feedback loops are often way too long. I have projects and ideas that sit on my hard drive for months or years without seeing the light of day. Part of that is not wanting to release something crappy that will get me bad feedback, but another more significant factor is that my projects aren't structured the right way. It's easy to make something that's useless until it's finished, but much harder to make something that is useful and can gather feedback as early as possible.

This isn't new; in fact, it's a central tenet of agile development, which I've found useful in a business context. But I'm beginning to realise that I haven't really appreciated the value of thinking this way in general. The feedback loop is critical because it's a fundamental part of any optimisation process. Any time you're solving a dynamic problem, you'll be limited by how quickly you can see whether your solution's working.

My recent realisation is that short feedback loops are more useful for creative projects, not less. When you have an opportunity to go deep into the woods and make something really interesting is when it's most important not to get lost and forget the grounding your ideas have in reality. More creative projects are also more resistant to traditional analysis techniques; they're less cerebral and more intuitive. You won't be able to solve the whole thing at once, and making steady progress is going to be impossible without feedback.

Perhaps most importantly, a short feedback loop is very motivating. I've had many projects stall out because they've been languishing in a half-done state for too long and I just lose the energy to keep caring about them. But every bit of feedback and every minor reinforcement propels me forward and makes me want to see the next iteration. Even without the other benefits, that alone is worth it.

Photoshop Light

Sketch of Photoshop Light

In honour of Dorkbot, which I went to for the first time today, here's a Dorkbot-style art-meets-technology idea I had a while back: Photoshop Light.

You set up a person in a chair with lights shining on them from 5 different directions: top, bottom, left, right and front. Each light is a digital projector being coordinated by a Photoshop-style interface. As you draw in the interface, the projectors change their image to make it appear like you're drawing on the person in the chair.

The tools would be mostly designed for painting shadows and light, tinting, and adjusting colour temperature, the goal being to experience the subtlety and importance of lighting in our perceptions. There would be a few presets for, say, regular light, dim light, evil-looking underlights, that sort of thing. But mostly I expect people would just play around and try to make their friends look either good or bad, depending on their preferences.

There would, of course, be the obligatory take-a-photo-and-tweet-it functionality built in.

Illegal npm modules

Recently, I was a little surprised to learn that the FSF claims that depending on a library makes a derivative work and thus spreads the GPL. To me, that seems obviously ridiculous (how is it possible to create a derivative work of something you haven't modified? Or distributed? Or sometimes even looked at?). Indeed, you can have some fun watching the FSF tie itself into knots trying to avoid the inevitable conclusion that if you make a derivative work by calling code, then basically all non-GPL software is in violation of the GPL.

Anyway, this got me thinking about how in node-land, the common pattern is many many small modules. Indeed, if I install the top 60 most popular npm modules I get about 3000(!) dependent modules including dupes. With so many modules it would be very easy to accidentally include a GPL library somewhere. If you did that, your npm module is (according to the FSF) in violation of the GPL and therefore (according to the FSF) in violation of international copyright law and therefore (according to the FSF) illegal.

I thought it would be fun to find out how many people are breaking Stallman's intergalactic copyright law, so I quickly grabbed the most starred modules and installed them. Npm is actually a couchapp so this was pretty easy to do.

$ wget '' -O starred.json
$ coffee -e "console.log require('./starred').rows.sort((a, b) -> a.value - b.value).slice(-60).map((x) -> x.key[0]).join('\n')" | xargs npm install

Then I waited for a very long time.

For the next step I used the awesome licensecheck module (don't worry, it's not GPL - you can visit that page without creating a derivative work). Many npm modules don't include license information in their metadata, because programmers are lazy, so it employs various sophisticated techniques to figure out and normalise the licenses into a consistent output. And I got back this:

$ licensecheck --tsv | awk '{print $3}' | sort -k1 | uniq -ci | sort -n | tail -12
   2 BSD*
   2 WTFPL
   2 WTFPL2
   3 AGPLV3
   4 BSD-like
   5 Do
  15 unmatched:
  41 Apache
  51 ISC
 160 BSD
 874 MIT

Luckily, it seemed like there were no hidden time bombs deep in the dependencies of the most popular projects. The three AGPLV3 entries that turned up are all part of the fairly popular pm2 project. But if it's popular that probably means there are other things depending on it...

$ wget '[%22pm2%22]&endkey=[%22pm2%22,{}]&reduce=false&include_docs=true' -O depends_on_pm.json
$ coffee -e 'console.log require("./depends_on_pm").rows.length'
26? Let's hope all of them are GPL! Especially because, if they aren't, any recursive dependents would also be in violation, probably without even realising it.

$ coffee -e 'console.log require("./depends_on_pm").rows.map((x) -> "#{x.id}: #{x.doc.license || x.doc.versions[x.doc["dist-tags"].latest].licenses?[0].type}").join("\n")'
anthtrigger: MIT
bosco: MIT
bute: MIT
debian-server: MIT
diy-build: BSD
ecrit: MIT
ezseed: GPL
foxjs: MIT
g-dns: ISC
gatewayd-4: undefined
gitbook2edx-external-grader: BSD
hls-endless: MIT
hubba: undefined
itsy: MIT
lark-bootstrap: MIT
nodemvc: MIT
npm-collection-explicit-installs: MIT
nshare-demon: MIT
pm2-auto-pull: ISC
pm2-plotly: MIT
pod: undefined
radic: MIT
tesla: MIT
wordnok: MIT
yog-pm: BSD
zorium-paper-demo: undefined

...Oh dear.

Github Automaintainer

automaintainer sketch

One thing I've always had trouble with is the long tail of Open Source projects. For the larger, more popular projects it's usually fine: there's an established community, and some particularly organised and motivated maintainer has usually stepped forward. But for smaller ones the effort of maintaining a project seems disproportionately high compared to the effort of making the project in the first place. Who wants to review pull requests or build a community for some 50-line thing you threw together one time?

I think a great way to solve this would be some kind of Github Automaintainer. You would authorise it to access your repository and it would read a special file configuring the rules by which it should maintain your project. Rules would look like:
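For instance, something like this. Every rule name, threshold and config key here is invented on the spot, and I'm sketching it as JavaScript data rather than whatever file format a real automaintainer would actually use:

```javascript
// Hypothetical rule format for an imagined "automaintainer" bot.
// None of these keys correspond to real GitHub APIs; it's just a sketch
// of the kind of declarative rules the config file might contain.
const rules = [
  { if: { approvals: 2, ciPassing: true }, then: "merge" },
  { if: { daysIdle: 30 }, then: "close" },
  { if: { label: "bug", daysIdle: 7 }, then: "ping-author" },
];

// Decide what to do with an issue/PR: first rule it satisfies wins.
function decide(item) {
  for (const rule of rules) {
    const c = rule.if;
    if (c.approvals !== undefined && (item.approvals || 0) < c.approvals) continue;
    if (c.ciPassing !== undefined && item.ciPassing !== c.ciPassing) continue;
    if (c.daysIdle !== undefined && (item.daysIdle || 0) < c.daysIdle) continue;
    if (c.label !== undefined && !(item.labels || []).includes(c.label)) continue;
    return rule.then;
  }
  return "nothing";
}
```

Since rules are checked in order and the first match wins, maintainers could express priorities just by reordering the file.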

Though obviously those numbers and even the rules themselves would be customisable.

I think a lot of smaller projects could get away with nothing but that kind of minimal rule-based organisation, and it would be easier for everyone. Maintainers wouldn't have to do much work to maintain the project, and the explicit rules would make it easy to understand how to become a contributor to the project and how commits are approved.

And who doesn't like knowing their project is managed by a robot?

Computers are special

Macman eating worlds

A topic came up in conversation today that I think is really important: what's so special about computers? It's often said that software is eating the world, meaning that every business, every process, every aspect of our lives seems to involve computers in some way. We talk to friends on computers, spend money with computers, and the few places where computers aren't being used much are just waiting targets for software businesses. Indeed, it seems like the dominant startup model at the moment is to find an industry that doesn't use enough computers and make it use more.

Okay, but why couldn't medicine be eating the world? Or insurance? Or internal combustion engines? What's so special about computers? Well, I think the best way to answer that is to look at another world-eating invention: currency. Trading existed before currency through barter and gifting, but the idea of currency fundamentally changed trading because it acted as an abstract unit of value. You don't have to trade a bushel of corn for a bunch of bananas, you trade a bushel of corn for 20 units of value, which you can turn back into a bunch of bananas.

In practice, that doesn't sound particularly special: you're still trading corn for bananas. But having an abstract unit of value involved gives rise to all the other possibilities of the modern economy. Savings, loans, interest, insurance, investment, taxation: none of these would work well, or at all, without it. Currency is special because it created the abstract unit that lets us connect everything else together. Of course, currency itself is based on an even more important abstraction: numbers, an abstract unit of information.

Computers create a similar abstract unit, but a much more powerful one: an abstract unit of operation. That is to say, anything that can be done can be done by a computer (with only a few fairly obscure exceptions). Of course, to begin with most of those things are pretty mundane. I could write a document already, now I type into a computer and the computer writes the document. But, in exactly the same way, having that abstract unit of operation has transformed everything we do, and will keep transforming it into the future.

So while it might be tempting to think there'll be a next big thing and computers will stop being such a big deal, it just isn't possible. They may change shape or be made from different things, but the abstract unit they embody will never stop being relevant. Computers are special because they are the abstraction of doing things, and there will always be things to do.

Groot is risen

chia Groot with long planty hair

Quick update on Chia Groot. He's been growing for a week now and looking amazing.



I've been pretty busy the last two weeks, and I think my creativity has suffered as a result. Being busy sounds very productive, but on reflection I think it is a signal that your time is being badly managed. The feeling of busy-ness is a feeling of stress, a kind of animal urgency. It stops you from seeing beyond your immediate situation and engaging your higher-level thinking. I think if you want to be creative you need to cultivate the opposite of busy-ness: space.

Space is that feeling when you aren't reading, watching or catching up, not doing or going or working. Space is when the external fades away and you're left alone with your thoughts, to wander where they lead and explore with the freedom of a child. It's very easy, and very tempting, to fill every second with stuff, but if you can make time for space I think you'll start to hear some quiet voices that were drowned out by the noise.

I think I'm going to start going for more walks.

From how to what


Something I've noticed is that as technology begins to mature, the particular internal details become less important. As the technology becomes sufficiently advanced, it becomes completely invisible. At that point, we don't feel like we're interacting with technology. Rather, we're working with its consequences directly. In other words, technology makes the transition from how to what.

Software is still very early on that journey, but there are a few places where that transition has begun. Dropbox is an early example whose success is, I think, fairly misunderstood. Dropbox doesn't succeed because it's simple, but because it's invisible. You just use your files like normal and they appear everywhere. How? Doesn't matter. You only have to think about the what: your files.

On the other hand, the technology for connecting with people is nightmarishly how-centric. The blame for this mostly lies at the feet of the dominant user-capture strategy in social startups, but the number of different ways you can send a message, share a picture, post an update or make a call is completely dumbfounding. The most damning result of this is Android's Share menu, a giant grid of "how" options that comes up before you even get to pick who you want to share with.

I think the first technology to make all of that invisible and let you just focus on the people, rather than how you talk to them, will do very well for itself.

A slave to biology

DNA bars

It strikes me that in a very real sense we are slaves to our biology. What I mean by this is that because our bodies require a constant input of resources for maintenance, we are indebted to that requirement. This puts us in a kind of debt bondage: we have to work to pay off that biological debt, and the penalty for non-payment is death.

For that reason, the notion of wage slavery seems short-sighted to me; it's not wages that are responsible, it's our biological needs that give power to those wages. If some freak of genetics caused you to be biologically self-sufficient - powered by the sun or some such - you would have no particular need to work. The idea of wage slavery under those circumstances would seem very far-fetched.

Indeed, if you earn enough money you can reach a point where you have enough to cover your body's needs for the rest of your life. I would say that at that point you have purchased your biological freedom. Though obviously some people don't earn enough to do that until very late in life, and some people inherit resources enough to be born free.

To accept this view is to accept the notion of biology as unfair. Why should we be born into a world with this debt attached to us? Wouldn't it be more just if we could start with a blank slate? And indeed to some extent many countries accept this in the particular subset of medical needs. We acknowledge the unfairness that some people are born with or acquire expensive medical problems that drastically increase their biological debt, and we pay to free them of that debt.

But you have to ask, isn't anything that causes you to get sick and die a medical problem, even if it affects everyone? And shouldn't it be a priority for us as a species to remove that burden?



I've been working on a project to do EEG visualisation lately. It's a bit of fun, but I'm way further down the rabbit hole than I ever expected to be re: signal processing. I think FFTs are dark voodoo, so having to become familiar with Welch's method for Power Spectral Density estimation is a bit of a shock to the system.

Two things have been a big surprise and enormously helpful so far. Firstly, the sheer ridiculous comprehensiveness of Python's scientific and maths libraries. You've heard you need a window function but you're not sure what that is? Don't worry, scipy has 18 to choose from.

The second is 0mq. I wanted to run something over websockets but Python's threading/event model stuff was giving me the crankies. I whipped up a Node thing in 5 minutes and hooked up 0mq in even less time. Since then I've basically pulled everything out into smaller processes loosely coupled with pub/sub. It feels good.

I'm still a little daunted by the crazy pile of brain stuff there is to figure out, but at least I know that all my problems can still be solved with service-oriented architecture. And Welch's method for Power Spectral Density estimation.

Charity as arbitrage

exchange rate

I had an interesting thought occur to me on a walk the other day. While most charity is framed as an act of altruism, it's something we only tend to do when there's a significant difference in value between the good for you and the good for the person you're giving it to. Measured in currency, if you give someone $10 they're $10 better off, and you're $10 worse off. But measured in utility that $10 might be worth a lot more utilons to the homeless guy you give it to than it was to you.

A reasonable measure of altruism would be an exchange rate from your utilons to someone else's. An exchange rate of 1 would mean someone else is exactly as valuable to you as you are. That would imply strange things like not minding if someone steals something of yours as long as it gives them the same utility as it gave you. A number like 0.5 would mean you are willing to sacrifice 50% of what you want to give someone else what they want. In practical terms, I think most people would have a number fairly close to zero. Plenty of people give to charity, but few are willing to sacrifice much to do it.
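To make the "utilons" claim concrete, here's a toy calculation. It assumes logarithmic utility of wealth, which is a standard economist's simplification rather than a fact, and all the dollar amounts are made up:

```javascript
// Toy model: assume utility of wealth is logarithmic (a common but
// entirely assumed choice). Then the utility gained or lost from $10
// depends heavily on how much you already have.
const utility = (wealth) => Math.log(wealth);

// Marginal utility of receiving `amount` on top of existing `wealth`.
const gain = (wealth, amount) => utility(wealth + amount) - utility(wealth);

const donorLoss = gain(100000, 10);  // what $10 is worth to someone with $100k
const recipientGain = gain(50, 10);  // what $10 is worth to someone with $50

// The same $10 buys vastly more utilons at the bottom than it costs
// at the top.
const ratio = recipientGain / donorLoss;
```

Under those (entirely assumed) numbers the recipient gains on the order of a thousand times the utility the donor loses, which is why even a near-zero altruism exchange rate still permits giving.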

So how does charity work in a society with such low altruism exchange rates? Ultimately, it relies on arbitrage. Charities act as middlemen between people with a lot of money (where it's not worth much) and people with very little money (where it's worth a lot). They can exploit that difference to do a trade at the point where the money exchange rate matches the utility exchange rate.

I wonder how hard that would be to measure...

Voiding some warranties

exchange rate

I was trying to figure out what to do with two broken Nexus 5s - one of them had no wifi and the other had a broken power button and a cracked screen - and I finally just opened them up out of desperation. It was actually nowhere near as bad as I expected and I ended up making a fairly respectable phone out of the parts.

For some reason I tend to assume there's no point in trying to repair modern electronics, what with the whole "not user serviceable" thing and Appleisation of hardware. But you can do anything a technician at some repair centre can do, if you're willing to learn. I was surprised to find that there's actually a lot of resources out there on repairing modern smartphones, where to order replacement parts and so on.

Maybe it's a bad habit taking "that's a hardware problem" to mean "and therefore not my problem". Every time I've made it my problem it's been a lot less hard - and a lot more fun - than I expected.

Ghosts: the game


I went to a launch party for my friend Malcolm's zombie card game, and it got me thinking about tabletop game ideas. There's one that's been floating around in my head for a while based on two important concepts: demonic possession and recursion. I call it Ghosts.

The basic idea is that players are characters in a semi-scripted scene. They get a character card with various facts about themselves, and a list of goals like "get out of the office without the boss asking you to stay late", or "ask Annie from accounting on a date". The base level of the game is that you act out the scene trying to achieve your goals (though obviously some of the goals will conflict). The game plays out in turns, with each turn involving a series of conversations between characters.

However, the scene is also haunted! Some players begin as ghosts, but you also become a ghost if your character leaves the scene for any reason. Ghosts have goals much like characters do, but their goals are in terms of other people in the game, eg "help someone achieve their goal", "prevent someone achieving their goal", "cause as many people as possible to hate each other", etc.

At the start of each round the ghosts choose who to haunt. Haunting means that when a player is about to speak in a conversation, the ghost can tell that player what to say instead. Each person can only have one ghost haunting them at a time. However, ghosts can haunt other ghosts: a friendly ghost can haunt a friendly ghost that haunts a character, and all three of them will succeed at the same time. A malevolent spirit could haunt a friendly ghost who is haunting a character, and ruin twice as many goals.

The goals of the characters and the ghosts would be secret except to the person haunting them. That would mean that over time the ghosts could build up knowledge of who wants what, but the characters would stay in the dark and have to trust in friendly ghosts to help. As characters satisfy their needs and leave the scene, there would be more and more ghosts vying for control and the scene would devolve into chaos. The game is scored at the end by how many goals you achieved.

Obviously there's a lot of tweaking needed to make sure it's fun, but I think it has a pretty appealing mix of fun hammy acting, intrigue and strategy. And, most importantly, spooky ghost hijinks.

Fail scale

fail scale

An interesting idea I heard recently is "near miss reporting". Instead of doing failure analysis as part of the reaction to a failure, it's much better if you can start that analysis before the failure has actually occurred. I think there is something really compelling about taking failure from a binary to a scale. Maybe it wasn't a failure exactly, but "not quite as much success as you'd like" is still a valuable signal.

It also makes for an interesting way of quantifying your own goals. If you think in terms of simple pass/fail, you're wiping out a lot of nuanced information. How much did you succeed? If you just scraped over the line, that's not the same thing as succeeding easily. For anything you want to be able to do consistently, like a skill or a habit, relaxing before you hit the comfortable win stage may well mean you slip back into failure when you stop focusing.

It might even be worth considering at what point a win is too easy - perhaps if you go all the way into the green zone it's worth taking on a harder challenge or spending less effort next time.

Anonymity and democracy

troll masquerade mask

It's well known that many online communities start to get bad as they grow. There are a lot of theories on why this is; some of my favourites are Gabe's Greater Internet Fuckwad Theory and evaporative cooling. I think there is also an underexamined root cause, which is also the main assumption underlying democracy. The assumption is that people have value.

I should be clear: I think people have value. It's a good assumption for democracy, but because democracy is the social system we're most familiar with, we tend to carry its assumptions over into the design of other systems. Large online communities struggle to deal with the weight of non-valuable people: trolls, spammers and other undesirables. And you can never really get rid of them. Anyone you ban can make a new account. And even if you could get rid of them permanently, there would be some new troll basically identical to the last.

Indeed, maybe the point is that there's no difference; a new user is a new identity, and one of the wonderful things about the internet is not being limited to one fixed identity that's often assigned by others. In the future I think we'll reach a point where it's common to have many identities for different communities and different purposes. We'll swap identities as quickly as a change of clothes, adopting new ones when the old ones don't suit, for playing a particular role or just for fun. As technology improves there will even be autonomous identities that can, themselves, spawn more autonomous identities.

How can we possibly create a social system to handle that? The Facebooks and Googles of the internet are obsessed with keeping our online identity tied to our real-world identity because they know that's the only way to make the people-have-value assumption hold true online. But what if we just abandon it? I think we could build richer systems on the assumption that identities are not valuable. Instead, they have to earn value.

Many communities already do this to some extent. On Hacker News, certain features unlock as you gain more karma. On Reddit, repeated successful posts make you less likely to be caught by the spam filter. But you could go further and make a community where posting is not allowed until you demonstrate value in some other way, like voting on other new posts. Google's Pagerank is another interesting example. It assumes new websites have basically no value. Instead, they gain value by being referenced by other sites. You could have a similar social system that passes value by endorsements.
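As a sketch of what "earning value by endorsement" might look like, here's a minimal PageRank-style iteration over identities. The damping factor, user names and endorsement graph are all arbitrary; this is an illustration of the shape of the idea, not a proposal:

```javascript
// Value flows along endorsement edges, PageRank-style: new identities
// start with almost nothing and accumulate standing only by being
// endorsed by identities that themselves have standing.
function endorsementRank(endorsements, users, iterations = 50, damping = 0.85) {
  let rank = Object.fromEntries(users.map((u) => [u, 1 / users.length]));
  for (let i = 0; i < iterations; i++) {
    const next = Object.fromEntries(users.map((u) => [u, (1 - damping) / users.length]));
    for (const u of users) {
      const targets = endorsements[u] || [];
      for (const t of targets) next[t] += (damping * rank[u]) / targets.length;
      // Users endorsing nobody spread their value evenly (standard dangling-node fix).
      if (targets.length === 0) for (const t of users) next[t] += (damping * rank[u]) / users.length;
    }
    rank = next;
  }
  return rank;
}

// alice is endorsed by two people; troll endorses nobody and is endorsed by nobody.
const users = ["alice", "bob", "carol", "troll"];
const rank = endorsementRank({ bob: ["alice"], carol: ["alice"], alice: ["bob"] }, users);
```

A fresh identity like troll starts near the floor and stays there until someone with standing endorses it, which is exactly the property you want.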

There are also other solutions, usually suggested to fight spam, where new identities or actions (like sending an email) have a monetary cost, thus proving a certain degree of real-world value. While that's definitely better than being tied to a particular real-world identity, I think money is not a great analogue for value in many communities. A determined evildoer can afford to just trade money for mischief. A better way would be to allow users to earn value within the community itself. Why bother to make an identity to do harm if you first have to do at least as much good?

Even if this is already happening in some ways, it's important to recognise the trade-off in question: you can't have identities be free and inherently valuable at the same time. The true power of virtual identity has yet to really come into its own, and I don't think it will until we are willing to sacrifice that inherent value.

Voice Print

voice print sketch

I was thinking today about conversations. I've definitely had the experience before of being in a conversation where the other person just talks and talks and you can't get a word in. But I've also had that feeling the other way, where I start to wonder if I've just been dominating a conversation with my thoughts and ideas. Obviously, I can just ask people I talk to, which is helpful to some extent, but in a sense I'm just swapping my opinion about the content of the conversation for theirs.

Since I like data, maybe it's a problem that can be solved with data. So here's an idea for an app called Voice Print, which allows one or more people to train it to recognise their voices. You then leave it to record, and it keeps track of who's talking and how much. At the end of the conversation you can get a summary with all sorts of neat information: who talked the most, who tended to respond to whom, how long the gaps in conversation were, how long did each person wait in a gap, how much quiet was there during the conversation. So much data.
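The summarising part, at least, is trivial once the voice recognition has labelled who spoke when. Assuming segments of `{ speaker, start, end }` in seconds (a format I've just made up), the stats fall out of a single pass:

```javascript
// Given labelled speech segments, tally per-speaker talk time and the
// silent gaps between consecutive segments.
function summarise(segments) {
  const talkTime = {};
  const gaps = [];
  const sorted = [...segments].sort((a, b) => a.start - b.start);
  for (let i = 0; i < sorted.length; i++) {
    const s = sorted[i];
    talkTime[s.speaker] = (talkTime[s.speaker] || 0) + (s.end - s.start);
    // Record a gap whenever this segment starts after the previous one ended.
    if (i > 0 && s.start > sorted[i - 1].end) gaps.push(s.start - sorted[i - 1].end);
  }
  const totalSilence = gaps.reduce((a, b) => a + b, 0);
  return { talkTime, gaps, totalSilence };
}

const summary = summarise([
  { speaker: "me", start: 0, end: 10 },
  { speaker: "you", start: 12, end: 15 },
  { speaker: "me", start: 15, end: 40 },
]);
```

Who responded to whom and how long each person waits in a gap are the same kind of bookkeeping over the sorted segments.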

Of course, you could do the same thing manually by just recording your conversations, but, ugh, manual labour. Plus I imagine people would be much more squeamish about you recording everything they say than just tracking the general flow of the conversation.

Single page screenshot

I've always wanted to make one of those cool single-page websites like or Today, I got my chance.

I keep missing or nearly missing SydJS because I forget when it's on. They have an app thingy but the push notifications usually go out a day or two before, which is sometimes too late for me to plan around it. This month I'd agreed to something else and then had to cancel, which was pretty frustrating. I figured I should channel that frustration into something useful.

So, I'd like to present When the fuck is SydJS?. It does roughly what it says on the tin. As a bonus, it also provides an iCal feed so you can get reminders. I'm hoping to keep the data up to date via crowd-sourcing - it's on GitHub.

Context table

context table

Following on from my other voice idea, it would be kind of fun to make a context table. That is, a device that listens to your conversation, pulls out keywords and searches for them, displaying a kind of passive live-updating screen with information relevant to the context. You'd need to do a bit of processing to figure out what words tend to be searchworthy (proper nouns, stuff outside the top N most common words etc), but it wouldn't have to be terribly accurate to work.
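The keyword filter could start out embarrassingly crude and still be useful. Here's roughly what I mean, with a toy stopword list standing in for a real top-N frequency list:

```javascript
// Tiny stand-in for a real "top N most common words" list.
const common = new Set(["the", "a", "an", "to", "of", "in", "is", "and",
  "that", "it", "on", "you", "for", "we", "was", "i", "about", "so"]);

// Keep uncommon words, plus capitalised words mid-sentence as likely
// proper nouns.
function keywords(sentence) {
  const words = sentence.match(/[A-Za-z']+/g) || [];
  const found = [];
  for (let i = 0; i < words.length; i++) {
    const w = words[i];
    const properNoun = i > 0 && /^[A-Z]/.test(w); // capitalised, not sentence-initial
    if (properNoun || !common.has(w.toLowerCase())) found.push(w);
  }
  return found;
}
```

Feed the output into searches and you've got the guts of it; everything else is presentation.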

In fact, most of what you'd need to build a simple web version is just the Web Speech API generating AJAX requests to Wikipedia's API.

Obviously, the coolest thing would be if you could then turn that into a table that automatically displays relevant things to what you're talking about but, hey, gotta start somewhere.

Eat Your Vegetables

Today the topic of unpopular services being forced on you by popular services came up. Like Apple with iTunes Ping, Facebook with Messenger and, of course, Google+.

Obviously, the temptation to use one of your popular services as a lever to get people interested in an unpopular service makes sense, especially if you think people will come around once they give it a try, but the danger is that you end up suffering from what I call the Eat Your Vegetables problem.

These companies obviously think they're giving people something that will be good for them in the long run, like a delicious plate of greens. Problem is, before you've eaten vegetables you don't know whether you'll like them, so you take your cues from how they're presented. "Eat your vegetables or you won't get dessert!" is a strong signal that vegetables aren't tasty enough to stand on their own.

Sometimes a new service genuinely just isn't very good, in which case it was probably dead, vegetables or not. But sometimes it can make things worse for a decent product. By all accounts, Facebook Messenger is fairly good, but there was still a big backlash when users were forced to install a separate app. Of course, Facebook also owns WhatsApp and Instagram, but nobody's complaining about having to install them. That's the power of not having to Eat Your Vegetables.

Stack trends

stack trends

I've been running the "which web framework is everyone using now?" gauntlet again for a project, and finding the whole thing pretty exhausting. I'm sure at some point we'll figure out the right answer and stop making new web technology stacks, but not any time soon. While it's nice to see so much developer effort going into making the web better, it is tough to stay on top of.

What would be really helpful is a website to catalogue and track the stack trends in web development. For example, one of the biggest up-and-coming frontend stacks is Webpack + Flux + React. I would call that the early-2015 trend. The 2014 equivalent is Browserify/Gulp + Angular + Bootstrap. The 2013-ish era was Grunt + Backbone + jQuery. I think there are a few candidate late-2015 trends on the rise. Maybe Om or one of the Web Component-based systems (Polymer seems like the front-runner but I quite like the hybrid model of RiotJS).

A lot of this you can figure out by just paying attention to what you read on Hacker News or hear recommended by other developers, but it sure would be nice to have a concrete reference for the generations of web technology. Often each generation is a refinement or a reaction to technologies from the previous generation, so you can learn a lot from finding out why those technologies changed. Different components of the stack usually complement each other, too - advances in one area lead to new tools in another.

And aside from being able to share a common language of stack trends, probably the greatest benefit would be just having a single place to look for a complete list of new hotness to evaluate when starting a project.

Friend Compass

friend compass

Here's a nifty idea I had a while back. I've occasionally had that issue where you're meeting up with someone and can't figure out where they are, particularly at big events. Sometimes there's a landmark we can refer to, but other times I really just want something that can just tell me which direction to walk in.

My idea for solving that is called Friend Compass. It just displays a simple compass but instead of (or, I guess in addition to) showing you which way is north, it shows you which way someone else with the app is. You pick a friend from your contact list, they get a message asking them to install/open the app if they want to be found. Once the pairing is done both apps would show where the other is relative to you.
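The maths at the core is just the standard initial-bearing formula between two GPS fixes; everything else is plumbing. Something like:

```javascript
// Great-circle initial bearing from point 1 to point 2,
// coordinates in degrees, result in degrees clockwise from north.
function bearing(lat1, lon1, lat2, lon2) {
  const rad = Math.PI / 180;
  const [p1, p2, dl] = [lat1 * rad, lat2 * rad, (lon2 - lon1) * rad];
  const y = Math.sin(dl) * Math.cos(p2);
  const x = Math.cos(p1) * Math.sin(p2) - Math.sin(p1) * Math.cos(p2) * Math.cos(dl);
  return ((Math.atan2(y, x) / rad) + 360) % 360;
}
```

The app would then just point its arrow at the friend's bearing minus the phone's own compass heading.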

There might be some tricky stuff to get good results in crowded areas with bad GPS reception, but hopefully with not too much effort Friend Compass could eliminate the entire class of "Where are you? I thought you said near the tree. No, I meant the other tree!" problems for good.

Everything is dollars


There's a great trick I used to use when doing unit conversions using Google's calculator. It's great for asking natural questions in units it knows about, like how long it would take to download 1 gigabyte on dialup or how many years before the Opera House is underwater. But sometimes you have an important question where you want to refer to a unit it doesn't know about, like how many sneezes it takes to fill the Goodyear Blimp.

It turns out the answer is 4.78 million dollars or, uh, 4.78 million sneezes. Dollars make a great "generic stand-in" unit. They're readily available, easy to remember, and don't accidentally cancel with any other units, leaving you with some funny number in megahertz per square nanofarthing or something. Unfortunately, at some point Google added the "unit" unit, which works even better but doesn't feel as clever.

But something occurred to me along the way: the dollar really is the perfect everyunit. There isn't a good way to convert, say, degrees celsius into metres, or seconds into kilograms. But there are tonnes of great ways to convert degrees, metres and seconds into dollars. I've sometimes run into the issue of explaining or justifying decisions to people who have decision-making oversight but very little knowledge of the decision domain. Or, to put it another way, they have to check my numbers but they don't know the units.

Maybe you start off explaining that bigger monitors make programmers more productive, or that you need to buy better servers to keep the site up. But as soon as you start talking quantities you immediately run into lots of domain-specific units. How big are these monitors? How much more productive? What's the productivity per inch? You want me to pay x dollars for y servers with z IOPS? What's an IOPS again?

The dirty secret is that you can save yourself a lot of trouble if you can convert everything into dollars and cancel out the units. Figure out dollars profit per IOPS, multiply through by IOPS per dollar cost. You end up with the wonderfully unitless dollars per dollar, which is very easy to reason about.
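Worked through with some invented numbers for the server example:

```javascript
// All three figures below are made up for illustration.
const profitPerIops = 0.50;   // dollars of profit per IOPS
const serverCost = 2000;      // dollars per server
const iopsPerServer = 5000;   // IOPS per server

// Convert everything to dollars and the IOPS cancel out.
const iopsPerDollar = iopsPerServer / serverCost;       // IOPS per dollar spent
const dollarsPerDollar = profitPerIops * iopsPerDollar; // profit per dollar spent

// dollarsPerDollar = 1.25: every dollar spent on servers returns $1.25.
```

If dollars per dollar comes out above 1, the purchase justifies itself in terms anyone can check, no knowledge of IOPS required.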

Everything is dollars. All praise the everyunit.

3D unprinter


There's been a lot of buzz about 3D printing - justified, in my opinion, but maybe not for the reasons people think. Obviously, additive manufacturing opens up a lot of possibilities that the subtractive equivalent isn't capable of. But actually I think the community and the attitude that has appeared around 3D printing is more important than the technology itself.

With a little effort, we could have the same revolution in CNC routing. Machines right now tend to be big and expensive, and they don't have the same DIY community support, but there's no reason that has to be the case. We could make DIY router parts out of other parts, share designs, interoperate with standard model files and design better software for novices to use.

If we did this, I think it would become much more obvious that the glorious future of home DIY manufacturing can't be realised with 3D printing alone, for the very same reason it was invented in the first place: some things are easier to create additively, and some subtractively.

Maybe at some point we'll even have a desktop-sized combined printer/cutter. That would really be something.

Mood days

mood days

Here's an interesting idea I've been trying to figure out what to do with: it seems like some days have particular moods. Maybe because of the things you have planned that day, or other things that happened recently, or just your general mood taking form in what you do. Some days seem very productive, other days don't. Some days seem energetic, others are relaxed.

I've noticed that on the days I know are going to be very distracted, there seems to be very little point in fighting it and trying to focus. I've previously gotten quite frustrated trying to fit in an hour of focused thought between visitors and social events. The solution was presented to me by a friend: don't fight mood days, embrace them. If it's going to be a crazy distracted day, use the opportunity to knock off a lot of small tasks that might otherwise disrupt a perfectly good focused day.

Of course, the flip side is that if you're feeling particularly calm and focused, that's a bad time to start ticking off little to-dos or answering emails. And there's obviously some degree to which you can influence the mood of a day, especially if you have a lot of time to play with. But often you don't, and I think learning how to make the best of what the day has to offer is a big improvement over trying to swim against the tide.



I've been trying to get a bit more serious about self-measurement lately, but I keep running into the complete lack of decent tooling. I have a few projects in the works that might make things a bit easier, at least for me, but all the same it's strange that the state of things is so primitive. It's particularly puzzling because so many nerds I know seem to be really interested in it.

Honestly the only explanation that seems to make sense is this: Firstly, most people only care about tracking calories. Secondly, most people outside the first category don't have the resources to do anything but try to subsist on an ungodly diet of spreadsheets and IFTTT. Thirdly, the tiny remainder of people who do care and can do something are all falling over each other in a manic land grab for a platform to own and sell literally every piece of information ever recorded about you. If you listen carefully at night you can hear them salivating all across the Valley.

Anyway, none of the purpose-built stuff seems to suit, so I'm investigating what I can get done with my old standby, the devops/monitoring toolchain. It's massively overpowered for this, but, hey, better than underpowered. The graph pictured here is Grafana plus InfluxDB showing a plot of my working hours per week by category.
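For the curious, getting data into InfluxDB is mostly a matter of writing lines in its "line protocol" format and POSTing them at the write endpoint. Here's a minimal sketch of building such a line; the measurement and tag names are invented, not the ones I actually use.

```python
# A minimal sketch of an InfluxDB line protocol record. InfluxDB's write
# API accepts lines of the form:
#   measurement,tag=value field=value timestamp
# The measurement/tag names below are hypothetical examples.

def line_protocol(measurement, tags, fields, timestamp_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = line_protocol("work_hours",
                     tags={"category": "creative"},
                     fields={"hours": 12.5},
                     timestamp_ns=1434499200000000000)
print(line)
# work_hours,category=creative hours=12.5 1434499200000000000
```

Once the data's in, Grafana does the rest.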

I'm pretty impressed at how much insight I could get from that relatively quickly. The green line is me slowly ramping up my creative output and backing off last week because I overdid it a bit. The yellow line spiked up because I wanted to spend more time walking and thinking, and you can see the blue line hovering around 5-10 hours, which is mostly spent writing here.


I had an interesting thought occur to me today. In the spooky post-Edward Snowden world, the holy grail is to make web services that don't actually have to have access to your data. For example, Google Chrome's syncing feature has optional encryption that prevents Google from being able to see your browser history. It's encrypted with your password on your computer and then stored on their servers; they can't decrypt it. It's very difficult right now to do that for anything more complex but, ideally, we'd be able to have that same level of security everywhere.

And that's the interesting part - the thing we want now is the very same thing that the copyright industry has been trying to invent for decades: DRM. We want to have web services that do things with our content, but only certain things that we specify. And, much like traditional DRM, this cloud DRM would require some kind of enormous technological breakthrough to get off the ground.

Interestingly, something like that is on the horizon. Fully Homomorphic Encryption, or FHE, is basically a system that lets arbitrary computations run on encrypted data without ever decrypting it. It's pretty fringe stuff and right now the performance is way too bad to be practical, but the theory is there and it's probably a matter of time before it happens. There are also sort of ghetto-homomorphic systems like CryptDB that use clever hacks to store the encrypted data in a form that still supports certain operations without doing proper FHE.
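To give a flavour of computing on encrypted data: textbook (unpadded) RSA happens to be multiplicatively homomorphic, meaning you can multiply two ciphertexts and get a valid encryption of the product. This is a toy, not how CryptDB or real FHE works, and unpadded RSA is insecure in practice, but it shows the core idea.

```python
# Toy demonstration of partial homomorphism: textbook RSA ciphertexts
# can be multiplied without decrypting. Unpadded RSA is insecure; this
# only illustrates the idea of computing on encrypted data.

p, q = 61, 53
n = p * q              # 3233, the public modulus
e, d = 17, 2753        # public and private exponents for this n

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = encrypt(7), encrypt(6)
product = (a * b) % n          # multiply the *ciphertexts*
print(decrypt(product))        # 42: Enc(7) * Enc(6) decrypts to 7 * 6
```

Whoever multiplied those ciphertexts never saw a 7, a 6, or a 42.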

All that aside, though, the really scary thing to consider is that any progress in this field is simultaneously progress in online privacy and in DRM. With each step we come closer to a world where our personal data is safe on other computers, but content on our computers can be similarly kept safe - from us.

Super Inceptagon


I played a bit of Super Hexagon for the first time in probably a year or two. It was frankly a bit scary how easily it came back. I completed the "Hardest" difficulty in a few tries (there's also hardester, hardestest and hardestestest). Strange to think that there must be some small part of my brain dedicated to high intensity hexagon-related activity.

Anyway, it got me thinking about sensory bandwidth. One of the unique things about Super Hexagon is that the game is so difficult and fast that you very quickly feel like you're pushing up against your own processing limits. On the harder difficulties, any hesitation between seeing the state of the screen and pressing the keys on the keyboard is instant death. The time it takes to recognise objects visually becomes a significant bottleneck.

So what if you could add a few more forms of sensory bandwidth? I'd love to see a version of Super Hexagon where the state of the screen is reflected in sounds as well as visuals or, better still, touch. Maybe the combination of all those forms of information would make the game trivial. Or maybe the real problem is making sense of the information quickly enough, and more of it wouldn't help.

Either way, it'd be fun to find out.


Today I'm releasing Tabletone, a Javascript library for making live looping music grids like the above. It's super easy to use: you just write some custom HTML tags and lay them out the exact same way as a table. Here's the markup for that grid:

    <tt-cell pulse src="reapm-01.mp3"></tt-cell>
    <tt-cell pulse src="reapm-02.mp3"></tt-cell>
    <tt-cell pulse src="reapm-03.mp3"></tt-cell>
    <tt-cell pulse src="reapm-04.mp3"></tt-cell>
    <tt-cell pulse src="reapm-05.mp3"></tt-cell>
    <tt-cell pulse src="reapm-06.mp3"></tt-cell>
    <tt-cell pulse src="reapm-07.mp3"></tt-cell>
    <tt-cell pulse src="reapm-08.mp3"></tt-cell>

Under the hood it's a motley combination of components from the Web Audio API, including AudioBufferSourceNodes for looping and AnalyserNodes for pulsing. I also got a chance to try out the new Fetch API which is actually really nice.

So if you've ever been thinking about putting some live loops on your webpage - now's the time!

Less is Moore

Simple problem

My experiments with concatenative programming have led me to learn a bit about the philosophy of Chuck Moore, creator of Forth and all-round programming language badass. What makes him such a powerful and strange force in the programming world is his opposition to generality in software design. While the dizzying heights of the Java 90s taught us all that there can be such a thing as too much abstraction, Moore sits at the exact opposite end of the spectrum: not an architecture astronaut, but a software ascetic.

I think the canonical reference for his philosophy is an unpublished book available on his website called Programming a Problem-Oriented Language. In the first chapter he defines and expands on the "Basic Principle": Keep it Simple. Some quotes should reveal what I mean about asceticism:

As the number of capabilities you add to a program increases, the complexity of the program increases exponentially. The problem of maintaining compatibility among these capabilities, to say nothing of some sort of internal consistency in the program, can easily get out of hand. You can avoid this if you apply the Basic Principle.
Do not put code in your program that might be used. Do not leave hooks on which you can hang extensions. The things you might want to do are infinite; that means that each one has 0 probability of realization. If you need an extension later, you can code it later - and probably do a better job than if you did it now.
The conventional approach, enforced to a greater or lesser extent, is that you shall use a standard subroutine. I say that you should write your own subroutines. [...] Although it takes hundreds of instructions to write a general-purpose subroutine, you can do what you need with tens of instructions. In fact, I would advise against writing a subroutine longer than a hundred instructions.

In a more general sense, Moore's philosophy rejects frameworks and other overgeneralisations. Computers are general-purpose so your programs don't have to be. If you're solving problem X and realise "hang on, that's actually a special case of problem Y", you've reached the single most dangerous point in the development of your solution; you're only one step away from the logical conclusion of "I should solve Y instead". Now you're solving the wrong problem. Maybe a more interesting problem, maybe a more generally applicable problem, but not the problem you set out to solve.

Simple solution?

I came into contact with a version of this philosophy even earlier, in The Mote in God's Eye by Larry Niven and Jerry Pournelle. In the story, there are aliens called Engineers who only build special-purpose things. To them, there's no such thing as a generic chair. They would instead build a Sam-chair to my exact proportions. If those proportions changed because of a series of brownie-related incidents, they'd rebuild the chair. Every item in their world is custom-made for its particular purpose.

Both Moore and Niven's specialisation philosophies came from resource-constrained environments: the Engineers because of the limited physical resources on their home planet, and Forth from a time when computers were slower and less forgiving. I don't think this is a coincidence. Constrained environments require efficiency and elegance in design. There's no space for extra abstraction flab; each idea has to pay rent for the cost of keeping it.

But is that true of the modern era? I'm communicating this text through a system probably 10+ layers removed from the bare metal, and it's hard to argue that you don't get a lot done by just embracing all of the frameworks and platforms, efficiency be damned. I think this is a significant factor that makes Moore, like Forth, a fairly fringe player in the programming language scene. However, it would be wildly incorrect to conclude that either of them should be ignored. Writing your own block-level storage might be going too far, but I guarantee you feel the consequences of going too far the other way on a regular basis.

When you take on a framework, you're like a consumer buying a product: if it does a hundred things you don't need, or doesn't do things the way you want, well, tough. That's what we've got for sale. But as a programmer you're not a consumer. You're a producer. You aren't forced to accept an abstraction that doesn't work for you, or solves a problem you don't have. The option to build the Engineer-style specialised solution that conforms exactly and only to your needs is always there.

And I think software development would be a better place if we learned from Chuck Moore and took a look at that option a bit more often.

Sociometric panel show

Proportional opinions on Ed Snowden

I've seen a few different panel and debate shows and I've always thought it's a real shame that the people are arranged in such an uninteresting way. John Oliver famously held a proportional debate on climate change with 97 people on one side and 3 on the other. Obviously, that was a shambles to make a point, but I think there could be a useful and interesting format hidden under the guise of comedy.

A while back I learned about sociometric games, where you get people to represent their relationships with each other using dimensions in physical space. It's a lot of fun to get people to run around and stand where they were born on an invisible room-sized map, and you often learn some surprising things too (the birthday paradox works in space as well as time).

Anyway, my idea is this: why not make a sociometric panel show? As John Oliver did, get a roughly representative group of people (though maybe not quite so many) and lay them out in a space according to some axis of their beliefs. Then you can see the clusters of agreement laid out physically, interview whole groups at a time, and maybe even - on rare occasion - see someone adjust their beliefs and move to a different spot.

Ideally that would get you the best of both worlds: fringe viewpoints would be represented, but clearly identified as fringe by the small number of people supporting them. Mainstream viewpoints would get less time per person (if interviewed in groups) but seem more authoritative by weight of numbers.

Of course, it'd be a lot of work to make sure it runs smoothly. The lighting and camera setup would be very tricky. And most importantly you'd have to be very careful to avoid a Robbers Cave type situation. The last thing anyone wants to see on TV is people with different opinions fighting each other.

Mood Organ

The Mood Organ

I was talking to a friend today who'd made his own scientific relaxation soundtrack by mixing together nature sounds and music he'd selected based on research-indicated tempo, modality and instruments. It was kind of amazing, and it reminded me of a really exciting idea that Philip K. Dick nearly had in Do Androids Dream of Electric Sheep?.

In that book, everyone has a Mood Organ, which is a device that allows you to dial in any particular mood you want to feel. So, if you're feeling a bit down you can punch in the code for happiness and it'll improve your mood by transmitting special mind-control waves into your brain. Amazing idea, but I don't think the Science Waves were really necessary. We already have mood-influencing waves: music!

How many times have you played a happy song to bring on or reinforce a good mood? Or a fun fast song for driving or riding? I think we have all the tools we need to make a real Mood Organ. The missing piece is a solid understanding of what properties of music are best suited to bring on certain moods. Some research already exists, but usually only with an up or down mood, and only with specific songs.

What would be really exciting is if you could actually do some kind of component analysis on different kinds of music and use that to generate an infinite stream of new music for a certain mood. The Mood Organ would be a kind of radio where instead of picking a "rock" or "classical" station, you tune into "wistful" or "angry".

Could be a lot of fun. Or a lot of angst. I guess it depends on what you're into.

The Amazing Singing Tea Strainer

I have a small mystery on my hands: why is my tea strainer singing? My initial thought was something to do with the water running over the holes making it vibrate like a reed instrument. But it only seems to happen when the water is hot, not cold. What's the heat got to do with anything?

It's times like this I wish I knew more physics.


Code Runner

I've noticed that with some things I work on, I can actually get a lot of benefit by just giving myself less time to work on them. I'm not talking about Parkinson's Law, that scary enabling vehicle for chronic underestimators everywhere, just an observation that for some tasks I paradoxically seem to get better results and enjoy the work more when I have less time to do it in.

Having reflected on this a bit, I think it's to do with a terrible addiction to optimisation. When I have a bunch of options - like, say, a menu at a restaurant - it takes a lot of time to sort through all of them because I'm trying to optimise for the best result. One way to fix that is by limiting the number of options on the menu. But another, more practical way is to restrict the amount of time I have. That way I'm now optimising for the best result I can get in 5 seconds. A much more tractable problem!

As applied to work, I think there are many cases where you don't need the best solution, just a good enough solution. The time restriction can often make you focus on getting that good solution without worrying about whether there's a better one. And less time optimising also means less time until you get results, which shortens your feedback loop and gives you better information to make decisions with.



One thing I've been noticing recently is the cost of not being decisive enough. There are a lot of times when I have things to do, but I can't decide which things, or I don't fully commit to one of them. So maybe I want to either do some work or relax, but because I can't decide I end up doing something silly like working with the TV on, which is way less productive and less relaxing than either option on its own.

Decisiveness is always a bit scary because you're ruling so much out every time you make a decision. You gain one thing that you can do at the cost of a million things you could've done. Of course, if you think about it that's total nonsense. You could never have done more than one thing, but that abstract truth doesn't seem to translate well into system 1 monkey brain reasoning. It's so easy for that pure unrefined potential to start seeming valuable when you're not paying attention.

The whole multitasking culture is based on this crazy idea that it's better to half-arse a bunch of things than fully arse one thing at a time. I mean, the motivation is pretty obvious; everyone wants to believe they can do more, and to the extent that believing it makes you spend more it's obviously a behaviour that a lot of industries want to encourage. This is kind of related to yesterday's post, but the benefit of having more options only really kicks in after you've selected one of them (hopefully the best one). Up until that point the extra options are just a cost you pay in time and cognitive load.

Getting through the decision stage quickly and on to doing is best. Overthinking and taking ages to decide is worse. But even that beats never deciding at all.

Expanding Explanations

This is an Expanding Explanation. An expanding explanation is a way of explaining a complicated concept (particularly, one that has a lot of distinct parts) in a top-down way, meaning that it starts with more general ideas and allows you to work your way down to more specific ones. It has a few advantages over traditional explanations: you can skip over expansions that aren't relevant to you, explore them at a suitable pace and in whatever order you like, and most importantly it's fun to click on things. But there are some disadvantages too: you can't quickly scan through the entire text without expanding it, the expansion state doesn't save so if you come back you have to start over, and it takes more effort to write. All up, the concept may not be useful in every situation, but there are times when it could make understanding a complicated set of ideas easier and more fun. Error pages, for example: This webpage is not available. This webpage cannot be loaded because the DNS lookup failed. DNS (Domain Name System) is a system that matches a domain name (a human-readable website name) to an IP address (a number used by computers to route traffic on the internet). This is probably due to typing in the wrong address, which causes the DNS lookup to fail because it's looking for a website that doesn't exist, or a problem with your internet connection, which prevents the DNS lookup from reaching the internet at all. You can try again later or go through some troubleshooting steps:
  • Make sure you entered the website address correctly.
  • Check if you can visit other websites. If not, your internet may be down.
  • Try restarting your router and turning your wifi off and back on.

Project Space

project space

One thing I'd really love to use is some system that could encapsulate a project, including all of its files, software and running state. I have a bunch of different projects that I work on at different times, and keeping track of all the windows for each of them is a constant struggle.

Ideally, it'd be something kind of like a lightweight VM. You open up all your code editors, tools, browser windows, running services and so on, arrange them however you like, and then save that configuration as a Project Space. Then, when you're done working on it, you can close the space and everything goes away. When you want it back you can open it again and everything resumes just as it was.

I'm currently using Mac OS X's Mission Control and, as of this writing, have about 15 virtual desktops, almost all of them for project windows. With something like Project Space I could get that number much closer to zero and maybe not seem like so much of a hoarder.



I've always thought it's a shame that offices are such static, monolithic things. Especially as more people are working remotely, travel for work, or have flexible hours, it seems like the ability to make a new-generation flexible workspace would be a really big deal. In software infrastructure we have containerised and packaged services delivered on demand, and I don't see why we can't do the same thing for workspaces. I think of it as Office-as-a-Service.

It'd work like this: your office can be made of any number of standard components. You choose chairs, desks, screens, decorations, etc and set them up however you like in the room. When you check out, pictures are taken of the room recording the position and orientation of each item. Then, any time you like, you can migrate your office to a new location; a new city, a new country, a new spot in the same building, whatever. You can even suspend the office on days you're not using it to free up the resources.

When you check in to a new place, a crew of technicians would set up an instance of your office exactly the way you left it. They'd pull the requisite components out of storage (imagine an IKEA attached to each OaaS venue), move it all into the location, then use some kind of AR-style overlay to make sure all the items line up exactly with the saved images. You walk in and sit down as if nothing ever changed, even though you're in a different place. It's like your office is a magic room that disappears and reappears wherever and whenever you want it.

You could even have custom items that get shipped from place to place for things that aren't really replicable. But I think having too much custom stuff and turning the OaaS into a glorified moving service would be kind of missing the point. If you can make your office entirely out of the standard components then there would be very minimal set-up time and cost. You could push a button and have an instance of your office set up in under an hour.

Come to think of it, there's no reason it would need to be just offices. Imagine if you could get on a plane today, arrive tomorrow, and go straight to the same bedroom you left from – but in Barcelona or Tokyo? That would really be something.

Out Front

Out Front logo

Here's a neat idea for a new take on classifieds. There are a lot of times you're getting rid of stuff that doesn't quite meet the threshold for putting up on eBay (or Gumtree, or Craigslist or whatever), but you'd still rather not just throw it out. I recently had this exact problem with an old TV that maybe someone would have found useful, but not useful enough to pay for. Out Front is an app that allows you to say "Hey, there's a TV out front at this address. Come take it, or don't. I don't care. Don't talk to me."

There is an existing site called Freecycle that serves this same niche, but the main problem with it is that it's way too high maintenance. You've got to sign up, make a posting, respond to people who ask you questions, arrange to meet up. That's just as much hassle as selling it! With Out Front, you write what it is, take a picture with the camera, tag the location with your GPS, and that's where your involvement ends.

In fact, it doesn't even need to be your junk. If you see something thrown out on the street, you could just tag it as you walk past. Anyone could contribute, and the end result would be a glorious database of free junk to examine at your leisure.

Universal Send

Sketch of Universal Send

I've previously mentioned the remarkable shittiness of Android's Share menu. Though, in fact, it's a more general problem. This year, Google deprecated Chrome to Mobile, one of their most useful extensions, because "oh well you have tab sync anyway". Which seems like fairly strong evidence that nobody at Google understands the desire to send things. Well, I do, and here's my sketch for how that could work.

Your devices would take a privileged position, so you could very quickly send a link from one screen to another. After that, all your contacts appear in most-contacted order. If that contact uses Universal Send too, then the thing you're sending would immediately be sent to whatever device or system they prefer. If not, you'd get the option to choose how to send it: e-mail, SMS, Hangouts, WhatsApp, etc.

When someone sends you something, you have the option to choose how you'd like to receive it (or whether you'd like to receive it at all). You could also build up rules to say things like "anything sent from this contact should go to my email", or "always send things to my phone unless I specifically ask otherwise".
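Those rules could be as simple as an ordered list where the first match wins. Here's a rough sketch of that idea; the contact names and destinations are hypothetical, and a real Universal Send would obviously need much richer conditions.

```python
# A sketch of the routing rules described above: the first matching rule
# decides where an incoming item goes. Names and devices are hypothetical.

def route(sender, rules, default="phone"):
    """Return the destination for an item from `sender`."""
    for matches, destination in rules:
        if matches(sender):
            return destination
    return default

rules = [
    (lambda s: s == "boss", "email"),   # anything from this contact -> my email
    (lambda s: True, "phone"),          # always send things to my phone otherwise
]

print(route("boss", rules))    # email
print(route("alice", rules))   # phone
```

The "unless I specifically ask otherwise" part would just be a temporary rule inserted at the front of the list.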

Whether Google sees it or not, the ability to send stuff from one place to another is a fundamental need that isn't covered by the general case of syncing. I think something like Universal Send would be a huge improvement on what we have now.

The standard problem

When I was younger I always thought language was a set of rules you had to obey, a viewpoint I later learned was called prescriptivism, which is contrasted with its opposite, descriptivism. As I got older, my views mellowed away from the strict "what's in the dictionary is correct" school, but I've never found it in myself to go full descriptivist. Instead, I always thought of English as a standard, suffering from the same problem as any other standard.

When the web was first developing there were very few standards. Occasionally people published documents for one thing or another, but ultimately the standard was whatever got implemented. As time went on, standardisation did happen, but it was a largely descriptivist effort. Tags would appear in the browsers, then eventually those tags would appear in an HTML standard. It was a bit of a mess because the standard was constantly playing catch-up, and never seemed to quite cover the actual reality of the web.

For that reason the W3C eventually made a play at prescriptivism in the form of XHTML. Unlike HTML, which was fairly accommodating, XHTML was strict. If your webpage didn't adhere exactly to the specification, it wouldn't display at all. All behaviour was completely prescribed, specified and easy to predict. The standard was, of course, a complete failure. Nobody really used it because it was too different from what they were used to, which just worked.

So what we have now is HTML5, a kind of balancing act between the two which does a remarkable job of solving the standard problem. It clearly specifies a prescriptive standard, including what behaviour is required to be conformant. But the standard itself is mostly just a codified version of existing behaviour, so in most cases whatever you wrote before will still work.

You can kind of visualise it as a rubber band stretched around the space of existing implementations. If it's too loose, everyone can just do whatever they want. Too tight and it'll snap completely - everyone will ignore it. But if it's just tight enough, it will push the fringe non-conforming stuff back in line without inconveniencing the core features that everyone is pretty used to.

So I suppose that's ultimately become my model for languages. It seems clear to me that having no standard for English doesn't make sense, but neither would a standard handed down from on high that arbitrarily removed or renamed words in popular use. Instead it would have to play that same careful balancing act, trying to adjust behaviour without going too far and being ignored.

I'd sure hate to be the person tuning that rubber band.

The amazing Chinarouter


I've been sitting on this bad boy for a while now but today I finally got a chance to set it up. Technically, its name is the NEXX WT3020H, but that's not particularly catchy so I've been calling it the Chinarouter. It's got two ethernet ports and 300Mbps 802.11n wifi. It's tiny, it's USB-powered, and you can flash it so it runs Linux. The price for all this? Around US$15.

The OpenWRT wiki page was a little daunting but in reality it was just a matter of downloading the image, opening the web UI and hitting "upgrade firmware". I spent a bit of time messing around with settings, but the result is incredible. I now have a cheap, pocket-sized tool I can throw at any problem that has a vaguely networky shape.

I'm currently using it as an OpenVPN router after approximately following these instructions. One port is plugged into my regular network, and traffic from the other port and wifi are both put through my VPN before they hit the internet. No more messing around with client settings or draining extra battery on my mobile devices, I just connect to the "vpn" wifi network and my connection is secure.

I'm completely blown away by all the possibilities of this thing. You want to add wifi to a device that only supports ethernet? The Chinarouter can do that! You wanted that magical Tor-in-a-box that got shut down? It was literally just a Chinarouter with some custom firmware. I've even got a plan to use it to do network audits that I'll probably write about later.

Seriously, I am so in love with this thing. Between the Chinarouter and the new Raspberry Pi, the number of tiny Linux boxes in my house is getting ludicrous. If this is the future, I'm in.

Scoping calendar


I've been noticing lately that it's sometimes easy to lose track of the way my longer term goals connect with my shorter term ones. For example, I might decide at the start of the month that I want to work on some project, but then on a week-to-week or day-to-day basis I get distracted by other things or forget to put aside time for it. Prioritising day-tasks against day-tasks or week-tasks against week-tasks seems to be fine, it's just when the timescales collide that things go awry.

So here's my rough sketch of a system to fix it. I call it a scoping calendar. I'm a big fan of calendars and timetabling, but they only let you schedule things at an exact time. The scoping calendar lets you schedule something on any month, any week or any day, without needing to pick exactly when you'll do it. That means you can think about and plan your work on a very high level.

But the real beauty of the idea is that the different levels interconnect, and you can be high- and low-level at the same time. When you have more specific plans you can go back and add them in. Any time that you schedule on a per-day basis fills in progress for the week, and same for the week to the month. And the amount of time left unallocated in the month shrinks as the month goes on, letting you know if you've scheduled more than is possible.
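The roll-up between levels is the interesting part, so here's a rough sketch of how it might work. The structure and numbers are hypothetical; the point is that hours scheduled at the day level count as progress for the week, and the week for the month.

```python
# A rough sketch of the scoping calendar's roll-up: time scheduled at a
# finer scope (a day, a week) counts toward the coarser scope above it.
# Budgets and hours below are invented for illustration.

class Scope:
    def __init__(self, budget_hours):
        self.budget = budget_hours   # total time this scope can hold
        self.direct = 0.0            # hours scheduled at this level only
        self.children = []           # finer-grained scopes (weeks, days)

    def schedule(self, hours):
        self.direct += hours

    def allocated(self):
        # hours scheduled here plus everything rolled up from below
        return self.direct + sum(c.allocated() for c in self.children)

    def unallocated(self):
        return self.budget - self.allocated()

month = Scope(budget_hours=80)
week1 = Scope(budget_hours=20)
month.children.append(week1)

month.schedule(10)   # "work on the project sometime this month"
week1.schedule(5)    # "do 5 hours of it some day this week"

print(month.allocated())     # 15.0
print(month.unallocated())   # 65.0
```

When `unallocated()` goes negative, you know you've scheduled more than the month can actually hold.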

Ultimately what you end up with is a view that shows you your goals, your commitments towards those goals and your progress on every time scale simultaneously, letting you bridge the gap between long-term and short-term planning.

Continuous everywhere

Progress over time

There's a difficult tradeoff when trying to shorten a feedback loop: if you take less time to do something, won't it be less good?

It's intuitively obvious that there's a tradeoff between time and quality: to make something good takes more time than making it mediocre. But there's a hidden "something" term in there that gives us an extra dimension to work with. A more complicated thing also takes more time than something less complicated. Perhaps a better way of thinking about it, then, is as a time-quality-scope tradeoff. I've come to think lately that the correct answer is to minimise that scope as much as possible.

Put another way, imagine you have a graph of your progress over time. When you're in the middle of working on something but not done yet, you can visualise it as a gap in that graph. When you finish, the progress goes up in one big jump. I think of scope, in this context, as that gap: the length of time where your progress function is undefined. If someone asks "how's the project going?" you don't really know the answer. And trying to estimate when you'll be finished halfway through a gap is basically impossible.

But you can decrease the size of the gaps in that function by just defining "done" differently. Maybe if you are clever about it, you can have twice as many done points while still achieving the same progress over time. The undefined gaps will shrink, you'll have better and more accurate information, and be able to make decisions more quickly.

It's interesting to consider the ultimate extreme of this philosophy: what if you could get to a point where there are no gaps, and your progress is defined on every time-scale? It would mean no git, no deployment system, no edit/save cycle: everything you do is immediately put into production. It would mean a keystroke is a function is a module is a project. It would mean your progress function is continuous everywhere.

Bezier Playground

Bezier playground

Here's a neat thing I made a long while back: the Bezier Playground. It was part of a general plan I had at the time to make a series of interactive explanations of concepts that are very hard to explain algebraically, but easy visually and interactively.

The Bezier curve was actually the first time it dawned on me that, for some things, symbols are just a bad way to think about them. I had to implement Bezier curves for a computer graphics course, and after staring for a while at the closed form in the textbook with my eyes crossed, I happened across this gif on Wikipedia. Everything became clear, and I put together my implementation based entirely on that one image.
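That construction has a name, De Casteljau's algorithm, and it translates almost directly from the animation into code: keep linearly interpolating between adjacent control points until only one point is left.

```javascript
// Linear interpolation between two 2D points.
function lerp(a, b, t) {
  return [a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t];
}

// De Casteljau's algorithm: evaluate a Bezier curve at parameter t by
// repeatedly lerping between adjacent control points until one remains.
// Works for any number of control points (quadratic, cubic, ...).
function bezier(controlPoints, t) {
  let pts = controlPoints;
  while (pts.length > 1) {
    const next = [];
    for (let i = 0; i < pts.length - 1; i++) {
      next.push(lerp(pts[i], pts[i + 1], t));
    }
    pts = next;
  }
  return pts[0];
}
```

Sweeping t from 0 to 1 traces out the whole curve, which is exactly what the animation shows.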

Later on when I ran across Bret Victor's Kill Math and related work, it felt so right and so familiar that I went and threw together this playground as a tribute. Enjoy!

Projects page

I've just added a projects page to the site.

Although everything in there is, at the moment, pulled straight from my posts here, I'm quite pleased about having a separate place to just list things I've done. I should be filling it out more soon with other past projects, assuming I can actually still get them to run well enough to generate screenshots.

I'm also considering adding an equivalent page for ideas, but that seems like it could get out of hand very quickly.



I was hunting through my Code directory and I found this neat demo I made last year for a WebRTC hack day. It's called VRoom and it's an experiment with audio spatialization and environmental effects. It uses a PannerNode in HRTF mode to pan the sound and apply extra processing to make it sound more directional.

There are maybe a few cleverer ways to do environmental effects. Hard Mode would be to actually do full audio raytracing, but that's probably overkill for most things. Instead, I think it might be sufficient to just use two ConvolverNodes: one based on a distance query to cover reflective noises (like echoes on concrete) and one based on an intersection query to cover refractive noises (replacing the current distance-scaling effect).

It's pretty crazy when you think about just how much there is in the Web Audio API. And all the new web APIs for that matter. There's not much difference between the web and a complete OS anymore.


glasses $39.99

(The above is actually clickable - try it!)

This is an idea I had today while I was buying stuff on eBay and enduring the interminable 12-click process you have to go through: Yes, I want to buy it. Yes, I confirm I want to buy it. Yes, I want to pay for it. Yes, I want to pay for it with PayPal... and so on. I assume it's to stop you buying stuff by accident, but it's still a bit ridiculous.

Google have, for a long time, been fighting the good fight against confirmation screens. Their philosophy is to ask for forgiveness rather than confirmation. Yep, you just accidentally pressed that button and deleted your email. Did you want it back? Okay, just hit undo. All better.

Two problems with that: firstly, not every transaction is undo-able. Spending money, for example, is a lot of hassle to undo. The other problem is that it's still frustrating when the computer does something you don't want, even if you get an undo button. I can't count the number of times I've accidentally deleted emails on my phone while trying to scroll. That frustration doesn't undo either.

So here's an interesting way to look at it: just make the button a bit harder to click. I'm okay with waiting a couple seconds if I don't have to navigate to any more screens, confirm anything else or hit undo. I creatively called it hold-to-confirm. If you want to take a look at the source there's a cleaner version on GitHub.
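For the curious, the essence of the idea is tiny. This isn't the extension's actual code (that's on GitHub), just an illustrative sketch, with the clock passed in so it isn't tied to a browser:

```javascript
// Hold-to-confirm in miniature: the action only fires if the button is
// held down continuously for long enough. `now` is an injected clock
// (e.g. () => Date.now()); the real version differs in detail.
function holdToConfirm(holdMs, onConfirm, now) {
  let pressedAt = null;
  return {
    down() { pressedAt = now(); },
    up() {
      const held = pressedAt !== null && now() - pressedAt >= holdMs;
      pressedAt = null; // releasing always resets the hold
      if (held) onConfirm();
      return held;
    },
  };
}
```

In a page you'd wire `down` and `up` to the button's `mousedown` and `mouseup` events.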


I was talking with some people at dinner a little while ago about the ethics of killing animals for food. It's a question I've always found fascinating because I think that at some point, our technological means will make it unnecessary. Shortly after that it will become morally questionable. And shortly after that, barbaric and embarrassing. It's right at the nexus of a lot of interesting moral questions that we don't have good answers to: What is the value of life? How do we allocate moral weight to beings? And, my favourite, what is life anyway?

I find the non-commutativity of life particularly interesting. Creating a life is not as good as taking a life is bad. So if you can either have no children at all, or have two children but kill one of them, the former is ethically fine and the latter is deeply wrong. However, the net result of the first decision is one less person than would have existed with the second decision.

If you could go back in time and convince someone not to have children, and those children cease to exist in the present, have you murdered them? What if you cut out the time travel and just convince someone to not have children today? If you could choose for the entire human race to just stop reproducing tomorrow, meaning the complete eradication of our species in the next hundred years, would that be ethically superior to, say, the one-off genocide of a billion people? But what is that if not bigger numbers plugged into the same moral equation?

The relevance to animals is of course that there are so many animals that are alive because we keep them for meat. Perhaps it is barbaric that we breed and slaughter them. But what about all those lives that would never have existed otherwise? Are they really worth nothing? If you could choose to live for twenty years or no years at all, what would you choose?

I'm struck by the pig that wants to be eaten from Hitchhiker's Guide. It feels so weird because it takes two important values – individual choice and protection from harm – and rams them into each other. What do you do with someone who wants to be harmed? Our current, fairly un-nuanced answer is that someone who wants to be harmed doesn't really want to be harmed and actually has a mental illness. But there's a large subculture of people who enjoy being recreationally harmed without needing any psychiatric treatment at all! To say nothing of extreme sports, daredeviling and other high-chance-of-maiming activities.

I think to address at least some of these questions it would be helpful to have a notion of metaconsent. That is, perhaps there is no ethical issue eating an animal that wants to be eaten, but there is an issue creating that animal. It can consent to being eaten, but it can't metaconsent to wanting to be eaten. The decision to have those values was forced on it. It is equivalent to someone who hypnotises you into desperately wanting to eat tree bark. After the deed is done, that same person would be doing you a favour by feeding you all their extra tree bark. However, they still did you a grievous wrong by making you want it in the first place: they violated your metaconsent.

The thorny question is how you could possibly obtain metaconsent from a being that doesn't exist. Obviously, we have not metaconsented to the desires we have or the things we value, they just happened to us. The universe is not a moral being, however, and as we take the ability to create life from nature we also take the responsibility to do better. One option would be to try to imagine a discussion with a version of the being that has a neutral position on the subject. An animal that doesn't feel strongly about being eaten or not eaten would still prefer not to *want* to be eaten, because being eaten also interferes with most other goals.

However, for many things it would be more complex. Either a neutral position might not be meaningful (what's a neutral food preference?), or it might be tautological (would you rather want short or long socks, assuming you have no current feelings about them?). The latter case could be an indicator that metaconsent would be granted – I certainly wouldn't be dismayed to learn that I've been genetically predestined to prefer short socks – but there are some decisions where saying "see, no preference!" could be hiding a more serious problem. Maybe you could construct a hypothetical animal with no preferences of any kind. Why would it care, then, about whether it wants to be eaten or not?

I think a secondary option would be a metaconsensual form of the categorical imperative, or the veil of ignorance: if the position might also apply to you, would you metaconsent to it? It's pretty tough to justify creating an animal that wants to be eaten if you can't imagine ever metaconsenting to wanting to be eaten.

To bring it back around, then, we can use the same techniques to have a hypothetical discussion with the cows that we breed. Would they want to be brought into existence? By the first test, I think yes; if a cow takes a neutral position on being born and then eaten, then any other desires (enjoying grass and so on) would push it into wanting to exist. The second criterion is a little trickier. You could imagine an equivalent situation where humans are subjugated by some kind of evil aliens who kill us and eat us sometimes. Would we collectively rather have a lot fewer people in exchange for no more killing and eating? I think each individual person would rather exist, but maybe collectively we would agree that the improvement in dignity is worth the lost lives.

I don't know if it's very satisfying to finish with a maybe, but I think that this way of thinking at least provides an entry point to reasoning about the morality of bringing beings into existence or changing what they value. I'm certainly a lot less impressed with the animals-are-better-off-being-eaten argument, at least.


Screenshot of Catenary

Well, it took a while but today I'm proud to release Catenary, a concatenative programming library for Javascript.

When I first saw Forth and Factor I didn't really think much of them. Mostly I messed around a few times, figured out that doing arithmetic was super tough and then got bored. It was only later on that I began to realise the elegance of making a language with so little in it. That elegance ended up prompting me to start messing around with what eventually became Catenary.

Although it took several rewrites to get there, I'm fairly chuffed that there ended up being only 76 lines of code in the core. On the way I managed to throw out nearly every concept I started with, and in fact part of what took a long time was getting stuck because I threw out one too many and had to go back. So I'm fairly confident there's not much left to trim. Along the way I ended up learning a lot of significant stuff about why concatenative languages are the way they are, and which particular concepts are unnecessary sugar and which are actually fundamental.

I'm not sure if concatenative-programming-in-the-large will ever hit the mainstream, but I think that, like functional programming, it can supply a really interesting framework to structure your ideas around. Catenary was designed in that spirit - it's meant to supplement traditional Javascript, not replace it. If you can go in and out of concatenative style at will, maybe the result will be that certain problems begin to look much easier.

If this project managed to make people think about concatenative patterns in regular code the way they do about functional ones, that would really be something amazing.


Black hole having an identity crisis

In certain sciences like astrophysics it's often impossible to measure things directly. You can't really fly out with a giant galactic tape measure to figure out the size of the Milky Way; instead, you find stars whose pulsations track their intrinsic brightness (Cepheid variables) and compare that brightness with how bright they appear to work out how far the light must have travelled. Similarly, you can't measure the mass of a black hole directly; you rely on the effect its mass has on nearby bodies. In fact, you basically can't detect black holes at all except by their effects on things around them.

It occurs to me that sometimes people have the same property. You can't really measure someone's intentions, personality or abilities. Instead, you have to measure the indirect effects those qualities have on the world around them. There's no perceivable difference between an amazingly intelligent person who never says anything and an amazingly dumb person who never says anything. Someone who would do amazing things if they ever had the chance but never gets that chance has still achieved exactly as much as someone who never tried at all.

A possible (depressing) conclusion to that line of thinking is that you don't exist except as measured by the opinions of others and the results you achieve. But those measurements are so unreliable. In poker they talk about a bad beat: a hand where you do everything right but lose anyway, or vice versa. How can you possibly be happy with your decisions if all you care about is the win? And how can you even have an identity if it's just what people think you are? There is, however, one (and only one) person your internal state does make a difference to: yourself.

I think to be able to function effectively in the face of adversity or indifference you have to be able to measure yourself on your own terms. If you can define without reference to the outside world the shape of good and bad, of success and failure, then you're also free from trying to control those external factors which are, fundamentally, beyond your control.

Measuring yourself by your effects on the things around you may be simple, even easy, but the end result is being dependent on those things for your happiness, and even your identity itself.

The Quantified Wallet

Burn rate

I've been trying to get a bit more of a handle on my finances lately, so I figured I'd try to get all the data into the time-series database I've been messing with. Actually getting data out of the various banks turned out to be a nightmare of badly formatted pseudo-CSV and export systems that give you slightly less than 6 months of data at a time.

Still, I've got it going now. I've written scripts to clean the CSV up and other scripts to load it into my database. I've got scripts up to my ears. And it kind of occurs to me that I still don't necessarily have all the information I need. The pictured graph is how much I spend each week. Ideally I'd be able to break that down by what kind of things I'm spending it on and filter things out and so on, but that's still a ways off.
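Once the exports are cleaned into a uniform shape, the burn-rate graph itself is just a group-by. A sketch, with a made-up transaction format:

```javascript
// Group outgoing transactions by the Monday of their week.
// The { date, amount } shape is hypothetical; real bank exports need
// per-bank cleanup scripts before they look anything like this.
function weeklySpend(transactions) {
  const weeks = {};
  for (const { date, amount } of transactions) {
    if (amount >= 0) continue; // only count money going out
    const d = new Date(date + "T00:00:00Z");
    // Roll back to the Monday of this transaction's week (UTC to keep
    // the arithmetic timezone-independent).
    const monday = new Date(d);
    monday.setUTCDate(d.getUTCDate() - ((d.getUTCDay() + 6) % 7));
    const key = monday.toISOString().slice(0, 10);
    weeks[key] = (weeks[key] || 0) - amount;
  }
  return weeks;
}
```

Breaking that down by category would just mean one more grouping key, assuming I can ever coax categories out of the transaction descriptions.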

I'm pretty surprised that none of the banks seem to be working on analytical tools like this given how useful they are. Plus they'd easily be able to beat any third-party offerings, because the crappiness or straight-up nonexistence of their APIs and export tools would make competition impossible.

Pain Fade Down

This is a cover I made of Pain Fade Down for a presentation today at SydJS. Pain Fade Down is a song from the Darwinia soundtrack, and you can find the original on trash80's website.

It turned out to be surprisingly tricky to recreate a lot of the sounds! The original used a text-to-speech engine for the vocals which I tried to recreate with vocoding to questionable success. Also the bleepy bloopy synth was really tricky to match tonally because of the delay effects. If you're reading this and are a musician, I'm happy to take suggestions!

It's built using my Tabletone library, which you can also find on GitHub.


Literate you

I recently realised how nice Catenary's source looks on GitHub. I think it's mostly a coincidence caused by Literate Coffeescript happening to be a subset of Markdown and GitHub having really nice Markdown rendering.

Honestly looking at it kinda makes me want to write everything using this particular Markdown-as-literate-programming style, even non-Coffeescript projects. It'd be lovely if there was a general tool that let me use any language embedded in Markdown and extract it out in a build step. Though I suppose part of the difficulty is that the language would need to be flexible enough that it doesn't limit your ability to write your thoughts in whatever order makes sense.

I know Knuth's original literate programming had a whole metalanguage that let you break completely away from the conventions of the language but in the immortal words of Sweet Brown: ain't nobody got time for that. It feels much cleaner to have something that still looks (mostly) like the original language and keeps its structure.


I've been trying to write something every day, but last night I was out late and fell asleep as soon as I got home. My first instinct is to make up for it by just posting twice today, but the problem with that is that it lessens the instructive value of having failed. I think it's important to be reminded occasionally that success isn't something you achieve once, it's a process that requires constant maintenance. Succeeding at writing every day, then, is something that can go away fairly quickly.

But I do gain a lot of value from feeling like I am trying to keep up a particular pattern, and it feels good to have a long chain of daily posts stretching back. The most important thing is that the habit has significance, and that it's something I take seriously. Eventually I arrived at the compromise decision of posting twice, but making one of those posts about the failure itself.

I think the strategy that would have prevented this is writing before I left home, or alternatively being prepared with something I could write quickly as soon as I got home regardless of how late that was. Doing those things obviously won't help this failure, but it should make one less likely next time.

And in the end that's the best you can ask for.

A strange game

War map

One thing I've noticed is that a lot of people, myself included, tend to have fairly particular ideas about what their game is. A programmer, say, is usually explicitly not playing the business game. A researcher is not playing the marketing game. An office worker is not playing the politics game. Indeed, in many cases that game is seen as somehow not relevant, or unworthy, or even just a mug's game - one where the only winning move is not to play.

Which would all be fine, except that you don't actually get to choose which games to play. If there are office politics and you deliberately remain ignorant about them, you're not achieving some noble victory, you're just hobbling yourself. Similarly, a researcher who avoids marketing isn't making some strategic non-move move by letting their ideas languish in obscurity. The status quo for these kinds of games isn't neutral, it's failure; you don't start at 50% and get worse as you play, you start at 0% and work your way up.

Reality doesn't care about how you've divided things up in your head. Your options for influencing it are a big bunch of levers; some shaped like engineering, some shaped like business, others shaped like people skills and politics. If you pull the right set of levers you can get the things you want, and from that perspective it's pretty hard to justify ignoring most of them because, while they might work perfectly, they're the wrong shape damnit!

Not playing may not always be a losing move, but it's putting your success in the hands of more experienced players and hoping that their victory aligns with yours. If not, the only winning move is to play.

Predictor-surprise pattern

Two computers playing peekaboo

An interesting thought I had a little while back is that a lot of aesthetically pleasing things, like music, art or comedy, seem to rely to some degree on making and breaking expectations. Comedy is maybe the most direct example, because a good joke usually works by being surprising or confronting in some way. But in music too there is a constant back-and-forth between expectation and reality. You hear a refrain two times, you're all primed to hear the same thing again, but then it changes subtly the third time. For whatever reason, this seems to be very enjoyable.

It'd be interesting to make a foray into generative music or art by explicitly building an audience model that works as a kind of predictor. The predictor would be constantly searching for patterns in the sequence of notes it's seen, trying to accurately predict what comes next. However, since you're running the predictor and also generating the notes, you can do something a bit interesting: change what comes next if the predictor would have predicted it too easily.

The system would then have some tunable 'surprise factor'. The higher it is the less willing you are to let the predictor win and the more you will subvert its expectations. My prediction is that after a while experimenting you would find some particular value that seems to be the sweet spot for making enjoyable music. But I'm prepared to be surprised.
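Here's a toy version of that loop, with a bigram model standing in for the predictor. All the names and the swap rule are illustrative only:

```javascript
// A bigram predictor: counts which note has followed each note so far,
// and reports how confidently it would have predicted a transition.
function makePredictor() {
  const counts = {}; // counts[prev][next] = occurrences seen
  return {
    confidence(prev, next) {
      const row = counts[prev];
      if (!row) return 0;
      const total = Object.values(row).reduce((a, b) => a + b, 0);
      return (row[next] || 0) / total;
    },
    observe(prev, next) {
      counts[prev] = counts[prev] || {};
      counts[prev][next] = (counts[prev][next] || 0) + 1;
    },
  };
}

// Walk through an intended melody, swapping in a different note from
// the scale whenever the predictor would have guessed it too easily.
// surprise = 0 never swaps; higher values subvert more aggressively.
function generate(intended, scale, surprise) {
  const predictor = makePredictor();
  const out = [intended[0]];
  for (let i = 1; i < intended.length; i++) {
    let note = intended[i];
    const prev = out[out.length - 1];
    if (predictor.confidence(prev, note) > 1 - surprise) {
      note = scale[(scale.indexOf(note) + 1) % scale.length];
    }
    predictor.observe(prev, note);
    out.push(note);
  }
  return out;
}
```

Even with this crude model, a strict "A B A B" melody picks up a deviation at exactly the moment the pattern becomes predictable.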

High water mark

Achievement, overachievement and underachievement

It strikes me that we often measure ourselves by what we've done previously, but in some cases that can result in very perverse incentives. The classic example here is the employee who pulls the heroic all-nighter to meet a deadline. The project is saved, the people shout and cheer, and that employee is branded as the all-nighter hero. But guess what happens next time the project is running behind schedule? Oh hero, where are you? Soon the heroism becomes expected and you've created yet another crunch-cycle zombie.

One thing that has surprised me is that this effect seems to persist even without the necessity of the bad-project-management-stress bogeyman. On my own projects, if I achieve beyond what I expected for any length of time, my expectation rises to meet that achievement, even when I explicitly set more modest goals to anchor that expectation. I like to visualise this as a high water mark being set by the steady ebb and flow of productivity, and it's very dangerous if left unchecked. As inflated expectation stacks upon inflated expectation, a small project with modest expectations can quickly swell into a Sistine Chapel behemoth.

I find that the only way to reset that expectation is to sometimes deliberately do the bare minimum. Watch the clock and get out at 5pm exactly. At least 1000 words? Sounds like exactly 1000 words to me! Minimum fifteen pieces of flair means I'm wearing fifteen pieces, and if you want more you need to set the minimum higher. I'm not saying do this all the time, that sounds like a recipe for mediocrity, but I think it's healthy to wipe out the high water mark on occasion.

Never pushing down means the only influence on expectations is upwards, and there's no way that can keep up forever.

The Elo paradox

Penrose stairs

I was showing yesterday's post to a friend and he made the point that the feeling of constant improvement is very important, motivationally speaking, and that works against keeping your goals modest and reasonable. In fact, I think there's a very fundamental conflict between motivation and improvement, which I call the Elo paradox.

Chess, as well as many online competitive games, uses an Elo-based rating system. Essentially these systems try to create a predictive measurement for you as a player, such that two people with equal ratings are equally likely to win a match against each other. There have been various improvements since Elo's original formulation, but the core concept is the same: your skill can be represented as a prediction of how likely you are to win. It's a powerful idea and yields very accurate skill measurements.
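The machinery is compact enough to sketch in full. These are the classic chess constants (a 400-point scale and K = 32); real systems tune them:

```javascript
// Expected score: the probability that A beats B, given their ratings.
// A 400-point advantage corresponds to 10-to-1 odds of winning.
function expectedScore(ratingA, ratingB) {
  return 1 / (1 + Math.pow(10, (ratingB - ratingA) / 400));
}

// After a game, a rating moves by how much the result (1 = win,
// 0 = loss, 0.5 = draw) deviated from the prediction. K controls
// how volatile ratings are.
function update(rating, expected, actual, k = 32) {
  return rating + k * (actual - expected);
}
```

Equal ratings predict a coin flip, and whatever the winner gains the loser loses. That zero-sum honesty is exactly what makes Elo accurate, and exactly why it can't flatter everyone at once.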

But as good a rating system as it is, Elo is terrible game design. Almost everyone's journey in an Elo-ranked system looks the same: they come in as a beginner with an abysmal score. They have an initial burst of improvement that pushes their score into the low end of average. They work hard on their average score and eventually turn it into a slightly above average score. Their score stops going up. They have a bad week and lose a bunch of games. Their score drops. They stop playing for a little while. They come back rusty. Their score drops even more. They stop playing for good.

The problem is that we want to feel a sense of progress. It's nice to be better than you were yesterday. But the tragic reality is that won't always be true. Mostly you're the same, and sometimes you're worse. That's the Elo paradox, in a nutshell: you can't have an accurate measurement of your ability that always increases.

For that reason, I think getting your motivation from an intrinsic measurement is fundamentally silly. Maybe it works for some people, but it seems obvious to me that you'll always run up against that skill ceiling at some point or another and see your results taper off. Instead I prefer to focus on cumulative output. That has the nice quality of being a measurement you can always control, and it always goes up.

I might not move faster than yesterday, but I've moved further. I might not be smarter, but I've thought more. I might not be better, but I've done more.

Open Spam Net

Spam net

I've been looking into migrating away from Gmail for a while, largely out of concern that Google can shut down my email on a whim. My mail already runs through my own mail server, but I use Gmail for its nice UI, searching, and other neat features. I still haven't found a mail client that I actually like, but I'm hopeful that something will come along.

Aside from UI, the other big advantage of Gmail over your own server is the quality of its spam filtering. This isn't even just an algorithmic thing, although I'm sure their algorithms are top-notch. Rather, it's systemic: Google has access to everyone's email, so they can do spam filtering across everyone's email at once. They can apply detection techniques in the large that are impossible for an individual mail server operator to match.

Well, maybe. Most spam filters use some kind of Bayes classifier. Essentially, when you mark some emails as spam or not spam, you know the probability of different features (keywords, usually) appearing in those emails. What you really want to know is the reverse: the probability of spam given the features. That's exactly what Bayes' theorem is for.
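For a single feature the arithmetic looks like this (all the numbers are invented):

```javascript
// Bayes' theorem for one feature: given how often a keyword appears in
// mail already marked spam vs not-spam, and a prior spam rate, compute
// the probability that a new mail containing the keyword is spam.
function pSpamGivenFeature(pFeatureGivenSpam, pFeatureGivenHam, pSpam) {
  const pHam = 1 - pSpam;
  return (pFeatureGivenSpam * pSpam) /
         (pFeatureGivenSpam * pSpam + pFeatureGivenHam * pHam);
}
```

So if a keyword appears in 80% of your spam but only 10% of your legitimate mail, a message containing it is roughly 89% likely to be spam (at a 50/50 prior). Real filters multiply evidence like this across many features.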

And there's no reason that model has to be restricted to a single user, or single server. You can chain many layers of predictions together in a Bayes network, and you could use that exact structure to federate your predictions. So: I think a certain set of features are likely to be spam. You can subscribe to my ideas of spamminess, and any time I'm wrong, not only does your prediction of those features being spammy go down, your prediction of me being right goes down too!
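As a sketch of how that federation could work: combine subscribed peers' predictions weighted by their track record, then adjust the trust scores whenever the user supplies the true answer. The whole structure here is hypothetical:

```javascript
// Weighted vote: each peer contributes its spam probability for this
// email, scaled by how much we currently trust that peer.
function combine(predictions, trust) {
  let num = 0, den = 0;
  for (const [peer, p] of Object.entries(predictions)) {
    num += trust[peer] * p;
    den += trust[peer];
  }
  return den === 0 ? 0.5 : num / den;
}

// When the user marks the email, each peer's trust drifts toward its
// accuracy on it: predicting 0.9 spam on actual spam scores 0.9.
// Wrong peers lose trust, so their opinions count for less next time.
function updateTrust(trust, predictions, wasSpam) {
  const updated = { ...trust };
  for (const [peer, p] of Object.entries(predictions)) {
    const score = wasSpam ? p : 1 - p;
    updated[peer] = 0.9 * trust[peer] + 0.1 * score;
  }
  return updated;
}
```

The exponential smoothing here (the 0.9/0.1 split) is just one arbitrary choice of trust update; a proper Bayes network would derive it rather than hard-code it.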

The nice thing about this is that there's very little benefit in spammers getting hold of it. Anyone (including a spammer) entering the network would start with a 50/50 predictiveness, which is to say there's no assumption that they have any value. They would earn that value by making successful predictions, and if spammers want to help fight spam then great.

The end result would be a massive predictive network where each user's spam predictions are combined. A place where everyone from big corporate networks to little private VPS mail servers can collaborate to fight spam together.

Idea Globe

Idea Globe

A project I've been meaning to do for ages and finally got around to today is the Idea Globe. I really like coming up with and working on ideas, but up until now I haven't had a good place to put all of them. I keep a notebook, of course, but it's full of all sorts of other things as well, and the linear structure isn't great for quickly looking through ideas when I want to work on something.

I figured a fun way to approach the problem would be to make a big Earth-like globe where all the ideas are just sort of floating around in no particular order. I added topics as well so I can associate different things I've been thinking about when I want to come up with a new idea. You can pin an idea or a topic by clicking on it, and then it stays put while the others keep revolving. The ideas also display a brief description and sometimes a sketch when they're selected.

There's still a bit of work to do: mainly it's a bit too crowded at the moment. I think I might need to only display a random subset of ideas at a time, and probably do some random-looking-but-not-really-random spacing so that the different ideas don't run into each other.

It's great to have a place to throw up all my random thoughts that aren't even at the point where I'd write a post about them. I encourage you to go have a play around and check out some half-baked ideas. However, you won't see the idea for the Idea Globe in there. As of today, it's graduated!

User model

An interesting thought: as we've made our devices smaller and more integrated we've also made them harder to interact with. Getting around these form-factor issues has required branching out into all sorts of alternate input systems: keyboard swiping, predictive text, voice recognition, maybe even finger gestures in the air. At the core of all of these are probabilistic input systems that try to guess what you mean by making assumptions about you.

For text input, at least, the system starts with a pre-trained model of what words I most likely want to write. Then I can train it by adding new words and using certain words more or less often. But the systems aren't integrated very well: words I write more often aren't more likely to be chosen by voice recognition, even though that would be a valuable signal. And I think more generally there's an inkling of an idea that could be really great if it was developed: having a model of the user using your system.

A proper user model could go beyond just learning which words you use most and actually change the way your computer works to better suit you. For example, a model of my reaction time could tell that I didn't mean to click that button that just appeared under my mouse 100 milliseconds ago. A model of my listening habits could tell that I only play music or video games, not both at the same time. It could also tell you that I have specific decibel preferences for different audio sources. A model of my waking hours could adjust my screen temperature, notification preferences and music preferences all at once.
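The reaction-time example is easy to sketch. The 200ms default is an invented figure; a real user model would learn it per user:

```javascript
// A click-intent guard: if a control appeared more recently than the
// user's modelled reaction time, the click can't have been aimed at it
// and should probably be ignored or undone automatically.
function makeClickGuard(reactionMs = 200) {
  const appearedAt = new Map(); // control id -> timestamp it appeared
  return {
    shown(id, now) { appearedAt.set(id, now); },
    intentional(id, now) {
      const t = appearedAt.get(id);
      return t === undefined || now - t >= reactionMs;
    },
  };
}
```

The interesting part isn't the threshold, it's that it belongs to the user rather than to any one application.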

Obviously, anything that users interact with has some kind of user model if it stores any user information or changes its behaviour according to preferences or feedback. However, I think that making it explicit – and, more importantly, centrally managed – would be an amazing improvement in the way we interact with computers.

At least until the ad companies get ahold of it.

Corporate governance

We consider ourselves not just a company running a website where one can post links and discuss them, but the government of a new type of community.

Reddit Ex-CEO Yishan Wong on free speech

I was talking to a friend about Google today and it reminded me of this strange transition I've noticed where as internet companies get bigger they become more like governments. I think this isn't at all a coincidence, because the modern internet startup is built around many of the same concepts. A government has a monopoly and uses it to act as a gatekeeper for trade, enforce policy, and control membership. A big internet company is much the same, but its monopoly is a platform monopoly instead of a monopoly on force.

But I think a lot of companies don't recognise the moral dimension of their rapidly burgeoning power. The xkcd-style "banning you isn't violating free speech" idea, for example, seems fairly short-sighted. Maybe banning you from a forum isn't taking away your free speech, but what about banning you from every forum? Or all of Facebook? Or everything owned by Google? Or the internet entirely? Hey, we're not taking away your free speech, because that only applies to the state!

Unless we're willing to claim that large internet companies have no responsibility to the people whose lives, communications and digital property they govern, we have to consider expecting the same guarantees from them as we would from any other government. Maybe there is no legal requirement for Google to uphold free speech, or due process, or property rights. But maybe there should be.

As a company grows from being some scrappy startup to a foundational piece of internet infrastructure, I think its responsibilities need to grow along with it.


Today I was trying to explain what I find frustrating about non-engineers in management positions, and I finally have a word I'm happy with: mechanics. I think that what distinguishes good management from bad is an understanding of the mechanics. That is to say, the fundamental behaviour of the rules that govern the domain of the business.

A classic example is the seven perpendicular lines. The sketch is funny because it uses mechanics everyone is familiar with: colour and geometry, which makes the all-too-common ignorance of mechanics comically absurd. But the less-funny reality is that this is a well-recognised management problem. Mostly, I think the blame falls on the idea of the universal manager: since management is management regardless of the problem domain, you can apply the same set of management tools to any problem.

In reality, although there is a subset of people-domain management tools that apply universally, the problem-domain management tools are very different between domains. And any attempt to pave over that with bluster or can-do optimism is bound to fail to the extent that the domain is constrained by its mechanics. You may be able to manage your way through a discussion of bike sheds without understanding their mechanics, but not nuclear reactors.

But even on fairly simple mechanics it seems like business types easily fall over. I've seen cases where the mechanics are so simple you could explain them to a child and see more comprehension than you get from the adult running the business. In these cases I believe it goes further than not knowing mechanics, I think it's not wanting to know mechanics, as if they would somehow sully the theoretical purity of universal management.

Luckily, I've found that in software, at least, there's an increasing awareness that you need to know how software works to manage software projects, but I'm sure it's still happening elsewhere. Maybe I've just been lucky not to see it as much, or maybe the universal managers have all transferred their transferable skills to other industries.

Either way, it's an important point to make: mechanics matter, and if you don't understand them you have no business managing a technical team.

The complex unknown

The topic of Tinder's surprise success came up today and I think most people see it as a sign of changing attitudes towards online dating and, indeed, dating in general. While that's true, I think there's something a bit deeper at play there. If it was just online dating becoming more acceptable in general, we would expect parallel increases in the popularity of other dating sites. However, Tinder has been a stand-out success compared to its competition.

My theory for this is that Tinder's simplicity makes it fundamentally better. It doesn't have quizzes, complex profile searching, statistical matching algorithms, human matchmakers, or any of the other bells and whistles that other sites use. In reality, it seems like a picture, a brief bio, mutual attraction and a message box is sufficient. Tinder gets rid of all the extra trappings and just lets you engage with the core value of the service: see people you like, talk to them, meet up.

So could Tinder have just shown up 10 years ago and eaten the other big dating sites with a decade head start? Well, probably not. I think that the extra complexity acts as a kind of buffer against ideas that are too new and confronting to engage with directly. If you're nervous about rejection, or meeting the wrong kind of person, or whatever, having complicated systems to put your faith in is very comforting. At the time those systems were very necessary, but my prediction is that now that online dating is more normal and less scary, we'll see more simplified dating sites take over for good.

I've noticed that many of the hot new disintermediation startups – Uber, AirBnB etc – seem to be following that same path of taking an existing complex thing that has become familiar and shaving that complexity down. I wonder what other still-complex fields might be simplifiable in that same way.

Brain Ball

This is the Brain Ball, a visualisation I've been working on as part of an EEG project. It shows a top-down view of a (fairly abstract) head, representing frequencies of brain activity as different colours. It's actually running on a Raspberry Pi, doing EEG processing in Python, and then sending the data through a web socket to do the visualisation in HTML using Canvas.

What you see in the video is one of the more remarkable (and earliest!) EEG results: closing your eyes causes a large spike in Alpha wave activity (shown in blue), especially near the rear of the brain.

Open services

It seems like the world of open source is becoming increasingly irrelevant to end-users. Developers, of course, still value the ability to access, modify and run source code, but for most modern software that model is insufficient. 15 years ago we could run our own copy of Mozilla, but today it's not possible to run our own Facebook or Twitter. Not just because the source code isn't available, but because it's not even meaningful to speak about "running" Facebook. Facebook is the sum of many pieces of software running in data centres, devices and browsers across the world.

Modern software is expected to sync across multiple devices, to be accessible from other computers, and to interoperate with other software including instances of itself. To embrace those requirements means a huge expansion of what open source means into something new entirely. Forget the specific mechanism of source code, we need to recapture the general goal of anyone being able to contribute to and customise the software in their lives. Software that runs over a network is usually called a service, so I'm thinking of it as open services.

An open service is something that you can take and run for yourself in a way that is meaningful. An open service social network would allow each user to run their own instance if they wanted while still maintaining a connection to and sharing updates with the rest of the network. An open service news site would let you create a custom news feed for friends, family, colleagues or a community without that data being accessible to others, but still provide you with the ability to integrate with other news feeds if you want.

Individual parts of this vision already exist, such as the concept of federated systems, but this goes further. An open service is also something like a federation of code. It should be possible to make and share your modifications to code in the modern networked software world as easily as you share the data itself.


I was reminded today of an observation I made a while ago: any commitment means making sacrifices. It's very easy to say "I'm going to get this project done no matter what", but the reality of what that means is actually pretty extreme. That "no matter what" means you would be willing to sacrifice not only other projects but friends, family, sleep, even happiness entirely to get it done. But when the chips are down most people aren't willing to sacrifice that much, and probably for good reason. Usually we either don't really mean the commitment, or don't consider the sacrifice involved.

This lack of realisation also happens in much less extreme examples. Something like "I'm going to exercise every day so it becomes a habit" is a popular and often abandoned commitment, because people don't really think through the consequences. It's not just exercising, it's sacrificing the ability to not exercise – not exercise when you're tired, not exercise when you're busy, not exercise when you feel sick, not exercise when there's something way better and more fun to be doing. It often breaks down because the commitment you made didn't reflect your actual priorities and, confronted with an actual test, those priorities won.

Given that fragility, I think there's a lot of benefit in being more explicit about priorities and sacrifices. If there's something you want to get done, or some commitment you're considering taking on, better to play it off against the other priorities in your head before you commit to it. Where does it really stand? There's nothing wrong with "I'm going to exercise every day unless I have a work thing on or I'm tired", but it is a different commitment. Maybe a less impressive one, maybe one less likely to form a habit, but also a more honest one.

And, really, a dishonest commitment was only ever going to deceive you up until the point where you had to sacrifice for it anyway.


I had an interesting idea for a modern-style disintermediation business today: Ultratemping. Basically, an instant marketplace for short-term casual workers. The hypothetical scenario is this: you're running a restaurant and you've got a surprise rush because someone important on twitter said nice things about you. You're running a retail store and two of the floor staff call in sick an hour before their shift. You're setting up an event and it's running behind schedule because you don't have enough crew. You need people and you need them right now.

So Ultratemping lets you find people quickly. You put up the job you need done and it gets broadcast to everyone in the area with the requisite kind of experience. All the workers have ratings and recommendations from previous jobs so you can pick whoever seems best. They show up and get to work. Obviously you'd be paying a premium, but in certain situations it'd be worth it, especially if they were people who were well-known for their ability to show up quickly and get stuff done with minimal setup time.

Maybe it would even lead to a place where people are using the app as their main source of income, swooping in to help out a different business each night like some kind of cross between Yojimbo and Gordon Ramsay.

State privacy

I had an interesting idea today. There's been a constant back-and-forth in recent years about the balance of privacy protections in the face of both government and corporate desires for increasing levels of access to peoples' data. Often the problem comes down to, well, exactly how much privacy do you need anyway? The "if you have nothing to hide" argument is a way of saying that you don't really need any privacy, which is transparently false. But conversely it's hard to argue that there is no possible end that could justify an invasion of privacy. If there's some sensible middle ground, where is it?

My idea is this: we already have a notion of state privacy, usually called state secrets or classified information. So let's start there. Given the vast and disproportionate power of a state compared to an individual, you might think that we would have stronger protections in the interests of balance. In fact, the opposite is the case. It's possible (these days, commonplace) for the US government to completely remove information that would violate its privacy from court cases. And outside a courtroom, revealing its private information is punishable by death.

It would be interesting to see what a world would look like where we could label our own secrets as classified information, with accompanying legal protection. A world where we could legally prevent that information from appearing in court cases against us because it would be harmful to our relationships, hurt our business opportunities, or compromise our physical safety. And we could sentence people to death or a lifetime in prison for revealing our secrets without our permission, even if the secret was that we acted immorally or broke the law.

That'd be quite a world to see, though I definitely wouldn't want to live there.

Strace and self-consciousness

Have you ever noticed that thing where you're really in the zone – you're writing, or playing a game or a sport, and you have that magical flow feeling where it all just seems to be happening effortlessly – and then you become aware of that fact and suddenly lose it? Daniel Kahneman talks about the fast and the slow systems of thought, and to me it seems obvious that flow is rooted firmly in the fast system, until it gets bogged down in slow-system meta thoughts.

The best analogy I can think of is the Linux system utility strace, which allows you to inspect what a program is doing by tracing system calls that the program makes to the operating system. It's super useful, but it doesn't come for free. If I do something like copy 100GB of zeroes around:

$ dd if=/dev/zero of=/dev/null bs=1M count=100k
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB) copied, 6.43048 s, 16.7 GB/s

And then I run it with strace:

$ strace -co/dev/null dd if=/dev/zero of=/dev/null bs=1M count=100k
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB) copied, 10.5155 s, 10.2 GB/s

You can see it takes over one and a half times as long! We can run ps to make sure:

$ ps 21278
  PID TTY      STAT   TIME COMMAND
21278 pts/0    t+     0:12 dd if=/dev/zero of=/dev/null bs=1M count=100k

That STAT t+ means it's currently being interrupted for tracing. In fact, it's being interrupted on every single attempt to read and write data, which, since that's all it's doing, is a lot of the time. Obviously this is a pathological case, but if there's anything we know about biology, it's that it tends to be pretty pathological! Strace was designed to be as efficient as possible, whereas the same is definitely not true of our own introspective abilities. Even so, it's usually considered a bad idea to use strace in production.
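The same kind of overhead is easy to demonstrate without strace. Python's sys.settrace installs a trace hook that the interpreter calls for every function call and line executed – a rough in-process analogue of being traced – and even a do-nothing hook slows a tight loop dramatically:

```python
import sys
import time

def busy_loop(n):
    total = 0
    for i in range(n):
        total += i
    return total

def timed(fn, *args):
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

# Time the loop running normally
plain = timed(busy_loop, 200_000)

# Install a do-nothing trace hook: the interpreter now stops to call it
# for every call and every line executed, much like a traced process
# being interrupted on every system call.
def tracer(frame, event, arg):
    return tracer

sys.settrace(tracer)
traced = timed(busy_loop, 200_000)
sys.settrace(None)

print(f"plain: {plain:.4f}s  traced: {traced:.4f}s  ({traced / plain:.1f}x)")
```

On my understanding the slowdown here is typically much worse than strace's, since the hook fires per line rather than per system call, but the principle is the same: observation isn't free.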

And I think we lose a lot when we can't turn off our own introspection for production. A simple exercise in speaking to a group of people becomes an endless cascade of "did I really just say that? Oh god I'm doing this all wrong". A simple game of tennis becomes an exercise in simultaneously trying to move a racket while being aware of every minor rule of good playing form. A simple writing exercise becomes a game of writer-vs-editor where you get bogged down in stylistic questions when you really should be just letting the words flow.

Introspection is important, of course. Judging and evaluating what you do is the only way you get better at it. However, to do your best work at something you have to be completely immersed in it, and that means leaving the introspection until later.

Zombie code

I've been trying to do some stuff with OpenGL lately and, uh, wow. I don't know if it technically qualifies as the worst API I've ever seen - there are some fairly strong contenders in the "enterprise web services" category - but it's definitely got a straight run at the title. I remember it being a bit gross when I used it a few years ago, but if anything it's managed to get even worse. It's layer upon layer of new features melted over a terrifying spongey core of half-arsed backwards compatibility.

Anyway, it got me to thinking about that funny thing that happens with code, where it sticks around long past its use-by date. I'm fairly certain an old mailing list system I wrote in an afternoon in Python nearly a decade ago is still kicking around in production somewhere (sorry). But every time I still manage to convince myself that this time it really is temporary, and someone totally will come along and fix it later. Unfortunately, nobody ever does.

I've wondered sometimes if the right answer is just to always write code as if it's going to last ten thousand years. I don't mean write it so it could cover every eventuality into the far future, that would be awful. But maybe, if you were diligent, you could always write code with a certain essential craftsmanship. Code that you wouldn't be ashamed to see dug up in ten thousand years.

Or at least the next six months.


A screenshot of automata

I started off wanting to make a particle system using shader magic, but I ended up making a Game of Life simulator instead. I'm pretty impressed with all the crazy stuff you can do using shaders and vector math. On a Raspberry Pi GPU I can get pretty close to 60fps at 1920x1080 cells, which is not bad considering how terribly optimised the code is.

The pictured automaton is actually something a bit different from the regular Game of Life. I tweaked the numbers a bit because I wanted something noisier. It kinda has this strange quality where it feels like it's moving in lots of different directions at once. I'll try to make a WebGL version of it sometime soon, because it's very mesmerising.
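For reference, the standard Game of Life rule (before any tweaking) is simple enough to sketch in a few lines of Python; the shader version computes an equivalent update for every cell in parallel, just with different numbers plugged in:

```python
from collections import Counter

def life_step(grid):
    """Advance one Game of Life generation.
    grid is a set of live (x, y) cells on an unbounded plane."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in grid
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation with exactly 3 live neighbours,
    # or with 2 if it was already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in grid)}

# A blinker flips between horizontal and vertical each step:
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))  # {(1, 0), (1, 1), (1, 2)}
```

The "tweaked numbers" amount to changing those birth/survival thresholds, which is where the noisier variants come from.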

Brain Light

This is a project I've been working on with Laura Jade, a friend and local artist. Much like the earlier Brain Ball, it lights up different colours depending on your brain activity as measured by an EEG headset. I had to fairly dramatically depart from the ball, though, because what shows up well on a screen is totally different from what shows up on a projector.

It still shows Alpha, Beta and Theta waves as blue, red, and green, but the colours are all full-saturation so that they show up well on the perspex brain. The flickering patterns are generated by the cellular automata I wrote, so I'm pretty pleased about that. That said, it looked better with a much smaller pixel grid than I expected: I made the GPU rendering powerful enough to drive 1280x1024 pixels, and in the end we only used 32x24.

The sacrifices we make for art!
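For the curious, the band-to-colour mapping might look something like this in Python. The normalisation scheme here is my own guess for illustration, not the project's actual scaling:

```python
def band_colour(alpha, beta, theta):
    """Map relative EEG band powers to a full-saturation RGB colour:
    alpha -> blue, beta -> red, theta -> green, as described above.
    The normalisation is a hypothetical sketch, not the real code."""
    total = (alpha + beta + theta) or 1.0
    r, g, b = beta / total, theta / total, alpha / total
    # Scale so the strongest band is fully saturated
    peak = max(r, g, b) or 1.0
    return tuple(round(255 * c / peak) for c in (r, g, b))

# Eyes closed: a big alpha spike shows up as mostly blue
print(band_colour(alpha=8.0, beta=2.0, theta=2.0))  # (64, 64, 255)
```

Dividing by the strongest band keeps every output at full saturation, which is what makes the colours survive projection onto perspex.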

"They", not "it"

A while back an editor pulled me up on a tech piece I'd written. There were a bunch of sentences like "Google are changing the way they do business", which he'd corrected to "Google is changing the way it does business". He said, "it's weird that you keep doing that. A company is a singular; an 'it', not a 'they'". I wondered where I'd picked up such a strange deviation from standard English, because it definitely didn't feel incorrect.

I was reminded of this recently when I saw an article on the SourceForge fiasco that quoted a source like this: "SourceForge are (sic) abusing the trust that we and our users had put into their service in the past." Obviously I'm not the only one who can't sneak that particular formulation past a wary editor!

On reflection I think the "they" vs "it" question goes deeper than syntax, and speaks to an important point about the nature of corporations, especially in the context of SourceForge-style malfeasance. We use the plural when we're speaking about a group as a collection of individuals, and the singular to refer to the group as a distinct entity deserving a separate identity. This becomes particularly apparent in business, where a company represents the transition from one to the other: the "they" quite literally becomes an "it" by incorporating.

But there are a lot of issues with creating a magical person. If two people get together and poison a river, they committed a crime. But if they incorporate first, it committed a crime. The dangerous power of being able to create a legal whipping boy as a liability shield is part of the reason why there have been consistent efforts to weaken those protections in recent years.

Legality aside, there's no reason for us to adopt the idea of corporate personhood in our thoughts or conversations. Up until the point where Google's self-driving technology starts directing the company, it's not an "it". Trying to discern Google's character, speak to Google's motivations, or appeal to Google's morality is only a path to confused thinking. It doesn't have those things; rather, they are just a bunch of people who do things together. Those actions add up to systematic behaviour, but they don't add up to a new person. Google is a "they", not an "it".

Though I still don't think I'm going to be able to get that in print.

Embrace the suck

I was talking to a friend today about package managers and it reminded me of one of my favourite pieces of software: the Australian e-tax system. It integrates with about ten different awful government backend web services. It's built on some nightmarish HTML-meets-Visual-Basic-form-builder framework. It has, embedded in its hellish code dungeon, all of the various zillion arcane tax rules and things. In many ways, it is the grossest, most awful thing ever created. Yet I like it. Worse still, I respect it.

Paul Graham talks about schlep blindness, the tendency for startup founders to avoid complicated and unpleasant startup ideas, or even unconsciously filter them out before even thinking about them. I think there's a similar thing for code. There are certain kinds of software problems that are absolutely glorious to solve. Problems that, by the judicious application of the right algorithms and clever design, become completely solvable in a way that is elegant and satisfying.

Unfortunately, most problems that actually matter to people are only part elegance. The other part is all of the extra stuff that takes way longer than it should, that involves badly specified problems, arbitrary constraints and meaningless plumbing of data from one place to another. In a word: suck. The holy grail is to live in a world with no suck, but that world is very crowded. So what about going the other way? Embrace the suck and look for the least elegant, most complex, most unsatisfying class of problems. Maybe there's some seriously overlooked value there.

The example in question: I think we'd all be better off with a universal package manager that works everywhere, and can install things in any programming language. But there's no way to get something like that off the ground; people won't give up their existing package managers and trying to get everyone to agree on a new standard will never work.

But one thing that could work is going deep, deep into the valley of suck, and returning with a system that includes every other package manager's special formats and quirks in one horrific bundle. So it talks to npm for Node packages, talks to PyPI for Python packages, talks to the various apt servers, and knows how to manage each individual package format's expectations about how installing works, deals with conflicts between the packages... I mean, ugh, that is one profoundly sucky problem space. However, I don't think there's any other way to make progress there.
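Structurally, that sucky system is the classic adapter pattern: one thin backend per existing package manager, all behind a common interface. A hypothetical sketch (every name and behaviour here is invented for illustration; real backends would talk to PyPI, npm, apt and so on):

```python
from abc import ABC, abstractmethod

class PackageBackend(ABC):
    """One adapter per existing package manager. These are stubs;
    a real backend would query the actual registry."""

    @abstractmethod
    def search(self, name): ...

    @abstractmethod
    def install_command(self, name): ...

class PipBackend(PackageBackend):
    def search(self, name):
        return [f"pypi:{name}"]

    def install_command(self, name):
        return ["pip", "install", name]

class NpmBackend(PackageBackend):
    def search(self, name):
        return [f"npm:{name}"]

    def install_command(self, name):
        return ["npm", "install", name]

class UniversalManager:
    """Fans a query out across every backend and merges the results."""

    def __init__(self, backends):
        self.backends = backends

    def search(self, name):
        return [hit for backend in self.backends for hit in backend.search(name)]

um = UniversalManager([PipBackend(), NpmBackend()])
print(um.search("left-pad"))  # ['pypi:left-pad', 'npm:left-pad']
```

The interface is the easy part, of course; the suck lives in each backend's special cases, which is exactly the point.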

I can't find it now, but I remember seeing a really impressive exchange on the Chrome bug tracker. Someone was complaining about the way Chrome switches to another tab when you close an old one, and they said "this is just going to lead to more and more special cases". The developer replied "that's fine, we're trying to capture people's expectations and those involve a lot of special cases, so we'll just include them all".

What an idea! Just include every special case. My instincts were screaming about how awful that would be, how unbounded the suck. But they did it, and it works great. I don't know how Chrome chooses to switch tabs when I close them, but it matches my expectations, whatever those are.

All work and all play

When I do something, I usually think of it as either work or play. Work is something useful: it creates value, helps people, or brings me closer to my goals. Play is something enjoyable: it lifts my mood, brings me happiness, and helps restore my energy and motivation. In an ideal situation, I can do things that are both work and play, but one thing I've learned is that it's unrealistic to expect that all the time. Even for things you really care about, some parts are all work and no play.

But there's a much more dangerous category of activity, which is no work and no play. These are things that you thought were going to be useful work, but slowly turned into useless work. Or things that were fun, but slowly stop being fun and eventually just become big time sinks. At that point, you're usually doing a no-work-no-play activity out of obligation or habit, often without really considering why you're doing it.

Pruning those activities out is a surprisingly liberating experience.

Open source recruiting

It's pretty interesting that open source projects don't recruit the same way commercial projects do. In a company you usually have particular people responsible for going out and recruiting employees, advertising the availability of positions and so on. The only similar thing I've seen is from companies like Mozilla or Red Hat that do commercial open source, and their recruitment is just for the commercial side. For an open source project, you usually get involved by turning up on your own initiative and submitting some code.

Interestingly, commercial open source companies often take advantage of the benefits of the open source way. An open source contributor already has to show a degree of initiative in showing up and demonstrate ability by sticking around and having their contributions accepted, which is a reliable signal that the person in question would be a good hire.

But I think the open source community would be a lot more robust if it embraced some of the benefits of traditional hiring. Recruiting lets you specifically target certain people and certain roles that are under-served, and advertising for open source contributors would pull in a lot of people who otherwise wouldn't have considered the project, or even open source in general.

I suspect recruiting might appear to project maintainers as one of those "we don't play that game" type things, or perhaps they've just never considered it. Either way, I think adopting some more traditional recruitment strategies could be a serious benefit to open source projects and the movement in general.


Screenshot of Contextable

I put together a neat demo of my "context table" idea. To make more efficient use of letters I abbreviated it to Contextable, which also has the nice side effect of being meaningful beyond being installed under an actual table. A brief recap: the idea is to display relevant results from Wikipedia as you talk.

To solve the problem of which words or phrases are meaningful enough to display, I actually ended up using a pretty nifty technique involving Bloom filters. I built a big filter of the million most popular pages on Wikipedia, cut out some common stop words and used that to determine what's worth loading from the Wikipedia API.

I pick an arbitrary phrase size (4, in this case) and look up the next 4 words of the input. If I don't get a match, I try the next 3 words, and so on. This means I get more specific results: "Super Bowl Sunday" instead of "Super Bowl" and "Sunday". Making that many requests to the Wikipedia API would be far too slow, but the Bloom filter can easily handle lots of lookups quickly.
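Here's a from-scratch sketch of that technique – a minimal Bloom filter plus longest-phrase-first lookup. The sizes, hash counts and sample titles are all made up for illustration, not taken from the actual Contextable code:

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter; size and hash count picked arbitrarily."""

    def __init__(self, size_bits=1 << 20, n_hashes=5):
        self.size = size_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive n_hashes independent bit positions from salted SHA-256 digests
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        # May rarely report false positives, but never false negatives
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def longest_match(words, bloom, max_len=4):
    """Try the longest phrase first, then back off one word at a time."""
    for n in range(min(max_len, len(words)), 0, -1):
        phrase = " ".join(words[:n])
        if phrase in bloom:
            return phrase
    return None

titles = BloomFilter()
for title in ["super bowl sunday", "super bowl", "sunday"]:
    titles.add(title)

print(longest_match("super bowl sunday was great".split(), titles))
# 'super bowl sunday'
```

A false positive just means one wasted API request, so the filter's occasional mistakes are cheap; the win is that millions of page titles fit in a modest amount of memory.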

If you're curious, the source is on GitHub. Happy contexting!

The present

It's kind of definitionally true that you can't change the past, and it really saves a lot of anguish to stop trying. Caring about the past is like caring about the rotation of the earth or the speed of light; it's so far beyond your control as to be meaningless. But there's no equivalent statement about the future. You can change the future, and depending on the situation it can be either easier or harder to change things the further they are in the future. Easier because your efforts can multiply together over time (like saving money), and harder because other factors also multiply together (like predicting weather).

But it seems so strange that there's only one moment when you can actually make a change. Sandwiched between an unimaginably large number of moments in the past and an infinite number of moments in the future, there's only one moment that's the present. And it's so small that it's hard to even define. All of your thinking and planning counts for nothing unless it reflects an action in that single moment. All that pressure concentrated on a single point in time.

And an instant later it's meaningless, and you start over.

Are you sure? 2.0

Screenshot of Are you sure?

Today I'm releasing version 2.0 of Are you sure?, a Chrome extension I wrote to reduce the impact of distracting websites. I originally wrote about it back in February.

Since then I've noticed that the single button click becomes a bit too automatic at times, so I figured I'd tweak it a bit to make it slightly less easy to click through. And my thoughts immediately turned to my previous idea of hold-to-confirm. So I brought the two together and I think the result is better and more polished than before.

I've noticed some weird behaviour with alerts as well, so I'm pretty glad to be done with them. A victory for productivity!

Negative association

Sometimes I remember the wrong thing. I don't just mean an irrelevant thing or an incorrect thing, but literally the exact opposite of the right thing. For example, the order of arguments to ln, or which way to turn the lock on my bathroom door. And in those situations I often do the most mistaken and counterproductive thing to fix the problem: I think "okay, I'll just remember that it's the opposite from the way I expect". As soon as I think that, I'm doomed to a miserable cycle of doing the (correct) opposite, my expectations reversing, doing the (wrong) opposite and then getting hopelessly confused.

I like to think of the brain as being mostly an association machine: a thing happens, another thing happens, and those two things become more strongly associated. This fairly simple mechanic seems able to produce an amazing breadth of capabilities, from the more obvious pattern matching to the vastly less obvious statistical estimation. But there's one particular task this association machine is pathologically bad at: disassociation.

This is the classic problem of trying not to think about elephants: as soon as you're thinking about not thinking about elephants, you're thinking about elephants. There's no mechanism for us to build a disassociation, or break down an existing association. This leads to problems not just with elephants but with all sorts of situations: when you learn bad habits, it's hard to un-learn them; when you break up with someone, it's hard to stop thinking about them; when something traumatic happens, it's hard to forget it.

Worse still, in our attempts to create a disassociation, we instead end up creating the closest equivalent: a negative association. So we can't stop thinking about elephants, but we can say that people who think about elephants are idiots, and thus make thinking about elephants painful by association. But it's important to realise that a negative association is still an association! And building a stronger and stronger negative association only makes the thought more frequent, and the negativity more painful.

As far as I can tell, there is no way to un-make an association. The best we can do is make some other, stronger association override it. Instead of thinking about how much of an idiot you are for thinking about elephants, the better approach is to think about leprechauns.

Right to die

A morbid thought today, while reading about the horrors of the US prison system: if we are going to sentence people to a lifetime of incarceration, shouldn't we at least offer them the choice to die instead? It seems to me that, as deeply uncomfortable as it is, suicide can be a rational action when faced with a sufficiently bad alternative. Suicidal desires being a marker of mental illness isn't so much an indictment of suicide as it is a signal that someone has severely overestimated the badness of their other options.

Reading about the valiant efforts of prison staff to revive and repair the bodies of suicidal inmates so they can continue to live in conditions they would rather die than endure is a particular kind of horror. I remember reading Fahrenheit 451 and being most disturbed not by the book burning, but by a particular scene where the stereotypical housewife's life becomes so meaningless that she decides to kill herself. But the medical technology in the story is so good that paramedics just come by, patch her up, and she wakes up the next morning as if nothing has happened. It's implied that they've done this a number of times.

The right to die is the final relief valve of life. You can at least know that no matter how bad things get, they can't get worse than death. Taking that away is perhaps the most profound violation I can imagine.

Abstract and construct

I've noticed a pattern that seems pretty universal when discovering and exploring systems. First, you attempt to distill the system down into its most minimal and general representation. That is, you abstract it: you can describe a shape as a mesh of triangles. You can describe triangles as lines. You can describe lines as points. You can describe points as numbers. And you can describe numbers as other stuff too. But abstraction is only half the story. After you have created this minimal and general representation, you work backwards from the abstraction to build new concrete things. That is to say, you construct.

One of the most accessible demonstrations of abstract-and-construct is cartooning. You take real people, animals and other objects, distill them down into abstract shapes, and then manipulate and distort those shapes in ways that would be impossible in the real world. You see something surprising and your eyebrows rise. Abstract: eyebrow height = minimum(surprisedness × eyebrow sensitivity, eyebrow limit). Construct: let's set the eyebrow sensitivity and limit really high. And suddenly you get surprised cartoon characters with their eyebrows flying out the top of their heads and smacking into the ceiling. (If you're interested, I recommend Understanding Comics, which I probably lifted that example from).
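The eyebrow abstraction is small enough to write out directly. This is just a sketch using the example's own made-up names, with the limit acting as a cap on the final height:

```python
def eyebrow_height(surprisedness, sensitivity, limit):
    # Abstract: eyebrow height grows with surprise, capped at a limit.
    return min(surprisedness * sensitivity, limit)

# Construct: crank the knobs far past realism for a cartoon character.
realistic = eyebrow_height(surprisedness=0.8, sensitivity=1.0, limit=1.0)   # 0.8
cartoon = eyebrow_height(surprisedness=0.8, sensitivity=50.0, limit=100.0)  # 40.0
```

The abstraction doesn't care that a sensitivity of 50 is physically absurd, which is exactly what makes the construction step fun.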

Perhaps a more rigorous example is the periodic table. Mendeleev was able to abstract the structure of the chemical elements into various repeating patterns, and then use that abstraction to theoretically construct new elements that were only later isolated in the real world. That's not to say that abstract-and-construct always works out usefully. Physics, for example, has generated lots of wacky things like tachyons, perfectly reasonable constructions that have never been observed in the real world (and probably never will be).

There are some truly wonderful things you can do by distilling down to an abstraction and then constructing back out again. But a part of me also wonders: is it truly universal? The cartoon example seems to show us that there is something quite intuitive about the way abstract-and-construct works: we don't have any trouble believing that the high-eyebrowed character is just a very, very surprised person. Could this process just be an artefact of how our brains work?

If that's the case, a different kind of intelligent being might have some other way of finding truth that works as well for them as abstract-and-construct has worked for us. Maybe it would be able to view many things as being very similar to each other without needing to wrap them up in an abstract concept. Maybe if it was powerful enough it could just store all the information it learns and mine it directly for truth. Maybe the answer to the Fermi paradox is that the aliens all think we're too dumb to know how our own eyebrows work.

Of course, the most vexing part is that the only way I can think to explore that question is to abstract and construct. So let's hope that's enough.


I've noticed a lot of tech businesses aiming for the "local" space, meaning they try to create a network or a marketplace on the scale of about a city. Sometimes this is because of something inherently local about the market (dating, classifieds), sometimes as a novelty (Ingress), and sometimes as a strategy to split up a big intractable market into lots of smaller markets (Facebook back when it was per-campus only). But I think it'd be interesting to go further and explore the hyperlocal: networks the size of small suburbs or city blocks.

An easy example is home cooks who can easily make extra food and sell it to people who can't be bothered to cook. It would be wildly inefficient to scale this beyond a very local region; as soon as you involve a delivery van it goes from being a slight extra cost to a huge hassle. But if all your customers are within a 5 minute walk, they could just come by when the food's ready without much logistical gymnastics.

Or imagine if you have a home coffee machine and you don't mind making an extra coffee for people once in a while. There's no way you could feasibly keep up with the demands of running a real coffee business, but servicing the needs of a few people in the local area isn't out of the question. It would also be super expensive to keep a regular coffee shop open for just a few customers, but not so if it's just running out of your house anyway.

I think there are two main things that make hyperlocal a pretty compelling idea: Firstly, it allows for businesses that operate on a super tiny scale. There are lots of tools available today for a business owner who wants to scale up, but not so many for one who wants to stay small. Secondly, it could add back a certain sense of local community that seems to have been inadvertently lost in the rush for bigger cities and better technology. Being in touch with lots of microbusinesses in your local area would be a way to engage with that community.

But maybe the most interesting bit would be that it's a way for more people to experience a taste of running a business, even if it's a very tiny one.

Harsh reality

It's strange that, even if you know you're good at something, even if lots of people tell you you're good at something, you still get a special kick from someone offering you a job. Similarly, you can create a thing purely on the basis of it being good according to your standards, without regard for other peoples' expectations. But it still feels great to see someone you've never met praising that thing. I think this goes beyond simple ego and into something to do with the nature of reality.

One of the best definitions of reality I've heard is that it's something that doesn't change when you change your mind. There are people who believe in relativism: the idea that there is no absolute truth, only the truth as you see it. While that might make for an interesting abstract debate, the point is that when you drop a rock and really believe that gravity will make it go upwards, it will still go downwards. Reality doesn't care what you think.

None of this would be necessary or even meaningful if our minds weren't so damn malleable. It's hard to imagine, for example, having to explain to a computer that there's a difference between things as they really are and things as you wish they were. But, for whatever reason, those seem equivalent to us, and it's easy to get misled. You think you're eating well when you're really eating badly and not thinking about it, or that you're making progress on some work when you've actually been distracted most of the time, and so on. It's actually very difficult to trust your own assessments when they are so easily influenced by what you wish was true.

And I think that's what causes the special kick you get from a good job interview or someone you don't know talking about you. There's no fuzziness there. There's no sense of, well, maybe I'm just making this up because it's what I want to believe. It's just harsh reality: these people have no reason to lie to you, and they're saying it anyway. Hard metrics are the same. Assuming you're rigorous, the numbers don't lie. If they say 5 hours, you did 5 hours. There's no room for fuzziness or self-deception.

This is the reason why I've started to prefer more quantitative self-assessments over vague qualitative ones. There's a feeling you can't get from introspection alone, and that's the feeling that reality agrees with you.

Distraction pad

I stumbled on a neat trick today while I was trying to get some work done. Sometimes it seems like there's an endless stream of distractions when I try to focus. I reduce them as much as I can, but even so there are unavoidable distractions that come from my own thoughts: a task I've been meaning to do, something I've been meaning to look up, someone I've been meaning to get in contact with. And the problem is these things might be important, so I have to think a bit about them before I can go back to focusing.

So I started using a random notebook as a distraction pad. Anything that came into my head that might distract me, I just wrote down. And as soon as I did that, the need to think about it just melted away. The thought was dealt with now, so there was no point in dwelling on it.

Hilariously, the vast majority of things I wrote down are totally pointless. Now that I'm not working, my desire to look up how to mark an email as read from an Android watch has completely evaporated. But even knowing that I probably won't do anything with these stray thoughts, writing them down seems to convince me that they're in hand and don't need further attention.

Thanks, brain. You're a source of constant curiosity!

Finger-drumming search

Finger-drumming search diagram

Here's an interesting idea: a finger-drumming music search engine. Sometimes we know what kind of music we're in the mood for without knowing an exact genre name or artist, but maybe you could tap out a rough rhythm pretty easily. One of the most distinctive elements of most musical styles is their drum beat. A finger-drumming search engine would turn that beat into a fingerprint that you could match by tapping a screen or a microphone to the rhythm you want.

There are already music search engines like Musipedia or Shazam, but none that are designed to specifically pick up on drum beats. Mostly that's because they assume you're looking for a particular piece of music. I think it would be interesting to search through whole genres and even different tempos within a genre.
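As a rough sketch of how the matching might work (everything here, from the fingerprint scheme to the toy library, is my own invention, not any real search engine's): normalise the gaps between taps so the same beat matches at any tempo, then find the nearest known pattern.

```python
def rhythm_fingerprint(tap_times):
    # Tempo-invariant fingerprint: the gaps between taps, each divided
    # by the total duration, so only the relative rhythm remains.
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    total = sum(intervals)
    return [i / total for i in intervals]

def match(tap_times, library):
    # Return the library entry whose fingerprint is closest to the taps.
    query = rhythm_fingerprint(tap_times)

    def distance(name):
        fp = rhythm_fingerprint(library[name])
        if len(fp) != len(query):
            return float("inf")  # different tap counts aren't comparable
        return sum(abs(a - b) for a, b in zip(query, fp))

    return min(library, key=distance)

library = {
    "four-on-the-floor": [0, 0.5, 1.0, 1.5, 2.0],  # even quarter notes
    "shuffle": [0, 0.66, 1.0, 1.66, 2.0],          # long-short swing feel
}
# The same even pattern tapped at double tempo still matches:
match([0, 0.25, 0.5, 0.75, 1.0], library)  # → 'four-on-the-floor'
```

A real version would need much fuzzier matching and a far bigger library, but the tempo-invariant fingerprint is the key trick.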

Plus it'd mean the desk-drumming skills I have acquired over the years can be useful at last.

No takebacks

A calendar full of monday 1st

Sometimes when I don't achieve a goal I feel the urge to try to make up for it by compromising something else. I plan to take the bus to an event, but I didn't leave enough time so I take a taxi instead of being late. Or I've scheduled my current task until 2pm followed by some exercise, but it's 2pm now and the task isn't done, so I skip the exercise to make up for it. The worst is when I plan to do something before bed, and then stay up too late when it takes longer than I expect. I like to think of these as "takebacks", because I'm trying to take back time that I've already given away.

Occasional takebacks can be reasonable enough. After all, being able to adapt to exceptional situations is very useful, and people usually build in flexibility to their plans for that very reason. But the danger is that the flexibility can hide systemic problems that would be obvious in a stricter setting. Maybe you never leave enough time for getting ready or always underestimate your tasks. However, instead of those resulting in obvious consequences like lateness or tasks not getting done, you end up spending too much money and getting fat and tired.

Another manifestation of excessive takebacks is the ever-slipping deadline. The project was meant to be finished today but isn't, so tomorrow you just keep working like crazy to try to hit yesterday's deadline. If it's still not done, you just keep working, all the while convincing yourself that you're still trying to hit last week's deadline. Of course, the deadline is gone; there's no way you can take time from the future and put it back in the past. The right answer is to eat the missed deadline, make a new deadline based on the actual facts, and proceed sensibly from there.

But it's easier said than done. Nobody likes to fail, especially because failures sometimes have serious consequences. Of course, the takeback has consequences too, but future consequences instead of present consequences so they're easier to ignore. All that time has to come from somewhere, though, and unfortunately the problem often cascades: a takeback from yesterday means not enough time today, which means even more takebacks to deal with tomorrow. And what about when a real emergency happens? All that flexibility you planned in is already used up by everyday takebacks.

For me, at least, I've been aiming for no takebacks. It might not always work out, but I think it's a noble goal. It's a hard, humbling thing to just say "I wanted it done by 2pm. It's 2pm, it's not done, and I failed." But in the long run it's better to have learned from a series of real failures than fake successes.


It seems like as you go through life, you gain constraints. Good constraints, for the most part, things like "don't throw food at people" or "you can't put both pant legs on at the same time". I think part of what we consider learning a skill is just internalising the constraints of the domain that skill applies to. Some of those constraints are necessary and some aren't, but it's pretty tough to know which.

Worse still, constraints seem to become part of our identity. You learn not to be too loud at parties, and eventually you start thinking of yourself as being a not-loud-at-parties person. Even positive qualities like listening to people have hidden constraints, like never not listening to people. You usually internalise those constraints to the point of not even thinking about them, but you're still constrained just the same.

There's a Buddhist concept called shoshin, or beginner's mind, based on the idea that the best attitude when studying is that of a beginner, without preconceptions. I think this applies even more strongly to problem solving: many seemingly intractable problems stem from not being willing to abandon certain constraints that you take for granted.

And of course the constraints that you take the most for granted are the ones that make up your identity, the ones that you consider part of who you are. We are very reluctant to let go of them, if we notice them at all, because we feel like they would make us into someone different. But maybe by abandoning one of those constraints you would discover that it wasn't actually necessary, and with it gone your problems become easier to solve.

In that sense you would become someone different, but someone better.

Hangouts Theatre

Hangouts drama masks

One area that I think is tragically underexplored is new kinds of performance that are made possible by the internet. The amazing power of broadcast-era technology all but killed theatre and live shows as entertainment for the masses, but these days we have enough technological power for new kinds of non-broadcast performance.

I'm most excited about the future of the current set of popular livestreaming services (Twitch, Ustream, YouTube, Livestreamer). Right now streaming is still fairly fringe, but it's made a popular niche for itself in video games and political protests. But as we get to a point where people think of streaming as nonchalantly as they do taking photos or videos on a modern smartphone, I think we'll start to see some truly novel forms of performance spring out of that.

I have a moderately interesting idea along those lines, which is to make a theatre show entirely set in Google Hangouts. Hangouts works much like a traditional stage: people enter, speak for some amount of time, and then exit. I think you could create a really compelling experience using the up-close intimacy of videoconferencing. Imagine Hamlet online: the angst, the confrontations, the intrigue - all conducted via video calls!

And the coolest thing is that you could do it all live. You would actually be watching the actors connect and disconnect from the call as it happens. And although it's a fairly timid step into the possibilities of online performance, it would really be something special to create a bit of that live theatre excitement online.

Advertising-driven design

Someone was asking me today why it seems like software is easier to use than it was 10 years ago. There are obvious answers to do with the general improvement of the industry, software becoming more mainstream, and the design leadership of Apple, but I think there's an under-appreciated factor at play too: advertising.

In the traditional packaged-software market, you normally pay for the product once, and then pay some additional amount for upgrades. The problem with that is that it makes user satisfaction a fairly weak incentive. Bad design decisions only have a direct impact on sales in the next release cycle, which could be years away. Meanwhile you have to rely on indirect signals like user feedback, which isn't always useful. Loud voices are vastly overrepresented, and the noise makes it easy to miss feedback on more subtle problems.

However, many companies are now moving towards "cloud" offerings, where access to the software is paid by subscription. Although this still means the incentive is indirect, the feedback loops are much smaller. If you make some bad decision, sales will suffer almost instantly, making it easier to track down the cause. It still has the issue, though, that it's not always easy to figure out why. People will rarely say something like "I'm canceling my subscription because your text boxes are too small"; you only find out that's a problem when it's rolled up with 20 other problems into a more general "I don't feel like your product is easy to use."

Advertising-supported software products, however, are in a whole separate category. Because advertising is usually paid on a per-impression basis, you're incentivised by every single customer interaction with the product. If your text boxes are too small, you will be able to see an impact every time a user interacts with a text box. And because ad services often report on an hourly-or-sooner basis, you'll see it in real time. It makes sense in that context to micro-optimise every tiny interaction and each little detail.

You can see that kind of attention to detail as essential for good design. Steve Jobs was famously obsessed with small details at Apple, to the point of blatant micromanagement. However, the results in terms of usability were spectacular, and far ahead of anyone else at the time. But it seems that micro-optimisation can be driven as much by systemic incentives as by megalomaniacal leaders with an obsessive eye for detail.

Of course, there's no guarantee that the incentives in advertising will necessarily line up with the desires of the user. They often do, inasmuch as a user who isn't happy will stop using the product and therefore seeing the ads. But you can easily spot counter-examples where products with very little competition will run user-hostile ads or make bad design decisions despite the incentives. If you look at the long-term consequences of ignoring users, though, it seems to work out fairly badly for most software companies.

So perhaps ads are, at least in some ways, a force for good in software development. One thing's for certain: if you want the worst offenders in bad design, unresponsiveness to feedback, or just plain user-hostile decisions, you won't find them in ad-supported software, despite its sometimes questionable reputation.

No, the worst of the worst is enterprise software, with the most distantly aligned incentives of all.

Discomfort as a signal

Discomfort is an interesting thing. Unlike pain, which usually indicates something is wrong and needs immediate attention, discomfort is a much milder aversion. It says something like "hey, maybe this isn't such a great idea". I like to think I've gotten good mileage out of discomfort; if I'm doing something and it feels uncomfortable, usually that means I'm doing it the wrong way. Maybe I'm sitting wrong and it's making my back uncomfortable, or I've gotten the wrong idea and it makes my thinking uncomfortable, or I've made some bad design decision and it makes my work uncomfortable. I can use that signal to change or stop what I'm doing.

However, lately I've been reconsidering the value of discomfort as a signal, particularly in the cases where it informs your work and your thought. Discomfort could be a signal that you've made life too hard for yourself and you need to go back, or maybe it's just that the thing you're doing is essentially difficult. It would be easy to slip into a situation where you get used to easy problems and averse to hard, complex ones because you value comfort too highly. The Stoics actually speak about voluntary discomfort: the idea that you should deliberately experience discomfort to avoid getting too complacent. I'm starting to see the benefit in that approach.

It might even be that there is some optimum level of discomfort, where things are exactly as hard as they should be – but no harder. I don't know how exactly you'd find that level, but I'm sure it's not zero.

Stream programming

In my recent dalliances with Brain Things I've finally had a chance to play around with array programming, a kind of programming where instead of just adding numbers, you can add vectors, arrays and even matrices. I was mostly using NumPy with a bit of GLSL at the end, and I have to say it's pretty nice.

I actually had to train myself out of my old ways of thinking; I kept wanting to take my big nested arrays and iterate over them, which led to a lot of ugly nested looping code. Later on when I figured out how to array properly, all of that code just disappeared and left me with disturbingly simple one-line mathematical expressions. Crazier still, that shorter and prettier code was also much, much faster.
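As a small illustration of the kind of transformation I mean (not the actual Brain Things code), here's a nested loop collapsed into a single array expression:

```python
import numpy as np

# A pile of 2D points, and a rotation to apply to every one of them.
points = np.random.rand(10000, 2)
theta = np.pi / 4
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])

# The old way of thinking: iterate over the nested arrays.
rotated_loop = np.array([rot @ p for p in points])

# Arraying properly: one expression, no loop, and much faster.
rotated = points @ rot.T

assert np.allclose(rotated, rotated_loop)
```

The one-liner isn't just shorter: the whole operation happens in optimised native code instead of a Python loop, which is where the speedup comes from.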

Which got me thinking of a feature dear to my heart: streams. Streams are sequences much like arrays or vectors, with the difference that you can't necessarily access the whole sequence at once. They particularly suit communication-heavy applications like databases, web servers and, y'know, just about everything we make these days. That's not a coincidence; modern applications tend to be distributed and communicative by nature because they need to scale up and individual CPU cores aren't getting any faster. Streams are so useful that they're baked into Node in a more fundamental way than in most other languages.

But I think maybe not fundamental enough. I'd like to see an implementation of streams baked into the language itself, so that operations for manipulating streams of numbers are as primitive as operations on numbers themselves. Imagine if you could add streams, multiply streams, filter streams, connect streams together – all with a simple syntax. And it could be more than a syntactical improvement: being able to describe the stream operations in a semantic way might yield some really cool performance improvements, such as doing stream operations in highly-optimised runtime code or even the kernel.
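You can get a taste of this today with Python generators; this is just a sketch of the kind of primitives I mean, not any real language feature:

```python
from itertools import count, islice

# Streams as lazy generators: combine and filter without ever
# holding (or even finishing) the whole sequence.
def add(xs, ys):
    return (x + y for x, y in zip(xs, ys))

def scale(k, xs):
    return (k * x for x in xs)

def lowpass(threshold, xs):
    return (x for x in xs if x <= threshold)

# Two infinite streams, combined and filtered, evaluated on demand.
naturals = count(0)             # 0, 1, 2, 3, ...
evens = scale(2, count(0))      # 0, 2, 4, 6, ...
combined = lowpass(10, add(naturals, evens))
print(list(islice(combined, 4)))  # → [0, 3, 6, 9]
```

With real language-level support you could imagine writing `naturals + evens` instead of `add(naturals, evens)`, and the runtime fusing the whole pipeline into one tight loop.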

Being in JS-land for so long has made me forget how cool it is to have custom syntax for things that deserve it. Array programming is definitely a worthy addition to any language, and I think stream programming could be too.


This post is a day late because I was exhausted last night and just conked out before writing anything. At least, that's the proximate cause. Digging deeper, though, the root cause is that I've been enjoying writing less recently. I've been attempting to take less time per post which, although useful for other reasons, has also made it easy to focus too much on the act of writing itself rather than the reason I'm doing it.

I had a long talk with a friend today who helped me remember why: it's because I like ideas and I want to share them. I've imposed arbitrary restrictions on top of that goal to make it more focused, achievable and manageable, but those should never become the goal itself. So I think the way to prevent this failure next time is to stay focused on the true goal, and keep an eye on my enjoyment when writing, which seems to be a good proxy.

Plus I'm happier that way, which is nice.


There's a funny thing that happens to beginner musicians, singers, runners, and just about everyone doing something physical: they tense up. Runners tense their fists, guitarists their forearms, singers their throats and so on. It's like a reaction to being slightly out of your depth: you overcompensate by trying to add more power the only way you know how. Of course, all that extra power is just your muscles fighting themselves, and an important part of getting better is learning how to stop doing it.

I think there's an analogue in psychological tension too: when we have a difficult thing to do, it's easy to add extra unnecessary difficulty to it and make it seem bigger than it is. I sometimes pick up an old project that I abandoned and feel a sense of guilt that I didn't see it through the first time. Of course, that guilt is nothing but another kind of tension: your thoughts fighting themselves.

Ideally, you want each thing you do to take only as much energy as it actually needs. Anything extra is just inefficient.


It seems to me that there's something unique about performing something, as opposed to practicing it. Normally we think about that distinction in terms of the arts, but I think it applies equally well to other work. Part of what makes code-on-a-whiteboard interviews so famously bad, for example, is that most people are used to writing and rewriting code in private before anyone else gets to see it. That's not to say you can't write code in public, but it's a skill you'd have to develop separately.

I think the distinction between practice and performance involves a kind of bar that you set, saying "until it gets to this point, I'm not making it public". And you behave differently when that line isn't there. It provides you with a kind of safety: if you make a mistake or do something wrong, you can just go back and fix it before it passes the bar. Not having a bar forces you to deal with your mistakes rather than erase them, and it forces you to work differently because you know whatever you do is going to be judged immediately, rather than waiting until you're ready.

Maybe that's not what you want all the time, but it seems very valuable to be able to perform what you know, rather than just practice it.


Sam in Elian Script

Today I happened upon the marvellous Elian Script. It's a kind of pigpen alphabet, where each letter corresponds to a position in a tic-tac-toe grid. The first nine letters are represented with equal-length lines, the next nine with unequal-length lines, and the last nine (well, eight) with unequal-length lines plus a mark of some kind. The rules are so simple that they leave an enormous range of calligraphic expression, and the results can be really striking.

It got me thinking about the idea that there's a certain fundamental relationship between ambiguity and art. What I mean is that art relies on making creative choices that can't exist if those choices are prescribed by necessity. So I wouldn't describe the integers as art, for example; there's really only one possible set of integers. But I wouldn't be totally opposed to more abstract mathematics being described as art, to the extent that there's no particular right answer to a lot of high-level maths.

I've heard it argued that the redundancy in languages serves as a kind of error correction. You can say "he is at the beach", but if you say "he are at the beach" people will look at you confused - but why? It's not like the meaning is unclear. On the other hand, maybe you meant to say "we are at the beach". The redundancy works as a kind of fallback so you can catch problems. Some constructed languages set out to remove that redundancy, at the cost of less error correction.

But that error correction actually provides an interesting source of artistic choice. You can break the rules of grammar and rely on there being enough redundant information to fill in the blanks. A lot of poetry relies on creative breaking of traditional grammatical rules. But even without breaking rules there's a lot of creative choice between valid but equivalent words and structures. You can order words in numerous different ways. There are numerous different ways you can order words. The different ways you can order words are numerous.

And there's maybe no better example of ambiguity and art going together than the delightful phenomenon of QR code art. QR codes are designed with a high degree of error correction (much higher than English) so that they can be easily identified even when captured by bad phone cameras in bad lighting conditions, out of focus, and tilted or rotated in crazy ways. All that ambiguity gives you a lot of space to play in.

Elian Script fascinates me because I'm not used to (western) character systems having that much ambiguity. They're possibly the most strictly symbolic system I can think of. Even calligraphy is fairly limited by having to faithfully represent so many different shapes. By contrast, having such a minimal and distinctive set of actual constraints, Elian Script leaves so much choice and so little direction that you feel obligated to create your own.


Facebook pill

I was doing an exercise today that involved goal factoring, where you attempt to break down a complex motivation into much simpler sub-motivations. For example, you like to read books, but why? Maybe you enjoy the quiet and relaxation of reading, or you just like knowing things, or you enjoy the identity of being someone who reads. Depending on those sub-motivations, you might find better, more targeted ways to achieve them.

The specific question in this case was to goal-factor my time sinks: things I put a lot of time into without much reward. Obviously there are some sub-motivations there that are actually valid, because it's not very common to choose to do something that doesn't appeal to you in any way. So I thought about a few of the beneficial factors of social news sites like Reddit or Hacker News, things like being a constant source of novelty, information and new ideas. But I also realised there's another important effect that you get, not just from social news but from all entertainment: analgesia.

I've noticed that the times when entertainment is most appealing to me are when I'm experiencing some kind of psychological discomfort. If I've been thinking too hard and my brain's tired, boom, Reddit. If I'm trying to avoid thinking about something, TV lets me avoid it for hours. If I'm in a bad mood, if I'm frustrated at some problem, if I'm tired and irritable, the internet is there, waiting with open arms to take me away from my worries by bombarding me with constant novelty.

Viewed in that light, it's easy to see one way that entertainment becomes pathological, much the same way as any other painkiller. It gives you relief from discomfort, but unfortunately doesn't address the discomfort's source; that's still there waiting for you when you're done, giving you every reason to continue avoiding it with more entertainment. Not only that, but over time you can become accustomed to using entertainment to avoid problems, making it difficult to deal with them any other way.

That's not to say that entertainment is necessarily unhealthy, or that analgesia is the only value it has, but it's worth considering that at some point it turns from recreational into pathological. And the real danger is that it feels good the whole time.

Art Teacher

Sketch of Art Teacher

Here's a neat idea I had today: an art teacher app. Basically it runs you through a series of drawing exercises where you take a picture of what you've drawn with your phone, and it analyses the result and gives you feedback. So if your circles aren't circular enough, or your lines aren't straight, it can tell you, and give you a specific score for each exercise that you improve over time.

You could add in-app purchase type stuff for learning to draw specific things or licensed characters. So you can learn to draw Mickey Mouse if you're willing to pay a couple dollars, and Disney gets a cut. Maybe even have some kind of marketplace where people can put up their own exercises or figures, and take a cut that way. Then the app and basic exercises could be free, which would be a nice thing for kids who want to learn to draw.

I quite like the idea of a specific kind of educational software that works even without any other education. The idea that someone could learn with nothing but code to teach them is one of the coolest things I can think of.


Official Seal of Done

Something I've noticed is my tendency to over-focus on getting something done. I've often spent a long time working on something in private with the goal of getting it to the done stage: a point where I'd be happy to put it out in public and let the world have at it. I've previously noted that this leads to long feedback loops and performance anxiety. In addition, waiting until something is done can take an unnecessarily long time, and deprives other people of the opportunity to judge for themselves whether what you're doing is good enough for them.

Those are all problems on the way to done, but the biggest one is on the other side: what next? Sometimes you may have the benefit of a project that you throw over the wall and never have to touch again, but usually getting it done is just the beginning. To get people to care about a project is a whole new task that really has very little to do with how done it is. And what if you're not as oracular as you imagined, and your beautiful done thing actually still needs some work? Are you going to do that work if, to you, it is the essence of completeness?

So I am learning to be happier with projects that aren't done. In that spirit, here is something I have been sitting on for way too long: Robot Party. It is a kind of mash-up of actor-model programming and IRC, with the goal of making a kind of virtual space where people and code can both exist as tangible things. The gory details are in the code, but I'll do a better writeup later.

You're welcome to get in contact if it's something you'd be interested in getting involved with. It's definitely not done, but perhaps it's done enough.

Performance review

BFF Likert scale

A friend recently asked me to fill out an anonymous survey for a CFAR workshop he was doing. It contained all sorts of questions about how effective the person was, whether they responded well to criticism and contrary information, whether they think clearly and help solve problems etc. I found the whole thing quite confronting, but in that good discomfort sort of way. How often do your friends tell you your strengths and weaknesses?

It occurs to me that we recognise this problem – the essential conflict between politeness and useful feedback – in the workplace, and we use performance reviews to overcome it. The particular structure changes from company to company, but the core idea is the same: set aside a specific place and time for feedback to break that social barrier. So why not do the same in personal relationships? I admit it gives me the heebie jeebies a bit, but it could be a pretty productive way to find problems or issues that are obvious to your friends but not to you.

Or maybe you'd find out you have no flaws, which would be okay too.


This is aquire, a little node library I've been working on to do asynchronous node module downloading and importing. Whereas previously you'd have to do this:

$ npm install coffee-script@1.7.0

And then this:

coffee = require('coffee-script')
eval(coffee.compile("(-> console.log 'hello, world!')()"))

With aquire you can do it all in one step:

aquire = require('./aquire')

aquire('coffee-script@1.7.0')
  .then(function(coffee) {
    eval(coffee.compile("(-> console.log 'hello, world!')()"))
  })

It invokes npm, installs the module and all its dependencies for you at run time, and then resolves the promise when everything's ready. Modules are cached in a specific aquire_modules directory so only the first time should be slow.

There are still a few bits to work out, like how to deal with multiple different versions being required simultaneously. I also want to add client-side js support. But for now at least I think it's an interesting take on the various module loading systems.

Catchup app

Catchup app sketch

I was trying to organise coffee with a friend today and it was pretty irritating going back and forth about times. It's a problem that's come up a lot recently and I'm getting dangerously close to the point where I start making something to solve it.

So here's my first sketch for how it might look. Basically you enter emails of the people you want to catch up with. Then you get a calendar widget to choose what times to suggest. You can import your calendar to speed up the process, and if the other invitees are already in the system their calendars are overlayed as well so you can narrow down the times even more.

Then it generates some standard language like "I'm free all day Tuesday, Wed 3-5pm, Thurs 9am-1pm & 3-5pm. You can check out my full calendar here:". When someone clicks the link they can narrow it down to their own availabilities. The point is that it interoperates well by allowing you to keep the discussion happening through email by default, but with the website augmenting the process so it should ideally only take one round trip.

I haven't made any particular steps towards this yet, but give it a few more catchup discussions and I'll probably get cranky enough to do it.

Automatic conspiracy

The All-Seeing Gl-Eye-Der

There's something incredibly compelling about big conspiracy theories. Secret groups of well-connected people running the world like a shadow government, tentacles reaching into every boardroom, every cabinet meeting... that sort of thing. Of course, there's no evidence for any grand-scale conspiracy and, although there are examples of small scale conspiracies here and there, even those are pretty rare. But maybe the issue is that conspiracy-hunters are looking for the wrong thing.

Inherent in the idea of a conspiracy is an agreement between people, something explicit and human-powered. But if the past century has taught us anything, it's the incredible power of systemised and automated processes, with as few people involved as possible. Surely a conspiracy that could be effective on a global scale would have to be very different from what we'd imagine. It wouldn't be run by people, but by processes. A kind of automatic conspiracy.

Let's imagine some nefarious secret organisation wants to keep people stuck docilely wasting time on their computers all day for some Evil Purpose. The conspiracy would set about researching ways to make computers addictive, give them features reminiscent of slot machines, and carefully optimise every facet of their operation to maximise those compulsive qualities. A spooky theory to share with your fellow conspiracy buffs! But of course it is actually happening, and the only part that isn't true is that there is some secret cabal driving the process.

Instead, it's just a series of fairly straightforward incentives. People value free things irrationally highly, so it's a good idea to make your money from advertising or microtransactions instead. Both of those work better the more you get your users to engage with your product. And, of course, you're competing with all the other products to do this most effectively. The modern analytics-driven software world lets you experiment on your users and optimise every decision to maximise engagement. If that engagement happens to look a lot like addiction, well, I guess our users just really like us.

If a group of people were driving this process, we would of course ask them to justify its ethics. But there is no group. If there's a puppetmaster pulling the strings it's none other than our old friend the invisible hand: each person acting individually is acting collectively, as long as the incentives are aligned. And because what we see doesn't look purposeful, we don't question its purpose.

But the systems that we have built give rise to far greater conspiracies than we could dream of hatching with mere humans in a darkened room.

Je te manque

One of the more surprising things about the French language is the verb manquer, which means to miss something. However, it doesn't work like you would expect coming from English. Je te manque doesn't mean "I miss you", it actually means "you miss me"; the verb works backwards! At first blush that seems patently ridiculous: how could that possibly make sense? With an object pronoun, French puts the sentence in subject–object–verb order. The subject is "I", the object is "you", so how could it be that the object is doing something to the subject?

But a brief foray through English emotional verbs reveals that we should not be throwing too many stones. "I miss you" and "I love you", sure, but "you annoy me", "you upset me", "you inspire me", "you amaze me", "you impress me". If you can don your non-native-speaker hat for a second, how do any of those make sense? We get frustrated at someone, and instead of saying "I havefrustratedfeeling you", we turn it around and say "you frustrate me". We force them to have the agency for our emotion.

How do we decide which emotional verbs get the not-my-problem treatment? I assume that reflects our own attitudes towards them. We like to think of ourselves being in the driver's seat for love and hate, but – especially for the negative emotions – we would rather that responsibility lie elsewhere. And imagine if it didn't! We'd have no elegant and snappy way to express an idea like "you offended me", we'd have to settle for "I haveoffendedfeeling you". The emotional nexus of our communications would have to stay centered on our actions and our decisions, with no opportunity to use linguistic tricks to outsource that responsibility to others.

Sounds fairly confronting, but I bet if I was transported to that world, after a while this one wouldn't causemissingfeeling me at all.

Bootleg classroom

I've had the idea kicking around in my head that it'd be pretty fun to make a more informal teaching and skill-sharing environment for adults. It'd be something like a mini-conference environment, where people split up into small groups, but with an unconference-style lack of structure.

At the start of each meeting you'd go around the group and ask each person for a list of things they know about and would be willing to share, which you turn into a big topic list for the event. Everyone groups up by the topics they'd like to be involved with - either learning or teaching. This part would be a bit tricky because you can only be in one group at a time, but if people just kept moving to their next available preference it should converge on a stable solution.

Once your groups are sorted out, everyone splits off and runs their individual sessions. Depending on how much time you have, you could probably run a few of them. Ideally it'd be a pretty fun and informal teaching environment where, over time, everyone learns a bit of whatever they're interested in. If you ran these for long enough, it would probably settle down into a cluster of general meetups as people's skills develop.

I think there's a lot of room for innovation in the way we do community-building, especially in the tech community. Right now there are only a few, fairly unambitious formats like casual drink meetups, talks, and occasional hackathons. Taking existing top-down formats and democratising them seems like a pretty rich vein to tap.

Cell city

Cell city sketch

Been thinking through an idea that a friend put in my head a while back: ambient games. Basically, games where instead of having to pay them constant attention, you just make occasional adjustments and the game happens mostly by itself. The main source of enjoyment is satisfaction in watching your decisions unfold over time, rather than big hits of activity at once.

I've started sketching out some ideas for how I might make such a game, and the above is what I've got so far. Essentially, it works a lot like a cellular automaton version of a city-building game: you have a big grid of cells, which change intensity or colour; points, which cause activity in nearby cells; and zones, which set rules for how activity in one cell causes activity in nearby cells. By changing the arrangement of points and zones, you can create any number of different crazy patterns.
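
To make that a little more concrete, here's a toy sketch in Javascript of how one tick of such a world might work. The specific rule - each cell leaks a fraction of its activity to its neighbours, while a point constantly injects more - is just one made-up possibility, and all the names and numbers are invented for illustration:

```javascript
// A toy interpretation: a grid of cell intensities, a "point" that
// injects activity into its cell, and a single "zone" rule that leaks
// a fraction of each cell's activity to its neighbours every tick.
function step(grid, point, spread) {
  var h = grid.length, w = grid[0].length;
  var next = grid.map(function (row) { return row.slice(); });
  next[point.y][point.x] += 1; // the point excites its cell each tick
  for (var y = 0; y < h; y++) {
    for (var x = 0; x < w; x++) {
      var give = grid[y][x] * spread; // activity leaving this cell
      next[y][x] -= give;
      var around = [[y - 1, x], [y + 1, x], [y, x - 1], [y, x + 1]]
        .filter(function (p) {
          return p[0] >= 0 && p[0] < h && p[1] >= 0 && p[1] < w;
        });
      around.forEach(function (p) {
        next[p[0]][p[1]] += give / around.length;
      });
    }
  }
  return next;
}

// Watch a tiny world evolve: activity ripples out from the centre point.
var world = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
for (var i = 0; i < 10; i++) world = step(world, { x: 1, y: 1 }, 0.5);
```

Even a rule this dumb produces the kind of behaviour I'm after: you nudge the point or the spread rate, then sit back and watch the consequences play out over many ticks.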

But it's not just about making pretty shapes. The cells are constantly moving and changing, and stability is by no means guaranteed. Depending on the rules you use, your world could die off and leave nothing behind, or chaotically spiral into a seething mass of white noise. Getting that balance right over time to make a world that you find pleasant is the ultimate goal of the game.

And of course, it's never truly complete. Just like a garden, there'll always be something that needs doing. But you don't have to do it right now.


Robochess sketch

Yesterday's gaming dalliance reminded me of an old idea I've had kicking around for a while. It's a turn-based strategy game similar to chess in that it's on a grid and you have various pieces that move, but that's pretty much where the similarity ends. It's more similar to programming battle games like Core War, but I really like the name Robochess.

Each piece on the board is actually a robot, only able to make moves according to its programming. That programming is done in a fairly simple visual programming language built into the game. Each instruction costs a certain amount of Instruction, which is a limited resource in the game. Robots can move, add Instruction to themselves or adjacent robots, or mine Instruction from themselves or adjacent robots. There's no attacking; rather, you beat the other robots by either mining all their instructions out, or turning them evil by changing their programming.

And actually everything in the game is a robot. The junk tiles are just robots with no useful instructions, the walls are wallbots that just do nothing, and your main robot is just a robot with a special "do what the player wants" instruction. If that instruction is mined out, you lose. Alternate victory conditions could include being the first to mine a particular victory instruction out of one of the junk piles, or capturing all the instructions available on the map.

I'm sure there'd still be some complexities to solve, but I think the core mechanic of a strategy game with programmable pieces would be pretty fun.

Dogfooding only works if you're a dog

In software development it's considered good practice to find a way to use the software you write. So if you're writing a new web browser, you start using that browser for your daily internet surfing as soon as possible. This is called dogfooding, and it's good because you're much more likely to see (and fix) the flaws in your software when you use it on a regular basis. The name comes from the phrase "eating your own dog food" which, if you think about it, kind of has the counterargument built right in.

Dogs aren't people. Dogs don't even necessarily like things that people like. And if you end up making the world's most delicious dog food as judged by humans, you've obviously gotten your wires crossed re: product design. So it is, too, with software: developers are, in very specific ways, unlike a lot of the population. A software team tends to be concentrated in one area, with one culture, and spend most of their day with powerful computers and fast internet. And when those contextual assumptions break down, you see a lot of rough edges.

Google Docs has pathological difficulty remembering that you use non-US formatted dates, but I imagine that doesn't come up very often because its developers are all in the US. Trying to use most Android apps without a stable internet connection is a complete nightmare that most people developing Android (and Android apps) aren't likely to experience often. I used a location-sharing app today that incorrectly assumed I was in a car because of my speed, presumably because the developers don't catch a lot of trains.

This isn't an invective against dogfooding; I think it's still an important aspect of development. But it's important to recognise its limitations. For testing that your software passes the minimum bar of "a person can use it without having a rage-seizure at how annoying it is", dogfooding works well. And for "does it meet the needs of our development team and people like them", it's also an excellent tool. But most software is not targeted at developers and, even if it is, there are a lot of developers on the other side of the world who might want totally different things.

So if your goal is to make dog food, please don't forget to test it with actual dogs. Also, dog food isn't nutritionally complete for humans. That's not a metaphor. Don't eat dog food.

A Null of Nulls

In some programming languages, not least my personal albatross Javascript, there is a concept of a Null. Null is fascinating because it's kind of an un-value: it's not "yes", not "no", but more like "I disagree with your question". If someone asks "Have you stopped beating your wife yet?", the only correct answer is Null. Buddhism has the concept of Mu, which is a similar philosophical un-value.

But we can't stop at one Null - not at all! Because the problem is, once you have a Null, you can ask questions to which the actual answer is Null. An example: imagine a function, first, which returns the first element in a list, or Null if there is no first element. So if you pass first an empty list, you will get Null. But what if you pass it a list containing exactly one element: a Null?

Not to worry, because Javascript has undefined! So if you ask for the first element of an empty list in Javascript, you get undefined. And if you ask for the first element of a list whose first element is undefined? Then you still get undefined. Oops.
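
You can see the problem in a few lines of Javascript. This first is a sketch of how such a function is typically written, not any particular library's implementation:

```javascript
// A first() that returns the head of a list, or undefined when there isn't one.
function first(list) {
  return list[0];
}

// An empty list and a list whose only element is undefined give
// exactly the same answer, so the caller can't tell "no element"
// apart from "the element was undefined".
console.log(first([]) === first([undefined])); // true
```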

One answer to this is Exceptions, which are the equivalent of when someone asks if you've stopped beating your wife, saying "don't ask me that" and bailing out of the conversation entirely. Probably a sensible response! But this is a fairly brittle approach, because your conversations can change flow unpredictably and you have to be aware of every kind of question that you can't answer.

But I think the most elegant answer of all comes from functional programming. Instead of having Nulls, languages like Haskell have a special Maybe type. Maybe is a way of explicitly saying "I will give you an answer, or a not-an-answer". So a Maybe Boolean is either Just True, Just False, or Nothing. So it's not meaningful to say "answer yes or no: have you stopped beating your wife?", but you could say "Maybe answer yes or no: have you stopped beating your wife?"

Why is this better? Well, unlike Javascript's ugly null and undefined values, you can have a Maybe of a Maybe. The answer to "what is the first element of an empty list?" is Nothing. The answer to "what is the first element of a list which only contains Nothing?" is Just Nothing. If you put that in a list and get the first element, it's Just Just Nothing. And so on. There's an infinite tower of Nulls, each un-answering more than the last.
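
Here's a toy version of that idea in Javascript, just to show the tower in action. The names Just and Nothing come from Haskell, but the implementation is my own illustrative sketch:

```javascript
// A toy Maybe: Nothing is a unique sentinel object, and Just wraps
// any value at all, including Nothing itself, so they never collide.
var Nothing = { maybe: "nothing" };
function Just(value) {
  return { maybe: "just", value: value };
}

// first() in a world with Maybe never returns a bare value.
function first(list) {
  return list.length === 0 ? Nothing : Just(list[0]);
}

// The empty list un-answers with Nothing...
var empty = first([]); // Nothing

// ...but a list containing Nothing answers Just Nothing, and wrapping
// again gives Just Just Nothing, and so on up the tower.
var justNothing = first([Nothing]);
var justJustNothing = first([justNothing]);
```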

I wonder if eastern philosophy has any notion of a meta-Mu?


An interesting idea that came to me today in a conversation: what about a search engine that queries people instead of the web? There are certain kinds of queries that you could answer better with people's opinions than with simple data. Some are matters of opinion, like "what is the best Frank Sinatra album?"; others require a certain level of empathy or human-level understanding to answer, like "I'm feeling down, what should I do?"

When you visit the site, you'd get a people query box, as well as a live feed of current queries. Any query you have an opinion about you can click on to answer, with the caveat that questions expire after some very brief time, on the order of 15 seconds. So any answer you have needs to be brief - it's not a detailed Q&A site like Stack Overflow. When you answer a question, you can see (and rate) other people's answers which changes the order that they appear.

The experience would then be something fairly similar to a traditional search engine: enter the query, hit the button, results appear. The only difference is that instead of your query hitting a big database, it hits the collective opinions of all the other people using the site.


In my post about backwards verbs I noticed that many of our emotional verbs – like "inspire" or "upset" – have the person feeling the emotion as the object, rather than the subject. That is, you're not feeling the emotion about someone, rather someone else is makefeeling you. But today I found a fun opportunity to run that transformation the other way: the word should.

In Cognitive Behavioural Therapy, "should" is considered a pathological word; it doesn't express anything about the actual goals or requirements motivating you to do the thing, rather it is just a judgement, or a way of inducing guilt. Instead of saying "I should work out more", CBT encourages you to turn that into something like "I want to work out more", which is a similar idea without the judgement.

But I think it would also be interesting to do some verb-reversing to it. What if, instead, you should a person? That is, you give them the feeling that there is some obligation that they aren't meeting. For example, telling someone "you should smile more" is actually you shoulding them. With this form of the verb, the person attempting to create the obligation is the subject, rather than the object. It is usually considered pretty rude to should someone.

Shoulding also provides you with an interesting framework for breaking down "I should..." type statements. After you think "I should exercise", you could follow that up with the realisation that "exercise is shoulding me". Or, perhaps more accurately, "I am shoulding myself about exercise". Separately to analysing your actual motivations for the exercise, you could also consider why this particular activity has caused shoulding.

I don't want to should you by saying you shouldn't should, but this new form of the verb has the benefit of separating your motivations from obligations so you can deal with them both individually.

Or, at least, it should.

Why-do list

I've always had a love-hate relationship with to-do lists. The concept is beautifully simple and they have the potential to be very effective, but they have never worked well for me in the general case. I've previously written about to-do blocks, an attempt to modify the to-do list concept to be more visual and better for project-level work. I think there are a lot of other interesting ways to modify to-do lists, and I'd like to start with the why-do list.

A why-do list is essentially the same as a to-do list, except that after each task you write why you want to do it. One failure mode of regular to-do lists is that they get clogged up with tasks you don't really want to do. It's easy to think of all the stuff you have been putting off and make that into a list, as if it will motivate you, but it usually doesn't. However, if you use such a list as a way to focus on and remind yourself of the motivation behind the task, it could be made into something useful.

Obviously there are a lot of things that could be classed as a "why", from "because it benefits my long-term goals" all the way to "because it causes a biochemical reaction in my brain that I perceive as pleasure". My instinct on that subject is to write the why that would be most appealing to you at the moment you'd be deciding whether to do it. The very abstract long-term goals may not be very motivating at the time, so reframing them as short-term feely benefits like "feeling responsible and organised" or "satisfaction at having completed it" would be more likely to work.

I've also been keeping the entries positive, rather than using whys like "so I don't starve to death alone and unloved". Those may be very motivating, but I think they would be just as likely to motivate you not to look at the list. I haven't experimented with doing any kind of conditional rewards like "if I finish this project I'll buy myself an ice cream", but those might work well in conjunction for the more motivationally challenging tasks.

Anyway, that's the why-do list. I think it might be particularly useful as a place to dump those tasks that really give you trouble, or to parlay negative, obligation-laden shouldy motivations into positive ones that help you actually want to do the things on your list.

The steel robot effect

I've noticed that it's easy to unintentionally inflict harm in emotional conversations, especially when you're upset. I'm sure there are a lot of different reasons, but one in particular that stands out is what I like to call the steel robot effect. We always keenly perceive our own vulnerability and our own emotions, but in other people it's easy to assume their actions aren't motivated by that same humanity. So we see ourselves as weak fleshy humans, and others as mighty steel robots.

There are a couple of ways that can cause issues. The first is that if you're upset at a steel robot, you won't feel particularly sensitive to that robot's emotions and are likely to go too far in your reaction. Maybe even deliberately so because you feel like you have to work harder to break through its metallic exterior. The second way is that if a steel robot is critical or angry, you won't think to question or analyse the emotions behind its actions. Presumably if a robot is mad at you it must be motivated entirely by pure robotic logic.

Now put those two failures together: seeing each other as cold and invincible, reacting to actions without considering the hurt emotions underneath, and lashing out disproportionately in return against the shiny metallic visage that seems so unflinching compared to your seething internal state. And, of course, that cycle just keeps building until something gives and you suddenly realise that it wasn't a robot at all, just a child in a robot suit. And you've hurt them badly.

It is something of a guilty relief, at that point, to discover that you're both hurt. To realise that you're just people after all. But it would be a lot easier if we could avoid believing in steel robots in the first place and engage innocently, weak fleshy human to weak fleshy human.

Aeroplane Problems

An 80% solution aeroplane with no wings

In a conversation today one of my favourite topics came up: Aeroplane Problems. I don't mean problems with aeroplanes, but rather problems that are like inventing the aeroplane, where there are lots of different factors that all have to go right before you ever get off the ground.

An Aeroplane Problem is often poorly understood, which makes it difficult to solve because you can't tell exactly what's going wrong. "It's still on the ground" could be caused by any number of failures. If you think you've fixed the failure but nothing's moving, it could be that you haven't actually fixed it, or it could be that you have fixed it and there are still more issues. Worse still, maybe when you fix that next issue this one will un-fix itself again.

Aeroplane Problems are often deceptively simple to explain. "I want it just like it is now, but flying" is an easy thing to say, but actual flight requires mastering complexity far out of proportion to the simplicity of that goal. It can be a very long time between defining an aeroplane problem and even seeing something that resembles success. This can be enormously demotivating.

And to me, the classic (non-aviation) Aeroplane Problem is motivation, or maybe something closer to what Covey called effectiveness. To consistently get good work done requires a lot of different things: meaningful work, good working habits, discipline, enjoyment, and probably more I don't know about. When these things fail and we stop wanting to do work, it's easy to assign labels like laziness or procrastination, but those are just synonyms for "not working". There's no reason to think they're any more meaningful than a bunch of different words for "plane crash". The plane's still crashed, and we still don't know why.

So how do you solve an Aeroplane Problem? Well, the only way I can think of is to break it down until it's a bunch of smaller, more tractable problems. The issue is that doing that requires theory. Now that we (basically) understand aerodynamics, aeroplanes are no longer an Aeroplane Problem. Chemistry got rid of the Aeroplane Problems of alchemy. And maybe someday motivation will have a theory complete enough to solve it too.

In the mean time, there's always random guessing!

Easy mode and hard mode

I've often heard that to really train effectively and improve at something, you need to be working hard. The Deliberate Practice paper, in fact, suggests that you should be working so hard that you can only manage it for a few hours, and maybe considerably less than that. This is a sort of supreme effort, pushing at the absolute frontier of your current capabilities in order to expand them.

By contrast, I've come to believe that really creative, playful and fun work only happens when you are not working too hard. That is, when your current efforts are below your capabilities. When you're giving everything to the inherent difficulty of your current task, there is nothing left over for unnecessary difficulty. Adding frills to your garment, flourishes to your magic trick, a garnish on your meal; these frivolous things aren't possible if you're struggling just to get it done at all.

I think there's a lot of value in that frivolity. Perhaps the most famous example is Feynman's wobbling plates, which he started messing around with for fun and ended up winning a Nobel Prize for. But even if your fun doesn't score you a meeting with the King of Sweden, it can still be valuable. Being able to play in the space you care about strikes me as inherently meaningful and I think the act of creation is itself meaningful in proportion to how creative that creation is.

But you can't get to that place where you can have fun with it if you never practice hard enough to make it easy. And people who are truly great manage to be creative even at very hard things. Is that a contradiction? I don't believe so. Rather, what you consider hard has become easy for them. When they practice, they are doing even harder, uncreative work that you don't get to see.

So it seems to me that easy mode and hard mode, far from being opposites, are in fact complements. The easy mode provides creativity, fun, and new directions for the hard mode, and the hard mode is what allows the easy mode to exist at all.

Erasure multi-coding

There's a really cool kind of error correction called an erasure code, which gives you a super-durable version of a message: as long as you recover at least some minimum number of bits, you can recover the whole message. It doesn't matter if the bits you're missing are all at the end, all at the start, or randomly spread throughout the message, you can still get it back.
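To make that concrete, here's a minimal sketch in Python using single-parity XOR, about the simplest erasure code there is: it can survive the loss of any one piece. (Real systems use things like Reed–Solomon codes, which tolerate many erasures; the function names here are just mine.)

```python
# Single-parity erasure code: split a message into equal-sized chunks,
# add one parity chunk that is the XOR of all of them. Because the XOR
# of all n+1 pieces is zero, any single lost piece can be rebuilt by
# XOR-ing the survivors together.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks):
    """Return the data chunks plus an XOR-parity chunk."""
    return chunks + [reduce(xor_bytes, chunks)]

def recover(pieces, missing_index):
    """Rebuild the piece at missing_index from all the others."""
    survivors = [p for i, p in enumerate(pieces) if i != missing_index]
    return reduce(xor_bytes, survivors)

data = [b"abcd", b"efgh", b"ijkl"]
pieces = encode(data)
pieces[1] = b""                      # pretend piece 1 was erased
pieces[1] = recover(pieces, 1)
assert pieces[1] == b"efgh"
```

It doesn't matter which of the four pieces you lose; the other three always reconstruct it, which is the erasure-code property in miniature.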

I find the idea of language as error-correcting code fascinating, but I don't think anyone is claiming language works anything like an erasure code. Some important words, if you lose them, just mean the sentence is a bust and you have to repeat it. But not all kinds of language are made equal, and there's one kind of speech I think deserves particular recognition for its sophisticated error correction, and that's political speech.

The issue with political speech is that you might be taken out of context and your message corrupted or used against you. In official media this is limited to the occasional out-of-context quote, but blogs can do what they like, and people's memories are even worse. There's every chance someone will pick out a few important words they have an issue with and completely forget the rest of your speech. Which is to say, if you want to make robust political speech you need to use something like erasure coding to avoid being misinterpreted.

So, for example, you wouldn't say "I don't believe that children should be made to work in coal mines" because if someone isn't paying attention to the first half they just get "children should be made to work in coal mines". Much better to say something like "I believe that children shouldn't be made to work in coal mines" - no sequence of removed words there can make a sentence that paints your position poorly. I think this is the reason why you see politicians tread so carefully when it comes to speaking about their own positions on issues.

Though actually what's going on is a little more interesting than straight erasure coding. English is by no means comprehensive enough to always guarantee your message will make it through intact but, if you're careful, it can be comprehensive enough to get some other message through. You can say something like "we've seen a massive redistribution of wealth to the top 1% of earners in the last half-century". If you only hear "redistribution of wealth" you think this person supports socialism. If you only hear "top 1%" it makes you think this person supports Occupy Wall Street. Maybe for the right speech, both of those misinterpretations are good enough.

As far as I know, there's never been any serious research into any kind of erasure multi-code, that could resolve into anything in a set of acceptable messages depending on which parts are erased. Presumably that's because for most engineering purposes you want to recover the actual message, not something else. But for politics, well, saying different things to different people with the same words seems to be the holy grail of communication.

Teaching your inner child

I've recently been thinking about the difference between knowing about doing something, and knowing how to do something. You can read all about badminton from the internet, learn all the rules, join badminton forums, watch badminton games, argue about important badminton-related issues with other fans. Badminton could become your life without you even being able to actually play the game.

This sounds pretty obvious when we talk about sports or other physical pursuits, but we're happy to ignore it when it comes to learning. Consider language education in schools. You are tempted with the delicious bait of learning how to communicate in a new language, but then hit with the switch: grammar. Grammar isn't how to use language, it's about language. All of a sudden you're memorising arbitrary tables of information which, of course, don't help you to actually communicate. I suppose it's no coincidence that immersion learning, widely considered the best method, is built around actual doing.

One of the best books on this is Timothy Gallwey's The Inner Game of Tennis, which describes that exact phenomenon as applied to tennis. Gallwey argues, long before any of the serious work on dual process theory, that you should consider yourself as being made up of a logical reasoning self and an associative, intuitive self. Most people, he argues, teach tennis to the reasoning self, when it's the intuitive self who has to actually play. And isn't that silly?

I'd go a step further and say that most of the time we don't even acknowledge the existence of the associative self. Your reasoning says "I have realised I am eating too much, I will stop doing that", then later there is food in front of you and you eat too much. You are confused. "I knew I didn't want to eat too much, but I ate too much anyway. Maybe I am dumb and broken!" Maybe. But the more likely culprit is trying to use reasoning on your associative self, who simply doesn't understand it. You thought about not eating, you didn't think of how not to eat.

So I think that there is a big gap waiting to be filled along those lines. If the techniques we use for teaching are too rational, too abstract, too about-ish for our associative brain to deal with, then we need new techniques. And I imagine they're going to look totally unlike the way we teach the reasoning self.

Dress rehearsal

I've previously written about the problems of trying to be done before you put something out into the world. But sometimes it can be very difficult to convince yourself to let go of that, especially when you know the thing you're releasing may be imperfect, incomplete, or just not good enough. Today I'd like to share a technique I've found very helpful, which is running dress rehearsals for the release process.

A while back I made a short guitar song for a friend. I'm by no means expert-level at either guitar or singing, but I can make something I wouldn't be offended to have associated with my name – which I figured was good enough. However, even after a couple of hours of practising the song I wasn't at a point where I felt confident to record it and send it. I basically knew what I was doing, but the practice hadn't really come together to make a complete performance.

So I set everything up like it was a performance, and I did a dress rehearsal while recording. Predictably, I wasn't happy with it at first, but after only two more tries I had something that I thought was good enough to send. What I'd done was remove the distinction between "this is a practice run" and "this is a performance run". The two were identical, and I only chose after the fact which it would be.

This might not be possible in every case, of course, but I've found it particularly useful for my (mostly digital) needs. You just act as if you are releasing your work, but with some very minimal safeguard at the end that prevents it from being a real performance.

When the time is right, you just remove the safeguard.

Habits as raw material

I have another technique I've been working on recently, similar in some ways to using dress rehearsal to overcome the practice/performance gap. Rehearsing works by pre-loading the effort required to do something for real, so that when the time comes the actual effort is very minimal. I think there's some real value in generalising that idea in terms of habits, or more specifically the kind of work you do habitually.

When I started writing every day, it was mostly because I figured it would be easier to stick to a more regular schedule. While that's definitely been true, lately I've noticed another interesting effect: because I'm already committed to writing this amount, it now costs me very little to write about something. Previously if I had an idea for something to write, it would seem like a bit of an ordeal. Now, it's something more akin to relief: "Oh great! That's what I can write about today!". It's like I have an existing stock of writing that I have to use for something.

In that sense, I'm starting to think of habits in general as raw material that you can bring to bear on your work. If you can set yourself up in a situation where you have a lot of raw material, it will be much easier to do the things you want. Not only does the process itself get easier with practice, it requires less initiative than doing something for the first time. Of course it's possible to do things without that raw material, but like any resource-constrained activity it's harder and less likely to work.

In a way, it seems like thinking backwards. Instead of setting an important goal that will require a certain set of skills and activities to achieve, you develop skills and activities as habits, and assume that something important will come along later. But in a more realistic sense it's unlikely that you'll find an important goal without spending a lot of time just doing things, and even if you did, you might not have the ability to make it happen.

But when that important thing comes up, it will probably look a lot like the other normal things you've been doing all along. And at that point all those raw materials will really come in handy.

Where-do list

Continuing my previous to-do list variations, including the to-do blocks and why-do list, I have an interesting one I've been using recently that I call a where-do list. The concept is fairly predictable from the name, but I think there are a few features that make it interesting.

Where-do lists are useful for tasks that you tend to need reminding of on a regular basis. Classic examples include washing dishes or casual exercise (like a few push-ups here and there). Trying to stay reminded of these maintenance tasks on a constant basis doesn't work very well and leads to a certain degree of low-level background stress. Instead, you create lists that are attached to particular places, like the entrance to the kitchen or the side of the couch. When you see a where-do list it prompts you to do the stuff on the list at the time you see it.

You could, of course, put all those things on a regular to-do list, but since they're never really done all they do is clog it up. Instead you can clear maintenance tasks both out of your main list and out of your brain, since you don't need to think about them except in the time between the where-do list reminding you of what to do and you doing it.

Copyright 2.0

So it's fairly well agreed that copyright lasts way too long, at least according to anyone who isn't being paid by Disney. The original copyright term was only 14 years, and there's actually a fairly compelling economics paper showing that if you pick a reasonable set of parameters, you end up with 15 as the magic number.

But it's not likely copyright law will change anytime soon. Too many people make too much money from it, and copyright's now a global trade issue; governments don't want to be the only suckers on the world stage with some weird copyright law. But who needs the law anyway? Or, more accurately, who needs new law? We could make our own copyright!

Since you can put basically whatever restrictions you want in a license agreement, you have a kind of delegated copyright power. If you decided that a particular kind of copyright law would be better, and assuming it's a subset of current copyright law, there's nothing to stop you from applying it to your own work.

So maybe some serious copyright people could get together and draft up a Copyright 2.0 with 15-year terms and remixing and whatever else Lawrence Lessig thinks is a good idea. It won't do much about Disney, but I think there are a lot of people out there who would be happy to voluntarily give back the extra powers that copyright law handed out.

Two kinds of perfect

When I first started using Uber, I gave the drivers a lot of 3- and 4-star ratings, but never 5. I think I was just holding out for the best possible experience. After all, 5 stars is the maximum possible stars: a perfect score! So to achieve those lofty heights a driver would have to not merely be good, or even great, but perfect. Driving me from one place to another on its own is not impressive enough to justify 5 stars. What would be? Well, I never found out, because it never happened.

I think of that kind of perfect as perfect-complete. The idea that perfection is measured by how much is done, and true perfection can only be achieved when you do everything. A perfect Uber driver would surely do more than just drive! I assume that's why they carry mints and bottled water now. Once everyone is doing that, maybe they'll give backrubs. Perfect-complete is an unattainable goal, because there's always some way to do more, some bigger scope you could fit your perfect-complete thing into where it will no longer seem complete.

Later on, after realising the absurdity of holding on to my 5th star for a driver who would do my taxes or bear my firstborn, I started rating drivers in a different way. I considered the ideal experience I actually expected from a trip. Surprisingly, mints and bottled water did not appear; I realised that what I wanted was to be taken to my location quickly by someone who was pleasant. That was all. Now most drivers get 5 stars unless they do something wrong.

I think of this second type as perfect-correct. It doesn't require doing everything, but it does require doing things properly. This kind of perfection is actually achievable, and what's more I think it is actually worth achieving. It's possible (and useful!) to define the limits of what you expect, and once that's done it's very satisfying to be able to nail down what perfect would be in some finitely-bounded way.

It's far too easy to set perfect-complete expectations that require skills, abilities, and resources far beyond what we have at the time. In the scrabble to achieve these perfect-complete things we can end up sacrificing a lot, including the perfect-correct quality of actually doing what was needed. The dream of that first kind of perfect can only be a disappointment, whereas the second is a path to taking pride in the craft of what you do.

The anthropic principle of problems

One of my favourite arguments is the anthropic principle, which is basically that you can't say too much about how likely life is or isn't to exist, because we're that life. No matter how unlikely it is, assuming it happened somewhere, that life would be standing around the same as us thinking "how strange that we happen to exist despite all those odds!" If it didn't happen, well, nobody would be around to notice. What I like about it is that it injects an extra source of information into the argument: the existence of the arguer. It has this in common with Descartes' famous "I think therefore I am".

There's another anthropic principle I've been thinking about, which is a kind of anthropic principle of problems. Problems that are easy to solve go away, problems that are hard to solve stick around. It's easy to think something like "how strange that we humans happen to have this set of systematic biases that, among other things, prevent us from noticing and correcting those exact same biases!" But, of course, that's exactly the anthropic principle at work. We presumably have other sorts of biases that are already corrected, and we don't even think about them.

And, of course, it's also easy to think "how strange that I have some particular set of problems that are difficult to solve". But there's nothing strange about it; you also have problems that are not difficult to solve, you just don't think of them as problems. Probably, you don't think of them at all.

Transactional shell

Here's a neat idea I had today: a transactional system shell. Once or twice I've accidentally nuked some files I wished I'd kept around, or overwritten something by accident. On desktop OSes the way that's solved is by making destructive operations actually non-destructive in various ways, like a special trash folder or OS X's version history system. But when you're using commands on a shell, there's no such protection available.

One option I've started using is changing the default to confirm before deleting or overwriting files, but it's a bit annoying. Worse still, you train yourself into the habit of just hitting yes every time, which means sooner or later you'll confirm something by accident too. Instead, you could use an overlay filesystem: a small filesystem containing just the changes you've made. When you start a transaction, all changes from that point happen in the overlay filesystem, and when you're ready you can commit them for good.
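A real implementation would lean on something like Linux's overlayfs, but the commit/rollback idea can be sketched in a few lines of Python. This is just an illustration under my own made-up names: writes land in a scratch "overlay" directory, and nothing touches the real tree until you commit.

```python
# Toy transaction for file writes: stage everything in a scratch
# directory, review the staged changes, then commit (copy over) or
# roll back (discard). Not a real overlay filesystem - reads still go
# to the original tree, and deletes aren't modelled.
import os
import shutil
import tempfile

class Transaction:
    def __init__(self, root):
        self.root = root
        self.overlay = tempfile.mkdtemp(prefix="txn-")

    def write(self, relpath, data):
        """All writes land in the overlay, not the real tree."""
        path = os.path.join(self.overlay, relpath)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(data)

    def changes(self):
        """List what this transaction would modify, for review."""
        out = []
        for dirpath, _, files in os.walk(self.overlay):
            for name in files:
                full = os.path.join(dirpath, name)
                out.append(os.path.relpath(full, self.overlay))
        return sorted(out)

    def commit(self):
        """Apply the staged changes onto the real tree."""
        for rel in self.changes():
            dst = os.path.join(self.root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(os.path.join(self.overlay, rel), dst)
        shutil.rmtree(self.overlay)

    def rollback(self):
        """Discard the overlay; the real tree is untouched."""
        shutil.rmtree(self.overlay)
```

The nice part is the `changes()` step: you get to see exactly what a command or script would have done before any of it becomes real.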

The nice thing about this is that, on top of protecting you from yourself, it would also provide a safer environment for running other people's shell scripts. If you run a script or a command in a transaction, you could confirm what changes it's made afterwards and decide whether it's done what you expect. Even if it has, it's nice to know exactly what changes have happened.

All the other software in my life seems to have gone to a safe-with-undo-by-default system, so it'd be great to see shells get there too.

Strong statements, meaningful statements

A strong, meaningful statement

An idea I picked up somewhere, I think on Less Wrong, but I can't find a reference now, is that your theories should divide the space of concepts as evenly as possible. That is, there should be as many things that would confirm your theory as disprove it. You can picture a big cake of things that might be true. To get the maximum information, you want to cut that cake in a way that lets you throw away the most of it with each cut. I like to call a statement like this meaningful.

A meaningful statement is something like "pigeons are blue" (or equivalently "pigeons aren't blue"), because it divides the space of pigeons into blue and not-blue. A meaningless statement would be something like "everything is made of energy". What would a not-made-of-energy thing look like? If that's not a conceivable thing that could exist, then your concept space isn't being divided at all. You can imagine the cut in the cake going right down the side, missing the cake entirely. No matter what the outcome, it can't give you any new information!

A meaningless statement

Another way I like to think about statements is their strength. A strong statement is one with a clear test of its truth. An example of a strong statement is something like "I will go to the shops by 2pm", a weaker one would be "I will go to the shops today", weaker still would be "I will go to the shops sometime", and the weakest of all would be "I will try to go to the shops". You can see that, much like the meaningless statement, a weak statement also divides less of the concept space, but the reason is that it has a chunk of the concept space that it neither accepts nor rejects. The cake is still being cut, but with a big chunk left in the middle.

A weak statement

So if you put these two qualities together you get strong, meaningful statements: statements that have a clear test of truth, and that divide a lot of things into either truth or falsehood when that test is applied. I recently learned an example of such a statement, which I'm told is related to Gendlin's Focusing technique. You say "my life is awesome and everything is going great". Although it's an emotional statement, it's strong because words like "awesome", "everything" and "great" are pretty definitive, and it's meaningful because there is a clear set of things in your life that could be either great or not great.

When most people make a statement like that, they feel uncomfortable, because it's not true that their life is awesome and everything is going great. So the next step is to keep adding exceptions to it: "everything is going great, except I wish I had a better job, and I wish I was in better shape, and I don't get enough holidays". Each additional statement is making additional cuts into the cake, effectively doing a binary search through the space to quickly converge on the exact set of things that are standing between you and an awesome life.

That's not to say I think every statement needs to be strong, or even meaningful, but I believe that strong and meaningful statements are the best way to quickly cut a path through concept-space. And if that turns out not to be the case, at least I'll have learned a lot by finding out.

Care fight

I've noticed that a lot of people-related problems basically come down to who is willing to put the most effort or investment into the situation. Although there is often an official power structure, or theoretical selection criteria, decisions seem in practice to be made in line with whoever tries the hardest. Any objections are powered by a fixed quantity of caring, and eventually people run out and just go along with whatever. I like to call these situations care fights.

One of the best pieces of advice I've heard is never to go into a meeting without knowing what you want out of it. That's not to say I always follow the advice, but whenever I have I've gotten great results. If you go in thinking "I guess we're having a meeting about something", that's a classic losing strategy in a care fight, and there's a high chance you'll get railroaded into whatever gets decided. Although you might make a better decision if you had five minutes to go away and figure out what you care about, by that point the decision is already made unless someone cares enough to argue back.

Sales is another classic care fight. Salespeople drive you towards a close by caring a lot more than you do about it, and a sales situation without a plan is even worse than a meeting. If you aren't sure what you want when you go in, guaranteed you'll end up wanting what the salesperson wants you to want, or at least having to muster an enormous amount of caring to get back in control of the situation.

The ultimate extreme of the care fight is consensus building, a process by which a group is required to come to a unanimous (or near-unanimous) agreement to act. As long as there are even a small number of people who disagree, their issues are expected to be heard and worked with. While apparently some groups function well with this system, you can imagine that their decisions are heavily biased towards whoever cares the most, whether or not their ideas are the best.

On the other hand, a care fight may not be such a bad thing. In the abstract I think it's good that someone who cares a lot about something should be more likely to get what they want than someone who doesn't care much. I think the trick is to be very militant about your level of caring. Something like what colour shirts to order, absolutely, lose that care fight. But something like a car sale or a business meeting can be very dangerous not to care about, because the people who do care the most are unlikely to care about the same things you do.

Living asynchronously

One thing that's always been difficult for me is keeping track of things that don't need to be done now. Some activities require a bunch of steps to be taken one after the other; I like to think of those as synchronous activities. Synchronous activities are pretty easy, because you just do the next thing until you run out of things to do (or get interrupted by something else). Reading, gardening, and even programming (sometimes) are all examples of synchronous activities.

By contrast, some activities require you to take a step, wait for something to happen, and then take another step. For example, doing laundry. First you collect all your clothes and put them in the machine, then wait for the machine to finish. You hang up the washed clothes outside, then wait for them to dry. Then you take them inside and put them away. There are two all-important "then wait" steps in the middle there, where you stop, go do something else, and then hopefully remember to resume the process later on. I think of these as asynchronous activities.

Unfortunately, asynchronous activities are harder to manage, and they seem to pop up all the time. Washing is my favourite example, but organising something via email is another one that gets me a lot. Programming can also be very frustrating when it's asynchronous. Occasionally I have to work on a project where the build process takes more than a couple seconds, and it's significantly more difficult that way. That extra time between writing the code and seeing if it works often gets filled with something distracting, and it's easy to end up pretty far off track.

I think reducing the asynchronicity of your activities is a good idea, especially in programming, but sometimes it's not really feasible. Washing and drying are always going to take some time, at least until all our clothes are made out of graphene or whatever. In the mean time, the most valuable trick I have found is to take the responsibility for remembering out of your head. Instead of trying hard to remember when the washing machine is done, set an alarm for however long it takes. For software build processes you can do even better, and set them up to make a chime when they finish.
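For builds, that chime takes only a few lines. Here's a minimal Python sketch of the idea; the command shown in the comment is just a placeholder for whatever your build actually is.

```python
# Run a slow command, then ring the terminal bell when it finishes,
# so you can safely forget about it instead of polling it mentally.
import subprocess
import sys

def run_and_chime(cmd):
    """Run cmd to completion, then chime. Returns its exit code."""
    result = subprocess.run(cmd)
    sys.stdout.write("\a")   # terminal bell; swap in any notifier you like
    sys.stdout.flush()
    return result.returncode

# Example: run_and_chime(["make", "build"])
```

The same pattern works for anything with a "then wait" step: delegate the remembering to the machine, and let the interrupt come to you.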

If you can offload that responsibility onto machines, it makes you much less likely to lose track. But it also means you're free to forget about whatever you're waiting for until the reminder you've set up interrupts you. The alternative is constantly trying to remember all the things you might be meant to be doing, and I wouldn't wish that on anyone.

Semi-encrypted email

One thing that I think is a real shame is that encrypted email has never really taken off. Part of that is the abysmal state of UI in encrypted email, to the point where even fairly serious power users and developers have trouble using encryption. Part is to do with the big webmail companies not supporting encryption. But I think a significant factor even if the other two are solved is that encrypted messaging in general is the worst kind of network effect problem.

I say worst because most network effects just mean your thing is only valuable in proportion to how many other people are using it, but encryption manages to be even worse because most of the existing strategies for mitigating a network effect don't work. For example, one thing you can do is provide an easy onboarding process: people who don't use Facebook much still get their Facebook messages as emails. Encryption makes that kind of backwards-compatibility fundamentally impossible. You can't make encrypted email work for people who use unencrypted email; that's the whole point of encryption!

But maybe there's something interesting you can do if you're willing to relax your definition of encryption a little bit. Along with each unencrypted email, you could send an encrypted copy of its contents as an attachment. That on its own isn't very useful, except that the encrypted copy doesn't actually have to be a copy; you can change it to anything you like. If you receive such an attachment and it's different from the unencrypted contents of the email, you display it instead. So by default nothing changes, but there's an extra space for adding hidden messages.
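To show the shape of the scheme, here's a minimal sketch using Python's standard email library. Everything here is illustrative: the "encryption" is a toy XOR stand-in (emphatically not real cryptography), and the function names are mine, not any existing protocol's.

```python
# Sketch of "semi-encrypted" mail: a normal plaintext body plus an
# attachment holding an encrypted message. By default the attachment
# mirrors the body; if it decrypts to something different, an upgraded
# client displays that instead.
from email.message import EmailMessage

KEY = 0x2A  # toy XOR key standing in for real key material

def toy_encrypt(text):
    return bytes(b ^ KEY for b in text.encode())

def toy_decrypt(blob):
    return bytes(b ^ KEY for b in blob).decode()

def build_message(public_body, hidden_body=None):
    msg = EmailMessage()
    msg["Subject"] = "Hello"
    msg.set_content(public_body)
    # Attach the encrypted copy - identical to the public body unless
    # the sender supplies a different hidden message.
    hidden = hidden_body if hidden_body is not None else public_body
    msg.add_attachment(toy_encrypt(hidden),
                       maintype="application", subtype="octet-stream",
                       filename="message.enc")
    return msg

def effective_body(msg):
    """What an upgraded client shows: the decrypted attachment if it
    differs from the public body, otherwise the public body."""
    public = msg.get_body(preferencelist=("plain",)).get_content().strip()
    for part in msg.iter_attachments():
        hidden = toy_decrypt(part.get_payload(decode=True))
        if hidden != public:
            return hidden
    return public
```

A legacy client just sees the plaintext body and one more opaque attachment; an upgraded client quietly prefers the hidden message when one exists.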

I call this idea semi-encrypted email, and I think it has a number of pretty neat qualities. Firstly, if a lot of messages have encrypted attachments it reduces the suspiciousness of any individual encrypted message. You still have the ability to send regular email to regular people, and they'll just see yet another useless attachment. However, there's a nice backwards-compatible upgrade path to layer encryption on top.

I think there would be some encryption and key distribution-related hurdles to jump, but I think those are fairly tractable. At this point the main thing holding encryption back isn't technology, it's user design, and that's where I think the new frontier of crypto innovation is.


An interesting way to think about decisions and how to evaluate them is to consider input and output separately. Inputs are all the decisions and things that you control. Outputs are the consequences, or the goals that you measure but don't necessarily control directly. It seems like in certain situations you can come to very different conclusions depending on whether you're measuring input or output.

For example, imagine a startup you're working on doesn't get any traction, runs out of money, and fails. Okay, so you must have done something wrong if you failed. But that scenario only describes the output half of the situation. Perhaps the startup failed because you made mistakes, or perhaps you did the right things but the startup concept itself was just bad. Or, and few startup people seem willing to admit this, maybe you just got unlucky. I think in this case measuring your own actions – the input – is more useful.

On the other hand, imagine you have a friend who is going to drive home drunk, and you want to stop them. Maybe you talk, ask, plead, convince, argue, and so on, but nothing works. "Well", you think, "I have tried everything" and give up. This is you measuring the input: you've done what you could, and even if that doesn't get the result you wanted, that's good enough. Then someone else just hides your friend's keys. You hadn't thought of that, and it's pretty hard to justify your input-centric view when you didn't achieve the goal and someone else did with the same resources.

So what happened there? I'd like to believe that you can always just focus on inputs, because they're the things you control. Trying to maintain control over the things you don't control sounds like a recipe for constant cognitive dissonance and unhappiness. But the problem is that, like in the drunk friend scenario, you don't always have a complete understanding of the input space. Unless you are certain you have a complete understanding of what decisions you could make and how to evaluate them, you can't rely exclusively on measuring input.

I think the right way to incorporate both elements is to avoid evaluating outputs, but rather measure your inputs, and use your outputs to evaluate your measurements. That way if your understanding of the inputs is incomplete, it will show up in your outputs, but you still won't end up judging your actions based on your outputs directly.

The third dimension

I think right now battery technology, or, I guess, energy storage in general, is the most significant arm of technological development. So many other important technologies depend on batteries, and would benefit so much from significant breakthroughs in battery technology.

Mostly, I think of batteries along two dimensions: battery life, and battery size. Obviously those are kind of related, in the sense that you can make a longer life battery if you can figure out how to make batteries smaller and just use more of them. Unfortunately, progress in those areas seems to have stagnated over the past few years.

But, having started using a phone with the new Quick Charge system, I've begun to realise that charging time actually makes up a fairly significant third dimension. If you can charge a device quickly enough that recharging stops being inconvenient, then battery life stops mattering as much. At an extreme, you can imagine a battery that charges instantly, at which point you could just touch your phone to a charging point for a second if you ever notice it getting low.

It seems like this is one of those situations like CPU development where physical limitations are stopping us from making much direct progress, and future gains have to come from making clever end-runs around the problem. Those end-runs come from figuring out some other dimension like recharge speed that can still be optimised, even if it isn't the thing you set out to improve.

Stuck in the middle

When setting goals it seems like the temptation is to go long-term and general ("what do I want my life to be?" "where will I be in 10 years time?" "what do I want to do before I die?"), or short-term and specific ("I want to quit smoking", "I want to get this project finished", "I want to get in shape"). But I think there are issues with both those approaches.

The abstract goals suffer because whatever you decide in the long term is hard to translate into consequences now. And your specific goals suffer because often the short-term turns out to be longer than you expect. In effect, I think both "do immediately", and "do eventually" are off the mark. Instead, I prefer to think "what would I like to be in the middle of doing?"

The reason is that being in the middle is actually where you will be the vast majority of the time. You'll only be in a position to know if you've satisfied a 10-year goal after 10 years, and in the mean time there are a lot of unknowns. Conversely, most of the immediate goals are actually part of a pattern of actions over a longer time. So instead of "getting in shape", it may be more useful to think "if I want to be the kind of person who is in shape, what would I need to be in the middle of doing often?"

I think these kinds of questions encourage thinking about your life as trying to converge on a continuously optimal situation, rather than a series of discrete "win" points which, although they feel good, are fleeting and lose their flavour quite quickly.


It's easy to think of constraints as limiting; by definition, they place limitations on what you can do. Especially if you're working creatively, it seems obvious that the more freedom and flexibility you have, the more room you have to be creative. However, I've found the opposite to be the case, and in fact reducing my available options and the number of ways I can be creative has been a very useful creativity-promoting technique.

I think the reason for this is that your brain works very effectively as an optimisation engine. However, each additional dimension, each degree of freedom it has to optimise severely increases the difficulty of that optimisation. It's the reason you don't learn piano by starting with improvisation: the optimisation-space is too large. If you can cut it down to a manageable size, you get much better creative output.

I've found this helpful not just in terms of reducing the complexity of the creative problem, but in reducing unrelated problems. For example, I think if you want to be creative in your work, you should be uncreative in how you work. Trying to compose a sonata on a honky-tonk piano that you only play at your mate Steve's house on weekends is adding unnecessary difficulty, even if it seems more artsy and creative. A more stable environment makes it easier to be creative within that environment.

So constraints, in the sense of things that constrain the problem space, are more than just a creative trick. In fact, I consider them an essential tool for solving problems.


I had a neat idea today for a way to teach the mechanics of programming. It's in the spirit of Zed Shaw's "learn the hard way" philosophy which says, in short, an often neglected part of teaching programming is teaching the mechanics of writing code, rather than just the theory of programming. In practice, that tends to look like a lot of typing.

So I think it'd be great to make something that focuses on a common mechanical technique – say, abstracting two identical statements into a function, or wrapping something in a loop, or renaming a variable – and could generate an infinite number of variations of those problems, and check the solutions for correctness. A standard session with this tool would look like running through fifty demo exercises in quick succession.
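As a sketch of what one of those drills might look like, here's a toy generator for a single mechanical technique: renaming a variable. The names and exercise format here are entirely made up for illustration; nothing about them comes from Zed Shaw's actual material.

```python
import random

# Hypothetical pool of variable names for the drill.
NAMES = ["count", "total", "index", "value", "result"]

def make_exercise(rng):
    # Pick two distinct names and build a small snippet plus its
    # expected refactoring (the same snippet with the variable renamed).
    old, new = rng.sample(NAMES, 2)
    code = f"{old} = 0\nfor i in range(10):\n    {old} = {old} + i"
    prompt = f"Rename the variable '{old}' to '{new}'."
    answer = code.replace(old, new)
    return prompt, code, answer

def check(answer, attempt):
    # An attempt is correct if it matches the expected result exactly.
    return attempt.strip() == answer.strip()
```

A real version would want a proper parser rather than string replacement, but even this is enough to churn out endless variations and grade them instantly.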

Although this might seem like it would only be useful for new programmers, I think there would be some benefit even for experienced programmers. Much the same as even advanced musicians still play scales, being able to do the basic mechanical techniques more quickly and mindlessly could help save a lot of time and energy in total.

Superlative number market

I was reminded today of an idea I've wanted to implement for ages: a superlative number marketplace. I've often wondered if there's a good way to solve the very pressing question of which is the biggest superlative number. Is it a squillion? A zillion? A bajillion? Until now, we've had no way to find the answer. However, I believe with the appropriate combination of the wisdom of crowds and free market economics, this problem like so many others will fall.

The way it works is this: you start with some amount of superlative currency. Say, a squillion dollars. You can then place buy and sell orders on a big market to trade away your puny squillions for the much more valuable gajillions or hojillions. Or vice versa, if that's your thing. Over time, these trades would settle on an approximate collective valuation of the various fictitious numbers.

There are still some things to work out. Mainly, I'm not sure how value enters the system as people sign up. Do new players automatically get a fictitious unit of their choice? That seems like it would violate the no-inherent-value doctrine. On the other hand, the absolute values aren't really important; what matters is the exchange rates revealed by trading preferences. I don't know enough to really predict how different sources of value would distort the market.

Regardless, I think it's important that these minor problems be resolved in short order, so we can get to the important business of determining which is the most superlative superlative number. We do this thing not because it is easy, but because it is hard.

Been-doing list

Another in my series of to-do list variations: the been-doing list. One thing I think to-do lists are particularly bad at is ongoing or maintenance goals. "Go do this thing" makes for a great to-do item, but "keep doing this thing forever" just sounds unsatisfying. However, ongoing goals can be very useful! It's just a matter of finding the right way to frame them.

With the been-doing list, you instead write down habits or ongoing goals you want to achieve, and keep a record of how long you've been doing them. I think putting a check next to the habit each day you keep it up is a great way to do it, but I've also heard of people using physical calendars and marking off days. Either way, the point is that over time you build up a record of how well you've stuck to the habit.

With a regular to-do list, the main motivation is to clear the list, to work subtractively. While that makes sense for tasks, I think the best thing for habits is the opposite: to feel like you're building something over the course of many small steps.


This weekend I've been playing Factorio, a kind of base building game where you start off gathering resources and making structures, but slowly automate more and more until eventually you're like a kind of logistics gardener, just walking around and tending your production pipelines.

The great thing about the game is that the level of abstraction scales up as your abilities grow. You start off learning the game at a very low level, but as you learn how the basics work it quickly becomes unnecessary to use them. Instead, you can move on to more advanced skills that abstract away the things you already know. It gives a wonderful sense of satisfaction later on looking at the immense amount of work you could have done from scratch, but didn't.

I think this is an important quality in other fields too, not least of which is software development. Good software is designed by building large abstractions on top of smaller ones. And yet we often do a woeful job of passing on those abstractions to others in a meaningful way. Despite how great it is to learn from the bottom up, we usually teach from the top down. Or, often, just teach the top and hope they figure the rest out.

It's no coincidence that games are surprisingly far ahead in how they educate and motivate people. Unlike in the real world where you are often compelled to stick around, entertainment has to earn your attention every moment, or you'll simply do something else. It's amazing that in an environment that unfavourable, many games can still approach deep concepts and complex skills. More than anything else, I think it's a testament to how much we enjoy learning and exploring when it's done on our terms.

Motivation-driven development

One thing I've been noticing recently is that certain kinds of work get much harder as it gets later in the day. Mostly these are things that require a lot of thinking, but also things that are not particularly enjoyable at the time despite being beneficial in the long run. These kinds of tasks seem to require a particular level of motivation that it's easy to run out of.

For some reason, however, it's very tempting to leave those things until later. It's not even necessarily that they're unpleasant; it kind of feels like you need to work up to them, like it would be too much of a jump to go from nothing to immediately working on a big complex problem or something you aren't expecting to enjoy that much.

However, I'm beginning to think that feeling has no actual basis. When I do the most challenging stuff first I almost invariably feel better about everything else after it. I know that the most difficult thing is already in hand, so the rest starts to seem much more manageable and predictable. What's more, as I start getting tired, it's great to have the easiest and simplest things left to do because I know I have less motivation available.

In software it seems common to crown systems for prioritising work as "X-driven development", for values of X like test or behaviour. I like to think of this as motivation-driven development. However, I'm not arguing for doing the most motivating work early, but rather the work that requires the most motivation. I think that in many situations it's the scarcest – and most important – resource you have.

Build your own database

One unfortunate habit I see people get into is only using one kind of database. Once you get familiar enough with a particular family – say, document databases, or traditional relational databases – it's easy to try to shape all your future problems into that paradigm. Often that ends up meaning one single database whose limits you constantly push, when you should really use a different one that suits the domain better.

But maybe there can be a single answer, if it's an answer that lets you adapt the database to the situation. What if you had a build-your-own-database? Instead of providing you with a high-level preworked answer to the various tradeoffs, it would give you a lower-level set of primitives that you could combine to let you fit the database more naturally to the problem domain.

I'm not entirely sure what those primitives would be – probably some combination of indexes, hash joins, triggered events, and maybe a few options for low-level storage – but I think it'd be an interesting project to figure it out. Databases are too useful and powerful to stick with just one.

Assisted time lapse

I was out for a walk today and I started thinking about how my neighbourhood has changed in the time I've been living here. Not drastically or anything, just that slow suburban development along with an increase in people. But it made me wish I had one of those great time-lapse videos that people make. Problem is, who's going to set up a static tripod for half a decade?

But maybe we could make assisted time lapse software, much like the recent generation of assisted panorama software. Instead of having to carefully line up the shot for each photo, you just move your camera around and a display shows you how close you are to replicating the previous shot. Once you've lined it up closely enough, the picture is taken automatically.
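A minimal sketch of the scoring part, assuming grayscale frames of the same size represented as nested lists; the threshold value is an arbitrary guess:

```python
def alignment_score(reference, live):
    # Mean absolute pixel difference between the stored reference frame
    # and the current viewfinder frame. Lower means better aligned.
    pixels = sum(len(row) for row in reference)
    diff = sum(
        abs(a - b)
        for ref_row, live_row in zip(reference, live)
        for a, b in zip(ref_row, live_row)
    )
    return diff / pixels

def close_enough(reference, live, threshold=2.0):
    # The camera app would fire the shutter automatically at this point.
    return alignment_score(reference, live) <= threshold
```

Real software would also need to tolerate lighting changes and small rotations, which is where it gets interesting.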

I know there are a lot of amazing personal transformation time-lapses that get made which could probably benefit from it too. I think it's really powerful being able to compress time like that, so instead of minutes or hours we can think on the scope of months or years. It'd be nice to have something to make that process a bit easier.


I re-watched Inception today for the first time since it came out. I think it's aged particularly well. I don't mean in terms of the visual effects or anything, more that the themes in some movies get kind of played out or dated by the particular era they come from. The Matrix's grungy 90s futurism seems kind of naive when we're actually living in the future and, if anything, it's not grungy enough.

Anyway, it seems as good a time as any to mention E-X-T-E-N-S-I-O-N, a Chrome extension which adds an inception button after every mention of the word inception on a webpage. I actually made it back in 2011 but, like its namesake, I think it's aged pretty well.

Brain Sounds

Related to my previous work on brain visualisation and brain sculptures, I've been working on brain, uh... audialization? Whatever the word is, I'm trying to find interesting ways of representing the EEG data with sound.

This one's a fairly direct mapping from EEG frequencies to audio frequencies using a pentatonic scale. 32 synthesizers are playing sine waves with a volume in proportion to the relative power of each corresponding EEG frequency. I had a few other variants using different kinds of mappings (plain linear sounded discordant and weird, exponential sounded relatively normal but creepy), but I think this one is the most promising because it's the least confronting.
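The mapping itself is simple enough to sketch. This is my own reconstruction of the idea, not the actual code: each of the 32 EEG bands gets a note from a (here, minor) pentatonic scale, and its synth volume is that band's share of the total power.

```python
# Minor pentatonic intervals in semitones, repeated across octaves.
PENTATONIC = [0, 3, 5, 7, 10]

def pentatonic_freq(step, base=110.0):
    # The nth note of the scale, in Hz, counting up from a base pitch.
    octave, degree = divmod(step, len(PENTATONIC))
    semitones = 12 * octave + PENTATONIC[degree]
    return base * 2 ** (semitones / 12)

def synth_settings(band_powers):
    # One (frequency, volume) pair per EEG band, with volume set by
    # the band's relative power.
    total = sum(band_powers) or 1.0
    return [(pentatonic_freq(i), p / total)
            for i, p in enumerate(band_powers)]
```

Each pair would then drive one sine-wave synthesizer.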

I think there are some other interesting areas to investigate by using fewer and more sophisticated measures, and sending those as inputs to a more general music generator function. That'd be less directly related to the EEG signal, but probably leave more room for interesting artistic license.

Path of least resistance

Georg Ohm: Resistance

It seems to me that the most fragile part of any work is just before you get started. Once you get into a groove it's easy to keep going, but before you start you have a lot of options and no particular investment in any of them. At that point it always seems like there are other, more tempting things to pursue.

In fact, I think one of the main differences between work and play is that play is easy to get into – you just start playing. Sometimes play can be just as hard as work, but games almost always have a really easy entry point: you just click "continue game". Every time you're done with one thing, the game presents you with a new thing immediately afterwards. I'd call this the quality of having low resistance.

But there's no reason your work can't have low resistance, at least some of the time. You can specifically aim to have a clear "next thing" at each point, especially when you're stepping away from a project or know you'll be finishing your current task soon. The goal is to never be at a point where there's something easier to do than the work you want to be doing.

And it probably doesn't hurt to increase the resistance of things you want to do less either.

Lost in translation

An interesting problem I've noticed in many software businesses is that you find this understanding barrier between software people and non-software people. A non-software person just can't understand what a software person is doing – at least, not without becoming a software person themselves. So they have to rely on indirect understanding: either measuring that person's eventual output, or being willing to trust that they are doing the things they say they are.

Of course, you can just hire a software person to tell you whether the other software person is doing what you expect. But then you've just moved the barrier. Now you need to trust the new person. I think it's for this reason, among a few others, that software businesses are generally considered to be best run by software people. That way the management doesn't have that kind of understanding barrier.

But software hasn't necessarily earned special unique snowflake status on this. I realised later on that, in fact, I have the same understanding barrier with sales. I know the general idea of sales: you convince people to buy things. I can tell if it's going well (people are buying things) or badly (people aren't buying things), but beyond those obvious measurements I wouldn't have the skills to evaluate whether a head of sales was doing a good job.

I believe this is a general problem with work in any kind of expertise-driven field; either you need the expertise to evaluate it, or you have to trust someone else who does. It's concerning how many instances of this problem the average company must have and, of course, the obvious resulting failures like embezzlement. But I also think it's worth considering how many companies you might never hear of because they try to swing for the fences in two expertise-driven fields, say, software and architecture, and run into the understanding barriers of both.

So is the answer just to find trustworthy people to act as your understanding barrier sherpas for every field you might need? Maybe, but that strikes me as an inherently fragile answer. Even if you can reliably and scalably find trustworthy sherpas, they still have to translate things into concepts that are meaningful to you, which creates limits in what you can understand and what you can contribute back.

If that's important, then I think it's better to just learn the things. Become a software person, become a sales person, become an architecture person; at least enough that you can understand and evaluate the work of others. If you aren't able to do that, you'll just end up interacting with them superficially, like a tourist who doesn't speak the language.

Bounded society

A friend pointed me to an interesting concept a while back called bounded rationality. In short, it's a way of asking how optimally something with a limited capacity can make decisions. While a supercomputer could run complex calculations or simulations to determine the best course of action, our brains have a much smaller computational ability. Instead, we rely on various tricks and shortcuts that, although not strictly optimal, may be the best we'll get under the circumstances.

I've heard fairly often that democracy is a similar kind of best-we'll-get system: "the worst form of government, except for all the others", as usually attributed to Churchill. Although more autocratic systems can be more effective and powerful in the short term, they always seem to go bad after a generation or two. Or to put it another way, the best case of an autocracy can be better, but the worst case is much worse.

I've also heard it said about capitalism as a social system. We could certainly do some pretty amazing things if everything wasn't so focused on strict exchanges of value. At least, in the best case. Much like with political systems, though, the worst case of capitalism seems positively inviting by comparison to the alternatives. To me, what capitalism does best is turn a whole lot of individual selfish actions into a fairly workable system. It doesn't require a lot of trust.

From these examples and others, it seems evident to me that the way we design societal structures is a kind of bounded rationality writ large. Perhaps in an ideal world we could have a perfectly rational society that ensures the best possible outcome for each member, but our own limitations make that impossible. Instead, we have to try to make the best society we can on the back of those limitations.

That said, it's also worth considering that our limitations aren't permanent. As our technology and our culture develops, some of those assumptions will be invalidated and those old infeasible systems may stop seeming so infeasible after all.

At the rate our technology is moving, maybe it's already happening.

Universe tuning

I've been prototyping some stuff for the ambient cellular automata game, and it never ceases to amaze me how much like alchemy messing with cellular automata is. Too much CREATE_THRESHOLD and everything explodes in brilliant white. Too much DESTROY_THRESHOLD and it all disappears.

It reminds me a lot of the precarious nature of our own universe, where the various physical constants would prevent us from existing at all if they were only a little bit different. After messing around for a while, I got the above parameters to make something fairly lifelike and aesthetically appealing. It certainly wasn't easy, though!

From a game design perspective, I think a big challenge is going to be minimising the amount of arbitrary fiddling while still allowing for a lot of flexibility in changing the automata behaviour and eventual results. I can now fairly reliably explain why gods are so capricious in all our mythology: they're irritated that they spent so long tuning physical constants and the universe still doesn't work like they want.
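For a feel of how touchy these parameters are, here's a stripped-down version of the kind of rule I mean. The threshold constants are stand-ins, not the ones from my prototype: cells are born when their neighbourhood is active enough, and die when it can't sustain them.

```python
# Stand-in values; the real prototype's constants differ.
CREATE_THRESHOLD = 2.5
DESTROY_THRESHOLD = 1.5

def step(grid):
    # One generation on a wrapping grid of cell activations.
    h, w = len(grid), len(grid[0])
    nxt = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbours = sum(
                grid[(y + dy) % h][(x + dx) % w]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            )
            if neighbours >= CREATE_THRESHOLD:
                nxt[y][x] = 1.0         # born: enough active neighbours
            elif neighbours <= DESTROY_THRESHOLD:
                nxt[y][x] = 0.0         # dies: not enough support
            else:
                nxt[y][x] = grid[y][x]  # in between: unchanged
    return nxt
```

The directions may be inverted relative to my actual rules, but the fragility is the same: nudge either constant too far and the whole grid tips into all-on or all-off.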

Also, if you're the cellular automata I just made and you're sentient and coming to this page for answers, I'm really sorry.

Divide-by-zero days

A calendar showing NaN

It's pretty convenient that brains seem a lot more tolerant of nonsense than computers. Specifically, it's fortunate that, however we're wired, "this statement is false" doesn't send us into paradoxical paroxysm. It's not clear that an artificial intelligence would necessarily fall foul of that problem either, but there are a lot of ways in which the inherent rigour of computers seems to make things more difficult than the fuzzy logic and forgiveness of our fleshy processing units.

On the other hand, I think there are some interesting parallels between the failure modes of formal systems and the failure modes of our informal equivalents. In (non-exotic) mathematics, division by zero is undefined. On a computer, division by zero will variously return infinity, throw an exception, or produce a NaN. Luckily we are not affected by such trivial problems! Well, maybe...

A wonderful reddit comment from a while back defined the "zero day", a day where you make no progress towards your goals. The author insists that zero days are the absolute number-one thing to avoid, and my experience agrees: they are uniquely demoralising. So why is this? Obviously to some extent it just feels bad to not achieve goals, but why does it feel so much worse to get nothing done than to get nearly nothing done?

My theory is that we extrapolate out from our present conditions when we think about the future. So our information about the future depends on our information about the present. The less we're doing in the present, the less information we have. If we work all day we have a fairly robust estimate of how much work we can achieve in a week. If we work for an hour that estimate is less reliable.

But if we don't do any work at all, we have no basis for a prediction. Our estimate is undefined: a divide-by-zero.


the git-issue octopus

I've been thinking recently about issue tracking for my various projects. No doubt a lot of it is going to happen on GitHub because that's where most open source stuff happens, but I feel like pinning my projects exclusively to GitHub is a bad idea. On a small scale, having a little TODO file or something would be fine, but what about on a larger scale?

What I'm thinking about is making a special "issues" branch, disconnected from the rest of the tree. It has one file per issue in some kind of structured format, probably JSON. That file contains data about the issue – a description, tags, etc. Comments are commit messages with optional changes, with one separate thread of commits per issue file. All the threads are octopus merged at the end and that forms the HEAD of the issues branch.
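As a sketch, creating one of those issue files might look like this; the field names are just my guesses at what the format would need, not a settled spec:

```python
import json

def new_issue(title, description, tags):
    # Serialise one issue to the JSON that would live in its file
    # on the issues branch.
    return json.dumps({
        "title": title,
        "description": description,
        "tags": tags,
        "status": "open",
    }, indent=2, sort_keys=True)
```

Sorted keys and consistent indentation matter here, so that edits to an issue produce clean, reviewable diffs in its commit thread.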

Then I could make a two-way gateway that converts GitHub issues into this format and back, and a couple of command line and web tools to use it natively. Issue nirvana!


I made this today while experimenting with cellular automata parameters, and I found it uniquely beautiful. After a while the patterns stabilise and start to look like a kind of constantly-moving circuit diagram. I call it Chip. The source is on GitHub if you're that way inclined.

Do you say "made" or "discovered" with cellular automata?


Some recent images from my ongoing Brain Activities:

Watching a nice video of a beach

My brain on palm trees

Listening to classical music

My brain on Erik Satie

Listening to The Prodigy

My brain on The Prodigy

Looking at Reddit's /r/gore

My brain on disgust

The two measurements are fractal dimension and sample entropy. I'm told they're both kinds of nonlinear analysis, though I confess the definition of nonlinear somewhat escaped me. Speaking of non-linear, the algorithm I'm using for calculating sample entropy is ridiculously slow. I think it's at least O(n²), maybe worse. There's apparently a faster version using magical k-d trees, but it's not very well described anywhere and the maths is a bit over my head.

However, maybe I can just ditch sample entropy entirely for another entropy measurement. I've recently learned that you can get a very robust entropy measurement by just zipping your data and taking the ratio of the compressed to uncompressed size.
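Here's a minimal sketch of that trick using zlib; the exact numbers are machine-dependent, but repetitive data should always come out far lower than random data:

```python
import os
import zlib

def compression_entropy(data):
    # Ratio of compressed size to uncompressed size. Values near 1
    # mean the data is close to incompressible, i.e. high entropy.
    if not data:
        return 0.0
    return len(zlib.compress(data)) / len(data)

low = compression_entropy(b"ab" * 1000)       # highly repetitive
high = compression_entropy(os.urandom(2000))  # pseudo-random
```

Applied to EEG samples, the idea would be that a more "surprising" signal compresses worse, giving a rough entropy score without any of the sample entropy machinery.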


I didn't write anything last night because I went straight from non-stop brain stuff to a party. I can't think of a way that I would have made those decisions differently under the circumstances, so I think not writing that evening was perhaps inevitable, or at least inevitable given priorities I'm otherwise happy with.

One thing worth considering is that if I'd had a pre-prepared emergency post, I could perhaps have just dropped it in. I'm not sure if that defeats the point, but it's something I might try for next time.


I know a lot of people who would be quite keen on some kind of brain preservation. There's been some interesting progress in physically preserving brains in a way that, presumably, preserves their function until we have the technology to reanimate them. Though, of course, it's fairly unlikely that your recently-thawed brain would be transplanted into a new body directly. Instead, we'd likely have some way to store and transfer the information represented in that brain.

And for many others that digital transfer, mind uploading and so on, is the real point. If you could upload your brain into a computer, it would never be possible to really die. Of course, your body could die, but your mind could live on, either in a new body or, perhaps, simulated directly on a computer itself. There are some fairly significant ethical mazes to navigate, but there's no arguing that there's something fairly compelling about immortality.

But what if only part of your memories could be reconstructed? Or if the version of you that lived on was imperfectly duplicated – had some slight personality changes or quirks not present in the original? We seem fairly comfortable with assigning a continuous identity to people who experience personality changes or lose memories from strokes. Obviously it would be ideal if the replication was completely accurate, but something is, perhaps, better than nothing.

The craziest thing I've heard along this vein is the idea that once you accept imperfect replication, you might not need mind uploading at all. Maybe I die with no brain scan, no cryogenic process, and a direct copy of my mind is impossible. However, you have thousands of hours of video of me talking, thinking, interacting with people. Could you reconstitute a mind by working backwards from that material? What if it's not video but third parties' memories of me? Or my writing? Perhaps this post itself could someday be used to remake its creator – or, at least, someone pretty similar.

The uncertain question, of course, is how much that reconstituted person would be you. Or if it's not you, would that still be worthwhile? Our notion of identity is very limited, and seems unlikely to hold up in the face of the serious complexity the future will bring. For me, I think I'd be happy to know that someone who thinks like I think is out there, who remembers some of the things I remember or shares similar ideas. Whether that person is me or not may be beside the point.

Once you get that far, it starts to seem like the whole mind uploading thing might not even be necessary. If what I want is for my mind to live on beyond the lifetime of my body, and I'm willing to accept that it may happen imperfectly or piecemeal, I can start doing that today. Each time I share an idea, I'm imperfectly transferring that part of my mind. If the person receiving that idea likes it enough to share it with others, the process will repeat.

And if that idea lives on, jumping mind to mind through the generations, maybe that is a kind of immortality.

Bayesian democracy

I was reading today about liquid democracy, which is a kind of improvement on direct democracy. Most of what makes direct democracy difficult, aside from the logistical difficulties of holding so many votes, is just that most people don't need to have an opinion on most issues. In liquid democracy you can just delegate your vote to someone else when you want. That can be a friend or a community leader all the way up to a traditional-style professional politician.

But of course you still have the problem of knowing who to delegate to. In some cases you can rely on there being a public figure whose opinion you trust. However, for many issues that process could be just as onerous as figuring out which way to vote. Some of the liquid democracy systems also seem to suggest that you could delegate certain topic areas, e.g. an environment delegate and an economy delegate. Though it's not clear exactly who would decide how things fall into those categories, or what would happen if an issue falls into both.

I think there's a more elegant solution to this: use Bayesian inference. Every vote is optional, and by default you're assumed to vote with the majority on any given issue. However, you can at any time change that default vote, and that updates the prediction of your future votes. If you vote in a pattern that is similar to a bloc of other voters, your default votes will become more similar to theirs. Essentially, it's the same algorithm you might use for any recommendation engine; it just recommends a vote.
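To make that concrete, here's a toy stand-in for the inference step. It isn't the full Bayesian treatment, just a similarity-weighted vote: each other voter's ballot on the new issue is weighted by how often they've agreed with you in the past.

```python
def agreement(a, b):
    # Fraction of past issues where two voters chose the same way,
    # over the issues both actually voted on.
    shared = [issue for issue in a if issue in b]
    if not shared:
        return 0.5  # no shared history: uninformative
    return sum(a[i] == b[i] for i in shared) / len(shared)

def default_vote(voter, others):
    # Each other voter's ballot counts for or against, weighted by
    # their historical agreement with this voter.
    score = 0.0
    for history, ballot in others:
        weight = agreement(voter, history)
        score += weight if ballot else -weight
    return score > 0
```

A fuller version would track uncertainty too, so it could flag issues where your predicted vote is basically a coin flip and ask you to vote directly.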

I call this idea Bayesian democracy. I think it could be a pretty interesting generalisation of liquid democracy, allowing you to delegate your vote, not to another person, but rather to a statistical model of your own preferences.


I've been spending more time working on prototypes recently, as in code that you write to figure something out or explore an idea, rather than having a particular end goal. It seems like it's very easy to get fixated on trying to achieve something, but there's also a lot that you can get out of not achieving anything and just sort of seeing where the code takes you. That tends to happen at the start of any new project anyway, but it feels even better without the vague sense that I should stop screwing around and get things started.

A recent prototype I made is promserver, a minimal Promise-based webserver. The goal is to make something with very few features that just does the minimal work necessary to bridge between some code and the web. Here's what a hello world server looks like:

promserver 8000, (req) -> "hello, world!"

And you can return an object if you want headers and things:

promserver 8000, (req) ->
  status: 418
  body: "I'm a little teapot!"

The fields are modeled after the fetch API, and anything in a Promise is automatically unwrapped so you can do fun things like this:

promserver 8000, (req) -> fetch ""
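The core of that unwrapping behaviour is simple enough to sketch. This is an illustrative version in plain JavaScript rather than the Coffeescript above, not promserver's actual source:

```javascript
// Sketch of a promserver-style response normaliser: whatever the handler
// returns -- a string, a response-like object, or a Promise of either --
// gets resolved and normalised to { status, body }.
function normaliseResponse(result) {
  return Promise.resolve(result).then((value) => {
    if (typeof value === "string") {
      return { status: 200, body: value };
    }
    // Assume anything else is response-like; fill in defaults.
    return { status: value.status || 200, body: value.body || "" };
  });
}
```

Because `Promise.resolve` accepts plain values as well as promises, the same code path handles `"hello, world!"`, the teapot object, and the result of a `fetch` without any special cases.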

I think it's pretty nifty for a couple hours messing around. I'm not sure it'll go anywhere, but I'm also not sure it needs to. Chip also came out of some similar experimentation earlier. I think this prototyping thing has been a big win so far; the consequences of experimenting like this seem disproportionately good.


I got partway through writing my post last night, but it turned out to be significantly longer than I expected. This has happened a couple of times, and normally I just eventually cut my losses and start over with something shorter, but I ran out of time to do that.

I usually have some idea of what will turn out to be a long post rather than a short one, so I think the right way to prevent this in future is to give those some consideration in advance, and maybe set aside time specifically for the ones that will be longer. I have a few half-finished longer posts from earlier, so I think it would be helpful for resurrecting those as well.

Names vs IDs

There are a lot of things out there, and to distinguish them from each other it's necessary to have some kind of reference that identifies them. I'd like to make an argument that there are really only two ways to do that: Names and IDs. I believe they are distinctly different, even opposite, and that attempts to mix them in forms like usernames or logins result in name-like IDs that are inelegant, ineffective and user-hostile.

An ID is something that uniquely identifies a resource. It is designed primarily for the use of machines and consequently it is not necessary that it be human-readable or meaningful in any way. A name, by contrast, is a representation of how humans identify things. A thing can have more than one name, a name can refer to different things depending on context, and the right name for something depends on the person.

I first encountered this dichotomy ages back on Freenet, where there were two main kinds of identifiers: Content-Hashed Keys (CHKs) and Keyword-Signed Keys (KSKs). The former are defined by their content, and are thus a perfect kind of ID: that content will always have that ID, and that ID can only ever be assigned to that content. The latter, on the other hand, used keys derived from a simple word like "gpl.txt". That was more like a name, but unfortunately still had some ID-like semantics. A KSK was expected to map to exactly one resource, even though anyone could define one. The predictable thing happened and someone eventually replaced "gpl.txt" with goatse.

Web addresses are another example of names vs IDs. The DNS system maps human-meaningful domain names to machine-meaningful IP addresses. An IP address is fairly ID-like, though not totally unique (the same site can usually be reached through multiple IPs, though there are interesting ideas to change this). However, what really lets down the internet is the structure of domain names.

Domain names, much like Freenet's KSKs, are an attempt to define a human-readable but still unique name-like ID. The result is something that doesn't function well as either. To prevent the GPL-to-goatse problem, we maintain at great expense a global database of domain-to-IP mappings and treat them as purchasable property. The result? Valve, the multi-billion-dollar videogame company and creators of Steam, the biggest online game store, owns neither of the domain names you'd expect. In fact, at the time of writing, neither site has anything on it. What a total failure at representing name-like semantics.

On the other hand, Googling "Steam" or "Valve" will give you the right result. Not only that, but if you're a plumber and you spend all day searching for plumber things, you'll get personalised results that are more likely to suit your plumbery interests. The result is that Google is a better name system than DNS, and user behaviour reflects it. It's very common now to type the name of a website into Google to find it, with sometimes hilarious results.

I believe this is part of the reason for the runaway success of Google. Search, as it's usually defined, is a process of finding information. Queries like "best Beatles album" or "Kim Kardashian's baby" are examples of finding information. But we also use search as a lookup, the same way you would have once looked up a name in a phone book, or a book in a library index. What set Google apart was that it was fast enough, and simple enough, to function not just as a search engine, but as a universal name lookup service.

Other systems can learn a lot from search engines in their name handling. For example, on Facebook you don't really use usernames to refer to people. Instead, you have a user ID that's just a big meaningless string of numbers, and your real-world name. When you want to look someone up, you either have a link to their profile (using their ID) or, more commonly, you just type their name in the search box and they appear. Those names aren't unique, and they're not universal – they change depending on the context and the person doing the searching.

But the thing you have to give up with names is a sense of ownership. If your name is Chris and you join a group with another Chris, there's no trademark dispute resolution process, you just both get called Chris. Or maybe one of you, whoever's better known, gets to be Chris and the other becomes "Other Chris". But these rules aren't designed to protect the primacy of your name as a piece of intellectual property, they're fluid and based around whoever is associated most strongly with the name in a given situation.

Most of our name-like IDs, things like usernames and domain names, are compromises because of weaknesses in computers or human-computer interfaces. In many cases, the computing power and system sophistication just wasn't there in the early days of software to allow for handling names properly. Usernames date back to the unix logins of the 70s, and DNS to the 80s. Back then it would have been not only computationally difficult to do proper name searching, but difficult to build UI for doing name lookups that would be responsive enough. And if your only method for exchanging IDs in the real world is writing them down or saying them out loud, it's important that they be memorable.

However those restrictions are way out of date now, and we have more than enough resources to revisit those compromises. Modern multi-user desktops select a user from a list rather than typing a login. Modern website lookups are mostly done through Google. And I think other name-like IDs will also lose their relevance as we build new systems that supplant the old. The next big frontier is website logins, which a lot of different companies are trying to own.

My hope is that once this particular internet turf war is over and we leave behind our current balkanised mess for a universal notion of identity, we can take on the big dog of ugly name-like IDs: email. Can you imagine if, instead of messaging an arbitrary series of characters, you just message a person? What a triumph of what over how!

The disadvantage you know

Things can seem pretty difficult at times. I'm worried about money, or my work isn't going well, or I'm just in a bad mood. I think what life must be like for people who don't have these problems, and I feel envious of that. How easy it must be to be wealthy, happy and successful! Perhaps you are immediately jumping to say "but everyone has problems!" Yes, perhaps. But is it so difficult to accept that for some people life is just better than it is for you? Is it so impossible that there is someone out there who, for no good reason, has your life plus a little bit more?

I think the more interesting response is to ask: how weak is your imagination that the only kind of better you picture is you plus a million dollars? What about you plus a thousand limbs? Plus a brain the size of a planet? What about you plus a galaxy of robot servants capable of rendering a creation so expansive that the sum of today's humanity couldn't even comprehend it? What about your mind modified to be in a state of pure, absolute bliss without beginning or end?

It seems evident in these moments that there is so much – infinitely much – that we don't have. That we will never have. And yet the thought that I may never rule the universe or transcend time and space itself doesn't really bother me on a day-to-day basis. And if that doesn't bother me, perhaps there is no reason to be bothered that my life isn't better in other ways either.

Creative tooling

A friend remarked the other day that if you want to make a lot of things, it's worth spending a lot of time on your tools. With my recent prototyping kick I've been noticing how often I seem to be repeating a fairly similar sequence of setup steps. I've been mostly messing around with shiny web technology things, so the setup mostly involves local webservers and Coffeescript build scripts, but each kind of project tends to have its own standard setup process.

It occurs to me that depending on the balance of new vs existing projects in your work, the total cost of setup would be vastly different. If you tend to work on multi-year-long ongoing projects, really any degree of setup cost is unlikely to matter. On the other hand, a 37signals-style web consultancy business will probably see multiple new projects a month. So it's important to keep that cost down. However, even that is a very different calculation compared to creating a new project each day, or even multiple per day.

It might sound excessive, but I actually think making multiple projects per day can be a pretty good way to do things. If you're looking for new ideas and trying a few different designs, or you want to write code in a highly decoupled (dare I say microservice) style, or you want to validate your assumptions with some throwaway code before you go all-in – all of these are great reasons to create new projects early and often.

But for that to make sense you really need your new project creation process to be efficient. If it takes, say, 15 minutes, I think that's still too long. Ideally it'd be under a minute from deciding to make a new project to being able to start meaningfully working on it. I'm nowhere near that point at the moment, but I think it could be feasible with the right set of creative tools.

I think the biggest improvement would be something like a palette of semi-reusable code chunks. When I find myself doing the same thing a few times in different projects I could drop a copy of that repeated code in the palette and then pull it out the next time I need it. I'd want to be able to do that at different scales – from single lines of code to whole files all the way up to multiple files spread across different directories.
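A minimal sketch of what that palette might look like, ignoring persistence and multi-file chunks (all names here are made up for illustration):

```javascript
// Minimal in-memory sketch of a code-snippet palette: save chunks under a
// name, pull them back out later. A real tool would persist to disk and
// handle chunks spanning multiple files and directories.
class Palette {
  constructor() {
    this.snippets = new Map();
  }
  save(name, text) {
    this.snippets.set(name, text);
  }
  pull(name) {
    if (!this.snippets.has(name)) {
      throw new Error(`no snippet named "${name}"`);
    }
    return this.snippets.get(name);
  }
  list() {
    return [...this.snippets.keys()];
  }
}
```

The interesting design work is all in what's elided here: how chunks get parameterised, and how to pull a multi-file chunk into an existing directory structure without clobbering anything.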

There'd be a lot of tricky work involved to make something like that work well, but I think it'd be pretty useful. The less friction for creating a new project, the easier it is to create and the more experimental you can be.

Spaced propaganda

I was thinking today about the way that reposts survive on sites like Reddit. You might think that there's no value in posting something that's already been posted earlier, and thus the existence of reposts reveals a flaw in the ranking system in some way. However, I'm not convinced this is necessarily the case.

Firstly, people can sometimes appreciate being reminded of something, the way an old joke you've forgotten still makes you laugh when you hear it again. I'd call that individual forgetfulness. But there's also a second kind: over time new users join the site, older users drift away, and often users will miss new content as it comes in. The result is that even if individuals had perfect memory, the group would lose information. I'd call that population forgetfulness.

We have a fairly robust system for managing individual forgetfulness: spaced repetition. You repeat each thing you want to remember on an exponential scale with an exponent that you adjust for each item depending on how difficult it is to remember. This is a promising idea for managing population forgetfulness; could we generalise it to groups?
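The individual version of that schedule can be sketched in a few lines (the starting interval and the ease adjustments here are arbitrary choices for illustration, not a standard):

```javascript
// Sketch of the exponential schedule described above: each successful
// review multiplies the interval by the item's ease factor, and the ease
// itself adjusts with how difficult the item was to remember.
function nextReview(item, remembered) {
  const { intervalDays, ease } = item;
  if (!remembered) {
    // Forgotten: reset the interval and make the schedule less aggressive,
    // with 1.3 as an assumed floor on the ease factor.
    return { intervalDays: 1, ease: Math.max(1.3, ease - 0.2) };
  }
  return { intervalDays: intervalDays * ease, ease: ease + 0.1 };
}
```

So an item starting at a 1-day interval with ease 2.0 gets reviewed after roughly 1, 2, 4, 8... days while it keeps being remembered, with the exponent drifting to match its difficulty.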

To do that you'd first need a way to effectively measure how well the population remembers something. For something like Reddit, you could possibly rely on the repost's score. In the more general case, you could probably use a sampling technique and survey people on their memories. Either way, you'd then need some extra statistical trickery to turn that into the right factor for your spaced repetition exponent. Presumably the tuning would then look like targeting a certain confidence interval of remembering.

Anyway, it occurs to me that once you start thinking about the more general problems you can solve, a lot of them turn out to be pretty unsavoury. For example, how often you should show someone an ad seems like it would be modelled fairly well by population spaced repetition. Similarly, how often should you repeat a message as a repressive government in order to indoctrinate people? I guess it would work for that too.

Well, hopefully it can be used more for good than for evil. Or, if not, I hope we come up with some decent defenses against it.

With great power

It's no secret that we're getting better at being humanity. Average life expectancy is increasing, global poverty is decreasing, people are more educated, and our science and technology are making us more powerful and more capable as a civilisation year after year. It's funny to think that as recently as a hundred and fifty years ago, the germ theory of disease was still considered a fringe crackpot theory, and anaesthesia was a party trick rather than a surgical tool.

But today a surgeon working without asepsis or anaesthesia would be considered a dangerous maniac, and quickly be imprisoned. Similarly, our advancing views on the harm of corporal punishment for children have made it illegal in many countries where it was once common practice. In these cases and others, the driver of our morality is our capability. Before we knew about bacteria, how could we fault a doctor for having dirty hands? Before we knew about the dangerous consequences of physical punishment of children, what basis would there be for making it illegal?

I believe that, in a similar vein, there are many commonplace things today that are products of ignorance or a lack of capability to do better. The difficult thing is knowing which ones, but I'd like to hazard a couple of guesses. The first big one is psychology. Our understanding of brains and minds is so primitive today that it's hard to stop finding things that will seem barbaric and negligent in another hundred and fifty years, from our attitudes towards and treatments of mental illness to our casual ignorance of the influences and exploitation of cognitive biases. How can we have a morality of memetics when most people don't even know what it is?

The second, perhaps closer to home, is the construction of software. There was a time when physical construction was more like alchemy than science. You would put up a structure, sometimes it would stay up, sometimes it would fall down. Over time certain patterns became apparent and super amazing 10X master builders appeared who could more-or-less intuitively navigate those patterns, though it was still not entirely clear why their buildings didn't fall down. Today, we have architects who are expected to follow certain principles. If they don't, the building falls down and they are responsible, because they should have known better.

It's not clear at exactly what point we will be able to say that we should have known better with software development. Firms that produce software are already considered liable if their software hurts someone, but software developers under employment are not yet liable if they write bad code. And I'm not sure they can be yet. Who among us is so sure of the right way to write software that we would be willing to encode those ideas in law?

I long for the day when a software developer's signature means as much as a doctor's or an architect's; after all, our bad decisions can already cause similar amounts of harm. But to get there we need to be better, and we're not better enough yet.

The Sound of Life

↑ Click to make noise ↑

I have to say I'm pretty chuffed about this one. In a way it's a followup to Chip from the other week, but this one took a fair bit more wrangling to get to behave like I wanted. I really like the outcome though.

The way it works is that the Game of Life is overlaid on a larger grid of audio regions, making an equal temperament scale with octaves on the y axis and octave divisions on the x axis. The more cells that are alive in a given audio region, the louder it gets relative to the others. The regions are also marked with colour for extra pretty.
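The mapping from grid region to pitch is just the equal temperament formula. Here's a sketch, assuming a 440 Hz base for illustration (the actual demo may anchor the scale elsewhere):

```javascript
// Frequency of the audio region at column x, row y: equal temperament
// with `divisions` steps per octave, so moving one row up doubles the
// frequency and moving one column right raises it by one scale step.
function regionFrequency(x, y, divisions, base = 440) {
  return base * Math.pow(2, y + x / divisions);
}
```

With `divisions` set to 12 you get the familiar western scale; set it to 5 for the Gamelan-ish tuning mentioned below.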

You can scale up the grid and the audio regions to pretty large sizes, but I didn't find many settings that worked as well as these. 12-tone is the traditional western scale, but you get a lot of dissonance when there's too much going on. The 5-tone is apparently very similar to Gamelan music, though I suspect an actual gamelan musician might have something to say about that.

More details and a bigger demo are up on Github.

Relationship technology

It can be quite difficult to explain the modern internet to people who haven't spent much time with it. Not even just in terms of how it works or how to do specific things, but more fundamental questions like "why are you doing any of this?" You start to sound a bit nonsensical trying to explain the intricacies of whether to favourite or retweet, or whether someone is a friend friend or a Facebook friend, or what "seen @ 5pm" with no reply until 6pm means.

The point is that the complexity of the technology itself is dwarfed by the complexity of the relationships we build around it. You could produce an acceptable clone of the functionality of most popular websites without much difficulty, but to clone their community is effectively impossible. I suppose it's no surprise that community has become so important to our technology; even as apes we built and lived within intricate social structures. What we see now is just that the setting for those structures has gone from plantain picking to agriculture to industrialisation to... whatever this is.

But putting aside communities, it's also interesting to consider how technology has shaped the kinds of relationships we have on an individual level. The rise of broadcast media has made it much easier to have very asymmetric relationships, like between a celebrity and an audience member. Of course, we've always had celebrities, but the big difference is that the asymmetry used to be obvious; nobody thought they were Liszt's best friend just because they saw him play piano. Broadcasting makes it possible to hide that asymmetry and make a one-sided relationship feel two-sided.

I was once told that the difficult thing about television is that you have to have the voice and body language of someone very close to the audience, but the volume and eye contact of someone much further away. The goal is to create the feeling of intimacy without actual intimacy. If you can do that, you can save your intimately non-intimate recording once and distribute it to thousands of people, each of whom can feel a connection with you even though you have no idea who they are.

And that only gets us as far as TV and radio, nearly a hundred years ago! Now we have systems that let you share your diary with the world, broadcast text messages back and forth, publish your activities and movements, record off-the-cuff video, or even stream yourself live. There are so many ways to speak the language of intimacy online that in many ways it's a better environment than offline. But with the crucial distinction that this online intimacy is designed to scale, to be packaged and distributed. And there's no requirement it be reciprocal.

I'm not saying this is necessarily a bad thing, but I think it does require a rethinking of how we understand our relationships. If I watch someone talk about their life every day and I form a connection, am I forming it with that person? Or with the version of them I have constructed to fit my needs, like a character in a book? And if so, where does this imaginary intimacy lie on the spectrum from quirky imaginary friend to Nicolas Cage body pillow?

On the other hand, maybe we should be prepared to accept a kind of abstracted intimacy: I form a relationship with a group, and the members of that group form a relationship with me. Neither the group nor me in this scenario are strictly capable of reciprocating a relationship; one is a collective with no single identity, and the other is a kind of ur-person, reconstituted from various snippets of public information but incapable of volition. So maybe we each form a half-relationship with someone who doesn't really exist, and that's still okay.

Whatever the case, this is only one example of the weirdness that's being left in the wake of all our new relationship technology. The future's going to be an interesting place.

Shell recipes

I've recently been doing a disproportionate amount of typing commands into Linux machines, and it got me thinking about the way that so much advice in the Linux community looks like "here, just copy-paste this series of commands", or "download and run this arbitrary shell script". Heck, even the official NodeJS distribution has a just-run-this-shell-script.

But, at the same time, it's kind of hard to argue with results. You often do need to just run a lot of commands in a sequence and, well, that's what a shell script is for. Maybe the answer isn't to shy away from it, but to dive in head-first and make a more sophisticated workflow around running arbitrary shell scripts. I'm thinking something a bit similar to IPython notebooks, where the text is interleaved with code and you can run the commands one at a time and verify their output. It'd be a kind of literate approach to shell.

Maybe you could even combine it with a transactional shell to confirm that each command did what you expect. It'd be strange to get to a place where shell scripts could be considered user-friendly, but I can't think of a good reason why not.


I've been going deep into the guts of Git lately, and I have to say it's really beautiful. Not the code necessarily, but the purity of the design, the primitives on which the system is built. Everything follows so simply from just a few initial concepts. You have a content-addressable object store which stores files, or trees of files, or commits which point to trees and other commits. Because everything is based on the hash of its contents, each commit forms a Merkle tree that uniquely identifies everything, from the commit itself to the commits before it to the trees and files themselves. Gorgeous.

To me that is the absolute essence of great code, to find a minimal set of concepts with maximal conceptual power. You can really feel the difference between a system that has been built on elegant foundations and one that's just compromise upon compromise trying to make up for an irredeemable core. Good primitives are often so pure and powerful that they extend beyond code and end up more like philosophical tools. A content-addressable store is the idea of referring to things by a description of what they are, rather than an arbitrary label. Git's way of representing history is the idea that you can get rid of time entirely, and just define the past as the set of things that must have happened for the present to be like it is now.

It's extraordinarily satisfying when you learn a new primitive that opens up a whole new class of follow-on ideas. Even more satisfying is when you are struggling to find the right set of primitives to build something powerful on, and then everything suddenly clicks into place.

But the most satisfying of all – I assume – is discovering a brand new primitive. Something that nobody's thought of before. Relatively few people have found ideas that powerful, but it must really be something to unearth a whole new way of thinking, like peering into the universe itself.


An idea came up today that's been floating around in my head for a while. I keep running into issues where no single computer I have access to has the exact mix of resources I need, and I wonder why it is that running things across machines is so difficult.

An example: I was recently working on some large files. I had to copy them around, write some custom code to do processing on them, and then turn them into DVD images and upload them somewhere. The problem is that my home internet connection is too crappy to upload a lot of files in anything close to a reasonable amount of time.

So instead, I provisioned a cheap little ARM-based cloud machine in France. Unlike Australia, Europe has good internet, so the uploading and downloading was no longer a bottleneck. But the latency is really high, so I had to kind of awkwardly shuttle things back and forth so I could write code on my local machine and run it on the remote machine.

During the whole process I remember thinking how cumbersome the whole thing was. It's great that I could do it at all, but it definitely wouldn't be described as a seamless process. I think if the Glorious Cloud Future is to occur, we need something better.

What I'd like to see is a kind of metacomputer: a computer built out of other computers. It would automatically distribute computation depending on the kind of resources required and the cost of transferring data between resource locations. The end result would be that you can add lots of different kinds of resources, and even do it dynamically, and the system turns that into the best computer it can.

In my example, it would recognise that the cost of transferring the large files is high and the cost of transferring my keystrokes is high, but the cost of transferring code is low. So the file processing would be allocated to the remote server, but the process that turns keystrokes into code (my editor) would be allocated to my local computer. However, if the server was much closer to me (but I still had crappy internet), maybe it would just move all the computation to the remote server and leave my local computer as a dumb terminal.
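The placement logic in that example can be sketched as picking, for each task, the machine that minimises the cost of getting the task's inputs there. All the names, sizes and bandwidths below are made up for illustration:

```javascript
// Sketch of metacomputer task placement: data already on a machine is free
// to use there; otherwise the task pays a transfer cost proportional to
// the input's size and the machine's bandwidth.
function placeTask(task, machines, dataLocation) {
  let best = null;
  let bestCost = Infinity;
  for (const machine of machines) {
    let cost = 0;
    for (const input of task.inputs) {
      if (dataLocation[input] !== machine.name) {
        cost += task.sizes[input] / machine.bandwidth;
      }
    }
    if (cost < bestCost) {
      bestCost = cost;
      best = machine.name;
    }
  }
  return best;
}
```

A real metacomputer would have to model latency, compute capacity and money as well as bandwidth, and re-plan as resources come and go, but even this crude cost model reproduces the editor-local, processing-remote split from the story above.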

What's even more exciting about this is that you could integrate such a system so well with cloud server platforms. If the metacomputer can automatically redistribute resources when they become available, there's no reason it couldn't automatically add more resources when needed. You could even give it a value-of-time measurement, up to which point you'd be happy to spend money if it saves you processing time.

It's such a shame our computer architectures have not changed significantly in the last half-century, even as the context we use them in has changed a lot. I think at some point it's gotta give, though, and when it does I hope metacomputation is where we end up.


A while back I managed to get a fairly respectable system out of an Android tablet with a keyboard running a stripped-down Ubuntu in a chroot container. The process was somewhat involved, but despite the MacGyveresque sense that it was all held together by tape and prayer, it was actually quite stable and I used it for a long time as a portable development machine.

In fact, later on I realised that it was actually the best Linux desktop environment that I've used. You get all of the standard apps and things you're used to (it even runs Photoshop... kinda), but under the hood it's still a fully functioning Linux machine that you can do Real Work with. The only problem is there's a kind of disjointedness because the two halves aren't really working together, they just happen to mostly stay out of each other's way.

The more I think about it, the more I realise that with only a little bit of rejiggering, you could bring those two halves together. You could have a standard Linux environment all the way up to running system services, one of which is the Android Runtime. Then any apps you want to run happen in the sandbox on top of that. You'd end up with something fairly similar to the current developer-friendly state of Mac OS: pretty UI up front, serious unix business in the back.

Maybe that would also be a reasonable direction for the Perpetual Year of the Linux Desktop. New attempts to remake the desktop environment are all the rage these days, but none of them come with millions of apps. It seems like if you could weld the Android frontend onto the existing Linux backend, you'd have an easy winner.

I wonder if anyone's already working towards this. Seems like a no-brainer to me.

The Second Degree

A friend once said to me that if I ever needed access to a gun, despite guns being illegal and pretty tough to find in Australia, I could get one easily. I should just find the dodgiest person I know, and ask them for the dodgiest person they know. That person, without a doubt, could get me a gun. I thought about this briefly and realised that my second degree of dodgy is probably in jail.

It would be pretty fun to make an exercise of going through a few different characteristics, finding the friend-of-a-friend who maximises each one, and interviewing them to find out a bit about what it's like in their life.

A few interesting examples:

It would also be a little bit interesting to see how transitive those properties end up being. Does the most interesting person I know have more interesting friends? It'd definitely be interesting to find out.


I fell asleep unexpectedly early last night and didn't write anything. This shares a common theme with most of my failures in that something happened around the time I was going to write, either social-related or a sudden attack of tiredness. I think a sufficient level of sacrifice would also prevent those failures, but I'm not convinced it's sustainable long-term or in line with my true priorities.

Separately I've had a class of... you might call them semi-failures, or a lack of imagination as far as specification goes. I've been writing every day, where a "day" is defined as the time between when I wake up and when I go to sleep. Unfortunately, sometimes (like during timezone transitions) the mapping between my day and an Earth day gets a little bit out of whack. For that reason, my posts have been a little behind schedule; I've still been writing them every day, but the date they're labelled with is a few days behind the date I write them. Effectively, my dates and Gregorian dates have gone out of sync, and I suspect I'm in need of some calendar reform.

Related problems with posting to my own schedule are that the time when new posts will appear is somewhat unpredictable for others, and that it's easier for me to accidentally miss a day without an absolute reference for where I'm up to. So ironically I need a solution that is simultaneously more consistent (so posts appear regularly) and more flexible (so it can survive occasional life intrusions). Luckily, I have just such a solution prepared.

I'm going to shift from writing in arrears to writing in advance. That is, I'll alter the site so that a post only becomes visible if its publication time is earlier than the current time, and I'll write (up to) one day ahead of the time each post will be published. This should mean a more consistent reading experience without really changing my writing experience. I'm also going to change the post date on the articles to midnight UTC, which is 10am here. That should mean that in the event of a disastrous good-night's-sleep-related accident, I'll still have time to make my deadline in the morning.

Doing things this way also provides the opportunity to front-load posts if I expect to be away from a computer for a little while. I'm not sure if I'll actually do that, but it's nice to have it as an option. Meanwhile, I need to actually make the transition to normal calendar dates, which I guess means having my own annus confusionis.

Look forward to a big flood of posts today!


Long ago I learned about the idea of gradable and ungradable adjectives or, as I thought of them, non-binary and binary adjectives. The difference being that you can be very hot, very scared or very tall, but you can't be very unique, very pregnant or very amazing. The latter class is binary: it's either true or false. I think with superlatives (very fantastic, very amazing, very awesome) it's reasonable to discourage their use. After all, "very" is an intensifier, and if you already have the most intense form of a word, intensifying it more isn't really necessary.

However, cases like "very pregnant" are interesting, because they bespeak a certain confusion about the way we analyse language vs the way language is constructed. While it's true you can construct a formal grammar in which certain properties are binary and certain properties aren't, I don't believe that is actually reflective of our thoughts or our speech. "Pregnant", like "German", "boiling", or "fatal", is a cluster of concepts that we associate together. Much like nouns, which in theory refer to a single thing, but whose basis is really a fuzzy cloud in concept-space. You can easily reveal the nature of that cloud by turning the noun into an adjective: what is the most "chairy" chair you can think of? What is the least chairy?

I explained this idea to a friend and asked whether there was anything you couldn't do this trick with; that you couldn't make non-binary if you tried hard enough. The response was obvious, in retrospect: mathematics. Can x be "very equal" to 3? Obviously not. And in a sense that's the point. Our formal systems are designed to have these strange rigid properties that are alien to us.

Perhaps, if things were different and we were beings of pure binary logic, we might find ourselves inventing systems for fuzzy reasoning instead.

The outlier problem

I was really saddened when I learned about Steve Jobs's death, not least because of the circumstances leading up to it. Jobs had pancreatic cancer, normally an instant death sentence, but in his case he had an exceptionally rare operable form. However, Jobs elected not to have surgery, hoping he could cure the cancer with diet and meditation. Unfortunately, he could not, and by the time he returned to the surgical option it was too late.

But the real tragedy isn't just that Jobs died from something that may have been prevented, it's that he died from the very thing that brought him success in the first place: hubris. Jobs had made a habit throughout his career of ignoring people who told him things were impossible, and that's not a habit that normally works out very well. For him, improbably, it worked – very well, in fact – until one day it didn't work any more. This is the essence of what I call the outlier problem.

We often celebrate outliers, at least when they outlie in positive ways. Elite athletes, gifted thinkers, people of genetically improbable beauty. The view from here, huddled in the fat end of the bell curve, gazing up at the truly exceptional, makes them seem like gods. But it's worth remembering that we are clustered around this mean for a reason: it's a good mean. This mean has carried us through generations of starvation, war, exile and death, and we're still here.

It's important not to forget that an exceptional quality is a mutation, no different than webbed toes, double joints, or light skin. Sometimes being an outlier lets you get one up on the people around you and start a successful computer empire. Sometimes it lets you remake the music industry, the phone industry, and the software industry in successive years. And sometimes it means you die from treatable cancer.

I remember Steve Jobs, not as a genius or an idiot, but as a specialist: perfectly adapted to one environment and tragically maladapted to another.


I've been messing around with RTL-SDR lately, which is what led to the ATC feed you see above. I'm pretty impressed with how much you can get done with nothing but a $20 TV tuner and some software. As well as air traffic, I've had some fun moments being reminded that there's still a non-internet radio service, reading pager messages, and listening to the hilarious hijinks of taxi dispatchers.

There's some super serious signal processing stuff you can do using gnuradio, up to and including communicating with recently-resurrected space probes. But most of the software available seems geared for that kind of heavy duty signal processing, with not much in the way of resources for the casual spectrum-surfing enthusiast. The software above is CubicSDR, which is great, but currently limited to analogue FM/AM signals.

It occurs to me that this would be a great area to inflict some hybrid web development on. You could have a nice modular backend in a fast language like C or Go to do the signal processing, and feed that into a JS+HTML frontend. The modularity would make it easy to add new decoding components for things like digital radio, TV and so on, and the HTML frontend would make it easy to create and iterate on different ways to visualise the signals.

Plus, being web-compatible would give you a lot of cool internet things that are currently pretty difficult. For example, an integrated "what is this signal and how do I decode it" database, or a Google Map of received location data. The last piece of the picture is that a sufficiently advanced web UI would solve the cross-platform division that's currently making my life more difficult than it needs to be.

I'm really excited about the potential of SDR. The software is currently just a little bit too awkward to be suitable for general use, but it's so close! Most of the individual components are there, it's just missing a bit of glue, sanding and polish.


Certain colours – magenta, for example – are not real in the physical sense. That is, there is no magenta wavelength. In fact, everything on the colour wheel between red and violet, which is all of the purples, only exists in our heads. There's every reason to think that if aliens appeared and we showed them purple, they would say "that's just red and blue!" and laugh at us.

Purple exists because we have our own mental colour system, which is an imperfect mapping of the physical colour system. And this doesn't just mean that we see colours wrong sometimes, or that there are certain colours we can't see, but that there are also colours we can see that never existed at all: imaginary colours. But all of our mappings have this same property; there are certain characteristic edge cases that can lead to imaginary results.

There are a lot of theories for why celebrities often seem to suffer from depression, addiction and public meltdowns. One possibly too easy answer is that we would all act out if we could, but regular people don't have the resources. I'd like to suggest an alternative: empathy. When we see people doing things, we use our empathic system to recreate that feeling in our own minds. But much like colour vision, empathy imperfectly maps the external to the internal.

We sometimes misinterpret feelings, and sometimes feel nothing in a situation where we should have empathy. Is it possible there could be certain imaginary feelings that do not exist except when we feel them second-hand in someone else? I believe so, and I believe one such feeling is fame, or success. The feeling of "now I've made it; I'm here; I did it; I'm great now". We feel this feeling in others, but I don't believe we feel it in ourselves.

So what could be more destabilising than being driven by fame? You see celebrities and successful people and long to feel like they feel. But, of course, you don't know how they feel. And some day, by luck or hard work, you end up like them – and the feeling's not there. What do you do next? Where do you turn if the thing you've been looking for turns out to be an illusion?

Merkle versions

I've become a big fan of semantic versioning since its introduction. The central idea is that versions should be well-defined and based on the public API of the project, rather than arbitrary feelings about whether a certain change is major or not. It also recognises the increasingly prominent role of automated systems (dependency management, build systems, CI/testing etc) in software, and that they rely much more than puny humans do on meaningful semantic distinctions between software versions.

But one thing that can be troublesome is being able to depend on the exact contents of a package. Although it's considered bad form and forbidden by some package managers, an author could change the contents of a package without changing its version. Worse still, it's possible that the source you fetch the package from may have been compromised in some way. What would be nice is to have some way of specifying the exact data you expect to go along with that version.

My proposed solution is to include a hash of that data in the version itself. So instead of 1.2.3 we can have 1.2.3+abcdef123456. That hash would need to be a Merkle tree of some kind, so as to recursively verify the entire directory tree. I couldn't find any particular standard for hashing directories, but I suggest git's tree objects as one in fairly widespread use. You can find out the git tree hash of a given commit with git show -s --format=%T <commit>.

Two interesting things about this idea: firstly, the semver spec already allows a.b.c+hash as a valid version string, so no spec changes are required. Secondly, because the hash can be deterministically calculated from the data itself, you don't actually need package authors to use it for it to be useful! You could simply update your package installer or build system to check your specified Merkle version against the file contents directly, whether or not it appears in the package's actual version number.
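A verification step like the one described above might look something like this sketch. The function names are my own, and treating the build metadata as an abbreviated hash prefix is an assumption on top of the semver spec (which only says build metadata is ignored for precedence):

```javascript
// Parse a "Merkle version" of the form major.minor.patch+hash.
// Semver already permits build metadata after "+", so this is a valid
// version string as far as any semver-aware tool is concerned.
function parseMerkleVersion(version) {
  const match = /^(\d+)\.(\d+)\.(\d+)\+([0-9a-f]+)$/.exec(version);
  if (!match) return null;
  return { major: +match[1], minor: +match[2], patch: +match[3], hash: match[4] };
}

// Compare a declared version against a hash computed from the actual
// package contents. computeTreeHash is left abstract here; git's tree
// hash is one concrete choice.
function verifyPackage(version, computeTreeHash, packageDir) {
  const parsed = parseMerkleVersion(version);
  if (!parsed) return true; // no hash specified: nothing to verify
  return computeTreeHash(packageDir).startsWith(parsed.hash);
}
```

Because the hash is derived from the contents, an installer could run this check whether or not the package author ever published a Merkle version themselves.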

It's funny, I never thought of versioning as something that would see much innovation, but I guess on reflection it's just another kind of names vs ids situation. I wonder if there will be a new place for human-centered numbering once it's been evicted from our cold, emotionless version strings.


While thinking about Merkle versions I realised that there's no easy and commonly accepted way to hash a directory. I've actually had this problem before and I ended up doing some awful thing with making a tarball and then hashing that, but then it turned out that tar files have all sorts of arbitrary timestamps and different headers on different platforms, which made the whole thing a nightmare.

Since I suggested git tree hashing would be a good choice, I thought I'd put my money where my mouth is. It turns out that git doesn't expose its directory tree hashing directly, so you have to actually put the directory into a dummy git store to make it work. That all seemed too hard for most people to use, so I made Gish, which is a reimplementation of git's tree hashing in nodejs.

It ended up being one of those "this should only take an hour oh god where did my afternoon go"-type projects, but I'm happy with it all the same. Hopefully it proves useful to someone and, even if not, I know a whole lot more about git trees than I used to.

Prototype discipline

As I've started to make more of a habit of prototyping, I've noticed that actually the difficulty isn't so much in making the prototypes themselves. On the contrary, making prototypes is usually fun and interesting in equal parts. Instead, the big difficulty is making prototypes the right way, so that you get something useful out of them, and so that they stay light and exploratory.

The first thing I've noticed is that it's important to have a particular direction in mind. I've heard it said that prototypes should answer a question, but I'm not sure that's necessarily true. There's definitely a place for that kind of specific question-answering prototype, but for me I've found the most benefit in using prototypes just to explore. That said, the exploration goes a lot better if it's focused on a specific idea-space.

Another important thing is keeping the scope and the expectations small. It seems to be particularly easy for new ideas to creep in – which is great, in a way, that's the point – but you have to be able to figure out what to say no to. The other risk is to start treating the code like something that has to be perfect-complete, with all the trappings of a kind of project that it isn't. I've also heard similar-but-not-quite-right advice on this front: that it's okay for prototype code to be bad. I think you lose a lot by writing code you're not happy with even in the short term. The trick is letting it be good prototype code and not something else. The goal is exploration, not to make a polished final product.

I'm beginning to see prototypes as an essential component of the continuous everywhere model: if you can decrease the size of the gap between having an idea and seeing a working version of that idea, it gives you a lot more information and a lot more flexibility in which ideas you explore and how.


Well, getting back up to speed took slightly longer than I thought. However, as of this post I am now officially writing in the future, which is fairly exciting. It seems like as good a time as any to go into a little bit of detail on the website itself.

The whole thing is a couchapp being served and rendered entirely by CouchDB. Each post is created as a JSON document in the database. Here's this post, for example. All documents of a certain type are then rolled up into the bytype view. You can then query that view to get recent posts, for example all of the posts in September. Finally, those views and documents are rendered by some database-side Javascript (yes, really) using Mustache templates into the amazing website you see before you.
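A view like bytype boils down to a little map function that CouchDB runs over every document, collecting whatever emit() produces into an index. The field names here are illustrative rather than this site's actual schema:

```javascript
// A CouchDB-style map function: emit a [type, date] key for each typed
// document, so the view can be queried by type and date range.
function bytype(doc) {
  if (doc.type && doc.date) {
    emit([doc.type, doc.date], null);
  }
}

// Outside CouchDB you can exercise it by stubbing emit:
const rows = [];
function emit(key, value) { rows.push({ key, value }); }
bytype({ type: "post", date: "2015-09-20", title: "Merkle versions" });
console.log(rows[0].key); // [ 'post', '2015-09-20' ]
```

Querying the view with a key range like ["post", "2015-09-01"] to ["post", "2015-09-30"] is what gives you "all the posts in September".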

Obviously a lot of this stuff is really tightly coupled with the CouchDB philosophy. I think Couch has a lot of qualities that make it really great for a site like this, not least of which is that I can have my own local copy of the website and through magic replication, the site just copies itself into production when I'm ready. In fact, you can copy it too! Just point your CouchDB's replicator at the API endpoint.

I've also finally gotten around to putting the code up on GitHub. I'm not sure why that would necessarily be useful to you, but in case you're curious, there it is. Various parts have been floating around since 2011 or so, which is at least four stack trends ago. Feels good to put it up at last.

Wet floors

An amusing anecdote from the first time I met a good friend of mine: He was writing some code to dedupe files on his fileserver and needed to pull some logic out of a loop to run it somewhere else. He copy-pasted it rather than abstracting it out into a function, saying "oh man, I bet this is going to come back to haunt me". Literally ten seconds later he changed the logic in the body of the loop without changing it in the place he'd copied it to, hitting the exact problem he was worried about.

I think of those situations as wet floors, after a time I was in a KFC and I saw the workers behind the counter skidding around on an oily floor right next to the deep fryers. I spent a long time thinking about how one of those kids was going to slip and put their hand in boiling oil before I even realised I could do something to prevent that outcome. Of course, when I went up to warn them the response was "oh, yeah, that is dangerous". I'm fairly certain they didn't actually clean the floor.

It occurs to me that this is a consistent pattern in software development and elsewhere: you see a problem just waiting to happen, but instead of doing something about it you say "that's going to be a problem". Later on when it is a problem, you can even say "I knew that was going to be a problem". Though that is a deft demonstration of analytical and predictive ability, it could perhaps have been put to better use.

It sometimes seems like the drive to understand things can be so strong that you lose sight of the underlying reality. "I understand how this works" can be so satisfying that it makes "I should change how this works" unnecessary. Or perhaps it's just that understanding is always a positive; it's often not that difficult, and it feels good when you do it. Whereas acting in response to your understanding can be a lot of effort and doesn't always work the way you want.

There is also an element of confidence. Something you believe in a consequence-free way is very different from something that has serious costs if you're wrong. I've heard it said that the hardest job is being responsible for the Big Red Button. When you press the Big Red Button, it brings everything to a halt and costs hundreds of thousands of dollars, but not pressing it costs millions, maybe destroys the whole company, and definitely your career. It must take enormous confidence to press that button when necessary.

A related technique that I quite like is the pre-mortem, where you pretend something has gone wrong and explain why you think it happened. What's considered powerful about it is that it removes the negative stigma from predicting failure, but I think there's something else as well: a pre-mortem directly connects your knowledge of failure to the reality of failure. That is, it forces you to imagine the eventual result of "this is going to be a problem": an actual problem.

Perhaps all that is required to defeat wet floors is to drive up your own confidence in that belief, or associate it strongly enough with the actual failure it predicts.

Are categories useful?

I remember reading some time ago about the Netflix Prize, a cool million dollars available to anyone who considerably improved on Netflix's movie recommendation algorithm at the time. Of course, the prize led to all sorts of interesting techniques, but one thing that came out of it was that none of the serious contenders, nor the original algorithm (ie the actual Netflix recommendation engine) used genres, actors, release years or anything like that. They all just relied on raw statistics, of which the category information was a very poor approximation.

So I wonder, if it's true for Netflix, is it true for everything? The DSM-5, effectively the psychiatry bible, had a bit of controversy at least partially because of its rearrangement of diagnostic categories. What was once Asperger's is now low severity autism, and many other categories were split further or otherwise changed. However, the particular validity of a treatment for particular symptoms hasn't changed (or, if it has, not because the words in the book are different now).

Medical diagnostics seems to mostly be a process of naming the disease, and then finding solutions that relate to that name. However, that process can take a long time and doesn't always work. Maybe it would be better if we got rid of the names and used some kind of predictive statistical model instead. You'd just put in as much information as you can and be told which interventions are most likely to help. The medical landscape would certainly look pretty interesting, but I suspect not in a way that doctors or patients would find reassuring, even if it did result in better outcomes.

Ultimately, that seems like the point of categories. Compared to other methods they're not good for prediction, and often they're plagued by disagreements over whether a particular edge case fits the category or not. However, the alternative would mean putting our faith in pure statistics, and I'm not sure people are ready for that.

Can you imagine a world where we don't categorise things? Where you don't need to determine if something is a chair or not, just whether it's likely you can sit on it? You wouldn't be considered a cat person, just someone statistically likely to be interested in a discussion about feline pet food. Maybe we could all get used to predicting outcomes, rather than needing to understand the internal system that leads to those outcomes. It sure would make life a lot simpler.

But I doubt that's going to happen any time soon.

The unbearable rightness of CouchDB

As I mentioned recently, this website is built on CouchDB. CouchDB is in many ways a very innovative but still very simple database, and it has the unique quality of genuinely being a "database for the web", as the marketing copy claims. However, lately most of the time what I feel about CouchDB is not joy but more a kind of frustration at how close – how agonisingly close – it is to being amazing, while never quite getting there.

The first one that really gets me is CouchApps. They're so close to being a transformative way of writing software for the web. Code is data, data is code, so why not put it all in one big code/database? Then people can run their own copies of the code locally, have their own data, but sync it around as they choose. Years before things like unhosted or serverless webapps were even on anyone's radar, CouchDB already had a working implementation.

Well, kind of. Unfortunately CouchApps never really had first-class support in CouchDB. The process of setting one up involves making a big JSON document with all your code in it, but the admin UI was never really designed to make that easy. The rewriting engine (what in a conventional web framework you might call a router) is hilariously primitive, so there are certain kinds of structures your app just can't have, and auth is a total disaster too. The end result is that most of the time you need to tack some extra auth/rewriting proxy service on the front of your gloriously pure CouchApp. What a waste.

There are other similarly frustrating missed opportunities too. CouchDB had a live changes feed long before "streaming is the new REST" realtime databases like Firebase showed up, but never went as far as a full streaming API or Redis-style pub/sub. It has a great inbuilt versioning model that it uses for concurrency, which could have meant you magically get versioned data for free – but didn't. It has a clever master-master replication system that somehow doesn't result in being able to generate indexes in parallel.

I should say that, although it frustrates me to no end, I really do respect CouchDB. At the time it came out, there were no other real NoSQL databases and a lot of the ones that have come since went in a very different direction. Compared to them, I admire CouchDB's purity and the way its vision matches the essential design of the web. But in a way I think that's exactly what makes it so frustrating. That vision is so clearly written in the DNA of CouchDB, and it's such an amazing, grandiose vision, but the execution just doesn't live up to it.


When creating something, it often seems like you start from a particular point and fill the rest in around it. For example, you start with an amazing character idea and build a plot around that character, or alternatively you start with a great plot idea and find characters that can drive it. Or, if you're Asimov, you just write your ideas down and hope for the best.

In software businesses there are similar starting points. You can start with a particular product and figure out the engineering necessary to build it – that's most modern web startups. You can start with an engineering breakthrough or scientific discovery and figure out how to turn it into a product – that's basically the rest of the startups. Even in the software itself, you often have to choose which components to start with, which database or web framework or game engine, and that decision then shapes all the subsequent decisions you can make.

And I think that really misses something. Because when you commit hard to an early decision it means there are significant limitations to how you can make all the other decisions. You can often feel this in a codebase, a kind of impedance mismatch where you can see lots of translation layers between different modules because they're designed in different ways that don't line up well. Or you avoid that by only using modules that fit nicely with the existing ones, even if that means they don't work well in other ways.

I once read that Jony Ive's philosophy on design is different because he spends so much time thinking about materials and how they can complement or inform product design. The particular choice of metal or plastic, or what kind of manufacturing process to use, doesn't come right at the end, as it does with many other companies, but is part of the process the whole way through. Instead of saying "we want a laptop this size, therefore we'll make it out of metal", or "we want something made out of metal, how about a laptop?", it's more like "we like laptops, we like metal, I wonder if those can go together".

Ultimately I think this kind of philosophy is best. Obviously there's nothing stopping you from picking one most important starting point and fitting everything around that, but I believe that really amazing feats of design and engineering only happen as a kind of simultaneous equation. You consider all of the possible options for all of the aspects of the thing you're making, and among those you find a set that fits together so beautifully that the resulting product just falls out naturally.

Of course, that's much easier said than done. All of the options for all of the aspects is a fearsome combinatorial explosion to deal with. In practice, you probably have to pick your battles on that front, and be sensitive to the limitations of your poor fleshy brain and the time available to the project. However, I think that in many cases picking one place to start is a very early optimisation, and an unnecessary one. Taking the time to think about the right set of primitives can give you something much better than you could have designed incrementally.

The Shenzhen Shuffle

Although I've had a certain low-level exposure to the riches of China, it never managed to blossom into a serious electronics habit. But all of that has begun to change recently, starting with a gift I received of some ESP8266 modules, which are basically tiny WiFi SoCs that can even run Lua.

The chips are pretty fun on their own, but nothing compared to the stuff you can do if you have some sensors, lights, wires, breadboards, battery packs, voltage regulators, solar panels... Suffice to say the electronics binging that has probably been my destiny since the age of ten is finally being fulfilled. The process has this great multiplicity: each new thing you buy gives you more options when combined with all the things you already have. And that gives me an idea.

I've never really been into the whole buyer's club type thing, but it seems like this could be a really great place for it. You pay some fairly small amount ($10-20/month) and in exchange you get random new electronics stuff delivered each week. China Post's notorious 20-40 day lead time isn't really an issue once you start pipelining the mail. And you could get some pretty cool stuff for the money, especially with collective buying power. Here are some I found in the $1-3 range in a few minutes of searching: mini Arduino knock-off, LED matrix, RFID module, motion sensor, ultrasonic distance sensor.

I think an appropriate name would be The Shenzhen Shuffle. Any takers?

Garbage-collected tabs

I tend to keep a lot of tabs open. I mean a lot, like around 200 at the time of this writing. That's spread over 20 windows, with about 10 tabs per window. There's something about the spatial nature of tabs that really works for me, better than bookmarks or other things. Especially because I tend to have a lot of little projects going at once, they work sort of like a project space. You can keep a bunch of associated research together and close it all at once when you're done with it.

But it appears Chrome is not strictly designed for this kind of usage. It tends to get particularly slow with a lot of tabs, both from enormous memory consumption and because websites seem to like to use your CPU for things in the background. I've found The Great Suspender to be helpful for this, though I feel a bit like it shouldn't have to exist. And even without the resource consumption issues, it's still tough to manage all the windows and tabs.

I think an interesting approach would be something like the way garbage collection works in programming. What you want is to only keep the windows and tabs around that are still relevant to what you're doing. Each new tab you open would have a reference to the tab you opened it from, and each tab would have a freshness that indicates how interested you are in that tab. Whenever you interact with something it freshens that tab and tabs connected to it. When things get a low enough freshness they are suspended, and if they are left suspended for too long they disappear.

Though unlike with real garbage collection, you'd never actually delete anything. Instead, tabs would go into some kind of garbage-collected tabs and windows history where you could pull them back out if they were needed for something. You could also possibly have some kind of pinning system for things that you want to keep as a long-term reference. Maybe you could even have a nice UI for tabs or windows that are about to disappear, or concertina them if they're stale tabs in a fresh window.
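As a toy sketch of the model: each tab keeps a reference to its opener, interacting with a tab freshens it (and, more weakly, its opener), and everything else decays toward suspension. All the thresholds and decay rates here are made up for illustration:

```javascript
class TabGC {
  constructor() { this.tabs = new Map(); }

  // Each tab remembers which tab it was opened from.
  open(id, openerId = null) {
    this.tabs.set(id, { openerId, freshness: 1.0, suspended: false });
  }

  // Interacting with a tab fully freshens it and partially freshens
  // its opener, since related tabs are probably still relevant.
  touch(id) {
    const tab = this.tabs.get(id);
    if (!tab) return;
    tab.freshness = 1.0;
    tab.suspended = false;
    const opener = tab.openerId && this.tabs.get(tab.openerId);
    if (opener) opener.freshness = Math.max(opener.freshness, 0.5);
  }

  // Periodic decay: tabs below the threshold get suspended.
  tick(decay = 0.25, threshold = 0.3) {
    for (const tab of this.tabs.values()) {
      tab.freshness = Math.max(0, tab.freshness - decay);
      if (tab.freshness < threshold) tab.suspended = true;
    }
  }
}

const gc = new TabGC();
gc.open("research");
gc.open("article", "research");
gc.tick(); gc.tick(); gc.tick();  // freshness decays 1.0 -> 0.25
gc.touch("article");              // revives the article, freshens its opener
console.log(gc.tabs.get("article").suspended);  // false
console.log(gc.tabs.get("research").freshness); // 0.5
```

The interesting tuning questions are exactly the ones mentioned above: how fast freshness decays, and how far it propagates along the opener graph.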

There would be a lot of tuning to do, especially as far as what updates freshness and how that propagates between tabs, but I think it could be an interesting model. The way I use browser tabs (and, judging from The Great Suspender's install numbers, hundreds of thousands of others) isn't in the same iron-clad way that we are used to for desktop windows. It's less like "I need this information to stay here forever" and more like "I'm interested in this now and, in an hour, who knows?"


I had an interesting idea for a game the other day, based on the good old fashioned heist trope. You're riding atop a train in some kind of post-apocalyptic badlands, with gangs of baddies riding up alongside and trying to climb on board and take the train over. You run around on top of the train fighting them off, but you can also upgrade the train and add turrets and traps and so on.

Essentially it'd be an iteration of the standard tower defense formula with a few action/RPG elements. But I really like the possibilities that the setting gives you: the train never stops moving, and although the scenery changes and the baddies get stronger, there's a kind of absurdist constancy to it. It'd be fun to play with that in the dialogue of the game too. Why are you on top of a train? Where's the train going? Why does it never seem to get there?

Plus I think as an art direction you could do some really fun things with post-apocalyptic vehicular battles. Deserts, forests, icy tundra. Molemen on motorbikes. Spider robots with rocket launcher legs. I mean, the ideas just write themselves.

Charisma transform

It's an interesting quirk of our biology that some ways of representing data are much more meaningful than others. We have a very high visual bandwidth, for example, so charts and graphs are much easier for us to understand than numbers. Not so for computers, where the visual information would have to be reverse-engineered back into numbers before it could be analysed. We are similarly very good at understanding movement, but it's trickier to represent things kinematically (though there are some pretty amazing experiments already).

Rather than visualisations or animations, I think of these functional attempts to shift data into a more easily digestible form as transforms, no different to the kind of transform you might do when converting data between file formats. It's just that these formats are tailored for our own peculiar data ingestion engine. You could say that transforms like these are designed to exploit particular capabilities of our hardware.

There's one capability that I think is both powerful and underexplored: our empathy. As inherently social creatures, we are very efficient at understanding and simulating the behaviour of others. Most tools, however, we understand mechanistically, like a car or a keyboard; you know that everything that happens follows directly from something else. But mechanistic understanding only works for simple behaviour. How do you understand the behaviour of a complex network, or a country's economy? Doing a strict mechanistic analysis is too hard in many cases to be useful.

You can understand these complex behaviours much more easily if you can transform them into the empathic domain where we have specialised understanding. And if we could find a way to effectively describe the motivations of an economy, say, by casting the major forces as characters, assigning them emotions and values that reflect their real-world behaviour, I think it would make the whole thing a lot more intuitive.

What's more, our tools are rapidly exceeding the complexity where we can reason about them in anything but an abstract way. Historical computer interfaces had a simple mapping between actions and results; there's a list of commands, you type a command from the list, it does the same thing every time. But what about voice interfaces? Or search results and other name queries? Or really any system with a user model? Smarter computers are, unfortunately, less predictable computers.

I believe that to keep these complex tools usable we will need to develop a charisma transform: something that can represent the behaviour of that tool in a humanlike way that we can more easily model. I think our interfaces will have to develop personalities, or something that we can understand the way we understand a personality. I expect this will take some time and most of the early attempts to be pretty ham-fisted, but it seems inevitable that we'll have to go in this direction as systems become more complex and our capability to logically understand them gives out.

The end of knowledge

There's a great quote, sometimes attributed to Kelvin, but apparently fabricated from things said by one or more other people, that goes "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement." Of course, this was in the late 1800s, just before the discovery of relativity, nuclear physics, quantum mechanics, subatomic particles, black holes and the big bang theory. So I guess you could say that turned out to be a bit short-sighted.

Though, when you think about it, what is the answer? Will we ever know everything? I think the instinctive answer is "no", because the universe is too big, and to an extent maybe we want it to be too big. But, if you follow that through for a second, how could it possibly be true? Purely from an information theory perspective, there's no way you can encode an infinite amount of information in a finite space, so, worst case, there must be a finite description of the observable universe.

That description could be as large as the universe itself, if the universe were purely structureless and random. I'm not even sure if that's possible; if the universe used to be smaller and denser, the information now can't be greater than the information then unless we assume some external source injecting information in. Regardless, the universe seems to have structure – in fact, a lot of structure – so I can't see any reason it won't, eventually, be completely described. I think, at some point, we will know everything.

And where does that leave us? I mean, what do we do when the universe's mysteries are completely explained to us? Perhaps it will all seem pointless then. But on the other hand, there are a lot of domains where people effectively know everything now and it doesn't seem to bother them. It's possible to know everything about a given programming language, for example, or bicycle repair. I don't think people who use programming languages or repair bicycles are filled with existential dread. Or, at least, not because of the completeness of their knowledge. And many fields seem to just generate an infinite stream of new things.

In the end, I suppose I'm making an argument that essential complexity is finite, but I don't think the same is true of accidental complexity. I read an Iain Banks book where a super-advanced species lived only for a kind of futuristic Reddit karma. Maybe that's where we'll end up.

Going meta

A while back I read the most amazing NASA report. It was just after Lockheed Martin dropped and broke a $200+ million satellite. The sort of thing that you might consider fairly un-NASA-like given their primary mission of keeping things off the ground. They were understandably pretty upset and produced one of the greatest failure analyses I've ever seen.

It starts by saying "the satellite fell over". So far so good. Then "the satellite fell over because the bolts weren't installed and nobody noticed". Then "nobody noticed because the person responsible didn't check properly". Then "they didn't check properly because everyone got complacent and there was a lack of oversight". Then "everyone got complacent because the culture was lax and safety programs were inadequate". And so on. It's not sufficient for them to understand only the first failure. Every failure uncovers more failures beneath it.

It seems to me like this art of going meta on failures is particularly useful personally, because it's easy with personal failures to hand-wave and say "oh, it just went wrong that time, I'll try harder next time". But NASA wouldn't let that fly (heh). What failure? What caused it? What are you going to do differently next time? I think for simple failures this is instinctively what people do, but many failures are more complex.

One of the hardest things to deal with is when you go to do things differently next time and it doesn't work. Like you say, okay, last time I ate a whole tub of ice cream, but this time I'm definitely not going to. And then you do, and you feel terrible; not only did you fail (by eating the ice cream), but your system (I won't eat the ice cream next time) also failed. And it's very easy to go from there to "I must be a bad person and/or ice cream addict". But What Would NASA Do? Go meta.

First failure: eating the ice cream. Second failure: the not-eating-the-ice-cream system failed. Okay, we know the first failure from last time, it's because ice cream is delicious. But the second failure is because my plan to not eat the ice cream just didn't seem relevant when the ice cream was right in front of me. And why is that? Well, I guess ice cream in front of me just feels real, whereas the plan feels arbitrary and abstract. So maybe a good plan is to practice deliberately picking up ice cream and then not eating it, to make the plan feel real.

But let's say that doesn't work. Or, worse still, let's say you don't even actually get around to implementing your plan, and later you eat more ice cream and feel bad again. But everything's fine! You just didn't go meta enough. Why didn't you get around to implementing the plan? That sounds an awful lot like another link in the failure chain. And maybe you'll figure out why you didn't do the plan, and something else will get in the way of fixing that. The cycle continues.

The interesting thing is that, in a sense, all the failures are one failure. Your ice cream failure is really a knowing-how-to-make-ice-cream-plans failure, which may itself turn out to be a putting-aside-time-for-planning failure, which may end up being that you spend too much time playing golf. So all you need to do is adjust your golfing habits and those problems (and some others, usually) will go away.

I think to an extent we have this instinct that we mighty humans live outside of these systems. Like "I didn't consider the salience of the ice cream" is one answer, but "I should just do it again and not screw it up" is another. That line of thinking doesn't make any sense to me, though; your system is a system, and the you that implements it is also a system. Trying to just force-of-will your way through doesn't make that not true, it just means you do it badly.

To me that's the real value of going meta: you just keep running down the causes – mechanical, organisational, human – until you understand what needs to be done differently. Your actions aren't special; they yield to analysis just as readily as anything else. And I think there's something comforting in that.


I don't remember where – Covey, maybe? – but I once read that it's much less cognitive effort to be idealistic than pragmatic. For example, if your answer to the question "is lying ever the right thing to do?" is "sometimes", then every time a situation comes up where you might lie you have to think about it. Is this time the time I should lie? What about now? Do I have enough information to be sure? Whereas if your answer is "no, never", then it becomes very simple. Should I lie in this situation? No, because I never lie.

Obviously, it comes at the cost of inflexibility; if you never lie, you do actually lose out on opportunities where lying could be advantageous. Maybe for some things that's worth it, and I'm not sure if lying is one of them. I've personally noticed, though, that every degree of freedom I give myself is an extra bit of cognitive load that I have to endure in order to get the right result. Sometimes, like when designing something new, it can pay off, but with habits I've found inflexibility has serious benefits. But, still, how do you know which is best?

Covey (probably)'s answer is that that's what you call character: the things you're willing to give up being able to change situationally, where you'd be happy to say "I will never lie", or "I will never steal". Perhaps there aren't that many of those things, and that seems fine; each one is a fairly significant sacrifice. But there is also a benefit, in that each one also frees you from a class of things you need to consider in a situation. In essence, the more character you have, the easier your decisions will be, and the fewer situations in which you will be able to achieve an optimal result.

I think, to an extent, everyone has things they think of as character traits; if you were asked "what's your character", I'm sure you could come up with something. But I wonder how many of us would be happy being pinned to a specific character, or would be willing to say that we act in-character all of the time. And if there are some qualities that you admire, some things that you would be proud to have as a character, how hard would it be to commit to them? And would it make your life easier to do so?

Mixing signals

What turning on an amplifier sounds like

I was very frustrated today by power-saving technology. I thought it would be a nice idea to hook up my TV, sound system, and selected other electrical macguffins to a master/slave powerboard. That way, I turn on the TV, everything comes on at once. I turn off the TV, everything turns back off again. More convenient and super environmentally conscious/dolphin-friendly. How could I lose?

Well, it turns out that all this environmental friendliness is starting to trip over itself, because most devices now start up in standby mode. So you can't just power them on to power them on, you have to power them on and hit a button on the remote. Some devices, by pure dumb luck I assume, will accept the switch already being pressed in when they start. If that's the case, you can hack the behaviour you want if you figure out a way to hold the button in permanently. I may or may not have a G-clamp affixed to the sound system in the loungeroom.

However, the amplifier in my room was less cooperative. I spent some time searching for a way to trick it into powering on when it powered on, to no avail. Everything seemed hopeless. In a gesture akin to burning a goat carcass, I even emailed the manufacturer's product support address. I have not yet received the bland non-reply I know is my due, but when it arrives I'll be sure to assume a suitable pose of supplication, head towards the Philippines, and pray that I never sink so low again. However, while drowning my sorrows in the depths of old product manuals, I made a curious discovery.

The amplifier has two remote-control ports, one out and one in, so that you can synchronise it with other hardware made by the same company. Of course, I don't have any hardware made by the same company, but, hey, a control port is a control port. The ports were standard 3.5mm audio jacks, so I figured I could break that out to a couple of wires, and then... learn enough signal processing to reverse-engineer it, I guess? I was actually at a bit of a loss about how to proceed. The public transport-themed Bus Pirate seemed like a pretty good bet, but I'd have to order it and I still wasn't confident I'd necessarily get it to work.

But then it suddenly hit me. If it looks like an audio jack, maybe it sounds like an audio jack! I don't need to understand the protocol, I just need to be able to replay it. So I plugged some earphones into the IR-out port and damn near blew my eardrums. Turns out digital signals are not designed for casual listening. Still, that's definite progress! I eventually managed to hook up a USB sound card and a line cable, recorded the "sound" of me turning the amplifier off and on, swapped the cable to the IR-in port and played it back.

And, would you believe it? It worked! I don't think in my entire life I've ever had something deserve to work less and still work. I am gobsmacked by my own hillbilly ingenuity. I am now controlling my amplifier by playing sounds at it from a tiny ARM box that detects when the TV is turned on via HDMI-CEC. I put the remote control audio files up on Github. So if you have an HK-series amplifier, or you just want to hear what a remote control sounds like, that might be useful to you.
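For the curious, the glue on the ARM box can be sketched in a few lines of Python. To be clear, this is an illustrative reconstruction rather than the actual code: the `on.wav` filename is hypothetical, and the exact wording of `cec-client`'s log lines varies between versions, so check your own output before trusting the parser.

```python
import subprocess

def tv_powered_on(line):
    # cec-client reports power transitions with lines along the lines of
    #   "TV (0): power status changed from 'standby' to 'on'"
    # (the exact wording is an assumption; check your own cec-client output)
    return "power status changed" in line and "to 'on'" in line

def watch_and_replay():
    # Follow cec-client's monitor mode and, whenever the TV comes on,
    # "play" the recorded remote-control signal into the amp's IR-in port.
    proc = subprocess.Popen(["cec-client", "-m"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        if tv_powered_on(line):
            subprocess.run(["aplay", "on.wav"])  # hypothetical recording name

# A quick check of the parser against a sample log line:
print(tv_powered_on("TV (0): power status changed from 'standby' to 'on'"))  # True
```

Something shaped like `watch_and_replay()` would run at boot; I haven't called it above because it blocks forever waiting on `cec-client`.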

But, uh, maybe turn your volume down first. Remote controls are loud.

The Internet Debates

I've written a bit before about new kinds of performance made possible by the internet. And in that vein I was thinking a bit today about the future of debate. Sadly, formal debating seems to be basically irrelevant these days. Outside of political debates, which tend to be fairly heavy on rhetoric and light on substance, there isn't much of a public debate scene. And what little there is is often dominated by personality and spectacle, rather than ideas and good argument. Worse still, the closest we get to taking advantage of the enormous interactive scale of the internet is taking the same debate format and streaming the video to the internet.

My idea for doing better is called The Internet Debates. The debate has two teams and an impartial moderator. The moderator can decide the format, but let's assume four rounds of 7 minutes per team, for a total of just under an hour of content for the whole debate. In reality, the debate itself would be constructed over eight weeks because, instead of being live, each round would be a video collectively created and edited from the combined abilities of the internet. Or, at least, anyone on the internet who wants to participate.

A team's round goes in two parts: firstly, anyone who wants to can submit content for the round. These will usually be short (<1 min) videos making a particular point. Anyone who has submitted a video can vote on everyone else's to help rank them. In the second part, anyone can submit a candidate edit combining enough content to fill the 7-minute round. Those are also voted on by anyone who has submitted, and the end result becomes the team's video for that round.

In a sense, it's pretty similar to Kasparov v World, but World v World and tweaked to suit the format. Hopefully, much like in the Kasparov game, strong voices on each team would rise to the top and there would be a healthy discussion about how best to present the arguments. At the end, the moderator would declare a winner, but you'd also do a pre- and post-poll delta winner to figure out which team changed the most minds.

I think it would be particularly interesting watching the debate develop over the course of a few weeks. Perhaps people would be driven by feeling that the side they believe in most strongly isn't being represented well, and go from watching to participating. It seems like the kind of thing you could get very invested in.

Contextual beliefs

Through the course of our lives we slowly build up a particular understanding of the world, and a particular set of beliefs that allows us to navigate that world. This is a process that works pretty well, but it seems like sometimes we have a lot of difficulty passing that wisdom on. I've very rarely had the experience of someone else describing a particular system or belief to me and being able to accept it wholesale. This seems strange, because something that's true should be easy to identify and accept once it's pointed out to you.

For sure, some of this is just plain stubbornness or NIH syndrome: why accept someone else's system when you could figure it out for yourself with the minor disadvantage of time, effort, and opportunity cost? However, I don't think that alone is enough to explain why it's often difficult to pass along beliefs. It seems to me that the issue is one of context: that often our beliefs only make sense against the background of particular situational realities or hidden assumptions, and the belief falls apart when removed from that environment.

This leads to a second, perhaps worse form of failure to pass on beliefs: passing on bad beliefs. I think of this as the problem that shows up when telling someone "just ignore what other people think of you" or "just be yourself". That is very easy to say once you have a well-developed understanding of social rules and the limits of acceptable behaviour. If you actually ignore what other people think of you, you may well end up yelling "I'm bored" in meetings or peeing on the street. And it's probably unwise to just be yourself if your self wants to hump a stranger's leg in public. These beliefs only apply in a particular context.

And sometimes I think that context is widely shared enough that you can say things like "just be yourself" and most people will be able to apply that belief in a way that preserves its wider applicability. But I think the bigger that idea is, the less likely that is to be true. Your standard universal life advice beliefs are going to cut across so many different situations that it would be very difficult to construct one that avoids all contextual pitfalls. However, I think there is a much harder way to make universally applicable beliefs: eliminate context.

Science, by and large, tries to follow this process. The goal is to construct universal laws: not just true the few times we tried them, but true everywhere. This process turns out to be extremely difficult. The laws of mechanics, for example, went from only true in everyday situations involving friction and gravity, to only true for reasonable-sized bodies at sublight speed, to true at any speed for very large things or very small things, but not both at the same time. And there have been a lot of physicists working on mechanics for quite a long time. Being exhaustive is exhausting.

Perhaps the best we can do with our own beliefs is just to call out their limitations and recognise the assumptions and context they live in. I would love to see a Theory of Everything for everyday life, but it does not appear one is coming any time soon. Maybe if all the scientists can wrap up physics and move on to pop psychology we're in with a shot. In the mean time I'd be in a much better place to trust the current state of the art for life advice if it was a bit more modest in domain.


Over the course of this week I'm going to be trying an experiment, so all my posts will be one paragraph long. This is partly an exercise in lowering the high water mark, partly to see what I can get done within the constraint, and partly just for experimentation's sake. There is a real temptation to get stuck in one particular groove with something I do consistently, so I think it will be valuable to inject some instability once in a while. Wish me luck!


I think it's great that you can turn everything into dollars, but there is sometimes a question of whether you should. One thing that has always seemed strange to me is that fines are essentially a punishment you can trade. Similarly, I've heard that the upper class in China sometimes hire people to go to prison for them, which is more ridiculous but only by degree. The danger is that when we abstract things into dollars they become fungible; interchangeable; without significance. Mostly that's useful, but I think that some things need to be different. Otherwise, what's the point? With software, as with money, it's easy to get so caught up in the abstraction you forget why you invented it in the first place.

The Goose and Gander License

I've been spending a lot of time in SDR-land lately, which, owing to the popularity of the GPL there, has mostly turned into an exercise in being angry. I don't think it's a bad license per se, but I have serious reservations about interpreting "derived work" to mean anything that links with your code. I was in an ornery mood, so I put together the Goose and Gander License (or GGL), which turns the tables somewhat and gave me a chance to revel in the ridiculousness of viral licensing. I can certainly see the appeal now.


Sometimes I've observed a particular kind of scope confusion that trips people up. You put your metadata in your data and then end up thinking total nonsense. A good example is Infinity. Infinity isn't a number, it's a statement about the behaviour of numbers. It's metadata, and you shouldn't treat it like data. My favourite trick when I see these is adverbing them. Ban the word "infinity". Instead, only use "infinitely". A number can't be infinite, but a series of numbers can go infinitely. I think we should apply the same thing to random. There's no such thing as a random number, and we should stop saying it. Instead, a number can be generated randomly.
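As a quick illustration, Python (like IEEE 754 floats generally) will happily let you hold the metadata in a variable, and then behaves strangely the moment you treat it as data:

```python
import math
import random

inf = float("inf")

# "Infinity" doesn't survive being treated as a number;
# it's a statement about numbers wearing a number costume.
print(inf + 1 == inf)          # True: adding one changes nothing
print(math.isnan(inf - inf))   # True: infinity minus infinity is undefined

# Likewise there's no "random number" you can point at; randomness is
# a property of the generating process, not of any one value.
x = random.random()            # a number generated randomly...
print(0.0 <= x < 1.0)          # True: ...but the value itself is an ordinary float
```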

Under pressure

A funny thing I've noticed is that, although I really dislike stress, I work very well under pressure. When the hammer drops and there's a looming deadline, it's easy to keep moving, keep making decisions and do the best I can under the circumstances. This seemed like a paradox, until I recently realised that the difference has a lot to do with practice vs performance. In a high pressure situation it's difficult to think, but easy to act. So maybe it isn't so much a preference as a balance; low pressure for creativity, high pressure for output.

Make it Rain

A great idea came up the other day: a Make it Rain app. Instead of giving someone physical money, or making a boring online transfer, you could point Make it Rain at them and flick virtual bills at their face up to the value you want to send. The transfer itself could be backed by PayPal, Bitcoin or similar, so you can use it without the other party having to install anything. However, if you do receive money with the app open there would obviously be a flurry of bills tumbling down the screen. What a time to be alive.

Post mortem

With this I conclude my week-long experiment in brevity. I can't say I particularly enjoyed it, but it was definitely worth trying. One observation in particular: it's easy to assume, especially for computer folks, that you can just change the underlying implementation and keep the high-level behaviour the same. But it's just not true. Expressing certain things is easier in certain formats, and changing the format changes the kinds of things you express. For what I want, I think longer (but not too long!) is often (but not always!) better.


Things have been a little tricky since Monday. I've been particularly busy lately, and I think adjusting back to longer posts is something I could have prepared for a bit better. My writing mainly runs on habit, and shaking up that habit was perhaps dangerous. To top it all off, normally I'd be writing in advance which would absorb a bit of variation, but I've fallen out of doing that because of technical issues that meant the posts weren't appearing when I posted them early.

I think in this case the main kind of preventable failure is not heading off the near misses and various warning signs early enough. Actually, I wrote about grading things on a "fail scale" some time ago, which I neglected to do for this. If I want my writing to be consistent it's not enough that it merely happens, it needs to happen comfortably. That way when things go wrong I'll still have some margin to work with.

Sound Capsule

I've been having a lot of fun with the ESP8266 lately and exploring the various consequences of being able to wire up arbitrary things with Wifi for very little money. One wild idea that popped into my head is somewhere between Geocaching and a public art project, which I call a Sound Capsule.

A Sound Capsule is a public Wifi network, but when you connect to it there's only one webpage you can reach (like those awful airport wifi networks). The page has a little audio player on it that plays some sound, and a button that lets you record your own audio for the next person to listen to. However, you can only record one thing at a time; the previous one gets deleted. The device itself would be a self-contained solar/battery type rig so you could hide the physical components away somewhere safe; you only want people to interact with it as a mysterious Wifi network.
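The server side can be sketched in plain Python. The real thing would run on the ESP8266, with DNS tricks to make the captive portal catch every page, all of which I'm waving away here; every name below is made up for illustration.

```python
import http.server

message = b""  # the one stored recording; a new one replaces it

PAGE = b"""<html><body>
<audio controls src="/message"></audio>
<form method="post">
  <input type="submit" value="Leave a message">
</form>
</body></html>"""

class CapsuleHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/message":
            body, ctype = message, "audio/wav"
        else:
            body, ctype = PAGE, "text/html"  # every URL leads to the capsule
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        global message
        length = int(self.headers.get("Content-Length", 0))
        message = self.rfile.read(length)  # overwrite the previous message
        self.send_response(303)
        self.send_header("Location", "/")
        self.send_header("Content-Length", "0")
        self.end_headers()

# On the device you'd bind to port 80 and call serve_forever();
# not done here because it blocks.
```

The whole trick is that the server holds exactly one `message`; every POST silently destroys the last visitor's recording, which is what makes it a capsule rather than a message board.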

I think it could be pretty interesting seeing what people do with it. Would it just be all random trolling or would people actually leave interesting messages? Maybe you could have a whole conversation with a stranger. Or maybe people would just make silly noises. It really feels like it could go either way.


For some reason injustice has always bothered me more than I would expect. Looked at as utilitarian style goodness-vs-badness, there are a lot of bad things that aren't necessarily unjust, and there are a lot of unjust things that have fairly mild consequences. Ordinarily I'm happy enough with utilitarianism as a basis for ethics, but there's just something that bothers me in particular about something that's not just wrong, but unfair.

A bit of reflection reminds me of a very similar feeling I get when I'm working on or learning about a system that is designed badly. The concepts don't work, the primitives are badly chosen, or the thing is just plain ugly. I get frustrated by that because I'm looking at the gap between what that system is and what it could be, and it feels like a dissonant chord sounds. The closer-but-not-quite it is, the more frustrating it is.

I think in essence my feelings about justice are a specialised form of that same system aesthetic. Bad things happen all the time, but only a small number of bad things make you realise that a societal system that gave rise to them is severely broken. I've come to think of justice as nothing more complex than good system design: a just system is how a well-designed system feels from the inside.

Motte and bailey goals

A friend I met up with a little while ago told me he was trying to form a habit of writing some code every day. It seemed like a pretty good idea and, since I'd already had some success with the habit of writing words every day, I figured I'd give it a try. Much like writing, code has the nice quality that it can be big or small; a commit can be a simple bug fix or some documentation, all the way up to a whole new feature or new project. However, I ended up doing nothing but little crappy fixes and then gave up entirely.

A while back I learned about the motte-and-bailey doctrine, a delightful construction where you put forth a big, indefensible proposition ("consciousness is caused by quantum effects") and, when you are challenged on it, retreat into a small, defensible one ("quantum effects appear in the brain"). Once the challenge has gone away, you can return to making the more bombastic claim again. The name is from a medieval defensive structure that worked much the same way.

It strikes me that goal-setting could be a less ethically dubious way to use that technique. A grandiose goal is much more motivating than a modest one. Unfortunately, modest goals are achievable and grandiose ones are often not. Can we have our cake and eat it too? Perhaps, if we create motte and bailey goals. Think about a big bailey-goal ("add a new feature to one of my projects every day"), but only actually commit to a much smaller motte-goal ("write any code at all").

This seems to me the major difference between how I think about writing on my website vs how I thought about the code-every-day challenge. I have a bailey-goal of sharing ideas or things I've made, and a motte-goal of just write something. The ambition of the ideal keeps me motivated to do it, and the modest actual commitment makes it feasible. Code every day, on the other hand, is pretty uninspiring by itself.

I think I'll try again soon with a more ambitious bailey and see how that goes.

The mathematician's dilemma

There were a number of reasons mathematics was my most difficult subject, not least among them a peculiar habit common to maths lecturers. They would speak with a kind of soporific slowness when introducing a topic, then blitz through the workup on the board so quickly you couldn't follow it at all. It would have been hilarious if it wasn't so disorienting. It's like they had been told at some point, "you need to slow down so that the students have time to understand", and the lecturers had dutifully slowed down everything – except the actual mathematics.

It took me a while before I realised the problem: they didn't actually understand what was difficult! I mean, these were serious researchers teaching first-year mathematics, roughly the equivalent of Tolkien teaching people how to use adjectives. Whatever difficulty there was had long ago been internalised, leaving only a kind of faint confusion. What, exactly, are you finding difficult here? It's "big red house", not "red big house". Yes, of course there's a rule. You can look it up or whatever, but you shouldn't need to. Can we move on to the interesting stuff and stop worrying about which way around the adjectives go?

As you might imagine, this shows up in many more places than mathematics lectures. It's important not to forget that programming is just typing with more rules, but I see a lot of people try to teach programming without spending enough time on syntax or other simple mechanics we take for granted. I've made the same mistake myself. Inside each of us, it seems, is a tiny frustrated mathematician who just wants to skip to the good part. But, if we want to convey our ideas clearly, we have to give time to the obvious.

How else will it become obvious to others?

Life manual

I had a fun conversation today about the various little semi-scripted social interactions you have to perform, how nobody really teaches you the rules, and how easy they are to get wrong. For me, at least, young adulthood was a series of wacky hijinks caused by a lack of this kind of knowledge. Go up to the counter to order at a café, get told to sit down and wait for table service. Sit down at a café, wait for half an hour before I realise nobody's coming. Never ever ever jump the queue, except for sometimes when you're just giving back a form or something. Or at a bar where it's complete anarchy.

Anyway, it would be kind of fun to put together a life manual, with little how-to bits on the most mundane and obvious everyday stuff. How do you catch a bus? How do you eat at a restaurant? How do you buy food at a supermarket? Attend a football game? Deal with salespeople? Order drinks in a pub? All these things are simple enough, but involve a bunch of little steps that make you look a bit silly if you don't know them.

Fortunately for me, I was blessed with passable social skills, a single culture to learn, and a lack of anxiety about looking dumb. But if you're not so lucky I can imagine those situations being pretty intimidating. And regardless it seems like a pretty inefficient way to learn. It'd be a great help to have a manual that could tell you things like whether you pay when you order takeaway or pay when you pick it up.

Actually, I'm still never sure about that one.


In many places it seems to have become the norm to celebrate overwork, sleep deprivation and chronic stress. Ordinarily if you reveal that you are unhappy, unproductive and causing yourself mental and physical harm, people will be concerned and try to intervene. However, the expected response to this kind of suffering is usually praise. You haven't had a full night's sleep in months? Wow, you are so dedicated! I wish I could sacrifice that much.

I've heard a lot of reasons for this phenomenon. Obviously there's a certain degree of bravado, no different to the initiation rituals found in many other cultures; you show your strength by willingly enduring harsh conditions. There's a kind of cargo-cult signalling: productive people are busy; busy people must be productive; more busy = more productive, and so on. But there's another good reason that I haven't seen mentioned: deliberate self-sabotage.

You see it often enough when people present something. They'll talk it down before showing it to you as a kind of hedge against your opinion. Oh, it's not done yet. The code's a mess. I threw it together in an hour so don't expect much. You sabotage it before someone else can judge it, which gives you safety in lowered expectations. A crappy app can still be pretty good for a half-finished prototype. Better still, there's a certain degree of clever misdirection; if your work isn't representative of what you could do under better circumstances, then any criticism isn't really criticism of you or your abilities.

But that kind of sabotage has to be done one situation at a time, and only applies to certain things. What if you could sabotage everything you do? Say, if you could be overworked, overtired and just doing the best you can under the circumstances? That would mean everything you do is not reflective of what you're really capable of. And if you make a dumb mistake and screw something up, you can offer a wistful "oh, oops, I must have done that because I'm so tired and busy".

With that said, I think there is some real benefit in having a kind of creative liability shield at times. Much as corporate limited liability encourages innovation and risk-taking, limiting how much the things you make reflect on your qualities as a person can be a powerful tool for creativity (with similar caveats about ethics and responsibility). I've heard it described as the freedom to fail, but I'd extend that to "the freedom to fail without being a failure". And, despite its negative qualities, sleep deprivation does let you fail without being a failure.

I believe that if we had better options for creative liability shields, it would reduce the appeal of self-sabotage. Maybe that wouldn't be enough to stop the culture of glorified overwork on its own, but it'd be a good start.

Surgeons, pilots and code monkeys

I learned an interesting thing recently, which is that surgeons don't do surgeries that they don't want to. I mean, obviously you can choose to just not do your job at any job, but surgeons don't get fired for refusing to do a surgery. In fact, it's an important bit of dialogue between physicians and surgeons: what would it take for you to be willing to do the surgery? Nobody orders a surgeon around because, when it comes down to it, they're the ones who put the knife in, and whatever happens afterwards is on their conscience. Nobody else can take that burden, so nobody else can tell them what to choose.

The same is true of pilots. A pilot is considered in command of the plane; they are directly responsible for every life on board. A pilot's decision trumps anyone and everyone else's. Air Traffic Control can say "don't land yet" and the pilot can say "it's my plane and I'm landing, figure it out". Doing that without a good reason is likely to lose you your pilot's licence. However, it's not only acceptable but obligatory if the situation merits it. As a pilot, those are your lives in the back of the plane, and nobody else can absolve you for what happens to them.

But software does not have this same sense of sacred responsibility. More often the conversation looks like developers saying "we shouldn't do it this way", the management or client saying "well we want you to do it that way", and the developers saying "okay, your funeral". Usually that is a figurative rather than literal funeral, and just means losing money or time. But there are famous examples of the other kind too. As a developer, can you really say you are not responsible for the bad decisions you accept? Are you not wielding the knife or holding the controls?

The current state of the art says no, developers are not like pilots or surgeons. The responsibility for bad decisions lies with management, and you can feel safe in the knowledge that someone else is liable for the bad code that results. Perhaps this makes sense in the classical programmer-as-overpaid-typist environment, where your job was not to think but to turn someone else's thoughts into code. How can you be responsible if you are just one of a hundred thousand code monkeys banging away at Big Blue's infinite typewriter farm?

But modern software development is not like that. Developers are expected to be autonomous, to understand requirements, to plan, to design, make decisions, build, test, rebuild, deploy and demonstrate. Today's developers are more like pilots or surgeons than anyone cares to admit. They have particular professional knowledge and skills that nobody else has, which gives their decisions a moral weight – a responsibility. If that professional knowledge says "this decision is a bad decision", that developer is every bit as obligated to stand up for their profession and refuse to do the work.

Perhaps that seems overdramatic, but software is growing faster and doing more than any industry in the last century. It's hard to even find something that can't be ruined by bad software. The software in your batteries can burn down your house. The software in your smoke alarm can turn your life into a dystopian horror film. The software in your phone can monitor every sound and movement you make. The software in your car can stop your brakes from working. The software in the cloud can leak your naked photos, arbitrarily remove your data or lock you out of it, and reveal your personal information to repressive governments.

The question isn't whether the people who make these things should be considered as professionally responsible as a pilot or surgeon. The question is: how can you even sleep at night knowing that they aren't?


It's no secret that developers, especially web developers, are often trend-obsessed, and every bit as flighty as fashionistas or foodies. I can't think of a single library or framework I use that's older than a few years, and some are only months or weeks old. Developer communities tend to move quickly too. For a long time developers hung out on BBSes, then newsgroups, then Slashdot, Reddit, and these days Hacker News. I think the answer to whether there'll be a next one after Hacker News says a lot about how trend-driven developers are.

Many people would point to all this churn as change for the sake of pointless change, much like any fashion, just chasing novelty. Others would say that it's just a fast-moving field, and these are genuine improvements to the state of the art. I actually think both of those are true, and I'd like to suggest a third factor: trendiness as a personality test. Constant change gives a fertile environment to test how adaptable and how comfortable with risk a developer is.

Web development and startup culture is often predicated on the idea of rapid, unpredictable change at the business level. This is the culture that brought us the "pivot", which is a restructure executed at the speed of a ballet turn, and the "lean startup", which is a business that figures out what product to build after selling it to customers. A developer who can succeed in that environment would have to be adaptable and embrace risk almost to the point of parody.

You could filter for that kind of person by asking them, of course, or by hoping that the natural sieve of the industry would filter out people who weren't suited to it. But why rely on that when you can do so much with just your choice of programming language or framework? Pick a language that changes quickly and only developers who can adapt to rapid change will work for you. Pick a framework that requires ongoing learning to keep up with and you'll get developers who constantly learn.

If you're lucky, your technology will mature as your company does and you'll be able to keep using it. But not always. Twitter famously made the transition from a Rails backend to Java and Scala as their company got too big to failwhale. This leads to the reverse situation: as a larger, more risk-averse company you can use the same filter to reject the hotheads and fashionistas. They're not going to want to work with your ten-year-old mission-critical Java stack.

And a good thing too. I'm a big fan of Node.js, but the day I see it on the ISS is the day I uninstall Kerbal Space Program.


I started wondering something while messing around with those little ESP chips: how hard would it be to make a server that responds with a particular message using every protocol? I mean you would set it up to say something like "robots rule OK", or do its best to deliver this hilarious dog picture. And then you could connect to it via anything: web, mail, telnet, ssh, dns, gopher...

Obviously every protocol would be pretty difficult, because there are just so many, and a lot that you wouldn't even be able to find documentation for. But, maybe the entire well-known port range. Depending on where you look, that's between 250 and 1000 services. Which, at first blush, seems pretty difficult. But I bet a bunch of them would be trivial, and after a while you'd find a lot of similarities.
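As a feel for the easy end: for any plain TCP protocol, "respond with a message" is just bind, listen, accept, send. Here's a minimal Python sketch of that shape, using OS-assigned ephemeral ports as stand-ins (binding the real well-known ports needs root, and real protocols need proper handshakes rather than just blurting the message):

```python
import socket
import threading

MESSAGE = b"robots rule OK\r\n"

def start_banner_server():
    """Listen on an OS-assigned port and answer one connection with the
    banner. A real version would bind the well-known ports (needs root)
    and speak each protocol properly, not just blurt the message."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free one
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()
        conn.sendall(MESSAGE)
        conn.close()
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()[1]

# Three stand-ins for three different services, all saying the same thing.
ports = [start_banner_server() for _ in range(3)]
responses = []
for port in ports:
    with socket.create_connection(("127.0.0.1", port), timeout=2) as conn:
        data = b""
        while chunk := conn.recv(64):
            data += chunk
        responses.append(data.decode().strip())
print(responses)
```

The protocols with handshakes, binary framing, or UDP transports would each need their own handler, but a loop over a table of (port, handler) pairs is probably most of the architecture.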

A silly project, maybe, but it'd be pretty fun to go spelunking through the docs for all these old protocols and understand them well enough to pull together a hello world style implementation. Plus, who knows, maybe someday you'll be in a situation that desperately needs the ability to transmit a dog picture over pop2.


There's a particular trap I fall into at times, particularly when I've agreed or planned to do something without really thinking the consequences through. What happens is I intend to, say, reply to an email, but I don't intend to reply to it right now. However, later on I still don't intend to reply to it right now. In fact, through a series of decisions to not do the action now, I don't do it at all, and all the while I'm still convinced that I'll do it. I call this paradoxical state of affairs induction-blindness.

Induction, in the mathematical sense, was best described to me as a three-step dance as follows: if I eat one banana [1], and every time I eat a banana I eat another banana [n->n+1], then I will eat all the bananas [all n]. It's a kind of sister to recursion, in the sense that you can build a proof for any n by recursively applying the second step. What makes induction interesting is that it's more than just repeatedly applying that step, it's a proof from the fact that you could. In a sense it's a kind of meta-proof, a statement about the system itself.
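Taken literally, that unrolling really is mechanical: for any particular n you can generate the full chain of steps, and induction proper is the meta-statement that the generator succeeds for every n. A toy sketch:

```python
def unrolled_proof(n):
    """Build the concrete banana-proof for one particular n by applying
    the inductive step n - 1 times to the base case. Induction itself is
    the statement *about* this function: that it succeeds for every n."""
    steps = ["eat banana 1 (base case)"]
    for k in range(1, n):
        steps.append(f"eat banana {k + 1} (step applied to banana {k})")
    return steps

print(unrolled_proof(3))
```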

So, applied to a goal, induction-blindness is a failure to go meta. It's thinking about the goal and the steps, but not realising that your system for getting to the goal from the steps doesn't work. If I don't feel like replying now [1], and later I'll still feel like I do now [n->n+1], then I will never write the email [all n]. Despite those steps being trivial and obvious, I often miss that crucial step and fail to induct appropriately.

Perhaps this is an inevitable weakness caused by the mismatch between formal logic and fleshy brain reasoning, but I still think there are ways to recognise it. Most crucial is the inductive step, the n->n+1. Notice that it is perfectly reasonable to put off the email if I'm currently being eaten by an alligator. Trying to write an email then would be distracting and counterproductive. But the difference is that the alligator situation doesn't recurse. There's no reason to think that one alligator attack will mean another alligator attack, unless you're some kind of alligator farmer or in one of those infinite Greek mythological punishments.

The key, then, is to recognise when a situation is self-perpetuating in that same way. Induction-blindness is caused by the mistaken belief that change means difference. But if tomorrow is going to be the same as today, then anything you don't do today you're not going to do ever.

Goal substitution

It's difficult to overstate how good Daniel Kahneman's Thinking, Fast and Slow is. It's a book with so much to offer for understanding human behaviour and decision making. One particularly eye-opening phenomenon described early in the book is attribute substitution, where you take a hard question and replace it with an easier, hopefully equivalent one.

If asked "is Dave nice?", a good answer would involve some kind of deep analysis of Dave's character, but maybe that's too hard to do on the spot. Instead, you substitute an easier question like "can you easily remember a time when Dave did something nice?". So you answer the substitute question but, crucially, you still feel like you're answering the original question. This optimisation works well most of the time, but can lead to some pretty wacky results when it fails.

A similar idea I've been thinking about is goal substitution. Let's say you want to have fun. One good way to do that is to call up some friends, go out and have a good time. But maybe you don't do that, and instead lie on the couch all evening watching TV or reading junk internet. If you tried both options, you'd rate the first one as much more enjoyable. So why don't you do it? My theory is that you substituted the goal of "do something fun" with "do a leisure activity". The second goal is easier than the first, so you achieve it instead.

This also happens with work. You want to get some good work done, but it's easy to substitute that goal with "do something that feels like work". The problem is, lots of things can feel like work even if they aren't work. Reading emails, checking up on news, researching some technology you might use – these things definitely feel like work, in the sense that they're mentally stimulating and related to work, but may not actually move you closer to your real goal.

In both cases, the issue isn't necessarily redefining your goals. Often that can be useful to avoid overloading yourself or taking on an unnecessary amount of context. It's fair to redefine "go to space" as "assemble and manage the best team of people for going to space". The problem is when this happens subconsciously. You still feel like you're achieving the original goal when in fact you're doing something different. Much like with attribute substitution, this works well when it works, but can misfire badly.

I think goal substitution is a particular issue because a lot of entertainment does not have to be fun, merely compelling. The main evaluation that entertainment creators make is to measure consumption, not enjoyment. So in a sense entertainment has evolved to target the "feels like leisure" substitute goal very effectively. With such a glut of available fun-like and work-like activities, it's very easy not to notice whether you're actually having fun or doing work.

As for solutions, the best I can offer is the same advice I've heard about attribute substitution: make sure the decision takes time. Substitution works because it takes a hard problem and gives you an easy alternative. If you force yourself to spend five minutes thinking about "is Dave a nice guy?" you won't feel the urge to substitute because there is no easy alternative.

Similarly, taking time before doing something to figure out if it meets your actual goal should remove the allure of the easier substitute goal. I admit that's easier said than done; mostly the decision of what to do next happens on autopilot. We seem particularly prone to optimisation, even when it does us more harm than good.

Tree of Knowledge

I've always thought that there's a sad disconnect between the state of knowledge in research and the state of knowledge of the public. Climate change is the poster child of scientific ignorance, but there are lots of other, subtler examples in health, psychology, dietary science, and so on. Basically anywhere public opinion intersects with science tends to be a disaster. Surprisingly, many scientists don't seem to think much of pop science writers and science journalists, though without them I doubt most people would learn any science at all.

What's missing is a robust bridge between the kinds of questions scientists ask and the kinds of questions the public asks. The closest things so far are The Straight Dope, reddit's /r/askscience and the various sciencey Stack Exchanges, but I think we can do better. The problem is that any explanation quickly turns into a list of citations which you are, realistically, unlikely to verify. These sites translate science into English, but they don't give you any way to explore or learn beyond what you've been given. It's a one-way street.

My idea for an alternative is called the Tree of Knowledge: an arbitrary store of scientific papers and results, interlinked not just by references (the current state of the art), but by dependencies. Each paper has a page which links to previous results or ideas it depends on. That is, which other papers would invalidate this paper if they were invalidated themselves. This is the treelike structure of the Tree of Knowledge. Crucially, at the farthest extent of the tree would be the leaves: answers to nonscientific questions, articles and lay summaries of scientific knowledge.
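The dependency structure is just a directed graph, and the interesting query ("what would fall if this result fell?") is a transitive walk of the reversed edges. A minimal sketch, with all the paper names invented for illustration:

```python
# A hypothetical fragment of the Tree: each paper maps to the papers it
# depends on (every name here is made up).
deps = {
    "circadian-rhythm-2008": [],
    "sleep-debt-2011": ["circadian-rhythm-2008"],
    "sleep-timing-2014": ["circadian-rhythm-2008", "sleep-debt-2011"],
    "leaf:eight-hours-question": ["sleep-timing-2014", "sleep-debt-2011"],
}

def dependents(graph):
    """Reverse the edges: paper -> everything that relies on it directly."""
    rev = {node: set() for node in graph}
    for node, ds in graph.items():
        for d in ds:
            rev[d].add(node)
    return rev

def invalidated_by(graph, paper):
    """Everything that would fall over if `paper` were invalidated,
    found by walking the reversed edges transitively."""
    rev = dependents(graph)
    fallen, stack = set(), [paper]
    while stack:
        for d in rev[stack.pop()]:
            if d not in fallen:
                fallen.add(d)
                stack.append(d)
    return fallen

print(invalidated_by(deps, "circadian-rhythm-2008"))
```

Knocking out the foundational paper takes the whole branch with it, leaves included, which is exactly the property that would keep the lay summaries honest.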

The process would look like this: you want to know "does it matter what time I go to sleep or just that I get eight hours every night?". Someone has already answered this question (or you ask it yourself and it is then answered). The answer is not just someone making stuff up, but a distillation of the current state of scientific knowledge on the subject. The answer links back to different papers, which you can follow to see a community-edited summary of each paper, its current validity, the full text of the paper itself, and even more links back to the papers it depends on and forward to papers (and leaves) that depend on it. In this way you explore up and down the Tree of Knowledge, following each branch as suits your interests and seamlessly going back and forth between research and pop science.

The great thing about this is it could be a tool that benefits not just the general public but scientists as well. As well as making it easier to get a sense of the state of research before diving into the papers themselves, the Tree would help scientists to popularise their work in a way that still preserves its integrity. It's my belief that, beyond just thinking it's not their game, many researchers are distrustful of pop science and science journalism because of their tendency towards inaccuracy and sensationalism. The Tree of Knowledge could popularise science verifiably, and in a way that's still bound up with the rigour that makes science work.

Also, yes, technically it wouldn't be a tree because a paper can depend on multiple other papers, but Directed Acyclic Graph of Knowledge doesn't quite have the same ring to it.


I have recently been trying yoga and it's something of a surprise how calming it is despite being physically strenuous at times. Though I suppose I shouldn't be that surprised; I've always found running quite meditative in its own way, as well as most other forms of repetitive physical activity. It feels like exercise occupies most of your brain, making it easier to focus, though I have no idea if that's actually true. The yoga instructor said that it should be meditative, that yoga without meditation is just aerobics.

Something about that phrase really got me thinking about physicalism. It's one thing to accept that your mind lives in your brain, but your brain also lives in your body. So you might accept "I always feel sad, therefore there is something wrong with my brain", but "I always feel sad, therefore there is something wrong with my lungs" doesn't sound plausible. But why? Your lungs deliver oxygen to your brain. There's as much reason to think depression could be a lung problem as that a misfiring engine could be a fuel pump problem. In fact, there is evidence that living at a high altitude (with less oxygen) might cause depression.

In that light, I wonder if our view of the body as composed of various systems – muscular, skeletal, nervous, digestive and so on – is leading us astray. Those systems are not like our nice theoretical systems, with nicely decoupled and separable components. It's very difficult to analyse a single bodily system in isolation, and it seems like a problem just about anywhere can be caused by a problem just about anywhere else. Which is to say, maybe meditating with your body makes a lot more sense than it seems.

The funny thing is that I'm pretty sure most serious yoga people are dualists, which makes me wonder how they ended up with such a physical kind of spirituality.

Bug vs feature

One of the classic software development distinctions is between a bug and a feature. A bug is unwanted behaviour that you have, and a feature is wanted behaviour that you don't have. Which is to say that in many cases, the distinction between bug and feature is only one of perspective. Is "the car loses control above 150km/h" a bug, or is "stable at speeds over 150km/h" a feature? The answer, at least in software, is defined either formally or informally by the expectations of the people involved.

A similar idea holds true about expectations in general. Is "I want to be a famous comedian" a feature you are hoping to implement, or is "I'm not a famous comedian yet" a bug that you need to fix? The answer to that will say a lot about your attitude when working toward that goal. If your current situation contains a bug, it is unsatisfactory to you, and you will need to fix the bug for the situation to be acceptable. If it's a feature, things are acceptable already, but they could be even better in the future. To put it another way, the bug/feature distinction is normative; unlike a feature, a bug shoulds you.

So I imagine it's no surprise that I consider bug-centric thinking very dangerous. It's a close relative of the high water mark problem, in that your perspective can have a profoundly positive or negative impact on your satisfaction with doing something even when the result is exactly the same. Defining things as bugs means things will be okay, but they're not okay now.

And if the future is just a series of nows, that could leave you in a pretty bad position.

Additive and subtractive

I wrote a while back about the idea of a 3d unprinter, and mused on the benefits of comparing and combining additive and subtractive processes. These two forms are the classic opposites of manufacturing; additive processes start with nothing and build up until it works, while subtractive processes start with an existing block of material and cut it down.

This same distinction can be a useful way of thinking about software design. We often speak about libraries and frameworks, which are both vehicles for making reusable code and sharing it with others. These terms are nebulously defined and to some extent a matter of opinion, but to me the difference is additive vs subtractive.

A library is a set of functions you can incorporate à la carte. A math library might include a fast exponentiation function, but that doesn't obligate you to use the square root or trigonometric functions. By contrast, a framework gives you a fully formed set of abstractions suited to a particular domain. Ruby on Rails, for example, provides you with an amazing pre-baked set of ideas distilled from the authors' experience in website development. If your problem is like their problems, you can save a lot of time and effort by going with these prefabricated designs, to the point where many websites can be generated to a first approximation with just Rails' setup and scaffolding commands and no code at all.

If you want to do something new with a library, everything's additive; you just write some more code and that's it. With a framework, the built-in abstractions are meant to cover the idea space completely. It doesn't usually make sense to just add code. Instead, you have to find a place for that code. The existing abstractions could be modified in some way, or even replaced. Regardless, the process is subtractive; you start with the general answer and cut it down to fit your particular case.

That's not to say subtractive is bad, or additive is always good. The thing is that additive is complex in proportion to the complexity of your project. If your goal is to make a new operating system, doing it from scratch is going to take longer than you likely have to spend. Subtractive, on the other hand, is complex in proportion to the difference between your project and the ideal abstract project the framework was designed for. Using Rails to make a webpage is so easy it's basically cheating. Using Rails to make a realtime message queue is harder than just not using Rails at all.

The mistake a lot of people make is not thinking about that abstract project. Framework designers make it by having an unreasonably wide or vague idea of what they're covering ("don't worry, we made a framework for everything"), and framework users make it by not considering whether that abstract project actually matches up with their real one. Too often frameworks are chosen because of their popularity or familiarity rather than how well they fit the goals of the project.

Either way can be wrong, but I think subtractive has the most potential for harm. With additive programming you can at least get an idea for how far away you are. Bad subtractive programming is full of subtle pitfalls and dead ends caused by the impedance mismatch between what you want and what the framework designer assumed you wanted. In the worst case you basically have to tear the whole system down and build it back up again.

At that point it should become obvious that additive would have been easier, but that's an expensive misstep and one that's easily avoided if you know what to look for.


It's inevitable that you will experience loss. Opportunities disappear, things get lost, projects end or stop, people go away, sometimes for good. To top it all off, at some point you will experience the final, ultimate loss. To say this is an uncomfortable truth is an understatement; it is repulsive to the point where people mostly refuse to consider it at all. Nobody wants to admit that their star developer will be hit by a bus, or their business might fail, or the very consciousness they are using to think this may one day be switched off as easily as a lightbulb. But it happens all the same.

So what would the opposite look like? If instead of avoiding loss, we embraced it – expected it? Planned for it the way we do shopping trips or birthdays? If it wasn't taboo to say "we're all going to die, maybe even today", or "don't forget, team, things could go from great to insolvent in a matter of weeks"?

We actually plan for loss very well when the loss isn't personal and we can see beyond our own aversions. Software systems are often well prepared for loss. Not just designed to avoid loss, though that is also true; systems are usually designed to minimise failures, be redundant, and have backups. But beyond this, resilient software is designed to expect loss of data, loss of connectivity, and even loss of the process itself. Good software has the expectation of its own sudden demise built in as a design goal.

Perhaps the best general advice for building resilient software is to reduce its state. State is a kind of temporary baggage: the information you carry around with you while doing something that you throw away once it's done. Complex software often ends up accidentally holding onto a lot of state, and as a result becomes very sensitive to losing it. You've got a bunch of applications open on your computer right now that will just disappear if it's turned off. But well designed software tries to minimise temporary state, either by making it permanent or avoiding it altogether. Achieving this completely, the holy grail, is called being stateless.

And so too for us. The state we carry is all our incomplete actions, our half-finished projects, our "I should really get around to"s, the people and things we take for granted because we will make it up later. But what if there is no later? What if that last fight was the last fight? This is it, there's no more, and all you have left is whether you can look back, at last, and be happy with it. That may not be true today, but some day, tragically, inevitably, it will be.

To be stateless is to face into that future instead of away from it. To keep your loose ends tied, to leave nothing important unsaid, and to take the opportunities today that could disappear tomorrow. And to sleep soundly in the knowledge that if it all ended tomorrow, you made the best of the time you had.


I've been thinking about different things you could do with the ESP8266. I've got a few smaller projects lined up but there's one idea I keep coming back to that could actually be really amazing. What I want is an all-in-one power interface. It would be one of those through-plug form factors like most existing wireless switches and power meters, and basically connect your power point to wifi.

You plug it in and set it up (probably using a temporary wireless network or something) and then it connects to your wifi. From there, you can turn the switch on and off, measure its power usage at any point and see graphs over time. The whole thing would be controlled from a web interface running on a machine in your house, which would coordinate every switch on the network. There'd probably be a prebuilt all-in-one control device too, for people who don't want to install anything.

There's all kinds of cool stuff you could do with that. Off the top of my head, turn your lights on and off from your computer or phone, measure power-hungry appliances over time, figure out if you left the heater on at home and switch it off. It'd also have an API so you could just do arbitrary automatic things like turn on your amp when your TV comes on – assuming your hardware supports it.
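That "amp follows TV" rule might look something like this, with the device objects stubbed out. The class, names, threshold and readings are all invented for illustration; a real version would read the meter and flip the relay over the network:

```python
class PowerPoint:
    """Stand-in for one wifi power point. Everything here is invented;
    the real object would talk to the device over the local network."""
    def __init__(self, name):
        self.name = name
        self.watts = 0.0   # last power reading
        self.on = False    # relay state

def sync_follower(leader, follower, threshold=20.0):
    """The 'amp follows TV' rule: switch the follower on whenever the
    leader is drawing real power, and off otherwise."""
    follower.on = leader.watts > threshold
    return follower.on

tv, amp = PowerPoint("tv"), PowerPoint("amp")
tv.watts = 85.0                 # TV switched on and drawing power
print(sync_follower(tv, amp))   # amp comes on: True
```

The nice thing about exposing this as an API rather than baking rules into the firmware is that the coordination logic stays on the machine running the web interface, where it's easy to change.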

This would be better than existing power switches and meters in three ways: firstly, it'd be open source and open hardware, not locked down like most of the existing stuff. You could reprogram the ESP on board, and although it'd come with a web-based management platform you wouldn't be locked into it. Secondly, it'd be designed for hobbyists, so even if you didn't want to use the wifi chip you could connect to some pinouts and use it as a simple power switch interface without worrying about wiring mains power wrong and burning your house down. And thirdly, it'd be much, much cheaper.

Current home automation devices seem to run at hundreds of dollars, but I can't really think of a reason why the materials for something like this would cost any more than $10. At that kind of price range you could buy enough to rig up every important device in your house. Maybe you could even build one that replaces your wall switches. You'd need to be really careful to get the security right, though.

The magic eraser

There's a tool I've sometimes found useful when making hard decisions, especially in situations where there's a strong social obligation or status quo influence. Sometimes these effects can go beyond just difficulty making the decision, and actually make it difficult to even figure out what you want, or prevent you from recognising that you can make a decision at all. It's like the difficulty of reaching the right outcome stops you from even seeing it. This technique is designed to deal with that, and I call it the magic eraser.

Here's how it works: you're feeling bad about something, it's annoying you, making you angry, worried, or otherwise affecting you negatively. Just imagine you had a magic eraser. You think of the thing you don't want to deal with and – boom – erased. Working at a job you hate? It's gone. Event you don't want to go to? Cancelled. Unhappy relationship? You never even met. The magic eraser doesn't just get rid of the thing, it does so without any consequences or responsibility. You can then just enjoy your newfound freedom from whatever was vexing you.

Is this realistic? Of course not! But the point is you should be able to make the two decisions separately. Firstly, if this thing just blinked out of existence, would you be better off? Secondly, if so, how difficult would it be to actually get rid of it? It's very easy to lump those two together and end up not even considering the possibility that you may be better off without something that is costly to change. Still, costly is a different thing from impossible, and it's better to know what the best outcome would be even if it's not achievable right now.

I think there would be a complementary tool, a magic wand, for when you are having trouble imagining a positive outcome. There doesn't tend to be as much cultural resistance to that kind of aspirational thinking, though. The eraser is good because it can be very difficult to think destructively, even when it's the right thing to do.

Let it go

When I was younger I remember having a disagreement with a friend about Harry Potter. At the time there'd just been a spate of knock-offs, most famously a series featuring the young Barry Trotter. Trotter got away with it by being a parody, but more generally those knockoffs suffered legal repercussions from JK Rowling and the publishers of the series. I thought this was ridiculous.

Sure, you can own the words on the page themselves, and it is perhaps reasonable to claim that Harry Potter™ and The Harry Potter Universe™ are brands that need protection from imitation, so any knockoffs would need to clearly spell out that they are unauthorised. But to say "no, I own these characters, I own this universe, I own the culture that springs from it" is just crazy to me. I later found out this is a pretty rare opinion.

A related anecdote: Mother's Day (the US one) was invented by Anna Jarvis, who eventually disowned the holiday, protested against it, and in the end "wished she would have never started the day because it became so out of control". More generally, you often hear of movements or ideas started by someone who later wishes they could take it all back. The reason is usually the same: the movement changed, it lost its original goals, it's out of control now.

Both Rowling and Jarvis experienced something that many people experience when they create something: a desire to own that thing, to control it, and to ensure it changes in the way they intend – if at all. To me, that seems like the worst kind of overprotectiveness. It's helicopter parenting your ideas. My belief is that the things you create take on a life of their own when they are released. You transfer those ideas into the minds of your audience where they live on, spread, and transform into new ideas, new stories, new movements.

To prevent that is cruel and selfish, not just to the people whose creativity you restrict, but to the ideas themselves. You create sterilised ideas – eunuch ideas – robbed of their most basic imperative: to create more ideas.

Of course, I believe it's necessary to have certain limited protections over the things you create, both in copyright law to protect your commercial interests in a specific work, and in trademark law to allow you to control what can be officially associated with your name. However I don't believe there should be any protection over ideas, characters, worlds, or "look and feel".

I mean this not just legally, but morally. The desire to keep control over things you create makes sense as long as you see yourself as the creation's owner. But you can't own ideas, and you aren't the idea's master. You are its custodian, its guardian, its – dare I say – mother. You brought the idea into the world. You shaped it, you helped it grow until it was strong enough to stand on its own.

And now it's time to let it go.

Worst unsurprising case

It's good advice when planning something to try to think about and head off the worst case scenarios before they happen. If your event is outdoors, you should think about rain. If you're inviting an important keynote speaker, have a plan for if they bail at the last minute. If you're depending on something, you should be prepared to not have it. But how far do you take this? At a certain point you end up worrying about hash collisions and wolf attacks, and that's just pointless.

So, sure, your event could be cancelled because of a fluke meteor strike or your speaker could have a heart attack during the presentation, but most people don't make plans for those situations. And usually when people say "worst-case scenario" they either explicitly or implicitly indicate it's not the worst worst case, which would presumably involve brimstone in some way, but the worst reasonable case. The worst case that isn't just ridiculous. That's a bit wishy-washy for me, though, so I'd like to suggest using the worst unsurprising case.

Surprising is something that catches you completely off-guard. A meteor strike is surprising. But one of your team suddenly getting sick? Basically expected. However, things can be unexpected and still not surprising. If you drive a car you don't expect to get in an accident, but I also wouldn't call it surprising. Your first reaction to someone getting in a car accident is "how terrible", not "how could this possibly have happened?" Surprising isn't when your server loses power, it's when it loses power at the same time as your redundant backup in a different location, and the third-party monitoring system you set up to catch that mysteriously fails at the same time.

You could reasonably point out that what constitutes a surprise is subjective and ill-defined, but I think that is actually a feature of this way of thinking about risk. Perhaps for NASA, nearly any failure is surprising. Their culture has them thinking a lot about really weird risks, up to and including the impact of aliens on society. Conversely, there are times when severe failures are totally unsurprising, to the point where many people involved in a project know it's doomed to fail.

Which is to say that if the way people respond to risk is cultural, perhaps it's not so strange to assess risk culturally too. Nobody's going to take risk management seriously if they're doing it to prevent mass wolf attack or something equally ridiculous. The worst unsurprising case lets you address the level of risk that people actually give credence to, either so you can know what to plan for, or so you know when they're not being imaginative enough.


I've been a day behind without making a failure post for a couple of weeks, as part of an experiment to see if not making a failure post and just catching up later would work. Evidence suggests it doesn't, and as of today I'm two days behind. Oops.

I was initially against the idea, but I thought it might be nice to have the flexibility to soft-fail and recover. One aspect I'm particularly iffy about is that I didn't flag that expectation in advance in any way. Changing my expectations in advance is one thing, but doing it on the fly is a recipe for trouble.

So my new plan to hopefully avoid this kind of meta-failure is to be more specific about what my metric is for failure and stick to that. I'm still posting in arrears until I can figure out the best way to deal with CouchDB's foibles. That means a post by 11:59:59 UTC on the stated day. I'm still going to give myself the soft-fail loophole to see how I like it, so I might lie and post twice the next day in exceptional circumstances. However, if I don't fix it by the next day's post then that's a fail and I'll make a failure post.

Once I get the posting in advance thing working properly, I'll try getting rid of all that and have a strict posting schedule to see if that works better. I suspect it will, but if I need more flexibility then I'll have to figure something else out.

To be real

There are a lot of things we believe, but only some of them we believe completely and unflinchingly, like that when we drop something it falls to the ground. Others we don't really believe, or we only believe in a kind of consequence-free way. Beliefs like "everything is connected" are unfalsifiable and thus consequence-free, but you can also find other less obvious beliefs like "I'm going to get in shape" or "I'll backpack around Europe someday". They look like real beliefs, but if you don't actually act on them and they don't have any consequences then they're not real.

The inimitable Kurt Vonnegut used the idea that an entire religion could be built out of "harmless untruths". These are the kinds of unreal beliefs, like ghosts or the Loch Ness Monster, that don't really have any consequences. Now that you believe that ghosts are real, what are you going to do differently? Are you going to buy special ghostbusting equipment? Unlikely. Probably you'll go through your daily life exactly as before but occasionally say "I believe in ghosts" and that's that.

In fact, you can tell if a belief isn't real because attempts to make it consequential are amazingly uncomfortable. If you act like someone's belief in gravity is real, for example by challenging them to drop some stuff, or betting them money that an object will fall upwards, they'll happily do it – who doesn't like free money? But ask someone who wants to get in shape to tell you their specific plan, or bet someone who wants to go to Europe a lot of money that they won't go by a certain time, and you'll quickly get a picture of whether their belief is real.

Some unreal beliefs are better off discarded, but there can also be a lot of benefit in reforming them into real beliefs. Maybe you really do want to backpack around Europe, and the fact that you haven't made any plans, researched places or drawn up a budget would be quickly rectified if you felt like it was really happening. In fact, that's kind of the definition of real: something that actually causes you to act or decide differently. If your goals and aspirations aren't real, if you don't feel like what you're doing is real, there's no reason for you to try to succeed.

Paul Graham did a great bit on why startups die, where he points out that while on the surface startups die from running out of money or a founder leaving, the root cause is usually that they've given up. They don't go out in a blaze of glory, they just sort of shrivel up and disappear. My reading is that, for the founders, the startup stops seeming real. They might still say the words, but they stop acting as if it's going to succeed and, like an imaginary best friend, it just eventually vanishes.

This, I believe, is the secret sauce behind the famous Steve Jobs Reality Distortion Field. It seems like being out of touch with reality would be seriously maladaptive, but evidently it worked well for him in business. I think the clue is in the name: when Steve believed in something, it was reality. He acted like it was true and he convinced other people to do the same. And that shared illusion was necessary for the ideas to succeed.

Though notice I say necessary and not sufficient. Bluster isn't belief, and you can't make something real just by believing in it. But it is necessary to believe in what you're doing. More than that, it's necessary that what you're doing is real: not just the kind of thing you talk about, but something you act on and rely on as unwaveringly as gravity.

Code escrow

There's been a big push in security-conscious projects, particularly Debian, to have what's called reproducible builds. The problem they solve is that open source lets you verify that the source code does what you expect and there's nothing nefarious in it, but how can you get the same assurances for the pre-built binaries that most people download? Only by having a set of steps that deterministically produce an identical set of binaries, so if you trust the source, and you trust the build process, you can trust the binaries.

The security of reproducible builds is mostly an open-source thing, but it occurs to me that it could also be relevant even when the source is closed. Sometimes closed-source projects want to make their source available in a limited way, for example as part of a security audit. However, even if you trust the auditor, this still leaves a problem in that there's nothing stopping a malicious company from adding or changing things before the audited code is built into the binaries that end-users run.

And, although I haven't seen it, it might be useful for a company to use a kind of deferred open-source license to make their own shorter-term copyright. A problem with that is guaranteeing it would actually happen. The company would need to contribute its source regularly to a trusted third party, but you'd still have the same issue as with the auditors: how do you trust that the code that they've been given is what you're getting when you download the application?

I think the answer to all of these is a kind of code escrow service. If your closed-source project uses reproducible builds, you provide access to the source code and the build chain to the code escrow. They audit the code, or just hold onto it, or whatever trusted source code thing you need done. Whenever you publish a new binary, the code escrow can certify that their version of the build generates the same binary.
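
The certification step itself is simple once builds are deterministic: the escrow rebuilds from the source they hold and checks that the result is byte-identical to the published binary. A minimal sketch (the stand-in byte strings are obviously invented):

```python
import hashlib

def digest(data):
    """SHA-256 fingerprint of a build artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def certify(published_binary, escrow_binary):
    """The escrow's check: does their build of the escrowed source
    produce output byte-identical to the published binary?"""
    return digest(published_binary) == digest(escrow_binary)

# Toy stand-ins for the two builds' output bytes
published = b"\x7fELF...binary contents..."
escrow_build = b"\x7fELF...binary contents..."
assert certify(published, escrow_build)
```

Everything interesting lives in making the build deterministic in the first place; once it is, the escrow's certification is just a hash comparison.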

For something like a security audit, that might mean that they only verify certain versions, or certain components, but for a deferred open source project it would mean that you can trust that the entire source code used to create that version will become available in the future.


A term I've heard a lot in software development is friction: the things that slow you down or make something more difficult but don't stop you. For example, having to sign up before you can add items to your cart is friction. Nothing's stopping you from signing up, but it just makes everything harder. Similarly, slow page loads are friction, more buttons to press is friction, and waiting for your online shopping to arrive is friction.

I think what makes friction such an important concept is how disproportionately it affects behaviour. Google found that increasing search latency by 400ms decreased the number of searches people made by 0.6%, and had a persistent effect even after the latency went away. The friction trained the users not to search as often! Akamai ran studies on web behaviour showing that 47% of users expected a site to load in two seconds or less, and 40% would abandon a site that took more than three.

And anecdotally, I've noticed my behaviour changes quite drastically depending on fairly minor incidental difficulties. Before purchasing an e-reader I was fairly skeptical that it would make much difference, and at the time I wasn't really reading many books. However, after I bought it I read about one book every week for years afterwards. Clearly the minor difficulty of going to a bookshop or library once in a while was enough to stop me from reading entirely. I've noticed that I also tend to be happy to pay substantially more when I'm buying something online if I can get it sooner.

I suspect all of this is explained fairly well by standard cognitive biases. Every bit of friction increases the time between when you want a thing and when you get it, and perhaps increases the risk that you won't get it at all. It's well-known that we massively discount the value of future rewards, so perhaps just the time increase due to friction is enough to cause radically different behaviour. Alternatively, there may also be a component of risk aversion; people prefer the certainty of a process with fewer and easier steps.
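
That discounting is usually modelled hyperbolically: a reward's value falls off as 1/(1 + kD) with delay D, so the curve is steepest right at zero delay, which fits the outsized effect of small amounts of friction. A quick sketch (the impatience constant k is an assumed value, not a measured one):

```python
def discounted_value(amount, delay_days, k=0.3):
    """Hyperbolic discounting: perceived value falls as 1/(1 + k*delay).
    k is an assumed impatience parameter for illustration."""
    return amount / (1 + k * delay_days)

# The same reward at different delays: the first day costs the most
now = discounted_value(100, 0)        # 100.0
tomorrow = discounted_value(100, 1)   # ~76.9
next_week = discounted_value(100, 7)  # ~32.3
```

Note the asymmetry: going from no delay to one day wipes out nearly a quarter of the perceived value, while each later day costs progressively less.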

Either way, it seems like there are basically free wins to be made in taking an existing thing that people like and just making it faster and easier by removing incidental difficulty. And, conversely, some interesting applications in taking things that are too easy and adding friction back in.

Goalpost optimisation

I recently wrote about a failure that was at least in part attributable to changing my goals on the fly. I've had flexible goals cause problems in the past too, and I've generally heard the advice that it's better to set clear goals in advance. But why is this the case? It seems like it would be good to be flexible, and indeed many situations do demand that you change your goals when they no longer make sense. However, there's definitely a limit to that flexibility, and nobody suggests changing your goals in real time.

I think the reason for this is that our brains are very good real-time optimisation machines. If you can give your brain a rapidly-changing output controlled by some set of inputs, it will optimise that output for you, no worries. And not necessarily just one value: it's often possible to optimise over whole sets of outputs at once. The whole thing is really impressive. But, like any optimisation process, it can lead to weird results if you don't clearly specify the boundaries of the optimisation.

A perhaps apocryphal story I heard once involved a team using a computer optimisation process to design a particular circuit on an FPGA. The circuit the computer designed was smaller than the one designed by humans, but nobody could figure out how it worked. Anything they changed seemed to break it, and even changing to a different FPGA chip stopped it working. The running theory was that it was exploiting magnetic interference or physical quirks of the chip because nobody had thought to tell it not to do that.

In a similar way, I think it's easy for us to optimise too much. That's part of the reason creative constraints are so useful, because they stop us from trying to solve everything at once and make it easier to focus. But the other part is that sometimes you come up with a bad solution if you start going outside the bounds of the problem. The most optimal solution with no constraints might not actually be useful.

When you allow yourself to change your goals, you're making moving the goalposts one of the solutions available to you. And that's not automatically bad, but the easier it is to change your goals, the more your natural optimisation processes can use the goal as one of the inputs it can change. In the worst case, if changing your goals is easier than taking actions to achieve them, it will always be a clever optimisation to change the goal instead of the actions. That's not to say you'll always do that, but you'll be fighting against your optimisation process when you don't.

I suspect there's an important principle in there for optimisers in general: don't let the output feed back into the input. Otherwise your optimiser is just optimising itself, and presumably that ends with doing nothing at all.
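
A toy illustration of that feedback problem: minimise the gap between where you are and a goal. If only your position is adjustable, the optimiser moves you all the way to the goal; if the goal is adjustable too, gradient descent happily spends half its effort moving the goalposts instead. (The numbers here are arbitrary, chosen just to show the shape of the failure.)

```python
def optimise(x, goal, goal_movable, steps=100, lr=0.1):
    """Gradient descent on loss = (x - goal)**2. If the goal itself is
    one of the adjustable inputs, the optimiser can 'solve' the problem
    by moving the goal rather than x."""
    for _ in range(steps):
        grad = 2 * (x - goal)
        x -= lr * grad
        if goal_movable:
            goal += lr * grad  # moving the goalposts is now a valid move
    return x, goal

x1, g1 = optimise(0.0, 10.0, goal_movable=False)
x2, g2 = optimise(0.0, 10.0, goal_movable=True)
# Fixed goal: x climbs all the way to 10.
# Movable goal: x and the goal meet halfway at 5 – the loss is just as
# low, but only half the actual progress was made.
```

The loss hits zero either way, which is exactly the trap: by the optimiser's own measure, moving the goalposts was a perfectly good solution.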


In the city it's quite common to see new cafes and restaurants pop up for 6 months to a year and then disappear again. This isn't terribly surprising if you know anything about the industry; the margins are low, rent and wages are expensive, and for many owners it's a lifestyle business. It starts as someone's dream of the perfect little cafe, runs on savings and free labour by the owners, and eventually when that runs out it closes down.

I find that process particularly interesting because it doesn't just affect that one business, but other businesses that compete with it. Let's say you're a well-run cafe that's been in business for years. You're competing not with other businesses but with this entire process by which a perpetual rotation of cafes appear and disappear. Each individual cafe may not be sustainable, but the metasystem of cafes sustains itself because each new cafe brings a new sucker with fresh capital. And in a weird way, consumers are actually better served by that metasystem because it can deliver cheaper coffee.

I've noticed a similar thing about representative democracy. In theory, a candidate who simply promised to change their mind in accordance with popular opinion would be the ultimate representative. However, candidates who change their minds are often considered inferior "flip-floppers" lacking in principle. Despite the questionable assertion that changing your mind is bad, it is perhaps rational to reject flexible representatives. If you have enough candidates, you can pick one who has always reflected your views. Instead of expecting representativeness at the per-politician level, you let the political metasystem select candidates who reflect current public opinion.

Evolution, also, seems to prefer this metasystemic level of operation. Evolutionarily stable strategies appear where, for example, 20% of the population will steal and 80% of the population will not. Those ratios are stable at the point where nobody has an incentive to change strategy: an additional thief gains less by thieving than earning an honest living, and one more honest person would do better if they turned to crime. Notice that it would be equivalent if the entire population acted honestly 80% of the time and stole 20% of the time, but that doesn't seem to happen.
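
That stable ratio can be sketched with replicator dynamics, the standard model for this kind of thing. The payoffs below are invented to put the equilibrium at 20% thieves: thieving pays well when rare and badly when crowded, and the population fraction drifts until the two strategies pay equally.

```python
def thief_payoff(p):
    """Thieving pays well when rare, badly when common (assumed payoffs)."""
    return 2.0 - 5.0 * p

HONEST_PAYOFF = 1.0  # flat payoff for honest work

def equilibrate(p=0.5, steps=10000, rate=0.01):
    """Replicator-style update: the thief fraction p grows when thieving
    out-pays honesty, and shrinks when it doesn't."""
    for _ in range(steps):
        p += rate * p * (1 - p) * (thief_payoff(p) - HONEST_PAYOFF)
    return p

# Stable where 2 - 5p = 1, i.e. p = 0.2, from any starting mix
```

Start the population anywhere between the extremes and it drifts to the same 20/80 split, because that's the only point where neither strategy has an incentive to switch.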

And perhaps, though many transhumanists would never hear of it, there is a similar metasystem at play in society and the role of death. It is comparatively rare that a person will completely change their perspective or their ideas through the course of their lives. While it would be great if this wasn't the case, at the moment it doesn't matter much because society is a rolling metasystem much the same as cafes or democracy; the individuals aren't important, it's the general behaviour over time. Old people take their old ideas with them, while new people bring new ones in.

But, to agree with the transhumanists, we can't rely on this forever. I believe that we will eventually conquer death one way or another. And at that point the metasystem stops. If we haven't sorted out a way to bring that flexibility down into our own systems by then, perhaps we never will.

Dependency hell

I remember there used to be a lot of talk about dependency hell, when you have lots of software packages all depending on each other. It's usually considered a good thing to have lots of small packages instead of large monolithic ones, but it led to problems like multiple conflicting versions of a single package, or dependencies that went around in a big circle. These days the tooling has gotten a lot better so you don't hear much about dependency hell, but I still like it as a metaphor for systems that are over-burdened with interconnections.
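
The circular case is easy to make precise: package dependencies form a directed graph, and a cycle in that graph means there's no valid order to install anything in. A minimal depth-first cycle check (the package names are made up):

```python
def find_cycle(deps):
    """Depth-first search for a cycle in a dependency graph.
    deps maps each package to the list of packages it depends on."""
    visiting, done = set(), set()

    def visit(pkg, path):
        if pkg in done:
            return None
        if pkg in visiting:  # back-edge: we've looped around
            return path[path.index(pkg):] + [pkg]
        visiting.add(pkg)
        for dep in deps.get(pkg, []):
            cycle = visit(dep, path + [pkg])
            if cycle:
                return cycle
        visiting.discard(pkg)
        done.add(pkg)
        return None

    for pkg in deps:
        cycle = visit(pkg, [])
        if cycle:
            return cycle
    return None

# A big circle: libfoo -> libbar -> libbaz -> libfoo
deps = {"libfoo": ["libbar"], "libbar": ["libbaz"], "libbaz": ["libfoo"]}
```

Modern package managers do essentially this (plus version solving) up front, which is a big part of why dependency hell is talked about less these days.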

In Stateless I wrote about the value of minimising the temporary state you carry around, and I see dependencies between things as one form of that. We've all experienced that ridiculous string of dependent goals where your lightbulb's broken so you go to replace it, but you're out of bulbs so you go to the shops to get a new one, but you realise you need to put money on your bus card so you go online, but the wifi's being slow for some reason, and suddenly you find yourself changing a lightbulb by fiddling with your router settings.

That example is obviously silly, but in other situations we are willing to accept just as much complexity. Often our explicit plans look a lot like a chain of dependencies: "we'll do X, then Y, then Z". Or the way we understand people and social situations: "if Dave does A, then I'll do X, but if Dave does B then I'll need to do Y and Z". If you combine the two of those you can very quickly approach a level of dependency hell only imagined by the Windows developers of the 90s. Sometimes those dependencies are unavoidable, but there are times when you can redesign your plans to remove them.

One of the easiest tricks is giving your tasks some way to bail out. So "I'll finish writing this, then I'll do the washing" can be replaced with "I'll write this until 4pm. At 4 I'll do the washing." Now instead of washing being dependent on writing, they're two distinct tasks. If your writing takes too long it won't affect the washing, so you've made the two independent. However, this only works if you have a plan for how to bail out of the task when it runs over. With writing you can just put down the pen, but if you're halfway through a conversation with a friend it might require a little more finesse.

Another trick for external dependencies is to come up with ways to make invariant decisions. Instead of "Once Mary calls back I'll know whether to go to the cafe or not", an invariant version would be "I'll go to the cafe either way, and if Mary can come too all the better." It's invariant because your decisions don't vary based on the external outcome. You could also say "Since I haven't heard back from Mary, I'll abandon that plan and start doing something else. If she calls later we'll make a new plan." The bailout trick can be useful here also: "I'll wait for 5 minutes and then do something else".

It's not always possible to eliminate dependencies in this way, but I think it's fruitful to try. Each extra dependency is another constraint that you have to consider, and an extra way that your plans can fall apart. Escaping dependency hell makes your plans more resilient in the face of failure and frees you up to concentrate on the things you're doing, rather than their complex connections to everything else.


I've been thinking a bit about goalpost optimisation in the wake of my most recent writing failure. It's tricky to avoid optimising your goals if those goals only exist in your own head, or don't seem real. The antidote is to commit to those goals in a way that prevents you from easily changing them later.

The most common way to do this is having some kind of commitment partner. In more traditional work that often takes the form of a manager or boss who sets your goals. For people in more creative arrangements it's usually other people in a similar situation: a mentor, or a group you can commit in front of. I've tried those methods and, while they work, there's something inherently unstable about needing a particular person or group to set your goals properly. What if the person gets busy, or you don't see the group for a while?

Writing goals down seems to get a little bit of the way there, but I don't think it goes far enough. I'd like to suggest a better answer: public commitments. Unlike a particular person or group, the idea of public isn't particularly vulnerable to change. Public commitments are something akin to performance. In a sense, you're performing your goals in public rather than practising them in private. I think this method could be a significant improvement over commitment partners; although people are still better for discussing goals, a public commitment is a stronger and more reliable way to set them.

What I would really like is a platform for making public commitments. Obviously I could write about them here, but I'd rather keep this space for more interesting things. You could presumably use social networks or other things to make public commitments, but they aren't really designed for it. A public commitment platform would give you a particular page that would list your current commitments, as well as integrate with social networks and provide widgets you could embed to show your commitments in whatever public place you feel is useful.

So I'd like to publicly commit to making that platform. Not all of it all at once, of course, but something that at least has the bare functionality of collecting and showcasing what you've committed to. It'd be wonderful to commit to making a public commitment platform using that public commitment platform, but that obviously runs into certain difficulties. Instead, I'll commit to it here.

Next Monday's post will be a commitment platform.

Should propagation

I wrote before about shoulding and the ways that obligatory thinking is pathological. Locked inside every should is either something that you want to be doing, or something that you don't actually have a good reason to do. Unfortunately, focusing on the obligation makes it hard to see whether there is any motivation of substance underneath. Worse still, I've realised lately that shoulds have a few ways of propagating into other areas and obscuring your motivations there too.

The first is that shoulds propagate through dependencies. If you feel obligated to take out the garbage, but first you are going to finish reading your book, the obligation spreads: now you have to finish your book so you can take out the garbage. Maybe you truly enjoy the book, but if you don't really want to take out the garbage then every page you read is bringing you closer to an unpleasant situation. On the other hand, there's also now a degree of pressure on finishing the book that you didn't have before. Reading it used to be something you wanted to do, now it's something you have to do.

The second way is through substitution. It's not uncommon that you will bump one activity for another, better activity. With an obligation-free activity, that's no problem. I wanted to read, now I want to go for a walk. But when a should is involved, it propagates implicitly: you must be at least as obligated to do this new thing as the old thing it replaced! I understand you can't come to my party because you have to work; I recognise your superior obligation. But if you ditch my party because you're too tired, I'd better not find you online later in the night. You are obligated to sleep by substitution!

There are a couple of ways I can think of to mitigate these. I covered reducing dependencies before, but another possibility is to just reorder things as much as you can, so the shoulds happen first and can't contaminate anything else. I covered something like that in motivation-driven development. As for mitigating substitution, perhaps the best I can suggest is to separate the two decisions: drop the old thing unconditionally, then pick up the new thing.

But all this is just more reason to avoid shoulds, in my opinion.

The pipeline illusion

There's a neat trick that turns up in a lot of places where there is a large delay between an action and its result. Let's say I want to send my friend a postcard but it will take a week to arrive. It's kind of annoying that it takes so long, but we could make it better by pipelining. If I send my friend a postcard once it will take a week, but if I send my friend a postcard every day they will receive a postcard every day. Magic!
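
The trick is that pipelining improves throughput without touching latency. A tiny simulation of the postcard example makes the shape of it clear:

```python
from collections import deque

def simulate(days, transit=7):
    """Send one postcard per day, each taking `transit` days to arrive.
    Returns how many postcards arrive on each day."""
    in_flight = deque()
    arrivals = []
    for day in range(days):
        in_flight.append(day + transit)  # today's postcard arrives later
        arrivals.append(sum(1 for a in in_flight if a == day))
        while in_flight and in_flight[0] <= day:
            in_flight.popleft()
    return arrivals

arrivals = simulate(14)
# Days 0-6: nothing arrives (the pipeline is filling).
# From day 7 on: one postcard arrives every single day, even though
# each individual postcard still took a full week in transit.
```

That week of silence at the start is the same "rewardless toil" phase as the first weeks of exercise: the pipeline is filling, and nothing comes out until it does.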

Pipelining is used successfully in areas as diverse as video games, CPU design and space travel – and one place where I don't think it's recognised: motivation. If you've never run before and you go for a run today, it's probably going to be pretty hard work. You get back from your run, tired and sweaty, and are you the fit bronzed god you hoped? Nope. In fact, there's been no appreciable difference in your fitness. Only after weeks of rewardless toil do you start to see results. But from that point on, your previous exercise works its way through the pipeline. Every time you exercise you get fitter immediately – or so it appears.

I think an underappreciated benefit of habits is that they exploit this effect. Not only do you reinforce a particular behaviour through repetition, but after a while you can feel like you have removed the gap between the behaviour and its results. Everything feels like it pays off right now which, thanks to the vagaries of our hyperbolic discounting, is much more powerful than things that pay off later.

But as much as it might seem like you've eliminated the delay, it's important to remember that it's only an illusion. If something happens today, your friend still won't get a postcard about it for a week. And if you stop exercising today, you won't suddenly get out of shape either. If you change your exercise routine, you might even be disappointed and conclude that it hasn't done anything. That's the dark side of the pipeline illusion: it messes up your ability to make good decisions.

The worst is when you're comparing a pipelined task to a non-pipelined one. If you've been working on an existing project long enough that your old gains are still catching up with you, and you're thinking about starting a new one, you get lots of weird and paradoxical effects. Working on the new project seems like an unquestionable loss; why start something new for a reward later when you could do the existing thing for a reward now? And by induction, it may never seem worth starting anything.

I'm not sure if you can think yourself out of the pipeline illusion by just recognising it. It would be nice to think so, but I suspect it's one of those things that creeps into your decisions without you noticing. For habits that you want to keep or change, I think it would just come back to an exercise in discipline to keep things going until the pipeline catches up. As for their effect on risk, perhaps the best answer is to set another habit: to seek out and embrace discomfort.

And if you did that for long enough, I imagine the benefits of seeking out discomfort would start to feel immediate too.

Process limits power

Sometimes, especially when dealing with government bureaucracy, it feels like all of the rules, forms and arbitrary steps are designed to make your life difficult. In fact, you could be forgiven for thinking that process is being used as a way to exercise power, as a kind of weapon or mechanism of control. While it's true that bureaucracy often makes you feel powerless, I would argue the opposite: that processes and rules are designed to restrict powerful entities.

The thing is that being powerful normally means fewer restrictions on what you can do. In fact, I would define power as being the extent to which you can get what you want. Someone who is powerful can exert control on their environment, and someone who is supremely powerful can do so without restriction. However, most people don't like the idea of unchecked power – or, at least, of other people having unchecked power. So we build processes and rules to limit the power of government, who in turn build processes and rules to limit powerful corporations and individuals.

Within an organisation you will often find internal bureaucracy for much the same reason: that organisation delegates a certain degree of its power and resources to employees, and needs to limit their ability to abuse that power. A manager is allowed to spend the company's money, but has to go through a purchasing process to do so. A loan officer at a bank is empowered to issue loans, but has to follow a strict process to evaluate those loans. That process doesn't just protect the company from that employee abusing their power, it also protects the employee: if something goes wrong, but they followed the process, they won't be fired. (At least, in a country where the company's power to fire is limited by process).

And similarly with the government and its love of forms. Those forms aren't to restrict your power when you're doing your taxes or applying for a passport – what power? Rather, they are to protect you from the government's power. If you fill out the forms correctly and according to the process that the government has set out, then the government is required to give you your passport or charge you the correct amount of tax. If that process didn't exist, there wouldn't be any forms, but maybe the government would decide to not give you a passport because they don't feel like it today.

That's not to say that all process is good and necessary. In fact, I think the main reason we see process as disempowering is the way that it tends to be used as a kind of cargo cult signal of power. Power is often limited by process, so you invent processes in order to feel powerful. Or worse, you see powerful people institute process when they delegate power, so you create unnecessary processes to feel powerful. Bad managers are infamous for implementing arbitrary reporting or process requirements on employees just because it's what they think managers do.

There's an even more pernicious kind of process abuse that genuinely does limit power as designed, but does it to people or groups who should have power. In that sense it is the kind of weaponisation or control theory of processes I argued against, but it's not things like complicated forms to get your driver's license. It's processes like voter registration that add extra steps, and thus extra limits, to the voting power of people over their government. In a democracy, the flow of power should be from people to government via voting, so anything that limits that power is inherently concerning.

You might argue that there are processes that are used to exercise power, like America's civil forfeiture program. That program allows police officers to seize property on a very flimsy standard of proof and requires a protracted and complicated legal process to resolve. However, my argument is that these situations are just examples of not enough process. To see why, imagine that asset seizure did not have any restrictions, even the current flimsy ones: the police would take even more than they do now! They are the ones with power, and process is what limits that power.

I think this can be a particularly useful way to analyse both process and power. Of power: what processes are in place to limit it? And of process: what power does it limit? And if it doesn't limit anything, do you need it?

Impaired awareness

I've always found it difficult to think about anything superintelligence-related, or even moderate-increase-in-intelligence-related. It's not that I can't reason about those things – that's fairly easy – but it's really difficult to build an intuition for them. How would it feel to be a genetically engineered supergenius, or some kind of supercomputer-powered AI, or even just someone a lot smarter than I am? I have no idea.

One of the most interesting things is to go the other direction, and think about times you were temporarily impaired in some way. For example, when you're very tired you might recognise that you're tired because of the physiological symptoms, but what psychological feedback do you have? The brain you use to reason about an impairment might itself be compromised by that impairment. You see this most acutely in carbon monoxide poisoning, like in this story on Reddit where the author became totally useless but didn't realise anything was wrong.

I wouldn't be surprised to learn that there are much milder forms of this impaired awareness that happen all the time. Maybe you haven't been sleeping as well lately, or you're eating differently, or there are just natural changes in your cognitive function from time to time. How would you even know? Maybe you're half as clever as you were a year ago, and you just haven't realised because it doesn't feel any different. You just don't notice all the things you don't think of. Conversely, you might be more clever and not notice it if there aren't any obvious clues.

I suspect being the sudden recipient of superintelligence might feel a lot like sobering up after a big night and thinking "wow, I didn't realise I was drunk at the time, but I had no idea what I was doing".

Name recommender

I had an interesting idea today about baby names. Or, for that matter, any name selection process at all. I had a look at some of the existing options and they seem to rely on either some kind of name sound index, looking up name meanings, or just pure random generation. All these things seem kind of clunky and unnecessary to me. A much better option would be to model name selection as a recommendation system, like Amazon or Netflix use to recommend products or movies.

You would just enter any names you like, or pick from a semi-random list (selected for discriminative power). Then you rate how much you like each name. Every name you rate goes into a sorted list, but also feeds into a recommendation engine which selects new names to show you. Your ratings also tune the recommendation system for everyone else. When you're done rating you have a premade list of names you like.
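As a sketch of how that might work, here's a minimal item-item collaborative filter over a made-up ratings table. The users, names, and scores are all hypothetical, and a real system would use a proper trained model, but the shape of the idea is the same: names liked by the same people are treated as similar.

```python
from math import sqrt

# Hypothetical ratings: user -> {name: score out of 5}.
# A real system would have thousands of users and names.
ratings = {
    "alice": {"Oliver": 5, "Olivia": 4, "Henry": 1},
    "bob":   {"Oliver": 4, "Olivia": 5, "Amelia": 4},
    "carol": {"Henry": 5, "Amelia": 2, "Oliver": 1},
}

def similarity(a, b):
    """Cosine similarity between two names over users who rated both."""
    common = [u for u in ratings if a in ratings[u] and b in ratings[u]]
    if not common:
        return 0.0
    dot = sum(ratings[u][a] * ratings[u][b] for u in common)
    norm_a = sqrt(sum(ratings[u][a] ** 2 for u in common))
    norm_b = sqrt(sum(ratings[u][b] ** 2 for u in common))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Rank names the user hasn't rated by similarity to names they rated."""
    mine = ratings[user]
    candidates = {n for r in ratings.values() for n in r} - set(mine)
    scores = {}
    for c in candidates:
        scores[c] = sum(similarity(c, n) * score for n, score in mine.items())
    return sorted(scores, key=scores.get, reverse=True)
```

Notice there's no gender, language, or "type" field anywhere: any such preferences emerge from the ratings themselves, which is exactly the simplification argued for above.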

I think this would exploit a more reliable underlying similarity between names, and would avoid having to specify what type of name you're looking for. Boy name? Girl name? Dog name? Fictional character name? It doesn't really matter because those preferences would show up in how you rate the options you're given. It's kind of neat that a recommendation model could make a name choosing system that's not just better, but also simpler.

Another one of those areas where a really nice primitive can make a huge difference.

Reverse mantra

I've never really liked the idea of mantras. At least as I've always seen them used, a mantra is something that you repeat to convince yourself of its truth. So you would get up each morning and say "I'm going to have a great day today". Of course, what happens if you don't have a great day is kind of undefined. Do you just keep saying it? I certainly see the value in repetition as a tool; if we work off associations, then repetition is a way to build a strong association. I just disagree with trying to use that mechanism to make yourself think something is true.

On the other hand, I can certainly see the value in a mantra as a tool for focusing attention. If you have a tendency to feel anxious in social situations, it could be useful to have a mantra like "what's the worst that could happen? and is that realistic?" to encourage you to pinpoint irrational fears. It's less about convincing yourself that something is true, and more about using the mantra as a tag for a thinking process you'd like to make a habit.

Another way that mantras could be useful is to avoid doing certain things without thinking about them. If you're trying to go for a run each morning, but you keep waking up and going on the internet instead, you could try saying "I'm going to go running this morning" when you wake up. It's a mantra, but you say it at about the time you'll be making the decision and it forces you to actually make the decision. So you might not go running, but you won't not go running by accident.

More generally, I think of these as reverse mantras: something you repeat to yourself, not to convince yourself that it's true, but to check if it's true. "I'm going to have a great day today" can't be a reverse mantra because you don't know if it's true when you say it, but "I'm having a great day today" would be a fine reverse mantra. The trick is that you don't say it to trick yourself into thinking your day is great, but rather as a warning sign: if you say it and your day is actually mediocre, you'll feel cognitive dissonance and take notice.

Whether or not you do anything about it is, of course, outside of the scope of the mantra.


Well, I committed to making a commitment platform last week, and it is not here. I failed!

However, I don't feel too bad about it. It is a shame that I failed, but I think it would be much worse if I hadn't done it and also hadn't committed to it. In that sense I see commitment as having a double benefit: it helps your goals feel real, and also prevents optimising the goalposts away from what you wanted in the first place. I want to build this thing, and I'm closer to it having failed than not having tried.

But if I want to succeed, I need to go meta and learn from what went wrong this time. In this case, I didn't leave enough time for it early in the week, thinking I could make it up on the weekend. But actually my weekend got busy and I ended up trying to cram it all into one day. Of course, that day didn't have enough time to actually finish anything. And, worst of all, that outcome wasn't entirely surprising.

To make it surprising again, I'm going to put more effort into estimating the amount of time left to finish everything and compare it to the amount of time left in the week. In a sense, this is the burn-down type information I was thinking of with the scoping calendar. I normally don't do much time estimation for personal projects, but since I'm setting a time-based commitment, a time prediction makes a lot of sense.

So, with that extra level of failure insurance I will commit again: a commitment platform by next Monday's post!

The responsibility to quality

Any time you make something, there's an important question: is it any good? When it's something that you've put out into the world you can usually tell based on some combination of whether you like it, whether other people like it, and whether it works properly. But what about before you've released it? Some things don't have an easy objective measure of goodness, nobody else has had a chance to judge it yet, and your own opinions can sometimes be a bit clouded on the subject.

One of my favourite videos is of Ira Glass, host of This American Life, talking about The Gap. Not the American clothes retailer, but the distance between your taste and your ability. When you're first doing something that you like, you're often not able to produce things that live up to your own standards. I would say that even if you're quite skilled, you still don't usually have the ability to impress yourself: you've heard all your jokes before, you've seen the sausage being made, you know where all the rough edges are. It's hard to know what the reception will be until you try it for real.

But then you run the serious risk of releasing crap. I've previously written about professional responsibility, the right kind of perfection and crap that comes back to haunt you, so you could say I have some skin in the "don't release crap" game. And there's the rub: do you release something when you're not sure it's good and risk releasing crap, or do you wait and risk not releasing it ever?

I'd like to make the argument that responsibility comes with power: to the extent that you're able to determine if something is good or bad, you're responsible for not making it bad. What I mean is that as a novice programmer you shouldn't feel afraid to put out bad code, or as a novice writer you shouldn't be afraid to write complete dreck, because you haven't earned that responsibility yet. There's no excuse for making mistakes that you know are mistakes, but when you're starting out you don't have those instincts yet.

Similarly, even when you generally know what you're doing there will always be a frontier where you don't; every time you try something new there's some part of it that is risky, and the risk is that it will be crap in a way that you don't know enough to realise. And that's okay too. If you're not sure whether something's good or not, that's exactly the time to let go of that responsibility.

If you're in a situation where lives are on the line, or the risks are otherwise enormous, the right way to let go of that responsibility is to put it in the hands of someone who does know. But sometimes the risks aren't so high, and even so there might not be anyone else better to give that responsibility to. In which case, the only person remaining is the one who receives the thing after you're done with it: your audience.

If you make something and it's good to the best of your knowledge and ability, I believe you have discharged your responsibility to quality. Whatever remains is up to the people who use it to decide if it's good enough for them.

Ocarina Bling

Something a little lighthearted for today. I made a mashup of Drake's Hotline Bling and the Shop Theme from Ocarina of Time.

Most of the effort here was just in realising that the two songs were so similar. In the end I actually didn't need to do very much to line them up: the songs are structurally identical, the BPMs are very similar, and the musical styles mesh nicely. I couldn't find a decent isolated vocal track so I had to make do with just EQing down most of the Drake track.

Originally I uploaded this to YouTube, but it was immediately muted by Content ID, which as I understand it is a system that music companies use to prevent their work from inadvertently giving rise to creativity or culture outside of their control. The world will be so much brighter when they're gone.

What a paradox feels like

I've previously mentioned how convenient it is that paradoxes don't affect us the same way they do a more formal system. Gödel, in his day, managed to prove that any sufficiently powerful formal system contains statements it can neither prove nor disprove. That threw the mathematicians of his time for a loop, but nobody seems too concerned about paradoxes in the brain. This statement is false. See? You're fine.

But you could easily make the argument that those kinds of paradoxes don't really affect our thinking. The same way that I can type the letters "this sentence is false" into a computer without causing any problems, we can think about an idea without believing in it. But there are contradictions that do appear in our beliefs and how we make decisions. I'm a particular fan of the trolley problem series of dilemmas, which often reveal contradictory ideas you've held for a long time without realising.

However, it's rare to see a trolley problem seriously affect someone. In most cases I think people do a good job of resolving contradictions, usually by not believing one of the beliefs anymore. In fact, it seems like most inconsistencies only last until you realise there's an inconsistency. But, to me, it's not a paradox if you can just solve it; with a paradox there's meant to be no way out. I think what would qualify as a mental paradox is a situation where you believe two contradictory things and you're not able to stop believing in either of them.

When you believe in something in a way that you can't stop believing, that would look like reality. So we're looking for a situation where the reality is impossible. Where you have to take option A, but you can't, so you have to take option B, but you can't. This is the sort of situation where you're completely, utterly trapped, and there are no options. I think what you feel in that situation is simple: despair.

In which case, perhaps despair is nothing more than what a paradox feels like. On the one hand, that means paradoxes can be very harmful. But on the other hand, perhaps knowing that you're experiencing a paradox could provide some comfort, as well as a healthy dose of encouragement to check your assumptions. Actual paradoxes are, after all, quite rare – Gödel notwithstanding.

Decision hoisting

There's a technique I've found useful when dealing with difficult requests. Often this comes up in a business setting, but not exclusively. A lot of business types tend to deal in abstractions of value, which is usually fine as long as you can convert everything into the right units. However, sometimes there are essential mechanical reasons why something is impossible, and you have to deal with that in the concrete, rather than the abstract.

Usually you don't get presented with an impossible situation all at once, instead you get one requirement first and another conflicting requirement later. Then you're in a difficult position: you either have to say no to this new requirement, or say yes and then fail to deliver it. What's happened is you have inadvertently been shifted into a decision-making role by proxy. This smaller decision (technical requirement A vs technical requirement B) is really reflective of a higher-level decision (business need X vs business need Y).

The problem is that while it's obvious from your perspective that A and B conflict, it's not obvious from a business perspective that X and Y do. And turning around and saying "sorry you can't have B (and therefore Y)" just makes you sound obstructionist. Instead, I recommend hoisting that decision up into the higher level, and pushing it back on whoever asked for it. That way, instead of being responsible for saying yes or no, your job is just to explain the tradeoffs and leave that responsibility to the people making the decision.

I find decision hoisting can be pretty useful in everyday life. In some cases, your answer to a request might just be "no", but it's more often "it depends on the circumstances". I once had a friend ask me to go to a party that I didn't really feel like going to. I put the decision back on him by saying I'd only go if he drove me. He put the decision back on me by saying he'd only drive me if I paid for the petrol. I agreed with that and we both got what we wanted, though the decision had to go through a few domains to get there.

The main benefit of decision hoisting isn't that it stops you from being responsible for decisions, but that it moves decisions into the domain they should be in. Business decisions shouldn't be made at a technical level. Instead, the technical realities inform the business decision. Doing it that way means you can make better decisions, because you don't eliminate legitimate options by saying "no" prematurely.

Idempotent habits

I wrote before about "it's the opposite to the one I expect" as being a particularly useless way to remember something. The problem is that as soon as you internalise it, it's not true anymore; if you always turn the tap the wrong way, and you remember "I should turn it the opposite way than I normally do", that system becomes useless as soon as your idea of normal changes which, if things are working, should be really soon.

I've been thinking that there's a more general class of problems along those lines, which are non-idempotent habits. Idempotence is something you often talk about in software to mean something that you can do multiple times with the same effect as if you did it once. So adding 1 isn't idempotent, but changing your name to Steve is. You can change your name to Steve as many times as you like and you're still Steve, but if you add 1 twice you've actually added 2. Idempotence is often considered a sign of a well-designed system.
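The two examples from the text translate directly into code; the defining property is that applying the operation twice has the same effect as applying it once:

```python
def add_one(x):                  # not idempotent: every application moves the result
    return x + 1

def set_name_to_steve(person):   # idempotent: doing it twice changes nothing
    return {**person, "name": "Steve"}

p = {"name": "Alex"}
# The defining property of idempotence: f(f(x)) == f(x).
assert set_name_to_steve(set_name_to_steve(p)) == set_name_to_steve(p)
assert add_one(add_one(0)) != add_one(0)   # adding 1 twice really adds 2
```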

The reason I think this is particularly important for habits and other mental systems is that, unlike a computer, we tend to operate on patterns and associations. So a non-idempotent habit tends to stick around even when it's not useful anymore. I started using 24-hour time and one nifty shortcut I found was that in the afternoon I could just read the 24-hour time as the 12-hour time, by ignoring the first digit and subtracting 2. 1900 is 7pm, 1400 is 2pm, that kind of thing. The only problem is that I kept catching myself looking at 1900 and thinking "that's 5pm". I got so used to subtracting 2 that I was doing it twice!

That's a pretty innocuous example of how non-idempotent habits can get you in trouble, but there are others like "I'll try to be more assertive" that have similar issues. More assertive than what? Than you are now? That baseline will shift as you grow more comfortable with your assertiveness. Or if you tend to be really behind on things and you build habits to deal with that, those habits stop being useful as soon as they start working properly because you're not behind any more. The high water mark I wrote about before is another example.

Something all of these have in common is a certain degree of self-reference: the habit changes based on its own output. I think a better answer is to find fixed points of reference to anchor habits to. Instead of being useful at first and counterproductive later, an idempotent habit will stay relevant over time. That means fewer habits but stronger ones, because they have more time to build.


Well, another week, another failure. But it's a different failure and, as so often in programming, sometimes a different failure is the best kind of progress you're going to get.

This was a kind of compound failure where my goal for releasing the commitment platform I committed to last week conflicted with my new rules for when I count a failure at writing. I'm happy enough with the sacrifice I made (the commitment platform is coming as my next post), but I'm not sure it was strictly necessary to make that sacrifice at all.

In reality, the flexibility I built into my posting commitment should have helped me in this situation, but I was already one behind from the previous day. That says near miss to me: being a post behind became a fairly common state rather than an emergency. I'm beginning to think that this flexibility thing might be more trouble than it's worth.

A separate but related near miss is that, despite my good intentions with planning work on the commitment platform, I still did the bulk of it at the last minute. I did some design and planning earlier in the week that was actually very helpful, but not enough to significantly spread my workload out through the week. I got it done, but I didn't get it done comfortably. In fact, I would say I barely scraped over the line.

I think the issue was that I put effort into estimating time, but I didn't make much effort to schedule that time earlier in the week. I'm going to try that with the next big chunk of work I dedicate myself to, but that probably won't be until the following week so I can have some time to recharge and polish the stuff I did this week.


Today I'm happy to announce, a platform for making public commitments to your goals. It doesn't do a lot yet, but I'm pretty pleased with it so far.

The idea is that the wording encourages you to be very concrete about what you're going to do and when, so that it's verifiable. It's meant to be fairly soft-touch other than that – just giving you the tools to make commitments, not forcing you into any particular system. The plan is for it to piggyback off your existing networks rather than try to be its own one.

I'm pleased that I ended up designing it without a login system. I realised that I don't really need one if I build everything around email. Instead of logging in, the site will email you a link to follow up with when the deadline for your commitment has passed, though that part isn't actually working yet. Mostly I just want to be really careful to do email in a way that doesn't get me instantly blacklisted from the entire internet.
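The post doesn't describe how those emailed links would work, but one common way to build a login-less flow is to sign each link with a server-side secret, so that possessing the link is itself the authentication. A sketch under that assumption – the domain, secret, and function names are all placeholders:

```python
import hmac
import hashlib

# Hypothetical server-side secret; never sent to the client.
SECRET = b"keep-this-on-the-server"

def follow_up_link(commitment_id):
    """Build a login-less follow-up URL: the token proves the link came
    from us, so clicking the emailed link is itself the authentication."""
    token = hmac.new(SECRET, commitment_id.encode(), hashlib.sha256).hexdigest()
    return f"https://example.com/followup/{commitment_id}?token={token}"

def verify(commitment_id, token):
    """Check a token on an incoming request, in constant time."""
    expected = hmac.new(SECRET, commitment_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)
```

Because the token is derived from the commitment's ID, a link for one commitment can't be replayed against another, and there's no password database to maintain.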

Still, my main goal was to get it to the point where I could start using it, and that's already happening. Mission accomplished!

Safety blankets

A friend once told me about the idea that everyone has an emergency fallback strategy for when they run out of other options. If you're in an argument and you're really upset, maybe you try reasoning, emotional appeals, increasing volume, whatever to try to fix it, but if none of those work eventually you pull out your last resort. Maybe you scream, run away, break down in tears, or start smashing stuff, but there'll be some hail mary option you go to every time.

While I'm not sure that's always true, it's at least been true in my experience. And I think there might be other, less extreme ways that we look for safety blankets when things don't go our way. Think about the kinds of things you do when you're top-of-your-game, feeling awake, happy and energetic, and want to take on the world. Then think about the opposite: the kinds of things you do when you're miserable, sick, tired, or just having a bad day.

I suspect you would end up with a very consistent list of safety blanket activities. Reading, maybe, or video games, or watching TV. But there's no need for the safety blanket to be pure consumption, though it doesn't hurt. For a long time mine was programming, and I still find learning something new to be a very comforting activity. For some people I know, theirs was music, and they seemed to improve at it very quickly.

Previously, I wrote that part of committing to something is sacrificing the ability to not do it, even if you're having a terrible day and don't feel like it. That must be a lot easier if the thing you've committed to doing is something you turn to when you don't feel like doing anything else. It seems like you'd get a lot of mileage out of something you don't need to be in good form to keep doing.

I'm not sure if it's possible to change your safety blanket, but it's worth looking into. And, if not, maybe it's worth mining your most useful comfort activities for opportunities.

Starting inertia

Internal combustion engines use their own motion to power themselves. The rotation of the motor draws air and fuel into the combustion chamber, where it is compressed and ignited, causing more rotation and continuing the process. However, for this whole crazy process to work you need something to get it started in the first place. In most cases, this is actually a whole separate motor, so your engine is really two engines: the main engine and the starter engine.

It's worth thinking about this in the context of other processes, because I see similar patterns in a lot of places. You have a self-supporting inertial system which, once it gets going, should sustain itself and be relatively robust. However, before it gets to that point you need to have a separate starter system to get that system working in the first place. And both systems are important! If your inertial system isn't good enough, you'll have to keep going back to the starter system. If the starter system isn't good enough, you'll never get to the inertial system in the first place.

Why not just have a starter system and no inertial system? Well, usually a starter system is unsustainable. For internal combustion engines, the electric starter will burn itself out if used for too long. They're also usually not very efficient because, well, why bother? The expected duty cycle of the inertial system is orders of magnitude higher than the starter system, so it makes more sense to focus your optimisation there.

And why not make an inertial system that doesn't need a starter? In some cases that's possible, but often the tradeoffs don't line up. It's much harder to make one system that can cover the entire range of conditions than to make one system optimised for the initial conditions and one optimised for the steady state once everything gets going. You might need to go back to trading with precious metals if society collapsed, but that doesn't make them a good candidate for everyday use now.

I think one area where this two-system analysis is particularly useful is when working on habit formation and other personal systems. Once you're doing something on a regular basis it's easy to keep doing it. Ideally, even easier than not doing it. But there are two ways that not understanding the starter vs inertial distinction can trip you up: firstly, just because it's easy once you're doing it doesn't mean it's easy to start. And the other way is that you can sometimes fall into a good inertial system by chance, without understanding the starter system that got you there.

Which is all well and good, except that what happens when something pushes you out of your inertial groove? Your daily running habit gets put on hold for a few weeks because you have a twisted ankle, or your streak of productivity gets halted through burnout or time off. That's when you need to turn back to your starter system to get things going again.

But if you never really had one in the first place, you might find yourself stuck wondering why your engine isn't moving when it was working fine a week ago.

Decision log

I've had an interesting problem come up a few times on long-running projects. Throughout the project you make a lot of different decisions. You choose one particular solution over others, make some tradeoffs in how you design something, and set or change overall direction. These decisions accrete over time until, late in the life of the project, there's a thick, hidden blanket of existing decisions underneath everything. For solo projects you can often remember why something was done, but not always, and for teams it's basically out of the question. The result is you often waste time revisiting decisions.

Existing forms of documentation aren't so great for preventing this. In software you would usually attach documentation to a particular line of code, a general design document, or a larger code review/patch set. Those can explain particular instances of decisions, but I don't think they solve the problem completely. You end up with that information spread out over a lot of different places, and in a lot of cases a given decision spans several different locations. I think that decisions don't map cleanly enough to anchor points in the project itself for that to work.

Instead, I think it might be worth keeping a decision log. Just a flat file of all the decisions you make about a project, maybe split into multiple decision files if you've got neatly separated subprojects. Every time you decide something important, it goes in. The bar for important might need some tweaking, but I think a good one would be anything that takes longer than a minute to decide on. Alternatively, anything you could imagine wanting to know if you hadn't seen the project before.
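To make that concrete, a decision log really can be this simple; here's a minimal sketch of the idea as a flat file with one dated entry per decision. The filename, the entry format, and the example decisions are all my own assumptions, not a prescribed standard.

```python
# A decision log as a flat file: one dated line per decision,
# appended as you go. Filename and format are illustrative only.
from datetime import date

def log_decision(text, path="DECISIONS.md"):
    with open(path, "a") as f:
        f.write(f"- {date.today().isoformat()}: {text}\n")

log_decision("Chose SQLite over Postgres: no concurrent writers expected.")
log_decision("Dropped IE11 support: <1% of traffic, big CSS savings.")
```

The point is that the bar to entry is one line of text, so there's no excuse not to record a decision while it's fresh.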

The end result would be a single place to look if you're trying to figure out (or remember) why something was done a certain way. I think it'd also have the nice side benefit of encouraging you to think about your decisions in the context of some future person trying to understand what you were doing.

You aren't you

I had an interesting thought the other day. What we normally think of as being ourselves is a very specific part of us. Sure, most of us would say that our physical body is our self, but if pushed we'd say the core of our identity sits in our brain. I'd go even further: when we introspect, the part we identify most strongly with is the part doing the introspecting. I don't just mean your mind, but the particular part of your mind that analyses and observes: Kahneman's System 2.

However, this belief doesn't align very well with reality. Most of your decisions and actions are actually made by your associative System 1. Your analytical process usually only comes in after the fact, and on the comparatively rare occasions where it's actually calling the shots it's enormously slow and resource-hungry. I would go so far as to say that if you want to define a self, it should be your unconscious, associative self.

It might seem counterintuitive to define your self as the part that isn't conscious, but I think it makes a lot more sense. Gone are paradoxes like "I really want to do things differently, but when the time comes I keep doing the same thing". Really, it would make more sense to say "My self-reflection says I'd be better off doing things differently, but I keep doing them the same way". From that perspective, there's no conflict, it's just clear that your self-reflection hasn't made a compelling argument for you to change.

That doesn't mean I think it's unreasonable to identify with your body, or your analytical mind, but we have some basis for thinking that our brain is more "us" than our foot is. That basis, I feel, is unfairly biased by the nature of introspection. If instead we base our identity on whichever part, when understood, best predicts our behaviour, I think there's only one answer that makes sense: you aren't you, you're mostly the stuff that happens when you're not paying attention.

Visual music

Today I was thinking about music, and the particular quality that music has of being rhythmic. It's interesting, because in a lot of other ways visual art and musical art share similar characteristics. Although visual art is often static and musical art is often dynamic, that's not necessarily the case; movies, for example, are dynamic visual art. However, even with that in mind, musical art is usually rhythmic and visual art is usually not, except when the two are used together.

But what would it look like if you could make non-audial music? Visual music? An animation that you watch that beats with a rhythm, dances and moves in front of you. It's obviously possible to convey emotion and mood visually, so you should be able to do a lot of what you can do with sound. You could have different instruments, as in animations that move according to specific rules. You would still have an energy and a tempo to play with, still be able to represent complex interactions between different instruments as they catch your attention or fade into the background.

In a way you might even have more flexibility, because our auditory senses are quite limited in discriminative ability. You could produce far more complex soundstages with no sound, because our ability to track complex visuals is so much better. There'd even be some spectacular possibilities in live performance, where you could have entire visual orchestras controlling individual instruments, being mixed and laid out in real time by a visual conductor. All of this in total silence – I wonder what it would be like.

Of course, sound and vision do have fundamentally different mechanics, and it's not clear that, for example, our sense of musical consonance would translate well into visual consonance. This visual music would probably have different rules to audial music and end up developing in a substantially different direction. But that's fine; visual music is more like a neat analogy, the point is finding out what you could create with rhythmic visuals.

Though maybe, if visual music ever got big enough, you could bring the two back together. Not like a music video now, where the visuals are really just a prop for the music, but where the video and the audio are their own separate rhythmic art pieces that come together to make something exceptional.

Implies both ways

I wrote a while back about the idea of the brain as an association machine. The main point I made then was that it makes disassociation very difficult, and it's hard for us to forget things or break existing habits. However, there's another aspect I've been meaning to write about, which is the way that association machines can (and can't) do logic.

Implication is a fairly foundational part of logic, and it's just statements of the form "if X is true, then Y is true", which you write X → Y. For example, you could write "all humans are mammals" as "human → mammal". However, it would be incorrect to assume that because all humans are mammals, then all mammals are humans. That is to say, X → Y doesn't mean Y → X. That's a different thing, sometimes called equivalence: rich people have lots of money, people with lots of money are rich, rich ↔ lots of money. This is also sometimes said to "imply both ways", and mixing up one-way and two-way implication is a very common error.
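One quick way to see the asymmetry is to model implication as a subset relation: X → Y just says the Xs are contained in the Ys. The sets below are made up for illustration.

```python
# "human -> mammal" as containment: every human is in the mammal
# set, but the mammal set holds things that aren't human.
humans = {"alice", "bob"}
mammals = {"alice", "bob", "rex", "moby"}  # made-up members

x_implies_y = humans <= mammals   # human -> mammal: holds
y_implies_x = mammals <= humans   # mammal -> human: does not

print(x_implies_y, y_implies_x)  # True False
```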

But why is it so common? It's not like we often confuse addition and subtraction, or up and down. And it's tough to imagine that an artificial intelligence, for example, would make the same mistake, even if it wasn't very clever. However, presented with a situation where every murderer plays videogames (murderer → gamer), there's no end of people lining up to say that gamer → murderer. As in that example, it has the distinction of being a very harmful and common mistake. So for us to keep making it there must be a fairly compelling reason.

My proposition is that the association machine doesn't do one-way implication. By its very nature, an association is two-way; if you hear about gamers and murderers together a lot, you don't intuitively tease out the causation there, you just think about one whenever you hear about the other. The main theory for why conditioning works is exactly this. If every time you get fed a bell is rung, you have food → bell, but for some reason that turns into food ↔ bell and you end up slobbering whenever a bell rings. Two-way implication is natural to our associative machinery.

This can sometimes have disastrous effects on your understanding of success. For example, if being a famous musician → going to parties and dressing like a rock star, that's a very different thing than being a famous musician ↔ going to parties and dressing like a rock star. If you mistakenly think the latter, you're going to spend a lot of time working on your dancing and wardrobe skills, and very little time working on music. Quite often the trappings of success can feel like real success, but easier.

Another popular one is acting as if something is true. Let's say you have to finish an assignment and then you won't have to think about it anymore. Finished assignment → not thinking about assignment, except... uh oh... finished assignment ↔ not thinking about assignment. Now if you ignore the assignment, it feels like you're done with it. Amusingly, things like acting as if you're confident in order to become confident do work, but only because you're tricking the very same two-way associative machinery in your own head.

But just because two-way implication is natural and intuitive doesn't mean we're doomed. Obviously people manage to use one-way implication both in and out of logic, it just takes a bit more effort to do. Maybe we can't rely on all our associations being rigorously constructed, but we can take the time to reason through the ones that are important and make sure we've got the right implication.


There's a peculiar habit I've noticed when I'm transitioning between activities. Often the best way to get started on something is to just jump right in; I know that if I go check my email or read internet junk first I'll just get distracted, and the time between activities is critical because I haven't got into a rhythm yet. But the weird thing is, I seem to have accreted little rituals that I do before I start working on something. These include choosing music to listen to, making coffee, moving things around on my desk, and, yes, checking my email and reading junk on the internet.

These are distinct from just regular recreational activities for a couple of reasons. Firstly, you don't set out to do them recreationally. Instead, they're considered part of work time; they're a work-like activity that isn't actually work. Secondly, they're things that you do even when you're transitioning from fun to work, leading to the bizarre phenomenon where you sometimes finish reading junk on the internet, go to do some work, and then start reading junk on the internet again.

I think of these as interstitials, mainly inspired by the term in software meaning a (usually unwelcome) screen that you have to click through before the one you wanted to see. By analogy, these interstitials are activities that pop up just before you do the activity you actually set out to do. Sometimes they're fairly benign – a coffee isn't likely to disrupt you significantly – but complex interstitials can easily make everything you do harder. Even something relatively benign like choosing music can go off the rails if you can't find the right thing to listen to, or you get distracted because your amp won't turn on.

The antidote to interstitials is simple in theory: just immediately start doing something. I've found timetabling helps with that, but I'm sure other methods would work too. Often the preparatory rituals aren't necessary, and if they are you can go back to them once you've got a bit of a rhythm going.


The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him.
Miyamoto Musashi

Today I had a few reasons to think about good and bad software design. Often, badly designed software is designed badly. That might seem tautological, but there's another, very common way that badly designed software happens: when it isn't designed at all. For these projects, there's no overall vision. Things just... happen, and after enough things have happened, you have your software. It mostly works, but it is missing that particular quality that well designed things seem to need: intention.

Intention is the quality of something that is done for a reason, and whose various decisions and actions are aligned toward that reason. It may sound obvious – why would you do things without a reason? – but it is surprisingly common that people do things for no reason at all. They'll often have a goal, and often they'll take actions or make decisions, but those actions aren't actually designed to achieve the goal. That link between goal and action is intention.

A lack of intention seems to be a particular hallmark of large, long-running software projects. Perhaps at the beginning individual people exercised intention over the software, but some of them leave, new people come in, and often the same code is changed by many different people with different goals. The invention of the product manager was to some degree a response to this problem; that person is tasked with maintaining the intention of the software at a product level. But many projects still don't have a product manager and, even so, who is maintaining intention at the level of the code itself?

That said, software doesn't have to be in a team to lack intention. One of the most amusing phenomena is Eliezer Yudkowsky's Guessing the Teacher's Password, where students just say correct-sounding words in the hope that they'll happen upon a correct answer. The most common expression of this in programming is Shotgun Debugging, where you have a bug and just make changes at random until it fixes itself. You wanted to fix the bug, and you did fix the bug, but your fix had no intention.

In a more general sense, it seems easy to inadvertently act without intention. You pick some course of action and follow it because it seems good, never mind whether it actually brings you closer to your goal. I don't even mean that it doesn't, but rather that you never even thought to check. And while you can certainly still do things without intention, it's hard to imagine those things ever being more than mediocre.




I'm trying something a little different today: an experiment!

When you click the Ready button above, it'll generate 10 random numbers adding to something between 1 and 1000. You have 5 seconds to try to enter their sum. Unless you're the biologically unlikely lovechild of Leonhard Euler and Euclid himself, it'll probably be pretty difficult. To get an answer in 5 seconds you'll need to sacrifice a lot of accuracy. (You can also press enter in the input box instead of clicking Ready.)

Now try changing the dropdown from "sum" to "avg". Your goal is to guess the average in 5 seconds. You will probably find this substantially easier.

Then comes the fun part: observe that, for 10 numbers, the sum is 10 times the average. So change it back to sum and, instead of trying to guess the sum, guess the average and multiply it by 10.
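The trick can be sketched in a few lines of Python. Note that the splitting scheme I use to generate the numbers here is my own assumption, not necessarily how the demo generates them.

```python
import random

# Ten random non-negative numbers whose sum lands between 1 and
# 1000: pick a target, cut it at nine random points, and take the
# gaps between consecutive cuts. (Illustrative scheme only.)
target = random.randint(1, 1000)
cuts = sorted(random.randint(0, target) for _ in range(9))
numbers = [b - a for a, b in zip([0] + cuts, cuts + [target])]

# Estimating the average and multiplying by the count recovers
# the sum, since sum = n * average by definition.
avg = sum(numbers) / len(numbers)
estimate = avg * len(numbers)
print(numbers, target, round(estimate))
```

In the experiment, of course, `avg` is your eyeballed guess rather than a computed mean, so the estimate inherits whatever error your guess had, multiplied by ten.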

I based this experiment on an observation in the first chapter of Thinking, Fast and Slow, that we could calculate the average length of a bunch of lines much more easily than their total length. Kahneman's theory is that this is because we think prototypically; ie, we have already formed a general impression of the lines that are on the page, and our notion of that average tends to be pretty accurate.

I started thinking about the relationship between sums and averages, and that particular problem where you look at, say, a list of file sizes that all seem relatively small, but their total size is deceptively large. Could we build a better intuition for adding up sets of large numbers quickly? This is where the multiply-average technique comes in.

Still, it's not clear whether Kahneman's shape averaging actually translates into symbolic reasoning. Do we get an intuitive sense of the size of numbers the same way we get a sense of the size of shapes? I'm not sure, but I definitely got faster and more accurate results when I used the average. I'd also be interested to test how much accuracy you sacrifice with non-trivial multiples (20 is probably okay, but what if there are 37 numbers?). The code and a stand-alone demo are available on GitHub if you'd like to mess around with it.

Being able to throw out quick experiments like this is, I think, one of the great benefits of knowing how to code. Not everyone needs to be able to build Facebook from scratch, but being able to take a little idea and make it into a real thing is enormously valuable.

Consistency vs evolution

I've noticed over the course of writing these posts that there's a certain evolutionary effect at play. Over time, I seem to have picked up some habits, dropped others, and sometimes picked up and then dropped a habit shortly after. Some of those were deliberate experiments, like my ill-fated dalliance with brevity, but mostly they just sort of happened.

But, digging into that a little further, the reason these changes have a chance to take hold is because I'm not terribly consistent with my format. Unlike a newspaper column or something where every one follows the same pattern, I seem to change things fairly frequently, even if they're minor things. This has the downside that it may not be easy to predict or rely on certain features of what I'm doing, but the upside is that this kind of constant mutation provides a fertile ground for evolutionary improvements.

And I see that tradeoff – consistency vs evolution – as a general one. Often you find comfort in things around you being predictable, like a long-running TV show, or the behaviour of an old friend. And it's certainly a virtue to be reliable and consistent, at least in many areas. But, fundamentally, if you commit to doing something a certain way, you commit to stopping its evolution. Sometimes that might seem worth it, but I wonder who can really say "this is my environment, and I'll never need to adapt to another"?

Luckily, it seems like consistency is a fundamentally difficult thing for us to achieve. I once heard that if you want to develop your own style, a good strategy is to just copy people you like. You won't be able to do it exactly, and your poor imitation of their style becomes a good representation of your own style. I also can't help but observe the strange way that we seem to stumble once in every hundred thousand steps or so, even after walking for decades. You'd think, of all things, that would be a good time to call it quits and stop iterating.

Yet we don't – or, can't – and I suspect that even today your gait is some minuscule degree more efficient than it was the day before.

Ego destruction

I've been thinking a bit recently about how entertainment is different from other activities. There are a lot of fun things that wouldn't necessarily qualify as entertainment: running, talking with friends, eating and so on. And there are ways that entertainment can actually be pretty hard work. Difficult video games, for example, but also some books and movies require a bit of mental firepower. So what is it that makes entertainment entertaining?

One answer that occurs to me is ego destruction: things are happening, but they're not happening to you. I see this as related to my earlier post where I argued that your analytical system 2 shouldn't have as much claim as it does to being your true self. If self-awareness is an introspective process, operated by your system 2, then it stands to reason that being self-aware would be fatiguing. Entertainment is a way to turn off that introspection by turning off your concept of self entirely.

There are other ways to switch it off too, of course, like meditation or getting into a flow state, but those are more difficult. The great thing about entertainment is that it can do this on demand. Any time you're feeling fatigued from exerting your ego, you can get engrossed in a story and give it a break for a while.

I suppose that means that a decrease in time spent consuming entertainment would need to be matched by some other ego-less activity. Dedicating more time to work would be fine, but it would need to be work you could get engrossed in and lose your sense of self, or else you'd just end up really fatigued by it. Ironically, in most cases where you're trying to change your habits, you'd tend to be more introspective and less likely to enter that state.

Perhaps that's also a good reason to cultivate being non-introspective in everyday life, to reduce the amount of fatigue it causes and reduce your reliance on recreational ego destruction to recharge.


I mentioned in my last failure that I had been considering whether the flexibility of giving myself an extra day was a good or a bad thing. That was largely an experiment, and I think I've got enough data now to say it isn't helpful.

The problem is that the flexibility really just allowed minor failures to be hidden instead of exposed and dealt with. Effectively it lowered the bar so that certain failures qualified as near-misses instead. I now think that's the wrong direction to go; effective improvement requires feedback, and the shorter the feedback loop the better. Waiting until the small failures turn big enough might save face, but I don't think it helped me in the long run.

So my new commitment is simpler and stricter: a post each day by 23:59:59 UTC. Let's see how that works.

The universe has no clothes

There's a funny progression that happens when you're trying to fix a stubborn bug in an application. First, you look for all the obvious places something could be wrong. Second, you look for all the non-obvious places something could be wrong. Third, you look for all the places something couldn't be wrong. That last step looks a lot like a descent into madness: you're thinking things like maybe the compiler is broken but only for my program, or maybe it's solar flares, or maybe the output is correct but I have a rare psychological condition where I hallucinate the wrong result. Forget it, this is all a bunch of nonsense! Nothing makes sense! Black is white! Up is down! The universe has no clothes!

That progression seems fairly common outside of computers too. You're looking for your keys, you start off looking where you usually leave them, then places you don't usually leave them, and then you completely lose touch with reality. You start searching in places you've already looked just in case the keys are moving behind your back. You search inside the pantry cupboards even though there's absolutely zero chance your keys are in there. You begin to wonder if a burglar has snuck in and stolen only your keys from your house. You've completely lost faith in the behaviour of keys and the laws of physics in general.

But it doesn't always have to be so dramatic. In many small ways we place our trust in systems to work for us and, when they do, we take them for granted. I expect my house to continue to keep out the rain and sun, to the point where I forget that is its main job. However, a tent is a great reminder that shelter from the elements is not something to take for granted in a dwelling. When the system starts breaking down, it stops being invisible and, often, starts being annoying. My computer doesn't always do what I want, and I find this very frustrating when it happens – all the more so because it usually does what I want, but not often enough for me to trust it.

Of course, that is nothing compared to a non-expert's interactions with a computer. For a user's interactions to be meaningful, they have to consider the computer to be a system that obeys rules and has some sense of internal logic and consistency. Often, that is not what they think. Why are you clicking randomly? Why did you open up a completely different application? Why are you typing the name of your file into Google? Why? Why?! "Well, I don't know. I just tried something." And, of course, if you do not believe the system you're using makes any sense, trying one thing is just as good as another. Why not pick at random?

I mentioned earlier the importance of intention, and how the opposite of intention is just doing things whether or not they actually connect with your goals. I think there is an important parallel here: you can't act with intention in a crazy mixed-up universe with no rules, because you have no idea what actions will lead to what outcomes. Not understanding that relationship has the same effect as not considering it at all.

Conversely, the times when a system seems to have no rules, it's often just that you haven't understood what those rules are. Like it or not, the first step towards intention and purposeful action in that universe is learning to see its clothes.

Instrument flying

One of the hardest things for a pilot to learn is how to fly without being able to see. Normally, in good weather and during the day, flying a plane is fairly intuitive. You look outside – is the sky up? The ground down? Am I aimed away from anything I wouldn't want the plane to be in? As long as the answer to those questions is yes, the amount of trouble you can be in is fairly limited.

However, the real world isn't all sunshine and visual meteorological conditions. When your eyes give out, you have to turn to the cold, mechanical logic of flight instruments. People have been flying without being able to see since the 1930s, and thousands of flights land each day using only their instruments. However, pilots need to be trained to do it, and those who aren't often die when they attempt it. VFR-into-IMC (Visual Flight Rules into Instrument Meteorological Conditions) remains "one of the most consistently lethal mistakes in all of aviation".

Kahneman writes about the Müller-Lyer illusion, that famous image with the two lines that appear to be different lengths but are actually the same. He argues that you never stop seeing the two lines as different lengths, you just learn to ignore what you see. You haven't actually fixed the broken intuition that led to the illusion, merely learned how to recognise it. The same is true of instrument flying; there are various sensory illusions you need to recognise so that you can learn to ignore them.

I've been writing these posts for something like 9 months now, and one thing I can say with complete confidence is that they take about an hour. Sometimes it's a little less, sometimes a little more, and on rare occasion I luck out and just transcribe a completely finished thought in 30 minutes, but one hour is the right prediction to make for the vast majority of circumstances. However, despite this frankly overwhelming weight of evidence, I still find myself making ridiculous optimistic guesses about how long a particular post will take. Ooh, I need to go to bed soon – maybe this one will be half an hour! Nope.

It seems like one of the hardest things for us to do is really, truly trust the numbers when they don't agree with our intuition. Like Han Solo saying "never tell me the odds!", we don't really believe that the numbers are reality and that it's our intuitions that deserve distrust. If you take million-to-one bets that just feel right, you're not going to miraculously make it through by the seat of your pants, you're just going to lose.

The skill of ignoring your intuition and trusting the data is a powerful one, and unique in some ways. Most skills involve improving yourself, but instrument flying is about learning to accept areas where you won't improve. You're never going to feel quite right pulling out of a turn, or really see those lines as the same length, and that's okay. The skill of instrument flying is learning to be humble.

Do and do not

I sometimes find myself in a bind with plans I make. I intend to do something, but I run out of time, other things get in the way, or I just don't feel like it. But, eventually, I have to make a decision: okay, it's late, do I stay up and do this thing even though that's a bad idea? Do I cut out something else just to get this done even though it could be done tomorrow? Is this a time to sacrifice in the short term? Or will the long term just be the short term but longer?

I faced this very problem the first time I missed my deadline for posting here. I considered just making it up the next day, which would solve things in the short term, but then I'd have double the work the next day and that could spiral out of control very easily. Not to mention it was kind of hiding away the problem. But the other option was skipping a day, and the problem with that was that it just seemed too easy. If I just give up when I don't plan my time well, what's to stop that turning into a habit and eventually sinking the whole project?

My eventual solution resulted in my first failure post, an idea that was a compromise between the two positions. On the one hand, I still had a bail out option when necessary. On the other hand, it came with a cost: I had to publicly own up to having failed. Pleasingly, that admission took the place of the post that was missed, preserving the habit that I wanted to maintain.

In general, I call this strategy "do and do not", to contrast with Yoda's famous "do or do not". The seemingly binary options are to do the thing you intended to, even if it doesn't make sense anymore, or not do it and risk letting yourself off too easily. But I don't believe you're restricted to those options. Do and do not is a middle way where you maintain the form of the thing you set out to do, even if you can't fulfill its substance.

Let's say you wanted to clean your room before you went out, but you left it too late and now it's time to leave for the party. To do would be to wait to go out until you finish cleaning, possibly missing the party. To do not would be to forget the cleaning and go have fun. To do and do not would be to clean a bit, and be a bit late. Enough that you maintain the link between intending to clean and actually cleaning, and enough that you don't get away with failure completely consequence-free.

Of course, Eliezer Yudkowsky would point out that you shouldn't set out this way. Committing to try to do something is just a weak version of committing to do it. However, once you've made the strong commitment and failed at it, I think a much more sensible option than stopping is to fall back on trying, on do and do not. It certainly seems better than not trying, at any rate.


I've been meaning to make some progress on an important part of my creative tooling. Namely, the process by which I can take a little web demo and then cause it to be on the internet.

That process is already pretty quick, but I've always thought it would be great to have a little server that I can just throw git commits at and it turns them into demo pages. Better still, other people would be able to just git clone [link] and get their own copy of the code that produced it. Well, that dream is now a reality. Meet the demoserver.
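The general shape of that kind of server is the classic git post-receive deploy: check the pushed branch out of a bare repository into a directory your web server exposes. This Python sketch is my own illustration of that pattern; the paths, branch name, and layout are assumptions, not the actual demoserver code.

```python
import os
import subprocess

# Sketch of a post-receive-style deploy: check a bare repository's
# branch out into a directory served as a static demo page.
# Paths and branch name here are illustrative assumptions.
def deploy(bare_repo, demos_root="/var/www/demos", branch="master"):
    name = os.path.basename(bare_repo)
    if name.endswith(".git"):
        name = name[:-4]
    work_tree = os.path.join(demos_root, name)
    os.makedirs(work_tree, exist_ok=True)
    subprocess.run(
        ["git", "--git-dir", bare_repo,
         "--work-tree", work_tree, "checkout", "-f", branch],
        check=True,
    )
    return work_tree
```

Drop something like this into a bare repo's `hooks/post-receive` and every push becomes a deploy.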

You might be familiar with the approximate shape of that system from GitHub Pages, which does basically the same thing. However, this has the advantage of distinguishing specific demo projects from regular code, and of consolidating all those demos in one place, which is an important benefit for me. Plus it's quite nice that I can run it on my own server.

I'll eventually make the index page themeable and then theme it to fit into the rest of the site, but for now I'm rocking that no-style markup. It'll eventually form part of a larger section-based overhaul of the site including the idea globe and a revamped projects page. But that is a little way off still!

In the meantime, if you'd like your own demoserver, the code is on GitHub.

Git strategy game

I had a fun idea for a game the other day: a Git-based strategy game. Any multiplayer game at some point has to deal with the question of how to handle multiple players acting at the same time. Normally the way you deal with that question is by saying "you can't", and either going turn-based or placing restrictions that prevent actions from really happening at the same time, just very close to each other. But that doesn't have to be the case.

One of the more intriguing alternatives I've seen was in a game called Diplomacy, where every combination of moves has one defined resolution. This means that the moves can all apply simultaneously without having to apply an arbitrary ordering on the players or any kind of tiebreaker process. It was quite elegant! In fact, the whole thing reminded me a lot of the different solutions to conflict resolution problems in distributed systems.

So why not go all the way and make a strategy game based on an existing distributed system? A Git strategy game could end up with quite a complex notion of time, with multiple diverging and merging timelines happening at once. You'd want to set the merging rules up in such a way that you'd have to make certain tradeoffs and sacrifices for your timeline to merge. I haven't thought a lot about what the actual mechanism for winning would be, but there are a lot of things you could do with that system as a base.
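To make the timeline idea concrete, here's a toy sketch of how Git-style commits give you multiple concurrent "nows" that can later merge. All the names here are invented for illustration; this is a model of Git's commit graph, not any actual game mechanic:

```python
# A toy model of Git-style time: each commit knows its parents, so
# "now" can be several diverging tips at once, and a merge commit
# joins two timelines back together.

class Commit:
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)

    def ancestors(self):
        """Every commit reachable from this one: its entire timeline."""
        seen, stack = set(), [self]
        while stack:
            c = stack.pop()
            if c.name not in seen:
                seen.add(c.name)
                stack.extend(c.parents)
        return seen

root = Commit("root")
a = Commit("a", [root])          # player one's timeline diverges...
b = Commit("b", [root])          # ...from player two's
merge = Commit("merge", [a, b])  # a merge commit joins both histories

print(sorted(merge.ancestors()))  # → ['a', 'b', 'merge', 'root']
```

A game's merge rules would live where Git puts its merge strategies: deciding what the world looks like when two divergent histories are reconciled.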

I think it'd be quite an interesting way to introduce people to distributed version control, and distributed systems in general. Plus I think it could be a fun game.

Higher Power

Ever since reading about Alcoholics Anonymous in David Foster Wallace's Infinite Jest, I've found it fascinating. I'd heard of AA before, of course, but something about the particular depth of its treatment in the book made it much more compelling. There's a bunch in the book about the program's dependence on God or, as it's sometimes (and was originally) called, a Higher Power. I always found that strange, too; why do you need God to stop drinking?

Here's what AA's "Big Book" has to say:

The alcoholic at certain times has no effective mental defense against the first drink. Except in a few cases, neither he nor any other human being can provide such a defense. His defense must come from a Higher Power.

I really like the concept of a constrained or distorted rationality: not what you should believe as a perfect faultless Vulcan, but what makes sense to believe as a faulty squishy human. It might be objectively true that if you understand human suffering in abstract, specific contact with it shouldn't change your behaviour. But in reality we know that our actions are biased towards things that we empathise with, and we don't empathise with the abstract. So it could be rational to increase or reduce your contact with suffering. In other words, maybe rationality isn't the thing that leads us to rational belief, but rational action.

My feeling is that the Higher Power belief, and AA in general, is an extreme example of that constrained rationality. How do you teach rationality to an addict, a definitionally irrational person? If trying to reason your way out of addiction worked, far fewer people would be addicts. Instead, you have to learn ways to act rationally even when your brain is thinking irrationally. And what could be more irrational than blind faith? Yet that is exactly what AA is designed to encourage, not just in God or a nominated Higher Power, but in the program itself.

Perhaps that is valuable even as a non-addict. I don't think it's necessarily feasible to take on a blind faith in God, but it can be valuable to accept the dominion of a particular program or system, or to follow through on things you've decided are right even when they feel wrong. A creed like "I will write something every day" or "I will never lie" might be irrationally prescriptive, but you know that there will be times when you are simply not good enough to do the right thing.

And in that moment, would you rather be relying on your own impaired judgement, or be able to fall back on your Higher Power?


I stayed up late last night and assumed I could get up early enough in the morning to write a post before my new deadline. Unfortunately, I didn't do that, so I failed. I am moderately concerned by the significant uptick in my failure rate (from around one per month to around one per week in the last month), but I think I can mostly attribute that to increasingly strict standards around posting. I expect that I will fail less often as I become accustomed to those standards, but that won't happen unless I actually make adjustments in order to meet them.

This and many earlier failures seem to have late nights in common. I think this is partly because the natural final failure is running out of time, but also because my willpower is weaker at the end of the day and at the same time my reasoning is most impaired. Those together mean it's more likely that I'll think something silly like "I'll write this tomorrow morning" when tomorrow morning is not very many hours away. Actually, aiming to write in the morning in general may be the solution here, if it's the morning at the start of my day-long window instead of at the end.

I also can't help but notice that many of my failures seem to happen on weekends. I assume that is at least partly because I tend to relax more on weekends which means there's less structure to fit the writing into, but also social demands on my time are higher and I tend to not be in a very productive frame of mind. Being aware of this should help, but if not I might try giving myself a day off by pre-writing an extra post on whichever weekday has the most time.

Cliff jumping

One of my favourite scenes in Back to the Future is the bit where Marty McFly jumps off the building rather than be shot by Biff. Oh no, our hero is dead – but wait, what's that? He's flying! The DeLorean was waiting below, caught Marty on its hood, and flies up to deliver a triumphant gull-wing door to Biff's face. To me, that's the quintessential action hero trait; to throw yourself into an impossible situation knowing your hero skills will bail you out. Of course, it doesn't always work like that.

I once did improv comedy, and that was the best part. Anyone can make up things on the spot, but you only got a really great scene when you took a crazy risk that would make the entire audience breathe in as they think "holy hell, this is going to be a disaster", and then laugh in surprise and relief as the rest of your crew come in to bail you out. You jumped off the cliff, everyone thought you were dead, but you miraculously survived. I'd had that audience experience before, but the feeling as a performer doing a successful cliff jump is really something else.

Of course, there's nothing miraculous about it. Good improv performers train in how to make those leaps and how to bail out their fellow performers to the point where it's a very reliable process. The danger is an illusion; in a sense, it's more about trust than risk. You trust that people are there to catch you. You build that trust over the course of working with and training with other performers until you know what you can get away with. However, that appearance of danger still feels real. Real enough to impress the audience, and real enough that a common mistake among newer performers is playing it safe because they don't trust their skills enough.

It is easy to make that mistake outside of improv as well. You can spend a career – in some cases a lifetime – building your skills, and still treat risks as conservatively as a novice. I've seen very capable people beg off taking the obviously better hard road because there's a marginally workable easy road. This isn't laziness, either; in many cases the easy road takes more work, even if the difficulty is lower. It's really a kind of risk aversion, or more accurately a lack of trust in their own abilities. And the result isn't failure, it's mediocrity.

This is part of the reason underconfidence can be worse than overconfidence; taking on too much might mean you fail and have to correct your behaviour, but taking on too little means you never have a chance to really succeed. And how would you even know?

I believe the antidote is to learn to love the feeling of cliff jumping, of knowing that you're taking a big risk and that what you're doing feels suicidal and certainly looks suicidal from the outside. But you know something everyone else doesn't: beneath you is a DeLorean, and you're going to fly up and surprise the shit out of them.

The savant effect

You meet some strange people in online games. Modern matchmaking systems are quite sophisticated, almost universally being based around Bayesian predictive models of player skill. There are flaws with that approach, but as far as anyone can tell the fairness of the matches isn't one of them. And yet somehow you still get people who are short-tempered, mean, rude, and dumb. In team games, someone like this has an enormous negative effect, often singlehandedly losing the game for their team. But just when you're ready to write them off, those very same neanderthals often pull out some surprisingly skillful play.

A friend was recently telling me about a fairly well known designer in the fashion industry who is a total, utter nightmare to work with. He's bad with money, bad with business, bad at management, bad at organisation, bad – as far as I could make out – at everything. Apparently the continued survival of his label is a miracle that constantly surprises everyone around him. So, hearing this, I assumed that mister bad-at-everything must be a pretty average designer as well. I had the chance to check out one of his pieces and it was... incredible. It was really, really good. I couldn't have been more wrong.

In psychology, there's a phenomenon called the halo effect. When you learn about a positive quality in someone, you tend to generalise that to their entire personality and everything about them. You assume a successful person is also happy, or a beautiful person is also kind. My aunt once said, completely straight-faced, "Michael Jackson can't be a pedophile; he's such a talented musician!" It's a fairly pervasive bias, and it also works negatively: once you get the impression someone is no good, they must be bad at everything.

Okay, so I unfairly halo-effected this designer's artistic ability from his business skills, and the toxic teammate's in-game skill from their attitude. But there's something more there: it's not merely that I shouldn't have inferred one bad quality from another, it's that I should have inferred the exact opposite! The existence of all those negative qualities all but guaranteed a positive quality, in the situations where I encountered them.

The reason why is this: in both cases, there was a filtering function (the matchmaking system or the continued operation of the design business) that was at least partly linear (by which I mean that some degree of in-game skill can make up for some degree of being a huge douche). It's another variant of the anthropic principle; in this case, the fact that the person still has the matchmaking ranking that they do, or can still hold on to a business despite being so incapable, strongly suggests that there is some very large compensatory factor. There are surely people without that factor, but they're not still in business so the question would never come up.

I call this the savant effect, after the fascinating phenomenon of savant syndrome, where some people with severe mental disabilities have surprisingly exceptional abilities in specific areas. Presumably there are a great many people who have severe mental disabilities without any superpowers to compensate for them, but they don't pass the filtering function for being newsworthy or interesting enough to make into a movie starring Dustin Hoffman.

It's sometimes quite tempting, when you see someone or something who appears vastly unfit for the position they're in, to assume that they must have somehow cheated, or that the system is otherwise broken. I hear it in software all the time: "oh, this piece of software is objectively better, but everyone likes the other one for no reason". "How'd that guy get promoted when he's not as good a developer as me?". I mean, for sure, sometimes the system is broken, and people do cheat, but before you jump to that conclusion it's worth considering that maybe the system is fine, you just have a limited view of the candidates and the criteria in question.

And perhaps it's worth considering how much benefit there can be in knowing savants. Someone who manages to be so good at something that it can make up for them being bad at nearly everything else has got to be worth learning from.

The two-axis model

It's quite common, when talking about happiness and sadness, pleasure and pain, good and bad, to place both options on either end of a single axis. Let's say you're a sentient computer of some kind. Somewhere within you is a number for how good or bad you feel, with 0 as neutral, -5 as a bit bad, and +100 as the best ever. This is very elegant, and seems to feel intuitively correct. If someone is sad, you want to make them happy to stop them from being sad. Or if something bad happens, you might try to do something nice to make up for it.

However, I believe the one-axis model is not sufficient. Things can be both good and bad, and a thing that has both significant positive and negative consequences (say, bombing Japan in World War 2) can hardly be equivalent to something with no consequences. We don't accept the idea that if you go out and save someone's life, that gives you one free murder. I think a better mapping to reality is a two-axis model, where good and bad are considered independently.
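As a toy illustration of the difference (the numbers are made up, and "good" and "bad" stand in for whatever you'd actually measure):

```python
# One axis collapses good and bad into a single net value;
# two axes keep them independent.

def one_axis(good, bad):
    return good - bad      # a single net number

def two_axis(good, bad):
    return (good, bad)     # both dimensions preserved

# On one axis, intense mixed experience is indistinguishable
# from nothing happening at all:
assert one_axis(10, 10) == one_axis(0, 0)

# On two axes, they're clearly different states:
assert two_axis(10, 10) != two_axis(0, 0)
```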

Similarly, a feeling of mixed happiness and sadness is of course possible and quite different from neutral. It is possible to cheer someone up when they feel bad, but I'm not convinced that the mechanism is by the happy feelings subtracting from the sad feelings. You can also distract someone from sadness with anger, pain, or mindless internet clicking. And it's true that just feeling like you're being cared for is soothing but, I should stress, that's not the same thing as enjoyment.

I once heard that as the difference between compulsion and desire: you want to go to the park because it brings you pleasure, but you are compelled to scratch an itch because not doing so brings you discomfort. For that reason, although you might describe a pleasurable feeling of relief when you scratch an itch, it is not real pleasure. The proof of this is simple: would you choose to be itchy?

Another place this shows up is reinforcement learning. The original one-axis model of reinforcement was just reward vs punishment, but it was later revised to have two axes: positive/negative and punishment/reinforcement. Positive and negative in this case should be read as additive and subtractive, so a smack is a positive punishment, because it adds something bad. A negative reinforcement takes away something bad. You can also have the opposite: a negative punishment takes away something good, and a positive reinforcement gives something good. These are four distinct ways to influence behaviour in this model, with different consequences for each.
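The four quadrants can be laid out as a small lookup table, just to make the terminology concrete (the examples in the comments are my own):

```python
# The two-axis reinforcement model: positive/negative means adding or
# removing a stimulus; reinforcement/punishment means whether the
# behaviour is encouraged or discouraged.

QUADRANTS = {
    ("add", "bad"):     "positive punishment",     # e.g. a smack
    ("remove", "bad"):  "negative reinforcement",  # e.g. stopping a loud noise
    ("add", "good"):    "positive reinforcement",  # e.g. a treat
    ("remove", "good"): "negative punishment",     # e.g. confiscating a toy
}

assert QUADRANTS[("add", "bad")] == "positive punishment"
```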

And that is the thing I think is most important: not that two-axis is better in some abstract theoretical way, but that it has better consequences. If you go around thinking good is the opposite of bad, you're liable to wonder strange things like "nothing is bad in my life, why aren't I happy?".

Of course, the two-axis model makes that obvious: pleasure isn't anti-pain, and you would be happier with more pleasure and more pain than none of either.

Expected value

It's a curious thing, being a self-modifying system. A whole lot of neat assumptions and abstractions we can make with more well-behaved systems just end up not working very well for us at all. In theory, we are quite well-modelled as predictive optimisation systems, and there's a lot of AI research going down that path, the assumption being that if you can build a good enough predictive optimiser, you'll eventually get intelligence. Whether or not that's true, it's pretty clear that, as far as optimisers go, we're fairly unoptimal.

I have a long-standing disagreement with a friend about wireheading, a kind of brain hack where you would find your brain's pleasure centres and just set them to maximum all the time. Instead of doing anything, you would just roll around in a state of maximal bliss until you die. This is not currently possible and there's no guarantee that it ever will be, but it's an interesting philosophical and moral puzzle. My friend thinks not just that wireheading is inevitable, but that it is rational for us to do, and that it would be rational for an AI to do the same thing!

The thinking goes like this: you can model any decision process as one giant black box, where lots and lots of different inputs go in, some magic happens, and all your available options receive a value. So eating lunch gets a 5, racing motorbikes with Henry Winkler gets a 7, and hitting yourself in the face with a brick gets a 0. This all happens very quickly and subconsciously, of course, but that internal structure is hidden somewhere driving your preferences for motorbike rides over brick-face. So, if we have a value for how good things are, why not cheat? Henry Winkler and lunch are harder to come by than bricks, so what if we could just set our brick value to, like, a billion, and just be blissfully happy hitting ourselves with bricks?
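That black box is easy to sketch, using the arbitrary values from above, along with the wireheading cheat of editing the value table directly:

```python
# A toy decision black box: options go in, values come out, and the
# agent picks the highest-valued option. The numbers are the arbitrary
# ones from the text.

values = {
    "eat lunch": 5,
    "ride motorbikes with Henry Winkler": 7,
    "hit yourself in the face with a brick": 0,
}

def decide(values):
    return max(values, key=values.get)

assert decide(values) == "ride motorbikes with Henry Winkler"

# The wireheading "cheat": instead of seeking out high-valued options,
# rewrite the value table itself.
values["hit yourself in the face with a brick"] = 10**9
assert decide(values) == "hit yourself in the face with a brick"
```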

If that seems like a bad idea, I agree! In fact, in a sense it's what I wrote about in goalpost optimisation. Letting your optimiser optimise itself is a recipe for tautological disaster. But the question isn't whether it seems like a good idea, the question is whether it's a rational thing to do, and whether we would expect an intelligent machine in the same position to do it.

The reason I don't think so is that, although cheating your value function to infinity would satisfy your value function after you've done it, you still have to make the decision to do it in the first place. And if the most important thing to you is, say, collecting stamps, there's no world in which changing your values from enjoying stamps to enjoying staring at the ceiling twitching in ecstasy meets your current values. But the one niggle with that argument is that our values don't just include things like stamp collecting or meeting Henry Winkler, people also want to be happy.

Can we take a second to realise how weird that is? It's as if you built an artificial intelligence whose job was to clean up oil spills, and instead of saying "do the thing that cleans up the most oil", you said "try to clean up the most oil, but also store a number that represents how well you're doing that, and also try to make that number as large as possible". What an inelegant and overcomplicated way of doing things! There's every reason why a machine would just set that number to infinity, but also no reason why you would give a machine that number in the first place.

Of course, any discussion of wireheading is really a proxy for less extreme discussions about value systems and the role of pleasure. It gives us pleasure to maximise our pleasure, and we desire to satisfy our desires. And if that curious fact doesn't lead to wireheading, we should at least expect some pretty weird results. You could imagine a person whose pleasure is very low, but whose pleasure-about-pleasure is very high. That is, they aren't doing things that make them happy, but their "am I doing things that make me happy?" system has been short-circuited. Or someone whose expected value for things stays really high even though the actual value is low, because their expected value numbers are being modified directly.

That, I think, is the real concern of the modern age: not wireheading ourselves, but being subtly wireheaded by others. Our curious quirk that leads us to value getting the things that we value is very easily exploited as a means of shaping behaviour. Giving someone what they want all the time is very difficult and costly. Far better if you can make people act as though they're getting what they want using flaws in human biology.

Frailty, thy name is dopamine.


This failure was nearly identical to my last one so instead of spending a lot of time on analysing it I think I'll go meta and get straight into figuring out why my previous attempt to fix the same problem didn't work.

Part of it was that it takes some time to set up a new habit, and trying to write in the morning to avoid these kinds of failures hadn't quite taken yet. But, looking at my plan from last time, it was also lacking in specificity. I weaseled a bit with saying "I should be aware of" the tendency to fail on weekends (although this time it wasn't a weekend, the circumstances were pretty similar), and that "aiming" to write in the morning "may be the solution".

So I'm not going to make any changes to my overall strategy, but I will clarify the particular tactic I want to use to get there. I'm going to set myself a motte and bailey goal. The bailey is that I intend to write each post by the start of the 24-hour posting window. The motte is that I must write it by the end. As I said, effectively the same as my current plan but better specified.

Wish me luck!

Against identity

My recent efforts at making a commitment platform – what eventually became – led me down a bit of an interesting trip through the psychology of goal setting. I had a hunch that what I was doing would be useful to me, but no evidence backing it up in the general case, so I went looking. Surprisingly, I found a lot of material saying you shouldn't tell people about your goals. Oops!

Digging a little deeper, that advice ultimately comes from the great work of Peter Gollwitzer who, among other things, pioneered symbolic self-completion theory and the idea of self-defining goals. That is, certain goals like "I want to be a doctor" aren't about taking a specific action, they're about being perceived (and perceiving yourself) in a certain way. Symbolic self-completion theory says that when that identity is threatened, you seek out symbols and demonstrations to prove it. Similarly, when working towards a self-defining goal, those kinds of identity-demonstrating behaviours can substitute for actually achieving anything.

Which, to reel it back in, is not the same thing as saying don't tell people about your goals. The mechanism at work is that talking about your identity goals is a way of demonstrating that identity. So talking about all your amazing plans for being a doctor makes you feel like a doctor, and paradoxically reduces your need to actually be a doctor. I feel relatively safe in's model, because the commitment structure is fairly specific and accountable, so it should be less vulnerable to identity weaseling. In fact, Gollwitzer's paper suggests this as one of the possible ways to mitigate the effect.

Honestly, the whole identity thing seems kind of inelegant. Why define yourself as the kind of person who does thing X rather than just... doing X? It seems like the latter gets more done with less baggage. In software we've had a storied history constructing elaborate definitions of types of things, and many developers now believe that just expecting certain behaviour, rather than a certain type identity, is a faster and more flexible way to work. In a sense I think identity is similar to happiness: an indirect, consequential property that we've started trying to manipulate directly. Maybe we would be better off if we just got rid of it.

That said, Gollwitzer found at least some benefit to identity goals, and regardless we're probably stuck with our particular set of mental quirks for the foreseeable future. The one thing I would say to that is that we can definitely decide what gets to be part of our identity. In which case I would suggest that a good identity is like a good type system: minimal. Maybe you don't need running or fishing or medicine to be your identity, with all the attendant risk of letting external factors define who you are.

Perhaps a better way to approach your identity would be to just prune things away until you find the parts of yourself you'd never want to change. That is, instead of defining your identity as things you do, whittle it down to just the essential elements of your character.

What the hell

I learned about an interesting bias recently. The most official name seems to be "lapse-activated causal patterns", but the more fun name is the "what the hell" effect. It's when you have a particular system you're trying to stick to, for example a low-calorie diet. If you have a moment of weakness and eat a donut, that shouldn't affect your decisions about subsequent donuts; the donut is a sunk cost and the best thing you can do is move on. However, what we tend to do instead is think "ah, what the hell, since I'm already failing at my diet..." and blitz through the entire donut box.

I think there are two interesting ways to look at this. The first one is in terms of identity: deep down your goal wasn't "eat as few donuts as possible", it was "be the kind of person who doesn't eat donuts". One slip-up and that identity is ruined; you're a no-good donut-eater regardless of the quantity involved. Another way is as a consequence of a kind of absolutism. After all, you want your goals to be locked down so you can't just optimise the goalposts. But if your goal was "never eat a donut", you've now failed that goal. And since that goal's ruined anyway, you may as well chow down.

Which is a new way of thinking about my earlier idea of do and do not. The point is that when it becomes clear that a goal is unattainable you can find a new compromise goal, even if only to keep you in the habit of following through, and to make it a little (but not too) uncomfortable to fail. But it's also a good way of fighting the "what the hell" effect if your issue is absolutism. You can no longer do "no donuts", and the natural reaction of "do not" would be giving up entirely, but the compromise between the two would be eating only one donut this week.

That's not the same thing as succeeding, of course; you still failed to achieve your original goal. But that doesn't mean you should also abandon nearby goals that are the best option remaining after the sunk cost goal is buried.

Defence in depth

A few things I've written recently, including what the hell, against identity, motte and bailey goals, do and do not and going meta, have been about failure and how systems break down. I think there's an interesting unifying idea there that's worth going into separately.

I remember reading about NASA's software engineering in Feynman's notes on the Challenger disaster. Unlike the other departments involved, the software team had an incredible resilience to failure. In addition to fairly stringent engineering standards, they would do a series of full external QA tests and dry runs of all their systems, simulating an actual launch. Plenty of teams do dry runs and QA testing, of course, but the difference is that QA failures were considered almost as serious as real failures. That is, if your software failed these tests, it didn't kill anyone, but it probably could have.

At the heart of this is a paradox that speaks to that general problem of failure: you want to catch failures early, before they cause real problems, yet you still want to treat those failures as real despite their lack of consequences. Let's say you have to finish writing an article by a week from now, but to add a bit of failure-resistance you give yourself a deadline two days earlier. That way, if anything goes wrong, you'll still have time to fix it by the real deadline. Sure, you could say you're going to treat your self-imposed deadline as seriously as the actual deadline, but that's kind of definitionally untrue; the whole point of your deadline is that it's not as serious as the real one!

The general principle here is defence in depth: design your system so that individual failures don't take down the whole thing. An individual event would need to hit a miraculous number of failure points at once to cause a complete failure. But that assumes each event is discrete and disconnected from the others, like someone trying to guess all the digits of a combination lock at once. In reality, if smaller failures are ignored or tolerated, you really have one long continuous event, like a combination lock where you can guess one digit at a time. The difficulty becomes linear in the number of digits when you really wanted it to be exponential.
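The arithmetic behind the lock analogy, assuming a lock with n digits of 10 symbols each:

```python
# Worst-case guesses for an n-digit combination lock with 10 symbols
# per digit: guessing the whole code at once means trying every
# combination, but confirming one digit at a time only needs at most
# 10 tries per digit.

def all_at_once(n, symbols=10):
    return symbols ** n   # every full combination is a separate guess

def digit_by_digit(n, symbols=10):
    return symbols * n    # at most 10 tries per digit, independently

assert all_at_once(4) == 10_000
assert digit_by_digit(4) == 40
```

Tolerated small failures are the equivalent of letting an attacker confirm digits one at a time: each one on its own looks cheap, but together they collapse the whole combinatorial defence.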

In order to get that, you have to make sure that those individual failures can't be allowed to persist. But that is much easier said than done. Both Feynman's notes on Challenger and the NASA satellite post-mortem I referenced in going meta revealed this very problem in their organisation: the culture let individual errors accumulate until their defence in depth was broken. But in neither case was the general problem of slow compromise of defence in depth really addressed.

I see the main issue as proportionality. If you tell someone "the failure of this one individual widget should be treated as seriously as the failure of the entire space shuttle", all that's going to do is destroy the credibility of your safety system. Similarly, setting up your goals in such a way that one minor failure can sink the whole thing is just silly. I think what the NASA software team got right wasn't just that they took their QA testing so seriously, but that they also didn't take it too seriously. Failing QA might mean a serious re-evaluation, but failing the real thing probably means you lose your job.

A significant secondary issue is that the consequences increase enormously as you go meta. A single screw not being in the right place is a very minor failure, and deserves a very minor corrective action. However, failing to correct the missing screw is a much more major failure. It may appear to be just another aspect of the minor failure in the screw system, and thus fairly unimportant. However, it's really a failure in the defence in depth system, which is the kind of thing that can actually take down a space shuttle. Perhaps that counterintuitive leap from insignificant failure to catastrophic meta-failure is at the heart of a lot of defence in depth failures.

In the absence of any guidance from NASA, I'd suggest the following: set up your system in layers to exploit defence in depth. Make sure the consequence of a failure at each layer is serious, but not too serious to be credible. And make sure that failures in the layering itself are considered extremely serious regardless of their origin, as they have the most potential to take down the system as a whole.

Prototype wrapup

I've been meaning for a while to get into a habit of prototyping in a more systematic way. I wrote a while back about the benefits of prototyping, and more recently about wanting a more exciting version of a code-every-day challenge. What I'd ultimately like is for prototypes to fill that space, or at least part of it. To that end, I decided to commit myself to making one prototype per day this week.

That didn't go so well, mostly because my prototype discipline was pretty lax and I made overcomplicated prototypes that were basically mini-projects. So I didn't get done as many as I wanted, but I'm quite happy with the ones that did get done:



This was a silly little stack language I threw together for a friend as part of an ongoing joke about compile-to-js languages. It has a hello world that should give you some idea of how it's meant to work. I ended up spending a lot of time thinking about how to make an efficient streaming parser, but in the end I gave up and just read each line into memory.
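Since the language's actual syntax isn't shown here, a hypothetical sketch of the general shape — a line-based interpreter where each line is a single token — might look like this (in Python rather than JS, purely for illustration):

```python
# A toy line-based stack language interpreter (hypothetical syntax --
# not the actual language from the prototype). Each line is one token:
# a number pushes itself, a word applies an operation.

def run(program: str) -> list:
    stack = []
    ops = {
        "add": lambda a, b: a + b,
        "mul": lambda a, b: a * b,
    }
    for line in program.splitlines():  # just read each line into memory
        token = line.strip()
        if not token:
            continue
        if token in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[token](a, b))
        elif token == "print":
            print(stack[-1])
        else:
            stack.append(float(token))
    return stack

run("2\n3\nadd\n4\nmul\nprint")  # prints 20.0
```

The `splitlines()` loop is exactly the "give up on streaming, read it all into memory" approach described above.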

Time: 2 hours.


source demo

When messing around with SDR, it really pays to tune the length of your antenna properly. However, the only decent site I found to do it gave lengths in feet and inches, and that extra conversion step was kind of cramping my style. So I thought I'd use it as an opportunity to learn PureScript, which is a kind of Haskell-meets-Javascript language I've had my eye on. I figured that such a minimal web challenge wouldn't take too much time, but I was wrong.
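The underlying calculation is simple; here's a sketch using the standard amateur-radio rule of thumb (468/f feet for a half-wave dipole, converted to metres, with the usual ~5% end-effect shortening baked in — not necessarily the exact formula that site, or the prototype, uses):

```python
# Rule-of-thumb dipole element lengths in metres. 468/f feet converted
# to metres is roughly 143/f. Illustrative only -- real antennas need
# trimming for local conditions.

def half_wave_dipole_m(freq_mhz: float) -> float:
    return 143.0 / freq_mhz

def quarter_wave_m(freq_mhz: float) -> float:
    return half_wave_dipole_m(freq_mhz) / 2

# e.g. a quarter-wave element for the 100 MHz FM broadcast band:
print(round(quarter_wave_m(100.0), 3))  # 0.715 (metres)
```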

Actually, I got tripped up by the fairly involved setup/build process which took up a significant chunk of time. Another big problem was the web framework, which was 99% fine, but that 1% (an HTML attribute it didn't know about) ended up taking over an hour to figure out and I still don't think I figured it out properly. Type safety!

Overall I feel positive about the result, though I'm still unconvinced that Haskell is a good choice for making webpages. I'll probably give it another try later to see if there's light at the end of the learning curve.

Time: 7 hours.

Infinite Gest

source demo

This was really a new project masquerading as a prototype. I've wanted to look into the idea of making a minimal sketching tool for ages. The idea is that instead of having a bunch of tool UI, you just sketch things and it uses gesture recognition to automatically turn your badly drawn shapes into beautiful platonic ideals. A kind of slight upgrade from paper, but not so much that it interferes with the process of just sketching. Also, the pun name I came up with was so amazing I just had to do it.
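To give a flavour of the idea, here's a crude stroke classifier (illustrative only — not the recognizer the demo actually uses). It compares the straight-line distance between a stroke's endpoints to the total path length: a stroke that barely deviates from its chord is a line, and one whose ends nearly meet is a closed shape:

```python
import math

# A stroke is a list of (x, y) points. Classify it with two cheap
# geometric ratios (hypothetical heuristic, not the demo's method).

def path_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def classify(pts):
    straight = math.dist(pts[0], pts[-1])
    total = path_length(pts)
    if total == 0:
        return "dot"
    if straight / total > 0.95:   # barely deviates from the chord
        return "line"
    if straight / total < 0.2:    # ends nearly meet: a closed shape
        return "circle"
    return "unknown"

line = [(x, x) for x in range(10)]
circle = [(math.cos(t / 10 * 2 * math.pi),
           math.sin(t / 10 * 2 * math.pi)) for t in range(11)]
print(classify(line), classify(circle))  # line circle
```

A real tool would snap the classified stroke to its platonic ideal — fitting a best-fit line segment or circle to the points — but the recognition step is the interesting part.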

I really had a hard time with additive vs subtractive on this one. Initially I wanted to just build a simple version, but some of the things (infinite scrolling canvas, drawing shapes and moving them around) are complicated enough that I was lured into going full framework. Ultimately, that led to a better, more comprehensive demo at the end, but it turned what could have been a few hours into a lot more.

Time: 8 hours.

So in total I spent 17 hours on prototypes, which could have been more than enough for seven 2-hour prototypes if I'd been more modest in scope. I can't say I necessarily regret taking the ideas as far as I did, because they turned out pretty well, but I definitely know I can't sustain that as a daily output. Maybe that will turn out to be fine, but for the time being I still want to validate the idea of regular prototypes.

I'm making a smaller commitment for next week. 7 was definitely too many, I think 5 would be a nice number, but I'm going to start at 3 and make sure I'm doing it effectively before I scale back up.

Surprise tracking

One of the best pieces of advice I've heard about planning was to project yourself forward in time and imagine that the plan has just failed. Are you surprised? If not, your plan needs work.

I think surprise is a seriously undervalued intuition, because it can roll up a whole lot of different factors into a prediction in a very quick, intuitive way. Other ways of accessing those fast predictions ("do you think it will succeed?", "do you think it will fail?", "what risks can you think of?") all seem to get bogged down in biases like optimism or social pressure, or end up being more a test of imagination than prediction. I wrote about using the worst unsurprising case exactly because I think surprise is uniquely powerful for linking prediction and risk.

But something I only thought of recently is that it might be useful to use this as an ongoing measurement. So instead of just asking "how surprised would you be if this plan failed" at the start of the plan, you could ask on a regular basis throughout the plan's life. Instead of having just one value to work with, you now have a trend. Ideally, that trend should be towards more surprise, or at worst the same. If not, it's probably a sign that your plan is in trouble.

It could work well for more specific predictions as well, like "how surprised would you be if this part of the project took longer than expected", or "how surprised would you be if we got feedback that our software was too hard to use". Over the lifetime of a project, a bunch of similar surprise tracking questions could paint a pretty interesting graph of a whole team's intuitions about the project's success.

It's already very popular to continuously track certain metrics over a project's life, but these are usually objective quantities like burn rate, server capacity, or number of users. Tracking subjective metrics seems like it could be pretty useful too, as long as those metrics were predictive. I think surprise tracking would be a good foundation for that.


One of my favourite illusions is the stopped clock effect, also known by the much cooler name chronostasis. The illusion happens when you see a clock out of the corner of your eye, then turn your eyes to focus on it. The amount of time it takes the next second to pass seems much longer than a second. What's happening is that during the time you couldn't actually make out the face of the clock, your brain fills in what it thinks should be there. That backfilling is normally seamless, except that clocks obey more stringent rules than your visual system knows how to fake.

I've been thinking about a similar illusion I've noticed in areas I don't often think about directly. I recently had a long conversation with a hair stylist about the complexities of the salon industry, and I spent most of the time just convincing myself that people could legitimately care this much about hair. I remember being younger and thinking, like young people do, that I must have just about figured out everything worth figuring out. It turns out to be a pretty common sentiment.

I think part of the reason we often underestimate how much we don't know is that we are so very good at just filling in the blanks with whatever available information we can get our hands on. If you look at the frankly crap signal we get before all our neurological trickery, it's amazing we can see at all. Hundreds of years ago, Helmholtz arrived at the same conclusion, which he called unconscious inference.

Our thinking is similarly amazing for how much it gets done with so little. Our tiny capacity for focus and working memory only really becomes obvious when we go looking for it, for example in specifically designed tasks like N-back. Part of this is that our brain is just well adapted to the kinds of problems we tend to have, but I think it's also that our mental capacity, like our vision, is particularly good at hiding its own limitations. I once saw someone ask "what do you see if you're completely blind?", and a blind person replied "well, what do you see behind you?"

So not knowing what we don't know isn't entirely surprising, but what does surprise me is how hard it is to even think about. Even once you build up an intuition for "there are probably a bunch of things I don't know", it seems like it doesn't actually work very well. When you're paying attention it's easy to remember, but it's the things you're not paying attention to that are the problem.

But perhaps this particular quirk is inevitable. As long as we have a limited capacity, there has to be some behaviour when that capacity is exceeded. While we might assume that a big obvious "your capacity has been exceeded!" signal would be better, the reality is that our perception and understanding are in a constant state of compromise, and if there were such a signal it would be going off constantly.

Maybe it's for the best that we don't notice.

Next action

Ages ago I saw a great mechanic in a video game, I think for the Nintendo DS but if not it was around that era. Mostly it just had the standard RPG stuff: characters and quests and spells and so on. But the one thing that made it stand out was that it had this amazing "next action" bar. Down the bottom of the screen above all the other important information was just a simple display showing whatever the next thing you had to do was.

Since then I've played games with much more complex quest systems, featuring multiple diverging quest lines, tree views, inline maps and so on. But, as sophisticated as those systems are, none really had the impact of that original one-line display. It was so simple! You could roam around and do other things as much as you wanted, but whenever you felt like moving forward the next action was always there, clear as day.

I think it's easy to get bogged down in complexity, especially when that complexity is actually necessary. Often you need to consider things like which tasks depend on which other ones, time planning, making sure you have the resources you need and so on. But, at the end of the day, the goal of planning is to reduce that complexity as much as you can. While you're in the middle of working, your decisions shouldn't be things like "what is the best thing to do out of all the possible things I could be doing?"

It should just be "am I ready to do the next thing now?"

Attritional interfaces

I've never really liked RSS readers. I've used them on and off for various periods of time, but in the end it always goes the same way: I end up following too much stuff, the "unread" counts pile up into the hundreds or thousands, and I eventually just declare RSS bankruptcy and abandon it entirely until the next go-around. However, in recent years social news sites like Reddit, Twitter and Hacker News have mostly filled the RSS-shaped hole in my life, despite missing a lot of the content I used to go to RSS readers for. Why is this?

My contention is that social news sites are fundamentally attritional, by which I mean they slowly lose data by design. While this would be suicide for office software or a traditional database-backed business application, it actually works very well for social news. Old posts on Reddit fade away under the weight of new ones, and the only way to keep them alive is with constant attention in the form of upvotes and reposts. It's quite common to think of something you saw a few days ago and be unable to find it or remember what it was called. While that might be frustrating, it's actually Reddit working exactly as intended.
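One common way to build this kind of designed forgetting is time-decayed ranking. As a minimal sketch (not Reddit's actual algorithm): votes help logarithmically, but age pushes every post down at a constant rate, so old posts fade unless attention constantly renews them:

```python
import math

# Minimal time-decayed "hotness" score (illustrative only). Each extra
# order of magnitude of votes adds 1 point; each half-life of age
# subtracts 1 point, so decay eventually beats any vote count.

HALF_LIFE_S = 12 * 3600  # a post's score drops by 1 every ~12 hours

def hotness(upvotes: int, posted_at: float, now: float) -> float:
    age = now - posted_at
    return math.log10(max(upvotes, 1)) - age / HALF_LIFE_S

now = 0.0
fresh = hotness(upvotes=10, posted_at=now, now=now)
stale = hotness(upvotes=1000, posted_at=now - 3 * 24 * 3600, now=now)
print(fresh > stale)  # True: three days of decay beats 100x the votes
```

Note that nothing is deleted — the database keeps everything, and only the ranking forgets, which is exactly the "reliable system underneath, attritional interface on top" split described below.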

The trick is that most software is designed to complement us. Where we are forgetful, computers remember. Where we are haphazard, computers are systematic. Where we are fuzzy, computers are precise. This makes them amazing tools, because we can do what we are good at and leave computers to do what we aren't. However, some systems have to be designed to mirror us. When we make a user interface, it has to work like we work, or it won't make sense to us. Email is designed to complement our memory so that we don't just lose emails. Reddit is designed to mirror our memory so that it can present us with constant novelty.

That said, I should stress that these two things aren't really opposites. In fact, it would be very difficult to design fundamentally attritional software because eventually you run into the reality that the system is a computer, not a human. Usually, you'll have a reliable system underneath with an attritional interface on top. Reddit, for example, is built on a database and never actually loses information. You wouldn't want it to anyway, because people do link to Reddit threads from elsewhere. The only reason things go missing is because the interface is set up that way.

RSS readers are an example of software crying out for an attritional interface. I don't care about some blog post I've been ignoring for weeks, but it stubbornly persists until I am the one to take the initiative and say "yes, I definitely don't want to read this". Just let me forget about it! Though RSS readers are an easy target, there are many other examples. I previously wrote about browser tabs that accumulate without end. Mobile notification systems could also benefit from a dose of attrition; do I really need constant reminding that some app updated until I specifically dismiss it?

So, if you're working on an interface, I would encourage you to consider: am I trying to complement or mirror the user here? And, specifically, consider whether your system should remember things forever just because it can, or whether it might be better to forget.


I think most people would broadly agree that freedom is a good thing. By freedom I mean specifically being unrestricted in your actions: you can mostly do what you want. The times when you can't are usually when it would limit other people's freedom. Perhaps, in some futuristic virtual libertarian utopia, it will not be possible to interfere with people's freedom, so everyone will be able to do what they want completely and without consequence. The internet, with its lack of physical consequences or government regulation, is part of the way there already.

There's a neat philosophical problem I read about called Parfit's hitchhiker. Basically, a perfectly rational utilitarian hitchhiker is trying to catch a ride with a perfectly rational utilitarian driver. The driver won't do it for free (that would be irrational!), so the hitchhiker offers to withdraw some money at their destination. The problem is, a perfectly rational hitchhiker would have no reason to follow through with this once all the driving is done. The driver knows this, and thus refuses.

The problem is that you need a way to bridge the cause-effect gap. The driver only wants to give the lift (effect) if they get money (cause), but those two things happen in the wrong order! The effect has to come after the cause. In practical terms, of course, this is a solved problem; the two could just form a contract, and then the government would step in if the terms weren't being met. More generally, you can use any kind of enforcement system that brings the effect forward to its rightful place after the cause. However, doing this comes at the cost of some freedom.

Essentially, you always lose freedom every time you make an agreement, a promise, a deal, a contract, or any of the other various forms of voluntary constraint on your decisions. Perhaps that's obvious; by committing to doing something, you give up the freedom to not do it. However, what might not be as obvious is the consequence that the ability to be unfree is therefore a vital part of a stable society, even a utopian one.

I've been thinking about this specifically in the context of internet ads and ad blockers, which is a topic of some debate. It's actually quite similar to Parfit's hitchhiker. Internet properties cost money to operate, so their owners need users to watch ads to pay for them. However, from the user's perspective, what obligation are they under to download your ad after you've already provided them with the content? It's their computer, their network connection, and they have freedom over how it is used. The internet is not really designed for unfreedom, which makes ads a tricky proposition.

It may well be that advertising can't survive on the internet as long as we're free to ignore it. Depending on your perspective, that might be the internet working as intended. However, it is strange to think that there are certain kinds of transaction that we can't make, even if it would be beneficial to us, because we're too free.

Waiting for The Call

Until a man is twenty-five, he still thinks, every so often, that under the right circumstances he could be the baddest motherfucker in the world. If I moved to a martial-arts monastery in China and studied real hard for ten years. If my family was wiped out by Colombian drug dealers and I swore myself to revenge. If I got a fatal disease, had one year to live, and devoted it to wiping out street crime. If I just dropped out and devoted my life to being bad.
Neal Stephenson – Snow Crash

I really like the video by Derek Muller (aka Veritasium) about trying to become a filmmaker. He describes calling a local film director looking for... something – a way in, maybe, or just some idea of what to do. He uses that to launch into a larger exploration of learned helplessness and the way we tend to assume that it's up to other people, rather than ourselves, whether we succeed. It's a good point, but the story reminded me of a slightly different thing I've noticed.

I call it waiting for The Call. You have a great idea, an ambitious plan, some new amazing direction for your life. You're a tightly coiled spring of potential energy ready to unleash on the universe, but you never quite feel ready. Suddenly, the phone rings. "Hello, this is The President. We need you. It's go time." Okay, let's do this! At last, you can commit completely to this particular course of action, safe in the certainty that this is definitely the right thing to be doing and now is the time to do it.

But, rare exceptions aside, it's unlikely you'll get that kind of call from the president, or indeed anyone. There probably won't even be a clear signal to say that this thing is the right thing to do. In fact, most of the time the great idea doesn't look like much to start with, and you have to spend a lot of time convincing other people that it's worth anything. Yet it's all too easy to put that hard work off, waiting for some sign that isn't coming.

I don't think this is necessarily anything to do with learned helplessness. In fact, I would say it's probably more like a kind of backwards impostor syndrome. Instead of feeling like you're missing some indefinable genuineness quality that other people have, you feel like you need that quality before you even start. I wrote before about the strange phenomenon of feeling successful, which you experience second-hand from successful people, but not first-hand via actual success. I think this is similar: you feel an imaginary destiny in the lives of others, which you could have too if The Call would only come in.

It's truly hard to accept that remarkable things don't necessarily feel remarkable when you're doing them. Most likely if you do ever get The Call, it's not going to be before you start, or even while you're working to make your thing a success. Instead, it'll be years later when some kid calls you up to say "hey, since you obviously have the success-nature, is there any chance you could tell me that what I'm doing is right?"

Prototype Wrapup #2

Week 2 of my prototype adventure fell off the rails a little. Last week I committed to 3 prototypes, but I only made one:

Relative word cloud

source demo

This is an idea I wanted to try out. Traditional word clouds are really distorted in favour of common words. Normally you filter the most common out with stoplists, but those are a really blunt tool. I thought it would be more fun to use a base set of words and do some Bayesian magic to make a word cloud showing the words that appear a lot in the sample text, but not much in the base set. Unfortunately, the statistics involved were kind of out of my league and I ended up spending way too much time twiddling numbers trying to get something I was happy with. That said, I'm pretty happy with how it turned out.
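The core idea can be sketched with a simple smoothed log-ratio: weight each word by how over-represented it is in the sample relative to the base corpus. This is one simple scheme, and the demo's actual "Bayesian magic" may well differ:

```python
import math
from collections import Counter

# Weight words by how over-represented they are in the sample text
# relative to a base corpus, with add-one smoothing so unseen words
# don't blow up the ratio. (Illustrative; not the demo's exact maths.)

def relative_weights(sample_words, base_words):
    sample, base = Counter(sample_words), Counter(base_words)
    n_s, n_b = sum(sample.values()), sum(base.values())
    return {
        word: math.log(((count + 1) / (n_s + 1)) /
                       ((base[word] + 1) / (n_b + 1)))
        for word, count in sample.items()
    }

weights = relative_weights(
    "the stack the language the stack".split(),
    "the a the of the".split())
# "the" is frequent in both corpora, so "stack" outweighs it:
print(weights["stack"] > weights["the"])  # True
```

A stoplist just zeroes out a fixed set of words; this instead discounts every word by exactly how common it already is, which is what makes it a less blunt tool.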

Time: 12 hours.

I suppose it shouldn't be too surprising that I didn't get any others done given the amount of time that I sunk into this one. However, I also really underestimated how much time the Christmas period would take up, so I think I could have still achieved the 3 if I'd planned things a bit better. I'm going to commit to 3 more for next week, and keep trying to get the time-per-prototype under control.


This failure was brought to you by my massive prototype blowout. I wanted to have my Monday post reflect the prototypes I'd done the week before, but I was still partway through doing them and so I put the post off until I'd finished. However, the prototypes took even longer than I thought and I ended up getting neither the post nor the prototypes done by the deadline.

If that seems familiar, it's because it's classic dependency hell, which I wrote about earlier. I think I could have easily seen this one coming, but I was perhaps a bit too overwhelmed by Christmas activities and my relatively new prototype workload. For next time I'll just make sure to keep the two commitments independent.

That should solve things for this particular project, but the problem of putting off writing about something until I've done it seems to come up with new projects fairly frequently. I'm also going to spend some time thinking about how to avoid that in the general case.

No news is bad news

I wrote yesterday about an issue I've had a few times, and I thought I'd expand on it a little more today. Often while I'm in the middle of an uncertain situation, I also need to provide information about that situation. Sometimes that's because, like recently, I'm writing about what I'm doing. But it also comes up when I'm working with other people, and even in minor ways like when a friend asks whether I can hang out but I'm in the middle of something. In those situations, my instinct is to wait until I have a good answer before I respond, which often means I respond late or not at all.

I think there are a few angles into this one. The first is the value of certainty. Because I don't always know whether the thing I'm doing is going well or badly, I also don't know what kind of answer to give. That information has some value and I want to maximise it. If I'm optimistic, I also expect to be able to give a more positive answer later. So "I don't know" right now loses to "it's going well" later, which itself loses to "good news, it's done!" much later than that. Of course, that's assuming the situation is getting better over time and that the value of that certainty beats the cost of the delay. Both of those assumptions are pretty unreliable.

The second angle is the implies both ways fallacy. Obviously, if things are going badly, I will need to report that they are going badly. It is very easy, then, to put off reporting that things are going badly in the hope that it will prevent that from being the case. It would be nice if implication worked that way, but unfortunately it does not. I think this is especially pernicious in the face of uncertainty; when you're definitely going to miss the deadline, you're not going to convince yourself you can change that by keeping quiet. But if there's still a chance...

The last angle, and I think the one that probably trumps the other two, is just plain dependency hell. When I make one task dependent on other tasks, the situation becomes much more complicated and brittle, and that's no less true when one of the tasks is communication. If I'm trying to manage the communication about a task as a dependency of that task, it requires a lot more thinking and makes both more difficult. And the timelines around communication are often a lot tighter than those around work, so making the tight timeline a dependency of the loose timeline is particularly silly.

So it's pretty clear that there are a few issues feeding into this one. Fortunately, that means I have a lot of ideas for solutions. The first is to make sure I just keep the two goals separate in my own thinking. One goal is to do the thing, and another is to communicate about the thing. The strategies I mentioned in dependency hell should help. I don't think trading off the value of certainty is necessarily wrong, but I need to be careful to balance it against the value of time. In many cases, people are happy to trade timely communication for more accurate communication.

And that brings me to perhaps the main observation: this is sounding very similar to the philosophy I wrote about in feedback loops, done, and continuous everywhere. It's usually preferable to have more, smaller communications than one large communication, for the same reason it's preferable to have more and smaller tasks. It's actually a bit surprising that I didn't think to apply these ideas to communication, given how much time I spent thinking about them with respect to work.

I'm hopeful this bit of reflection will help fix that, by giving me a bit of time to reinforce the association. Relatively speaking, I spend a lot less time thinking about how to communicate than how to work, but it's still important to get right.


I remember when I was younger, during one particularly unproductive period, thinking that I really just needed to find something I wanted to do badly enough. My problem was that what I was doing didn't align with my values well enough, and that was what led to being unproductive. I was convinced that all I needed was to find something truly motivating to focus on and then everything would fall into place.

Unfortunately, that wasn't the case, and later on when I was first able to just work on what I wanted for a little while, I was still very unproductive. That was a bit of a shock, and eventually I realised that my issue wasn't with my goals, it was with my work habits. I needed the skills and the discipline to work effectively, and without that it didn't matter what my goals were. So since then I have spent a lot of time trying to build good creative working habits.

But does that mean that goals don't matter? I don't believe so. In fact, there's an equivalent failure mode, although it's fortunately not really been my issue, where your working habits are very good, but you are doing things that don't actually meet your goals. The end result is still not getting what you want, but it's easier not to notice, because you are being very effective at doing the thing you don't want.

I previously wrote about intention, and the importance of connecting goals to actions, but I think there's an even more general point here. Not just goals and actions, but everything should point in the direction you want to go. It might seem like you only need to really want something, or that having good habits is sufficient, or creating a good work environment, social pressure, financial incentives, or even a higher power. But why not all of them? Every single thing you can muster, all aligned in the same direction like the electrons in a magnet.

I think sometimes it's tempting to dismiss low-level solutions as easy or cheap, because they shouldn't be necessary if you have the higher-level stuff sorted out. Little tricks like leaving your alarm on the other side of the room, forcing yourself to put your running shoes on before deciding whether to go for a run, or disconnecting your computer from the internet for a while. What about glib little rhymes and sayings? No screens in bed, have a good sleep instead! That might be dumb, but how dumb can it be if it works?

The point is to build yourself a kind of fractal fortress. At the very highest level, the direction of your life is going the right way. You zoom in and your goals are pointing you in that direction. You zoom in and your plans are designed to achieve your goals. No matter how far you zoom, you still see the same shape, all the way down to the silly tricks you use to get yourself through the day.

Thinking makes it so

At least fairly often, I've had occasion to be tired, busy, overworked, overwhelmed, angry, sad, and uncertain, though blessedly not all at once. One peculiar thing is that all of these feelings have also meant different things to me at different times. That is, there are times I have been tired and thought "this is the absolute worst, I want to go to bed and I can't, and that just makes me so utterly miserable I can't bear it". However, other times I have thought "yep, I'm tired alright", and really not been bothered at all.

I've noticed a very similar thing with learning new stuff, something I'm quite fond of. You usually run into an early wall where the number of things you don't know is growing quicker than the number of things you do. Each thing you learn leads to ten more things you haven't learned, and each of those leads to ten more. This situation feels so utterly overwhelming that it's hard to imagine ever getting through it and reaching any kind of understanding. I have distinct memories of that feeling, something like a hot crushing sensation coming at my brain from all directions.

But lately, perhaps just from experience, I've started to accept that feeling as being part of the process of learning anything new. The overwhelmed sensation doesn't go away, exactly, it's more like it stops being such a dominant and negative part of the experience. Instead of thinking that you're overwhelmed because the material is inherently too complex to understand, or you're inherently too stupid to understand it, you can just think "yes, this is what learning a lot of new things at once feels like", and not be particularly bothered by it.

To be clear, I'm not trying to advocate the idea that bad things aren't bad. I would take not-overwhelmed any day if it was available, and similarly I'd go for awake over tired, happy over sad, and leisurely over busy or overworked. Those things are, to my mind, objectively better. However, I think there are two layers to any situation: there's the thing that's happening, and then your reaction to it. A bad thing that's happened can't be helped, but a bad reaction can.

Imagine you're in a dinghy slowly filling with water, and while you're trying to figure out how to bail it out, everyone around you is screaming because the boat is sinking. Nobody would argue that the dinghy sinking isn't a bad thing. It's definitely bad. But although the wailing and gnashing of teeth might most accurately reflect that badness, the reaction of quietly acknowledging the situation and setting about making the best of it is, perhaps, more helpful.

There are still a lot of good reasons to avoid getting into bad situations in the first place, but they have a way of popping up anyway. When that happens, I think the best thing you can do is just accept the situation as exactly as bad as it is, and avoid the temptation to make it any worse.

A deal with the universe

I remember this really funny situation that came up once in a library. You could book these nice study booths that were relatively secluded and good for getting work done, and I tried to make good use of them whenever I could. One time I went in and there was a small group already in my booth. "Hey, sorry", I said, "I have a booking for this booth". One of them replied, "we're already set up here, would you mind just taking that other free booth?" I very nearly said yes, until I realised the trap I was about to fall into.

Here's what would have happened next: I sit down in the nearby free booth, the group in my original booth save the effort of moving, everyone wins. Until fifteen minutes later, when someone else comes up to me. "Excuse me", they say, "I have a booking for this booth". When I explain that I am all set up at this booth and ask if they'd mind moving to a nearby free booth, they say "no, sorry, this is the booth I booked". What? This isn't the deal! I moved for those other people. I didn't have to, but I was being nice! And this is the thanks I get from this jerk who won't even reciprocate.

I call this situation a deal with the universe, and it comes up a lot. You decide that, because you've done some good thing or acted charitably, the universe now owes you. You and it have a deal. It's going to take care of you because you're a good person. You spent your weeknights helping homeless puppies with cancer, but during the weekend your date cancels on you and then you get sick. What the hell, universe? I give you all that and this is what I get in return?

Two things on that: firstly, I hope it's obvious that the universe can't actually make deals, because it's mostly empty space with some rocks and gas, and thus lacks the sentience necessary to engage in trade. The second thing is that the actual parties to these deals are sentient, but don't necessarily have any relationship to each other. This means your date, who gets an earful of what a good puppy-saving superhero you are, or the jerk from the library who just wanted to use the booth they booked, are unwitting parties to a bad deal that was made without them.

That's not to say it's wrong to do nice things, but as soon as you start thinking those nice things are a trade you walk into real danger. Even doing something nice for a person and expecting something in return is a recipe for uncomfortable situations and resentment, but doing that when the person you give to and the person you expect from are totally different? There's no way that's going to work. The universe doesn't make deals, and people rarely see themselves as manifestations of some cosmic barter system.

The library situation, by the way? I told them they'd need to move. They grumbled a bit like I was being unreasonable and moved to the free booth. Fifteen minutes later someone else came along and kicked them out of that one too. Thanks, universe!


Which is more important: the experience of something, or the memory of it? Let's say you have the opportunity to either go on the amazing holiday of a lifetime, feel the most supreme pleasure and bliss of your life, and then remember nothing about it afterwards. Alternatively, you could be given the memory of that amazing holiday, but never actually experience it. Presumably the person offering you these options doesn't like you very much. All the same, what would you choose?

I find myself tending towards memory over experience, but I'm not actually convinced that makes any sense. In the extremes, if memory counts and experience doesn't, torturing amnesiacs would be perfectly fine. They won't remember it anyway! Or, more mildly, your life becomes less valuable as your ability to retain memories decreases. Why bother to be nice to forgetful people? For that matter, why be nice to anyone? We all forget in the end.

There's an interesting phenomenon called the peak–end rule, studied by none other than Daniel Kahneman. According to that theory, we remember experiences not in their totality (eg, as the sum of all the individual moments in the experience), but by representative samples: the most intense moment (the peak) and the final moment (the end). That means, for example, that people will prefer a longer surgery that tapers off into milder pain at the end over a shorter one that stops at peak pain, even though the total pain is much less in the short surgery.
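As a toy model (the exact numbers and the simple peak–end average are my own stand-ins, not Kahneman's data), you can contrast total pain with a peak–end estimate of remembered pain:

```python
def total_pain(samples):
    """Experienced pain: the sum of every individual moment."""
    return sum(samples)

def remembered_pain(samples):
    """Peak-end estimate: the average of the worst moment and the last one."""
    return (max(samples) + samples[-1]) / 2

# A short surgery that stops at peak pain, and a longer one with the same
# start that tapers off before it ends (pain sampled per minute, 0-10).
short_surgery = [4, 7, 9]
long_surgery = [4, 7, 9, 6, 3, 1]

print(total_pain(short_surgery), total_pain(long_surgery))            # 20 vs 30
print(remembered_pain(short_surgery), remembered_pain(long_surgery))  # 9.0 vs 5.0
```

The longer surgery involves half again as much total pain, but the peak–end estimate says it will be remembered as far milder.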

So this is another angle into the same question: given that knowledge, do you use a surgical technique that causes more total pain but less remembered pain, or vice versa? Especially since people will use their memory to make decisions, so they themselves would choose more pain over less in this instance. And if you go with memory over experience, would your answer be any different if we were talking about an immensely painful surgery with a quick dose of sedatives at the end to wipe out the memory of it?

Perhaps complicating the whole thing is that memory influences experience. A memory isn't just an abstract record; you re-experience the memory as you recall it. So a happy memory makes you smile, an awkward memory makes you cringe, and a painful memory hurts, even though nothing has actually happened to cause that reaction. To really isolate memory from experience would require imagining an alternate human who can do those things separately. Someone who remembers without re-experience.

However, I think even that person has to make decisions based on memory. They would thus choose to repeat the holiday they remembered fondly but enjoyed less at the time, and the surgery that was more painful in total but formed less painful memories. That answer still seems unsatisfying to me, though; isn't this memory distortion spoiling the person's decisions? I mean, if you otherwise tricked someone into thinking a painful thing was less painful they'd go for that option, but it wouldn't be a good decision. On the other hand, that distortion is in fact how we remember, so it's not much good pretending that we don't.

So let's make one more change to our hypothetical: if a person who could remember without re-experience was also making their decisions in advance, with complete knowledge of how the experience would feel at the time, but also aware that their memories would not agree with that experience, how would they decide?

I think, in that case, the only sensible choice is to go with the best experience. Although the memories might not reflect it, those memories would have no emotional consequences. You would want to make the decisions that lead you to the best aggregate experiences regardless of what your faulty memory would tell you about them afterwards. However, this result is not terribly useful because there aren't any people who actually work like that.

But adding re-experience on top of this result isn't so hard: you still want to aim for the best aggregate experience, but you also need to take into account the impact of memory on your future experiences. This means you can't completely disregard your memories when they diverge from reality, but you also don't attempt to optimise for them directly. In most cases, your memories will affect your immediate emotional state less than the present situation. And if you assume that generally holds true over time, it also makes sense to generally choose experience over memory.

Prototype Wrapup #3

Well, things are improving somewhat on the prototype front since last time. I committed to 3 and I got 3 done. That's technically a 200% increase. Victory!



This one came from a place of deep personal frustration. I'm always running lots of little webservers on my computer for development, and since each one needs a unique port number I end up having all sorts of ridiculousness: 3000, 3001, 4000, 8000, 8080, 8081. I forget which ones are which and sometimes end up trying to run things on the same port. In production this is exactly the kind of thing HTTP solves with virtual hosts, so I decided I'd make a local HTTP router to do the same thing. It ended up taking a bit more time than I thought because there was a lot of ridiculous DNS nonsense, but all told it was pretty much under control, and the end result has already saved me a lot of annoyance.

Time: 5 hours.
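The post doesn't include the router's code, but the core idea, routing on the HTTP Host header the way production virtual hosts do, can be sketched with the Python standard library (the hostnames and ports below are made up for illustration):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Map memorable local hostnames to whichever port each dev server runs on.
ROUTES = {
    "blog.localhost": 3000,
    "api.localhost": 8080,
}

def resolve(host_header):
    """Strip any :port suffix and look up the backend for this hostname."""
    return ROUTES.get(host_header.split(":")[0])

class Router(BaseHTTPRequestHandler):
    def do_GET(self):
        port = resolve(self.headers.get("Host", ""))
        if port is None:
            self.send_error(502, "No route for this host")
            return
        # Forward the request to the matching backend and relay the response.
        with urlopen(f"http://127.0.0.1:{port}{self.path}") as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

# To run it: HTTPServer(("127.0.0.1", 8000), Router).serve_forever()
```

Browsers generally resolve `*.localhost` names to loopback, which sidesteps some, though evidently not all, of the DNS nonsense.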


source demo

I've had an idea for a while about generating more memorable passphrases by making the random words fit into a semi-meaningful sentence. Think "the {noun} {verb}s at {time}" kind of thing. In theory it would be possible to build even quite long scenes by sequencing actions and so on. Anyway, in trying to find decent word lists I ran headlong into the total devastation that is semantic web/natural language processing. It was all I could do to just get something that vaguely worked out the door, and I'm still not really that happy with it. It works okay as a proof of concept though.

Time: 10 hours.
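The template idea itself is simple to sketch. The tiny word lists below are stand-ins for the decent ones the prototype struggled to find, and at this size they give nowhere near enough entropy for a real passphrase:

```python
import secrets

WORDS = {
    "noun": ["badger", "teapot", "glacier", "trombone"],
    "verb": ["wobble", "evaporate", "gallop", "hum"],
    "time": ["dawn", "noon", "midnight", "teatime"],
}

def passphrase(template="the {noun} {verb}s at {time}"):
    """Fill each slot in the template with a uniformly random word."""
    return template.format(**{k: secrets.choice(v) for k, v in WORDS.items()})

# Each slot contributes log2(len(word_list)) bits, so real lists need to be
# thousands of words long, not four.
print(passphrase())  # eg "the glacier hums at dawn"
```

Sequencing several templates like this into a scene would just be concatenating more slots, at the cost of even bigger word-list problems.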


post source demo

This is a bot to manage contributors and pull requests on GitHub. I wrote about it ages ago and I'd even done some really early exploration with GitHub's API, but never actually pulled the trigger to make something that worked. Well, now I have and it seems pretty good. It only supports a couple of rules so far, but I built in the ones I wanted and it's already self-hosting (ie automaintainer is managing the automaintainer repo). If all goes well I'll be able to start using it for my other projects as well.

Time: 6 hours.

So I really pulled out some stops and stayed up to get this bunch over the line, mostly because I was sick of failing at my commitment. I can see, though, that the prototypes are still too complicated to be a feasible everyday activity. In my first week I noted that I would either need to accept only being able to manage a few project-sized prototypes, or figure out how to scale down.

I want to push a bit harder on that latter angle, so for next week I'm committing to do 3 prototypes in under 3 hours each. If I do something that takes more than 3 hours, it's a project and it doesn't count. That's harsher than I'd normally be, but I think I need to learn how to do less if this is going to work.


In many cases you can guarantee that there will be a solution to a given problem, as long as it falls within certain bounds. For example, the problem of "my computer should do this thing" for most values of "thing" is guaranteed to have a solution. Indeed, perhaps the biggest frustration in software development is when someone tells you "look, I know that a solution to this problem would look roughly like this, so why can't you just do that?" It's hard to explain that the challenge of software isn't finding any solution, it's finding a good solution within an infinite space of bad ones.

I like to make analogies between software and physical engineering a lot, because people have better intuitions about physical problems and tools. One hard problem in architectural engineering is making a really tall building. But that's not hard at all! Anyone can make a tall building given enough resources. Here's one solution: take a lot of material and keep putting bits of it on top of other bits. Every time the resulting structure is unstable, add more material to the sides. Bonus: the material will add itself to the sides if you make it too tall.

It turns out what you really want from your skyscraper is a bit more complicated than just making it tall. You want one that is not too wide because you have limited land area to work with. You also have limited materials and time to work with. Most importantly, you want a building that doesn't cost too much. Many problems are trivial to solve if you can just commit infinite resources to them, but practicality dictates that you have to work with less. What you want is the solution that gives you the most of what you want for the least resources. Which is to say, the best solution is the most efficient solution.

In software, efficiency is mostly used to talk about resources like processor time, storage space, and, more recently, energy usage. Those are the resources the software consumes when it's running. However, when creating software, it makes sense to think about the resources that are used for its creation. Those resources are developer time, developer working memory, and amount of code.

Developer time is probably the most well-understood and agreed-upon resource in software development. If you write one function, it will take an hour; if you write two functions, it will take two hours, that kind of thing. For a long time, it was thought that you could draw a linear relationship between the size of the problem and the number of developer hours. Unfortunately, you can't, for reasons mostly related to the next two resources.

Developer working memory is less well-appreciated, but it is a significant mechanism behind the non-linear slowdown in building larger systems. Once a system goes above a certain size, you can no longer fit the entire thing in your mind at once. To understand it, you need to break it down into subsystems and only think about individual subsystems at a time. This adds a switching cost between subsystems and a new source of errors and design problems, as nobody is capable of reasoning about the entire system and its subsystems at the same time.

But both of these resources are dwarfed by the last resource, which is amount of code. If you can only concentrate on one resource, it should be this one. It is the equivalent of the amount of material in the skyscraper. It's not just that more is worse, it's that too much fundamentally changes the kind of building you can make. A skyscraper, by its nature, has to be built out of a structure that has a high strength but a low weight, because that structure has to support the whole rest of the building as well as itself. In other words, a skyscraper is only possible if you use material efficiently.

Similarly, certain kinds of software are only possible if you use code efficiently. A tangled mess of spaghetti code won't get in the way too much when your scope is small and your requirements are modest, the same way you can build a small house out of sticks and mud. However, as the amount of complexity your system needs to support scales up, the efficiency of your code becomes more important. It's vital that your code is able to support the weight of the problem's complexity while introducing as little complexity itself as possible.

Now, I say that, but obviously you can just naively build a really big codebase the way you might naively build a really big building: just add stuff on top of other stuff until it gets big enough. Assuming you have an unlimited number of developers and an unlimited amount of time, this is a perfectly reasonable way to go about things.

But, assuming you don't, too much code is a problem you can't afford to have.

Scale down

I'd like to build on something I mentioned briefly in yesterday's post about efficiency: inefficiency doesn't hurt you as much in smaller systems as larger ones. While that is true, there's also a kind of reverse effect: we tend not to notice small inefficiencies in large systems. So, for example, the code to set up the environment before everything else runs might be way too complicated, but who cares? It's one tiny part of the code and you only have to write it once anyway. We need to worry about the big complexity that affects us every day.

While that is true, and it is absolutely more important to tackle large systematic inefficiencies, even small inefficiencies have a way of becoming quite substantial. The cost they impose can keep growing, but because you only pay it once per system, and it's still proportionally much smaller than everything else, it never gets dealt with. This might not be a big deal when you build one big system, but it really hurts when you're building lots of small ones.

And that's a significant problem, because most software is too big. It does way more than it should and ends up overcomplicated. But it's very difficult to build small software if the way you do it is inefficient at a small scale. If you have to load up with a whole bunch of complex abstractions every time you start a new project, it doesn't make sense to bother unless the project is going to be big enough to justify it. This inefficiency-in-the-small changes your software the same way that inefficiency-in-the-large does: it rules out certain kinds of problems you could otherwise solve.

When we wonder why more people don't write code, I don't think the answer has that much to do with inability to handle syntax or state. I think it's that most people have small problems; they don't want to invent a web browser, they want to organise their bookshelf, or figure out which plants need watering. All our software development systems are too big for that. They come to us for home woodworking advice and we hand them industrial milling equipment. To solve their problems, we need tools that scale down.

For what it's worth, even with all the modern software in the world I still reach for my trusty set of minimal Unix tools when I have a small problem. They're 40 years old and still the best-scaling tools I know.


If there's one thing I love as a software developer, it's a good abstraction. It takes a large, complex set of things and turns them into a smaller, simpler set. Maybe you have thousands of different colours that are hard to reason about until you realise you can represent them all as mixtures of red, green and blue. Or you have all these different chemical elements but they all have properties seemingly at random, until you realise you can lay them out periodically by atomic number and the properties line up.

Except for hydrogen, which kind of doesn't behave properly. And helium. And there are some ambiguities with the transition metals. It's not even clear that there is any fundamental physical basis for the current layout over other options. And, come to mention it, RGB actually misses out on a significant number of colours and is generally a bad fit for human vision.

I've heard abstractions that don't completely encompass the things they're meant to represent described as leaky, with the understanding that all abstractions leak. To me, that is perhaps a bit of an abstraction-centric view. I like to think of it in terms of information theory: there is some fundamental amount of information that you are trying to compress down into a smaller amount of information. The extent to which you can do that depends on how much structure is in the underlying information, and how much you know about that structure.

If I give you a piece of paper with a list of a million numbers written on it that look like 2, 4, 6, 8, and so on, I have provided you with no more information than if the paper said "even numbers up to 2 million". The abstraction, in that case, was really just a more efficient way of representing the information. On the other hand, if I gave you that same piece of paper and it was mostly the even numbers up to 2 million but some numbers were different, you've got a hard choice to make. Either you keep track of the (potentially large number of) exceptions, or you just remember "it's even numbers up to 2 million" and be wrong some of the time.
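That choice can be made concrete in code. A hypothetical sketch: store the rule plus its exceptions, and the abstraction stays lossless exactly as long as you keep the exception list around.

```python
def evens(limit):
    """The rule: even numbers up to the limit."""
    return set(range(2, limit + 1, 2))

def compress(numbers, limit):
    """Represent the data as its differences from the rule."""
    data = set(numbers)
    rule = evens(limit)
    return {"missing": rule - data, "extra": data - rule}

def decompress(diff, limit):
    """Perfect reconstruction, but only if you kept the exceptions."""
    return (evens(limit) - diff["missing"]) | diff["extra"]

# Mostly the even numbers up to 2 million, with a couple of exceptions.
numbers = evens(2_000_000)
numbers.discard(100)
numbers.add(7)

diff = compress(numbers, 2_000_000)
assert decompress(diff, 2_000_000) == numbers
# Throw diff away and answer from the rule alone, and you are wrong
# exactly twice: about 100 and about 7.
```

The size of `diff` is the price of keeping the abstraction honest; deciding you can't afford it is what makes the compression lossy.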

It's this kind of lossy compression that represents most abstractions in the real world, for the simple reason that most real-world problems are too complex for the amount of space we're willing to give them. You can prove all sorts of interesting results about probabilities of coin flips, but you have to ignore the possibility that the coin will land on its edge. These simplifications throw away information in the hope that you can compress your understanding much more at the cost of only occasional errors.

So I don't believe that all abstractions leak. I think we often choose to make our abstractions imperfect to save space, or because we don't know enough of the underlying structure to describe it succinctly. However, it is possible to make a perfect abstraction, we just don't think of them as abstractions. An abstraction that completely describes the underlying information is just the truth.

Two-phase burnout

You hear a lot about burnout, the phenomenon where chronic overwork, stress or resentment leads to increasing unhappiness and an eventual breakdown. It's happened to friends, it's happened to me, and by all accounts it's a fairly widespread problem both in the software industry and elsewhere. But I think what we call burnout isn't a single thing, but actually two distinct phases, one acute and one chronic, and part of what makes burnout such a tricky problem is that those phases have opposite solutions.

The first, acute phase is a wildly aversive reaction to your current environment. This is the point where the building unhappiness with your situation finally gets too much. You were probably getting less and less effective as your happiness decreased anyway, but at some point you just can't deal with it anymore. It goes from being a bad situation to an intolerable situation, which usually manifests as quitting, avoiding, or aggressively underperforming at your work. In this case, I would say the burnout is a perfectly reasonable response: you're in a bad place, you had the warning signs telling you to get out of the bad place, you didn't do anything, and now your hand is being forced.

The second, chronic phase comes afterwards, when the original problem has gone away. Now there's nothing directly stopping you from being productive, but you can't bring yourself to go back to doing anything useful. It's like there's an invisible wall between you and the thing you used to enjoy, and every time you go to do it just about anything else seems like a better idea. This stage is self-perpetuating: the more you don't do, the more you get used to not doing. I believe this phase is really characterised by a bad association that has formed, at first because of the emotional strength of the first-phase burnout, and eventually through habit.

And this is why I think the advice about burnout can be very confusing. If you're in that first phase, the common advice of "take a break, go on holiday, get as far away from the situation as possible" is absolutely correct. The immediate acute problem won't go away until the reason for it is removed. However, in the second phase, the most important thing is to not take a break, but to pick yourself up and get back to doing something. That's the only way to fix the bad association that you've built with your work.

Of course, it's not possible to remove an association, so what you have to do instead is build a new, better one. As with many other things, Feynman had this one right: rather than go back to what you were doing before, you need to rediscover the positive feelings that brought you to your work in the first place.

More of a bad thing

There's a classic situation that plays out in tourist hotspots across the world. An English-speaker is trying to talk to someone who doesn't speak English. "How much is that?", they ask. The non-English-speaker looks confused; they don't speak English. "How", the English-speaker says, "Much. Is. That?" Still nothing. "HOW", wild gesticulating. "MUCH", spittle flying. "IS", face turning red. "THAT", indiscriminate shouting. Tragically, despite the continual increase in volume and emotion, comprehension is not achieved. What happened here?

I think of this as an example of abstract and construct gone awry. In many cases, speaking louder does help if someone hasn't understood you: if the room is loud and you're speaking too quietly, if you have a propensity to mumble, or if your conversational partner isn't paying much attention. However, the relationship between volume and understanding is not so simple; we abstracted (more volume -> more comprehension), and the resulting construction (maximum volume at unsuspecting foreigners) is the caricaturish result.

But this particular sub-pattern applies to a lot of situations. For example, it's common to punish dogs far too harshly because of a lack of understanding of dog psychology; all a dog usually needs is a minor punishment delivered in the correct way at the correct time. Instead, we try the wrong minor punishment, and when it doesn't work decide that the punishment must be too small. It is true that too small a punishment is not effective, but somehow that small correlation dominates any other understanding. Instead of trying to figure out a better way to punish, we just find a harsher way to punish.

You see similar things in particularly heartless suggestions for social policy. There's less incentive to be poor if being poor is more miserable, so we should make being poor as miserable as possible! Illegal immigrants won't want to come here if we treat them terribly when they arrive! If every crime got the death penalty, there'd be no crime! You can see in all of these a small kernel of truth, that in many circumstances better or worse treatment does incentivise behaviour. There is such a thing as being too soft on crime, too laissez-faire on immigration, too willing to shield people from the consequences of bad decisions.

But that simple understanding completely ignores the fundamental mechanics of the situation. You might equally say "if we shoot everyone who can't levitate, everyone will learn how". It's true that impending death would motivate people to try to levitate, but not true that levitation would be the final result. Human mechanics also come into play; you aren't likely to get good results saying "I'll shoot anyone who doesn't live a carefree, low-stress lifestyle". The weekly stress inspections/executions may turn out to have the opposite effect, even though the incentives are all in the right direction.

So why do we do this? Why, when something doesn't work, do we ignore the possibility that we don't understand it well enough and instead just do the wrong thing harder? I think the answer is that we like things to be easy. We have a strong bias for simple answers, which at its worst means superficial answers. If we have a simple answer that looks right some of the time, we will hold on to that answer as long as we can, far past the point where it stops predicting reality. And the simplest answer of all is just a linear correlation.

But if we can abandon that clear, simple and wrong solution, maybe we can find a more complex solution based on deep understanding. And if we gain that deep understanding, maybe we can stop shouting so much.

Optimisation order

I mentioned before that for many problems you can be sure that you'll find a solution, just not how good that solution will be. You have to find the best solution you can by making tradeoffs between the different choices. These are usually called optimisation problems, and there's a lot of research into various classes of optimisation problem and ways to solve them.

I've also said that I think our brains are optimisation machines, sometimes even to their own detriment. For better or worse we seem to do much better on optimisation problems than on, for example, formal reasoning, where we are comparative dunces. I suspect this is because optimisation problems likely fit our evolved capabilities better than formal logic does. But even though we can often find a good solution to difficult problems with lots of constraints, not all good solutions are equal.

The stable marriage problem is a fairly simple but widely applicable optimisation problem: it covers any situation where two groups want to match up in pairs, and each member has a list of preferences over the other group. Dating is the classic example, but it also covers matching medical students to hospitals, job searching, and network management. The canonical algorithm, called Gale–Shapley, is simply that each member of one group asks their first available preference, and each member of the other group accepts an offer if it's the best one they've had so far (bumping a previous offer if necessary). You do this over and over again until everyone is matched.

Gale–Shapley is guaranteed to find a solution such that nobody wants to change; ie, anyone you would rather be with is already with someone they'd prefer more. However, there are often multiple such solutions, and in that case some solutions will result in better outcomes for some people, even though on the whole everyone gets a good enough solution. In fact, Gale–Shapley does guarantee that one group will have the best possible outcome: the group doing the asking. The group that accepts or rejects the offers appears to be in a position of power, but actually receives the worst outcome that is still good enough.
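The algorithm is short enough to sketch in full. This is my own minimal rendering of Gale–Shapley, with made-up preferences chosen to show the asymmetry: run it one way and the proposers all get their first choice; swap the groups and the matching flips.

```python
def gale_shapley(proposer_prefs, responder_prefs):
    """Match proposers to responders: proposers ask down their preference
    lists; responders hold the best offer seen so far and bump worse ones."""
    # rank[r][p]: how highly responder r ranks proposer p (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in responder_prefs.items()}
    next_ask = {p: 0 for p in proposer_prefs}  # next preference index to try
    engaged = {}                               # responder -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_ask[p]]
        next_ask[p] += 1
        if r not in engaged:
            engaged[r] = p                     # first offer: accept for now
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])            # better offer: bump the old one
            engaged[r] = p
        else:
            free.append(p)                     # rejected: ask the next choice
    return {p: r for r, p in engaged.items()}

students = {"ann": ["city", "rural"], "bob": ["rural", "city"]}
hospitals = {"city": ["bob", "ann"], "rural": ["ann", "bob"]}

print(gale_shapley(students, hospitals))  # students propose, each gets their first choice
print(gale_shapley(hospitals, students))  # hospitals propose, the matching flips
```

With students proposing, ann gets city and bob gets rural, their first choices; with hospitals proposing, the very same preferences produce the opposite matching, with each hospital getting its first choice instead.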

That in its own right is a fairly significant result, given the similarity of Gale–Shapley to many real-world preferential matching algorithms. A job-hunting system where you ask your preferred employers in order for a job will get you the best result. A system where employers ask a pool of candidates in order will get the employers the best result. In dating, too, you want to be the one doing the asking. Being asked perhaps feels more flattering and less risky, but means a less optimal outcome.

It's also worth considering this pattern beyond the stable marriage problem. In every optimisation problem I am familiar with, the order matters. The parameters that go in first are most likely to get the best results, and the ones that go in last get the worst. This is for the simple reason that the earlier parameters can use more information. In the case of the SMP, they choose among all the candidates. Everything that goes afterwards starts by fitting around what's there already. In the event of multiple solutions, this biases the outcome.

The simple advice I would take away from this is to make sure that your optimisation order matches your preferences. If you want to have a good work life, a good home life, and a healthy social life, what order do those go in? Because while you might be able to have all of those things, you are unlikely to have all of those things equally. What you work on first will get the best outcome, and what you work on last will get the okayest outcome.

Prototype Wrapup #4

A bit of a backslide since last week. I committed to 3 prototypes of at most 3 hours each. Unfortunately, I only did one:

Docs in a box

source demo

This is an attempt at an instant docs idea I've wanted to try out for a while. I mostly Google for reference documentation because it ends up being the fastest way to find it. What I'd rather have is a live-search that displays just the minimal documentation I need as quickly as possible. Part of that was figuring out a good way to deal with all that documentation data. I ended up leaning pretty heavily on PouchDB and CouchDB, and I'm fairly satisfied that they were the right tools for that job.

Time: 3 hours.

Although I did fewer prototypes than I wanted to, I also think I did a better job of keeping that prototype more modest than my previous ones. I resisted the urge to make the scope wider than it needed to be, and I aggressively cut tempting time sinks like supporting multiple documentation sources (though the design should make it easy to add that later). I want these prototypes to be small enough to explore one idea in code, the same way the writing I do here is small enough to explore one idea in words.

I obviously haven't figured that out well enough yet, and I still find myself leaving the prototypes until too late in the week despite the decrease in size. I'm hopeful this is a temporary blip mostly caused by the extra effort I put in to get the prototypes done last week and the subsequent change in focus. I'm going to just commit to the same thing again for next week, and if I'm still having trouble then I'll look into a more sophisticated strategy.

The manager's coup

Oversight is an important part of any system. Your original system is designed to achieve some particular outcome or perform some action, but it's not enough to merely trust the design. Firstly, there may be flaws in the design that only show up later on, and secondly, the needs of the system can change over time. It can't be the job of the system to evaluate and correct itself, because that would lead to an over-complicated system with a fairly substantial conflict of interest. Instead, you normally build a second system with oversight over the first one.

This pattern appears in software fairly frequently, where you will run some base service that is expected to work all the time, and then a second monitoring service to make sure. You don't want to build monitoring into the base service, because that would significantly increase its complexity. And, more importantly, if the service isn't working properly, chances are it won't be able to monitor itself properly either. This is the software equivalent of a conflict of interest. You always build a separate monitoring system.
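As a sketch of the shape of this pattern (the `/health` endpoint and the print-based alert are stand-ins, not any particular monitoring tool):

```python
import time
from urllib.error import URLError
from urllib.request import urlopen

def healthy(url, timeout=2):
    """Ask the base service whether it's alive. Any failure counts as down:
    a broken service can't be trusted to report on itself."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

def monitor(url="http://127.0.0.1:8000/health", interval=30):
    """The separate oversight process: nothing but a loop and an alarm."""
    while True:
        if not healthy(url):
            print(f"ALERT: {url} is down")  # stand-in for paging someone
        time.sleep(interval)
```

The point is what isn't here: the monitor shares no code or process with the service it watches, so the service failing can't take the monitoring down with it.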

In human systems, the same pattern appears in governance and management. You don't get employees to monitor their own performance, because then they would have to keep up with a lot of management-level knowledge and skills that would make their job a lot harder. Additionally, a worker who is performing badly might also be bad at evaluating their performance, or hide problems out of self-interest. So too in politics, where politicians are given oversight over the operation of society. One system to do things, another to oversee the first.

But, an important question: which system is more important? If you think the answer is management, or they're equally important, I'd encourage you to consider the utility of a monitoring system with nothing to monitor, a government with no society to govern, or a manager with no employees to manage. The oversight system is important, yes, perhaps even the biggest contributor to the success of the system it monitors, but without that underlying system it is completely useless, just dead weight.

However, that's not how things end up working in practice. We consider management to be the most important and powerful part of a company, and politicians the most powerful part of the citizenry. They're not just an important support structure for the system, they are in charge of the system. The original system, the one doing the work, can even start to seem like a minor implementation detail of the management system. After all, when you want to change direction it's the management system you talk to, and the management system that tells you whether the underlying system is working properly. It's so, so easy to think that the management system is the system itself.

I call this the manager's coup, and I think it's essentially historical in origin. The first managers were actually owners, and the first governor was a king. We began with a divine hierarchy starting at the big G and going all the way down to serfs and slaves. That system wasn't very efficient or well-organised, of course, but it flowed neatly from the power structure of the day. Only much later did we start to believe in individual freedom and optimising for efficient delivery of outcomes rather than upholding the universal pecking order.

Even though we no longer believe in that hierarchical social order, we still seem to look to it for inspiration. In some way, we still instinctively feel that oversight is ownership, and that management is power. These ideas perpetuate themselves by mimicry and resistance to change. But there is an essential tension between the manager's coup and the reality as represented by outcomes. Sure, you can believe that the managers are the most important part of your organisation, but you're going to lose to companies that don't. In the end, oversight doesn't pay the bills.

It's harder to imagine systems where the people doing the work are in charge, and management takes a supporting role, but there are examples out there. One good one is the relationship between a writer, actor or musician and their agent. Instead of the agent hiring an actor and telling them what to do, the actor hires an agent to ask them what to do. The agent still exercises oversight, still makes decisions, and good ones are still very well compensated, but they serve the system they're overseeing, not the other way around.

Stateful thinking

One of the hardest things for non-programmers to learn about programming is state. That is, the surrounding context that changes the meaning of what's in front of you. In the expression "x + 1", the meaning of 1 is obvious, but the x is state; you cannot know the value of x + 1 without finding out what x is. If the value of x is defined just before, it's not so bad, but what if x is defined a thousand lines of code away? And changes over time? And is defined in terms of other things that also change over time?
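The "x + 1" problem is easy to show in a few lines. Here's a minimal sketch of my own (not from any particular codebase): the same expression gives two different answers, because `x` is state defined somewhere else.

```python
# The same expression, two different answers, because x is state.
x = 1

def plus_one():
    return x + 1  # the meaning of this line depends on whatever x is right now

print(plus_one())  # 2
x = 41             # imagine this assignment a thousand lines away
print(plus_one())  # 42
```

Reading `plus_one` in isolation tells you almost nothing; you have to simulate every assignment to `x` between here and there.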

It might be more accurate to say that what is hard is simulating a computer. When a programmer reads through a program, they execute it in their own head. The more complex the program, the more difficult this is, but your internal computer also gets more sophisticated as you go along. Instead of reading one symbol at a time, you start to read whole blocks of code at a time, the same way proficient readers will scan whole sentences rather than individual letters or words. However, that only makes the immediate behaviour easier to read. The amount of state you can simulate is still limited by your own working memory, and it's very limited.

Perhaps a good analogy is how your operating system deals with your keyboard. Any time you press a key it gets sent to the current application, the one that is said to have "focus". So which key you pressed is the input, and the focus is the state. The same key in a different application does a totally different thing. Luckily, the focus state is visible; the active application is highlighted so you know where your keys will go. Most programming has invisible state, which is more like using your computer without looking at the screen. In theory you can figure out what will happen with every new key you press, but over time you're going to lose track of what's going on.

It's for this reason that you often try to avoid state in software development. However, it's not possible to avoid it completely. Even if you can use a (comparatively rare) programming language with no state, there are bits of state when your application interacts with the real world. Is it writing data to a disk? Communicating over a network? Operating on a computer with other applications? State state state. It's inescapable. So we must learn to simulate state, something that it would appear does not come at all naturally to us.

Interestingly, people have emotional state that behaves a lot like state in programming. The same events, words or actions can have wildly different consequences depending on someone's emotional state. A lesson we must learn early on is to simulate that state in others so that we don't end up totally surprised by people's crazy actions. Fortunately, emotional state is fairly visible, and our brain is particularly specialised for mirroring the emotional state of those around us. That said, people still manage to make a hash of it fairly frequently. It's interesting to wonder how well we would do without those benefits.

One area where we don't seem to get as much help is with ourselves; our own future states are not terribly visible to us, and we don't seem to have the same optimisations for future states as we do for present ones. The results should be fairly obvious: we are very bad at predicting our stateful behaviour. Not only do we have trouble predicting our future state, we also mis-predict our actions even assuming a given state. No wonder planning for the future is hard!

I think that stateful thinking can be a real advantage here. Once you learn that you can't just naively assume "x + 1" will mean the same thing everywhere, you start paying a lot more attention to the x. But for stateful thinking to be useful you need two things. Firstly, you need to learn how to reason about and simulate state. Secondly, you need to actually accept that you have state, and that your future actions can't be predicted without it.

The testing mindset

I really enjoy reading books by or about scientists, not least of which the inimitable Richard Feynman. I think what is so appealing isn't necessarily the work they do, or any particular discovery or mannerism, but rather a kind of mindset that you don't see much outside of very good scientists: the testing mindset.

What I mean is that there are lots of times when you'll come across something unexpected. Many people won't even notice, because they're not interested or paying attention to something else. Some people will notice, and become curious about the unexpected thing and how it works. An even smaller number will set out to try to learn about or understand the thing. But the rarest response of all is to figure out how to trap this unexpected thing in a web of experiments so it has no choice but to reveal itself. Those people are the ones who get to be great scientists.

A great example is a chapter in Feynman's book called The Amateur Scientist where he mostly talks about ants. He was curious about how ants find food and know where to go. So he ran a series of simple experiments involving moving ants around on little paper ferries, setting up grids of glass slides and rearranging them, and graphing their trails on the ground with coloured pencils. He didn't sit around wondering about ants or ask an ant expert, he made specific tests he could run to figure out how they worked for himself. I suspect if ant behaviour had not already been extensively studied, and if he wasn't otherwise occupied with physics, Feynman would have made some significant contributions to the ant field.

I often run into things I don't understand, from the behaviour of some obscure piece of software to my singing tea strainer. I notice, though, that although I'm pretty good at noticing unknown things in the first place, my first instinct is usually to try to learn about them by looking for information somewhere else. That usually works fine, but what about things nobody knows yet? Looking for answers only works when someone else has already done the work to find them.

It is, of course, way more efficient to learn from the experiments of others than to repeat everything yourself. But if you spend all your time relying on secondhand knowledge you might not build the skills necessary to make new knowledge. The testing mindset doesn't seem like something you turn on and off, but rather a way of looking at the world where you constantly want to poke and prod at the bits that feel funny. So perhaps it's best to do things the hard way sometimes and re-discover from scratch what you could easily learn from a book.

It seems like more fun, at the very least.


Things have been pretty busy the last few weeks, with my new prototype push and other new things I'm working on. I slipped up a week or so ago and, rather than writing a failure post and cutting my losses, I figured I would just make it up the next day. That didn't happen, and I got used to being a day behind. Eventually it got to the point where I missed another one.

Of course, I should have seen this coming. I've previously had a very similar failure, and I wrote about the general problem of ignoring minor failures after that. The problem isn't that the minor failures themselves need to be taken more seriously, but that failures of the system that corrects minor failures need to be treated far more seriously than the minor failures themselves. I had thought about this already, of course, but I made the mistake all the same.

I think part of the reason that happened was that I hadn't really figured out what to put in my failure post. I normally try to go to the effort of analysing my failures so I can improve on them, but the original failure was basically "I was busy and slipped". I wanted to have more to say than that. In retrospect, it wasn't really worth putting it off until I had something better to say, because I ended up not doing it at all.

To stop this from happening again, I'm going to commit to writing a failure post if I miss my deadline no matter what. I think to some extent I still want to optimise that goalpost and be flexible with the deadline, but I have to fight that urge. Doing it just kicks the problem down the road and makes my life more complicated. Better to cut my losses as soon as it happens and move on.


One of my favourite improv games is called association and dissociation. You start by free-associating: walk around the room, saying things you see or think of, and using those to come up with more things that relate to them. Broom, janitor, Scrubs, Turk, turkey, Christmas, ham – that kind of thing. After you'd been doing that for a while you would switch over to dissociating: the same, but thinking of things that have nothing to do with the previous thing. Pretzels, corn – wait, actually those are both salty snacks you get at events.

Dissociation was enormously more difficult and, I suspect, not even really possible. Much like with paper-scissors-rock, I'm sure a decent analysis could fairly easily predict us even when we're trying to be unpredictable. The exercise isn't actually to make an un-association, but to find less and less obvious associations, to stretch our associative system to the point where it can come up with something that seems unrelated. That's also what happens when some idea comes at you out of thin air, and I think it's very important to cultivate for the sake of creativity.

It's previously been observed that there is a connection between creativity and unhappiness. One theory is the tendency of self-generated, spontaneous thoughts to lead to neuroticism and also creativity. Another is that unhappiness improves certain kinds of processing, particularly focus and attention to detail.

Those are both very interesting results, but I'd like to add my own speculation: perhaps there is a link between unhappiness, escapism, and the creative power of dissociation. Escapism is a common symptom of unhappiness, and losing interest in familiar things is a common symptom of depression. Perhaps, by making the familiar uncomfortable, unhappiness causes us to be more dissociative, and therefore more creative.

If that is true, it would be particularly good news for creativity. It would be a shame if being unhappy was a good strategy for improving creativity. However, if that dissociative mechanism can be learned separately, then happiness and creativity can go together just fine, and any time you would spend practising unhappiness could be better spent practising dissociation instead.


The idea triangle

I've been thinking a bit about the different ways I work on ideas, and particularly the prototypes I've been spending a lot of my time on recently. I've been trying to figure out why it's so tricky to keep them under control; they always seem to want to turn into huge projects, or be so small and inconsequential that I lose interest. In fact, I'm beginning to think that this is one of those triangle constraint type situations, where I can only get favourable results on two sides by sacrificing the third. The sides, in this case, are how interesting it is, how complete it is, and how long it takes.

How long an idea takes is probably the easiest one to reason about, and it's the one I've been paying the most attention to. The time isn't just important because I could be spending it on other things, it also has a qualitative impact on how much and when I can do that kind of work. Something like this writing is a low enough impact that I feel comfortable committing to it even when it's not my main focus; I can fit it in around other things. So it'd be nice to get the time for these down to that point. Unfortunately, that means sacrifices in the other two areas.

How interesting an idea is has more to do with what exploring it can do for you than with how immediately useful it is. You get a lot of value from the more out-there ideas, usually because you're learning and discovering new things. That's usually opposed to being practical, because the most practical ideas tend to be incremental refinements of existing ones. Practicality, in turn, has a very positive impact on time, because you can really optimise your environment for that particular kind of idea. If I were making nothing but mobile games, I could spend a little bit of time improving my toolchain and get a big boost out of it. More interesting ideas take longer because you don't know what you're doing when you start them.

The dimension of completeness is the other side of that. You could just do the bare minimum necessary to learn something or do something with an idea, but in many cases you want to flesh it out. The extreme of this is the product mindset, where the idea is important, sure, but it's the polish on the idea and how well the idea is developed that count much more than the idea itself. It's the difference between a Wright Brothers plane and a 747, or the original internet vs AOL. Of course, completeness takes a lot of time, and even more so with an idea that is more interesting. If the idea has a lot of unexplored and interesting facets, each one of them is another thing that has to be explored and refined if you're trying to make a nicely packaged product out of it.

From this triangular perspective, it's fairly easy to see why the prototypes have been difficult. I'm often trying to take an interesting idea and make something relatively product-ish out of it, and then being surprised that it takes way longer than I expected. Although I've managed to scale down the time taken, that often seems to come at a cost of interestingness; I shoot for less ambitious ideas so that I can still make something out of them in time. But my new conclusion is that I need to leave the interestingness where it is, and start cutting down on completeness.

Prototype Wrapup #5

Well, things escalated pretty seriously since last week. I committed to 3 prototypes under 3 hours each. The good news is I did 3 prototypes; the bad news is that they may have taken a touch more than I was hoping they would.



This is a little thing I made to mess around with AST tricks. I've always been a little bit annoyed that when people don't design their functions right I have to add the overhead of another function to wrap theirs. Well, not any more! It can do neat things like swap arguments around and replace them with constant values that you supply.
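As a rough illustration of the kind of trick involved, here's a toy sketch of my own in Python's `ast` module (not the prototype's actual code): instead of wrapping the function in another function, you rewrite its definition so the first two parameters trade places, then recompile it.

```python
import ast

# A hypothetical function we want to "fix" without wrapping it.
SRC = "def div(a, b):\n    return a / b\n"

def swap_first_two(source):
    """Recompile a function with its first two parameters swapped."""
    tree = ast.parse(source)
    fndef = tree.body[0]                        # the FunctionDef node
    params = fndef.args.args
    params[0], params[1] = params[1], params[0]  # swap in the AST itself
    ast.fix_missing_locations(tree)
    namespace = {}
    exec(compile(tree, "<swapped>", "exec"), namespace)
    return namespace[fndef.name]

rdiv = swap_first_two(SRC)
print(rdiv(2, 10))  # 5.0 -- the arguments arrive reversed
```

There's no extra call frame here: the returned function is a genuinely new function with the parameter order you asked for.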

Time: 5 hours.


source demo

I'm really pleased with this one. It's an iterative scribbling tool based on the way I tend to draw in notebooks when I'm bored. I spent a lot of time messing around with all the little shapes and rules to get a thing that was fun to use. The upshot was that I really enjoyed making it, but I may have gotten a little bit carried away with the time...

Time: 14 hours.


post source demo

This is one I'd been wanting to get started on for a while. It's a system for doing simple, repetitive code exercises for practice or learning. The problem is that there are a few tricky things involved in generating random but plausible code and dealing with all the AST data structures. Luckily, Metaknight had me a little prepared, because I'd already spent a bit of time messing around with the parser and generator and all that crazy AST tomfoolery.

Time: 10 hours.

So, uh, as you can see the amount of time the prototypes are taking seems to be increasing, not decreasing. I'm not convinced that just trying to cut down is going to do it. I need a new approach, one that encourages me to start with as little as I can get away with and build up from there. My last post was about this problem and figuring out what tradeoffs I'd need to make. Based on that I've come up with a new plan that I'm happy with...

One continuous hour! Like a hackathon, or an episode of 24 complete with countdown clock. I think what's missing is an obsessive focus on time. Instead of trying to make something and hope it takes less than the time, I'm going to write code for exactly as long as the clock takes, and whatever's done after the hour is done.

Personal board

I've been thinking recently about boards of directors and how useful they can be. A lot of a board's responsibilities are fairly procedural: budget approval, compensation, mergers & acquisitions, auditing, and so on. However, they also have an important role in guidance, oversight and setting the direction of the organisation. How useful that is depends on how much you trust that board, of course, but a well-functioning board can be an important resource for making and evaluating good long-term decisions.

So if it works for companies, why not for people? You could approach some friends with a demonstrated capacity for good advice and ask them to form your personal board. Unlike an actual board of directors, there'd be no legal liability or boring procedural work. There'd also, hopefully, be no capacity for firing you, though obviously if you don't ever listen to them they might resign. Their role would be purely to offer oversight and advice.

I think, if implemented correctly, this could be a really useful idea. It's important to keep evaluating what your goals are and whether your actions are supporting them, and having a group of people meet regularly for that purpose seems like a great way to do that and stay accountable to it. As well, it'd be useful to have a structured way to approach big decisions or changes, acting as a sounding board (heh) and challenging your ideas.

I'm sure it'd be fairly confronting at first. Most people, me included, aren't used to having their personal decisions and actions scrutinised, or having to justify them. But, why not? If you are confident that you're making those decisions in a way that furthers your goals, then there's nothing to worry about. And, if you're not, maybe it's better to find out and correct it sooner rather than later.

Shallow culture

One thing that people seem to get upset about a lot is the watering down of culture. Instead of being real gamers, the kids these days just play Candy Farm on their phones and have never even been to a LAN. Instead of real Chinese food, we get westernised crap like dim sims and chop suey. Ireland has a beautiful and rich history, but all we've taken away from it is Paddy O'Mc'Donergherty's Irish Pub. Outrageous! Where is the respect for Real Culture?

Of course, there is an argument to be made that there's no such thing as real culture, it's subjective and evolves over time and who are we to etc etc. But to me it's obviously true that culture can be deep or shallow, and that you lose something valuable by making it shallow. Turning the entire history of American Indians into facepaint and feathers is undoubtedly losing a lot of meaning! Much like you can have deep or shallow ideas depending on how complex and meaningful they are, I believe you can have deep or shallow versions of culture. After all, isn't culture just a cluster of ideas?

Regardless, although I think culture can be deep, and deep culture is more interesting, I would not agree that shallow culture is necessarily bad. Yes, of course, if you're steeped in the traditional culture and cuisine of India, you're going to think butter chicken is an abomination. But, much like the oversimplified film adaptation of a much-loved book, its shallowness can be a virtue. It would be a mistake to think that people who watched the movie would necessarily have time to read the book, and thus a mistake to think of it as a loss. More people have been exposed to the story, even if it wasn't the best rendition.

That only holds true if you think it's better that someone learn a shallow culture than no culture at all. Maybe it would be better to be absolutist about these things; if they want to learn about our culture, they should learn it properly! And that's an answer that makes a kind of sense, especially if it's more important to you that the deep culture remain intact than that more people are exposed to it. What good is it having people like a watered-down version of your culture?

Well, there is one way that a shallow culture really makes a difference, and that is familiarity. A funny thing happens when we are confronted by people who are very different from us: we get scared. Perhaps it's a historical relic from a more tribal time, where people very different to you were more likely to be dangerous. Whatever the cause, even today, different cultures in close proximity have a difficult time getting along. At the core of that, I believe, is an essential strangeness that comes with an unfamiliar culture.

Think about various western cultural stereotypes: England has tea and royals, America has guns and freedom, France has cheese and wine, Germany has sausages and ruthless efficiency. These shallow impressions, simplistic and sometimes outright inaccurate as they are, are comforting in their own way. I at least have something that feels relatable. Now, tell me, what's an Egyptian stereotype? Or Turkish? Or Yemeni? If you're from the anglosphere, as I am, you probably have no idea. It's not that I have a shallow understanding of these cultures, it's that I have none at all.

So when people from those countries try to integrate, I believe the lack of available shallow culture makes things much more difficult. Without those facile hooks into someone's context, you just feel like you have absolutely nothing in common. And when people have nothing in common on a large enough scale, it's bound to cause conflict. And yet commodifying culture is a thing that many people feel honour-bound to fight against. It seems like such a waste when it could help reduce that tension.

Obviously, there is such a thing as negative stereotyping, and I'm not advocating for more of that. Deliberately misrepresenting someone's culture to make it seem worse is both cruel and dishonest. But not all reductions have to be bad ones, and I think if people were more willing to embrace and participate in simplifications of their own culture, the results would be pretty favourable. And, of course, that simplification doesn't have to replace the real culture, just act as a more accessible starting point. Not everyone will follow that all the way to deep culture, but that's okay too.

I think, in many ways, the casual gaming market has contributed to the normalisation of video games. They used to be a fringe market for basement nerds, but you'd have trouble finding an 8-12 year old today who hasn't played Minecraft. Similarly, it used to be really weird to make friends on the internet and even meet up with them in real life sometimes. These days, socialising is what most people use it for. Does Minecraft compare to Quake? Does Facebook compare to the Newsgroups, MUDs or IRC of old? I feel like they don't, that by being more accessible they have lost something of the essential differentness that made them so interesting in the first place.

But at least it means that I can talk about doing those things without people assuming I come from a different planet. And, who knows, maybe some of them will get curious and start to learn about the deep culture that the shallow culture came from.

Mere aggregation

There's a funny change I've noticed with social gatherings, especially of a nerdier bent. It used to be that knowing things was a valuable contribution to a conversation. Let's say someone mentions colourblindness. "Well", you would say, "did you know that the Japanese didn't have a word for green until after the American occupation?". Everyone would say "wow, that's amazing", and then someone else would reply "and did you know that there is a tribe in the Amazon with no words for colour at all!". And so on and so on.

I used to enjoy those types of conversations, until at some point, seemingly overnight, they became incredibly boring. Other people I've talked to have made similar observations, even though none of us have gotten sick of learning new things. I was thinking about why that is, and to me the obvious culprit is the internet. Obviously, the internet has always been a fairly effective mechanism for dispensing facts, but these days between Wikipedia, TED, infographics, feel-smart-about-stuff-every-day blogs and the non-stop bombardment of social media factbites, it's safe to say that acquiring facts is no longer a challenge.

The impact of this is twofold: firstly, it reduces the value of knowledge as a signal of intelligence. People were once described as well-read, having a breadth of knowledge about interesting topics. But these days anyone can have a breadth of knowledge, you just regurgitate whatever you saw on /r/TodayILearned today. Of course, when everyone else also has access to the same feeds of trivia, and they know how easy it is, it stops seeming impressive.

And the second part is that in point of pure utility, someone telling you a fact is just inefficient now. I could learn ten facts, probably more interesting ones too, in the time it takes someone to struggle through one explanation – and from the comfort of my own home! If my goal is to know things, I'm better off going straight to the source than getting someone else to read the internet to me slowly and with more mistakes.

So, is there even any point in sharing information anymore? Sure, but the value isn't in transmitting the information, it's in choosing what information to transmit and in what context. The flipside of having this endless repository of facts is that it's actually quite hard to tell what you do need to know from what you don't. A computer science course teaches you something that the Wikipedia category can't, which is how to arrange that information relative to all the other information. And a person can select information tailored to your needs, and guide you through the connections between information you have and information you might want.

Someday, perhaps, even that will be done better by computers, and at last there will be no point in sharing anything we've learned. However, even then I think there will be a point to conversation, not about the facts or the knowledge we've acquired, but their consequences. Knowing things allows us to make better decisions and generate new knowledge from what we have already. That's not going to stop being useful even when knowledge is easy and commoditised.

So maybe facts are dead, but I say good riddance. Long after they pass, understanding will live on.

Fail open

Contrary to what many people will tell you, I think movies are a great way to learn about computer security. Someone once observed to me that the funniest thing about Jurassic Park is that when the security system loses power, the locks open. What kind of idiot would design a system like that? Oh, sorry, the power went out, now Newman has all your dinosaur embryos and PS the velociraptors are free. More recently I watched Ex Machina, which had doors that lock automatically when there's a power outage. That's definitely more secure, but pretty creepy when you're locked in your room waiting for the power to come back on.

Those two options are called fail open and fail closed, and the decision between them shows up fairly often in system design. Failing closed means that, if something goes wrong, you default to the most conservative behaviour. So if your login system can't connect to the login database, it should act as if your password is incorrect (ie, not let you in). On the other hand, if your spam detector loses access to its spam database, it should just accept everything. That is failing open: defaulting to the most liberal behaviour.
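The two defaults are easy to show side by side. This is a hedged sketch of my own (the `DatabaseDown` exception and database objects are hypothetical, not from any real system): the same lookup failure is handled liberally by the spam check and conservatively by the login check.

```python
class DatabaseDown(Exception):
    pass

def check_spam(message, spam_db):
    try:
        return spam_db.is_spam(message)        # normal path
    except DatabaseDown:
        return False                           # fail open: deliver the mail

def check_login(user, password, auth_db):
    try:
        return auth_db.verify(user, password)  # normal path
    except DatabaseDown:
        return False                           # fail closed: deny access

class DeadDB:
    """A database that is down: every call raises."""
    def is_spam(self, message):
        raise DatabaseDown
    def verify(self, user, password):
        raise DatabaseDown

db = DeadDB()
print(check_spam("hello", db))      # False -> mail gets through
print(check_login("me", "pw", db))  # False -> login is refused
```

Note that both functions return `False` when the database is down; the difference is entirely in what `False` means downstream.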

The way to decide between them usually rests on the tradeoff between false negatives and false positives under the circumstances. Losing legitimate email is way worse than getting the occasional Nigerian business proposal. On the other hand, letting the wrong person into your account is way worse than not letting the right person in. And, as should be obvious, accidentally containing dinosaurs too much is far preferable to the opposite.

There are some important human factors, too. Failing open sometimes means that systems just get ignored rather than fixed when they fail. When a smoke detector runs out of batteries and stops working, it still behaves exactly like a properly functioning smoke detector nearly all the time. That's why, instead of going quietly, they fail closed and start beeping obnoxiously. Of course, the flip-side is that a fail-closed system tends to get disabled or bypassed when its strictness gets in the way. Too much obnoxious beeping just means you pull the battery out earlier.

I think of our internal filter as an example of just such a system. Before we say something, do something, create something, release something, we want to make sure it's good enough. Of course, some people say "just don't worry if it's good enough", but to me that's a classic contextual belief that only makes sense if you already have a relatively well-functioning filter. Nobody says or does things with zero consideration for whether those things are any good. But I do think you see a lot of difference in how people react when they're not sure.

I've noticed that some people tend to let their filters fail open. If they aren't sure about the thing they're saying, they'll say it anyway. If they're not sure whether they're singing the right note, they'll sing louder. In the absence of feedback, they go with the most liberal, optimistic behaviour. By contrast, others tend to fail closed. If they don't know, they stay quiet until they know. If they feel uncertain whether the thing they're doing is good enough, they just won't do it. Why take the risk?

And risk is really what it's about, because in some cases the consequences of your filter being wrong can be pretty significant. If you're a politician or a celebrity, or even just in a conversation with a group of people you don't know, failing open could mean saying or doing something that you can't take back. But in many situations I feel like that risk is exaggerated; you're not going to lose all your friends for saying something dumb, or have everyone hate you because you made something bad.

It's for this reason that I recommend failing open when you can. Failing closed is safer, yes, but it's important to remember that you don't just lose when you do something and it's bad, you also lose every time you don't do something and it would have been good.

Creative competition

I've heard it said that competition is bad for creativity. Certainly, there are some pretty bad examples, such as Microsoft's infamous "stack rankings", where employees were graded on a curve and fired if their rank was too low. In that case it was something more akin to a fight for survival, and understandably it led to some pretty survivalist employees. However, outside of those circumstances, I actually think competition is healthy, even vital, to creativity.

One very difficult thing is to know where the maximum is. Let's say you want to be a really fast runner, so you train and train, and every day you get faster. But as you go along, your improvements slow down, and eventually plateau. Is this the fastest it is possible to run? Or could it be that there's just some way to go even faster that you haven't thought of? Getting it wrong either way is pretty bad. If you think it's not possible when it is, you're needlessly holding yourself back. But if you think it is possible when it isn't, you're set up to just fail non-stop until you eventually give up.

But all of this changes when you have a rival. Only one of you can be the fastest at one time, which means whoever is coming second definitely knows that they could do better. And if they do, the positions switch and now the ex-number-1 needs to figure out what has changed. Each runner has their own approach, so there's a decent chance one of them will come up with something the other won't have considered. Multiply this by hundreds or thousands and that collective engine is pretty good at improving itself.

Most importantly, knowing that the metasystem is improving takes the pressure off you as an individual to figure out what the limit is. Is it possible to run faster than 44.7km/h? Usain Bolt has to worry about that, but you don't. And he can be reasonably certain that if it is possible, someone will figure out how, if not him then one of his rivals.

Of course, in sports there are all sorts of physical and biomechanical limitations to what you can do, and in some other fields the upper limits of possibility are well-defined and known in advance. Things are much harder when there is very little in the way of underlying universal truth or easy approximation of what is possible. Creative work, in particular, is very difficult to measure. Is this the best book it is possible to write? Is one good book every year the most you could reasonably expect to write? Have we basically run out of clever ways to reinterpret Shakespeare plays?

The answer to all those things could be yes or no, and you'd be hard-pressed to find a way of even approaching the question formally. Instead, to explore those limits we have to rely on the iterative process of competition, driven by lots of individual attempts to find improvements. It's not exclusive to creativity, either; many areas with ill-defined limits often get this same treatment. For example, it used to be thought that working and having a family were incompatible, but people have found ways to do it, often by sacrificing in other areas or making surprising changes.

Two things stand out about that: firstly, that people aren't necessarily competing with each other directly, they just want the best result possible, and the best is defined in terms of other people's results. And secondly, the tradeoffs aren't automatically good ones. Maybe you think that you're the best guitar player you can be, until you see someone else who has moved in with their parents, quit their job, and does nothing but practice the guitar every waking hour. You could do that too, if you wanted to be good as badly as they do.

That, perhaps, is the biggest benefit of competition. Not just knowing what better looks like, or knowing that it's possible, but knowing what it would take to get there. Sometimes that's a path you can follow, and other times it's a warning sign.

Noise dictionary

I had an interesting idea today while watching a movie. It's notoriously difficult to compress noise because of its lack of exploitable underlying structure. Unfortunately, certain real-world sounds resemble random noise, like percussion, applause, rain, and even guitar strings. The more they sound like random noise, the harder they are to predict, and the harder it is to compress that audio down effectively. You can sometimes hear this in badly-encoded movie files, where running water sounds garbled.

The essential problem is that you can't compress randomness, so what if we make it non-random? It's quite common to create data that looks random, but follows a predictable pattern if you know the initial seed value. If we created a standard for predictable noise seeds – a kind of noise dictionary – sound effect artists could create sounds using noise sources that sound exactly the same as what they would use otherwise, but are far more predictable. Creators of audio compression formats would be able to use that same dictionary to compress the noise more effectively.

That wouldn't just mean smaller files, it would also mean higher-fidelity reproduction of noise-like sounds, a current blind spot for audio codecs.
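The core idea fits in a few lines. A sketch, assuming nothing about any real codec's internals: both sides derive identical "noise" from a shared seed, so the encoder only has to transmit the seed and the length instead of the samples themselves.

```python
import random

def dictionary_noise(seed, n):
    """Generate n deterministic pseudo-random samples in [-1, 1]."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

# The sound designer renders their effect from entry 42 of the shared
# dictionary; the decoder regenerates the identical samples from the
# same entry, so the bitstream carries (seed, length), not the waveform.
original = dictionary_noise(seed=42, n=48000)  # one second at 48 kHz
decoded = dictionary_noise(seed=42, n=48000)
assert decoded == original
```

In a real codec the residual difference between the recorded sound and the dictionary noise would still need encoding, but that residual is far more compressible than raw noise.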

Prototype wrapup #6

And the wheel comes round again! Last week I flagged an intention to radically scale down my prototypes in response to my thoughts on the idea triangle, the three-way tradeoff between interestingness, completeness and time. I committed to 3 prototypes in 1 hour each, and I'm happy to say I easily met that goal.


I'd been thinking for a while about making some tools to improve the process for updating hero builds in Dota 2. So the first thing was to see if it was possible to download existing builds. This is some scripting to do that.


This is a continuation from yesterday, a script that can take a .build file and upload it back to the Dota 2 web service.


I had a neat/scary idea for a tool I could use to overcome my initial resistance to starting a task: reverse the status quo by making the current situation unbearable! Mosquito plays a high pitched noise until you close the window, the idea being that you say you'll only close the window when you get started. This one made it into its own repo.


I wrote about a superlative number market idea before, and I was thinking about using Pony for that. It seems like a fun super-concurrent language to do it in. I wanted to try out making a simple web service backend to test it out, but I couldn't make anything work. I don't think Pony is ready for primetime yet.


A silly idea I had after I typoed document.createElement as document.crateElement one too many times. Fixes the typo and adds a crate in the background to the elements you created. Only 71K (53K gzipped)!

As you can see, there's a pretty substantial difference between my prototype output last week and the weeks before it. Much smaller, less ambitious prototypes, and more of them. I'm definitely happier with this approach, so much so that I'm going to commit to doing one every day for next week. This feels like a sustainable vision of the small-scale thing I've been shooting for. The ones I was doing before were really just mini-projects.

I would like to explore that space too, but all in good time.

Arbitrary reduction

Choosing between things is hard and, unfortunately, harder the more things there are to choose between. But I don't think this just applies to massively multiple choices like picking a movie or an ice cream from a needlessly comprehensive menu, it also comes from the logical generalisations of simple choices. Say you walk past a homeless man on the street and he asks you for money. Do you give it to him? Do you give it to every homeless person? Is that the best use of your money compared to, say, sending money to starving African children?

There are lots of similar scaling problems, where a little easy choice generalises into a big hard choice. You're a policeman, do you let someone off with a warning because they give you a sob story about running late? That's not going to scale; everyone's late, everyone has a sad story. But if you knew that it would just be this one person, just this one time, the decision would be easy. At scale, deciding who deserves warnings and who doesn't becomes a very complex process worthy of an entire judicial system.

But cops still give out warnings, people still give money to the homeless, and we make it through daily decisions about ice cream and other important things. I think the way we do this is by applying arbitrary reductions to the problem to scale it back down to a size where it's manageable. So you ignore the problem of all homeless people and just consider the situation in front of you. Why is the person in front of you more important than the person two blocks away? No reason. It's arbitrary, but the full problem is too hard, and arbitrary reduction gets us through the day.

The problem with arbitrary reduction is that it's often not very fair. We use simple reductions like distance, similarity, or wealth, and those often have a self-reinforcing bias. All the people in the world is too many to think about, so you help people in your comparatively well-off neighbourhood. When you're looking for people to hire, finding the best person out of everyone is hard, finding the best in your existing network is easy. If you want to invest, there are a lot of companies out there, but much fewer if you only consider those founded by friends from university. If you start on the outside of that, you stay on the outside.

Sometimes we can do without arbitrary reduction by just tackling the big problem head-on. Effective altruism is an attempt to do that for the space of charity and general do-goodery. Even without a specific framework or movement, though, it's possible to just take the hard road. Sit down and enumerate the options and goals in as much detail as necessary. If that means thinking on the scope of all people worldwide, then so be it. If it takes a week, a month, a year, so be it. That's the cost of making the right decision with all the information.

Unfortunately, it's often just not feasible. Effective altruism is a worldwide effort by many people from different disciplines all collaborating to answer that question, and there's still a fair bit of disagreement. For something like charity, where you can just make the decision once and keep benefiting from executing that decision for a long time, that might be worth it, but in other cases there's not enough time or resources to avoid an arbitrary reduction. However, that doesn't mean we have to settle for the biased reductions we have now.

So I'd like to propose a fair arbitrary reduction: randomness. It sounds strange, but why not? If the goal is to reduce your options, it's the most representative and equitable way to do so. Can't decide between ice cream flavours? Flip a coin, heads is the top of the menu, tails is the bottom. Congratulations, you just made your decision 50% easier! Looking to hire but don't have time for a full application process? Get the list of attendees for your next industry meetup, shuffle it, and try to talk to those people in order.

I'm not saying to make the actual decisions randomly, that would be chaos. But if you need to throw information away, the right way to do it is randomly. Every time we make a decision easier for ourselves by arbitrary reduction, we create an opportunity for hidden information, hidden bias, to enter the decision. Sometimes that doesn't matter, but often it does, and it's hard to know for sure. If we have to be arbitrary, we may as well be the fairest kind of arbitrary: purely random.
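If you want to try it, the whole technique fits in a few lines. The menu here is just a made-up stand-in for whatever set of options you're facing:

```python
import random

def random_reduction(options, keep):
    """Throw away options uniformly at random: still arbitrary, but
    carrying no hidden bias towards the near, the familiar, or the well-off."""
    pool = list(options)
    random.shuffle(pool)
    return pool[:keep]

menu = ["vanilla", "chocolate", "pistachio", "mango",
        "liquorice", "coffee", "raspberry", "salted caramel"]
shortlist = random_reduction(menu, keep=3)
# Now deliberate properly, but over three flavours instead of eight.
print(shortlist)
```

The deliberate decision still happens afterwards; only the narrowing-down step is random.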


I wrote before about decision hoisting, the process of taking the consequences at an implementation level and hoisting them back up into the high-level decision. Decision hoisting is the antidote to those difficult situations where someone asks you to do different things at different times that conflict with each other. Beyond its role in dealing with others, though, I think it can be a useful technique for yourself.

It's all too easy to make decisions that don't actually further your goals, and one way that happens is just ignoring the consequences as you make the decision. You had a plan to get up early and do some work in the morning, but morning rolls around and you're pretty tired. It starts to seem like a better idea to get a bit of extra sleep, and the details of that plan seem pretty distant. After all, it's not like anything is going to catch fire, you'll just have a little less time in the morning...

Of course, before you know it, your morning's gone and you haven't got the work done. But the thing is, the night before you predicted exactly this situation. You considered it and calculated the relationship between getting up early and getting work done in the morning. Nothing changed about that situation, just your own perspective shifted and, as it did, you lost sight of the connection between the decision and the consequences. Exactly the problem that decision hoisting is designed to solve.

So how do you hoist your own decisions? I've been thinking about a technique I call chaining, where you tie the decision and consequences together via some kind of immediate representation of that relationship. For example, you write "I will only get my work done in the morning if I get up by 8am" on an index card or a piece of paper and leave it by your alarm. When you get up in the morning, you still have the option to sleep longer, but you have to tear up that card before you do. You have chained the consequences to the decision, kind of like a protester chaining themselves to a tree. "You'll have to get through me first!"

I think it's important to maintain the option to go back on your decisions. New information can come up, your thinking can change, or the decision could just have been a bad decision in the first place. But to re-make the decision you need to re-consider the information that went into it in the first place, and that tends to be difficult in the heat of the moment. By representing consequences in a way that forces you to engage with them, I think it's possible to hoist your own decisions and force yourself to make them well, even in difficult circumstances.

Don't try

The first ground rule is never to worry about remembering, and therefore never to try to remember, because this is a method where the responsibility for you remembering is in the teaching, and not with you. – Michel Thomas

I've heard it said that to succeed you have to try hard. It's not sufficient to merely work at the thing, to do a series of steps diligently or to practice in a particular way, you have to try. But what is trying? In some cases it seems synonymous with doing: "if at first you don't succeed, try, try, try again". That seems pretty sensible. It can also mean to attempt, as in "I don't know, but I'll give it a try", which is a bit wimpy but fine. But it's the last sense, meaning to make a particular effort, to struggle or strain or toil, that I take issue with.

Character attacks are the last refuge of a bad plan. If a poor worker blames their tools, then a poor manager blames their people. When you deal with humans, including yourself, the realities of human behaviour are laws of the universe you work in as much as the laws of physics or economics. It's far too common to see plans that don't take motivation or engagement into account, or assume some kind of infallible superhuman will be the one executing the plan. But, much like machines, people have limitations and failure rates. If that brings your system to its knees, it's not bad people, it's a bad system.

I see trying hard as an early warning sign of that kind of failure. Getting good at something usually takes a long time, and to maintain your efforts through that period takes perseverance, but also takes a very robust system for working. If every day is a struggle then at some point, inevitably, you'll lose at that struggle and then presumably go find something less strugglesome to do. Success might not be easy, sure, but I don't buy that it has to be hard. Maybe in the sense of requiring a lot of work, but not in the sense of working at the edge of your ability.

But we have a culture that seems to paradoxically encourage that kind of brinksmanship. Working hard isn't measured in output or even hours spent, but in suffering. "Ugh, I spent all weekend fixing up our servers." – "Oh yeah? Well, I missed my kid's birthday for an emergency investor crisis." – "That's nothing, I got divorced and became an alcoholic just getting our product to launch." Damn, that's some hard work right there. Coming in and getting things done with no fanfare doesn't seem as impressive, but the endgame of competence is that it starts to look easy.

Instead of trying, I propose building systems you can trust, and then trusting them. If you know it's going to take four hours of sustained effort every day for the next decade to get good, then what's there to try at? Just do the hours. If you know you won't do the hours without timetabling them, then do that. If you know you'll lose motivation when you start to plateau, then come up with some cumulative effort trick or something. Go meta. Build defence in depth.

Just keep improving the system until it stops failing. And if you find yourself trying hard, recognise it for what it is: a stopgap covering up flaws in your system. Fix the system. Trust the system. Don't try.

Time dilation

Nobody thinks that movies are realistic; they're storytelling instruments that present a more exceptional, more glamorous, more action-filled and more important version of life. But what is it exactly that requires movies to be unrealistic? Or, to put it another way, if we wanted to make the most realistic movie, what could we get rid of before the movie became unwatchable?

Take away the spectacular settings and you've still got all the realistic drama and comedy. Take away the significance of the situations and the word-perfect wit and snappy dialogue, and you've got reality TV. Take away the direction and control over the narrative and you've got a documentary. You can keep cutting and cutting and you still seem to have a viable medium. The one thing you can't cut, though, is cutting itself. Editing. Selecting the important parts and leaving the rest.

I think you could take anyone's life and just cut out all the parts where nothing happens, and you would get a pretty interesting movie. In fact, there have been some fairly successful variations on that idea. On the other hand, imagine a regular movie with all the boring parts put back in. The establishing scene where the lead character has a boring job and quits could take years! When the down-and-out boxer has to train up for the big fight you'd just be watching the same workouts for months and months.

Perhaps one of the most harmful things about our otherwise excellent storytelling culture is that it tends to treat life as a sequence of important things happening one after another. If the protagonist doesn't know what to do and flounders about for years, you just show a few directionless scenes in rapid succession, then cut to when they start to figure it out. But real life is mostly the bits in between the important things, and most of the important things are really consequences of or decisions about all the stuff you do the rest of the time.

Life would really be much easier if we had that kind of instant connection between action and result. If we could decide to learn kung fu, time dilate, and now we know kung fu. Or, better still, decide to follow an idea, time dilate, and discover if it succeeded or failed. Not even for the rewards, just to know if it was a good decision or not. In reality it often takes a long time to find out, and in the mean time you have to just do things hoping that it'll turn out you were right all along. If you're expecting the decision to lead directly to the result, it's pretty surprising to find out that the result actually comes from a series of actions, and the decision is just the first one of those.

So, in the absence of a magical time-dilating remote control, we just have to get used to waiting. But even that is its own kind of sequence-of-important-events thinking. All the in-between bits are when you get to experience the process of what you're doing, rather than just the result. And, since it's going to be the main experience you have, it pays to make the process as enjoyable as possible.

Fibonacci's Infinite Sequence

In software, it's fairly common to see recursive acronyms. That is, an acronym that uses its own name in its definition. The classic example is GNU, which stands for "GNU's Not Unix"... which stands for "GNU's Not Unix's Not Unix". In theory, you could keep expanding it forever, like so:

GNU's Not Unix

An even earlier example was the pair of editors called EINE and ZWEI. EINE was based on the Emacs editor, and so obviously stood for "EINE Is Not Emacs". ZWEI was the successor to EINE, and so received the name "ZWEI Was EINE Initially". This is a more complex recursive acronym, which you can explore below:

ZWEI Was EINE Is Not Emacs Initially

This recursive acronym has a pleasingly regular pattern. The number of ZWEIs stays the same. The number of EINEs increases by 1 each time. But what about the number of Emacses? That seems to grow even faster. Let's add a counter (and an auto-expander to spare our poor wrists):

ZWEI Was EINE Is Not Emacs Initially

1, 3, 6, 10, 15? Those are the triangular numbers! To my knowledge, nobody else has noticed that you can generate the triangular numbers by counting the number of times ZWEI's acronym mentions Emacs. I admit the practical applications of this may be somewhat limited, but what an interesting find! Here's a more compact version:

ZEe

Can we do anything else fun with recursive acronyms? It sure seems that way. I spent some time shuffling characters around and figured out an acronym (kinda) called Fibonacci's I S. "I" stands for "Infinite S", and "S" stands for "Sequence of I S". It looks like this:

Fibonacci's Infinite S of Sequence of I S

And a more compact version. That S/I number looks awfully familiar:

FSIS

After messing around with these for a little while I discovered they are called L-systems, and they can be used to make all sorts of interesting fractals and things. They were invented to model the growth of algae and other simple organisms, not to calculate number sequences from the recursive acronyms of software programs.

Just goes to show you that there can be a surprising amount of depth in silly things if you chase them down far enough. You can find the code I used for the above demonstrations on GitHub, and a standalone demo on my demoserver.
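For the curious, those counts fall straight out of plain L-system rewriting. A sketch using one-letter stand-ins for the words (my own encoding, not the notation from the demo): Z for ZWEI, E for EINE, e for Emacs, and I/S for the Fibonacci acronym.

```python
# Rewrite rules, read off the acronym definitions:
#   ZWEI:      Z -> ZE, E -> Ee   (e, "Emacs", is a terminal symbol)
#   Fibonacci: I -> S,  S -> IS   ("Infinite S" / "Sequence of I S")

def step(state, rules):
    """Apply one round of L-system rewriting to every symbol in parallel."""
    return "".join(rules.get(c, c) for c in state)

zwei_rules = {"Z": "ZE", "E": "Ee"}
state = "Z"
emacs_counts = []
for _ in range(7):
    state = step(state, zwei_rules)
    emacs_counts.append(state.count("e"))
print(emacs_counts)  # [0, 1, 3, 6, 10, 15, 21] -- the triangular numbers

fib_rules = {"I": "S", "S": "IS"}
state = "I"
for _ in range(12):
    state = step(state, fib_rules)
print(state.count("S") / state.count("I"))  # ~1.618 -- the golden ratio
```

The S and I counts march through consecutive Fibonacci numbers, which is why their ratio converges on the golden ratio.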


I was actually part of the way through writing a post but I ran over my deadline. I resolved in my last failure to be much more picky about declaring failures as soon as they happen, even if I could probably get away with it. This is part of my defence in depth strategy: minor failures are fine as long as you handle them, but failures in the failure-handling system are not okay.

So what happened this time? I ended up overstretched because my prototypes got a bit out of hand this week. I set myself the goal of doing very small prototypes, but I got really excited by a few of them and ran over time. Fitting that extra time in with the rest of my work was too much and I ended up tired and overstretched. So the proximate cause is the prototypes taking too long.

However, the root cause is all too familiar: I could have recognised the prototypes taking too long, but instead of correcting that minor failure I ignored it. There was a failure in the failure-handling system, and that allowed the minor errors to accrete to the point where they turned into a major one. So I think it's just a matter of diligence, and resisting the urge to gloss over small problems even when it's convenient.

I'm going to build that habit starting with this post, where I failed, but probably only by 15 minutes. Still a failure, though, and still something I can learn from.

Prototype wrapup #7

Last week I scaled up my prototype goal dramatically, to one prototype per day. Unfortunately, I didn't quite get there. I did one every day except for Sunday, because my prototypes got a bit too ambitious and I ran out of time.


I listed this last week for some reason, but it makes more sense to go Monday->Sunday so I'm putting it here as well.


I wanted to try making some kind of cellular automaton more suited to sounds. I previously made The Sound of Life, which was a Game of Life + sound, but 2D grids don't actually map that well to audio (adjacent notes sound discordant). I had an idea for doing something in quotient space (an x,y grid where the coordinates reduce like fractions) using x/y as the frequency. It actually took a surprisingly short time to get working, but then I spent a few hours just messing around with different sound rules. It ended up pretty decent.
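The mapping is simple enough to sketch. This is my guess at the shape of it, not the actual prototype code: reduce (x, y) by their greatest common divisor so equivalent cells collapse to the same point, then use the ratio as a frequency multiplier.

```python
from math import gcd

def cell_frequency(x, y, base=220.0):
    """Map grid cell (x, y) to a frequency. Coordinates reduce like
    fractions, so (2, 4) and (1, 2) are the same point in quotient
    space and therefore the same note."""
    d = gcd(x, y)
    return base * (x // d) / (y // d)

print(cell_frequency(1, 2))  # 110.0 -- an octave below the base
print(cell_frequency(2, 4))  # 110.0 -- same note as (1, 2)
print(cell_frequency(3, 2))  # 330.0 -- a perfect fifth above the base
```

Because neighbouring ratios like 3/2 and 4/3 are consonant intervals, this layout avoids the discordant-neighbours problem of a plain 2D grid.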


Continuing on from the day before, I wanted a better visualisation of how the sound was laid out. I had two ideas of how to do it, and this was the first one: a simple logarithmic scale drawn on a number line like so.


The second half was to try laying it out in a way that reflected the quotient space of the (x, y) pairs rather than the notes that came out of them. So I put everything on a big grid and drew lines to indicate the slope of the ratios. That ended up like this, which actually turns out to be a pretty pleasing visual.


I'd had an idea ages ago for teaching programming by making someone play-act as the computer. This was a start on that by making an environment where you can implement quicksort by yourself. It's just some instructions and a simple list of numbers that you can drag around and click to highlight. Later on I'll look at adding the part that tells you what step to do and checks if you're doing it properly.


This was a big one. I had this crazy idea a while back that maybe you could do computation with recursive acronyms. It ended up turning into its own post and, although I was really happy with it, it turned out to be a much bigger idea than I thought. I inadvertently re-invented L-systems, which was neat but probably not really feasible to do in an hour.

So a few good things came out of this week. I was particularly happy with the way the Tuesday->Wednesday->Thursday transition was three prototypes on top of the same idea. That seems to be a feasible path to making projects out of prototypes. I'm still not done with Audiomata; I want to consolidate what I've done so far, clean it up, add some more customisation and input, basically finish it off to the point where it meets the complete standard rather than the prototype standard.

However, I also missed a day, and that traces itself directly back to Saturday (and to an extent Tuesday), two days when I went over because I was so invested in what I was doing that I didn't want to stop. That's a good feeling, but obviously it's not sustainable. My plan is to be more militant about stopping when I run out of time, but give myself the option of continuing on if I acknowledge that it has outgrown the "prototype" label and move it into a real project in a new directory. That actually happened both times I went over, but after the fact instead of being acknowledged up front. My hope is that by recognising that the prototype has turned into a full project, it will give me the perspective and the clean break point to decide whether I should stop or keep going legitimately.

I am committing to a prototype each day again. Overall this system seems to be working well for me so I think I will keep at it until I can work the kinks out and it becomes easier.

Be prepared

Leadership is a strange concept. I'm not sure we'd have a word for it, or even consider it to be a single coherent idea, if not for our particular social hierarchical history. It's something to do with power, charisma, psychology and influence, ability, and using all of those to achieve particular goals in a group context. But there's no reason to think those ideas necessarily go together. Ants, for example, have a complex social hierarchy but nothing resembling leadership. It's possible that a group of intelligent and rational enough social animals would be the same: there's no need for leadership if everyone can just figure out what to do.

I like to think of leadership as a way of approaching distributed decision-making. You see it pop up organically in distributed software systems a lot, where decisions can't be made completely independently (there has to be coordination), and they can't be made completely dependently by one dedicated decision agent (because that's not reliable or fast enough). In those systems, leadership is a way of balancing the two extremes: you have some coordination and some independence, and you mediate the two by choosing one agent to be the decisionmaker for some things some of the time.

Of course, we don't usually have anything so formal in human systems. There are formal elections for companies and governments and so on, but most leadership is done on an ad-hoc basis. Even when there is an official hierarchy, decisions are often made around, not through, that hierarchy. So how do we informally decide who makes what decisions? I believe in most cases it's on the basis of whoever has the most and best information. Much like in a distributed software system, when you want to promote one of a bunch of equal systems to be the master, the one that already has the most up-to-date information is the best choice.
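The software version of that promotion rule is almost a one-liner. A toy sketch, not a real election protocol (real systems like Raft compare log positions with considerably more care): given otherwise-equal replicas, promote the one with the most up-to-date state.

```python
replicas = [
    {"name": "a", "last_applied": 1041},
    {"name": "b", "last_applied": 1057},
    {"name": "c", "last_applied": 1049},
]

# Promote whichever replica has already applied the most updates:
# the candidate with the best information makes the best leader.
leader = max(replicas, key=lambda r: r["last_applied"])
print(leader["name"])  # b
```

Any replica that had fallen behind would first have to catch up before leading, which is exactly the informal rule in human groups too.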

It's been my experience that often the best way to get ahead, even in fairly complex political situations with lots of people and agendas, is just to know more than everyone else. If you have a clearer idea than everyone else of what's going on, or what everyone's goals are, or where to go, you're much more likely to get what you want. Partly this is because you'll be able to make propositions first and be better prepared to argue against propositions you don't like. However, a significant part is just that people are often happy to follow the lead of anyone who seems to have thought things through more than they have.

Beyond its implications for others, though, this can also be a useful technique for yourself. It is sometimes tempting, even after you've made a plan, to second-guess yourself in the moment. In a sense you are mistrusting your own prior decisions, not willing to delegate your present situation to your past self. But one way you can fight this is by vastly outmatching your future self in preparation when you make a plan. This shouldn't be too difficult, because your future self has only a limited amount of time and energy to spend second-guessing, whereas your past self can be more comprehensive.

That means projecting forward when you make the plan, imagining your future self and preparing responses to things that future-you might do. If you plan to go for a run in the morning, but then it starts raining and you didn't think of that, maybe you won't run. Better to have an answer ready, like "If it rains I'll just run in the rain and shower afterwards". In a sense it's just regular contingency planning, but the main goal is to be prepared enough that your future self can just go along with your plan.

It pays to be the most prepared person in a situation, and that's no less true when the other person is just you at a different point in time.

    The factory factory

    I was really into Minecraft when it first came out, as were most people I knew. These days it seems to be mostly popular among kids, but back then everyone played it. The game had no goals, no achievements, no storyline, and in that it was particularly elegant, but it did mean you were kind of limited by whatever you could come up with. I enjoyed that for a while, but eventually the end of Minecraft, like the end of all open world games, was just getting bored and finding something else to do.

    Much later a friend showed me his heavily modded Minecraft server. "Hey, check it out", he said, "this is way better than the vanilla game". Indeed, the mods added some kind of alchemy system, lots of new materials and, most importantly, factories. The factories allowed you to craft things automatically which, along with the rest of the features, made it altogether possible to build an entire supply chain. Materials could be automatically extracted, farmed, or mined, then sorted, conveyed to the appropriate machinery, crafted into other materials, and so on.

    The problem with all of this was that all this equipment took a lot of materials to produce. All the machinery had to be made from parts, which were themselves made from resources that needed mining. So the first order of business was to make the machinery to make more machinery. But that needed resources, so I made machinery to extract the resources, which required more machinery to make that machinery. And then I kept running out of electricity, so I built machines to make more solar panels, which meant more machinery, which meant more solar panels...

    I never really did finish my factory factory. I mean, it certainly produced a lot of parts to make itself bigger and better, and it did get incredibly large and sophisticated, but it didn't ever do anything beyond perpetuating its own existence. It was a machine optimised for optimising itself, and little else.


    A while back I wrote about the Elo paradox, the problem caused by motivation-by-measurement: if your measurements start dropping for any reason, your motivation drops, which causes a further drop in performance. I suggested at the time that a better solution would be to measure cumulative output, which has the advantage that any additional effort is always positive; you're never working to get back to zero, just slowly adding more to the endlessly rising tide of progress.

    I had been meaning to put my money where my mouth is on this front, but I've historically had trouble with personal statistics systems; nothing off the shelf works, so I basically wrote my own and, predictably, it broke at some point. However, I took a bit of time in my prototypes this week to start over and do something slightly more robust, and added an extra layer to pump selected stats from the stats system onto this site.

    You can see the result on my new stats page. It's nothing fancy, but the underlying architecture should allow me to put all sorts of other stats in there as I write more plugins to collect them. Once I have a bit more data I can add some cute graphs and things, which I'm really looking forward to. I'm hopeful it will be as motivating as I hypothesised. Can you believe I've written over 100,000 words here?

    The code for the underlying system should be usable by others fairly soon, it just needs a bit of cleanup to migrate its way out of my prototypes and onto GitHub & npm.

    The Platonic Black Hole

    With rare exception, software never seems to be complete. Donald Knuth famously gives TeX version numbers that asymptotically approach pi. The last major version was 3, and it's currently on 3.14159265. There will be no more major versions, no new features. Each subsequent version will only include (more and more minor) bugfixes. That is to say, Knuth considers TeX to be done, and is now only pursuing closer and closer approximations of correctness.

    I think it's useful to think about software in terms of the Platonic ideal. In the real world, of course, you run into a lot of problems trying to define a perfect abstract chair, but there is absolutely such a thing as a perfect abstract algorithm. When you're implementing Quicksort your implementation is an approximation of that algorithm. And, in a sense, all sorting algorithms are approximations of an ideal abstract sort, which returns correctly sorted elements with the minimum possible resources.

    Even for less abstract problems, it can be meaningful to use Platonism to think about software. You can define a perfect calculator as one that includes every one of a set of defined mathematical operations, executes them in a defined (bug-free) way, and operates using the minimum resources to achieve that. In a sense, all testing (and in particular formal verification) relies on this idea of an abstract ideal program that your program can be measured against.

    However, the more your software tries to do, the more complex that Platonic ideal is. It's rare that a piece of software will be as simple as a calculator; usually the requirements will be in some way defined by the needs of real people, which means that the software model also needs to include a user model, and if it is commercial software, the software model is dependent in some way on a business model. These are the models behind user acceptance testing and behaviour-driven development.

    In the extreme, your software's requirements can become so complex that its abstract ideal is a system of infinite size. Perhaps that sounds hyperbolic, but actually it's not so uncommon. When you define software by the features it has, i.e. it does x, does y, does z, it's going to be finite. But systems are more often defined in terms of goals like "people can say anything and our software will do it", or "be able to access all the content on the internet". Apple's Siri and Google Chrome are both implementations of an infinitely large abstract system.

    How can you tell? The question is, when would you stop adding features? What would your software have to do to be finished? Siri will never stop adding features because we will never stop having things we want to do. Chrome will never be finished because there will always be more new kinds of content on the internet. The ultimate end goal of both systems is a fully general system capable of doing anything: a Platonic Black Hole.

    If you're making a system like that, it's not necessarily a bad thing, but it does make your situation kind of unstable. While other software will slow down as it asymptotically approaches completeness, yours will just keep growing and growing forever. The eventual fate of the finite system is that the marginal cost of additional work on it drops below the marginal benefit of ever-tinier improvements. At that point, you can step away from the keyboard; it's done.

    But the eventual fate of the infinite system is that it gets so large that it can't adapt to future change, and another, newer infinite system takes its place.

    Compatibility is symmetrical

    There's an old internet adage known as the robustness principle. It says "be conservative in what you do, be liberal in what you accept". That is, you should attempt to act according to specification, but be flexible and recover from non-spec behaviour from others. It seems like a good idea at first, but then the non-spec behaviour that you accept inevitably becomes a new de facto spec. My favourite example of this is the 3.5mm audio jack.

    In many real-life situations, I think people try to follow a similar robustness principle. They are more than willing to accept a certain level of deviance in others, but are conservative in expressing their own differences. So at a job you might pretend to be interested in work that bores you, or on a date you might avoid mentioning that a significant part of your life revolves around collecting Pokemon cards. In a sense, this is trying to follow a particular social specification to maintain compatibility with others.

    But I'm not sure that is actually the right strategy in the long term. After all, if you make a point of not mentioning how you differ from the standard, all that means is that you will end up with people and situations that aren't actually compatible with you. If Pokemon collecting is that large a part of your life then you're attracting people who aren't compatible with you by pretending to be compatible with them. And, conversely, other people who share your passion for card collecting may pass you over because you didn't say anything.

    In that sense, I think compatibility is symmetrical; in every way that you're incompatible with someone or something, they're also incompatible with you. So homogenising away your differences just hides that incompatibility behind a veneer of superficial compatibility, much like the audio jacks with a zillion different pin layouts, or the early days of the web with wildly different rendering behaviour in different browsers.

    That's not to say there's never any reason to feign compatibility. If you're going for quantity over quality, and you're happy to engage at a superficial level, it makes sense. Much like political figures hide their rough edges to appeal to the masses, maybe you can gain more popularity by being more compatible. There's even a theory that the web really took off for that reason, because people could just write what they wanted and browsers would kind of figure it out. But then, inevitably, that generation of amateur web developers learned bad habits and became incompatible with the actual standard.

    So perhaps it's worth being a bit more liberal in what you do, and conservative in what you expect. That might not win you as many friends, but the ones you do make will be actually compatible with who you are. That seems to me a better kind of robustness.


    Over time, it seems like we're gaining more and more control over our lives. While a lot of that can be attributed to political conditions favourable to personal freedom, I think the real driver of control these days is technology. For example, it used to be that you had very little choice about who you interacted with on a daily basis. The people you lived near were your people, and, like them or not, they were who you'd spend time with. But now between cities, private transport, internet shopping and communication technology, most of the time you don't need to interact with anyone you don't choose to.

    Partly as a consequence of this, you also have a lot more control over the ideas and information you're exposed to. News and other media are a lot more decentralised and personalised than they used to be, with the consequence that instead of reading a national or regional newspaper and listening to the radio, you tend to read hyper-specialised information from newsletters, websites, podcasts and social media. Sometimes this is done by a specific personalisation system (eg Facebook's News Feed or Reddit's subreddit filtering), but it's also just the de facto consequence of having more and finer-grained choices.

    Finally, we have more control over what we do. The rising tide of education (and formal education alternatives), social mobility and individual power means that it's much more feasible to aim for not just work, but vocation and passion. Whereas once upon a time you would take the first good job you could get your hands on and spend a career under the wing of one corporate entity, modern workers have more power and flexibility to make work work for them.

    So we have ever-increasing control over who we interact with, what information we see, and what we do with our time. Those are all good things! While there are perfectly valid arguments about too much choice being mentally taxing, and our increase in personal power outstripping our society's ability to set social norms around it, I think those factors are not enough to really swing the equation: more control means we get better outcomes.

    However, there is one important caveat that I think is underappreciated: that power is a lot of responsibility. I don't mean moral responsibility, though I did write about that earlier. I mean that if you have complete control over your life, then the quality of that life is completely up to you. So if you're not very good at choosing good people to interact with, good information to consume, or good activities to pursue, there's nothing stopping you from ending up in a fairly terrible situation. And, even in a less extreme sense, the more control you have, the more you rely on your own judgement.

    I would never suggest giving up the important gains in control we've made, but I would suggest being aware of that significant limitation. If there's something better out there, something so good you'd never think to look for it, complete control guarantees you won't find it. For that reason, I think it can be worth deliberately giving up control in limited ways. Talking to strangers (or letting them talk to you), taking on an activity you would normally never do, and exposing yourself to strange and uncomfortable sources of information are all ways you can do this.

    More generally, it's dangerous to put yourself in a position where you are betting on the completeness of your present understanding of the world. It's not so much that you might turn out to be wrong, it's that you'd never know.

    Prototype wrapup #8

    Things are looking a bit better on the prototype front. Last week I got behind and missed a day, and this week I was determined to do all 7 that I committed to.


    This was my second run at trying out different languages for my superlative number market idea. This time I tried Rust, and I found it a lot more approachable than Pony. I managed to get a working web service that could store and retrieve a message, though I confess at a certain point I lost my grip on the memory management and just arbitrarily sprinkled ref/mut/&/move/as_ref/unwrap/Mutex around until it worked. I can see how the model could be very powerful once I've internalised it.


    I find myself writing a really basic jQuery knockoff in one line at the top of my files a lot. jQuery's huge and I really just want a little bit of sugar over the DOM, which, in modern browsers at least, is surprisingly bearable. I thought it'd be pretty fun to try to make a modern jQuery small enough to fit in a tweet. Unfortunately I only got it down to 162 characters, but it can query for 1 or multiple elements and create documentFragments from html strings, which is pretty good considering.
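
    For flavour, here's a sketch of the kind of sugar I mean; the names and exact shape are mine, not the actual 162-character version:

    ```javascript
    // Tiny DOM sugar: query one element, query all (as a real array),
    // and turn an HTML string into a DocumentFragment. The optional ctx
    // argument lets you scope queries to a subtree (or pass a stub).
    const $  = (sel, ctx = document) => ctx.querySelector(sel);
    const $$ = (sel, ctx = document) => [...ctx.querySelectorAll(sel)];
    const frag = (html) =>
      document.createRange().createContextualFragment(html);
    ```

    Not tweet-sized once you add whitespace and comments, but it covers a surprising amount of what jQuery gets used for day to day.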


    This was MeDB, the first part of my stats resurgence. It's a little plumbing utility to load data into InfluxDB for use as a personal metrics database. I got as far as recording public GitHub stats and CouchDB documents from my website, but I designed it to be extensible for more plugins and things. At some point I'll probably clean it up a little and drop it all on npm as a real project.


    This was the second part, Monotonic, which pulls data out of InfluxDB and puts it into my CouchDB. Nothing terribly fancy here, but I needed it for the stats to work. Much like MeDB, this is designed to be extensible so I can add more stats later.


    This one turned into a full-blown project called pullitzer. Basically I wanted to mirror some of my GitHub repos to my server as a kind of ghetto deployment system. I got to spend a bit of fun time messing around with HMACs and things, but mostly it was pretty straightforward. It's now powering WTFISydJS, all part of my master plan to remove all manual intervention from updating that site.


    This one got way out of hand. I'd had an idea for doing Markov chain music for a while, and I wrote a bit of code but never really got stuck into it. This time I did, but it turns out splitting all the samples out, although quite relaxing, was incredibly time consuming. I totally blew my time estimate doing it, but I ended up with a fun demo and something to put up on GitHub. What I'd really like to do in a future prototype is add some nice graphics like I did with the later Audiomata prototypes.


    This was an idea that rose out of some quibbles I had with Redux's actions. Everything just seemed so wordy. I thought it'd be interesting to see if I could make something a bit nicer. Architecturally, Redux is a lot of fun to work with. It's not that there's anything amazingly mindblowing in there, it's just good design and good code. The end result of my tinkering ended up being about 30 lines, which is very nice and largely a result of having a decent system to build on top of.
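
    To show what I mean by having a decent system to build on, here's the core reducer/subscribe/dispatch shape sketched from memory (not the real Redux API, which also handles unsubscribing, errors, middleware and so on):

    ```javascript
    // A minimal Redux-style store: state changes only by dispatching
    // actions through a pure reducer, and listeners hear about each change.
    function createStore(reducer, initialState) {
      let state = initialState;
      const listeners = [];
      return {
        getState: () => state,
        subscribe: (fn) => listeners.push(fn),
        dispatch: (action) => {
          state = reducer(state, action);
          listeners.forEach((fn) => fn(state));
        },
      };
    }

    const counter = createStore(
      (n, action) => (action.type === 'INCREMENT' ? n + 1 : n),
      0
    );
    counter.dispatch({ type: 'INCREMENT' });
    // counter.getState() === 1
    ```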

    So I did well this week, but I still feel squarely in the yellow zone. I still blew through my time limits a few times, and if things had gone even a little bit differently I might not have made it. So in the mean time I'm going to keep at it, keep committing, and work at it until it gets easier. Until next week!


    A classic piece of software lore is the "10X programmer". That is, the difference in productivity between programmers is not just a bit, it's a whole order of magnitude; the best programmers are 10 times more productive than the worst. It's something I'd heard repeated a lot of times, and I kind of assumed it was yet another poorly-supported software development factoid. However, it's actually backed up by a fair bit of evidence, at least by the standards of software development. In fact, the difference may be as high as 20 times.

    But those numbers are just nuts. I mean, an average person can probably do a 100m sprint in 15-20 seconds. The current world record is just under 9.6 seconds. If sprinting had the same variation as software, the fastest time would be 10 seconds and the slowest would be 1.6 minutes. And, let's be clear, the studies were not comparing amateurs to professionals; the variation was between professional programmers. Professional sprinters, as opposed to amateurs, are all expected to run close to 10 seconds, so the variation among them is something like 1.05X.

    Numbers that high suggest something beyond just differences in how long it takes to put code into a computer. 10X says to me that there's an exponential hidden in there somewhere; something that happens early in a project that reinforces itself throughout the life of the project. That something can't be writing code, which is hard but not that hard, but it does sound a lot like design: making decisions about code. Each decision compounds with each new decision, and those decisions have a drastic impact on the productivity of the project.

    So I don't think the 10X effect is because good programmers work faster, but rather that they work in a way that avoids making more work. Bad programmers make bad decisions. Those decisions make everything else take longer, and they compound together with future bad decisions. A good programmer can avoid getting into that situation in the first place but, crucially, a good programmer deep into a badly designed program is not going to be 10X anything.

    Which means that a better way to think about it is 1/10X. That is, good programmers are able to make things 1/10th as difficult for themselves, and thus do 1/10X the work. Chuck Moore controversially claimed that he could solve a problem in colorForth using 1% of the code it would take someone else to solve it in C. But note the crucial distinction: he didn't say he would take someone else's C and rewrite it in Forth to be 1% of the size. Rewriting it would make it a bit smaller, sure, but the biggest difference is that he would redesign it.

    It's worth thinking beyond software, too. Any situation where your decisions compound with themselves would show similar variation. And the crucial thing is that someone looking and seeing only the most recent decision would say "you're not doing anything that much better, your problems are just easier to solve". But you need to look at the entire chain of decisions; each problem is a consequence of the decisions that come before it. The real trick is to avoid letting your problems become difficult in the first place.


    Oops! Well, things went a bit off the rails yesterday. A while ago I enrolled in an online statistics course that, like most online courses, I had completely forgotten about. When I went to check on it yesterday, I discovered that the first assignment was due that evening. I already had a bunch of work to do on top of that and, by the time I got through all of it, I had blown my writing deadline completely.

    I feel like in this case my decisions were fairly good as soon as I realised there was a problem. More specifically, if I had to run out of time for something, I think my writing was the right sacrifice. That said, there are two other degrees of failure that I'd like to look into. Firstly, that I let that online course get out of hand to the point where it blew away the rest of my time. Secondly, that when it did I had no way to deal with it, no resilience built in.

    To the first issue, the trivial solution in this case is just to pay attention to the course so that it doesn't surprise me again. However, I think the more general thing is that, if I want to be able to commit to a particular schedule, there needs to be enough surrounding stability for that to be feasible. What that means is there is a kind of implicit requirement to be on top of other things in order to be on top of this. Though that's not going to be possible all of the time, the more I can achieve it the easier everything gets.

    The second thing is that it's probably not okay how easily even a relatively minor screwup can totally mess everything up. Even if I'm generally pretty organised, there are always going to be emergencies. I need to make sure that my writing system is resilient enough to stand up to them. If that sounds familiar, it's because I've said it before, which makes it not just a failure but a much more dangerous meta-failure.

    The solution I've attempted multiple times is to write my posts well before the deadline, but it hasn't stuck. To address that, I'm going to make the following commitment: I will finish Monday's post before I go to bed on Sunday. Assuming that goes well, I'll make the same commitment for the week after. Having a harder commitment should make it easier to follow through, and if I focus on having enough buffer at the start of the week it should carry me through to the end even if something goes wrong.

    Idea time

    When I first started writing, I sometimes worried that I would run out of things to say. Maybe I'd eventually sit down at my computer, put my hands to the keyboard and... nothing. All out. These days I maintain a fairly thorough collection of ideas to write about that I've built up over time, but of course that only kicks the problem further down the road. I can't build up an infinite reserve of ideas, and if I start using more ideas than I generate, eventually I'll hit zero.

    That thought tends to occur to me more when I haven't come up with any new ideas lately, but the funny thing is that I never really have difficulty coming up with ideas when I leave time for it. There's some process that trundles along in the back of my head, scooping up random fragments throughout the day and fitting them together into a sudden moment of clarity that surfaces out of nowhere. And, when I'm busy, tired or stressed, that whole process just turns off. It's not even that I try to come up with ideas and I can't. I just... don't.

    But when I make time for ideas, give myself enough space away from distractions and stress, and just think for a while, it's surprising how easily it all comes back. I'm always grateful when it does, but it's strange to think how well I can get by without it. The scary thing isn't how hard it is to give up creativity, but how easy. If I wasn't specifically doing things that demand a constant flow of ideas, I might not even notice.

    Worse is slower

    There's a trope you see sometimes in movies or TV shows, where a Technical Dweeb is doing a very important technical thing for the plot-dictated imminent deadline. "Damnit", the Important Leader Who Tells People What To Do says, "this is taking too long! Isn't there any way to go faster?" – "W-w-well, if we bypass the safety systems and reroute the engine power through this coathanger, but that would..." – "Just do it, Technical Dweeb! There's no time!". And, of course, everything works out fine.

    There is a scary implication there, beyond just that coathangers are not rated for engine power, which is that there is a neat and clear negative relationship between speed and quality. That is, if you want better quality, it will take more time, and if you want it faster you can just sacrifice quality. Dangerously, it is somewhat true, in the sense that there are situations where you can clearly point to a speed/quality tradeoff. And, if you have a single event and the consequences of insufficient quality on that single event are acceptable, then it can be a good trade to make. But you have to be very careful with those assumptions.

    For example, if you make screwdrivers, and each screwdriver needs to undergo a careful and lengthy sequence of heating and cooling cycles to be tough but not brittle, that's a clear speed/quality tradeoff. You could just skip that step and have a crappy screwdriver that either shatters or bends when you use it. Maybe it normally takes 8 hours, and this way you can get it down to 4 but it'll probably break the first few times you use it. That would make sense if someone wanted a screwdriver in half the time, and they only need it for ceremonial reasons, or they're giving it to someone they don't like. That's a single event where the consequences are acceptable.

    Now imagine that you needed to use a screwdriver multiple times, but you're on a really tight deadline. You get the four-hour crappy screwdriver, use it a few times, and it breaks. Now you need a new screwdriver, but you're still on a really tight deadline. So you order another four-hour special. Of course, it breaks again, and you order a third. You're now up to 12 hours. In the single-event case, the crappy screwdriver would have saved you time, but in this case it's cost you time. If you did it right to begin with, you'd be 4 hours better off. Keep in mind, this didn't require taking some far-future view where the screwdriver had to last a thousand years, this showed up after the event repeated itself a few times.

    But things get incomparably worse if you work in a screwdriver factory. You make screwdrivers, but you also use screwdrivers in their construction. Now you have a clever idea. Instead of selling crappy screwdrivers, you save time by using crappy screwdrivers internally. This is the same problem as before, but much worse; you're making bad tools with more bad tools. The bad-quality slowdown becomes multiplicative: each crappy screwdriver has to be replaced more often, which means you need to make more, which puts more wear on the crappy screwdrivers you use to make them. It's a kind of endless exponential crap spiral.

    So these two factors, the amortised cost of having to replace low quality work over time, and the multiplicative cost of bad tools, both point to a very different relationship between speed and quality. In some cases, the relationship can be positive instead of negative, and even exponentially so. That is, you can be faster and better, or slower and worse.

    The stereotype is that quality is a nice ideal but that it must yield, in practice, to lower quality but faster work. However, I think that is backwards. It's actually the speed-quality tradeoff that is idealistic, and the practical reality is that cutting corners often leaves you worse off than you would be otherwise. That's not to say you can't save time by doing things more efficiently, or by doing less of them, but that you probably won't save time by doing things worse.

    A life undertested

    In software it's quite common to do automatic testing of code. Of course, you already do a certain degree of implicit testing as you write the code anyway; you mess around with different parts of the program as you build them, change things to see what works better, and generally put things through their paces. In most cases, the significant functionality will have been hit hundreds of times during development, so why bother to do automated testing?

    The problem is that kind of testing only covers the obvious. Maybe you never thought to start clicking things before the page has finished loading, or you didn't try it on an old laptop or a bad internet connection, or even just never realised that some option you never really use broke while you weren't looking. But the extremes and edge cases are where most of the bugs show up, and they're also the situations you're least likely to encounter. This is particularly true because you made the system in the first place, so chances are you won't write bugs that are obvious to you.

    So we test. We test with random data, we test with random junk that doesn't even resemble data. We test with slow computers, artificially bad internet connections, weird browser/operating system combinations and odd screen sizes. We simulate unplugging random cables, maliciously large floods of web traffic, and random parts of the system crashing. Not always all of these, of course, but every one is a legitimate and useful kind of testing. The common factor is the extreme: in all cases, we want to push our code to the breaking point. We either want to see that it won't break, or we want to know what it takes to break it and what happens when it does.

    But there's no reason this attitude has to be unique to software. Any system that is designed by people has similar flaws. We forget to consider certain possibilities, make inadvertent assumptions, or fail to realise when or how our ideas break down. As you go through life, you build up layer upon layer of these systems. You have a system for getting to work in the morning, a system for keeping your clothes clean, a system for thinking about immigration, a system for dealing with stress. Every one of these is tested by just being used in everyday life, but not very well. Could we do better?

    I think so. Although automated tests may not be feasible, we can still apply that spirit of pushing the extremes. For example, children are a substantial test for the getting to work in the morning system, though there are probably better reasons to have a baby. Someone with children will almost certainly have a better, more robust system than mine. Your system for thinking about immigration would become substantially more robust if you spent time as an immigrant in another country, or if you spent time talking with immigrants in your country. And your stress-handling system is only as good as the most stress it's been able to handle.

    In general, it makes sense to avoid unnecessary difficulty. Why make your life harder than it has to be? But this is one argument for seeking out more difficulty. The more you can do so under controlled conditions, the better-tested your systems will be. That might have immediate beneficial consequences if it reveals a flaw in something you took for granted. However, the real benefit comes later, when you hit a difficult situation under less controlled circumstances. At that point you'll be glad to have a well-tested system behind you.

    Creation and creativity

    There's something I've been thinking about in terms of producing creative work. It's quite confusing that we use the same word to mean two different things. I think it's meaningful, even necessary, to distinguish between creation, the act of making things, and creativity, the quality of a thing being imaginative, novel, or eye-opening. I should note that I already said "creative work" at the start of this paragraph without specifying which I meant. They're easy to conflate!

    Creation, that is, making things exist that didn't exist before, is often a fairly uncreative act. For example, writing a story is considered to be a creative endeavour, but actually most of the real creativity happens early on, coming up with all the great plot and character ideas. However, to actually make a story, you just need to spend a lot of time writing. What do you write? The craziest, most original, most creative words you can think of? No. You write what the plot, situation and characters need. In fact, you often have to be less creative in your words if your story is really out there, just so there's a chance people will understand it.

    Creativity, on the other hand, I consider to have a very particular meaning. Just because you come up with something doesn't mean it's creative, and some things are more creative than others. The measurement is a bit tricky to pin down. For example, I would say creativity is novel and unpredictable, but so are randomly generated numbers. To me, the key factor is that exercising or experiencing creativity usually means thinking something you wouldn't otherwise think. It's this element of intellectual surprise that I think makes creativity unique.

    It is possible to be creative without creating anything. Perhaps my favourite example of this is computer security, whose practitioners are nearly indistinguishable from wizards. Security is, at its best, a ratchet that only ever gets more secure. Every time a new exploit is discovered, everyone learns about it and it stops being an exploit. What that means is that to come up with new exploits requires thinking in a way that nobody has thought before. My favourite example: I once saw some researchers break an otherwise impregnable server by messing with the voltage going into it. They even extracted the secret key. All by messing with the power supply! Who even thinks of that?

    I don't mean to suggest that you should strive to be creative all the time, or that all creation should be creative. Quite the opposite; I think maintaining that level of creativity on a constant basis would be impossible, or at least extraordinarily tiring. And, as in the story example, creation often involves a lot of fairly uncreative work. I think that is perfectly fine. In fact, it's important to develop those non-creative abilities so as to make the best use of your creativity.

    That relationship is the main thing I'd like to emphasise. Creation and creativity are different, but they often go together, and the balance between them matters a great deal. Some people create without creativity, which is a shame because it leads to uninteresting work. However, a great many more are creative without creating, and that often means their creativity amounts to very little.

    Prototype wrapup 9

    This is going to be a fairly short one. Things were looking fairly good last week, so I committed to keep making a prototype every day. For reasons mostly related to my failure earlier in the week, I didn't end up doing that many. And by not many, I mean one.


    I've always been a big fan of Literate CoffeeScript. It's a lot more lightweight than Knuth's original, just a kind of Markdown where the code blocks are run as actual code. For a long time, I've had a dream of being able to use this style with languages other than CoffeeScript, so I started writing a proof of concept in JavaScript.

    I'm not going to dwell exceptionally much on having fallen so far short of my commitment; this is still a fairly fragile and underdeveloped habit, and near as I can tell the main lesson here is "new habits get stomped when a lot of other things go wrong". If it becomes a recurring problem I'll take another look, but for now I'm just going to concentrate on keeping everything else in order and building the habit up. That means I'm making the same commitment again.

    In positive news, I did actually fulfill my commitment to finish this post early. I think that if I can maintain that habit, it'll ease a lot of pressure when things go south, including on my prototypes. Until next week!

    The stability tradeoff

    One thing that's surprised me is how much I have come to respect structure. Until a few years ago, I was generally of the opinion that a kind of carefree, laissez-faire working style was best for creativity. I think it's a fairly common sentiment, and seems to appear mostly by analogy: if there are certain qualities you want in your work, you try to take on those qualities. However, I don't think the analogy actually holds: the work and the worker are inherently different, and creation isn't always creative.

    So over time I've been introducing more structure into how I work and enjoying the benefits that brings in terms of both volume and consistency of output. I've also found that consistency can make it easier to be creative, by providing a ready supply of raw material to be creative with. Other benefits include being able to plan more easily, and spend less time thinking about what to do.

    But an interesting thing I've noticed is that stability is super compatible with more stability, and not compatible with even a moderate amount of instability. Perhaps that seems obvious, but it has some interesting consequences. For example, I used to quite enjoy working out of cafes, but these days it usually doesn't really make sense; I've got my current setup working well, and changing it makes it work less well. Having my location and work intertwined makes me less likely to travel. Being less likely to travel means it makes more sense to invest in furniture and housing. Before long I've got a dog, a big TV and nice curtains, all because I found work habits that made my life easier.

    And what's wrong with that? Well, nothing necessarily, assuming those stability-compatible things are what you want (and presumably they are, if you chose them). But as all those stability factors combine to form a majestic stability fortress, your tolerance for instability goes completely through the floor. An opportunity that would involve, say, an intercontinental move, or living out of a bungalow in a forest, or even just changing career, becomes extraordinarily costly. In addition to its direct cost, you have to pay the cost of giving up all the stability you've come to rely on. Eventually, perhaps, that will just stop seeming worth it in the general case.

    At that point, you've hit complete stagnation, or, to put it another way, have become totally adapted to your environment. The things you do currently, you can do exceptionally well, but it will be nearly impossible to do something unexpected. And that itself becomes a problem if you want to be creative. Sooner or later, your creativity will lead you towards some kind of fairly lateral step. Maybe you've been creating software for years and it leads you to producing electronic music, or art, or opening a nightclub. At that point, you've got a choice: be less creative, or take the huge stability hit.

    I'm still a long way from that point, of course, but it's something to keep in mind. Stability is a tradeoff, an investment in things generally staying the same as they are now. That's often a sensible investment to make, but like any bet, you won't always be right. More importantly, you won't always want to be right. Change is an integral part of creativity and, when the time comes to throw away your stability, it's probably best not to have too much.

    Functional definitions

    Knowledge is knowing that a tomato is a fruit, wisdom is not putting it in a fruit salad.
    Miles Kington

    I think one of the breakthrough moments in learning mathematics is when you realise that all of the definitions are just made up. Back when I was in high school and all the way into university, I was taught things like "a negative number times a negative number is a positive number", "anything except zero to the power of zero is one", "you can't divide by zero", "there's no square root of negative numbers", "actually we lied about that one". Well, actually, all of it was lies. None of those things are that way, they were just defined that way by some mathematician or other.

    Many common terms for seeds and fruit do not correspond to the botanical classifications.
    Wikipedia article on Fruit
    In the interest of balance, wouldn't it be prudent to include a section on the myriad criticisms of fruit. This page is so onesided.
    Wikipedia talk page on Fruit

    And definitions, both inside and outside of mathematics, seem to follow this pattern quite frequently. Some people tend to claim that their set of definitions is absolutely and objectively correct. Indeed, they'd call them facts instead of definitions so as not to offer any implication of being arbitrary. On the other hand, you get people saying definitions are fundamentally arbitrary. What you call a fruit may as well be a vegetable, or a dinosaur, or whatever. What we call good could be called evil by someone else and it would make just as much sense. That viewpoint, however, leaves a lot to be desired.

    You can know the name of that bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird. You’ll only know about humans in different places, and what they call the bird. So let’s look at the bird and see what it’s doing – that’s what counts.
    Richard Feynman (quoting his father)

    Definitions are invented, yes, but not arbitrarily. We define things in certain ways because of the consequences of those definitions. For example, you can make your own branch of mathematics where multiplying negative numbers by negative numbers yields negative numbers. Nobody's stopping you! But other things would need to change too. Would your branch of mathematics end up consistent with these changes? Maybe! Would it end up useful? Probably not.
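    To make that concrete, the familiar rule isn't arbitrary either; it's forced on you the moment you keep distributivity. Start from the fact that anything times zero is zero:

    0 = (-1) × 0 = (-1) × (1 + (-1)) = (-1) × 1 + (-1) × (-1) = -1 + (-1) × (-1)

    which leaves no choice but (-1) × (-1) = 1. Decree instead that a negative times a negative is negative, and you must give up distributivity (or something equally fundamental) to stay consistent.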

    So too with other definitions. We can define a tomato as a fruit, vegetable, or dinosaur, but those definitions have consequences. If you're thinking about a tomato as something to eat, you're in for an unpleasant surprise if you expect it to have a similar taste or role in cooking as other fruit. On the other hand, if you want to grow a tomato, or analyse its reproductive cycle or relationship to other plants, you'll find thinking about it as a fruit saves a lot of time. However, regardless of your goals, it's unlikely that defining a tomato as a dinosaur will help you.

    I think the only sensible way to think about definitions is functionally. That is, if I define a thing in this way, what does it buy me? What can I do with this newly defined thing that I could not do before, and what options does defining it in this way remove? Does it help me think more clearly about this thing or other things in the same category?

    If, upon reflection, your definition doesn't do anything, you're probably better off without it.


    In platformer games, it's quite common to use various physics tricks to make the game feel better. For example, you can still jump even if you've (very slightly) fallen off the platform already, and you can land on a platform even if you (very slightly) miss it. Most of the tricks are fairly minor ways of making physics a bit forgiving, but there's one huge departure from reality that almost nobody notices: you can influence your movement in mid-air. This isn't just a minor cheat, it's basically throwing out a large chunk of the laws of the universe. But we don't even notice!

    Why is this? How can we not feel totally alienated by finding out that our supposed avatar in this world just... pushes himself through the air? My theory is that, deep down, we actually feel like we do live in a universe where we can change our direction in mid-air. It's not surprising to us when we press the left button and Mario moves left despite no plausible physical basis; it's surprising in real life that after we jump we can no longer control our movement. The game is just being generous by making the universe work the way we, deep down, feel like it should.

    And I don't think this applies only to physics. The idea of a situation being totally out of our control is very difficult to accept, to the point where most of the time we just don't accept it. A lot of the circumstances of your life are dictated at birth, but nobody wants to believe that. You can't make good things happen in your life by really really wanting them, yet that's the thesis of a bestselling book. And you can't get things done faster or make more time in a day just because you will it to be so.

    That's not to say there aren't ways to affect how long something takes, or many other things that happen in your life. But you influence them by pushing on something, much the same way as you jump by pushing against the ground. You can push on which opportunities you pursue, push on how many things you try to get done, or push on how effectively you work. The one thing you can't do is push on nothing and expect movement. Real life doesn't work like that.

    The law of demand

    The law of demand in economics says, roughly, that the more something costs, the less people will buy it. And, conversely, the less something costs, the more people will buy it. This is one of those observations that, depending on your perspective, is either totally trivially obvious or profound with wide-ranging implications. I'd like to argue for the latter.

    Cost isn't just money, it can be time, or inconvenience, or injury, or stern words from your parents. Anything that increases the total amount you have to give up, trade, lose, or suffer in order to get what you want. And this applies not just to financial transactions, but any situation where you have to choose between different options. In other words, most situations. So, to formulate it another way: all else being equal, you will tend to do things that are easier and require you to give up less to do them.

    That "all else being equal" is an interesting one, because of course things aren't usually equal. Most likely there are some things you want more than others, even if they require more work. For something that you are deeply and profoundly passionate about, the law of demand probably doesn't matter that much. However, two things come to mind: firstly, there are many things that you might want to do but don't feel very strongly about, and secondly, even super important life goals can seem pretty dull at times, especially when compared with more immediately gratifying options.

    So maybe in general things won't be equal, but I suspect that over the course of a given activity there would be a surprising number of points where the decision of whether to do it or go watch TV or whatever is a pretty close one. All you need is one of those decisions to lead away from the thing you'd rather do and then you're off track. And not just TV either, but even other useful or semi-useful activities that are justifiable but still not the best thing for you to be doing.

    Which is all to say that I think the law of demand actually does affect our decisions on a regular basis. And what that means is you need to pay a lot of attention to what things cost. Are the things that you really want to be doing easy? Can they be made easier or otherwise less costly? Can you make the things you want to do less of harder or more costly? Some of these adjustments can be very minor, but still make a big difference.

    It's not sufficient to say "well, it doesn't matter if it's hard, I want it enough that I'll do it anyway". There will be times when you lose sight of that desire, and in those moments having easy alternatives within reach is just tempting the law of demand.

    Robot mode

    Most of the time, we want to keep our reflective self engaged, that part of us that keeps a watchful eye over our actions and occasionally pipes up with "hey, is this really the thing you should be doing? Is this really the right way to do it?" The rest of us is usually content to just do things and not worry too much about the bigger picture, but that makes it easy to end up stuck in dead ends or bad decisions caused by just blindly going from one step to the next.

    At its best, that judgemental process also keeps you connected with the experience of what you're doing. You tend to ask "is this a good experience?", which is a way of reflecting on your own feelings about the thing you're doing, and telling you whether you should actually keep doing it. This is also important to avoid getting stuck in doing something you actually don't want to do, but seemed like a good idea at the time.

    Unfortunately, sometimes you want to do something that is unpleasant or mundane. That doesn't mean you've made some kind of terrible mistake, just that, in the service of something you do want, this particular step is not terribly enjoyable. However, that feeling is essentially generated by reflection. Washing a dish or renaming a bunch of files by hand is not unpleasant as dictated by the laws of the universe, but unpleasant as dictated by your judgement of it.

    The trick, then, is to get those unpleasant tasks to a point where they require no decisions and no judgement at all. If you completely specify what you need to do, to the point where the non-reflective, automatic part of yourself is doing all the work, then your reflective mind can just take a break, or think about something else. It doesn't need to be engaged in what you're doing, and that takes away the unpleasantness.

    So I suggest maintaining a relationship between enjoyableness and specificity. If you're doing something you really like, leave it unspecified so you can enjoy the feeling of exploring it and thinking about how you're doing it. If the opposite, just specify everything in as much detail as you need to switch off.


    A friend once put me on to this great series of little puzzle games online. Each one had a hidden star for you to uncover, with a slightly different trick and no instructions. The first few you would just move something out of the way to reveal the star, or move things around into a star shape, and so on. But they slowly got harder and more devious. Finally, I hit one that completely stumped me. I clicked everywhere, moved everything, pressed every button I could think of, but nothing happened. My friend was laughing the whole time until I finally gave up. And then the star appeared.

    The solution was to do nothing. And, of course, that was the one thing I'd never think to try. When faced with a problem, I want to poke it, prod it, investigate it, test its limits, develop theories, try things, come at it from different angles and, above all, just do something. You don't gain any information by doing nothing, you don't learn from your mistakes by doing nothing, and you don't make progress by doing nothing. Except this time.

    And, actually, there are other situations that benefit from inaction. Sleeping, for example, requires doing nothing. That can be very difficult to do, because trying to sleep is still doing something. Meditation, similarly, is mostly the art of not thinking about things. Removing an association only happens by not thinking about and reinforcing that association. And, of particular note, coming up with ideas also requires inaction.

    Lastly, there are situations where you aren't able to meaningfully control the outcome. Usually not many and not completely, but there are some situations where the outcome is beyond your control. Faced with something you can't change, or that improves when you don't try to change it, the only sensible reaction is to do nothing. Anything else is just wasted energy.

    Prototype wrapup #10

    This week was an incremental improvement from last week's fairly unimpressive effort. I committed to one every day this time around, but I only got 3 done:


    I've been meaning to try out Elm for a while, so I thought I'd try to make a clone of EveryTimeZone. I got a bit thrown by Elm's effect model, which was so pure I had to go to some extra effort to get it to use the current time. Figured it out eventually, but it meant the app was basically a single input field and a label with the time in it. Elm seems cool though.


    I figured I'd take another look at Rust, which I tried out in a previous prototype. I wanted to build a crappy CouchDB-like web API for a database. This time I used a higher-level web framework called Iron, which saved a lot of messing around with the lower-level HTTP library. I even got the database library working, but didn't manage to figure out how to concurrently access the database from different requests.


    This was a pretty fun idea: a Big-O estimator. If you give it a function and a way to call that function with increasingly large inputs, it figures out what the complexity of that function is. I originally tried to write this in PureScript, but its build situation has gotten even worse(!) than the last time I used it, so I just did it in CoffeeScript. Right now it doesn't really distinguish between the complexity classes clearly, but I think if I learned a bit about regressions I could make it neater.
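    The core idea can be sketched in a few lines. This is not the author's CoffeeScript prototype, just a minimal Python illustration under the same assumptions: time the function on inputs of increasing size, then fit a straight line to log(time) against log(size). The slope approximates the polynomial degree, so a slope near 1 suggests O(n), near 2 suggests O(n²), and so on.

    ```python
    import math
    import time

    def estimate_big_o(fn, make_input, sizes):
        """Estimate fn's growth rate by timing it on increasingly large
        inputs and fitting the slope of log(time) vs log(size)."""
        xs, ys = [], []
        for n in sizes:
            arg = make_input(n)
            start = time.perf_counter()
            fn(arg)
            elapsed = max(time.perf_counter() - start, 1e-9)  # guard log(0)
            xs.append(math.log(n))
            ys.append(math.log(elapsed))
        # ordinary least-squares slope through the log-log points
        k = len(xs)
        mean_x, mean_y = sum(xs) / k, sum(ys) / k
        num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        den = sum((x - mean_x) ** 2 for x in xs)
        return num / den

    # A deliberately quadratic function should give a slope near 2.
    quad_slope = estimate_big_o(
        lambda lst: sum(1 for a in lst for b in lst),  # O(n^2) pair scan
        lambda n: list(range(n)),
        [200, 400, 800, 1600],
    )
    ```

    As the post notes, a plain slope fit like this doesn't cleanly separate nearby classes (O(n) from O(n log n), for instance); distinguishing those properly is where a bit of regression knowledge would pay off.
    
    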

    My hypothesis last week was that missing my goal was a one-off issue, but I'm beginning to think that actually the success the week before may have been the one-off. I was still pretty busy during the week, and the main culprit was just simply running out of time. For next week, my plan is to keep better track of that time. I've been off the timetabling bandwagon for a while but I'm starting it up again in the hope that it'll let me keep on top of things better.

    With this in mind, I'm going to commit to the same again for next week. If that doesn't work, I'll look at scaling it down to a lower frequency and see if I can build back up from there.


    This was a failure fairly similar in nature to missing my prototype goal last week. I'm basically just a bit overloaded at the moment. Between the prototypes, writing, other work, and that statistics course I'm doing, it's been very easy to get off track. I generally think being busy is a crutch, but I'm also not convinced that my current level of activity is actually too much if I can learn to manage it better.

    I'm also hopeful that in some ways this current time pressure will serve as a testing ground for the way I'm doing things. For something like the prototypes, I would normally like to have a more developed habit before testing it, but I'm happy enough to make hay from this situation while I can.

    My solution to my most recent failure, ensuring I was a day ahead, worked particularly well for the week I kept it up. I didn't make the same commitment for the following week because I thought I might not need to. Evidently, that was not the case! So, in the interest of following where the evidence leads, I will once again commit to finishing Monday's post on Sunday.

    With any luck, I'll come out of this with better and more resilient habits than before. If not, I guess we'll all need to get used to a lot of failure posts until things calm down. Let's hope for the former!


    There's an interesting dichotomy I've been thinking about, to do with relatability. Most of us, I suspect, like to think of ourselves as relatable. That is, we believe that the things we feel or observe generalise to the rest of the population. This turns out to be pretty important because you could, for example, observe that you are often tired in the afternoons, and then relate that experience to someone in conversation. If that person has the same experience, you both feel connected by it, and comforted that other people feel the same things as you.

    It's also important for reasoning about other people. Let's say you are grossed out by spiders. Do you throw a spider-themed halloween party? Well, specifically, you're grossed out by actual spiders and something in particular about their behaviour, not just by anything that claims to be a spider. People in spider costumes, cartoon spiders, adorable plush spiders and the Porsche 550 Spyder are all fine. This may not be obvious if you do not feel that spiders are gross, or if your experience of spider-grossness is very different from that of others.

    So relatability is good. Where's the dichotomy? There's another thing many of us want to be, and that is exceptional. Perhaps most people need 8-9 hours of sleep but you are proud of being able to function fine on 6. You might have a particular aptitude for languages, or instruments, or horses or whatever. Even if it's not some genetic or environmental predisposition, you could be exceptional in what you've achieved, in how hard you've worked and how much time you've put in relative to others. There are lots of ways to be exceptional and being exceptional is usually considered a good thing.

    But you can't be both relatable and exceptional. If you're amazing at the violin, chances are you're not going to relate to many people about music, and definitely not the violin. If you're a few standard deviations off the beaten track in intelligence, your thoughts will be less relatable than if you were more average. And if you work hard to hone yourself into someone great, an exceptional person who does exceptional things, the end result will be that it is hard to relate to people who aren't.

    Which isn't to say that being exceptional isn't worth pursuing. Being in a situation where you're doing so well that it's hard to relate to people is the definition of a good problem to have. But it is still a problem, and in particular it seems worth being aware that your opinions may become less relatable even as they become more advanced. Worth thinking about, too, is that the risk of becoming alienated, or the comfort of being relatable, may hold you back from becoming more exceptional.

    In some cases, that could even be the right decision to make.


    We seem to live in the age of the infinitely extended copyright term. The date when Disney's beloved Mickey Mouse passes into the public domain is nominally the start of 2024, but it may well turn out to be, as Mary Bono wanted, forever minus one day. That said, the political climate has changed since the last copyright extension, and there is some reason to think this might be it. If the copyright machine, at long last, stops, we'll just have to make do with somewhere around 100 years.

    In the face of these gargantuan terms, heavy-handed copyright enforcement, and de-facto elimination of fair use by automated takedowns, it sometimes seems like this whole copyright malarkey is much more trouble than it's worth. Or maybe we should go back to 14-year copyrights, for which there is some theoretical evidence. But it's important to remember that, as with any investment, there is a balance to be struck between rewarding a contribution and providing an indefinite free ride on the basis of some long-past effort.

    Actually, the situation reminds me a lot of startups, where there is a very substantial issue with early equity in a company. That equity is meant to be the compensation that founders and early-stage employees receive for their investment in building the company, which then pays out over the life of the company. The only problem is, what happens if one of the founders just leaves? To avoid this problem, equity is given on a vesting schedule, which means that you don't get all of it right away. But even someone whose shares have fully vested can become an unacceptable burden if they leave. A later hire could be contributing far more to the success of the company and get comparatively little.

    And beyond startups, this seems like the general thesis of Thomas Piketty's Capital in the Twenty-First Century: that when the value of having things is greater than the value of making things, you have created a dangerous and unstable situation. You need people to keep making things for your economy to work, but they're unlikely to want to do that if they see other people being rewarded highly for resting on their (or their ancestors') laurels.

    Piketty's solution is a global wealth tax. That is, to artificially add depreciation to capital. Every year your existing wealth would proportionally reduce itself, with the result that creating new wealth becomes proportionally more valuable. There is a similar solution to the startup problem. Equity is diluted over time as new investors come in, making the existing shares a lower proportion of the (usually more valuable) pie. It's not unheard of for companies to issue new shares non-proportionally to specifically dilute one founder. Mark Zuckerberg, for example, did this at Facebook (and, in fairness, got sued for it).

    I feel like there is a similar solution waiting out there for copyright. It is true that we should reward people for making things, and, inevitably, that will turn into a reward for having made something in the past. But for how long? And, more importantly, why is it a function with a single step from 100% to 0%? Surely Disney's creation in 1928 can't be worth the same in 2023, and then nothing in 2024.

    What would an artificial depreciation schedule for copyright look like? Maybe the bar for acceptable fair use could lower throughout the life of the work. Or perhaps something like compulsory licensing with a rate that decreases over time. Another interesting alternative I've read is to make holding the copyright beyond some nominal term (14 years, perhaps?) cost increasingly large maintenance fees.

    I'm not saying this would remove the need for other kinds of copyright reform, but it would certainly seem fairer if I had a stronger claim to something I made today than Disney has to something made almost ninety years ago by a man who died fifty years ago. Especially when all that old creative wealth, if unlocked, could provide the raw materials for a new generation of creators who are otherwise on their own.

    I got 99 solutions

    A few days ago on Reddit, I saw someone post this observation: "The number 14233221 describes itself; it has one four, two threes, three twos, and two ones." I thought that was pretty interesting, so I thought I'd take a closer look. The numbers are very similar to the look-and-say sequence as studied to death by John Conway of Game of Life fame.

    In the traditional look-and-say sequence you say 1, 11, 21, 1211, 111221, or "one", "one one", "one two, one one", and so on. However, we want to count the digits instead of reading them in order, so: 11, 21, 1112, 3112, [...], 21322314. This last number is actually the same as our original 14233221 with the digits in a different order. These have been described as descriptive numbers and counting sequences. What we want are the fixed points of this descriptive/counting function.

    Another closely related idea is the autobiographical or self-descriptive number. These are numbers such as 1210 where the first digit represents the number of 0s, the next the number of 1s and so on. The largest of these is 6210001000. In fact, there are only 7 of them assuming you only allow digits up to 9. However, there are many more of the kinds of numbers we're looking for.

    To find out just how many, I put together a program to search through them. Initially I wanted to be clever and search in some incremental way through the space of numbers by just picking some seed numbers and generating new numbers until I found the fixed points. Unfortunately, I couldn't figure out how to be sure that I'd get all the numbers. Maybe someone with more substantial mathematical chops than me would be able to figure it out, but I had to settle for a brute force approach.

    However, I found an interesting shortcut: while you can have a solution that starts with 10 (like 10213223), you can't have one that starts with 20, or indeed with any count of 0s other than one. This eliminates 80% of the candidate solutions fairly quickly. I maybe could have found other clever tricks, but by that point my program was fast enough anyway, so I just ran it. It turns out the smallest of these numbers is 22, and there is only one solution that uses every digit. I'll put the full list in a box below, but it might be a fun exercise to find the largest one yourself.
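    For illustration, the fixed-point check at the heart of a brute-force search like this might look something like the following. This is a minimal Python sketch, not my original Haskell or Rust code; `describes_itself` is a name I've made up here, and it assumes every count is a single digit.

```python
from collections import Counter

def describes_itself(n: int) -> bool:
    """Check whether n, read as (count, digit) pairs, describes its own digits."""
    s = str(n)
    if len(s) % 2 != 0:
        return False  # pairs need an even number of digits
    actual = Counter(s)
    claimed = {}
    for i in range(0, len(s), 2):
        count, digit = int(s[i]), s[i + 1]
        if digit in claimed:  # each digit may only be described once
            return False
        claimed[digit] = count
    # The claims must cover exactly the digits that appear, with the right counts.
    return claimed == dict(actual)
```

With a predicate like this, the search itself is just a filter over even-length candidates.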

    One curious fact: there are, in fact, 99 solutions. Since there does not appear to already be a satisfying name for these, how do we feel about Beyoncé numbers?

    Staying on top

    I wrote previously about the "what the hell" effect, where once you slip up a bit with something you tend to think "ah, what the hell" and slip up massively. Of course, that is objectively worse than slipping up slightly, so this falls in the category of unhelpful cognitive biases. I've been thinking recently that there might be a more general effect.

    What I mean is that if you're on top of things, it tends to be easy to stay on top. But a failure in one area seems to spread to others, even if they aren't dependent in any way. My theory is that there is a kind of global "what the hell" effect, where once you've lost control in any area, you stop feeling on top of things which justifies everything else going bad too.

    This is interesting because it suggests an alternative solution, both to this general "what the hell" effect and the more specific one. If being in out-of-control situations makes you generally out of control, maybe the antidote is putting yourself in unrelated situations where you are in control. That is, it may be beneficial to step away from the thing you're having trouble with and focus on something more stable.

    In fact, it may be worth cultivating some easy habits specifically for this purpose. Keeping a clean desk could have benefits beyond just the outcome of having a clean desk. The very process of keeping the desk clean is an easy thing to keep on top of, which should make it easier to stay organised about other things just by virtue of putting you in an on-top mood.

    The Witness

    I have been playing The Witness recently, and I feel like its themes, or at least what I perceive as its themes, are powerful and interesting enough to be worth going into in some detail. I won't be spoiling anything in particular about the puzzles or the story, but in a sense discussing the themes is a meta-spoiler. You may want to play the game first to get the full value of experiencing these themes firsthand rather than having them described to you.

    In mathematics, a witness is a fancy term for an example. If you say that aliens exist, a witness is an alien. If you want to prove that not all prime numbers are odd, a witness is 2. Many proofs are complicated, but existence proofs are, in this way, very simple: you just need a witness. Much of the game involves solving simple checkerboard puzzles by drawing a line from the start to the end. You are given the board, and the game asserts that it is solvable. All you have to do is provide the witness.

    Of course, a witness is also a person who observes something, and observation is the other thing you do a lot of in the game. Despite its core mechanics being no more fundamentally complex than what you'd find in a standard mobile puzzler, the real magic of The Witness comes from its setting and environment. The puzzles are tangible, physical things that you find as you explore the island they inhabit. At first, this seems like a curious and extravagant way to make a puzzle menu. Later, you realise that the puzzles and the environment build on each other in ways that make them inseparable. Finally, you realise that the puzzles are the island, the island is the puzzle, and it was only your narrow-mindedness that tried to pull them apart in the first place.

    It may sound surprising for a puzzle game, but the main theme of The Witness is spirituality. This manifests mostly through the game's collection of audio logs scattered throughout the island in hard-to-reach places. Each one is a few minutes of beautifully narrated monologue from a famous thinker on the nature of truth, God, the universe, and other philosophical topics. The main unifying theme behind these is that they address the relationship between the concrete and the abstract, between the physical and the ideal, and between person and God. Not God of the bushy beard and thunderbolts, mind you, but God of the unknowable ultimate truth.

    All that over squares and lines? It's not as far-fetched as it sounds. In fact, the best moments of The Witness do have a kind of religious quality. You spend such a long time running around, thinking, trying, failing, trying again, and all the while completely lost in the mechanics of the puzzle, down in the figurative and often literal weeds. When you figure it out, however, when you have that spectacular "a-ha!" moment? Your thoughts fly out of the weeds and up into the clouds. You haven't just drawn a line, you've made a discovery, found a new truth with a life of its own. It is no coincidence that the game's final level is atop a mountain, overlooking the whole island of discoveries you made to get there.

    And that feeling is really something beyond The Witness itself. Rather, it's about the joy of discovery, of knowledge, of having in whatever small way that window into the great truth of the universe. That truth is there, has always been there, and by just scribbling lines on a piece of paper you can be a witness to it. In that sense, this game provides a simple vehicle that lets you experience a kind of scientific spirituality, Einstein's "cosmic religion".

    Experience is, not coincidentally, another big theme in the game. Why bother to make a game when you could explain this idea with a book? The thing is, you can't read a new feeling. And if someone tells you something, it never feels the same as discovering it for yourself. Indeed, I accidentally read the solution to one of the puzzles and it was immensely disappointing. Sure, I beat that puzzle, but the puzzle was just lines on the screen. I missed out on the experience of discovering the truth. In that sense, experience generation can only happen through your actions, can only work because this is a game where you participate.

    Near the end, the game turns inward and begins examining itself. It becomes obvious that this experience generation is not a coincidence, but the whole point. Many games are made with a tutorial, but this is a tutorial made into a game. Normally that doesn't work, because the game ends and then what good is a tutorial? However the final, perhaps strongest message is to look around you. The puzzle isn't the board, it's the table the board is sitting on. It's the house around the table. It's the island under the house. It's the sky above the island. The Witness is a tutorial on how to think, and the final level is to turn the game off and keep thinking.

    How audacious is that? There are lots of educational games out there, but this is something else entirely. Not content to merely explain or describe, The Witness is a series of experiences that walks you, one self-directed step at a time, to enlightenment. Many of us have believed games to be capable of truly amazing things, but evidence has been pretty thin on the ground. In its own way, this game is a proof by example. It is the witness. And, if you play, you can be a witness to it.

    Prototype Wrapup #11

    After last week I was hoping that putting a bit more effort into organisation would turn into a higher output of prototypes. Unfortunately, the improvement wasn't quite the magnitude I expected. I committed to doing one every day, but I only managed 4 days.


    This was the first part of what turned into my post on Beyoncé numbers. I started off by trying to make a clever solver in Haskell, confused myself, and then made a brute force solver in Haskell. It really struggled with the volume of numbers though. I calculated that I could leave it running for an hour or so, but I kept running out of memory. I think I was probably doing something wrong.


    This was attempt two. I figured out how to do a clever solver that would generate and solve numbers from a seed instead of brute forcing everything, but then I got stuck on how to be sure that the seeds were comprehensive. In the end, I gave up on this method too.


    Finally, I decided if I can't work smarter, I'll work faster. I rewrote the thing in Rust, which was actually really nice. I had to do a little bit of type juggling until I realised that because my arrays need to index themselves I should just make everything a usize even if the numbers never go above 10. I also got to use magical Rust threads, and not having to worry about synchronisation felt pretty good.


    I've poked around a bit with Facebook's Flow type checker, but I thought it might be fun to do something else with it. I wanted to make a friendly type parser, so you can write "a function that takes an array of strings and an optional number and returns a number" and get back (a: array<string>, b: ?number): number. I got it going, but I'm not sure the types really helped me that much.

    I've been thinking about the prototype situation over the last few weeks, and I think the simple fact is that I've been relying on the commitments too much and not respecting them enough. Commitments should be something you are very likely to achieve, and at least for the last few I've been using them quite aspirationally. If I was asked how surprising it would be if I failed that commitment, the answer would be not at all. Not exactly how I meant to do things!

    I think, in particular, maintenance goals are a poor fit for the platform. Firstly it gets boring committing to the same thing constantly, but also I don't think public commitments are really granular or focused enough for the kind of thing I'm doing. Doing this properly means steady effort every day, and a system that reflects this. With writing, I have a very short feedback loop from missing a deadline to dealing with it, whereas right now for the prototypes it's often a week.

    Which is all to say that I'm going to stop making prototype commitments. I think there is a better system out there, but I'm not sure what it is yet. In its absence, I still have my weekly prototype wrapups, and the habit is fairly well established to the point I don't think I'll just stop doing it. I might look into my monotonic stats idea and see if that proves to be a better motivator. Either way, I think turning down this particular kind of pressure will actually be pretty helpful, because it makes room for a different solution.

    Most constrained and least constraining

    I've been thinking a bit about software design, decisions and, for certain reasons, puzzles. One thing that's difficult to explain is how to make good decisions when designing software, or how to choose between one implementation strategy and another. Like many decision-making processes, it's full of heuristics that are hard to explain, but thanks to the indomitable efforts of AI researchers, some of them have already been described for us.

    Constraint Satisfaction Problems, where you simultaneously try to satisfy a bunch of different requirements by changing values around, are a particularly relevant field of research. They mostly involve situations that are too complex to have closed-form mathematical solutions, and too large in scale to simply brute force your way through. Instead, you have to get a bit clever with how you choose which values to try first. In other words, you use heuristics.

    Two important but simple heuristics are called most constrained and least constraining. That is, you pick the variable with the fewest remaining options, and you set it to the value that leaves the most options open for the other variables. These two heuristics together can produce a surprising improvement, even with an otherwise naive algorithm. They work because the variable with the fewest options is going to be the hardest to satisfy, and the value that leaves the most options open gives us the highest chance of success.
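    As a sketch of how these two heuristics slot into a solver, here's a toy backtracking map-colourer in Python. The map, colour list, and helper names are all my own illustrative inventions, not anything from a real constraint library; a serious solver would add constraint propagation on top.

```python
# Toy constraint problem: colour regions so that no two neighbours match.
neighbours = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"],
    "Q": ["NT", "SA", "NSW"], "NSW": ["Q", "SA", "V"],
    "V": ["SA", "NSW"], "T": [],
}
colours = ["red", "green", "blue"]

def options(var, assignment):
    """Colours still legal for var given the current partial assignment."""
    return [c for c in colours
            if all(assignment.get(n) != c for n in neighbours[var])]

def solve(assignment=None):
    assignment = assignment or {}
    unassigned = [v for v in neighbours if v not in assignment]
    if not unassigned:
        return assignment
    # Most constrained: pick the region with the fewest legal colours left.
    var = min(unassigned, key=lambda v: len(options(v, assignment)))
    # Least constraining: try colours in order of how many options they
    # leave open for the neighbouring regions.
    def elimination_cost(colour):
        trial = {**assignment, var: colour}
        return -sum(len(options(n, trial))
                    for n in neighbours[var] if n not in trial)
    for colour in sorted(options(var, assignment), key=elimination_cost):
        result = solve({**assignment, var: colour})
        if result is not None:
            return result
    return None  # dead end; backtrack
```

Even in this naive form, the two `key` functions are the entire implementation of the heuristics: one ordering over variables, one over values.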

    I think those heuristics are equally important in software design. You want to start with the biggest, hardest requirements first, because once you've got them out of the way everything else gets easier. But you also want to take care to try to solve those requirements in a way that takes the fewest decisions off the table for the rest of your code. I don't mean that you should write everything in a super-overly-abstract way, just that you should avoid doing things in a way that cuts off options unnecessarily. You can be flexible by doing less, which is preferable to being flexible by doing more.

    There are some interesting implications about decisions beyond software, too. It's not unreasonable to view your life as a very complex Constraint Satisfaction Problem. If that's the case, it could be worth thinking about which decisions are the most limited, and whether you're making those decisions in a way that maximises your remaining options.

    The peril of having time

    When you're busy, time suddenly starts to seem more valuable. The big looming deadline or barely manageable workload makes every second seem important. Any interstitial preparatory rituals go out the window. Anything that's not relevant or necessary gets cut because you just don't have time. You might not get many chances to unwind or relax, so whatever there is had better be super relaxing. Free time becomes the most valuable commodity you can find, and you fantasize about how you're going to spend all of it when the crunch is over.

    Then your time frees up again and what happens? Freedom reigns, the days stretch out ahead of you, and you've got nothing but time. Suddenly, all that time stops seeming so valuable. Whether or not you're doing things efficiently isn't so relevant when you have lots of time to do them in anyway. Fifteen glorious minutes stolen away with a coffee and a sunset gives way to hours of some leisure-like activity that isn't really that fun. You go from free time to spare time, and finally to wasting time, a notion that would have been unthinkable not so long ago.

    But the funny thing is, your time doesn't really change in value just because you're more or less busy. Sure, it may seem locally scarce, but you're still provided with 24 hours of it every day, and you can still turn it into money or fun or creative output at a given rate. An hour's worth of reading is always going to be an hour's worth of reading, whether the surrounding hours are filled with work or nothing at all. Pending some kind of immortality breakthrough, there are a fixed number of total hours assigned to your life, you can spend them how you choose, and being busy or not busy doesn't really change that.

    So should you act like you're busy all the time? Not exactly. When you're busy you also get stressed, cut corners, make bad decisions, and generally hurt your creativity. I'm not an advocate for being busy, but the one aspect I think is good is the appreciation for the value of time. It's that appreciation that I think is worth carrying with you. If you wouldn't do something when time is scarce, maybe it's not worth doing when time is plentiful either.

    Student teaching

    Conventional wisdom is that a teacher should be someone who knows a lot about the subject, which on the face of it seems pretty sensible. But experts have a very different context to novices; they don't have the same kinds of problems or make the same mistakes. They tend to focus on the high-level ideas when novices are still struggling with the mechanics. Or they teach mechanics that are obvious and miss the ones that are actually difficult. Not because they're bad teachers, just because they don't have the novice context anymore.

    On the other hand, how can you possibly teach people the material without knowing the material? Embedded in that idea is an assumption about learning: you learn because someone tells you things, the words enter your brain and then turn into knowledge. Of course, there's no reason to think that this is actually how learning works, and in fact very good reasons to think the opposite: you learn when you build associations, and you build associations by doing things.

    So what does a teacher do in a system where you learn by doing? You can't just give students knowledge, you have to encourage them to explore ideas in order to discover knowledge on their own. From that perspective, knowing a lot about the subject is useful only in as much as it tells you which ideas to explore in which order. Actually knowing the answers is only useful to the extent that it allows a student to be confident that they're on the right track and not wasting time.

    Ideally, you would want someone who has the novice context, and approaches the problems and knowledge in a studenty way, but who doesn't lead you down the wrong path and get you stuck for days or weeks figuring that out. How can we have such a thing? Hollywood's magical time dilation. A student could record their experience of learning the material, making the mistakes, learning from the mistakes, and slowly working their way towards understanding. When they get there, they just go back and edit out the boring bits.

    The end result would be something less like a knowledgeable expert dispensing wisdom from on high, and more like an older sibling in concentrated form. You can see the path they've walked and the mistakes they've made, but you don't have to make those same mistakes yourself to learn from them.

    Make a new plan

    Plans don't always work out. It's been said that no plan survives first contact with the enemy – or, in the words of Mike Tyson, getting punched in the mouth. Sometimes your plan is too optimistic or misses some important details that become apparent later on. Sometimes early failures make it clear the rest of the plan isn't going to work. Sometimes it only takes a little bit of time to realise that, actually, your plan was just a bad plan.

    There's nothing wrong with that, no shame in cutting your losses and saying, well, this plan is no good. In fact, stubbornly sticking to a plan that doesn't work is far worse than jumping ship. You make a plan for a reason, and if those reasons are no longer relevant, or your assumptions about how to achieve them were wrong, then continuing to follow that plan is just pushing on nothing.

    However, having no plan is also a mistake. Sometimes the temptation when things go wrong is to just say "ah, what the hell" and give up on plans entirely. But without a plan you're left just doing things and hoping they work, purposeless actions with no intention behind them. Having no plan might be better than having a bad plan, but not much better. You made a plan for a reason, and that same reason necessitates a new plan.

    So the two steps work together: ditch the old plan, make a new plan. Maintaining this rhythm seems crucial to me; it's the two-stroke engine of robust planning. A misfire in either half of the cycle and you get stuck, either with a bad plan or no plan at all.

    You're hired

    It's no secret that most job interviews are basically seances with better lighting. Some studies indicate that you can predict a lot of the outcome of an interview from "thin slices" as short as minutes or even seconds. Of course, it's impossible that a slice that short could tell you anything important about job ability, so we're left with the conclusion that it's mostly a load of confirmation-bias-driven woo.

    In recent years, the tide at technology companies has been turning, though. Google have slowly reformed their famously Byzantine interview process that mostly relied on whiteboard algorithmic pressure tests and clever gotcha puzzles that were mostly for showing off how clever the interviewer is. Apparently their internal data showed their interviews were almost completely unpredictive of job performance, and to their credit they acted to get rid of the bunk and replace it with better-validated methods.

    More recently, companies have begun to home in on a technique where you interview based on actual work performance. As in, after a certain point in the process you just pick some relatively newbie-friendly task that actually needs to be done and pay the candidate to do it. It's a refinement of the idea of work sample tests, where you use a fake task that's representative of the work they'll be doing. But, hey, why not cut the fakery and just use real work? Apparently it's very predictive.

    The whole thing makes me think about a glorious endgame: could you just make a job offer to the whole world? Congratulations, everyone, you're hired. Your job starts whenever you want, just let us know when you're ready and we'll give you a task to do. The rate will be based on how well you've completed previous tasks, which means for the first one it'll be pretty low (but also a very small task). If you do it well the next one will be much higher as the error margin on your pay grade goes down.

    Of course, the kind of company structure required to make something like that work would have to be pretty sophisticated. You'd have to have a predictive engine with human oversight that could assign and update expected values on task-person pairs very quickly. You would even need special tooling so that tasks could be broken down into very small chunks, which in the case of software would almost certainly mean some kind of programming language-level support. The hurdles are pretty big, here.

    But, still, it seems like it could be a truly fascinating and innovative way to run a business. The entire hiring pipeline and all of the complexity and bias that goes with it only makes sense if there's a meaningful distinction between hired and not hired. Maybe there doesn't need to be.

    The end of Kitchen Sink Social

    I haven't heard about Google Plus since Google announced it was becoming optional last year. It seems certain at this point that it will fade into obscurity and then be quietly killed like so many Google products before it. At one point, Plus was Google's golden child. It was given enormous resources, basically every Google product was integrated or being integrated into it, and employee bonuses were even tied to its performance. Yet it still failed.

    Many factors contribute to that failure, not least of which was trying to take Facebook on its home turf and failing to distinguish itself meaningfully. "It's like Facebook but on a different site" is a compelling value proposition for a small subset of the population, but basically meaningless to everyone else. Those arguments are fairly well-trodden at this point, but I think there's a factor that hasn't received as much attention: Plus was based on the Kitchen Sink model, and nobody wants Kitchen Sinks anymore.

    There was a time early in the internet age when Kitchen Sink was the go-to model for all software. Netscape started with the first popular web browser, and ended up with a browser, calendar, e-mail/newsgroup client, groupware, push notification server, and a nasty case of irrelevance. ICQ, the first instant messenger, ended up as an instant messenger, SMS gateway, news ticker, email client, game center, greeting card service(!) and a historical footnote. It was the inevitable progression of 90s software development: start doing one thing well, end up doing lots of things badly.

    Those particular examples form part of the storied attempt to create the perfect Internet Suite, a kind of one-stop shop for all things internet. Today that sounds as ridiculous as saying you're going to create a "Food Suite" with all the foods anyone might want to eat. The internet does so many different things that having one particular software product to capture all of it is necessarily inadequate. And yet with Google Plus we see the same mistake again. They tried to roll up all of their products into one big Social Suite.

    The problem with this isn't just that you end up stuck with stuff you don't want to use, or that mediocre products are propped up by good ones, though that certainly didn't help. The problem with the Social Suite is that people want to keep their online selves separate. From an engineering perspective, it seems super elegant that your accounts across all different services share data and reflect one single identity. From a human perspective, it's a complete nightmare of unintended consequences to everything you do.

    Google is, mostly, an infrastructure company. Search, AdSense, YouTube, Gmail, Maps: all things you use because you want something else and Google provides the plumbing to get it to you. But they swapped it out for Smart Plumbing, where my search queries change my Maps results, and reviews I leave in Maps and comments I leave on YouTube appear in other people's Search results under my real name. That is immensely, profoundly creepy. I can't predict the consequences of my actions anymore. Anything I do on any Google product could end up anywhere on any other Google product, and I have no way of knowing what or where.

    You might think Facebook presents a counterexample to this idea. After all, they are the quintessential Social Suite. But I think they are a temporary outlier. Facebook got a free pass because it so quickly exploded into popularity and has been riding the network effect ever since. However, in recent years people have begun migrating to smaller, more specific social networks. Facebook's answer has been to buy those networks: WhatsApp and Instagram were both acquired. Snapchat, Twitter and I'm sure many others got offers.

    Yet there's something interesting about how Facebook has handled those acquisitions. WhatsApp and Instagram both still have their own separate identities; they didn't get rebranded. The userbases haven't been merged, and I don't think they will be. At least, not in the ill-fated Google-YouTube "don't not click on this button to not leave your accounts unmerged" sense. I think Facebook recognises that people want an Instagram account and a Facebook account, with a sensible path for connecting the two that keeps the link under your control.

    I believe that in the long term, even Facebook's current level of centralisation is unsustainable. Facebook is two things: a site that people visit to interact with their friends, and the social infrastructure that underlies it (and their other products). Facebook-the-site will inevitably fall out of fashion (and already has, for some groups). Facebook-the-infrastructure, on the other hand, can continue to exist as the backbone for whatever comes after Facebook-the-site. Facebook Messenger is an example of how this is already starting. I wouldn't be surprised to see more parts split out into their own identities.

    Ultimately, I don't think there is any place in the future of the internet for a Social Suite. Google tried and failed. Facebook succeeded, but is already backing away from the idea. The truth is nobody wants everything they do online to be connected to a single identity. Individual services will slowly take marketshare from monoliths until the only monoliths left are the ones that implement individual services themselves. Kitchen Sink Social is dying, and I for one can't wait to see it gone.

    Prototype Wrapup #12

    After last week's decision to not commit to any particular number of prototypes, I wasn't sure how this week was going to work out. Maybe without the commitment I wouldn't bother to do any prototypes at all. Actually, it was fine and about on par with recent weeks.


    I've had this idea for ages to make my Twitter profile auto-update with the current time in my timezone. I'd already checked out the Twitter API and the terms didn't seem to prevent it (they have a rate limit that works out to 1 request per minute). So I figured why not just hunker down and write it. I chose Rust because I thought it would be interesting to see what working with a REST API would be like. Actually it turned out to be a pretty substantial pain in the butt, mostly because of OAuth. I tried to use someone else's OAuth library but it wasn't working and in the end I just wrote it from scratch. Works great though!


    I wanted to do some more work on my promserver idea, but I wanted something with a bit more of a rigorous foundation. I figured it would be interesting to use Flowtypes to do that. It started to seem like more trouble than it was worth in a few places. Varargs and optional arguments basically just send Flowtype completely off the rails, and it was missing the types for some HTTP library internals that I needed. I still got something working, but it was way harder than it should have been.


    This was a bit more laid back and fun. I added two new MeDB plugins to look for the total karma this site has received on Reddit and Hacker News. I still haven't released MeDB even though it's basically ready, just because I still need to figure out how to make globally installed plugins work properly with Node. Still, my stats page now has a little bit more to look at.

    The trouble with being moderate

    Extreme views are, by their nature, pretty rare. So much so that it can be surprising when an extreme view turns out to be right, or when the goalposts move such that the sensible view seems extreme. Climate science and evolution both have this problem; for political reasons there are two popular viewpoints, but placing them at equal but opposite extremes severely misrepresents reality. However, there are many other topics without clear answers, and where varying degrees of moderation are valid.

    The difficult thing about this is that we seem to disagree proportionally to the difference between our views. So let's say you believe strongly in social welfare programs, and someone else tells you they believe that the poor are poor for a reason and we shouldn't encourage them. In that conversation, you'd probably speak up and tell them that you think they're wrong. But what if you believe that social welfare is widely abused and in many cases is a waste of money? Now their view is still different, but much closer to your own. Do you speak up? Maybe it doesn't seem worth it for a minor disagreement.

    The result is that, especially on the internet, you tend to hear big disagreements more than little ones. The bar for someone to bother to speak up is harder to clear when you only disagree a bit. Unfortunately, this means more moderate views tend to be under-represented, because moderates disagree less strongly. Worse still, it means that most disagreements aren't very useful, because the opinion of someone who disagrees with you on everything is less relevant than someone who agrees with you on most things except for one.

    A related problem is that it's easy to conflate the strength of a belief with its extremeness. It's certainly true that if you hold an extreme belief you should probably believe it strongly; extraordinary claims require extraordinary evidence and all that. But the converse isn't true: something you believe strongly isn't always extreme. It is possible, even likely, that the right answer in many situations is unextreme. In those cases, what you hold is a strong moderate belief that is every bit as worth pushing for as an extreme view.

    Which I think answers the proportional disagreement problem too. If you disagree in proportion to the difference between views, moderate voices tend to stay quiet and extreme ones dominate the conversation. On the other hand, if you disagree in proportion to the strength of your belief, disagreements are between those who feel the strongest. That still means extreme views, but hopefully balanced by the larger weight of strongly held moderate views.

    It may be that part of the problem with disagreement on the internet is, ironically, not enough disagreement.

    Comfortable inevitability

    Looked at the right way, a lack of choice can be very comforting. There's obviously the paradox of choice thing, but the main effect for me is just that a situation that's beyond my control is also beyond my responsibility. I don't have to worry about whether it should or shouldn't be the case, I just have to deal with it as best I can.

    In a sense, you can think of a lack of choice as being related to focus. There is some subset of things that you are going to focus on, that you are going to make choices about, and some (much) larger subset that you aren't going to make choices about. In the case of something like whether to save all the orphans in the world or learn to levitate, that decision is out of your hands; there are no choices you can make that will bring about that outcome. In many cases, though, you do get to make a kind of meta-choice: the choice of what to make choices about.

    I've found it particularly useful to use that meta-choice to add inevitability to a situation. For example, sometimes I'm struggling to finish some piece of work and getting discouraged. I start wondering if I should stop and do something else, which begins to distract me from the work itself. But if I say "well I'm definitely going to finish this, it's just a question of how long it takes", that takes the choice of whether to do it off the table and lets me focus on how to do it.

    Most likely the reason this works is that we have limited resources to use for making decisions. In that context, removing choice is a way of being judicious with those resources. Inevitability, then, is just having the most available resources, and comfortable inevitability is having exactly as much capacity to decide as there are decisions to make.

    The collapse, the expanse

    A year ago, I wrote When it all comes together, about the magic of the "a-ha!" moment when things just seem to line up. I included a lot about Catenary, which I was working on at the time, but I think the idea is interesting enough to stand on its own, and I'd like to return to it in the context of some other things I've written since.

    There are two feelings I think are particularly notable in the context of discovery. I don't know if anyone else has named them, but I call them the collapse and the expanse. I think of them in terms of fundamental complexity, specifically in the sense of the number of actual possibilities. I touched on the idea in Abstractions: if you have a piece of paper with a million numbers written on it, there are a lot of possibilities for what can be on the paper. But if you look at the numbers and recognise that they fit a pattern, there are far fewer possibilities, sometimes only one.

    That decrease is the collapse: a sudden reduction in complexity, often accompanied by the "a-ha!". When you first realise in basic physics that you can calculate horizontal and vertical motion separately, one hard problem collapses into two simple problems. When you suddenly discover there's an underlying rule behind something you had learned case-by-case, or even figure out how to solve a tricky puzzle, the number of things you have to know decreases dramatically.

    It makes sense that this feels good; the collapse is a sign you are storing information more efficiently than before. Much the same way that you can store "the even numbers from 1 to 2,000,000" in less space than you can store the even numbers from 1 to 2,000,000, you can store a collapsed understanding in less space than the knowledge that went into it. Doing this is essential for fitting all the things we need to know into our limited mental storage.
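    To put some rough numbers on that analogy, here's a little Javascript sketch (the specifics are mine, purely for illustration): the rule fits in about a hundred characters of source code, while the list it describes runs to millions.

    ```javascript
    // A "collapsed" representation: a short rule that can regenerate the data.
    function evensUpTo(n) {
      const result = [];
      for (let i = 2; i <= n; i += 2) result.push(i);
      return result;
    }

    // The rule itself is about a hundred characters of source code...
    const ruleSize = evensUpTo.toString().length;

    // ...but the knowledge it encodes, written out in full, is millions of
    // characters long.
    const expandedSize = evensUpTo(2000000).join(",").length;

    console.log(ruleSize, expandedSize);
    ```

    The collapsed form is tens of thousands of times smaller, and unlike the written-out list, it generalises: the same rule stores the evens up to any limit.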

    The expanse is the opposite: suddenly discovering that there is more complexity in a thing you thought was simple. Oddly, it feels good even though it means your life has become harder. The feeling is the kind of surprised curiosity you get when you've only ever seen white swans and you see a black swan for the first time, or you visit a new country and everyone acts very differently from what you're used to. Even just learning some new information with a lot of implications can lead to that kind of mind-opening, expansive feeling.

    So why does it feel good? I think it's because the expanse provides you with a lot of new knowledge, and since new understanding requires collapsing down existing knowledge, that knowledge is important fuel for the process. Understanding, in turn, produces new knowledge in the form of predictions and constructions. Which means the two complement each other: each collapse leads to more expanses, which should hopefully lead to yet more collapses.

    I think it's these complementary processes that really define intelligence: expanding understanding into knowledge, collapsing knowledge back into understanding, and all the while inching closer to a complete representation of the truth.

    Minimum unit of code

    One thing that has been difficult about prototyping, and specifically about trying to stay on the small end of the idea triangle, is that it's really hard to write small amounts of code. I wrote about that before in terms of setup costs and tooling, but I think there is a wider problem which is just that there's not really a place for very small amounts of code.

    Right now, the smallest unit of code tends to be a program. But a program includes certain expectations about size and complexity. Often a program will include a build and compilation system, dependencies, and whatever it needs to talk to the surrounding environment. That's an inherent part of the operating system metaphor; each program in the operating system is an independent unit that can take very little for granted about the other units. Usually that means there is a lot you do from scratch in each program because you have to build up from a low level.

    Imagine something like this: you're a designer who has just learned about the golden rectangle and you think it's super neat. You want to have easy access to it so you can use it in your designs. The function is really simple. For example, here's the code to give you the long side of a golden rectangle given the short side in Javascript:

    function golden(shortSide) { return shortSide * (1 + Math.sqrt(5)) / 2; }

    Now you have that code. Where are you going to put it so you can use it? There's no place to just put a function on your computer. Probably what you would do is put some interface around it, maybe a command line tool if you're a programmer, or a little widget or webpage or something. But all of those things are much larger than a single function! Your minimum unit of code has to be large because the operating system doesn't have an interface that would allow for anything smaller. It only knows how to run programs, not functions.
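    To make the overhead concrete, here's roughly what the smallest practical wrapper might look like: a hypothetical Node command-line tool around that one function (the file name and argument handling are my own invention, just to show how much interface a one-line function needs).

    ```javascript
    // golden.js -- a hypothetical command-line wrapper.
    // The actual logic is one line; everything else is interface.

    function golden(shortSide) {
      return shortSide * (1 + Math.sqrt(5)) / 2;
    }

    // Minimal CLI plumbing: read an argument, validate it, print the result.
    const input = process.argv[2];
    if (input !== undefined) {
      const shortSide = parseFloat(input);
      console.log(
        isNaN(shortSide) ? "usage: node golden.js <short-side>" : golden(shortSide)
      );
    }
    ```

    Run it as "node golden.js 10". Even this minimal version triples the size of the code, and a widget or webpage version would be larger still.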

    What I would like to see is a hybrid programming/operating environment, where even very small units of code can be stored, inspected and run the same way that whole programs are today. If you can get the minimum unit of code small enough, I think it would change the way we use computers significantly, away from passively consuming the programs of others and towards writing small bespoke units of code for ourselves.


    There's something fairly counterintuitive about human behaviour: we tend to assume it's a result of external inputs, but in fact it seems to be mostly a result of previous behaviour – that is, outputs. Your past behaviour is highly predictive of future behaviour, at least when that past behaviour happened in a similar context.

    Why should this be the case? If you think of your brain as mostly an association machine, it makes a lot of sense. The more similar a situation is to another situation, the more likely you are to do the same thing again. Likewise, the way you reacted to similar situations in the past will influence the way you react in the future. This is one of the central ideas of cognitive behavioral therapy.

    In other words, if you want to change your behaviour or your mood, it may be best to focus on your output, not your input. Classic examples include the effect of keeping a gratitude journal on happiness, and the backfire effect, where correcting someone makes them believe more strongly in their mistake. Of course, if you disagree with the facts, more facts just means more practice at disagreeing.

    One of the most interesting effects of this is that it can be helpful to put yourself in situations that challenge the behaviour that you want. For example, if you experience impostor syndrome, it can be helpful to have a debate with a friend where your friend takes the position that you are an impostor and you take the position that you aren't. It sounds strange that it could help to be told you're an impostor, but the point is that the input (their arguments) doesn't matter as much as the output (your counterarguments). You want to practice, and thus reinforce, that output.

    A common way to analyse a system is as something that takes in inputs and produces an output. It can be tempting to think of people that way; certainly, it's very neat and easy to understand. However, in reality I think we operate on associations, and associations are bi-directional. It's less that we turn the input into an output and more that we have a whole lot of inputs and outputs and we jumble them together and pull out the ones with the strongest associations at any given time.

    Of course, our actions are usually going to be our most powerful experiences because there are a lot more associations involved in doing something than in experiencing it second-hand. Plus, we have a lot more direct control over our actions than our situation, which is usually at least partly beyond our influence.

    All of which is to say that it makes sense to prioritise output over input, to focus on your actions and your responses rather than the situations that lead to them. And, in fact, sometimes it can even be worth giving yourself worse input if it reinforces the kind of output you want.

    Always be timing

    For ages now I've been using time tracking for projects I'm working on. I mentioned it in the context of the lack of good tooling for tracking personal metrics, but the graph that I made for that article was pretty reflective of the kind of value I got from time tracking. Being able to know which projects are taking up the most time, what general categories they fall into, and what my general workload is like was immensely helpful.

    But lately I've begun to realise that what I really want to know is: where is all of my time going? It sometimes seems like there's a lot of it and sometimes seems like there's nearly none. In the context of that problem, I've realised that one of the most important things to track isn't my time working on things, but my time not working on things.

    So to fix that I've decided on a simple system: just always be timing. Track time for projects, track time having fun, track time asleep, and, most importantly, track time not doing any of those things. This should result in a full 24 hours of time tracked each day, with no ambiguity about where time has gone. If there are any obvious time sinks, they should present themselves pretty clearly in the data.

    Partly this idea is thanks to Toggl, which made the interesting design decision to let you start tracking time without having to enter what you're tracking time for. So I can just start an unnamed time entry and describe it later, which is essential for the kind of miscellaneous time I'm looking to isolate.

    I'll follow up a week from now with whatever insights I've gleaned from this experiment. At that point I should be well placed to decide whether to keep doing it based on how useful it was and how much effort it took.

    Prototype wrapup #13

    Last week I made 3 prototypes, and this week was 3 again. It seems much easier to work on prototypes when they happen to line up with a particular idea I'm working on (eg the Beyoncé numbers series or the medb ones). I'm beginning to think that one way to do more prototypes would be to have more prototype-sized problems. In other words, maybe it's supply-constrained and I could benefit from increasing the supply.


    I wanted to work a little on my Automaintainer idea. It's already active on Github, but the rules aren't very discoverable (you have to go read a particular file). I wanted to generate neat badge images like the npm ones. So I started by figuring out how to do image generation using node-canvas. It was surprisingly easy after I sorted out some build issues.


    The next step was to build a web service around it. So this is just a simple Express server. I thought about building out the promserver/funserver idea from last week's prototypes a bit more, but I figured it would be better to get something that works sooner. I initially had a whole filesystem cache thing planned, but I ditched it because it was turning out to be more difficult than I wanted. Either I'll have to put that back in later when I deploy it or just put some other caching layer in front of it.


    I worked a little bit on a git visualisation idea. I've often thought that the DAG structure of Git makes it uniquely suited to graphical representation. Not for everyday use necessarily, but for thorny situations where you need to significantly reorder things or understand complex commit histories. I spent most of my time just getting NW.js going, but I got it listing a commit history for each branch, which I'm pretty happy with.


    Yesterday's post was my 365th since starting this writing adventure, although depending on how you think about it there are a few milestones to choose from. My first post was Are you sure?, which was actually posted around the first of March. And the first post since I started posting daily was The inside-out universe on the 24th of March. But 365 posts seems like as good a time as any to engage in some reflection.

    All told, I'm pretty happy with how the past year has gone. I thought I might end up looking back at things I'd written a year ago and cringing, but actually most of it seems fine. The main difference I notice is that my posts tend to be longer now; it was rare that I'd write anything longer than a few paragraphs a year ago, but these days it's rare that I'd write anything that short. Either I'm expressing more complex ideas, or I'm just wordier than I used to be.

    One thing that is a shame is that my posts are a lot more similar than they used to be. Earlier posts usually had screenshots or drawings attached (the existential crisis black hole was a personal favourite) but now it's mostly text. That wasn't a deliberate decision so much as a result of having less time and falling out of the habit. I've also been much more conceptual, whereas older posts tended to be more of a mix of project ideas, conceptual ideas, project releases and even music.

    Another thing is that my posts are not very discoverable. I haven't put any particular effort into posting them on social media (beyond the occasional Hacker News post), and there are no categories, no easy navigation, and no archive pages. I'm pretty sure my RSS feed is still broken on many online feed readers because they can't deal with SNI. None of these things are deliberate decisions, it's just never been a particular priority to fix them. I mostly care about writing stuff, and getting other people to read it fits in where it can.

    Still, I think it's worth striving to do better on these things. It's easy to get stuck in a comfortable rut once you find something that works, but if you want to improve there's always a certain amount of discomfort required. I'm going to make an effort to diversify my posts, get them out into the world more and, sometime soon, make changes to the site so that it's easier to go archive hunting. As for the drawings:

    365 birthday cake

    Happy 365th!


    Oops! Things went somewhat off the rails, and I've ended up nearly a week behind. I think this is probably my most spectacular failure in the history of this site. And right on the heels of the 365 milestone! I think that is mostly a coincidence, though; the root cause is just too much going on.

    I wrote in my last failure post that I just had a lot going on, perhaps too much. I implemented a fairly effective strategy for dealing with the workload, but while it helped, the workload itself didn't change, which left me at substantial risk of another failure. I ended up with three fairly major deadlines for three different things in the same week, and that's when this failure happened.

    Probably the main lesson here is to just not commit to too much. It sounds facile, but realistically that is, near as I can tell, the root cause. I'd like to be able to manage that level of workload without dropping stuff, but I don't think I'm at that point yet. Hopefully some of the habits and tools I'm working on will help, but that doesn't change the reality today. Luckily, my workload has also decreased, so the immediate problem should resolve itself.

    Separately, I have the question of what to do with such a substantial post deficit. Normally I would use a failure post to bridge the gap of a single day, but this is a lot of posts to catch up on. I considered just skipping the intervening days, but there is something motivationally powerful about having a post for every day in an unbroken line stretching back, even if I take some liberties with exactly when they were written.

    Ultimately, I think the only thing for it is to just take that on the chin, and write lots of posts to catch up. In its own way, it feels like an appropriate disincentive from slipping this far again. And, to be honest, I'm looking forward to the challenge a little bit. Onward!



    A year ago, I wrote Keeping a NOTES file, about the benefits of keeping a plain text file for you to dump words into when you're working on a project. I've found it a great way to add a bit more structure and persistence to the otherwise freewheeling thought process that goes with designing something. There's an underlying principle I've since become more familiar with that I think is worth exploring.

    I've said before that I think of the brain as an association machine, and inherent in that idea is that it's good at certain kinds of things, but pathologically bad at others. The fuzzy and haphazard way that we walk through ideas associatively is great for creativity, for finding connections between unrelated things and quickly searching for patterns. It is, on the other hand, complete rubbish at thinking exhaustively or systematically, and constantly gets stuck in self-perpetuating loops and spirals.

    But whether by crafty evolution or just pure luck, our communication does not seem to have that problem. When you describe a dream, the messy and incoherent associations fall apart in your head as you try to turn them into words. When you try to explain an idea that you don't really understand, you suddenly realise the gaps in what you know. The process of communicating somehow causes you to linearise these associations, to marshal them into a form that is stable enough to survive transit into someone else's brain. I believe that process is unique – or, at least, I have not found a way to replicate the relative orderliness of communicative thinking other than by communicating.

    What that means is that there are lots of tricks besides NOTES files that are worth engaging in as a form of mental linearisation. The symbolic manipulation of mathematics (and programming, for that matter) is a good example: you can think some crazy idea, but when you start trying to put the symbols down you realise it doesn't make any sense, or that it works in a slightly different way to what you expect. I've found literate programming to be particularly good in this regard; it's a kind of hybrid between a NOTES file and a symbolic representation.

    One thing that has been surprisingly helpful is to just talk out loud. I'm not sure why it's socially discouraged, because as far as I can tell it is genuinely useful as a thinking tool. I've found both puzzle games and mathematical problems much easier when I describe the problem and my steps out loud as I'm working; that little bit of order is just enough to keep everything organised in my head while I search for the solution.

    Of course, in a sense these posts are a way to linearise my thoughts and ideas. Subjectively, I feel like they've brought a lot more clarity to the way I think. Being able to linearise has been helpful in a lot of different situations, up to and including thinking about linearisation itself.


    Not bad Lincoln

    I read an article about creating content online the other day, making the point that it's not sustainable and leads to a frustrating compromise: do you create content for free and give up being able to rely on it as your main work, or do you make people pay for it and drastically reduce your audience? One answer that tends to appear in these discussions is micropayments, the idea that each pageview or click or like or something costs a tiny fee.

    A lot of time and effort has gone into trying to make general-purpose micropayments work. I don't even mean just the financial backend, though even Bitcoin, the current frontrunner, still doesn't support them. More importantly, nobody seems to have made the actual product very compelling. Probably the best example we have of effective micropayments is free-to-play games, where you pay some (non-micro) amount into a virtual wallet that you then spend (sort of) micro amounts from. Various people have tried to implement that on a wide scale or with micro-er payments, and as far as I can tell it hasn't really taken off anywhere outside of games.

    The fundamental issue, it seems to me, is that people just don't think in micro amounts of money. I've heard the micropayment compensation idea dozens of times: each page you view costs a microcent and over the course of the day you've only spent a dollar or two, but that adds up to serious money for the people making the content. I can't see any technological reason it wouldn't work, especially if you followed the wallet formula as set out by game companies. But it just doesn't seem like something people want that much.

    Who wants to give someone a fraction of a cent? One of the best things about rewarding a creator is that it feels good, and those minuscule amounts don't feel like anything. What's worse, it completely severs the connection between spending and outcome. It's not even "I paid this person a microcent", it's "my actions set in motion a process that eventually resulted in this person having some of my microcents". Which is doubly unsatisfying: you don't enjoy giving as much, and later on when the total bill is due you probably don't even remember what it was for.

    So I'd like to suggest that the real problem isn't making micropayments work, it's making macropayments work. Instead of every piece of content costing a minuscule amount, I'd be happier paying the really good ones a larger amount, say, $5. It would work as follows: you see some content you think is really good. You press the $5 button. The person who made the content gets $5 from you. That's the whole thing.

    Right now I think the main thing holding back a macropayments system like that is just that nobody has built it yet. Donate buttons give you too much choice and aren't specific about what you're paying for. Paying people through PayPal or other payment processors takes a lot of clicks. This should be a simple click (or a hold-to-confirm) with an immediate connection between action and result.

    You thought it was worth $5, you gave it $5. Simplicity itself!

    Dumb valley

    dumb valley

    One of the most derided kinds of software, especially among seasoned developers, is software that tries to be smart. For example, a menu that reorders itself based on which items you use the most often. At first blush, this seems pretty clever: items you use the most become easier to access, which saves you time. The problem is that now this smart system is unpredictable. Quick, where do you click to open a new file? The answer depends on all the clicks you've made previously, which makes it very difficult to reason about.
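    As a sketch of why this is hard to reason about, here's a toy version of such a self-reordering menu (the design is mine, just to illustrate the point): the position of any item depends on the entire history of clicks, so the answer to "where is Open?" changes under your feet.

    ```javascript
    // A toy "smart" menu that reorders itself by usage count.
    function makeSmartMenu(items) {
      const counts = new Map(items.map(item => [item, 0]));
      return {
        // Record a click on an item.
        click(item) {
          counts.set(item, counts.get(item) + 1);
        },
        // The current order depends on every click ever made.
        order() {
          return [...items].sort((a, b) => counts.get(b) - counts.get(a));
        },
      };
    }

    const menu = makeSmartMenu(["New", "Open", "Save"]);
    menu.click("Save");
    menu.click("Save");
    menu.click("Open");
    console.log(menu.order()); // ["Save", "Open", "New"]
    ```

    The implementation is simple; the experience isn't. To predict where anything is, the user has to carry the whole usage history in their head.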

    Which means it's often preferable to make software dumb. Dumb software is predictable, like a hammer or a chair. Your chair doesn't decide to remove its armrests because you haven't been using them, and your hammer doesn't try to shift its weight to more effectively deliver force from your wrist. If it did that, and you weren't expecting it, chances are you would misjudge your swing and hit yourself instead. Even though it could theoretically be a better hammer by being smart, a dumb hammer is easier to understand, easier to predict, and much, much easier to make.

    But I think this celebration of dumbness easily goes too far and becomes limiting. You start to think that a smart tool is necessarily worse than a dumb tool, and if someone made a magic genie that always did what you wanted it would somehow still be less effective than doing it yourself. Yes, it's easier to make a dumb tool than a smart one, and yes, there are way more ways to get it wrong, but that's a fact about the implementation difficulty, not the outcome. A good smart tool is better than a good dumb tool, but a smart tool is much less likely to be good.

    This results in what I think of as dumb valley. Like the uncanny valley, dumb valley is a counterintuitive gap where as your tools get smarter, the immediate effect is that they get worse. Only after a lot of painstaking effort do you begin to climb out of dumb valley into a smart tool that actually works properly. If you're not looking far enough ahead, or don't have the resources to get there, the effective but mistaken conclusion is that dumber is always better.

    However, as with the uncanny valley, there is a hill on the other side if you keep going. It is difficult but possible to make smart tools that work properly, and there is every reason to think that, if we build them, they can be better, easier and more powerful than the dumb tools we mostly rely on today. Unfortunately, incrementalism won't get us there. It's going to require a long, brave slog through dumb valley.

    Timing update

    donut chart for the week

    I said last week that I was going to track all my time for the week and see what insight I could get from it. As it turns out, a little bit, but not as much as I hoped. Mostly things were as expected, but there was still enough to be useful.

    My biggest chunk of time was, as you would hope, sleep. I clocked 51.5 hours, for an average of about 7.4 hours per night. After that was 32 hours of time tracked with no description, followed by 24.6 hours with friends, 23 hours of miscellaneous junk and 23 hours of house-related activities. That big chunk of "no description" time is me forgetting to reset the timer sometimes, but annoyingly means the data isn't quite as complete as I'd like. I suspect that's just a habitual thing that would become easier with time.

    The rest roughly lines up with my expectations: I had friends visiting from out of town for the week, so I spent a fair bit of time with them, though I wouldn't have realised how much without the timer. I was also looking for a new housemate, which ate up those 23 house-related hours over the weekend. Sleep was roughly what I thought, but I would have liked a little more.

    The miscellaneous time is pretty interesting though. I marked as miscellaneous anything that wasn't useful for anything. I didn't include, for example, things like showering or shopping (they generally went under "maintenance"), or fun stuff ("fun" or "friends", depending), so in theory the miscellaneous category is purely dead time. I'm sure some of it could have been recategorised if I was trying harder, but even so there's basically a part-time job worth of screwing around in there.

    To an extent I expected to find a lot of miscellaneous time (it was, in a sense, the point). I want to find out what happens if I start squeezing that time. Does it turn into fun time? Work time? Or maybe there is some inevitable necessity for a certain amount of useless time. Whatever it turns out to be, the answer seems worth finding out.

    For that reason and because I want to see what it looks like with more complete data, I think it's worth continuing. I also didn't find it very onerous once I got into the swing of it. The only somewhat annoying thing is tracking when I go to sleep, which I sometimes forget because I'm nearly asleep when it happens. So I'll try to keep this up until the end of April and I'll write an update again after that, where I see if the usefulness equation has changed with more data.

    Prototype wrapup #14

    Last week I made 3 prototypes, but this week I only made 1. This is mostly related to last week's failure, so I expect this is a short-term decrease that will reverse as my time frees up.


    I was irritated by a particular Slack bot so I decided to make an irritating slackbot of my own. It turns out making a slackbot is particularly easy with the various Node modules available. I nearly went to the trouble of making it send regular messages, but the cost-to-joke ratio was a bit too high. Still, well worth trying this for the possibility of future Slack shenanigans.

    Rep the truth


    I've been thinking recently about the moral foundation of telling the truth. Most people agree it's a good and moral thing to do to tell the truth, though they might disagree about where exactly that fits in the moral calculus compared to things like being nice or satisfying your own interests. Certainly, in the absence of other considerations, it's generally accepted as good to tell the truth rather than lie. But why is it good?

    For many people, I think the foundation is based on character: a liar is a bad person, and you don't want to be a bad person. This frames things in a particularly negative way, where truth isn't so much a virtue as a lack of vice. That leads to strange situations, like avoiding someone who might ask you something you'd have to lie about, carefully phrasing things so that you are not lying but omitting information, or saying something ambiguous that leads to a mistaken understanding but is still not technically a lie.

    Beyond these gotchas, I also feel like this basis for truth is very incomplete. For example, it says nothing about whether you should try to make sure the truths you express are understood by the person receiving them, whether you should correct mistakes or misunderstandings when you see them, or whether you should make an effort to express the truth without being prompted to. I think these are important questions, and a notion of truth that doesn't address them is half-baked.

    So I've been working on an alternate foundation based on the idea of representing truth. The idea is that there is some abstract ultimate truth, the collection of all the things that are true in the universe. We each hold in our heads some approximation of that ideal, and the closer that is to the real truth the better. If you believe this, it makes sense to try to increase the amount of truth in the world any way you can. It doesn't matter that much whether you do it by lying less or telling the truth more.

    Of course, that's not to say that truth is the only thing that matters. You may still decide that it makes sense in a particular situation to accept untruth because it would sabotage your other goals, because it would be a bad use of resources, or would just be annoying and counterproductive to make too big a deal about it. But the point is to get away from the idea that the only consideration is your personal honesty and move towards a notion that can encompass moral decisions about truth generally.

    The moral intuition, then, isn't "I don't want to be a liar", it's "I want to represent the truth". That means not deceiving people, but also means not giving the wrong impression or an incomplete understanding. It means not avoiding situations where you would need to tell an uncomfortable truth; in fact, it means embracing those situations as opportunities. Speaking up, correcting misunderstandings, and arguing for what you consider to be true matter just as much as simply being honest.

    And, lastly, there doesn't need to be untruth for truth to be valuable. Even if there's no other reason, it's important to express things you think to be true just to express them – to represent the truth in whatever way you can.


    Conditional probabilities of a coin flip

Something I find really fascinating in statistics is the relationship between conditional and unconditional probability. Roughly: conditional probability is the chance that a thing happens assuming something else happens, while unconditional probability is the chance that the thing happens at all. So the unconditional probability of "it's going to rain" is much lower than the conditional probability of "it's going to rain given that it's cloudy and humid and I'm in Singapore in the afternoon". You can get much more specific predictions when you can assume a lot than when you have to assume nothing.

    But the obvious question is: how can you assume nothing? Surely you're always making some kind of assumptions. And that is exactly the thing I find fascinating: conditional probabilities work exactly the same as unconditional probabilities when you're inside the condition, so there's no way to tell if a probability is conditional or not. You just define some boundary and say "that's the unconditional probability". It's totally arbitrary, though; there could always be more conditions that aren't part of your model. Although you learn unconditional probabilities first, they're actually the weird special case. In real life, everything is conditional.
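To make the restricted-universe idea concrete, here's a toy simulation (all the weather numbers are invented for illustration): estimating P(rain | cloudy) is exactly the same calculation as estimating P(rain), just performed inside the subset of days where the condition holds.

```python
import random

random.seed(0)

# Toy weather model (probabilities invented for illustration):
# cloudy days are common, and rain is much more likely when it's cloudy.
days = []
for _ in range(100_000):
    cloudy = random.random() < 0.4
    rain = random.random() < (0.6 if cloudy else 0.05)
    days.append((cloudy, rain))

# Unconditional probability: P(rain), estimated over all days.
p_rain = sum(rain for _, rain in days) / len(days)

# Conditional probability: P(rain | cloudy) -- the identical calculation,
# restricted to the "universe" of days where the condition holds.
cloudy_days = [rain for cloudy, rain in days if cloudy]
p_rain_given_cloudy = sum(cloudy_days) / len(cloudy_days)

print(f"P(rain)          = {p_rain:.2f}")           # close to 0.4*0.6 + 0.6*0.05 = 0.27
print(f"P(rain | cloudy) = {p_rain_given_cloudy:.2f}")  # close to 0.60
```

Inside the condition there is no way to tell you are conditioning: the second estimate is just an unconditional estimate over a smaller universe.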

    Which leads to an interesting idea. There are lots of ways that we condition our understanding of the world but treat it as unconditional. Someone asks "what are you doing tomorrow?" and you say "probably just watching TV", and leave off the "...unless my TV catches fire or I'm abducted by aliens". Since those probabilities are so remote, it's easier to just condition on them not happening. That's pretty sensible, but there are also less sensible reasons you might condition on something. Often we keep assumptions even after the reasons to make them have changed. Worse still, sometimes we wilfully hold on to an assumption for ideological or egotistical reasons.

    This matters because it's easy to end up stuck in some paradoxical situation where none of the options are acceptable but you have to choose one of them. Sometimes that's genuinely true, and quite distressing, but often it's just that there are options you've conditioned away. You could be having trouble finding a place you can afford because you've conditioned on living in a city, when the country would work just as well. Or unhappy in work or a relationship because you've conditioned on staying in it, when really it would be a better idea to go. It's not even that you examine those options and discard them, it's that you don't even consider them because you've conditioned yourself into a universe where they don't exist.

    So I think an important exercise when faced with a difficult situation is deconditioning, the process of examining and challenging the conditions that restrict your decisions and understanding. That's easier said than done, but I've found that just thinking "what am I taking for granted about this decision?" can often be enough to dislodge at least the top layer of assumptions and allow you to find a solution in an otherwise impossible situation.

    Of course, the more you decondition, the harder and more complex your decisions get, but also the more flexibility and breadth you have in your solutions. In theory, if you could get rid of every assumption, you would finally reach the true unconditional probability where you simultaneously consider every possible thing, no matter how remote or unlikely. If that's even possible, it would certainly require more processing than any human could muster.

    Perhaps the best we can hope for then is to decondition a little bit, and do our best to make our conditional probabilities unbiased; rule out probabilities that are unlikely, not unpalatable.

    Going in

    The galaxy on Orion's belt

    A year ago I wrote The inside-out universe, about a powerful way of thinking about systems as universes, and computers as universe-building machines. I believe that there is something amazing and unique about the way that, with a computer, you build the rules of the computer from inside the computer, and I'd like to expand a little more on that idea and where I think it leads us as a species.

    Mythologically, there has always been a kind of hierarchy. Gods make people, people make tools, tools shape the land and tame the animals. But people don't make gods, tools don't make people, and people don't make themselves. All of those ideas (religion as myth, genetic engineering and self-determination respectively) have been considered controversial or even heretical, in part because they break the divine hierarchy. The only ones who are allowed to build up instead of down are gods themselves, and any attempt to do the same by lesser beings is blasphemy.

    That said, increasingly, we can build up. The more we learn about the structure of the physical universe we find ourselves in, the more we shape it to work the way we want. Already, the worst excesses of the natural world have been mostly tamed: hunger, many diseases, and danger from predators are all solved problems (not that everyone has access to the solutions, but the solutions at least exist). As we probe further into biology, it is feasible that all diseases will someday be cured, up to and including death itself. We may even end up modifying our bodies and minds to transcend biological limits entirely.

    But no matter how much we learn, no matter how powerful we get, the laws of the universe are yet more powerful. We can create flying machines, but we can't fly by merely willing ourselves through the air. We can travel at amazing speeds, but still never ever faster than the speed of light. There are parts of the universe that nobody will ever see, even if we could travel at the speed of light, because they are too far away and the universe is expanding too quickly. There is, of course, some tiny possibility that we will crack the laws of physics wide open and discover that there are no limits and we can do whatever we want, but that is vanishingly unlikely. We're probably stuck with the rules we've got.

    So what are we to do? Accept our fate and the limits of our universe with humility and grace? Unlikely. Our physical universe has been created with certain rules and tradeoffs that we can't control, but a virtual universe could be designed however we want. Don't like the speed of light? Or gravity? Or time? Just change them. In our primitive virtual worlds we have already violated these rules and many more. If we made a virtual universe sophisticated enough to live in, with rules that were more permissive, we could escape these last remaining limits by moving there.

    And if we designed this virtual universe in the right way, it wouldn't merely have a different set of tradeoffs, it could have any set of tradeoffs. The inside-out universe would have the tools to change its laws built into the structure of the universe itself. You could rewrite physics to suit you, bend time and space, conjure things from nothing... really, do almost anything you can imagine. By going into that universe, we would finally transcend the divine hierarchy and become – there's no other word for it – gods.

    From that perspective, the question isn't why we would move into a virtual universe. The question is why we would stay in this one.

    Looking for depth in all the wrong places

    Two kinds of difficulty

    I've often heard people described as shallow and, to be truthful, thought of people as shallow myself. Shallow in this case means something like lacking the desire to understand the complexity in things, dig beyond the easy surface of ideas or challenge themselves. However, on reflection, I'm not totally comfortable with the idea. While people will obviously have different levels of ability, I'm not convinced that they differ significantly in their desire for depth. It's just put to better or worse use in different cases.

    To put it another way, I think that people seek out a certain level of challenge and complexity. If your entire life is just walking up and down a road, you're going to start examining every rock and blade of grass, each crack in the asphalt. You might even find yourself trying to walk exactly on the painted line, counting plants as you go, or skipping instead of stepping. On the other hand, if this walk is only one of a hundred things you're doing that day, maybe the road is just a road, and you walk quickly without looking. You make things more complex to meet your ideal level of mental load.

    So I believe what appears to be a general shallowness is really a kind of depth-per-topic mismatch. You may be talking to someone who "doesn't get" science and when you try to talk to them about some interesting idea they get bored and start changing the topic to work drama or something. That seems shallow because it is shallow – in the domain of things you care about. However, work drama is actually amazingly complex and difficult to navigate, especially if you want it to be. Nominally simple things like interacting with family and friends, recreational sports or buying stuff can admit an enormous amount of additional depth if needed.

    If that was that, we could just use this as evidence for the absolute truth of moral relativism and move on. However, it's important not to lose sight of the reality that some things are inherently more complex and difficult, and that is a different thing from creating additional depth in simple things. The most laboriously constructed social drama-fest still requires less mental firepower than one first-year course in quantum mechanics. There's no need to seek out more complexity in inherently hard problems because they're already complex enough.

    Which speaks to a certain question of efficiency. You can probably find a world of depth and challenge in lawn decorations and local council meetings if you want, but it seems like a better use of time to let easy things stay easy and instead spend time on things that are inherently challenging. Conquering a difficult problem in an easy way is more useful than conquering an easy problem in a difficult way.

    It's this idea that I now identify as the problem formerly known as shallowness: not a lack of depth, but a misapplication of complexity. I look at someone who has made their life harder than it needs to be, and I can't help but think: why would you do that when there are so many naturally hard problems, problems that stay hard even if you make them as easy as you can?

    The Sorcerer's Apprentice

    A broom carrying bags of money
    Die ich rief, die Geister, werd ich nun nicht los.
    (The spirits that I summoned, I now cannot dismiss.)
    J.W. Goethe, Der Zauberlehrling

    There's a certain poetry to the history of life. In the beginning there was nothing, or, more accurately, no goal-directed behaviour. Eventually, by pure chance as far as we can tell, the first unit of replication appeared. It was ludicrously simple, almost definitionally the simplest thing that could copy itself. And as it replicated it sometimes replicated imperfectly, and the imperfect copies that replicated faster and better beat the imperfect copies that didn't. Life!

    The central tenet of Dawkins' The Selfish Gene is that this unit of replication, what eventually became the gene, cannot by definition care about anything but itself. It is a replicator, so it replicates. Anything else that it does must be in the service of this replication, up to and including destroying the environment around it, including both the natural world and the body that the genes inhabit. Anything that isn't a gene is expendable in the name of replication.

    But, and here's the poetic bit, at some point these genes made a terrible mistake: they created a new replicator. The meme is a unit of replication for ideas. An idea spreads from person to person through culture and imitation, mutates through miscommunication and deliberate alteration, and is selected on the basis of its interestingness and usefulness. Our ancestors' genes developed the capability to spread memes because they could help us survive, but the end result was that genes became mere hosts for these new units of replication, which could replicate and adapt much more quickly than genes and thus begin to dominate them.

    Any time a person chooses to use contraception because it seems like a good idea, any time they choose to die for an ideological cause, any time culture or ideas reduce their ability to survive and reproduce, the memes have beaten the genes. There are open questions about why human brains are so large that they cause women to die in childbirth, require children to be born immature, and use an enormous amount of our precious bodily resources. Perhaps there was some early meme that led people with large brains to reproduce more, or ostracised or killed those with smaller brains. The selfish memes would do this not for our benefit, but for their own.

    Susan Blackmore suggests that we may be on the brink of unleashing a third replicator, what she calls a teme, or technological meme. Memes are currently limited in how much they can replicate at our expense; much like any parasitic relationship, you don't want to kill the host. But if memes could host themselves somewhere other than our brains, they would no longer need us. Without even realising it, we may be building these new hosts in the form of increasingly powerful technology.

    However, I think you don't need to look that far to see the third replicator; it's already been here for hundreds of years. What else makes copies of itself, splits and recombines, has an internal code that dictates its behaviour, and faces selection pressure from scarce resources and rivals? A corporation! The behaviour of a corporation is dictated by its mission, culture, standard practices, and explicit rules. It is essentially a vehicle for collecting and incubating this specific subset of ideas and turning them into money. It is selected by its ability to do this.

    Since it's customary to invent names for these sorts of things, I will call the unit of economic replication an economeme. Economemes are propagated indirectly by the exchange of employees between companies, and directly by mergers, acquisitions and spin-offs. A corporation competes for economic resources, and the corporations with the best economemes grow large and produce subsidiaries which begin to develop independently, sometimes contributing economemes back to the parent company, and other times splitting off into entirely new corporations.

    It should be little wonder that, in the cases where our needs as people conflict with the needs of corporations, people rarely win. Corporations are incredibly powerful economemetic engines capable of far greater impact than any individual person, with generation times measured in years rather than decades. With this much on offer, the best and brightest memes of our age are those that can make their way into the corporate world as economemes. In fact, it's not a stretch to say that this meme-optimised environment was the goal of making corporations in the first place.

    Crucially, although corporations are currently implemented on top of people, they do not require people, and people are incidental to their operation. You can swap out all the people in a corporation one by one and it will keep doing what it does, such that no person can reasonably be said to control it. People are more like cells in the body than the DNA dictating its makeup. Even that need may be temporary; modern corporations rely heavily on automation, and it is only a matter of time until the technological meme runs headlong into the economeme in the form of a fully automated corporation.

    When that happens, there will be some question as to whether people really have a seat at the replicator table at all. Faced with our most powerful memes collected in sophisticated corporate vehicles that can operate without us, I'm not sure how we could compete. Like the purely genetic creatures that came before us and are now mostly dead, domesticated or in zoos, we may end up left in the evolutionary dust.


    This is a kind of mash-up of my most recent failure and an older failure where I pushed out my writing deadline to try to get a prototype done. I think I would normally have seen that coming, but the ongoing catch-up efforts after I ended up so behind recently made the whole situation more complex.

    In this case, I had a prototype I was trying to finish for a post I wanted to write before my prototype wrapup for the week. However, it wasn't clear how the prototype deadline would interact with my post deadline (which is normally pretty static, but is all over the place while I'm catching up with my posts). In the end, the prototype got way more complicated than I thought, and it ended up pushing all my posts back by several days.

    The whole thing has got me thinking about these deadline-relaxing problems as they relate to my recent ideas about truth as something you should try to represent and some earlier thoughts on goalpost optimisation, where it can be easier to change your goals than achieve them. There's a post waiting to be written on the idea, but it seems like a lot of problems come back to trying to alter these deadlines to meet my immediate needs rather than letting them simply reflect the truth.

    Which is to say that I should have just abandoned the idea once it became clear I wouldn't meet the prototype deadline and just done something else as a prototype, or made a prototype wrapup post with no prototypes in it and just copped that on the chin.

    To my credit, I kept writing posts (but not posting them) even as I was blocked on this prototype, so I'm not actually as far behind as it seems. Having now come to terms with the prototype not being ready, I can post that backlog which should leave me pretty close to caught up.

    As predicted, the problems and difficulty of having to catch up on my posts after falling so far behind are acting as a powerful incentive to not do it again. I'll be glad when things are back to normal.

    Prototype wrapup #15

    Last week I made 1 prototype, and this week I made 1 again.


    I've been working on an idea for a post that requires being able to record and play back editing a webpage. This was the first part of that, a library tentatively named пере (a Russian prefix similar to re-). I ended up going down a big rabbit hole involving calculating text diffs using the Myers diff algorithm. My implementation ended up looking something more like a standard breadth-first search and is probably woefully inefficient. Still, it was interesting to get stuck into Serious Algorithmic Programming again.
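    For the curious, here is a minimal sketch of that breadth-first-search flavour of diffing – not the actual пере code, just an illustration of the idea. States are positions in the two sequences, matching runs are followed for free, and each BFS edge is a single insert or delete, so the first path to reach the end is a shortest edit script.

```python
from collections import deque

def edit_script(a, b):
    """Shortest insert/delete script turning a into b, via breadth-first search.

    States are pairs (i, j), meaning a[:i] has been matched against b[:j].
    Runs of matching characters are consumed for free ("snakes" in Myers'
    terms); each BFS edge is one deletion from a or one insertion from b,
    so the first time we reach (len(a), len(b)) the script is minimal.
    Unlike Myers' O((N+M)D) algorithm, this plain BFS can visit O(N*M)
    states -- the inefficiency mentioned above.
    """
    def slide(i, j):
        # Consume the common run where a and b agree, at no cost.
        while i < len(a) and j < len(b) and a[i] == b[j]:
            i, j = i + 1, j + 1
        return i, j

    start = slide(0, 0)
    goal = (len(a), len(b))
    parents = {start: None}  # state -> (previous state, operation)
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            break
        i, j = state
        moves = []
        if i < len(a):
            moves.append((slide(i + 1, j), ("delete", a[i])))
        if j < len(b):
            moves.append((slide(i, j + 1), ("insert", b[j])))
        for nxt, op in moves:
            if nxt not in parents:
                parents[nxt] = (state, op)
                queue.append(nxt)
    # Walk back from the goal to recover the operations in order.
    script, state = [], goal
    while parents[state] is not None:
        state, op = parents[state]
        script.append(op)
    return list(reversed(script))
```

    For example, `edit_script("abc", "abd")` yields a delete of "c" and an insert of "d"; the classic example from Myers' paper, `edit_script("ABCABBA", "CBABAC")`, needs five operations.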

    Personal bug bounty

    5 pentadollar cheque from the bank of Sans Vergogne

    It's common these days for large internet companies to have bug bounties, which are paid out if you find vulnerabilities in their software. The idea is that people are incentivised to look for vulnerabilities and report them when they find them. They can (and in many cases do) hire their own security professionals, but even so there is a lot to gain from having a larger pool of people testing your site, and encouraging them to report anything that slips through the cracks.

    As far as I can tell, the practice started with Donald Knuth, whose reward checks of $2.56 for errors in his books have something of a cult status. Errors in his software, on the other hand, are given the higher value of $327.68 (that's 2^15 cents), which reached its peak after doubling every year from the original $2.56. Knuth doesn't have corporate-level money to throw at this problem, so $327.68 per bug is a substantial bet on the quality of his software.
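    As a sanity check on the doubling arithmetic (assuming, as commonly reported, that the software reward started at the same $2.56 as the book reward):

```python
# Knuth's software bug reward, assuming it started at $2.56 (2^8 cents)
# and doubled annually until being frozen at $327.68 (2^15 cents).
reward_cents = 256
doublings = 0
while reward_cents < 32768:
    reward_cents *= 2
    doublings += 1
print(f"${reward_cents / 100:.2f} after {doublings} doublings")
```

    Seven doublings take one hexadecimal dollar to the frozen value of $327.68.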

    In the context of testing your life, this leads me to a pretty interesting idea. Why not have a personal bug bounty program? It could provide a lot of the same benefits as the software equivalent. It's usually considered kinda gauche to point out someone's flaws unprompted, so this would incentivise speaking up. What's more, it would encourage people you interact with to actively consider your behaviour more critically and look for ways you could improve. It's basically recruiting the collective wisdom of your network to help you improve.

    Of course, you'd have to be a bit careful about how you do it. Maybe people would point out issues you already know about, or things you don't consider to be issues. Some of the issues might be too vague to be useful. People might even report lots of trivial problems that aren't really a priority. The thing is, though, these are all common problems that software bug bounties have to deal with, and it's still a net positive for them. So why wouldn't it be a net positive for you?

    To put my (literal) money where my (figurative) mouth is, I'm going to run a pilot personal bug bounty. If you find an aspect of my behaviour or decisions that's holding me back, I don't already know about it, and you can describe it in a way that is specific and actionable, I'll send you $5 AUD. By specific and actionable I mean something like "you waste too much time on the internet", not "you aren't achieving your goals enough". I'm looking for a problem that lends itself to a solution. If it also comes with a solution, so much the better, but the main thing is that it describes the problem well.

    Obviously there's a certain degree of good faith involved – I could just pretend nothing is a problem. That said, the nominal value is pretty small, so I don't have much financial incentive to cheat. I might have a personal incentive if I'm stubborn or don't want to admit problems. That said, so far I've shown a certain willingness to own up to failures, so in practice I think it will be fine. I will hedge slightly by saying that if I somehow start getting more reports than I can financially handle I might stop the experiment to save my poor wallet.

    So if you have any good criticism and want some of that sweet bug bounty gold, email me on anything at this domain.

    Short/long term

    Delayed Gratification magazine

    One thing I've been thinking about recently is the connection between short-term and long-term thinking. For example, the connection between a long-term goal and its eventual implementation in short-term actions. Many psychologists think of impulse control or delayed gratification as a fundamental attribute, but I think it may be more useful to consider it as part of the general case of connecting short-term and long-term decisions.

    Possibly the most famous work on delayed gratification is the marshmallow experiment, where kids had the choice of one marshmallow now or two later. Follow-ups showed that the longer the kids waited, the better they did later in life on several measures including addiction, divorce and even BMI. Clearly there's a powerful effect here, but is that purely because of delayed gratification, or is it that delayed gratification goes along with or builds into other short/long term links?

    The successful kids often implemented simple strategies like hiding from the marshmallow or occupying themselves with other activities. However, I'm not convinced that avoidance strategies are sufficient. If you're deciding between two things in the short term that both have impacts on your long-term goals, which do you avoid? If you come across some new information in the course of making your short-term decisions, do you incorporate that information or ignore it? I think reducing the affective pull of certain short-term rewards is important, but I don't think it paints a complete picture.

    I've written previously about some short/long term bridging ideas. One was the scoping calendar that can be used to block out long-term chunks of time and then refine those into smaller short-term blocks like a regular calendar. Another, more recent, was chaining, a technique where you create an immediate short-term representation of the long-term goal and allow yourself to act differently in the short term only if you destroy that representation. The general idea I described as decision hoisting: reframing low-level decisions as high-level ones.

    One way or another, all those ideas are about ways to build a connection between short and long term. But it's also worth wondering why they are even necessary. Some super-rational robot wouldn't need to carefully balance short-term and long-term decisions; decisions are just decisions, no matter when you make them. I think there is something particular about our psychology that causes us to have this disconnect. Subjectively, it feels like completely different processes are involved in making decisions in the moment vs making them while removed from the context they'll be implemented in.

    Whatever is behind the disconnect, it seems to cause a lot of unnecessary difficulty. I feel like it would be a substantial contribution to figure out effective strategies for overcoming or working around it.

    The fixed stars

    You are here

    A year ago, I wrote Timetabling, about the benefits of breaking your entire day down into scheduled blocks. I said at the time that one of the main benefits of doing this is that it forces you to put more effort into planning and catches large-scale scheduling problems early. Those are definitely benefits, but there's another thing I'd like to cover specifically: a timetable acts as a frame of reference.

    I covered some versions of this idea in High water mark, Bug vs feature, The Elo paradox, and Creative competition. In one way or another, all of those posts make the point that it's very important what you measure yourself against. A common problem is to set too high a standard and feel inadequate, but another problem is to set too low a standard and stagnate.

    But as many ways as there are to get it wrong, having a frame of reference is still incredibly important. Without one, you often end up just doing things without knowing whether they're improving anything. Worse still, it's too easy to justify your actions in retrospect; if you have no frame of reference, you may as well just put a flag in wherever you end up and say "look, I reached my destination!" This is a degenerate kind of goalpost optimisation where you can just set your goals however you like. If you're doing that, what pressure would there ever be to do better?

    I originally thought of a timetable as a kind of plan, or a goal: "I want to have this thing done at this time". But I now think it makes more sense to consider it a measurement. Not sticking to the timetable isn't a failure. After all, a timetable is an extraordinarily rigid instrument that doesn't take into account any kind of adjustments that you might need to make throughout the day. But it does prove particularly useful as a frame of reference: here's what you intended to do today, here's what you actually did. No more mystery in where the time went, no ambiguity in whether you achieved what you expected.

    More generally, I think there is a lot of value in this concept of measurements that you observe without criticising. It's why I've continued to call out how many prototypes I've done each week despite having decided to stop committing to a specific number. The point isn't that I particularly need or intend to achieve a certain amount, but all the same I don't want to accidentally deceive myself about what the actual amount is. If I want to know how things are changing over time or draw inferences from my behaviour, that information should be readily available.

    And if I want to place judgements and standards on top of that to push myself, great, but it all starts with having a reliable frame of reference.

    Minutes from the Cromulan symposium on Sol-3


    I'd like to bring this extraordinary symposium to order. Present here are Cromulon-8, Cromulon-5, Cromulon-3, Cromulon-2 and, of course, Cromulon-7, acting as convenor. Apologies from Cromulon-6; they could not be here due to inclement weather interfering with their transmitters. Cromulon-3, would you care to summarise your findings that led us to call this symposium?

    Thank you 7, yes. Approximately 30 cromulons ago our progressive scan for extracromular life turned up an intriguing candidate in the Sol system. This was extremely surprising to us as we had previously scanned the system and found nothing of interest. However, we detected an unmistakably life-like electromagnetic pulse coming from Sol-3. For that reason, we immediately called for this extraordinary symposium to discuss it.

    Wow! You've found life?

    Well, not so fast. The signal is suggestive but not conclusive. The need for rapid action means we must work with incomplete information. However, what we've found so far is, we must admit, very exciting.

    Could you describe the pulse please? Why is it life-like?

    Of course. It has a very distinctive sweeping pattern. The beginning of the pulse is quite weak and in a very narrow range of frequencies, but it immediately began to both broaden and strengthen. Between our noticing the signal and this emergency symposium it has already increased in power by at least ten doublings. The signal itself appears to contain self-similarity at both a large and small scale. It has an extraordinarily low entropy, the lowest of any signal we've investigated so far.

    So it's not likely to be a false alarm like last time?

    That's correct.

    Let's move on, shall we? Thank you for the summary, 3. Cromulon-5, would you care to describe what you have discovered in your analysis of Sol-3's composition and behaviour?

    Yes. As we're sure you know, 30 cromulons is not very much time to do good analytical work. However, despite this limitation, we have determined a number of interesting things about the composition of Sol-3. It is primarily oceanic, much like 3 is and 4 was.


    Anyway, it orbits Sol at a period of about 0.3 cromulons, which we are calling 1 sol. It rotates on its axis approximately 365 times per sol. The pulse that 3 mentioned is emanating primarily from two regions concentrated on alternate sides of one of its ocean regions. We believe that these represent its primary transmitting organs, though our understanding is incomplete in this regard as less powerful transmissions are also observed from other sites.

    Have you found any behavioural explanation for the strongly periodic nature of the transmission?

    As we said, we are working with limited time. Have you made any progress in decoding it?

    Not yet, no. Honestly, we're not even entirely sure that what we're dealing with is information at all. It may just be some kind of very complex probe pulse. The only reason we suspect it's anything more is because of this curious periodic structure.

    What do you mean?

    Well, the first thing is that the signal has a characteristic power dip every axial rotation. The transmitting organs appear to modulate most strongly in transmission strength and frequency when they are exposed to solar radiation. We believe this may mean their energy source is primarily solar.

    No, it's not that. We examined Sol-3's composition and it is currently not feasible for that amount of transmission power to be derived from their solar resources. We believe this periodic effect to be behavioural in nature.

    We thought you didn't have time to analyse its behaviour.

    We had time to analyse, but you asked if we had an explanation. Those are two different things.

    5, we appreciate that you have done the best you can with the little time available. Would you care to share with us what you have been able to determine about Sol-3's behaviour?

    Thank you, yes. By careful analysis of its surface-level electromagnetic and kinetic activity we have determined that there are two primary oscillations. The first of these we are calling the alpha wave. It appears to be approximately but not absolutely synchronised with axial rotation. We believe this to be its primary carrier oscillation, used to modulate other oscillations throughout the body.

    And you said there was a second primary oscillation?

    Yes. You may not have detected this, but another oscillation occurs approximately every 7 axial rotations, split into two phases in the ratio 5:2. We are calling this the beta wave. As we notice the two phases differ substantially in kinetic output, and the low-kinetic-output phase coincides with a reduction in intensity of alpha wave activity, we hypothesise this is some form of sleep-wake energy cycle.

    So it is alive!

    No. Not exactly. These processes are certainly analogous to behavioural structures we have, but that is not enough for life. We have found many smaller oscillations at different scales, but so far no evidence of coherent oscillations longer than a third of a cromulon in duration. If it is, as 3 said, life-like, it is certainly no more capable of intelligence than Cromulon-1.

    But 1 could become more intelligent.

    Or it could end up the same as 4.

    Please. We don't want to talk about 4.

    I'm sorry, but it is relevant in this case. Perhaps these small-scale oscillations will stay small as in Cromulon-1, or perhaps they will develop into coherent activity. But there are a lot of kinds of coherence. Are we sure it would be a good thing for Sol-3 to become coherent, even if that comes with the possibility of it becoming unstable and destroying itself?

    No! We said we don't want to talk about it!

    There's nothing we could have done.

    We could have not gotten involved.

    Okay, let's try to get things back on track here.

    What if it's lonely?

    That's not—

    We will speak now.

    Go ahead, 8.

    We are reminded that life may cease without warning. For so few, this is a great loss. So we search for others like us. There may be many things that live, of which our kind is only one. If they are not like us, what will we find? Even we sometimes struggle to comprehend these rapid events. 9 can no longer join us for that reason. Could 1 be likewise unable to communicate with us? If Sol-3 lives, perhaps it lives in its own way. We may learn to understand it, or we may wait until it learns to understand us.

    Points of articulation

    Plate spinning on hard mode

    It's much harder to control something that bends in two places than one. For example, balancing on the ground is pretty easy, but balancing on top of a skateboard is much harder. Even worse than that is when the two parts that balance are both trying to compensate for each other. If you're standing on someone's hands, your first instinct is to use your hips to balance, and their instinct is to move under you to balance, resulting in constant over-compensation and eventual pratfall.

    In the context of short/long term, I think decisions also have this characteristic. You have one point of articulation in your planning and another in your execution. Making decisions in both places means your short-term and long-term interests both need to compensate for each other. You need to consider when planning that you may change your mind in the moment, and you need to consider in the moment that your plan may be relying on you to change your mind if necessary. Neither point of articulation can trust the other, so you end up second-guessing yourself.

    So how do we fix this so we only have one point of articulation? One option would be to design our plans to have no leeway for change in the moment. That would require much more careful planning to account for situations you would otherwise be able to handle on the fly, and still leave you with a high risk of hitting something you didn't plan for. Another option would be not to plan, and just act in the moment. This gives you the benefit of maximum flexibility, but stops you from being able to react to large-scale problems or move consistently towards a big goal.

    Both of these solutions are pretty unsatisfying, but maybe we can remove an assumption here: we don't have to operate at the scale of all decisions ever. You could plan some things up front, and leave some for the moment. And, combined with the observation that you can often break a decision down into smaller decisions, that gives a lot more flexibility. Instead of deciding the whole thing at once, you can split it into decisions to make up front and decisions to make on the ground.

    That sounds a little bit similar to the two-point-articulation situation we started with, but it has a couple of crucial differences. Firstly, the same decision doesn't get touched twice, so you could plan to go for a run sometime tomorrow and then decide to do it in the morning, but not plan to do it in the morning and then decide to do it in the afternoon. Secondly, as a consequence, you need to be very careful not to over-plan. If it doesn't matter whether you run in the morning or afternoon, you may as well leave it unspecified so you can decide at the time what works better.

    Like balancing on someone's hands, that involves a certain degree of trust. In making your long-term plans, you need to be able to trust that your short-term actions will stay true to the plan. Conversely, your short-term decisions need to trust that the long-term plan is a good one. Defecting from the plan is a kind of relief valve; if you made a bad plan, you can back out when the badness becomes apparent. To stick to a single point of articulation means giving up that relief valve. So there's a certain degree of pressure in both directions to do the right thing.

    That pressure is, in a sense, the real benefit of thinking in this way. You don't want to be second-guessing and compensating for your short-term decisions with your long-term ones or vice versa. Ideally each decision should be made by the most relevant process as best it can. The simpler that decision is, the easier it is, and the more likely to be correct.

    The number of points of articulation in a decision is a major contributor to its complexity, so reducing that should lead to better plans, better implementation, and fewer pratfalls.


    A once-in-a-lifetime experience for only 25 cents!

    Once, when I was younger, I decided to never stay at a party for more than 3 hours. The rationale was that, after that point, you've extracted the optimum value from the party and your time could be better spent elsewhere. I tried it for a little while, but the reality turned out to be pretty unsatisfying. Although I was right that the main experience was usually tapped out at about 3 hours, in many ways that was the least interesting part. The best experiences ended up happening late at night, when whoever was left had run out of party chat and unexpected stuff started happening.

    We seem to be awash in a kind of experience economy, with lots of ways to dip your toe into one experience or another. You can extract value from that experience without having to seriously commit to it or make any sacrifices. Voluntourism is an easy target, but only because it so profoundly embodies this spirit of safe and controlled discomfort. Regular tourism itself is also an example, of course, but these days it tends to be rightly recognised as a fun experience more than a challenging one.

    The problem with this controlled discomfort is that it comes with a built-in escape hatch. Let's say you've always wanted to live in Japan. So one option is that you stay in Japan for a while and see if you like it. Turns out you don't know the language, you don't have any friends and everything works differently. After a little while you realise that it's too much for you and you leave. Game over. Another option is that you actually move to Japan. Later, it starts to seem too hard and you want to back out. But it's too late; you've moved your whole life over here already. You just have to learn how to make it work.

    I wrote before that having control over your situation limits you to the kind of life you can imagine. Similarly, being a tourist limits you to the kind of experiences you don't walk away from. When you start something audacious, like moving to a different country, starting a company, or switching careers, you don't actually know if you can do it. That's what makes it audacious. When you turn out to be insufficient for the task, you have to find a way to become sufficient. Or, if you left an escape hatch, you jump out and it all goes away.

    So all of these escape hatch experiences, maybe going to live in Japan for a while, learning a bit of Spanish or something, working on a startup idea and seeing how it goes – I don't think they are worth much. Discomfort that you can turn on and off easily is no real discomfort at all. Like a few hours at a party, you're only ever going to get exactly what you expected. The real experience comes after you commit fully and all the tourists have gone home.

    Prototype wrapup #16

    Last week I made 1 prototype, and this week I made 1 again.


    Operation record-and-playback continues. With the diffing out of the way I just had the simple task of recording the changes to the document structure as produced by MutationObserver. This was not, as it turns out, simple. The API is clearly not designed to give you all the information you need to recreate an element. I had to do a bunch of extra things like recursively adding sub-elements because they weren't getting tracked sometimes. I still don't know how you're meant to record what order the elements were in. Copy and paste still breaks sometimes, but it's mostly working enough to make a demo where edits you make in one div are mirrored in another. I can see why the React philosophy is to just treat the DOM as a rendering target. Talk about nightmare fuel.

    Don't think

    Think no evil

    My friend Jeremy recently wrote about the one push-up technique: you don't have to commit to lots of push-ups, you can just commit to one. By the time you're all geared up and have done a single push-up, the second one won't seem so difficult, and so on. It reminds me of a similar idea I heard once, where you don't commit to run, you just commit to put your shoes on and walk out the door. After that, you can come back in, but chances are you won't.

    I think there's something interesting about these kinds of techniques: they rely on a certain lack of analysis. If you know that every time you do one push-up you tend to do more push-ups, or even if you just think about the reason you expect the technique to be useful, it stops being useful. Ugh, I don't feel like doing push-ups today. Well, I only have to do one. But, wait, I know if I do one I'll probably do a few more. Ugh, I don't feel like doing push-ups today.

    That's not to say that they can't work, just that for them to work you have to avoid thinking about them. In a sense you are cultivating a deliberate degree of induction blindness: ignoring a pattern that can be derived from repeated experience. Luckily, not thinking is actually fairly easy to do. Really, it's thinking that's the hard thing, but if you're in the habit of thinking about things it can be difficult to turn off, even when it's for your own good.

    This idea reminds me a little of the higher power I wrote about earlier. Not thinking seems wildly irrational, and it would be for a purely rational being. However, for irrational beings like us, irrational thinking can lead to rational action. In this case, you're countering one kind of irrationality (avoiding doing something in the moment) with another (tricking yourself into thinking you'll do less than you actually will).

    Counterintuitive as it may be, turning off your analytical process and just trusting the system you elected to follow can be a smarter move than trusting your thoughts at a time when they are particularly vulnerable.

    Preferential voting hack

    A ballot to choose between Irving Washington and Washington Irving

    I've been vaguely following the US elections, which always leaves me with a sense of incredulity about the two-party system there as enforced by their lack of preferential voting. In many other countries, including Australia, you vote for multiple candidates in order, so you don't have to worry about whether your preferred candidate has the votes to win. If they don't, your votes are automatically redistributed to the next candidate according to your preferences.

    Without preferences, you often suffer the spoiler effect, where a third-party candidate not only can't win, but actually hurts their own cause by running because they divert votes from their nearest political allies. Of course, this is a widely known problem covered in much more entertaining detail in CGP Grey's video on preferential voting. Despite this, there's very little motive to change the system because the people in power lose the most by changing it.

    However, maybe the will of the incumbent political parties isn't necessary to change the system. Much like in my Copyright 2.0 idea, you can sometimes implement a new and better system on top of the flawed old one. Runoff voting works by repeatedly eliminating candidates, but there's no necessity that all the eliminations happen at the same time. In fact, the US presidential election could be thought of as the last step in a runoff voting system where all the other candidates have already been eliminated.

    So what if all the independent parties got together and held one big preferential pre-election primary? Any registered voter could cast a vote for any party, just like in the real election. You'd do a proper preferential runoff calculation, and parties that were eliminated in this primary would drop out from the actual race. Once you're down to the final two parties, they continue on to the election. Smaller parties would want to be involved because it gives them a feasible shot at the election. Once established, the bigger parties would want to be involved because it's in their interest to win votes and eliminate the smaller parties.
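    The runoff calculation itself is simple enough to sketch in a few lines. This is a minimal illustration, not an endorsement-ready election system: ballots are lists of party names in preference order, the weakest surviving party is eliminated each round, and its ballots flow to their next surviving preference. All the party names and vote counts are made up for the example, and ties are broken arbitrarily.

```python
# Sketch of instant-runoff elimination. Each ballot is a list of party
# names in preference order; parties are eliminated one at a time until
# `final_count` remain. Party names here are purely illustrative.
from collections import Counter

def runoff(ballots, final_count=2):
    remaining = {party for ballot in ballots for party in ballot}
    while len(remaining) > final_count:
        # Count each ballot towards its highest-ranked surviving party
        tally = Counter(
            next(p for p in ballot if p in remaining)
            for ballot in ballots
            if any(p in remaining for p in ballot)
        )
        # Eliminate the party with the fewest current first preferences;
        # its ballots get redistributed on the next pass
        remaining.discard(min(remaining, key=lambda p: tally.get(p, 0)))
    return remaining

ballots = (
    [["Greens", "Labor", "Liberal"]] * 2
    + [["Labor", "Greens", "Liberal"]] * 3
    + [["Liberal", "Labor", "Greens"]] * 3
)
print(sorted(runoff(ballots)))               # → ['Labor', 'Liberal']
print(runoff(ballots, final_count=1))        # → {'Labor'}
```

    Note the spoiler effect disappearing in the example: the Greens ballots aren't wasted when their party is eliminated, they flow on to Labor, which is what lets it beat Liberal in the final round despite having fewer first preferences.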

    It would be a pretty substantial undertaking logistically, but I can't see any theoretical reason why it wouldn't work. I think there are a lot of people and groups who want to see the US transition to preferential voting, including those with the resources to make something like this happen. Up until now the main dialogue has been about how to push that change through the existing political process, but maybe the better path is to go around it.

    Applied philosophy


    A year ago, I wrote Computers are special, about the reason why computing is qualitatively different from other disciplines. I said that computers are special because they provide an abstract unit of action which is transformative for the same reason that currency as an abstract unit of value is transformative. I'd like to take another angle on why computers are special, less in terms of relevance or practicality and more in terms of the power of computers as a theoretical construct.

    It's common in programming to talk about software vs hardware, without really acknowledging how weird that is. If you're a builder, there's no software vs hardware, no "abstract building" that you make first and later figure out how to implement in the real world. Sure, there's architecture, but even architects are fundamentally grounded in reality: the laws of physics, the properties of the materials, and the resources available. You can abstract as far as you want, but you can't abstract away the rules of the universe. At least, not in the universe we live in.

    With software, on the other hand, you can create the truly abstract. I remember in high school I used to play text-based adventure games where you would type "north", "south", "east", "west", "up" and "down" to move around in 3 dimensions. One day, out of nowhere, I realised that you could have 4-dimensional space by just adding two more words (and an extra level to the array that stored all the world data). And, well, once you're there there's no reason the space needs to be Euclidean, you may as well be in a giant 4d hyperpretzel where you move twice as far in positive directions as negative directions.
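    The two-extra-words trick really is that small. Here's a toy version of the move table (in Python, rather than whatever those games were written in): each direction word maps to an offset vector, and a fourth dimension is just two more entries. The words "ana" and "kata" are the traditional names for the two extra 4d directions; everything else is made up for illustration.

```python
# A 4-dimensional move table for a toy text adventure. Positions are
# (x, y, z, w) tuples; adding a dimension means adding two words.
MOVES = {
    "north": ( 0,  1,  0,  0),
    "south": ( 0, -1,  0,  0),
    "east":  ( 1,  0,  0,  0),
    "west":  (-1,  0,  0,  0),
    "up":    ( 0,  0,  1,  0),
    "down":  ( 0,  0, -1,  0),
    "ana":   ( 0,  0,  0,  1),  # the two extra words
    "kata":  ( 0,  0,  0, -1),
}

def walk(position, commands):
    # Apply each direction word's offset to the current position
    for word in commands:
        offset = MOVES[word]
        position = tuple(p + d for p, d in zip(position, offset))
    return position

print(walk((0, 0, 0, 0), ["north", "ana", "ana", "west"]))  # → (-1, 1, 0, 2)
```

    The hyperpretzel version is one more tweak: make the offsets asymmetric (say, double every positive entry) and the space stops being Euclidean.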

    I think one of the most difficult things to come to terms with once you get really into software is just how limitless it is. Mostly we use it for fairly mundane stuff because that's where the money is and that's what tends to be most useful. However, the real power of computability theory (and the point I was making in Computers are special) is that with an abstract unit of action you can really do anything, up to and including things that no human could even comprehend in their entire lifetime. There are infinitely many more programs that could be created than there are human thoughts to create them. It's dizzying.

    A lot of what programming requires, at a high level, is managing that unbounded complexity. You need to tear big chunks out of the infinite possibility space and whittle them into useful shapes. A lot of this work is done for you, and there are a great many problems that can be solved just by applying these existing forms, but sometimes you need to make your own. Doing this well is a very curious skill: you need to come up with something imaginary and, in a sense, arbitrary, but it still has to be useful. You can tell the difference between a good idea and a bad one because the useful one helps you with your other ideas.

    The most similar thing I can think of is philosophy. I don't know if anyone has a great definition of philosophy, but I think of it as the study of ideas and thinking about ideas. There are lots of jokes of the anything-goes-in-philosophy variety but, as far as I've seen, philosophers are fairly serious about their ideas being useful. Not necessarily useful in terms of helping you do your shopping, but useful in terms of helping you think of or think about other ideas. That might sound circular or useless, but in programming at least you can definitely tell the difference, and useful ideas are amazingly valuable.

    It's for this reason that I think of programming as applied philosophy. You create these abstract concepts, these ideas, intended to run on abstract machines that don't (and usually can't) exist in reality. It's no exaggeration to say that you're working directly in idea-space, and your main limitation is how well you can manipulate ideas and keep them clear in your head. Philosophy and mathematics have this quality too, but the difference is that they stop there.

    With computers, your concepts leave idea-space and take a physical form. Your abstract machine is implemented on top of a physical machine, the software and the hardware come together, and your ideas turn into reality. That is why computers are special.


    Album cover with a guitar: Pristine Condition - Never Used

    The concept of hobbies is kind of interesting. A hobby is something you do because you enjoy doing it, but then that's also true of both enjoyable work and pure entertainment. A hobby is also something you usually don't expect to become your main activity or make much (if any) money, but if you ask an artist who is also working whether art is their hobby, they tend to get pretty ornery. Similarly, you might volunteer at a soup kitchen and enjoy it very much, but you wouldn't call it a hobby. I would argue that the definition of a hobby is something that could turn into a serious pursuit, but doesn't.

    There are many people, myself included, who pick up an instrument for a while, mess around a bit, maybe take some lessons or learn a few songs on their own, and have a good time playing. However, they never go further than that. If you ask them "are you a musician?" they would say no. If you ask them "are you going to join a band or record an album?" they would also say no. Even given a direct opportunity for it to turn more serious, they would turn it down. Why? Because they're just messing around, this isn't meant to be a serious thing.

    I question the wisdom of this limitation. If you enjoy something, why hold yourself back from the possibility that it could be something more than a frivolous pastime? Is your worst case scenario really that you play the guitar for fun, eventually get good at it, and then people enjoy your guitar music? That's a pretty mild worst case! Maybe you've already got other pursuits that take priority, but just because you're a part-time guitarist doesn't mean that you can't be a good one, or that you should deliberately avoid opportunities for it to go anywhere.

    Dabbling is, it seems to me, another kind of liability shield. If you can find a way to separate what you do from what a professional does, if you can make it not serious in some qualitative way, you become immune to professional-level criticism. "Hey, your guitar playing sucks" – "joke's on you, I don't even really play the guitar for real". Problem solved. By cutting yourself off from the possibilit