{"total_rows":774,"offset":171,"rows":[ {"id":"posts/2015-09-01-the-disadvantage-you-know","key":["posts","2015-09-01T14:00:00.000Z"],"value":{"_id":"posts/2015-09-01-the-disadvantage-you-know","_rev":"2-7ad27df1d2ee251b6737590672fd8739","type":"post","title":"The disadvantage you know","body":"

Things can seem pretty difficult at times. I'm worried about money, or my work isn't going well, or I'm just in a bad mood. I think about what life must be like for people who don't have these problems, and I feel envious of that. How easy it must be to be wealthy, happy and successful! Perhaps you are immediately jumping to say \"but everyone has problems!\" Yes, perhaps. But is it so difficult to accept that for some people life is just better than it is for you? Is it so impossible that there is someone out there who, for no good reason, has your life plus a little bit more?\n\n

I think the more interesting response is to ask: how weak is your imagination that the only kind of better you picture is you plus a million dollars? What about you plus a thousand limbs? Plus a brain the size of a planet? What about you plus a galaxy of robot servants capable of rendering a creation so expansive that the sum of today's humanity couldn't even comprehend it? What about your mind modified to be in a state of pure, absolute bliss without beginning or end?\n\n

It seems evident in these moments that there is so much – infinitely much – that we don't have. That we will never have. And yet the thought that I may never rule the universe or transcend time and space itself doesn't really bother me on a day-to-day basis. And if that doesn't bother me, perhaps there is no reason to be bothered that my life isn't better in other ways either.","created":"2015-09-01T14:00:00.000Z"}}, {"id":"posts/2015-09-02-creative-tooling","key":["posts","2015-09-02T14:00:00.000Z"],"value":{"_id":"posts/2015-09-02-creative-tooling","_rev":"5-440c4acf30bd20ae04e6d67b26499cf8","type":"post","title":"Creative tooling","body":"

A friend remarked the other day that if you want to make a lot of things, it's worth spending a lot of time on your tools. With my recent prototyping kick I've been noticing how often I seem to be repeating a fairly similar sequence of setup steps. I've been mostly messing around with shiny web technology things, so the setup mostly involves local webservers and CoffeeScript build scripts, but each kind of project tends to have its own standard setup process.\n\n

It occurs to me that depending on the balance of new vs existing projects in your work, the total cost of setup would be vastly different. If you tend to work on multi-year-long ongoing projects, really any degree of setup cost is unlikely to matter. On the other hand, a 37signals-style web consultancy business will probably see multiple new projects a month. So it's important to keep that cost down. However, even that is a very different calculation compared to creating a new project each day, or even multiple per day.\n\n

It might sound excessive, but I think making multiple projects per day can actually be a pretty good way to do things. If you're looking for new ideas and trying a few different designs, or you want to write code in a highly decoupled (dare I say microservice) style, or you want to validate your assumptions with some throwaway code before you go all-in – all of these are great reasons to create new projects early and often.\n\n

But for that to make sense, your new project creation process needs to be really efficient. If it takes, say, 15 minutes I think that's still too long. Ideally it'd be under a minute from deciding to make a new project to being able to start meaningfully working on it. I'm nowhere near that point at the moment, but I think it could be feasible with the right set of creative tools.\n\n

I think the biggest improvement would be something like a palette of semi-reusable code chunks. When I find myself doing the same thing a few times in different projects I could drop a copy of that repeated code in the palette and then pull it out the next time I need it. I'd want to be able to do that at different scales – from single lines of code to whole files all the way up to multiple files spread across different directories.\n\n

There'd be a lot of tricky work involved to make something like that work well, but I think it'd be pretty useful. The less friction for creating a new project, the easier it is to create and the more experimental you can be.","created":"2015-09-02T14:00:00.000Z"}}, {"id":"posts/2015-09-03-spaced-propaganda","key":["posts","2015-09-03T14:00:00.000Z"],"value":{"_id":"posts/2015-09-03-spaced-propaganda","_rev":"4-90d95025f7fa693dfd792df2249b3939","type":"post","title":"Spaced propaganda","body":"

I was thinking today about the way that reposts survive on sites like Reddit. You might think that there's no value in posting something that's already been posted earlier, and thus the existence of reposts reveals a flaw in the ranking system in some way. However, I'm not convinced this is necessarily the case.\n\n

Firstly, people can sometimes appreciate being reminded of something, the way an old joke you'd forgotten can still make you laugh when you hear it again. I'd call that individual forgetfulness. But there's also a second kind: over time new users join the site, older users drift away, and often users will miss new content as it comes in. The result is that even if individuals had perfect memory, the group would lose information. I'd call that population forgetfulness.\n\n

We have a fairly robust system for managing individual forgetfulness: spaced repetition. You repeat each thing you want to remember on an exponential scale with an exponent that you adjust for each item depending on how difficult it is to remember. This is a promising idea for managing population forgetfulness; could we generalise it to groups?\n\n
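To make the individual version concrete, the exponential schedule looks roughly like this (a toy sketch, not any particular spaced-repetition implementation; the ease factor and reset-on-failure behaviour are my assumptions):

```python
def next_interval(previous_days, ease, remembered):
    # Exponential spaced repetition: each successful recall multiplies the
    # review interval by the item's ease factor; a failed recall resets it.
    if not remembered:
        return 1.0
    return previous_days * ease

# Reviews of a well-remembered item spread out exponentially over time.
interval = 1.0
schedule = []
for _ in range(4):
    schedule.append(interval)
    interval = next_interval(interval, ease=2.5, remembered=True)
```

The population version would replace `remembered` with some aggregate measure of group memory, which is where the statistical trickery below comes in.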

To do that you'd first need a way to effectively measure how well the population remembers something. For something like Reddit, you could possibly rely on the repost's score. In the more general case, you could probably use a sampling technique and survey people on their memories. Either way, you'd then need some extra statistical trickery to turn that into the right factor for your spaced repetition exponent. Presumably the tuning would then look like targeting a certain confidence interval of remembering.\n\n

Anyway, it occurs to me that once you start thinking about the more general problems you can solve, a lot of them turn out to be pretty unsavoury. For example, how often you should show someone an ad seems like it would be modelled fairly well by population spaced repetition. Similarly, how often should you repeat a message as a repressive government in order to indoctrinate people? I guess it would work for that too.\n\n

Well, hopefully it can be used more for good than for evil. Or, if not, I hope we come up with some decent defenses against it.","created":"2015-09-03T14:00:00.000Z"}}, {"id":"posts/2015-09-04-with-great-power","key":["posts","2015-09-04T14:00:00.000Z"],"value":{"_id":"posts/2015-09-04-with-great-power","_rev":"2-cedabceeddc151cc56eb849e1ea7ab9e","type":"post","title":"With great power","body":"

It's no secret that we're getting better at being humanity. Average life expectancy is increasing, global poverty is decreasing, people are more educated, and our science and technology are making us more powerful and more capable as a civilisation year after year. It's funny to think that as recently as a hundred and fifty years ago, the germ theory of disease was still considered a fringe crackpot theory, and anaesthesia was a party trick rather than a surgical tool.\n\n

But today a surgeon working without asepsis or anaesthesia would be considered a dangerous maniac, and quickly be imprisoned. Similarly, our advancing views on the harm of corporal punishment for children have made it illegal in many countries where it once was common practice. In these cases and others, the driver of our morality is our capability. Before we knew about bacteria, how could we fault a doctor for having dirty hands? Before we knew about the dangerous consequences of physical punishment of children, what basis would there be for making it illegal?\n\n

I believe that, in a similar vein, there are many commonplace things today that are products of ignorance or a lack of capability to do better. The difficult thing is knowing which ones, but I'd like to hazard a couple of guesses. The first big one is psychology. Our understanding of brains and minds is so primitive today that it's hard to stop finding things that will seem barbaric and negligent in another hundred and fifty years, from our attitudes towards and treatments of mental illness to our casual ignorance of the influence and exploitation of cognitive biases. How can we have a morality of memetics when most people don't even know what it is?\n\n

The second, perhaps closer to home, is the construction of software. There was a time when physical construction was more like alchemy than science. You would put up a structure, sometimes it would stay up, sometimes it would fall down. Over time certain patterns became apparent and super amazing 10X master builders appeared who could more-or-less intuitively navigate those patterns, though it was still not entirely clear why their buildings didn't fall down. Today, we have architects who are expected to follow certain principles. If they don't, the building falls down and they are responsible, because they should have known better.\n\n

It's not clear at exactly what point we will be able to say that we should have known better with software development. Firms that produce software are already considered liable if their software hurts someone, but software developers under employment are not yet liable if they write bad code. And I'm not sure they can be yet. Who among us is so sure of the right way to write software that we would be willing to encode those ideas in law?\n\n

I long for the day when a software developer's signature means as much as a doctor's or an architect's; after all, our bad decisions can already cause similar amounts of harm. But to get there we need to be better, and we're not better enough yet.","created":"2015-09-04T14:00:00.000Z"}}, {"id":"posts/2015-09-05-the-sound-of-life","key":["posts","2015-09-05T14:00:00.000Z"],"value":{"_id":"posts/2015-09-05-the-sound-of-life","_rev":"12-cde32aae2a7338ac86bc9e299078a13e","type":"post","title":"The Sound of Life","body":"\n

↑ Click to make noise ↑\n\n

I have to say I'm pretty chuffed about this one. In a way it's a followup to Chip from the other week, but this one took a fair bit more wrangling to get to behave like I wanted. I really like the outcome though.\n\n

The way it works is that the Game of Life is overlaid on a larger grid of audio regions, making an equal temperament scale with octaves on the y axis and octave divisions on the x axis. The more cells that are alive in a given audio region, the louder it gets relative to the others. The regions are also marked with colour for extra pretty.\n\n
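The pitch and volume mapping can be sketched like so (the real code is in the attached client.js; these function names and the 55 Hz base pitch are assumptions for illustration):

```python
def region_frequency(x, y, divisions=12, base_hz=55.0):
    # Equal temperament laid out on the grid: each step up the y axis is an
    # octave (doubling), each step along the x axis is one octave division.
    return base_hz * 2 ** (y + x / divisions)

def region_gains(live_counts):
    # A region's loudness is its share of the live Game of Life cells, so
    # regions with more live cells sound louder relative to the others.
    total = sum(live_counts.values()) or 1
    return {region: count / total for region, count in live_counts.items()}
```

Passing `divisions=5` gives the five-tone scale instead of the twelve-tone one.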

You can scale up the grid and the audio regions to pretty large sizes, but I didn't find many settings that worked as well as these. 12-tone is traditional western music, but you get a lot of dissonance when there's too much going on. The 5-tone is apparently very similar to Gamelan music, though I suspect an actual gamelan musician might have something to say about that.\n\n

More details and a bigger demo are up on GitHub.","created":"2015-09-05T14:00:00.000Z","_attachments":{"client.js":{"content_type":"text/javascript","revpos":7,"digest":"md5-A8yZhy9bPRae8YAerNDnww==","length":8611,"stub":true}}}}, {"id":"posts/2015-09-06-relationship-technology","key":["posts","2015-09-06T14:00:00.000Z"],"value":{"_id":"posts/2015-09-06-relationship-technology","_rev":"4-5c5646e917bb50f77d4277c2120ea8f1","type":"post","title":"Relationship technology","body":"

It can be quite difficult to explain the modern internet to people who haven't spent much time with it. Not even just in terms of how it works or how to do specific things, but more fundamental questions like \"why are you doing any of this?\" You start to sound a bit nonsensical trying to explain the intricacies of whether to favourite or retweet, or whether someone is a friend friend or a Facebook friend, or what \"seen @ 5pm\" with no reply until 6pm means.\n\n

The point is that the complexity of the technology itself is dwarfed by the complexity of the relationships we build around it. You could produce an acceptable clone of the functionality of most popular websites without much difficulty, but to clone their community is effectively impossible. I suppose it's no surprise that community has become so important to our technology; even as apes we built and lived within intricate social structures. What we see now is just that the setting for those structures has gone from plantain picking to agriculture to industrialisation to... whatever this is.\n\n

But putting aside communities, it's also interesting to consider how technology has shaped the kinds of relationships we have on an individual level. The rise of broadcast media has made it much easier to have very asymmetric relationships, like between a celebrity and an audience member. Of course, we've always had celebrities, but the big difference is that the asymmetry used to be obvious; nobody thought they were Liszt's best friend just because they saw him play piano. Broadcasting makes it possible to hide that asymmetry and make a one-sided relationship feel two-sided.\n\n

I was once told that the difficult thing about television is that you have to have the voice and body language of someone very close to the audience, but the volume and eye contact of someone much further away. The goal is to create the feeling of intimacy without actual intimacy. If you can do that, you can save your intimately non-intimate recording once and distribute it to thousands of people, each of whom can feel a connection with you even though you have no idea who they are.\n\n

And that only gets us as far as TV and radio, nearly a hundred years ago! Now we have systems that let you share your diary with the world, broadcast text messages back and forth, publish your activities and movements, record off-the-cuff video, or even stream yourself live. There are so many ways to speak the language of intimacy online that in many ways it's a better environment than offline. But with the crucial distinction that this online intimacy is designed to scale, to be packaged and distributed. And there's no requirement it be reciprocal.\n\n

I'm not saying this is necessarily a bad thing, but I think it does require a rethinking of how we understand our relationships. If I watch someone talk about their life every day and I form a connection, am I forming it with that person? Or with the version of them I have constructed to fit my needs, like a character in a book? And if so, where does this imaginary intimacy lie on the spectrum from quirky imaginary friend to Nicolas Cage body pillow?\n\n

On the other hand, maybe we should be prepared to accept a kind of abstracted intimacy: I form a relationship with a group, and the members of that group form a relationship with me. Neither the group nor I, in this scenario, is strictly capable of reciprocating a relationship; one is a collective with no single identity, and the other is a kind of ur-person, reconstituted from various snippets of public information but incapable of volition. So maybe we each form a half-relationship with someone who doesn't really exist, and that's still okay.\n\n

Whatever the case, this is only one example of the weirdness that's being left in the wake of all our new relationship technology. The future's going to be an interesting place.","created":"2015-09-06T14:00:00.000Z"}}, {"id":"posts/2015-09-07-shell-recipes","key":["posts","2015-09-07T14:00:00.000Z"],"value":{"_id":"posts/2015-09-07-shell-recipes","_rev":"2-ca14ed7afade2897dd489c0fc2c0c254","type":"post","title":"Shell recipes","body":"

I've recently been doing a disproportionate amount of typing commands into Linux machines, and it got me thinking about the way that so much advice in the Linux community looks like \"here, just copy paste this series of commands\", or \"download and run this arbitrary shell script\". Heck, even the official NodeJS distribution has a just-run-this-shell-script installer.\n\n

But, at the same time, it's kind of hard to argue with results. You often do need to just run a lot of commands in a sequence and, well, that's what a shell script is for. Maybe the answer isn't to shy away from it, but dive in head-first and make a more sophisticated workflow around running arbitrary shell scripts. I'm thinking something a bit similar to IPython notebooks, where the text is interleaved with code and you can run the commands one at a time and verify their output. It'd be a kind of literate approach to shell.\n\n
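A bare-bones version of the run-one-step-and-verify idea, sketched in Python (the `run_step` name and its interface are invented for illustration, not an existing tool):

```python
import subprocess

def run_step(description, command, check=None):
    # Run one recipe step, capture its output, and verify it did what we
    # expected before moving on to the next step of the recipe.
    print(f"== {description}")
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"step failed: {result.stderr.strip()}")
    if check is not None and not check(result.stdout):
        raise RuntimeError(f"unexpected output: {result.stdout!r}")
    return result.stdout

out = run_step("greet", "echo hello", check=lambda s: "hello" in s)
```

A literate shell notebook would interleave prose between steps like these and stop the moment a verification fails, instead of blindly running the rest of the script.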

Maybe you could even combine it with a transactional shell to confirm that each command did what you expect. It'd be strange to get to a place where shell scripts could be considered user-friendly, but I can't think of a good reason why not.","created":"2015-09-07T14:00:00.000Z"}}, {"id":"posts/2015-09-08-primitives","key":["posts","2015-09-08T14:00:00.000Z"],"value":{"_id":"posts/2015-09-08-primitives","_rev":"2-e8e0bda92d4d1859c8d24f9eae492d46","type":"post","title":"Primitives","body":"

I've been going deep into the guts of Git lately, and I have to say it's really beautiful. Not the code necessarily, but the purity of the design, the primitives on which the system is built. Everything follows so simply from just a few initial concepts. You have a content-addressable object store which stores files, or trees of files, or commits which point to trees and other commits. Because everything is based on the hash of its contents, each commit forms a Merkle tree that uniquely identifies everything, from the commit itself to the commits before it to the trees and files themselves. Gorgeous.\n\n
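The content addressing is simple enough to reproduce in a few lines: Git names a blob by the SHA-1 of a short header plus the file's bytes. A minimal sketch:

```python
import hashlib

def blob_id(content: bytes) -> str:
    # Git's name for a file: SHA-1 over the header "blob <size>\0" plus the
    # file's bytes, so identical content always gets the identical id.
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# Trees then hash the blob ids of their entries, and commits hash tree ids,
# which is what makes the whole history a Merkle tree: change one file and
# every id above it changes too.
hello_id = blob_id(b"hello\n")  # matches `echo hello | git hash-object --stdin`
```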

To me that is the absolute essence of great code, to find a minimal set of concepts with maximal conceptual power. You can really feel the difference between a system that has been built on elegant foundations and one that's just compromise upon compromise trying to make up for an irredeemable core. Good primitives are often so pure and powerful that they extend beyond code and end up more like philosophical tools. A content-addressable store is the idea of referring to things by a description of what they are, rather than an arbitrary label. Git's way of representing history is the idea that you can get rid of time entirely, and just define the past as the set of things that must have happened for the present to be like it is now.\n\n

It's extraordinarily satisfying when you learn a new primitive that opens up a whole new class of follow-on ideas. Even more satisfying is when you are struggling to find the right set of primitives to build something powerful on, and then everything suddenly clicks into place.\n\n

But the most satisfying of all – I assume – is discovering a brand new primitive. Something that nobody's thought of before. Relatively few people have found ideas that powerful, but it must really be something to unearth a whole new way of thinking, like peering into the universe itself.","created":"2015-09-08T14:00:00.000Z"}}, {"id":"posts/2015-09-09-metacomputer","key":["posts","2015-09-09T14:00:00.000Z"],"value":{"_id":"posts/2015-09-09-metacomputer","_rev":"2-59638a79e0738e7be6659b9b27cfda80","type":"post","title":"Metacomputer","body":"

An idea came up today that's been floating around in my head for a while. I keep running into issues where no single computer I have access to has the exact mix of resources I need, and I wonder why it is that running things across machines is so difficult.\n\n

An example: I was recently working on some large files. I had to copy them around, write some custom code to do processing on them, and then turn them into DVD images and upload them somewhere. The problem is that my home internet connection is too crappy to upload a lot of files in anything close to a reasonable amount of time.\n\n

So instead, I provisioned a cheap little ARM-based cloud machine in France. Unlike Australia, Europe has good internet, so the uploading and downloading was no longer a bottleneck. But the latency is really high, so I had to kind of awkwardly shuttle things back and forth so I could write code on my local machine and run it on the remote machine.\n\n

During the whole process I remember thinking how cumbersome the whole thing was. It's great that I could do it at all, but it definitely wouldn't be described as a seamless process. I think if the Glorious Cloud Future is to occur, we need something better.\n\n

What I'd like to see is a kind of metacomputer: a computer built out of other computers. It would automatically distribute computation depending on the kind of resources required and the cost of transferring data between resource locations. The end result would be that you can add lots of different kinds of resources, and even do it dynamically, and the system turns that into the best computer it can.\n\n

In my example, it would recognise that the cost of transferring the large files is high and the cost of transferring my keystrokes is high, but the cost of transferring code is low. So the file processing would be allocated to the remote server, but the process that turns keystrokes into code (my editor) would be allocated to my local computer. However, if the server was much closer to me (but I still had crappy internet), maybe it would just move all the computation to the remote server and leave my local computer as a dumb terminal.\n\n
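That allocation decision boils down to comparing data-transfer costs. A toy placement function (all names and numbers here are made up for illustration):

```python
def place(inputs, nodes, link_cost):
    # Run a computation on whichever node makes its inputs cheapest to
    # assemble: the cost of moving each input from where it lives to there.
    def total_cost(node):
        return sum(link_cost[(src, node)] * size for src, size in inputs)
    return min(nodes, key=total_cost)

nodes = ["laptop", "server"]
link_cost = {("laptop", "laptop"): 0, ("server", "server"): 0,
             ("laptop", "server"): 5, ("server", "laptop"): 5}

# Keystrokes originate locally; the big files live on the remote machine.
editor_node = place([("laptop", 100)], nodes, link_cost)
processing_node = place([("server", 10_000)], nodes, link_cost)
```

The dumb-terminal scenario falls out of the same rule: with a nearby server, the keystroke link becomes cheap and everything migrates to the remote side.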

What's even more exciting is how well such a system could integrate with cloud server platforms. If the metacomputer can automatically redistribute resources when they become available, there's no reason it couldn't automatically add more resources when needed. You could even give it a value-of-time measurement: a rate up to which you'd be happy to spend money if it saves you processing time.\n\n

It's such a shame our computer architectures have not changed significantly in the last half-century, even as the context we use them in has changed a lot. I think at some point it's gotta give, though, and when it does I hope metacomputation is where we end up.","created":"2015-09-09T14:00:00.000Z"}}, {"id":"posts/2015-09-10-andronix","key":["posts","2015-09-10T14:00:00.000Z"],"value":{"_id":"posts/2015-09-10-andronix","_rev":"1-e69d6e826317a8605e1eb77a7ce8f3d2","type":"post","title":"Andronix","body":"

A while back I managed to get a fairly respectable system out of an Android tablet with a keyboard running a stripped-down Ubuntu in a chroot container. The process was somewhat involved, but despite the MacGyver-esque sense that it was all held together by tape and prayer, it was actually quite stable and I used it for a long time as a portable development machine.\n\n

In fact, later on I realised that it was actually the best Linux desktop environment that I've used. You get all of the standard apps and things you're used to (it even runs Photoshop... kinda), but under the hood it's still a fully functioning Linux machine that you can do Real Work with. The only problem is there's a kind of disjointedness because the two halves aren't really working together; they just happen to mostly stay out of each other's way.\n\n

The more I think about it, the more I realise that with only a little bit of rejiggering, you could bring those two halves together. You could have a standard Linux environment all the way up to running system services, one of which is the Android Runtime. Then any apps you want to run happen in the sandbox on top of that. You'd end up with something fairly similar to the current developer-friendly state of Mac OS: pretty UI up front, serious unix business in the back.\n\n

Maybe that would also be a reasonable direction for the Perpetual Year of the Linux Desktop. New attempts to remake the desktop environment are all the rage these days, but none of them come with millions of apps. It seems like if you could weld the Android frontend onto the existing Linux backend, you'd have an easy winner.\n\n

I wonder if anyone's already working towards this. Seems like a no-brainer to me.","created":"2015-09-10T14:00:00.000Z"}}, {"id":"posts/2015-09-11-the-second-degree","key":["posts","2015-09-11T14:00:00.000Z"],"value":{"_id":"posts/2015-09-11-the-second-degree","_rev":"3-7a397e1b9a8ce6a8a559d5b1caaad81c","type":"post","title":"The Second Degree","body":"

A friend once said to me that if I ever needed access to a gun, despite guns being illegal and pretty tough to find in Australia, I could get one easily. I should just find the dodgiest person I know, and ask them for the dodgiest person they know. That person, without a doubt, could get me a gun. I thought about this briefly and realised that my second degree of dodgy is probably in jail.\n\n

It would be pretty fun to make an exercise of going through a few different characteristics, finding the friend-of-a-friend who maximises each one, and interviewing them to find out a bit about what it's like in their life.\n\n

A few interesting examples:\n

\n\n

It would also be a little bit interesting to see how transitive those properties end up being. Does the most interesting person I know have more interesting friends? It'd definitely be interesting to find out.","created":"2015-09-11T14:00:00.000Z"}}, {"id":"posts/2015-09-12-failure","key":["posts","2015-09-12T00:00:00.000Z"],"value":{"_id":"posts/2015-09-12-failure","_rev":"2-3a40c27ce516b9a60665ce73f1df96c4","type":"post","title":"Failure","body":"

I fell asleep unexpectedly early last night and didn't write anything. This shares a common theme with most of my failures in that something happened around the time I was going to write, either something social or a sudden attack of tiredness. I think a sufficient level of sacrifice would also prevent those failures, but I'm not convinced it's sustainable long-term or in line with my true priorities.\n\n

Separately I've had a class of... you might call them semi-failures, or lack of imagination as far as specification goes. I've been writing every day, with a day defined as the time between waking up and going to sleep. Unfortunately, sometimes (like during timezone transitions) the mapping between my day and an Earth day gets a little bit out of whack. For that reason, my posts have been a little bit behind schedule; I've still been writing them every day, but the date they're labelled with is a few days behind the date I write them. Effectively, my dates and Gregorian dates have gone out of sync, and I suspect I'm in need of some calendar reform.\n\n

Related problems with posting to my own schedule are that the time when new posts will appear is somewhat unpredictable for others, and that it's easier for me to accidentally miss a day without an absolute reference for where I'm up to. So ironically I need a solution that is simultaneously more consistent (so posts appear regularly) and more flexible (so it can survive occasional life intrusions). Luckily, I have just such a solution prepared.\n\n

I'm going to shift from writing in arrears to writing in advance. That is, I'll alter the site so that a post only becomes visible if its publication time is earlier than the current time, and I'll write (up to) one day ahead of the time each post will be published. This should mean a more consistent reading experience without really changing my writing experience. I'm also going to change the post date on the articles to midnight UTC, which is 10am here. That should mean that in the event of a disastrous good-night's-sleep-related accident, I'll still have time to make my deadline in the morning.\n\n
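The visibility rule itself is tiny. A sketch of the site-side filter (the `created` field name matches the post metadata in this feed; everything else is illustrative):

```python
from datetime import datetime, timezone

def visible_posts(posts, now=None):
    # A post only becomes visible once its publication time has passed, so
    # I can write (up to) a day ahead and posts still appear on schedule.
    if now is None:
        now = datetime.now(timezone.utc)
    return [p for p in posts if p["created"] <= now]

published = {"title": "Failure", "created": datetime(2015, 9, 12, tzinfo=timezone.utc)}
queued = {"title": "Tomorrow", "created": datetime(2015, 9, 13, tzinfo=timezone.utc)}
vis = visible_posts([published, queued],
                    now=datetime(2015, 9, 12, 12, 0, tzinfo=timezone.utc))
```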

Doing things this way also provides the opportunity to front-load posts if I expect to be away from a computer for a little while. I'm not sure if I'll actually do that, but it's nice to have it as an option. Meanwhile, I need to actually make the transition to normal calendar dates, which I guess means having my own annus confusionis.\n\n

Look forward to a big flood of posts today!","created":"2015-09-12T00:00:00.000Z"}}, {"id":"posts/2015-09-13-non-binary","key":["posts","2015-09-13T00:00:00.000Z"],"value":{"_id":"posts/2015-09-13-non-binary","_rev":"3-cc37d268368807ee222c407b0e7f66a0","type":"post","title":"Non-binary","body":"

Long ago I learned about the idea of gradable and ungradable adjectives or, as I thought of them, non-binary and binary adjectives. The difference being that you can be very hot, very scared or very tall, but you can't be very unique, very pregnant or very amazing. The latter class is binary: it's either true or false. I think with superlatives (very fantastic, very amazing, very awesome) it's reasonable to discourage their use. After all, \"very\" is an intensifier, and if you already have the most intense form of a word, intensifying it more isn't really necessary.\n\n

However, cases like \"very pregnant\" are interesting, because they bespeak a certain confusion about the way we analyse language vs the way language is constructed. While it's true you can construct a formal grammar in which certain properties are binary and certain properties aren't, I don't believe that is actually reflective of our thoughts or our speech. \"Pregnant\", like \"German\", \"boiling\", or \"fatal\", is a cluster of concepts that we associate together. Much like nouns, which in theory refer to a single thing, but whose basis is really a fuzzy cloud in concept-space. You can easily reveal the nature of that cloud by turning the noun into an adjective: what is the most \"chairy\" chair you can think of? What is the least chairy?\n\n

I explained this idea to a friend and asked whether there was anything you couldn't do this trick with; that you couldn't make non-binary if you tried hard enough. The response was obvious, in retrospect: mathematics. Can x be \"very equal\" to 3? Obviously not. And in a sense that's the point. Our formal systems are designed to have these strange rigid properties that are alien to us. \n\n

Perhaps, if things were different and we were beings of pure binary logic, we might find ourselves inventing systems for fuzzy reasoning instead.","created":"2015-09-13T00:00:00.000Z"}}, {"id":"posts/2015-09-14-the-outlier-problem","key":["posts","2015-09-14T00:00:00.000Z"],"value":{"_id":"posts/2015-09-14-the-outlier-problem","_rev":"2-d368d13c40ae485f2d484a6429ad709a","type":"post","title":"The outlier problem","body":"

I was really saddened when I learned about Steve Jobs's death, not least because of the circumstances leading up to it. Jobs had pancreatic cancer, normally an instant death sentence, but in his case he had an exceptionally rare operable form. However, Jobs elected not to have surgery, hoping he could cure the cancer with diet and meditation. Unfortunately, he could not, and by the time he returned to the surgical option it was too late.\n\n

But the real tragedy isn't just that Jobs died from something that may have been prevented, it's that he died from the very thing that brought him success in the first place: hubris. Jobs had made a habit throughout his career of ignoring people who told him things were impossible, and that's not a habit that normally works out very well. For him, improbably, it worked – very well, in fact – until one day it didn't work any more. This is the essence of what I call the outlier problem.\n\n

We often celebrate outliers, at least when they outlie in positive ways. Elite athletes, gifted thinkers, people of genetically improbable beauty. The view from here, huddled in the fat end of the bell curve, gazing up at the truly exceptional, makes them seem like gods. But it's worth remembering that we are clustered around this mean for a reason: it's a good mean. This mean has carried us through generations of starvation, war, exile and death, and we're still here.\n\n

It's important not to forget that an exceptional quality is a mutation, no different than webbed toes, double joints, or light skin. Sometimes being an outlier lets you get one up on the people around you and start a successful computer empire. Sometimes it lets you remake the music industry, the phone industry, and the software industry in successive years. And sometimes it means you die from treatable cancer.\n\n

I remember Steve Jobs, not as a genius or an idiot, but as a specialist: perfectly adapted to one environment and tragically maladapted to another.","created":"2015-09-14T00:00:00.000Z"}}, {"id":"posts/2015-09-15-sdr","key":["posts","2015-09-15T00:00:00.000Z"],"value":{"_id":"posts/2015-09-15-sdr","_rev":"5-6d23e28965ade1c3cd1f21a13443b08e","type":"post","title":"SDR","body":"\n\n

I've been messing around with RTL-SDR lately, which is what led to the ATC feed you see above. I'm pretty impressed with how much you can get done with nothing but a $20 TV tuner and some software. As well as air traffic, I've had some fun moments being reminded that there's still a non-internet radio service, reading pager messages, and listening to the hilarious hijinks of taxi dispatchers.\n\n

There's some super serious signal processing stuff you can do using gnuradio, up to and including communicating with recently-resurrected space probes. But most of the software available seems geared for that kind of heavy duty signal processing, with not much in the way of resources for the casual spectrum-surfing enthusiast. The software above is CubicSDR, which is great, but currently limited to analogue FM/AM signals.\n\n

It occurs to me that this would be a great area to inflict some hybrid web development on. You could have a nice modular backend in a fast language like C or Go to do the signal processing, and feed that into a JS+HTML frontend. The modularity would make it easy to add new decoding components for things like digital radio, TV and so on, and the HTML frontend would make it easy to create and iterate on different ways to visualise the signals.\n\n

Plus, being web-compatible would give you a lot of cool internet things that are currently pretty difficult. For example, an integrated \"what is this signal and how do I decode it\" database, or a Google Map of received location data. The last piece of the picture is that a sufficiently advanced web UI would solve the cross-platform division that's currently making my life more difficult than it needs to be.\n\n

I'm really excited about the potential of SDR. The software is currently just a little bit too awkward to be suitable for general use, but it's so close! Most of the individual components are there, it's just missing a bit of glue, sanding and polish.","created":"2015-09-15T00:00:00.000Z"}}, {"id":"posts/2015-09-16-imaginary","key":["posts","2015-09-16T00:00:00.000Z"],"value":{"_id":"posts/2015-09-16-imaginary","_rev":"2-738626f350231ccec55b030309f47b79","type":"post","title":"Imaginary","body":"

Certain colours – magenta, for example – are not real in the physical sense. That is, there is no magenta wavelength. In fact, everything on the colour wheel between red and violet, which is all of the purples, only exists in our heads. There's every reason to think that if aliens appeared and we showed them purple, they would say \"that's just red and blue!\" and laugh at us.\n\n

Purple exists because we have our own mental colour system, which is an imperfect mapping of the physical colour system. And this doesn't just mean that we see colours wrong sometimes, or that there are certain colours we can't see, but that there are also colours we can see that never existed at all: imaginary colours. But all of our mappings have this same property; there are certain characteristic edge cases that can lead to imaginary results.\n\n

There are a lot of theories for why celebrities often seem to suffer from depression, addiction and public meltdowns. One possibly too easy answer is that we would all act out if we could, but regular people don't have the resources. I'd like to suggest an alternative: empathy. When we see people doing things, we use our empathic system to recreate that feeling in our own minds. But much like colour vision, empathy imperfectly maps the external to the internal.\n\n

We sometimes misinterpret feelings, and sometimes feel nothing in a situation where we should have empathy. Is it possible there could be certain imaginary feelings that do not exist except when we feel them second-hand in someone else? I believe so, and I believe one such feeling is fame, or success. The feeling of \"now I've made it; I'm here; I did it; I'm great now\". We feel this feeling in others, but I don't believe we feel it in ourselves.\n\n

So what could be more destabilising than being driven by fame? You see celebrities and successful people and long to feel like they feel. But, of course, you don't know how they feel. And some day, by luck or hard work, you end up like them – and the feeling's not there. What do you do next? Where do you turn if the thing you've been looking for turns out to be an illusion?","created":"2015-09-16T00:00:00.000Z"}}, {"id":"posts/2015-09-17-merkle-versions","key":["posts","2015-09-17T00:00:00.000Z"],"value":{"_id":"posts/2015-09-17-merkle-versions","_rev":"2-693efba89f24b906729a4db64658adaf","type":"post","title":"Merkle versions","body":"

I've become a big fan of semantic versioning since its introduction. The central idea is that versions should be well-defined and based on the public API of the project, rather than arbitrary feelings about whether a certain change is major or not. It also recognises the increasingly prominent role of automated systems (dependency management, build systems, CI/testing etc) in software, and that they rely much more than puny humans do on meaningful semantic distinctions between software versions.\n\n

But one thing that can be troublesome is being able to depend on the exact contents of a package. Although it's considered bad form and forbidden by some package managers, an author could change the contents of a package without changing its version. Worse still, it's possible that the source you fetch the package from may have been compromised in some way. What would be nice is to have some way of specifying the exact data you expect to go along with that version.\n\n

My proposed solution is to include a hash of that data in the version itself. So instead of 1.2.3 we can have 1.2.3+abcdef123456. That hash would need to be a Merkle tree of some kind, so as to recursively verify the entire directory tree. I couldn't find any particular standard for hashing directories, but I suggest git's trees as being one in fairly widespread use. You can find out the git tree hash of a given commit with git show -s --format=%T <commit>.\n\n

Two interesting things about this idea: firstly, the semver spec already allows a.b.c+hash as a valid version string, so no spec changes are required. Secondly, because the hash can be deterministically calculated from the data itself, you don't actually need package authors to use it for it to be useful! You could simply update your package installer or build system to check your specified Merkle version against the file contents directly, whether or not it appears in the package's actual version number.\n\n
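The check described above can be sketched in a few lines. This is only an illustration, not part of any real package manager: parseMerkleVersion and verifyMerkleVersion are hypothetical names, and the parsing just follows semver's a.b.c+build syntax.

```javascript
// Parse a hypothetical "Merkle version" like "1.2.3+abcdef123456" into its
// numeric parts and the embedded tree hash (semver build metadata).
function parseMerkleVersion(version) {
  const [bare, hash] = version.split('+');
  const [major, minor, patch] = bare.split('.').map(Number);
  return { major, minor, patch, hash };
}

// Compare the hash embedded in the version string against one computed
// locally from the unpacked package (e.g. git's tree hash of the directory).
function verifyMerkleVersion(version, computedTreeHash) {
  const { hash } = parseMerkleVersion(version);
  if (!hash) return true; // plain semver: nothing to verify
  return hash === computedTreeHash;
}
```

As the post notes, a tool like this works whether or not the author published the hash: if the version string has no build metadata, you can still compute the tree hash yourself and pin it in your own lockfile.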

It's funny, I never thought of versioning as something that would see much innovation, but I guess on reflection it's just another kind of names vs ids situation. I wonder if there will be a new place for human-centered numbering once it's been evicted from our cold, emotionless version strings.","created":"2015-09-17T00:00:00.000Z"}}, {"id":"posts/2015-09-18-gish","key":["posts","2015-09-18T00:00:00.000Z"],"value":{"_id":"posts/2015-09-18-gish","_rev":"2-e161d2dd091ce969b6364202f4a7b405","type":"post","title":"Gish","body":"

While thinking about Merkle versions I realised that there's no easy and commonly accepted way to hash a directory. I've actually had this problem before and I ended up doing some awful thing with making a tarball and then hashing that, but then it turned out that tar files have all sorts of arbitrary timestamps and different headers on different platforms, which made the whole thing a nightmare.\n\n

Since I suggested git tree hashing would be a good choice, I thought I'd put my money where my mouth is. It turns out that git doesn't expose its directory tree hashing directly, so you have to actually put the directory into a dummy git store to make it work. That all seemed too hard for most people to use, so I made Gish, which is a reimplementation of git's tree hashing in nodejs.\n\n

It ended up being one of those \"this should only take an hour oh god where did my afternoon go\"-type projects, but I'm happy with it all the same. Hopefully it proves useful to someone and, even if not, I know a whole lot more about git trees than I used to.","created":"2015-09-18T00:00:00.000Z"}}, {"id":"posts/2015-09-19-prototype-discipline","key":["posts","2015-09-19T00:00:00.000Z"],"value":{"_id":"posts/2015-09-19-prototype-discipline","_rev":"1-8f1033ac437ad585d81b8987306878b4","type":"post","title":"Prototype discipline","body":"

As I've started to make more of a habit of prototyping, I've noticed that actually the difficulty isn't so much in making the prototypes themselves. On the contrary, making prototypes is usually fun and interesting in equal parts. Instead, the big difficulty is making prototypes the right way, so that you get something useful out of them, and so that they stay light and exploratory.\n\n

The first thing I've noticed is that it's important to have a particular direction in mind. I've heard it said that prototypes should answer a question, but I'm not sure that's necessarily true. There's definitely a place for that kind of specific question-answering prototype, but for me I've found the most benefit in using prototypes just to explore. That said, the exploration goes a lot better if it's focused on a specific idea-space.\n\n

Another important thing is keeping the scope and the expectations small. It seems to be particularly easy for new ideas to creep in – which is great, in a way, that's the point – but you have to be able to figure out what to say no to. The other risk is to start treating the code like something that has to be perfect-complete, with all the trappings of a kind of project that it isn't. I've also heard similar-but-not-quite-right advice on this front: that it's okay for prototype code to be bad. I think you lose a lot by writing code you're not happy with even in the short term. The trick is letting it be good prototype code and not something else. The goal is exploration, not to make a polished final product.\n\n

I'm beginning to see prototypes as an essential component of the continuous everywhere model: if you can decrease the size of the gap between having an idea and seeing a working version of that idea, it gives you a lot more information and a lot more flexibility in which ideas you explore and how.","created":"2015-09-19T00:00:00.000Z"}}, {"id":"posts/2015-09-20-website","key":["posts","2015-09-20T00:00:00.000Z"],"value":{"_id":"posts/2015-09-20-website","_rev":"10-8a3204441ba819eb1436fba935e62099","type":"post","title":"Website","body":"

Well, getting back up to speed took slightly longer than I thought. However, as of this post I am now officially writing in the future, which is fairly exciting. It seems like as good a time as any to go into a little bit of detail on the website itself.\n\n

The whole thing is a couchapp being served and rendered entirely by CouchDB. Each post is created as a JSON document in the database. Here's this post, for example. All documents of a certain type are then rolled up into the bytype view. You can then query that view to get recent posts, for example all of the posts in September. Finally, those views and documents are rendered by some database-side Javascript (yes, really) using Mustache templates into the amazing website you see before you.\n\n
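For a sense of what database-side Javascript looks like here, a sketch of what a bytype view's map function might be; the actual design document for this site may differ. CouchDB calls map(doc) once per document, and each emit(key, value) adds a row to the view, so keying on [type, created] is what makes \"all posts in September\" a simple range query.

```javascript
// Hypothetical map function for a "bytype" view: index every typed,
// dated document under a composite [type, created] key.
function bytype(doc) {
  if (doc.type && doc.created) {
    emit([doc.type, doc.created], null);
  }
}
```

With a key like this you query recent posts with startkey/endkey over the date portion, leaving the type fixed.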

Obviously a lot of this stuff is really tightly coupled with the CouchDB philosophy. I think Couch has a lot of qualities that make it really great for a site like this, not least of which is that I can have my own local copy of the website and through magic replication, the site just copies itself into production when I'm ready. In fact, you can copy it too! Just point your CouchDB's replicator at the API endpoint.\n\n

I've also finally gotten around to putting the code up on GitHub. I'm not sure why that would necessarily be useful to you, but in case you're curious, there it is. Various parts have been floating around since 2011 or so, which is at least four stack trends ago. Feels good to put it up at last.","created":"2015-09-20T00:00:00.000Z"}}, {"id":"posts/2015-09-21-wet-floors","key":["posts","2015-09-21T00:00:00.000Z"],"value":{"_id":"posts/2015-09-21-wet-floors","_rev":"3-3df4e594e9ee16e1646b60e4ec2e071f","type":"post","title":"Wet floors","body":"

An amusing anecdote from the first time I met a good friend of mine: He was writing some code to dedupe files on his fileserver and needed to pull some logic out of a loop to run it somewhere else. He copy-pasted it rather than abstracting it out into a function, saying \"oh man, I bet this is going to come back to haunt me\". Literally ten seconds later he changed the logic in the body of the loop without changing it in the place he'd copied it to, hitting the exact problem he was worried about.\n\n

I think of those situations as wet floors, after a time I was in a KFC and I saw the workers behind the counter skidding around on an oily floor right next to the deep fryers. I spent a long time thinking about how one of those kids was going to slip and put their hand in boiling oil before I even realised I could do something to prevent that outcome. Of course, when I went up to warn them the response was \"oh, yeah, that is dangerous\". I'm fairly certain they didn't actually clean the floor.\n\n

It occurs to me that this is a consistent pattern in software development and elsewhere: you see a problem just waiting to happen, and you notice it but instead of doing something you say \"that's going to be a problem\". Later on when it is a problem, you can even say \"I knew that was going to be a problem\". Though that is a deft demonstration of analytical and predictive ability, it could perhaps have been put to better use.\n\n

It sometimes seems like the drive to understand things can be so strong that you lose sight of the underlying reality. \"I understand how this works\" can be so satisfying that it makes \"I should change how this works\" unnecessary. Or perhaps it's just that understanding is always a positive; it's often not that difficult, and it feels good when you do it. Whereas acting in response to your understanding can be a lot of effort and doesn't always work the way you want.\n\n

There is also an element of confidence. Something you believe in a consequence-free way is very different from something that has serious costs if you're wrong. I've heard it said that the hardest job is being responsible for the Big Red Button. When you press the Big Red Button, it brings everything to a halt and costs hundreds of thousands of dollars, but not pressing it costs millions, maybe destroys the whole company, and definitely your career. It must take enormous confidence to press that button when necessary.\n\n

A related technique that I quite like is the pre-mortem, where you pretend something has gone wrong and explain why you think it went wrong. What's considered powerful about it is that it removes the negative stigma from predicting failure, but I think there's something else as well: a pre-mortem directly connects your knowledge of failure to the reality of failure. That is, it forces you to imagine the eventual result of \"this is going to be a problem\": an actual problem.\n\n

Perhaps all that is required to defeat wet floors is to drive up your own confidence in that belief, or associate it strongly enough with the actual failure it predicts.","created":"2015-09-21T00:00:00.000Z"}}, {"id":"posts/2015-09-22-are-categories-useful","key":["posts","2015-09-22T00:00:00.000Z"],"value":{"_id":"posts/2015-09-22-are-categories-useful","_rev":"4-b90d71cfa5dcc07fa0798245073dbc59","type":"post","title":"Are categories useful?","body":"

I remember reading some time ago about the Netflix Prize, a cool million dollars available to anyone who considerably improved on Netflix's movie recommendation algorithm at the time. Of course, the prize led to all sorts of interesting techniques, but one thing that came out of it was that none of the serious contenders, nor the original algorithm (ie the actual Netflix recommendation engine) used genres, actors, release years or anything like that. They all just relied on raw statistics, of which the category information was a very poor approximation.\n\n

So I wonder, if it's true for Netflix, is it true for everything? The DSM-5, effectively the psychiatry bible, had a bit of controversy at least partially because of its rearrangement of diagnostic categories. What was once Asperger's is now low severity autism, and many other categories were split further or otherwise changed. However, the particular validity of a treatment for particular symptoms hasn't changed (or, if it has, not because the words in the book are different now).\n\n

Medical diagnostics seems to mostly be a process of naming the disease, and then finding solutions that relate to that name. However, that process can take a long time and doesn't always work. Maybe it would be better if we got rid of the names, and used some kind of predictive statistical model instead. You'd just put in as much information as you can and be told what interventions are most likely to help. The medical landscape would certainly look pretty interesting, but I suspect not in a way that doctors or patients would find reassuring, even if it did result in better outcomes.\n\n

Ultimately, that seems like the point of categories. They're not good for prediction by comparison to other methods, and often they're plagued by disagreements over whether a particular edge case fits the category or not. However, the alternative would mean putting our faith in pure statistics, and I'm not sure people are ready for that.\n\n

Can you imagine a world where we don't categorise things? Where you don't need to determine if something is a chair or not, just whether it's likely you can sit on it? You wouldn't be considered a cat person, just someone statistically likely to be interested in a discussion about feline pet food. Maybe we could all get used to predicting outcomes, rather than needing to understand the internal system that leads to those outcomes. It sure would make life a lot simpler.\n\n

But I doubt that's going to happen any time soon.","created":"2015-09-22T00:00:00.000Z"}}, {"id":"posts/2015-09-23-the-unbearable-rightness-of-couchdb","key":["posts","2015-09-23T00:00:00.000Z"],"value":{"_id":"posts/2015-09-23-the-unbearable-rightness-of-couchdb","_rev":"5-46676807e2d995c8c8195af2f2c201eb","type":"post","title":"The unbearable rightness of CouchDB","body":"

As I mentioned recently, this website is built on CouchDB. CouchDB is in many ways a very innovative but still very simple database, and it has the unique quality of genuinely being a \"database for the web\", as the marketing copy claims. However, lately most of the time what I feel about CouchDB is not joy but more a kind of frustration at how close – how agonisingly close – it is to being amazing, while never quite getting there.\n\n

The first one that really gets me is CouchApps. They're so close to being a transformative way of writing software for the web. Code is data, data is code, so why not put it all in one big code/database? Then people can run their own copies of the code locally, have their own data, but sync it around as they choose. Years before things like unhosted or serverless webapps were even on anyone's radar, CouchDB already had a working implementation.\n\n

Well, kind of. Unfortunately CouchApps never really had first-class support in CouchDB. The process of setting one up involves making a big JSON document with all your code in it, but the admin UI was never really designed to make that easy. The rewriting engine (what in a conventional web framework you might call a router) is hilariously primitive, so there are certain kinds of structures your app just can't have, and auth is a total disaster too. The end result is that most of the time you need to tack some extra auth/rewriting proxy service on the front of your gloriously pure CouchApp. What a waste.\n\n

There are other similarly frustrating missed opportunities too. CouchDB had a live changes feed long before \"streaming is the new REST\" realtime databases like Firebase showed up, but never went as far as a full streaming API or Redis-style pub/sub. It has a great inbuilt versioning model that it uses for concurrency, which could have meant you magically get versioned data for free – but didn't. It has a clever master-master replication system that somehow doesn't result in being able to generate indexes in parallel.\n\n

I should say that, although it frustrates me to no end, I really do respect CouchDB. At the time it came out, there were no other real NoSQL databases and a lot of the ones that have come since went in a very different direction. Compared to them, I admire CouchDB's purity and the way its vision matches the essential design of the web. But in a way I think that's exactly what makes it so frustrating. That vision is so clearly written in the DNA of CouchDB, and it's such an amazing, grandiose vision, but the execution just doesn't live up to it.","created":"2015-09-23T00:00:00.000Z"}}, {"id":"posts/2015-09-24-middle-out","key":["posts","2015-09-24T00:00:00.000Z"],"value":{"_id":"posts/2015-09-24-middle-out","_rev":"3-b9ccf13c21ae85141ca4d136e3c94006","type":"post","title":"Middle-out","body":"

When creating something, it often seems like you start from a particular point and fill the rest in around it. For example, you start with an amazing character idea and build a plot around that character, or alternatively you start with a great plot idea and find characters that can drive it. Or, if you're Asimov, you just write your ideas down and hope for the best.\n\n

In software businesses there are similar starting points. You can start with a particular product and figure out the engineering necessary to build it - that's most modern web startups. You can start with an engineering breakthrough or scientific discovery and figure out how to turn it into a product – that's basically the rest of the startups. Even in the software itself, you often have to choose which components to start with, which database or web framework or game engine, and that decision then shapes all the subsequent decisions you can make.\n\n

And I think that really misses something. Because when you commit hard to an early decision it means there are significant limitations to how you can make all the other decisions. You can often feel this in a codebase, a kind of impedance mismatch where you can see lots of translation layers between different modules because they're designed in different ways that don't line up well. Or you avoid that by only using modules that fit nicely with the existing ones, even if that means they don't work well in other ways.\n\n

I once read that Jony Ive's philosophy on design is different because he spends so much time thinking about materials and how they can complement or inform product design. The particular choice of metal or plastic, or what kind of manufacturing process to use, doesn't come right at the end, as it does with many other companies, but is part of the process the whole way through. Instead of saying \"we want a laptop this size, therefore we'll make it out of metal\", or \"we want something made out of metal, how about a laptop?\", it's more like \"we like laptops, we like metal, I wonder if those can go together\".\n\n

Ultimately I think this kind of philosophy is best. Obviously there's nothing stopping you from picking one most important starting point and fitting everything around that, but I believe that really amazing feats of design and engineering only happen as a kind of simultaneous equation. You consider all of the possible options for all of the aspects of the thing you're making, and among those you find a set that fits together so beautifully that the resulting product just falls out naturally.\n\n

Of course, that's much easier said than done. All of the options for all of the aspects is a fearsome combinatorial explosion to deal with. In practice, you probably have to pick your battles on that front, and be sensitive to the limitations of your poor fleshy brain and the time available to the project. However, I think that in many cases picking one place to start is a very early optimisation, and an unnecessary one. Taking the time to think about the right set of primitives can give you something much better than you could have designed incrementally.","created":"2015-09-24T00:00:00.000Z"}}, {"id":"posts/2015-09-25-the-shenzhen-shuffle","key":["posts","2015-09-25T00:00:00.000Z"],"value":{"_id":"posts/2015-09-25-the-shenzhen-shuffle","_rev":"4-aff5d71176d3663210148e05a8a69dda","type":"post","title":"The Shenzhen Shuffle","body":"

Although I've had a certain low-level exposure to the riches of China, it never managed to blossom into a serious electronics habit. But all of that has begun to change recently, starting with a gift I received of some ESP8266 modules, which are basically tiny WiFi SoCs that can even run Lua.\n\n

The chips are pretty fun on their own, but nothing compared to the stuff you can do if you have some sensors, lights, wires, breadboards, battery packs, voltage regulators, solar panels... Suffice to say the electronics binging that has probably been my destiny since the age of ten is finally being fulfilled. The process has this great multiplicity: each new thing you buy gives you more options when combined with all the things you already have. And that gives me an idea.\n\n

I've never really been into the whole buyer's club type thing, but it seems like this could be a really great place for it. You pay some fairly small amount ($10-20/month) and in exchange you get random new electronics stuff delivered each week. China Post's notorious 20-40 day lead time isn't really an issue once you start pipelining the mail. And you could get some pretty cool stuff for the money, especially with collective buying power. Here's some I found in the $1-3 range in a few minutes of searching: mini Arduino knock-off, LED matrix, RFID module, motion sensor, ultrasonic distance sensor.\n\n

I think an appropriate name would be The Shenzhen Shuffle. Any takers?","created":"2015-09-25T00:00:00.000Z"}}, {"id":"posts/2015-09-26-garbage-collected-tabs","key":["posts","2015-09-26T00:00:00.000Z"],"value":{"_id":"posts/2015-09-26-garbage-collected-tabs","_rev":"1-da0eadf920f997504fe6ee240290833e","type":"post","title":"Garbage-collected tabs","body":"

I tend to keep a lot of tabs open. I mean a lot, like around 200 at the time of this writing. That's spread over 20 windows, with about 10 tabs per window. There's something about the spatial nature of tabs that really works for me, better than bookmarks or other things. Especially because I tend to have a lot of little projects going at once, they work sort of like a project space. You can keep a bunch of associated research together and close it all at once when you're done with it.\n\n

But it appears Chrome is not strictly designed for this kind of usage. It tends to get particularly slow with a lot of tabs, both from enormous memory consumption and because websites seem to like to use your CPU for things in the background. I've found The Great Suspender to be helpful for this, though I feel a bit like it shouldn't have to exist. And even without the resource consumption issues, it's still tough to manage all the windows and tabs.\n\n

I think an interesting approach would be something like the way garbage collection works in programming. What you want is to only keep the windows and tabs around that are still relevant to what you're doing. Each new tab you open would have a reference to the tab you opened it from, and each tab would have a freshness that indicates how interested you are in that tab. Whenever you interact with something it freshens that tab and tabs connected to it. When things get a low enough freshness they are suspended, and if they are left suspended for too long they disappear.\n\n
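The freshness model above can be sketched as a toy simulation. Every number here (decay rate, propagation strength, suspend threshold) is made up for illustration; Tab, touch and tick are hypothetical names, not any real browser API.

```javascript
// Toy model: tabs keep a freshness score and a link to the tab that
// opened them. Interaction freshens a tab and, more weakly, its parent;
// a periodic tick decays everything and suspends stale tabs.
class Tab {
  constructor(id, parent = null) {
    this.id = id;
    this.parent = parent;
    this.freshness = 1.0;
    this.suspended = false;
  }
}

function touch(tab, strength = 1.0) {
  tab.freshness = Math.min(1.0, tab.freshness + strength);
  tab.suspended = false;
  // propagate a weaker freshening up the open-from chain
  if (tab.parent && strength > 0.1) touch(tab.parent, strength * 0.5);
}

function tick(tabs, decay = 0.1, suspendBelow = 0.3) {
  for (const tab of tabs) {
    tab.freshness = Math.max(0, tab.freshness - decay);
    if (tab.freshness < suspendBelow) tab.suspended = true;
  }
}
```

The interesting tuning problems the post mentions all live in those constants: how fast freshness decays, and how far and how strongly a touch propagates through the tab graph.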

Though unlike with real garbage collection, you'd never actually delete anything. Instead, they would go into some kind of garbage collected tabs and windows history where you could pull them back out if they were needed for something. You could also possibly have some kind of pinning system for things that you want to keep as a long-term reference. Maybe you could even have a nice UI for tabs or windows that are going to disappear, or concertina them if they're stale tabs on a fresh window.\n\n

There would be a lot of tuning to do, especially as far as what updates freshness and how that propagates between tabs, but I think it could be an interesting model. The way I use browser tabs (and, judging from The Great Suspender's install numbers, hundreds of thousands of others) isn't in the same iron-clad way that we are used to for desktop windows. It's less like \"I need this information to stay here forever\" and more like \"I'm interested in this now and, in an hour, who knows?\"","created":"2015-09-26T00:00:00.000Z"}}, {"id":"posts/2015-09-27-convoy","key":["posts","2015-09-27T00:00:00.000Z"],"value":{"_id":"posts/2015-09-27-convoy","_rev":"2-0185566ba0a730e75d02547f7b8ff52c","type":"post","title":"Convoy","body":"

I had an interesting idea for a game the other day, based on the good old fashioned heist trope. You're riding atop a train in some kind of post-apocalyptic badlands, with gangs of baddies riding up alongside and trying to climb on board and take the train over. You run around on top of the train fighting them off, but you can also upgrade the train and add turrets and traps and so on.\n\n

Essentially it'd be an iteration of the standard tower defense formula with a few action/rpg elements. But I really like the possibilities that the setting gives you; the train never stops moving, although the scenery changes and the baddies get stronger, there's a kind of absurdist constancy to it. It'd be fun to play with that in the dialogue of the game too. Why are you on top of a train? Where's the train going? Why does it never seem to get there?\n\n

Plus I think as an art direction you could do some really fun things with post-apocalyptic vehicular battles. Deserts, forests, icy tundra. Molemen on motorbikes. Spider robots with rocket launcher legs. I mean, the ideas just write themselves.","created":"2015-09-27T00:00:00.000Z"}}, {"id":"posts/2015-09-28-charisma-transform","key":["posts","2015-09-28T00:00:00.000Z"],"value":{"_id":"posts/2015-09-28-charisma-transform","_rev":"4-38a6a4c712e84f330adffd0d048836ef","type":"post","title":"Charisma transform","body":"

It's an interesting quirk of our biology that some ways of representing data are much more meaningful than others. We have a very high visual bandwidth, for example, so charts and graphs are much easier for us to understand than numbers. Not so for computers, where the visual information would have to be reverse-engineered back into numbers before it could be analysed. We are similarly very good at understanding movement, but it's trickier to represent things kinematically (though there are some pretty amazing experiments already).\n\n

Rather than visualisations or animations, I think of these functional attempts to shift data into a more easily digestible form as transforms, no different to the kind of transform you might do when converting data between file formats. It's just that these formats are tailored for our own peculiar data ingestion engine. You could say that transforms like these are designed to exploit particular capabilities of our hardware.\n\n

There's one capability that I think is both powerful and underexplored: our empathy. As inherently social creatures, we are very efficient at understanding and simulating the behaviour of others. However, most tools we understand mechanistically, like a car or a keyboard; you know that everything that happens follows directly from something else. But that kind of understanding only works for simple behaviour. How do you understand the behaviour of a complex network, or a country's economy? Doing a strict mechanistic analysis is too hard in many cases to be useful.\n\n

You can understand these complex behaviours much more easily if you can transform them into the empathic domain where we have specialised understanding. And if we could find a way to effectively describe the motivations of an economy, say, by casting the major forces as characters, assigning them emotions and values that reflect their real-world behaviour, I think it would make the whole thing a lot more intuitive.\n\n

What's more, our tools are rapidly exceeding the complexity where we can reason about them in anything but an abstract way. Historical computer interfaces had a simple mapping between actions and results; there's a list of commands, you type a command from the list, it does the same thing every time. But what about voice interfaces? Or search results and other name queries? Or really any system with a user model? Smarter computers are, unfortunately, less predictable computers.\n\n

I believe that to keep these complex tools usable we will need to develop a charisma transform: something that can represent the behaviour of that tool in a humanlike way that we can more easily model. I think our interfaces will have to develop personalities, or something that we can understand the way we understand a personality. I expect this will take some time and most of the early attempts to be pretty ham-fisted, but it seems inevitable that we'll have to go in this direction as systems become more complex and our capability to logically understand them gives out.","created":"2015-09-28T00:00:00.000Z"}}, {"id":"posts/2015-09-29-the-end-of-knowledge","key":["posts","2015-09-29T00:00:00.000Z"],"value":{"_id":"posts/2015-09-29-the-end-of-knowledge","_rev":"4-dc23ef848afa33c8e673fda14a954327","type":"post","title":"The end of knowledge","body":"

There's a great quote, sometimes attributed to Kelvin, but apparently fabricated from things said by one or more other people, that goes \"There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.\" Of course, this was in the late 1800s, just before the discovery of relativity, nuclear physics, quantum mechanics, subatomic particles, black holes and the big bang theory. So I guess you could say that turned out to be a bit short-sighted.\n\n

Though, when you think about it, what is the answer? Will we ever know everything? I think the instinctive answer is \"no\", because the universe is too big, and to an extent maybe we want it to be too big. But, if you follow that through for a second, how could it possibly be true? Purely from an information theory perspective, there's no way you can encode an infinite amount of information in a finite space, so, worst case, there must be a finite description of the observable universe.\n\n

That description could be as large as the universe itself, if the universe was purely structureless and random. I'm not even sure if that's possible; if the universe used to be smaller and denser, the information now can't be greater than the information then unless we assume some external source injecting information in. Regardless, the universe seems to have structure – in fact, a lot of structure – so I can't see any reason it won't, eventually, be completely described. I think, at some point, we will know everything.\n\n

And where does that leave us? I mean, what do we do when the universe's mysteries are completely explained to us? Perhaps it will all seem pointless then. But on the other hand, there are a lot of domains where people effectively know everything now and it doesn't seem to bother them. It's possible to know everything about a given programming language, for example, or bicycle repair. I don't think people who use programming languages or repair bicycles are filled with existential dread. Or, at least, not because of the completeness of their knowledge. And many fields seem to just generate an infinite stream of new things.\n\n

In the end, I suppose I'm making an argument that essential complexity is finite, but I don't think the same is true of accidental complexity. I read an Iain Banks book where a super-advanced species lived only for a kind of futuristic Reddit karma. Maybe that's where we'll end up.","created":"2015-09-29T00:00:00.000Z"}}, {"id":"posts/2015-09-30-going-meta","key":["posts","2015-09-30T00:00:00.000Z"],"value":{"_id":"posts/2015-09-30-going-meta","_rev":"3-270057f921b119415efe16ad7c2f1557","type":"post","title":"Going meta","body":"

A while back I read the most amazing NASA report. It was just after Lockheed Martin dropped and broke a $200+ million satellite. The sort of thing that you might consider fairly un-NASA-like given their primary mission of keeping things off the ground. They were understandably pretty upset and produced one of the greatest failure analyses I've ever seen.\n\n

It starts by saying \"the satellite fell over\". So far so good. Then \"the satellite fell over because the bolts weren't installed and nobody noticed\". Then \"nobody noticed because the person responsible didn't check properly\". Then \"they didn't check properly because everyone got complacent and there was a lack of oversight\". Then \"everyone got complacent because the culture was lax and safety programs were inadequate\". And so on. It's not sufficient for them to understand only the first failure. Every failure uncovers more failures beneath it.\n\n

It seems to me like this art of going meta on failures is particularly useful personally, because it's easy with personal failures to hand-wave and say \"oh, it just went wrong that time, I'll try harder next time\". But NASA wouldn't let that fly (heh). What failure? What caused it? What are you going to do differently next time? I think for simple failures this is instinctively what people do, but many failures are more complex.\n\n

One of the hardest things to deal with is when you go to do things differently next time and it doesn't work. Like you say, okay, last time I ate a whole tub of ice cream, but this time I'm definitely not going to. And then you do, and you feel terrible; not only did you fail (by eating the ice cream), but your system (I won't eat the ice cream next time) also failed. And it's very easy to go from there to \"I must be a bad person and/or ice cream addict\". But What Would NASA Do? Go meta.\n\n

First failure: eating the ice cream. Second failure: the not-eating-the-ice-cream system failed. Okay, we know the first failure from last time, it's because ice cream is delicious. But the second failure is because my plan to not eat the ice cream just didn't seem relevant when the ice cream was right in front of me. And why is that? Well, I guess ice cream in front of me just feels real, whereas the plan feels arbitrary and abstract. So maybe a good plan is to practice deliberately picking up ice cream and then not eating it, to make the plan feel real.\n\n

But let's say that doesn't work. Or, worse still, let's say you don't even actually get around to implementing your plan, and later you eat more ice cream and feel bad again. But everything's fine! You just didn't go meta enough. Why didn't you get around to implementing the plan? That sounds an awful lot like another link in the failure chain. And maybe you'll figure out why you didn't do the plan, and something else will get in the way of fixing that. The cycle continues.\n\n

The interesting thing is that, in a sense, all the failures are one failure. Your ice cream failure is really a knowing-how-to-make-ice-cream-plans failure, which may itself turn out to be a putting-aside-time-for-planning failure, which may end up being that you spend too much time playing golf. So all you need to do is adjust your golfing habits and those problems (and some others, usually) will go away.\n\n

I think to an extent we have this instinct that we mighty humans live outside of these systems. Like \"I didn't consider the salience of the ice cream\" is one answer, but \"I should just do it again and not screw it up\" is another. That line of thinking doesn't make any sense to me, though; your system is a system, and the you that implements it is also a system. Trying to just force-of-will your way through doesn't make that not true, it just means you do it badly.\n\n

To me that's the real value of going meta: you just keep running down the causes – mechanical, organisational, human – until you understand what needs to be done differently. Your actions aren't special; they yield to analysis just as readily as anything else. And I think there's something comforting in that.","created":"2015-09-30T00:00:00.000Z"}} ]}