Waiting for The Call

Until a man is twenty-five, he still thinks, every so often, that under the right circumstances he could be the baddest motherfucker in the world. If I moved to a martial-arts monastery in China and studied real hard for ten years. If my family was wiped out by Colombian drug dealers and I swore myself to revenge. If I got a fatal disease, had one year to live, and devoted it to wiping out street crime. If I just dropped out and devoted my life to being bad.
Neal Stephenson – Snow Crash

I really like the video by Derek Muller, aka Veritasium, about trying to become a filmmaker. He describes calling a local film director looking for... something – a way in, maybe, or just some idea of what to do. He uses that to launch into a larger exploration of learned helplessness and the way we tend to assume that it's up to other people, rather than ourselves, whether we succeed. It's a good point, but the story reminded me of a slightly different thing I've noticed.

I call it waiting for The Call. You have a great idea, an ambitious plan, some new amazing direction for your life. You're a tightly coiled spring of potential energy ready to unleash on the universe, but you never quite feel ready. Suddenly, the phone rings. "Hello, this is The President. We need you. It's go time." Okay, let's do this! At last, you can commit completely to this particular course of action, safe in the certainty that this is definitely the right thing to be doing and now is the time to do it.

But, rare exceptions aside, it's unlikely you'll get that kind of call from the president, or indeed from anyone. There probably won't even be a clear signal to say that this thing is the right thing to do. In fact, most of the time the great idea doesn't look like much to start with, and you have to spend a lot of time convincing other people that it's worth anything. Yet it's all too easy to put that hard work off, waiting for some sign that isn't coming.

I don't think this necessarily has anything to do with learned helplessness. In fact, I would say it's probably more like a kind of backwards impostor syndrome. Instead of feeling like you're missing some indefinable genuineness quality that other people have, you feel like you need that quality before you even start. I wrote before about the strange phenomenon of feeling successful, which you experience second-hand from successful people, but not first-hand via actual success. I think this is similar: you feel an imaginary destiny in the lives of others, which you could have too if The Call would only come in.

It's truly hard to accept that remarkable things don't necessarily feel remarkable while you're doing them. Most likely, if you ever do get The Call, it won't come before you start, or even while you're working to make your thing a success. Instead, it'll be years later, when some kid calls you up to say "hey, since you obviously have the success-nature, is there any chance you could tell me that what I'm doing is right?"

Unfreedom

I think most people would broadly agree that freedom is a good thing. By freedom I mean specifically being unrestricted in your actions: you can mostly do what you want. The times when you can't are usually when it would limit other people's freedom. Perhaps, in some futuristic virtual libertarian utopia, it will not be possible to interfere with people's freedom, so everyone will be able to do what they want completely and without consequence. The internet, with its lack of physical consequences or government regulation, is part of the way there already.

There's a neat philosophical problem I read about called Parfit's hitchhiker. Basically, a perfectly rational utilitarian hitchhiker is trying to catch a ride with a perfectly rational utilitarian driver. The driver won't do it for free (that would be irrational!), so the hitchhiker offers to withdraw some money at their destination. The problem is, a perfectly rational hitchhiker would have no reason to follow through with this once all the driving is done. The driver knows this, and thus refuses.

The problem is that you need a way to bridge the cause-effect gap. The driver only wants to give the lift (effect) if they get money (cause), but those two things happen in the wrong order! The effect has to come after the cause. In practical terms, of course, this is a solved problem; the two could just form a contract, and then the government would step in if the terms weren't being met. More generally, you can use any kind of enforcement system that brings the effect forward to its rightful place after the cause. However, doing this comes at the cost of some freedom.
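To make the cause-effect gap concrete, here's a toy sketch in Python. The payoff numbers are invented purely for illustration; only their ordering matters. Without enforcement, reasoning backwards from the destination means the fare never arrives, so the driver, who can predict this, refuses; a contract that makes defaulting more expensive than paying flips both decisions.

```python
# Toy model of Parfit's hitchhiker. The payoff numbers are invented
# purely for illustration; only their ordering matters.
FARE = 50      # payment the hitchhiker promises
PENALTY = 200  # what defaulting costs once a contract exists

def hitchhiker_pays(enforced: bool) -> bool:
    """At the destination, a rational hitchhiker pays only if paying
    beats refusing *from that point on* (the ride is already had)."""
    payoff_if_pay = -FARE
    payoff_if_refuse = -PENALTY if enforced else 0
    return payoff_if_pay > payoff_if_refuse

def driver_gives_lift(enforced: bool) -> bool:
    """The driver predicts the hitchhiker's later choice and only
    drives if the fare will actually arrive."""
    return hitchhiker_pays(enforced)

print(driver_gives_lift(enforced=False))  # False: the promise alone fails
print(driver_gives_lift(enforced=True))   # True: the contract bridges the gap
```

Note that the contract never actually gets invoked; it works by changing what the rational hitchhiker will do later, which changes what the driver will do now.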

Essentially, you always lose freedom every time you make an agreement, a promise, a deal, a contract, or any of the other various forms of voluntary constraint on your decisions. Perhaps that's obvious; by committing to doing something, you give up the freedom to not do it. However, what might not be as obvious is the consequence that the ability to be unfree is therefore a vital part of a stable society, even a utopian one.

I've been thinking about this specifically in the context of internet ads and ad blockers, which is a topic of some debate. It's actually quite similar to Parfit's hitchhiker. Internet properties cost money to operate, so their owners need users to watch ads to pay for them. However, from the user's perspective, what obligation are they under to download your ad after you've already provided them with the content? It's their computer, their network connection, and they have freedom over how it is used. The internet is not really designed for unfreedom, which makes ads a tricky proposition.

It may well be that advertising can't survive on the internet as long as we're free to ignore it. Depending on your perspective, that might be the internet working as intended. However, it is strange to think that there are certain kinds of transaction that we can't make, even if it would be beneficial to us, because we're too free.

Attritional interfaces

I've never really liked RSS readers. I've used them on and off for various periods of time, but in the end it always goes the same way: I end up following too much stuff, the "unread" counts pile up into the hundreds or thousands, and I eventually just declare RSS bankruptcy and abandon it entirely until the next go-around. However, in recent years social news sites like Reddit, Twitter and Hacker News have mostly filled the RSS-shaped hole in my life, despite missing a lot of the content I used to go to RSS readers for. Why is this?

My contention is that social news sites are fundamentally attritional, by which I mean they slowly lose data by design. While this would be suicide for office software or a traditional database-backed business application, it actually works very well for social news. Old posts on Reddit fade away under the weight of new ones, and the only way to keep them alive is with constant attention in the form of upvotes and reposts. It's quite common to think of something you saw a few days ago and be unable to find it or remember what it was called. While that might be frustrating, it's actually Reddit working exactly as intended.

The trick is that most software is designed to complement us. Where we are forgetful, computers remember. Where we are haphazard, computers are systematic. Where we are fuzzy, computers are precise. This makes them amazing tools, because we can do what we are good at and leave computers to do what we aren't. However, some systems have to be designed to mirror us. When we make a user interface, it has to work like we work, or it won't make sense to us. Email is designed to complement our memory so that we don't just lose emails. Reddit is designed to mirror our memory so that it can present us with constant novelty.

That said, I should stress that these two things aren't really opposites. In fact, it would be very difficult to design fundamentally attritional software because eventually you run into the reality that the system is a computer, not a human. Usually, you'll have a reliable system underneath with an attritional interface on top. Reddit, for example, is built on a database and never actually loses information. You wouldn't want it to anyway, because people do link to Reddit threads from elsewhere. The only reason things go missing is because the interface is set up that way.
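As a sketch of that split between a reliable store and an attritional interface, here's a toy feed in Python. The half-life, cutoff, and post data are all invented for illustration; this isn't Reddit's actual ranking, just the general shape of the idea: everything is kept forever, but votes are discounted by age and anything below a threshold simply stops being shown.

```python
# A minimal sketch of an attritional interface. The store keeps every
# post; the feed ranks by age-discounted votes and hides low scorers.
# The 12-hour half-life and cutoff of 1.0 are arbitrary choices.

HALF_LIFE_HOURS = 12.0

def decayed_score(votes: int, age_hours: float) -> float:
    """Votes lose half their weight every HALF_LIFE_HOURS."""
    return votes * 0.5 ** (age_hours / HALF_LIFE_HOURS)

def feed(posts, now_hours: float, cutoff: float = 1.0):
    """posts: list of (title, votes, posted_at_hours). The underlying
    list is never mutated; old posts just stop appearing."""
    scored = [(title, decayed_score(votes, now_hours - posted_at))
              for title, votes, posted_at in posts]
    visible = [(t, s) for t, s in scored if s >= cutoff]
    return [t for t, s in sorted(visible, key=lambda p: -p[1])]

posts = [("fresh post", 4, 95.0),     # 1 hour old
         ("once-hot post", 20, 0.0)]  # 96 hours old, now decayed away
print(feed(posts, now_hours=96.0))    # only "fresh post" survives
```

The data structure remembers everything, so permalinks keep working; forgetting is purely a property of the view.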

RSS readers are an example of software crying out for an attritional interface. I don't care about some blog post I've been ignoring for weeks, but it stubbornly persists until I am the one to take the initiative and say "yes, I definitely don't want to read this". Just let me forget about it! Though RSS readers are an easy target, there are many other examples. I previously wrote about browser tabs that accumulate without end. Mobile notification systems could also benefit from a dose of attrition; do I really need constant reminding that some app updated until I specifically dismiss it?

So, if you're working on an interface, I would encourage you to consider: am I trying to complement or mirror the user here? And, specifically, consider whether your system should remember things forever just because it can, or whether it might be better to forget.

Next action

Ages ago I saw a great mechanic in a video game; I think it was for the Nintendo DS, or at least from around that era. Mostly it just had the standard RPG stuff: characters and quests and spells and so on. But the one thing that made it stand out was its amazing "next action" bar. Down the bottom of the screen, above all the other important information, was just a simple display showing whatever the next thing you had to do was.

Since then I've played games with much more complex quest systems, featuring multiple diverging quest lines, tree views, inline maps and so on. But, as sophisticated as those systems are, none really had the impact of that original one-line display. It was so simple! You could roam around and do other things as much as you wanted, but whenever you felt like moving forward the next action was always there, clear as day.

I think it's easy to get bogged down in complexity, especially when that complexity is actually necessary. Often you need to consider things like which tasks depend on which other ones, time planning, making sure you have the resources you need and so on. But, at the end of the day, the goal of planning is to reduce that complexity as much as you can. While you're in the middle of working, your decision shouldn't be something like "what is the best thing to do out of all the possible things I could be doing?"

It should just be "am I ready to do the next thing now?"
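A toy sketch of that one-line display, with hypothetical quest names: the underlying task list can carry arbitrary dependencies, but the interface only ever surfaces the first task whose prerequisites are done.

```python
# Sketch of a "next action" bar: all the dependency complexity lives
# in the data, while the display reduces it to a single ready task.
# Task names and dependencies are made up for illustration.

def next_action(tasks, done):
    """tasks: dict of name -> set of prerequisite names.
    Returns the first unfinished task whose prerequisites are all done."""
    for name, deps in tasks.items():
        if name not in done and deps <= done:
            return name
    return None  # nothing ready, or everything finished

tasks = {
    "find the sword": set(),
    "defeat the bandit": {"find the sword"},
    "report to the mayor": {"defeat the bandit"},
}
print(next_action(tasks, done=set()))               # "find the sword"
print(next_action(tasks, done={"find the sword"}))  # "defeat the bandit"
```

You can still wander off and do anything else; the bar just answers the one question that matters when you're ready to move forward.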

Inference

One of my favourite illusions is the stopped clock effect, also known by the much cooler name chronostasis. The illusion happens when you see a clock out of the corner of your eye, then turn your eyes to focus on it. The next second seems to take much longer than a second to pass. What's happening is that during the time you couldn't actually make out the face of the clock, your brain fills in what it thinks should be there. That filling-in is normally seamless, except that clocks obey more stringent rules than your visual system knows how to fake.

I've been thinking about a similar illusion I've noticed in areas I don't often think about directly. I recently had a long conversation with a hair stylist about the complexities of the salon industry, and I spent most of the time just convincing myself that people could legitimately care this much about hair. I remember being younger and thinking, like young people do, that I must have just about figured out everything worth figuring out. It turns out to be a pretty common sentiment.

I think part of the reason we often underestimate how much we don't know is that we are so very good at just filling in the blanks with whatever information we can get our hands on. If you look at the frankly crap signal we get before all our neurological trickery, it's amazing we can see at all. Over a century ago, Helmholtz arrived at the same conclusion, which he called unconscious inference.

Our thinking is similarly amazing for how much it gets done with so little. Our tiny capacity for focus and working memory only really becomes obvious when we go looking for it, for example in specifically designed tasks like N-back. Part of this is that our brain is just well adapted to the kinds of problems we tend to have, but I think it's also that our mental capacity, like our vision, is particularly good at hiding its own limitations. I once saw someone ask "what do you see if you're completely blind?", and a blind person replied "well, what do you see behind you?"

So not knowing what we don't know isn't entirely surprising, but what does surprise me is how hard it is to even think about. Even once you build up an intuition for "there are probably a bunch of things I don't know", it seems like it doesn't actually work very well. When you're paying attention it's easy to remember, but it's the things you're not paying attention to that are the problem.

But perhaps this particular quirk is inevitable. As long as we have a limited capacity, there has to be some behaviour when that capacity is exceeded. While we might assume that a big obvious "your capacity has been exceeded!" signal would be better, the reality is that our perception and understanding are in a constant state of compromise, and if there were such a signal it would be going off constantly.

Maybe it's for the best that we don't notice.