Failure

Oops. My previous post about confidence spiraled wildly out of control and ended up taking longer than I thought. I've had this problem before, and the answer is generally to cut my losses and start again with something else, but that's tricky if, by the time I notice, I've already used up a bunch of writing time and energy. Most likely, the solution is to keep a bunch of posts in various states of completeness, to decouple the writing from the publishing a little, so if something's taking too long I can finish something else off instead. I've mentioned something like that before, but I'm beginning to think of a more specific system to avoid these failures entirely, which I'll write about soon.

Tricks of confidence

A year ago I wrote To be real, about the importance of whether you believe something in the sense of saying you believe it, or believe something in the sense of it causing demonstrable changes in your actions. I also covered something similar in Wet floors, where you think something will be a problem, say it's a problem, but don't do anything about it. Another related idea is how difficult it is to trust logic when it disagrees with intuition, which I wrote about in Concentrate and Instrument flying.

The common thread tying all these things together is confidence, which I think is a particularly interesting concept. Mostly when you hear about confidence it's in the context of social signaling: acting in a certain way makes people like you more and do what you want more. While that might be true, I think there's something more interesting in thinking about confidence the way statisticians do, as a way of relating reality to your estimation of it. That is to say, the "confidence" in confidence interval.

Whenever you're working with incomplete information, which is, well, all the time, you build a model of what you think the rest of the information looks like. Obviously, you don't really know, but the more results you've seen and the more conclusive they are, the more likely that model is to be correct. You can quantify that in two ways: the Bayesian method of asking how likely reality is to match your model, given the data and what you already knew about reality, or the frequentist method of asking how often the procedure you used would produce an accurate model if you repeated it. It's, uh, contentious, but either way you're measuring the connection between your understanding and reality.
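As a toy illustration of the two approaches (the coin and the numbers here are made up, not from anything above): suppose a coin comes up heads 80 times in 100 flips, and we want to quantify how much to trust our estimate of its bias.

```javascript
// Toy example: 80 heads in 100 flips. How tight is the connection
// between our estimate of the coin's bias and the real bias?
const heads = 80;
const flips = 100;
const pHat = heads / flips; // point estimate: 0.8

// Frequentist: a 95% confidence interval (normal approximation).
// "If we re-ran this procedure many times, about 95% of the
// intervals it produces would contain the true bias."
const se = Math.sqrt((pHat * (1 - pHat)) / flips);
const ci = [pHat - 1.96 * se, pHat + 1.96 * se]; // ≈ [0.72, 0.88]

// Bayesian: start from a flat Beta(1, 1) prior over the bias; after
// the data, the posterior is Beta(heads + 1, tails + 1), and its
// mean is the updated belief.
const posteriorMean = (heads + 1) / (flips + 2); // ≈ 0.794

console.log(ci, posteriorMean);
```

The two answers nearly agree here because the prior is flat and the sample is decent-sized; with strong prior knowledge or scarce data they would diverge, which is exactly the sense in which the two methods weigh "what you already know" differently.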

Of course, we don't do anything so formal in our everyday beliefs. Or, at least, most of us don't. Instead, we vaguely intuit the strength of the connection between understanding and reality based on a bunch of wishy-washy heuristics and hope it ends up roughly in the right area. But it often doesn't, and the errors are systematic. I believe the relationship between confidence (as in self-confidence) and confidence (as in confidence interval) isn't a coincidence, but rather a matter of essential similarity. Your self-confidence is really a measure of how strongly you believe in your beliefs.

Back to the cases I mentioned: having the sensation that the thing you're doing is real, turning your observation of a problem into doing something about the problem, and letting numbers override your intuition. In each, the question is how much you believe your beliefs. That is to say, the solution to these problems is having the necessary degree of confidence, an accurate measure of the connection between your model of reality and reality itself. But because our confidence is a heuristic, it can often be an under- or over-estimate, and both are dangerous.

Overconfidence gets the worst rap because we all like to see someone try something we didn't try only to fail miserably, but underconfidence can be even more dangerous. If we're overconfident, at least we can learn from our mistakes. Underconfidence doesn't lead to mistakes, just to missed opportunities. Ideally, we would be able to calculate our confidence mathematically like statisticians do, but until we all get robot brain co-processors that seems unlikely. But we can probably tune the heuristics a little if we focus on the right things.

The link to statistics gives us two promising leads. To use the Bayesian method, you could say your confidence comes from your existing understanding of reality and the data you have. In which case, the sensible way to improve the accuracy of your self-confidence is to focus on those. You want to know as much as you can to form an accurate base of prior information, you want to get the best quality data you can, and you want that data to match closely with your existing knowledge. To the extent that those things are true, you should feel confident in your conclusions.

Alternatively, by the frequentist method, you could say that your confidence comes from the data and the process. So you want the best quality data you can, and you want a process that will reliably turn that data into understanding. To the extent that you believe in the data and the process, you should believe in your conclusions.

So that's two good heuristics for calibrating your confidence: "I have good data and it agrees with what I already know", and "I have good data and my method for drawing conclusions from data is solid". Worth noting that good data is a requirement for both. But, of course, how could it not be? If you get bad information, your conclusions are going to be bad.

Another thing to think about is that this provides a nice roadmap for improving the quality of your beliefs, and thus your confidence in those beliefs: increase your knowledge, learn to make better conclusions from data, and seek out more reliable sources of information.

Your inner Manic Dream Pixie

A popular trope in fiction for young men is the Manic Pixie Dream Girl, a kind of fun hyperactive bonne vivante character who snaps the brooding protagonist out of his ennui by showing him how to enjoy life again. Of course, it's not just fiction aimed at men; you can see similar characters in How Stella Got Her Groove Back and other divorcée-goes-overseas-to-find-herself movies.

The Manic Dream Pixie is a lazy storytelling technique, an easy plot device to drive character growth, a kind of joy ex machina. If you can't find a way for the protagonist to discover meaning in their own life, just make a character who's a literal personification of fun. They can come in, show the main character how it's done and, afterwards, disappear to a special pixie reservation or something.

Yet this shallow archetype is still somehow compelling, and I think it speaks to a need that often goes unmet. Sometimes life doesn't seem very fun, either because you're not making the effort to appreciate what's fun about it, or because there genuinely isn't much to appreciate. When that happens, the resultant feeling isn't "oh, I should go find the joy in my life", it's "my life sucks and I suck". The impetus to find that joy would need to come from the very same person who is demotivated because it isn't there. It would be so much easier if some external joyous force could just sashay in and fix it!

Unfortunately, real life is short on Manic Dream Pixies. Someone with that profound invigorating joy for life might well be out there, but why would they find your disaffection endearing as opposed to dull and depressing? The critical issue with the trope is that the character doesn't make sense; there's no way to write an internally coherent motivation for that kind of behaviour without the pixie having a massive and cloying messiah complex.

Instead, the place to look for the Manic Dream Pixie is inwards. The trope wouldn't appeal if there wasn't a part of you that wants to go dance in the rain or ride a Vespa up the Amalfi Coast. The problem isn't that you need someone else to show you how to have fun. Your inner Manic Dream Pixie knows already, it's just being ignored.

Red yellow green

I've started using an interesting technique that I thought I'd share. I wrote previously about the psychological impact of being ahead vs behind, and about asking "what changes?" to analyse motivation in terms of what you expect to be different if you succeed. Ages before that I wrote about the idea of a fail scale, breaking down success into comfortable success vs just-barely-success. To some extent, all of these ideas come together in the red/yellow/green board.

Basically, you categorise all the things you're working on as red (failing), yellow (at risk of failing) or green (succeeding). The exact distinctions between those statuses really depend on the project, but the rough intuition is that red is behind, green is ahead, and yellow is hanging in there. How exactly you represent these probably doesn't matter that much, but I have each project as an index card on a whiteboard with one column for each status.

The nice part about this is it gives you an at-a-glance picture of where your work is, and what's most at risk. The even better thing is it makes it much easier to visualise what will change as a result of your efforts. So maybe my writing is a yellow at the moment, but it will become green if I write a new post today. Alternatively, my writing is currently green, but it will become yellow if I don't write today. I've taken to visualising these potential transitions by drawing arrows between the status columns while planning.
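If you'd rather keep the board digital, it's simple enough to sketch as plain data. The project names below are placeholders, and move() is a hypothetical helper standing in for the arrow-drawing step, not part of any existing tool:

```javascript
// A red/yellow/green board as plain data. The projects and their
// starting statuses here are just illustrative.
const board = {
  red: [],
  yellow: ['writing'],
  green: ['exercise'],
};

// A planned transition: move a project between status columns, the
// digital equivalent of drawing an arrow on the whiteboard.
function move(board, project, from, to) {
  board[from] = board[from].filter((p) => p !== project);
  board[to].push(project);
}

move(board, 'writing', 'yellow', 'green'); // wrote a post today
console.log(board); // 'writing' now sits in the green column
```

The nice thing about representing transitions as explicit moves is that you can sketch out several of them while planning and see what the board would look like before doing any of the work.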

As per my anti-snake-oil commitment, I'll follow up in a week and in a month with how I'm finding it, but so far I've been doing it for a few days and it's been very useful.

Prototype wrapup #30

My last prototype was a while ago, but now that I'm back it seems like a good idea to get back on the horse.

Sunday

A timestamp from Ouagadougou, Burkina Faso

Previously, I made the bioupdater accept a timezone name from a file; now I just needed to be able to update that file remotely. Having put some thought into making my prototypes simpler, I decided that simultaneously doing something functional and something in an unfamiliar language was a bad idea. So I threw together a quick web app in Node. This time, because I didn't have to care about anyone using it other than me, I did it in ES6, which felt pretty good. It also turns out you can't get the user's time zone name from JavaScript, so I used the HTML5 Geolocation API to get lat/longs and the geo-tz library to turn them into a zone name. Worked pretty well!