Tricks of confidence
A year ago I wrote To be real, about the difference between believing something in the sense of saying you believe it and believing it in the sense that it causes demonstrable changes in your actions. I also covered something similar in Wet floors, where you think something will be a problem, say it's a problem, but don't do anything about it. Another related idea is how difficult it is to trust logic when it disagrees with intuition, which I wrote about in Concentrate and Instrument flying.
The common thread tying all these things together is confidence, which I think is a particularly interesting concept. Mostly when you hear about confidence it's in the context of social signaling: acting in a certain way makes people like you more and makes them more willing to do what you want. While that might be true, I think there's something more interesting in thinking about confidence the way statisticians do, as a way of relating reality to your estimation of it. That is to say, the "confidence" in confidence interval.
Whenever you're working with incomplete information, which is, well, all the time, you build a model of what you think the rest of the information looks like. Obviously, you don't really know, but the more results you've seen and the more conclusive they are, the more likely that model is to be correct. You can quantify that in two ways: the Bayesian method asks how probable it is that reality matches your model, given the data and what you already knew about reality; the frequentist method asks how often the procedure you used would produce an accurate model if you ran it again and again. It's, uh, contentious, but either way you're measuring the connection between your understanding and reality.
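To make the distinction concrete, here's a minimal sketch in Python, with toy coin-flip numbers of my own rather than anything from real data:

```python
# Two readings of "95% confident" about a coin's bias, from the same flips.
import math

from scipy.stats import beta  # used only for the Bayesian interval

heads, tails = 14, 6
n = heads + tails
p_hat = heads / n  # point estimate of the bias

# Bayesian: start from a uniform Beta(1, 1) prior and update on the data.
# The credible interval says where the true bias probably lies, given both.
bayes_lo, bayes_hi = beta.ppf([0.025, 0.975], 1 + heads, 1 + tails)

# Frequentist: a Wald confidence interval. Here the 95% describes the
# procedure: intervals built this way cover the true bias ~95% of the time.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
freq_lo, freq_hi = p_hat - margin, p_hat + margin

print(f"point estimate: {p_hat:.2f}")
print(f"Bayesian 95% credible interval:      [{bayes_lo:.2f}, {bayes_hi:.2f}]")
print(f"frequentist 95% confidence interval: [{freq_lo:.2f}, {freq_hi:.2f}]")
```

One interval is a statement about where reality probably is, given your data and prior; the other is a statement about how often the procedure that produced it gets reality right. Different philosophies, same underlying job: quantifying the link between your estimate and the truth.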
Of course, we don't do anything so formal in our everyday beliefs. Or, at least, most of us don't. Instead, we vaguely intuit the strength of the connection between understanding and reality based on a bunch of wishy-washy heuristics and hope it ends up roughly in the right area. But it doesn't, and systematically so. I believe the relationship between confidence (as in self-confidence) and confidence (as in confidence interval) isn't a coincidence, but rather a matter of essential similarity. Your self-confidence is really a measure of how strong your belief in your belief is.
Back to the cases I mentioned: having the sensation that the thing you're doing is real, turning your observation of a problem into doing something about the problem, and letting numbers override your intuition. In each, the question is how much you believe your beliefs. That is to say, the solution to these problems is having the necessary degree of confidence, an accurate measure of the connection between your model of reality and reality itself. But because our confidence is a heuristic, it can often be an under- or over-estimate, and both are dangerous.
Overconfidence gets the worst rap, because we all enjoy watching someone try something we wouldn't dare and fail miserably, but underconfidence can be even more dangerous. If we're overconfident, at least we can learn from our mistakes. Underconfidence doesn't lead to mistakes, just to missed opportunities. Ideally, we would be able to calculate our confidence mathematically like statisticians do, but until we all get robot brain co-processors that seems unlikely. We can probably tune the heuristics a little, though, if we focus on the right things.
The link to statistics gives us two promising leads. To use the Bayesian method, you could say your confidence comes from your existing understanding of reality and the data you have. In which case, the sensible way to improve the accuracy of your self-confidence is to focus on those: you want to know as much as you can to form an accurate base of prior information, you want to get the best quality data you can, and you want that data to match closely with your existing knowledge. To the extent that those things are true, you should feel confident in your conclusions.
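Here's a small sketch of that heuristic with hypothetical numbers: the same reasonably strong prior, updated once on data that agrees with it and once on data that contradicts it:

```python
# How prior-data agreement shows up in the math: same prior, two datasets.
from scipy.stats import beta

def posterior_summary(prior_a, prior_b, heads, tails):
    """Conjugate Beta-binomial update: Beta(a, b) prior plus coin-flip data."""
    post = beta(prior_a + heads, prior_b + tails)
    lo, hi = post.ppf([0.025, 0.975])
    return post.mean(), lo, hi

prior = (16, 4)  # a fairly strong prior belief that the rate is around 0.8

for label, heads, tails in [("agreeing", 16, 4), ("conflicting", 4, 16)]:
    mean, lo, hi = posterior_summary(*prior, heads, tails)
    print(f"{label:>11} data: mean {mean:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

When the data matches what you already knew, the posterior tightens around the shared answer; when it clashes, you end up with a wider interval sitting awkwardly between prior and data. That's the statistical version of letting your confidence drop when the evidence surprises you.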
Alternatively, by the frequentist method, you could say that your confidence comes from the data and the process. So you want the best quality data you can, and you want a process that will reliably turn that data into understanding. To the extent that you believe in the data and the process, you should believe in your conclusions.
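Believing in the process is also something you can check directly. A toy simulation, with invented numbers: run the experiment many times and count how often the interval-building procedure captures the truth.

```python
# Auditing the process rather than any single conclusion: how often does
# a 95% Wald interval actually cover the true rate?
import math
import random

def wald_interval(heads, n, z=1.96):
    p = heads / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

random.seed(0)
true_p, n, trials = 0.6, 50, 10_000

covered = 0
for _ in range(trials):
    heads = sum(random.random() < true_p for _ in range(n))
    lo, hi = wald_interval(heads, n)
    covered += lo <= true_p <= hi

# Close to 0.95 means the process earns the belief we place in it; the
# simple Wald interval is known to fall slightly short at small samples.
print(f"coverage: {covered / trials:.3f}")
```

If the coverage comes out well below the advertised 95%, the right move is to trust the process less, and with it every conclusion it produced.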
So that's two good heuristics for calibrating your confidence: "I have good data and it agrees with what I already know", and "I have good data and my method for drawing conclusions from data is solid". Worth noting, of course, that good data is a requirement for both. But how could it not be? If you get bad information, your conclusions are going to be bad.
Another thing to think about is that this provides a nice roadmap for improving the quality of your beliefs, and thus your confidence in those beliefs: increase your knowledge, learn to draw better conclusions from data, and seek out more reliable sources of information.