Going meta
A while back I read the most amazing NASA report. It was just after Lockheed Martin dropped and broke a $200+ million satellite. The sort of thing that you might consider fairly un-NASA-like given their primary mission of keeping things off the ground. They were understandably pretty upset and produced one of the greatest failure analyses I've ever seen.
It starts by saying "the satellite fell over". So far so good. Then "the satellite fell over because the bolts weren't installed and nobody noticed". Then "nobody noticed because the person responsible didn't check properly". Then "they didn't check properly because everyone got complacent and there was a lack of oversight". Then "everyone got complacent because the culture was lax and safety programs were inadequate". And so on. It's not sufficient for them to understand only the first failure. Every failure uncovers more failures beneath it.
It seems to me like this art of going meta on failures is particularly useful personally, because it's easy with personal failures to hand-wave and say "oh, it just went wrong that time, I'll try harder next time". But NASA wouldn't let that fly (heh). What failure? What caused it? What are you going to do differently next time? I think for simple failures this is instinctively what people do, but many failures are more complex.
One of the hardest things to deal with is when you do try something different next time and it doesn't work. Like you say, okay, last time I ate a whole tub of ice cream, but this time I'm definitely not going to. And then you do, and you feel terrible: not only did you fail (by eating the ice cream), but your system ("I won't eat the ice cream next time") also failed. And it's very easy to go from there to "I must be a bad person and/or ice cream addict". But What Would NASA Do? Go meta.
First failure: eating the ice cream. Second failure: the not-eating-the-ice-cream system failed. Okay, we know the first failure from last time: it's because ice cream is delicious. But the second failure is because my plan to not eat the ice cream just didn't seem relevant when the ice cream was right in front of me. And why is that? Well, I guess the ice cream in front of me just feels real, whereas the plan feels arbitrary and abstract. So maybe a good plan is to practice deliberately picking up ice cream and then not eating it, to make the plan feel real.
But let's say that doesn't work. Or, worse still, let's say you don't even actually get around to implementing your plan, and later you eat more ice cream and feel bad again. But everything's fine! You just didn't go meta enough. Why didn't you get around to implementing the plan? That sounds an awful lot like another link in the failure chain. And maybe you'll figure out why you didn't do the plan, and something else will get in the way of fixing that. The cycle continues.
The interesting thing is that, in a sense, all the failures are one failure. Your ice cream failure is really a knowing-how-to-make-ice-cream-plans failure, which may itself turn out to be a putting-aside-time-for-planning failure, which may end up being that you spend too much time playing golf. So all you need to do is adjust your golfing habits and those problems (and some others, usually) will go away.
I think to an extent we have this instinct that we mighty humans live outside of these systems. Like "I didn't consider the salience of the ice cream" is one answer, but "I should just do it again and not screw it up" is another. That line of thinking doesn't make any sense to me, though: your system is a system, and the you that implements it is also a system. Trying to just force-of-will your way through doesn't make that not true; it just means you do it badly.
To me that's the real value of going meta: you just keep running down the causes – mechanical, organisational, human – until you understand what needs to be done differently. Your actions aren't special; they yield to analysis just as readily as anything else. And I think there's something comforting in that.