An interesting way to think about decisions and how to evaluate them is to consider inputs and outputs separately. Inputs are the decisions and actions you control. Outputs are the consequences: the goals you measure but don't necessarily control directly. In certain situations you can come to very different conclusions depending on whether you're measuring input or output.
For example, imagine a startup you're working on doesn't get any traction, runs out of money, and fails. Okay, so you must have done something wrong if you failed. But that scenario only describes the output half of the situation. Perhaps the startup failed because you made mistakes, or perhaps you did the right things but the startup concept itself was just bad. Or, and few startup people seem willing to admit this, maybe you just got unlucky. I think in this case measuring your own actions – the input – is more useful.
On the other hand, imagine you have a friend who is about to drive home drunk, and you want to stop them. Maybe you talk, ask, plead, convince, argue, and so on, but nothing works. "Well," you think, "I've tried everything," and give up. This is you measuring the input: you've done what you could, and even if that doesn't get the result you wanted, that's good enough. Then someone else just hides your friend's keys. You hadn't thought of that, and it's pretty hard to justify your input-centric view when you didn't achieve the goal and someone else, with the same resources, did.
So what happened there? I'd like to believe that you can always just focus on inputs, because they're the things you control. Trying to maintain control over the things you don't control sounds like a recipe for constant cognitive dissonance and unhappiness. But the problem is that, like in the drunk friend scenario, you don't always have a complete understanding of the input space. Unless you are certain you have a complete understanding of what decisions you could make and how to evaluate them, you can't rely exclusively on measuring input.
I think the right way to incorporate both elements is this: don't judge your actions by your outputs directly; instead, measure your inputs, and use your outputs to evaluate the measurement itself. That way, if your understanding of the input space is incomplete, it will show up in your outputs, but you still won't end up grading your actions on outcomes you don't control.