Sam Gentle.com

Fail open

Contrary to popular opinion, I think movies are a great way to learn about computer security. Someone once observed to me that the funniest thing about Jurassic Park is that when the security system loses power, the locks open. What kind of idiot would design a system like that? Oh, sorry, the power went out, now Newman has all your dinosaur embryos and PS the velociraptors are free. More recently I watched Ex Machina, which had doors that lock automatically when there's a power outage. That's definitely more secure, but pretty creepy when you're locked in your room waiting for the power to come back on.

Those two options are called fail open and fail closed, and the decision between them shows up fairly often in system design. Failing closed means that, if something goes wrong, you default to the most conservative behaviour. So if your login system can't connect to the login database, it should act as if your password is incorrect (ie, not let you in). On the other hand, if your spam detector loses access to its spam database, it should just accept everything. That is failing open: defaulting to the most liberal behaviour.
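To make that concrete, here's a minimal sketch in Python of those two defaults. The database objects and method names are hypothetical stand-ins, not any real API; the point is just where the `except` branch lands when the backing service is unreachable.

```python
# Fail closed vs fail open: what to return when the check itself fails.
# The "databases" below are illustrative stubs, not a real library.

def login_allowed(username, password, db):
    """Fail closed: if we can't verify the password, deny access."""
    try:
        return db.check_password(username, password)
    except ConnectionError:
        return False  # most conservative behaviour: keep the door shut

def message_accepted(message, spam_db):
    """Fail open: if we can't check for spam, accept the message."""
    try:
        return not spam_db.is_spam(message)
    except ConnectionError:
        return True  # most liberal behaviour: let the mail through


class DownDatabase:
    """Simulates a backing service that's unreachable."""
    def check_password(self, username, password):
        raise ConnectionError("login database unreachable")

    def is_spam(self, message):
        raise ConnectionError("spam database unreachable")


db = DownDatabase()
print(login_allowed("alice", "hunter2", db))  # False: fail closed
print(message_accepted("hello there", db))    # True: fail open
```

Same shape of code, opposite defaults; the only design decision is which value the error handler returns.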

The way to decide between them usually rests on the tradeoff between false negatives and false positives under the circumstances. Losing legitimate email is way worse than getting the occasional Nigerian business proposal. On the other hand, letting the wrong person into your account is way worse than not letting the right person in. And, as should be obvious, accidentally containing dinosaurs too well is far preferable to the opposite.

There are some important human factors, too. Failing open sometimes means that systems just get ignored rather than fixed when they fail. When a smoke detector runs out of batteries and stops working, it still behaves exactly like a properly functioning smoke detector nearly all the time. That's why, instead of going quietly, they fail closed and start beeping obnoxiously. Of course, the flip-side is that a fail-closed system tends to get disabled or bypassed when its strictness gets in the way. Too much obnoxious beeping just means you pull the battery out earlier.

I think of our internal filter as an example of just such a system. Before we say something, do something, create something, release something, we want to make sure it's good enough. Of course, some people say "just don't worry if it's good enough", but to me that's a classic contextual belief that only makes sense if you already have a relatively well-functioning filter. Nobody says or does things with zero consideration for whether those things are any good. But I do think you see a lot of difference in how people react when they're not sure.

I've noticed that some people tend to let their filters fail open. If they aren't sure about the thing they're saying, they'll say it anyway. If they're not sure whether they're singing the right note, they'll sing louder. In the absence of feedback, they go with the most liberal, optimistic behaviour. By contrast, others tend to fail closed. If they don't know, they stay quiet until they know. If they feel uncertain whether the thing they're doing is good enough, they just won't do it. Why take the risk?

And risk is really what it's about, because in some cases the consequences of your filter being wrong can be pretty significant. If you're a politician or a celebrity, or even just in a conversation with a group of people you don't know, failing open could mean saying or doing something that you can't take back. But in many situations I feel like that risk is exaggerated; you're not going to lose all your friends for saying something dumb, or have everyone hate you because you made something bad.

It's for this reason that I recommend failing open when you can. Failing closed is safer, yes, but it's important to remember that you don't just lose when you do something and it's bad, you also lose every time you don't do something and it would have been good.