The lockdown cycle
I've been thinking recently about the XKCD sandboxing cycle – briefly, that we seem to repeatedly come up with new container technologies that isolate systems from each other for security, and then new networking systems that join them back together for flexibility and convenience. I'd like to take a broader view.

Fundamentally, security is about dividing actions into two sets: things you should do, and things you shouldn't do. Who "you" is, what "things" they are, what "do"ing means, and how "should" is defined and enforced are... well, complex enough to require a whole industry. But it all derives from that fundamental partition of should-ness.
If everyone should do everything, you don't need security. This sounds facile, but in practice it's a useful way to think about situations that are naturally limited in who or what you have to consider. I don't need to think about a security model for my toilet because most people don't want to use my toilet, and also it's a toilet. If I had a very popular toilet, or my toilet were made of gold, it would be a different story.
If nobody should do anything, you also don't need security. The most secure system is a bonfire: the irreversibility of entropy is the only fundamentally-unbreakable encryption. More practically, though, end-to-end encrypted systems use this principle. The server is secure because it's not secure: it's just passing noise back and forth. If you gain illicit access to that noise... have fun, I guess?
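To make that concrete, here's a minimal sketch using the browser's Web Crypto API (the relayStore map is a stand-in for a hypothetical relay server, not any real service): the endpoints encrypt and decrypt, and the thing in the middle only ever holds bytes it can't interpret.

```typescript
// Minimal sketch: the "server" is a dumb store of bytes it can't read.
// Assumes an environment with the Web Crypto API (browsers, Node 19+).
// relayStore is illustrative only.
const relayStore = new Map<string, { iv: Uint8Array; ciphertext: ArrayBuffer }>();

async function sendMessage(key: CryptoKey, id: string, text: string) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(text)
  );
  // All the relay ever sees is noise: iv + ciphertext.
  relayStore.set(id, { iv, ciphertext });
}

async function receiveMessage(key: CryptoKey, id: string): Promise<string> {
  const { iv, ciphertext } = relayStore.get(id)!;
  const plaintext = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
  return new TextDecoder().decode(plaintext);
}

// Only endpoints holding the key can make sense of what's stored.
const key = await crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false, [
  "encrypt",
  "decrypt",
]);
await sendMessage(key, "msg-1", "meet at noon");
console.log(await receiveMessage(key, "msg-1")); // "meet at noon"
```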
But usually someone should do something, and a secure system allows them to do it without allowing people to do things they shouldn't. Solving that problem can be very difficult, but nowhere near as difficult as figuring out exactly what problem you're trying to solve in the first place. Where exactly is the line between should and shouldn't?
The web has this problem in a big way. On the one hand, developers should be able to build powerful software that can do things like play sounds, store files, send notifications, run in the background, and give haptic feedback. On the other hand, advertisers and malware authors shouldn't be able to do things like... play sounds, store files, send notifications, run in the background, and give haptic feedback.
So our line between should and shouldn't is obvious: who's doing it. We'll just ask each developer whether they're a nice person building good software, or a heinous evildoer foisting crapware upon the innocent. And, of course, the evildoers will respond "it's not crapware, it's valuable metric-driven high-engagement interactive content that provides users the brand awareness they so desperately crave". Back to the drawing board on that one.
Instead, we figure out what kinds of things good people do, and what kinds of things bad people do. Maybe regular developers mostly store data about their own site, whereas advertisers need to store data in a way that they can harvest across many sites. Maybe regular developers don't mind waiting until you've clicked something before they start vibrating or playing sound, but malware authors want your attention right away.
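Browsers already encode heuristics like these. As a rough sketch (assuming a browser that supports the Vibration and Notifications APIs; the exact gating rules vary by browser and version), attention-grabbing calls tend to work only in response to a user gesture:

```typescript
// Rough sketch of user-activation gating; exact rules vary by browser.
// Called on page load, these tend to be ignored or quietly denied:
navigator.vibrate?.(200);                  // usually a no-op without a user gesture
// Notification.requestPermission();       // many browsers down-rank or deny unprompted requests

// Called from a click handler, the same capabilities are allowed:
document.querySelector("#notify-me")?.addEventListener("click", async () => {
  navigator.vibrate?.(200);                // honored: the user just interacted
  const permission = await Notification.requestPermission();
  if (permission === "granted") {
    new Notification("Thanks!", { body: "You asked for this one." });
  }
});
```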
Now, of course, there may be reasons why a regular developer might want to do malware-looking things, or a malware author might find a way to misuse regular-looking things. So we have to draw that line very elaborately and very carefully. How do we do that? We look at what people are doing and turn that into the definition of what they should be doing. We take the current way the system is being used, and we lock it down.
Unfortunately, this is a ratchet that only tightens. As bad people figure out things they're allowed to do and misuse them, those things become disallowed. But how can good people use something that's disallowed in order to convince the system that it's good? The coastline between good and bad becomes increasingly complex and rigid.
Ah, but light at the end of the tunnel! Because somebody has figured out how to build a new system inside the old system! So we can make our new system without so many complex restrictions, because it's just for this new stuff we're building, and all the old, important stuff is stored in the old, rigid, secure system.
Wait, what's happening? Our new system's flexibility means people are making more and more things in it, using it for things we never imagined, and putting important stuff in there that would be a really good target for the evildoers that are rapidly beginning to examine our new system for weak points? Oh nooo... guess we need to lock it down.
Each stage of the lockdown cycle codifies whatever people should be doing at the time it was locked down. Once upon a time, the thing developers should be doing was distributing physical discs with programs on them that were updated once a year. So we locked that down with anti-virus tools and systems for denying/allowing certain programs.
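The shape of that era's lockdown, very roughly (an illustrative sketch only; real antivirus and application-control products are vastly more elaborate), was a list of program hashes you'd allow and a list you'd deny:

```typescript
// Illustrative sketch of hash-based allow/deny listing.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const allowList = new Set<string>([/* SHA-256 digests of approved programs */]);
const denyList = new Set<string>([/* SHA-256 digests of known malware */]);

function mayExecute(path: string): boolean {
  const digest = createHash("sha256").update(readFileSync(path)).digest("hex");
  if (denyList.has(digest)) return false;   // known bad: block
  if (allowList.has(digest)) return true;   // known good: allow
  return false;                             // unknown: default-deny, the ratchet position
}
```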
But when everyone started using applications with internet access, the real problem became vulnerabilities in those applications. You might not be able to install a virus, but you can write an evil Word document or PDF, or maybe send a specially-crafted email that causes your target to send another specially-crafted email... Antivirus software became much less useful, because the complexity of internet-connected applications made them the new system, and the new thing you shouldn't do is have your fancy and legitimate program accidentally running arbitrary code from the internet.
And then, of course, the web app era came along, built entirely on the basis of deliberately running arbitrary code from the internet. What a revolution, but also what a security and privacy nightmare. Luckily, we're starting to get that whole mess straightened out, but in the process we're getting very specific about what kinds of things we expect people to use the web for.
Eventually, those things may become stifling enough that the next, un-locked-down thing begins to flourish.