I've been thinking recently about the XKCD sandboxing cycle – briefly, that we seem to repeatedly come up with new container technologies that isolate systems from each other for security, and then new networking systems that join them back together for flexibility and convenience. I'd like to take a broader view.
Fundamentally, security is about dividing actions into two sets: things you should do, and things you shouldn't do. Who "you" is, what "things" they are, what "do"ing means, and how "should" is defined and enforced are... well, complex enough to require a whole industry. But it all derives from that fundamental partition of should-ness.
If everyone should do everything, you don't need security. This sounds facile, but in practice it's a useful way to think about situations that are naturally limited in who or what you have to consider. I don't need to think about a security model for my toilet because most people don't want to use my toilet, and also it's a toilet. If I had a very popular toilet, or my toilet were made of gold, it would be a different story.
If nobody should do anything, you also don't need security. The most secure system is a bonfire: the irreversibility of entropy is the only fundamentally-unbreakable encryption. More practically, though, end-to-end encrypted systems use this principle. The server is secure because it's not secure: it's just passing noise back and forth. If you gain illicit access to that noise... have fun, I guess?
But usually someone should do something, and a secure system allows them to do it without allowing people to do things they shouldn't. Solving that problem can be very difficult, but nowhere near as difficult as figuring out exactly what problem you're trying to solve in the first place. Where exactly is the line between should and shouldn't?
The web has this problem in a big way. On the one hand, developers should be able to build powerful software that can do things like play sounds, store files, send notifications, run in the background, and give haptic feedback. On the other hand, advertisers and malware authors shouldn't be able to do things like... play sounds, store files, send notifications, run in the background, and give haptic feedback.
So our line between should and shouldn't is obvious: who's doing it. We'll just ask each developer whether they're a nice person building good software, or a heinous evildoer foisting crapware upon the innocent. And, of course, the evildoers will respond "it's not crapware, it's valuable metric-driven high-engagement interactive content that provides users the brand awareness they so desperately crave". Back to the drawing board on that one.
Instead, we figure out what kinds of things good people do, and what kinds of things bad people do. Maybe regular developers mostly store data about their own site, whereas advertisers need to store data in a way that they can harvest across many sites. Maybe regular developers don't mind waiting until you've clicked something before they start vibrating or playing sound, but malware authors want your attention right away.
Now, of course, there may be reasons why a regular developer wants to do malware-looking things, or ways for a malware author to do malware with regular-looking things. So we have to draw that line very elaborately and very carefully. How do we do that? We look at what people are doing and turn that into the definition of what they should be doing. We take the current way the system is being used, and we lock it down.
Unfortunately, this is a ratchet that only tightens. As bad people figure out things they're allowed to do and misuse them, those things become disallowed. But how can good people use something that's disallowed in order to convince the system that it's good? The coastline between good and bad becomes increasingly complex and rigid.
Ah, but light at the end of the tunnel! Because somebody has figured out how to build a new system inside the old system! So we can make our new system without so many complex restrictions, because it's just for this new stuff we're building, and all the old, important stuff is stored in the old, rigid, secure system.
Wait, what's happening? Our new system's flexibility means people are making more and more things in it, using it for things we never imagined, and putting important stuff in there that would be a really good target for the evildoers that are rapidly beginning to examine our new system for weak points? Oh nooo... guess we need to lock it down.
Each stage of the lockdown cycle codifies whatever people should be doing at the time it was locked down. Once upon a time, the thing developers should have been doing was distributing physical discs with programs on them that were updated once a year. So we locked that down with anti-virus tools and systems for denying/allowing certain programs.
But when everyone started using applications with internet access, the real problem became vulnerabilities in those applications. You might not be able to install a virus, but you can write an evil Word document or PDF, or maybe send a specially-crafted email that causes your target to send another specially-crafted email... Antivirus software became much less useful, because the complexity of internet-connected applications made them the new system, and the new thing you shouldn't do is have your fancy and legitimate program accidentally running arbitrary code from the internet.
And then, of course, the web app era came along, built entirely on the basis of deliberately running arbitrary code from the internet. What a revolution, but also what a security and privacy nightmare. Luckily, we're starting to get that whole mess straightened out, but in the process we're getting very specific about what kinds of things we expect people to use the web for.
Eventually, those things may become stifling enough that the next, un-locked-down thing begins to flourish.
When organising your behaviour to achieve a goal, there are two equal and opposite skills: composing and improvising.
Composing means making decisions as early as possible. You anticipate the outcome you want, figure out the paths to that outcome, and construct a plan for getting there. You're making decisions at the point of maximum influence: with more time, your decisions can compound for longer, and you have a lot more freedom to choose what to do and when to do it.
Composing requires bringing the future into the present, making decisions there, and then projecting those decisions back out into the future. Both those transformations create uncertainty and risk, and in that sense composing is quite fragile. You start with a mix of incomplete information and conjecture, turn that into a bunch of assumptions, and turn those assumptions into behaviour.
But as difficult as predicting the future is, the real difficulty is enacting it. Every plan is a constraint on your future behaviour: a commitment to do one thing instead of another at a point in time. But when the plan meets reality – or, rather, doesn't meet it – those constraints can become stifling. Obviously, a decision based on incorrect assumptions should be changed, but any replacement decision is being made with less time, less influence, and less freedom. And what if that new decision conflicts with other, more careful decisions you've already made?
Improvising, on the other hand, means making decisions as late as possible. You might still anticipate a general space of outcomes, but not necessarily a specific one. Instead of preparing by deciding in advance, you prepare by putting yourself in the best position to make decisions as they arise. These decisions are made at the point of maximum information: everything is done but the decision itself.
The key to improvising is flexibility. To make a good decision in the moment, you have to keep a range of options open, and be able to choose quickly and freely. You don't need to anticipate the future, you just wait until it's close enough to the present that the right decision is evident. In that sense, improvising is quite robust – to almost everything except planning.
Composing and improvising are, if not exactly opposites, at least mutually incompatible. Composing relies on future behaviour being determined by present decisions, and improvising relies on present behaviour being determined by present circumstances. To do one better, you have to do the other one worse.
There are situations where composing doesn't work, like anything rapidly-changing or unpredictable, and situations where improvising doesn't work, like anything that requires directed action on a timescale longer than a few hours. And it is possible to use one to fill the gaps of the other, but not really possible to use both at the same time.
This means that to practice composing when you're used to improvising, or practice improvising when you're used to composing, you need to give up the skill that works best for you at exactly the time you'd tend to use it. You have to get worse before you get better, because the skill you already know actively sabotages the skill you're trying to learn.
There's a kind of argument that starts with one person saying something like "social media companies should ban disinformation", and the other saying "but once you start banning speech who knows what it will include". Then the first person says "well I know what it will include: things that aren't true", the second says "ah, but what is true?" and the prophecy is complete.
This looks like an argument about censorship, but really it's an argument about abstraction. Chances are these people both agree that disinformation is bad, that banning speech is bad, that disinformation is untrue, and that truth is not universally-agreed-upon. The points that are raised aren't in conflict, but the people are talking past each other because they can't agree on which level of abstraction is appropriate.
It's easy to see a more abstract concept as more enlightened, or even more virtuous. But really the best way to understand it is that a more abstract concept is more ignorant. The more abstract your point, the less you need to know for it to be true. An abstract point applies in more situations, because it ignores the details that make it specific to any one situation. Ignorance and generality are two sides of the same coin.
When we make an argument like "banning speech is bad", we ignore what specific speech we're talking about. If we know for certain that the speech in question is loud demonic screaming outside your bedroom window at 4am, banning it is just fine. But if we don't know whether it's demonic screaming or criticism of political parties, it's better to risk the occasional 4am exorcism if it allows us a functioning democracy.
This ignorance is sometimes because we genuinely don't know, like when we're trying to find a rule for situations we haven't encountered. But other times it's a deliberate choice to forget. We know that 4am screaming is bad, but we don't trust the people who determine what constitutes 4am screaming. Better to pretend we don't know and find a more general rule.
But abstraction's deliberate ignorance can also be a disingenuous tactic. Yes, we might not be able to universally define misinformation, but it's not hard to define misinformation for less-universal domains like 5G-induced diseases, cancer cures made from shark bits, or whether vaccines work. Pretending otherwise is just trying to abstract away the truth.
Despite the towering temptations of metaphysics, we do live in an actual physical universe, with actual things we can test and determine to be true or false. We don't need to find arguments that cover the other universes; they can fend for themselves.
And, I mean, they'll have to if they're going to fight off the hordes of autistic 5G corona-sharks.
You know that old trick for finding your way out of a maze? You keep your left hand touching the wall the whole time, leading you to follow a series of left turns that will, inevitably, bring you to the exit (as long as the maze's passages don't contain any loops, anyway). It's a fascinatingly simple algorithm, and an interesting way to understand it is as a kind of depth-first search.
A depth-first search is one way of exploring a space where, each time you have a choice, you pick the first option. Repeat until you get stuck, at which point you go back to your most recent choice and pick the second option. If you're out of options, you go back further. It's called depth-first because you go down each path as far as you can before exploring any other options.
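If it helps to see that bookkeeping written out, here's a rough sketch in Python. The maze-as-a-dictionary representation, the names and the little forked example are all made up for illustration; the point is just where the memory lives.

```python
# A minimal depth-first search with its memory written out explicitly.
# The maze is just a dictionary: each junction maps to the passages
# leading out of it, in the order you'd encounter them.

def depth_first_search(maze, start, goal):
    stack = [start]        # the list of choices that got us here
    visited = {start}      # junctions we've already stood in
    while stack:
        here = stack[-1]   # our most recent choice
        if here == goal:
            return list(stack)          # the path we followed
        for option in maze[here]:       # "first option", "second option"...
            if option not in visited:
                visited.add(option)
                stack.append(option)    # go deeper
                break
        else:
            stack.pop()                 # dead end: back up one choice
    return None                         # no path at all

# A tiny example: a forked passage where the left branch is a dead end.
maze = {
    "entrance":   ["left fork", "right fork"],
    "left fork":  ["entrance"],
    "right fork": ["entrance", "exit"],
    "exit":       ["right fork"],
}
print(depth_first_search(maze, "entrance", "exit"))
# ['entrance', 'right fork', 'exit']
```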
To use this algorithm, though, you need to keep track of where you've been. After all, when I say "go back to your most recent choice", what was it? And when I say "first option", "second option", etc, how do you know which one you're up to? You need to keep some kind of list so you can know where to go next. But, for some reason, the left-hand-following-the-wall solution doesn't use one – how can that be?
Well, imagine a Y shaped passage, where you enter from one side and the other two are dead ends. If you keep your left hand on the wall, you'll take the left fork first, then when you hit the dead end, you follow the wall around to the right fork, which leads you around the dead end back out the way you came.
Play that again in slow motion: the wall leads you back to your most recent choice when you hit a dead end, and the wall connects each option to the one after it. The reason you don't need to keep a list of decisions is that the wall and the list are equivalent. Or, to put it another way, somebody already stored that list in the form of a wall.
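And here, for contrast, is a sketch of the left-hand rule itself, again in Python and again with a made-up toy maze, start and exit. The only state the walker carries is a position and a facing direction; there's no stack and no visited set, and it assumes the passages contain no loops and the exit is reachable (otherwise it will happily walk forever).

```python
# Left-hand rule on a grid maze: '#' is wall, ' ' is passage.
# At each step, try to turn left, then go straight, then right,
# then turn around. No stack, no visited set: just where we are
# and which way we're facing.

MAZE = [
    "#######",
    "#     #",
    "# # # #",
    "# # # #",
    "#######",
]

DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # north, east, south, west

def wall_follow(maze, start, goal, facing):
    row, col = start
    path = [start]              # recorded only so we can look at it afterwards
    while (row, col) != goal:
        for turn in (-1, 0, 1, 2):          # left, straight, right, back
            d = (facing + turn) % 4
            dr, dc = DIRS[d]
            if maze[row + dr][col + dc] != "#":
                facing = d
                row, col = row + dr, col + dc
                path.append((row, col))
                break
    return path

# Start at the bottom of the middle corridor, facing north; the exit is
# at the bottom of the right corridor. The walker wanders all the way
# down the left-hand dead end and back out again before finding it.
print(wall_follow(MAZE, start=(3, 3), goal=(3, 5), facing=0))
```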
The memory requirement of the algorithm hasn't been overcome, we're just using the physical world as memory. Not in the symbolic way that marks on paper are physical, but profoundly, concretely. The walls create a maze, but in doing so they also store a path that explores the maze.
One of the strangest things about video games is the way they encourage you to take the longest path. Most games have some system of progression: a storyline, levels, or a map you move through methodically. However, there are also many things to explore in each area, and when you move on you probably won't return to them. To get the most out of the game, you actually want to avoid winning for as long as possible. In that sense, winning is a kind of loss.
I've started noticing areas of my life where I seem to avoid winning. Rather than a desire to lose, I think this reflects a desire to explore, an unwillingness to move on too soon. If I finish this today, I know there will be parts of it still unexamined. Questions gone unanswered. Experiences left unhad. Why would I want to move forward, if it means leaving important things behind?
But, unlike in a video game, progress is not linear, and not even in a consistent direction. There is no helpful-yet-insistent arrow ushering you from one idea to the next. What is finished today can be even more finished tomorrow. Winning doesn't have to mean moving on. And moving in any direction can still be progress, even if it's revisiting old ideas.
This is a difficult lesson to learn, at least for me. My mindset is much more geared towards discovering the answer, solving the problem, slaying the dragon, and moving on to undiscovered/unsolved/unslain pastures. It's a model of progress as completion. But I think there is a lot more to be found in progress as iteration, refinement and steady accumulation.
Ironically, the key to finishing things may be knowing that they are never truly finished, and thus there is no best ending to hold out for.