This is an attempt at an instant docs idea I've wanted to try out for a while. I mostly Google for reference documentation because it ends up being the fastest way to find it. What I'd rather have is a live-search that displays just the minimal documentation I need as quickly as possible. Part of that was figuring out a good way to deal with all that documentation data. I ended up leaning pretty heavily on PouchDB and CouchDB, and I'm fairly satisfied that they were the right tools for that job.
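To give a rough idea of the shape of the data handling, here's a minimal sketch of the kind of prefix search PouchDB makes easy. The database name, document shape, and replication URL are placeholders, not the prototype's actual code:

```typescript
import PouchDB from 'pouchdb';

// Hypothetical document shape; the real prototype's schema isn't described here.
interface DocEntry {
  name: string;
  summary: string;
}

// A local database that replicates the documentation set from a CouchDB instance.
// Both the database name and the URL are placeholders.
const local = new PouchDB<DocEntry>('docs');
local.replicate.from('http://localhost:5984/docs');

// Live search by prefix: if each document's _id is the symbol name, a
// startkey/endkey range on allDocs gives cheap prefix matching.
async function search(prefix: string) {
  const result = await local.allDocs({
    include_docs: true,
    startkey: prefix,
    endkey: prefix + '\ufff0', // standard CouchDB trick for "everything starting with prefix"
  });
  return result.rows.map(row => row.doc);
}

search('Array.pro').then(docs => console.log(docs));
```

Replicating the whole documentation set locally is presumably a big part of what makes a live-search like this feel instant: once the data is on the client, nothing waits on the network.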
Time: 3 hours.
Although I did fewer prototypes than I wanted to, I also think I did a better job of keeping this one more modest than my previous ones. I resisted the urge to make the scope wider than it needed to be, and I aggressively cut tempting time sinks like supporting multiple documentation sources (though the design should make it easy to add that later). I want these prototypes to be small enough to explore one idea in code, the same way the writing I do here is small enough to explore one idea in words.
I obviously haven't figured that out well enough yet, and I still find myself leaving the prototypes until too late in the week despite the decrease in size. I'm hopeful this is a temporary blip mostly caused by the extra effort I put in to get the prototypes done last week and the subsequent change in focus. I'm going to just commit to the same thing again for next week, and if I'm still having trouble then I'll look into a more sophisticated strategy.
I mentioned before that for many problems you can be sure that you'll find a solution, just not how good that solution will be. You have to find the best solution you can by making tradeoffs between the different choices. These are usually called optimisation problems, and there's a lot of research into various classes of optimisation problem and ways to solve them.
I've also said that I think our brains are optimisation machines, sometimes even to their own detriment. For better or worse we seem to do much better on optimisation problems than on, for example, formal reasoning, where we are comparative dunces. I suspect this is because optimisation problems fit our evolved capabilities better than formal logic does. But even though we can often find a good solution to difficult problems with lots of constraints, not all good solutions are equal.
The stable marriage problem is a fairly simple but widely applicable optimisation problem: it covers any situation where two groups want to match up in pairs and each member has a ranked list of preferences over the other group. Dating is the canonical example, but the same structure shows up in matching medical students to hospitals, job searching, and network management. The best-known algorithm, Gale–Shapley, is simply that each member of one group asks their highest remaining preference, and members of the other group accept an offer if it's the best one they've had so far (bumping a previous offer if necessary). You do this over and over again until everyone is matched.
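As a sketch of how little machinery that takes, here's a minimal Gale–Shapley in code, with indices standing in for people and preference lists invented for the example:

```typescript
// Gale–Shapley: proposers ask in preference order, reviewers keep the best offer so far.
function galeShapley(
  proposerPrefs: number[][], // proposerPrefs[p] is p's ranking of reviewers, best first
  reviewerPrefs: number[][], // reviewerPrefs[r] is r's ranking of proposers, best first
): number[] {                // returns match[p] = reviewer matched with proposer p
  const n = proposerPrefs.length;
  const nextChoice = new Array(n).fill(0);             // next reviewer each proposer will ask
  const reviewerMatch = new Array<number>(n).fill(-1); // proposer currently held by each reviewer

  // rank[r][p] = how reviewer r ranks proposer p (lower is better)
  const rank = reviewerPrefs.map(prefs => {
    const r = new Array(n);
    prefs.forEach((p, i) => (r[p] = i));
    return r;
  });

  const free: number[] = Array.from({ length: n }, (_, p) => p);
  while (free.length > 0) {
    const p = free.pop()!;
    const r = proposerPrefs[p][nextChoice[p]++]; // p's best reviewer not yet asked
    const current = reviewerMatch[r];
    if (current === -1) {
      reviewerMatch[r] = p;                      // reviewer was free: accept
    } else if (rank[r][p] < rank[r][current]) {
      reviewerMatch[r] = p;                      // reviewer prefers p: bump the old offer
      free.push(current);
    } else {
      free.push(p);                              // rejected: p will ask the next preference
    }
  }

  const match = new Array<number>(n);
  reviewerMatch.forEach((p, r) => (match[p] = r));
  return match;
}

// Example with three members in each group, indices 0..2.
console.log(galeShapley(
  [[0, 1, 2], [1, 0, 2], [0, 2, 1]],
  [[1, 0, 2], [0, 1, 2], [2, 1, 0]],
));
```

Run it with the two preference tables swapped and you get the matching that favours the other group instead, which is exactly the asymmetry below.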
Gale–Shapley is guaranteed to find a solution that is stable, in the sense that nobody wants to change: anyone you'd rather be matched with is already matched with someone they prefer to you. However, there are often multiple such solutions, and in that case some solutions will give better outcomes for some people, even though on the whole everyone gets a good enough result. In fact, Gale–Shapley guarantees that one group gets the best possible outcome: the group doing the asking. The group that accepts or rejects the offers appears to be in a position of power, but actually receives the worst outcome that is still stable.
That's a fairly significant result in its own right, given how similar Gale–Shapley is to many real-world preferential matching systems. A job-hunting system where you approach your preferred employers in order will get you the best result; a system where employers work through a pool of candidates in order will get the employers the best result. In dating, too, you want to be the one doing the asking. Being asked perhaps feels more flattering and less risky, but it means a less optimal outcome.
It's also worth considering this pattern beyond the stable marriage problem. In every optimisation problem I am familiar with, the order matters. The parameters that go in first are most likely to get the best results, and the ones that go in last get the worst. This is for the simple reason that the earlier parameters can use more information. In the case of the SMP, they choose among all the candidates. Everything that goes afterwards starts by fitting around what's there already. In the event of multiple solutions, this biases the outcome.
The simple advice I would take away from this is to make sure that your optimisation order matches your preferences. If you want to have a good work life, a good home life, and a healthy social life, what order do those go in? Because while you might be able to have all of those things, you are unlikely to have all of those things equally. What you work on first will get the best outcome, and what you work on last will get the okayest outcome.
There's a classic situation that plays out in tourist hotspots across the world. An English-speaker is trying to talk to someone who doesn't speak English. "How much is that?", they ask. The non-English-speaker looks confused; they don't speak English. "How", the English-speaker says, "Much. Is. That?" Still nothing. "HOW", wild gesticulating. "MUCH", spittle flying. "IS", face turning red. "THAT", indiscriminate shouting. Tragically, despite the continual increase in volume and emotion, comprehension is not achieved. What happened here?
I think of this as an example of abstract and construct gone awry. In many cases, speaking louder does help if someone hasn't understood you: if the room is loud and you're speaking too quietly, if you have a propensity to mumble, or if your conversational partner isn't paying much attention. However, the relationship between volume and understanding is not so simple; we abstracted (more volume -> more comprehension), and the construction we built from it (maximum volume at unsuspecting foreigners) is a caricature.
But this particular sub-pattern applies to a lot of situations. For example, it's common to punish dogs far too harshly because of a lack of understanding of dog psychology; all a dog usually needs is a minor punishment delivered in the correct way at the correct time. Instead, we try the wrong minor punishment, and when it doesn't work decide that the punishment must be too small. It is true that too small a punishment is not effective, but somehow that small correlation dominates any other understanding. Instead of trying to figure out a better way to punish, we just find a harsher way to punish.
You see similar things in particularly heartless suggestions for social policy. There's less incentive to be poor if being poor is more miserable, so we should make being poor as miserable as possible! Illegal immigrants won't want to come here if we treat them terribly when they arrive! If every crime got the death penalty, there'd be no crime! You can see in all of these a small kernel of truth, that in many circumstances better or worse treatment does incentivise behaviour. There is such a thing as being too soft on crime, too laissez-faire on immigration, too willing to shield people from the consequences of bad decisions.
But that simple understanding completely ignores the fundamental mechanics of the situation. You might equally say "if we shoot everyone who can't levitate, everyone will learn how". It's true that impending death would motivate people to try to levitate, but not true that levitation would be the final result. Human mechanics also come into play; you aren't likely to get good results saying "I'll shoot anyone who doesn't live a carefree, low-stress lifestyle". The weekly stress inspections/executions may turn out to have the opposite effect, even though the incentives are all in the right direction.
So why do we do this? Why, when something doesn't work, do we ignore the possibility that we don't understand it well enough and instead just do the wrong thing harder? I think the answer is that we like things to be easy. We have a strong bias for simple answers, which at its worst means superficial answers. If we have a simple answer that looks right some of the time, we will hold on to that answer as long as we can, far past the point where it stops predicting reality. And the simplest answer of all is just a linear correlation.
But if we can abandon that clear, simple and wrong solution, maybe we can find a more complex solution based on deep understanding. And if we gain that deep understanding, maybe we can stop shouting so much.
You hear a lot about burnout, the phenomenon where chronic overwork, stress or resentment leads to increasing unhappiness and an eventual breakdown. It's happened to friends, it's happened to me, and by all accounts it's a fairly widespread problem both in the software industry and elsewhere. But I think what we call burnout isn't a single thing, but actually two distinct phases, one acute and one chronic, and part of what makes burnout such a tricky problem is that those phases have opposite solutions.
The first, acute phase is a wildly aversive reaction to your current environment. This is the point where the building unhappiness with your situation finally gets too much. You were probably getting less and less effective as your happiness decreased anyway, but at some point you just can't deal with it anymore. It goes from being a bad situation to an intolerable situation, which usually manifests as quitting, avoiding, or aggressively underperforming at your work. In this case, I would say the burnout is a perfectly reasonable response: you're in a bad place, you had the warning signs telling you to get out of the bad place, you didn't do anything, and now your hand is being forced.
The second, chronic phase comes afterwards, when the original problem has gone away. Now there's nothing directly stopping you from being productive, but you can't bring yourself to go back to doing anything useful. It's like there's an invisible wall between you and the thing you used to enjoy, and every time you go to do it just about anything else seems like a better idea. This stage is self-perpetuating: the more you don't do, the more you get used to not doing. I believe this phase is really characterised by a bad association that has formed, at first because of the emotional strength of the first-phase burnout, and eventually through habit.
And this is why I think the advice about burnout can be very confusing. If you're in that first phase, the common advice of "take a break, go on holiday, get as far away from the situation as possible" is absolutely correct. The immediate acute problem won't go away until the reason for it is removed. However, in the second phase, the most important thing is to not take a break, but to pick yourself up and get back to doing something. That's the only way to fix the bad association that you've built with your work.
Of course, it's not possible to remove an association, so what you have to do instead is build a new, better one. As with many other things, Feynman had this one right: rather than go back to what you were doing before, you need to rediscover the positive feelings that brought you to your work in the first place.
If there's one thing I love as a software developer, it's a good abstraction. It takes a large, complex set of things and turns them into a smaller, simpler set. Maybe you have thousands of different colours that are hard to reason about until you realise you can represent them all as mixtures of red, green and blue. Or you have all these different chemical elements but they all have properties seemingly at random, until you realise you can lay them out periodically by atomic number and the properties line up.
I've heard abstractions that don't completely encompass the things they're meant to represent described as leaky, with the understanding that all abstractions leak. To me, that is perhaps a bit of an abstraction-centric view. I like to think of it in terms of information theory: there is some fundamental amount of information that you are trying to compress down into a smaller amount of information. The extent to which you can do that depends on how much structure is in the underlying information, and how much you know about that structure.
If I give you a piece of paper with a list of a million numbers written on it that look like 2, 4, 6, 8, and so on, I have provided you with no more information than if the paper said "even numbers up to 2 million". The abstraction, in that case, was really just a more efficient way of representing the information. On the other hand, if I gave you that same piece of paper and it was mostly the even numbers up to 2 million but some numbers were different, you've got a hard choice to make: either you keep track of the (potentially large number of) exceptions, or you just remember "it's even numbers up to 2 million" and accept being wrong some of the time.
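In code, that choice looks something like this toy sketch: the rule alone regenerates every entry perfectly, and every deviation forces you either to store an exception or to be wrong (the exception below is invented for illustration):

```typescript
// The lossless version: a rule that regenerates the whole list exactly.
const rule = (i: number) => 2 * (i + 1); // the i-th number on the paper, if it really is just the evens

// The messier version: the same rule plus a table of entries that break it.
const exceptions = new Map<number, number>([[4, 7]]); // made-up exception

const entry = (i: number) => exceptions.get(i) ?? rule(i);

console.log(entry(3)); // 8 — recovered from the rule alone
console.log(entry(4)); // 7 — only recoverable because we paid to store the exception
```

Drop the exception table and the description gets tiny again, but it's now wrong for entry 4: that's the lossy trade.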
Most real-world abstractions are this kind of lossy compression, for the simple reason that most real-world problems are too complex for the amount of space we're willing to give them. You can prove all sorts of interesting results about the probabilities of coin flips, but you have to ignore the possibility that the coin will land on its edge. These simplifications throw away information in the hope that you can compress your understanding much more, at the cost of only occasional errors.
So I don't believe that all abstractions leak. I think we often choose to make our abstractions imperfect to save space, or because we don't know enough of the underlying structure to describe it succinctly. However, it is possible to make a perfect abstraction, we just don't think of them as abstractions. An abstraction that completely describes the underlying information is just the truth.