It's inevitable that you will experience loss. Opportunities disappear, things get lost, projects end or stop, people go away, sometimes for good. To top it all off, at some point you will experience the final, ultimate loss. To say this is an uncomfortable truth is an understatement; it is repulsive to the point where people mostly refuse to consider it at all. Nobody wants to admit that their star developer will be hit by a bus, or their business might fail, or the very consciousness they are using to think this may one day be switched off as easily as a lightbulb. But it happens all the same.
So what would the opposite look like? If instead of avoiding loss, we embraced it – expected it? Planned for it the way we do shopping trips or birthdays? If it wasn't taboo to say "we're all going to die, maybe even today", or "don't forget, team, things could go from great to insolvent in a matter of weeks"?
We actually plan for loss very well when the loss isn't personal and we can see beyond our own aversions. Software systems are often well prepared for loss. Not just designed to avoid loss, though that is also true; systems are usually designed to minimise failures, be redundant, and have backups. But beyond this, resilient software is designed to expect loss of data, loss of connectivity, and even loss of the process itself. Good software has the expectation of its own sudden demise built in as a design goal.
Perhaps the best general advice for building resilient software is to reduce its state. State is a kind of temporary baggage: the information you carry around while doing something and throw away once it's done. Complex software often ends up accidentally holding onto a lot of state, and as a result becomes very sensitive to losing it. You've got a bunch of applications open on your computer right now that will just disappear if it's turned off. But well-designed software tries to minimise temporary state, either by making it permanent or by avoiding it altogether. Achieving this completely, the holy grail, is called being stateless.
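To make the software side of that concrete, here's a minimal Python sketch, with made-up class names rather than anything from a real system, contrasting a counter that keeps its tally only in memory with one that records every event durably and so can be killed at any moment without losing anything.

```python
# Stateful: the count lives only in this process's memory.
# If the process dies, the tally is simply gone.
class InMemoryCounter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count


# Closer to stateless: every increment is appended to durable storage
# immediately, and the current value is derived on demand, so the
# process itself carries nothing it can't afford to lose.
class DurableCounter:
    def __init__(self, path):
        self.path = path

    def increment(self):
        with open(self.path, "a") as f:
            f.write("1\n")  # append-only log of events

    def value(self):
        try:
            with open(self.path) as f:
                return sum(1 for _ in f)
        except FileNotFoundError:
            return 0  # nothing recorded yet
```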
And so too for us. The state we carry is all our incomplete actions, our half-finished projects, our "I should really get around to"s, the people and things we take for granted because we will make it up later. But what if there is no later? What if that last fight was the last fight? This is it, there's no more, and all you have left is whether you can look back, at last, and be happy with it. That may not be true today, but some day, tragically, inevitably, it will be.
To be stateless is to face into that future instead of away from it. To keep your loose ends tied, to leave nothing important unsaid, and to take the opportunities today that could disappear tomorrow. And to sleep soundly in the knowledge that if it all ended tomorrow, you made the best of the time you had.
I wrote a while back about the idea of a 3D unprinter, and mused on the benefits of comparing and combining additive and subtractive processes. These two forms are the classic opposites of manufacturing: additive processes start with nothing and build material up until the object is complete, while subtractive processes start with an existing block of material and cut it down.
This same distinction can be a useful way of thinking about software design. We often speak about libraries and frameworks, which are both vehicles for making reusable code and sharing it with others. These terms are nebulously defined and to some extent a matter of opinion, but to me the difference is additive vs subtractive.
A library is a set of functions you can incorporate à la carte. A math library might include a fast exponentiation function, but that doesn't obligate you to use the square root or trigonometric functions. By contrast, a framework gives you a fully formed set of abstractions suited to a particular domain. Ruby on Rails, for example, provides you with an amazing pre-baked set of ideas distilled from the authors' experience in website development. If your problem is like their problems, you can save a lot of time and effort by going with these prefabricated designs, to the point where many websites can be generated, to a first approximation, with just Rails' setup and scaffolding commands and no code at all.
If you want to do something new with a library, everything's additive; you just write some more code and that's it. With a framework, the abstractions it provides are meant to cover the idea space completely, so it doesn't usually make sense to just add code. Instead, you have to find a place for that code: the existing abstractions might be modified in some way, or even replaced. Regardless, the process is subtractive; you start with the general answer and cut it down to fit your particular case.
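A toy Python sketch might make the contrast clearer. The "framework" below is invented purely for illustration, not Rails or any real framework's API; the point is the direction of control. With the library, you call the code; with the framework, the code calls you.

```python
import math

# Additive (library): pull in exactly the pieces you need and compose
# them yourself. Using sqrt implies nothing about the rest of math.
def hypotenuse(a, b):
    return math.sqrt(a * a + b * b)


# Subtractive (framework): the framework owns the overall shape and
# calls you. Your job is to fill in the slots it defines, or carve
# away the defaults that don't fit.
class RequestHandler:
    """Toy base class; a real framework would do far more."""
    def handle(self, request):
        self.authenticate(request)
        return self.respond(request)

    def authenticate(self, request):
        pass  # default behaviour you may later need to replace

    def respond(self, request):
        raise NotImplementedError


class HelloHandler(RequestHandler):
    def respond(self, request):
        return "hello, " + request


print(hypotenuse(3, 4))                    # 5.0
print(HelloHandler().handle("framework"))  # hello, framework
```

Adding a feature to the library version means writing another function; adding one to the framework version means finding, or carving out, a place for it inside handle().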
That's not to say subtractive is bad, or additive is always good. The thing is that additive is complex in proportion to the complexity of your project. If your goal is to make a new operating system, doing it from scratch is going to take longer than you likely have to spend. Subtractive, on the other hand, is complex in proportion to the difference between your project and the ideal abstract project the framework was designed for. Using Rails to make a webpage is so easy it's basically cheating. Using Rails to make a realtime message queue is harder than just not using Rails at all.
The mistake a lot of people make is not thinking about that abstract project. Framework designers make it by having an unreasonably wide or vague idea of the project they're designing for ("don't worry, we made a framework for everything"), and framework users make it by not considering whether that abstract project actually matches up with their real one. Too often frameworks are chosen because of their popularity or familiarity rather than how well they fit the goals of the project.
Either way can be wrong, but I think subtractive has the most potential for harm. With additive programming you can at least get an idea of how far away you are. Bad subtractive programming is full of subtle pitfalls and dead ends caused by the impedance mismatch between what you want and what the framework designer assumed you wanted. In the worst case you basically have to tear the whole system down and build it back up again.
At that point it should become obvious that additive would have been easier, but that's an expensive misstep and one that's easily avoided if you know what to look for.
One of the classic software development distinctions is between a bug and a feature. A bug is unwanted behaviour that you have, and a feature is wanted behaviour that you don't have. Which is to say that in many cases, the distinction between bug and feature is only one of perspective. Is "the car loses control above 150km/h" a bug, or is "stable at speeds over 150km/h" a feature? The answer, at least in software, is defined either formally or informally by the expectations of the people involved.
A similar idea holds true about expectations in general. Is "I want to be a famous comedian" a feature you are hoping to implement, or is "I'm not a famous comedian yet" a bug that you need to fix? The answer to that will say a lot about your attitude when working toward that goal. If your current situation contains a bug, it is unsatisfactory to you, and you will need to fix the bug for the situation to be acceptable. If it's a feature, things are acceptable already, but they could be even better in the future. To put it another way, the bug/feature distinction is normative; unlike a feature, a bug shoulds you.
So I imagine it's no surprise that I consider bug-centric thinking very dangerous. It's a close relative of the high water mark problem, in that your perspective can have a profoundly positive or negative impact on your satisfaction with doing something even when the result is exactly the same. Defining things as bugs means things will be okay, but they're not okay now.
And if the future is just a series of nows, that could leave you in a pretty bad position.
I have recently been trying yoga and it's something of a surprise how calming it is despite being physically strenuous at times. Though I suppose I shouldn't be that surprised; I've always found running quite meditative in its own way, as well as most other forms of repetitive physical activity. It feels like exercise occupies most of your brain, making it easier to focus, though I have no idea if that's actually true. The yoga instructor said that it should be meditative, that yoga without meditation is just aerobics.
Something about that phrase really got me thinking about physicalism. It's one thing to accept that your mind lives in your brain, but your brain also lives in your body. So you might accept "I always feel sad, therefore there is something wrong with my brain", but "I always feel sad, therefore there is something wrong with my lungs" doesn't sound plausible. But why? Your lungs deliver oxygen to your brain. There's as much reason to think depression could be a lung problem as that a misfiring engine could be a fuel pump problem. In fact, there is evidence that living at a high altitude (with less oxygen) might cause depression.
In that light, I wonder whether our view of the body as composed of various systems – muscular, skeletal, nervous, digestive and so on – is leading us astray. Those systems are not like our tidy theoretical systems, with cleanly decoupled and separable components. It's very difficult to analyse a single bodily system in isolation, and it seems like a problem just about anywhere can be caused by a problem just about anywhere else. Which is to say, maybe meditating with your body makes a lot more sense than it seems.
The funny thing is that I'm pretty sure most serious yoga people are dualists, which makes me wonder how they ended up with such a physical kind of spirituality.
I've always thought that there's a sad disconnect between the state of knowledge in research and the state of knowledge of the public. Climate change is the poster child for public ignorance of science, but there are lots of other, subtler examples in health, psychology, dietary science, and so on. Basically anywhere public opinion intersects with science tends to be a disaster. Surprisingly, many scientists don't seem to think much of pop science writers and science journalists, though without them I doubt most people would learn any science at all.
What's missing is a robust bridge between the kinds of questions scientists ask and the kinds of questions the public asks. The closest things so far are The Straight Dope, reddit's /r/askscience and the various sciencey Stack Exchanges, but I think we can do better. The problem is that any explanation quickly turns into a list of citations which you are, realistically, unlikely to verify. These sites translate science into English, but they don't give you any way to explore or learn beyond what you've been given. It's a one-way street.
My idea for an alternative is called the Tree of Knowledge: a store of scientific papers and results, interlinked not just by references (the current state of the art) but by dependencies. Each paper has a page which links to the previous results or ideas it depends on; that is, the other papers that would invalidate this paper if they were invalidated themselves. These dependency links are what give the Tree of Knowledge its treelike structure. Crucially, at the farthest extent of the tree would be the leaves: answers to nonscientific questions, articles, and lay summaries of scientific knowledge.
The process would look like this: you want to know "does it matter what time I go to sleep or just that I get eight hours every night?". Someone has already answered this question (or you ask it yourself and it is then answered). The answer is not just someone making stuff up, but a distillation of the current state of scientific knowledge on the subject. The answer links back to different papers, which you can follow to see a community-edited summary of each paper, its current validity, the full text of the paper itself, and even more links back to the papers it depends on and forward to papers (and leaves) that depend on it. In this way you explore up and down the Tree of Knowledge, following each branch as suits your interests and seamlessly going back and forth between research and pop science.
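The underlying structure is simple enough to sketch in a few lines of Python. Everything here is made up for illustration, the node class and the example papers alike; the real thing would need versioning, moderation, and far more nuance about what "invalid" means.

```python
# A toy model of the Tree of Knowledge: papers and lay answers linked
# by dependencies rather than mere citations. Invalidating one node
# flags everything downstream that was built on it.
class Node:
    def __init__(self, title, depends_on=()):
        self.title = title
        self.depends_on = list(depends_on)  # links toward the roots
        self.dependents = []                # links toward the leaves
        self.valid = True
        for parent in self.depends_on:
            parent.dependents.append(self)

    def invalidate(self):
        """Mark this node invalid and propagate to everything that depends on it."""
        self.valid = False
        for child in self.dependents:
            if child.valid:
                child.invalidate()


study = Node("Circadian phase and sleep quality (hypothetical study)")
meta = Node("Meta-analysis of sleep timing", depends_on=[study])
answer = Node("Does it matter what time I go to sleep?", depends_on=[meta])

study.invalidate()
print(answer.valid)  # False: the lay answer inherits the doubt
```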
The great thing about this is it could be a tool that benefits not just the general public but scientists as well. As well as making it easier to get a sense of the state of research before diving into the papers themselves, the Tree would help scientists to popularise their work in a way that still preserves its integrity. It's my belief that, beyond just thinking it's not their game, many researchers are distrustful of pop science and science journalism because of their tendency towards inaccuracy and sensationalism. The Tree of Knowledge could popularise science verifiably, in a way that stays bound up with the rigour that makes science work.
Also, yes, technically it wouldn't be a tree because a paper can depend on multiple other papers, but Directed Acyclic Graph of Knowledge doesn't quite have the same ring to it.