A year ago, I wrote Interstitials, describing the problem of having extra stuff you tack on before your work actually starts. You sit down to get something done, but first you put some music on, and then what music do you listen to? Each one of these interstitials adds an extra opportunity for distraction and, crucially, adds more difficulty to the task right at the start.
It's this last problem, the extra work at the start, that I've come to realise is the worst part. Any additional decision you have to make is an extra load you have to shift to get started. I wrote before about starting inertia, and more recently about rebreathing as a warmup technique; both of these are based on the idea that building momentum requires more work than keeping it. Every bit of initial difficulty hits your motivation at exactly the time you have the least to spare.
Okay, okay, so interstitials are bad, get rid of them. But the problem is that there are some that you can't get rid of. Before you start writing you need to decide what to write. Before you start programming you need to open your editor, web browser, docs and so on. In some cases you might have to coordinate with someone first or check something before you begin. Each of these things, while it might be necessary, adds extra weight that makes getting started that much more difficult.
I've found this particularly difficult, because I like time tracking, and it genuinely does solve the problem I want it to solve, but I find myself resisting it. In its own way, even something as simple as starting a timer can take me out of the space I want to be in. After all, I'm starting a timer now; that means I'm committing myself to working on this thing. Is this definitely the right thing to be doing now? Should I check if there's anything else more important? It adds the cognitive load of deciding what to do at the same time as actually doing it.
However, while these interstitials can't be removed, there's nothing saying they have to happen at exactly the moment you start. Lately I've been trying something I think of as pregaming: trying to do any extra stuff ahead of time. So, as far as deciding what to write goes, I spend time at the start of the week deciding what I'll write each day. Sometimes I even open up a text editor in advance and write the title, just so there is literally nothing between me and starting but the first keystroke.
And as far as time tracking? Well, you can't start tracking your time any time except when you start, but if you broaden the question to mean knowing what you're doing at a given time during the day, it starts to look an awful lot like our friend the timetable.
There's an interesting problem with Git and, indeed, any version control system. These systems are designed to track changes over time, so you write some code, record those changes, write more code, record those new changes, and the history is preserved. The problem is, what if you want to change the history itself?
Let's say it turns out that your first change was actually a mistake. You could commit a third change that reverts the first one, but that's not the same thing as removing your mistake entirely; you can still see it in the history. Alternatively, you could go in and edit the history to make it appear like your mistake never happened. The decision between these options is a matter of considerable debate.
Ultimately, the problem that Git is designed to solve is maintaining a consistent history, and it can't do that when you go back and meddle with the timelines. However, there are various aesthetic (who cares about my fifty "oops now it's really fixed" commits?) and practical (what do I do if I committed a secret key?) reasons to rewrite history. So you end up with weird compromises, like only editing history if you're really sure nobody else will see it, or coordinating your changes manually ("hey, do you have my latest rebase? No, I mean the one after that one..."). Exactly the problem Git was meant to solve in the first place!
The central issue is mutability. We can have a consistent mutable codebase because we have an immutable history to coordinate it. Once you start making changes to that history, it stops being immutable, and you lose that consistency. If we want the history to be consistent and mutable, we need to use the same trick we used to make our mutable code consistent: another history.
Essentially, this is something like what I described in a null of nulls. Changes to code form a history, changes to the history form a meta-history: a history of histories. Each thing you change needs a level of history above it. Adding a meta-history to Git would allow you to have a nice, curated history for human consumption while giving the tools enough information to handle synchronisation properly.
There is something a little bit like a meta-history already, in the form of the git reflog, but it's nowhere near sophisticated enough. It's mostly designed for human repair when history-rewriting goes wrong. What if, instead, there was some in-system representation, like a reflog but with enough information to synchronise, merge and reconstruct history changes? It might even turn out to be similar enough to the existing commit structure that most of the code and storage format would be the same.
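To make that concrete, here's a minimal sketch of what a meta-history could look like. This is not real Git: the Commit, MetaCommit and Branch types are invented for illustration, and a real design would need content hashes, merges and a storage format. The idea is simply that every move of a branch's tip, whether an ordinary commit or a rewrite, gets recorded as a meta-commit, so a rewrite becomes ordinary data that another clone could fetch and replay.

```python
# Hypothetical sketch only: Commit, MetaCommit and Branch are made-up names,
# not part of Git. A linear history keeps the example short.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Commit:
    message: str
    parent: Optional["Commit"] = None


@dataclass(frozen=True)
class MetaCommit:
    """Records that the branch tip moved from old_tip to new_tip, and why."""
    old_tip: Optional[Commit]
    new_tip: Commit
    reason: str
    parent: Optional["MetaCommit"] = None  # previous entry in the meta-history


@dataclass
class Branch:
    name: str
    tip: Optional[Commit] = None
    meta_tip: Optional[MetaCommit] = None

    def commit(self, message: str) -> None:
        # An ordinary commit is just a fast-forward entry in the meta-history.
        self._record(Commit(message, parent=self.tip), reason=f"commit: {message}")

    def rewrite(self, new_tip: Commit, reason: str) -> None:
        # A rewrite (squash, amend, rebase...) replaces the tip outright,
        # but the meta-history remembers what it replaced.
        self._record(new_tip, reason=reason)

    def _record(self, new_tip: Commit, reason: str) -> None:
        self.meta_tip = MetaCommit(self.tip, new_tip, reason, parent=self.meta_tip)
        self.tip = new_tip


if __name__ == "__main__":
    main = Branch("main")
    main.commit("add feature")
    main.commit("oops now it's really fixed")

    # Curate the history for human consumption: squash into one clean commit.
    main.rewrite(Commit("add feature (clean)"), reason="squash fixups")

    # The visible history is tidy; the meta-history still knows how it got there.
    entry = main.meta_tip
    while entry is not None:
        old = entry.old_tip.message if entry.old_tip else "(empty)"
        print(f"{old!r} -> {entry.new_tip.message!r}  [{entry.reason}]")
        entry = entry.parent
```

In a scheme like this, synchronisation would mean exchanging meta-commits rather than just branch tips, so a rewritten branch stops being a nasty surprise for anyone pulling from it.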
Of course, the obvious question is: what if you want to make changes to the meta-history? That seems like the kind of problem that will come up eventually, and there's no reason why you can't. Assuming the meta-git format is flexible enough to reference itself, you can get as many levels of meta-history as you need. That's not to say it's a good idea, though. One extra level of history is already complex enough.
I heard some interesting advice recently: if you're writing something and you get stuck, just retype your previous sentence or paragraph. Doing so puts you in the frame of mind of writing (rather than thinking about writing) and pulls all those associations into your near orbit. When you catch back up to where you were before, it'll be much easier to just keep rolling along into the next sentence. I think this technique is fascinating, and gives rise to a whole space of similar tools.
One of the trickiest things about doing creative work is getting yourself into the right state for it. If you're thinking about something totally different, it can be difficult to build the associations necessary to get into a nice easy groove. I've previously tried to do things like look at interesting work made by others, or go back through old projects of my own, but never found it particularly effective. Thinking about it now, the reason seems obvious: being on the stage and being in the audience are very different processes. It'll help a little because you have those associations between consuming the thing and creating it, but ideally it'd be nice to have something more direct.
The answer is to think about output rather than input. To get yourself in a place where it's easy to do things, the solution is to do things. Unfortunately, doing things when you're not already doing them is difficult (otherwise you wouldn't have this problem in the first place). In other words, you need a starter motor, something that's easier to get going than the actual work, but as similar to it as possible so you can transition straight to the real thing when you're ready. This technique ticks all of those boxes.
I think of it as rebreathing, like the SCUBA technology where you reuse your exhaled air. Of course, any kind of practice or warmup activity has a similar function, but rebreathing uses a unique and clever approach to make sure that the warmup is similar to the real activity: just use a real activity that you have done before. And by designing it so that the activity ends where your next task begins, you can make the best use of that built-up momentum.
These days, we tend to use the word monoculture to refer to human culture, but it's originally an agricultural term. Most bananas and oranges, for example, are clones of one particular plant that had favourable characteristics. Other crops, like dwarf wheats, are not identical, but very similar. You end up with a monoculture either directly, because growers are just cloning the same plants, or indirectly, because the plants are all being optimised for the same things. This leaves systemic weaknesses in the plant population; a single problem, instead of affecting a proportion of the plants, affects them all. At its worst, this leads to devastating plant pandemics like the blight behind the Irish potato famine.
But, to return to human culture, it can be useful to reconstruct this idea of monoculture and its dangers. You could say that the agricultural idea of monoculture is about genetic diversity, but the human cultural equivalent is about memetic diversity. In memetics, a collection of related ideas that exist as a replicable unit is called a memeplex, and Dawkins argues that religion is one such memeplex. Of course, there are many others as well. And, much as certain combinations of genes are vulnerable to diseases, certain combinations of memes can be exploited, both by other people and other self-replicating ideas.
In other words, religious ideas are vulnerable to people and ideas that can exploit religion. But it's not just a religious thing; collectivist ideas were vulnerable to the horrific excesses of Soviet communism, and individualist ideas are vulnerable to the callousness of laissez-faire capitalism. Even when we work hard to build rigorous ideas and harden them against external threats, it's a Turing nightmare-esque endless uphill battle; any sufficiently complex idea will have vulnerabilities. Some part of a thing that is beneficial to believe can always be twisted into something harmful.
Note that I'm not making the vastly less interesting argument that there are no good or bad ideas, or that some ideas can't be better than others. Rather, much like dwarf wheats are basically the best wheats, some ideas far outshine their nearest competitors. But, even so, we should be wary of a world where everyone believes the best ideas. If there is some vulnerability in those ideas, some way in which they prove to be maladaptive in certain circumstances, we may just specialise ourselves out of existence.
It's instructive to think about how people's relationships with computers differ from their relationships with other machines. Car engines, for example, or typewriters, or mechanical watches. These are all machines that people don't necessarily understand, but there's something nonetheless fascinating about them. You can watch an engine at work, see the cams turning and pistons moving, hear the roar of combustion and, though the details might escape you, you can get a certain level of understanding just by watching the machine at work.
Electronics, on the other hand, don't generally move or make noise. Instead, they make electromagnetic waves, and where's the fun in that? You can't get any sense of what it's doing because the whole thing is running on invisible magic pixies. Sure, you can rig up an oscilloscope, multimeter, or just add some blinky lights, but you're still only indirectly observing the system. There's no equivalent to the direct observation you get with mechanical things, no equivalent to just watching everything work.
With computers, we had the chance to fix this. After all, a computer can be any kind of machine we want; it can be transparent like a mechanical device, or opaque like an electrical one. Unfortunately, perhaps because it was easier, perhaps because the early programmers were mathematicians and electrical engineers, we ended up going down the opaque path, and we've never really recovered. In theory, the internal operations of any program are easy to inspect at any level, from the raw instructions as they hit the CPU through to system calls, function calls, and application-level debug output. But none of these are really enough.
The various kinds of tracing and debugging are akin to putting a multimeter on your circuit. You're getting a proxy for what's going on, an indirect representation that you can use to infer the inner workings, but not the inner workings themselves. The raw instructions are probably closest to a direct representation, but here we run into a difficulty with the definition of a program. After all, in some sense a program is the instructions hitting the CPU, in another sense it's electrons bouncing around, in another still it's quantum probability spaces interacting in a surprisingly deterministic way. But these things are implementation details. The way a programmer thinks about a program is in abstractions of action, individual functions or modules that represent some behaviour. And what ability do we have to view these functional units in operation? None.
I think of this as the silicon curtain, the barrier between the conceptual operation of a program and its observable behaviour. Programmers only get around this through years of training and practice. They learn how to turn indirect observations into hypotheses by mentally simulating the computer. Obviously, this significantly limits the scale of programming we can accommodate; there's such a high bar that the idea of casual programming is kind of a joke.
It's not just programmers who are affected. Users, too, suffer from the utter opacity of software. Sure, you might not be able to fix a mechanical device, but if you see a bit that used to move and it's not moving now, you have a pretty good idea of what's wrong. By contrast, with software even localising the problem is enormously difficult. The most commonly reported problem is "I dunno, it used to work and now it doesn't".
Beyond the practical concerns, I think the silicon curtain is a tragic waste of beauty. It's hard not to lose yourself in the beautiful and intricate operations of a mechanical watch. The coiled springs and spinning cogs, the exquisite engineering and precision of each part moving against the next. And, mounted atop that Rube-Goldbergian complexity, the simple result: one tick followed by another, and an arm sweeping relentlessly forward.
Computers have all this beauty and more, but behind the silicon curtain it can't be seen. We're like the EM-deaf Earthlings in Asimov's Secret Sense, straining to perceive something just out of reach.