Well, it's been some time since my last failure, though depending on how you count it, my catching-up post probably qualifies as a de facto failpost. Back then I concluded that I would keep writing posts as normal even if I fell behind, and separately fill any missed posts with (avian) content. In theory this should mean that my posting system has positive stability: it gets easier to catch up when I'm behind.
There have been two issues with this, though. The first is that some of the bird drawings have been surprisingly difficult. Some, like the Philippine eagle or Major Mitchell's cockatoo, took much more time than the equivalent bit of writing would have. A lot of this is just that I get carried away with whatever I'm drawing. But even if I didn't, generating extra content is still hard. Worse still, if I fall behind it's often because of external disruptions or drains on my time, which means I don't tend to have the spare capacity to fix it.
The other issue is that I've stopped doing failure posts because I've had birds to fall back on. This is a problem partly because failposts are easier than birdposts, as per above, but also because failure posts were serving an important function: a way to recognise the failure, call it out, and reflect on what caused it and how to stop it happening again. Without that, it's easy for failures to happen casually and even lose their significance. I probably would have reflected on the effort of the birdposts sooner if I'd been doing failure posts as well.
I'm not sure why I stopped doing failposts, maybe because birdposts let me dodge having to call it a failure, but it seems like a mistake and one that's easy to fix. From now on, a single missed post merits a failure post, and multiple missed posts get a failure post plus whatever extra posts I need to catch up. That's still either birdposts or any other content I can think of, but I'll give some particular thought to other kinds of content that might be less effort in a pinch. It could be that the answer is to find some way to piggyback off other things I'd be doing anyway.
Last time I just made a one-off prototype, but this set of prototypes is once again a series of parts leading up to a project. The post for that project is coming soon, but in short it involves a real-time button and display synced to a webpage. This led to a particularly interesting set of problems because I basically wrote code for four different environments: Arduino, ESP, web server, and web client. This was on a pretty short time frame, so most of the code is rough and ready. Still, although it involved a lot of different moving parts, each individual part wasn't too complex. Perfect for prototypes!
This was the Arduino code necessary to drive the display, as well as both send and receive updates over serial. One neat trick I ended up using here was JSON for communication, and ignoring anything that didn't begin with a left curly (as a JSON object does). This meant I could send debug information and data on the same serial port and, indeed, debug it even while the serial port was being used for communication between the Arduino and ESP. I also, for some reason, wrote it in a way that would allow me to have any number of displays and buttons.
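To give a flavour of the idea, here's a cut-down sketch rather than the actual project code; it assumes the ArduinoJson library, and the pin number and JSON field names are made up for illustration:

```cpp
// A cut-down sketch of the serial-handling idea: anything that doesn't
// start with '{' is ignored, so debug text and JSON data can share the
// same serial line. Pin number and field names are illustrative only.
#include <ArduinoJson.h>

const int BUTTON_PIN = 2;  // hypothetical button pin

void setup() {
  Serial.begin(9600);
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  Serial.println("debug: starting up");  // non-JSON, so the other end drops it
}

void loop() {
  // Receive: only lines starting with a left curly are treated as data.
  if (Serial.available()) {
    String line = Serial.readStringUntil('\n');
    if (line.startsWith("{")) {
      StaticJsonDocument<128> doc;
      if (deserializeJson(doc, line) == DeserializationError::Ok) {
        int value = doc["value"] | 0;        // hypothetical field to show on the display
        Serial.print("debug: would show ");  // debug chatter shares the port harmlessly
        Serial.println(value);
      }
    }
    // anything else is debug output and gets ignored
  }

  // Send: button presses go out as a JSON object on the same port.
  if (digitalRead(BUTTON_PIN) == LOW) {
    Serial.println("{\"button\":1}");
  }
  delay(50);
}
```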
For this part, getting the Arduino to talk to the web server, I basically just wanted a serial-to-web-request proxy. I used NodeMCU and wrote a custom Lua script on top. This wasn't strictly necessary, and in some ways it may have been easier to just use Arduino on both ends (the Arduino toolchain can build for the ESP chip). But Lua is fun, and I figured this way I wouldn't have to swap back and forth between Arduino serial targets, which is always a huge pain. Plus, an event-driven API was probably the best fit here, because multiple requests could be coming in over the wire at once, though in practical terms all I did was drop them anyway.
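For comparison, the Arduino-on-both-ends route mentioned above would look roughly like this sketch, written against the ESP8266 Arduino core rather than the Lua script I actually wrote; the network details and endpoint URL are placeholders:

```cpp
// A rough sketch of the serial-to-web-request proxy, using the ESP8266
// Arduino core instead of the NodeMCU Lua firmware. SSID, password and
// endpoint URL are placeholders, not the real project's.
#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>

const char* WIFI_SSID = "example-ssid";
const char* WIFI_PASS = "example-pass";
const char* ENDPOINT  = "http://example.invalid/update";  // hypothetical endpoint

void setup() {
  Serial.begin(9600);
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) {
    delay(100);
  }
}

void loop() {
  // Forward each JSON line arriving from the Arduino as an HTTP POST,
  // and quietly drop everything else (including the response).
  if (Serial.available()) {
    String line = Serial.readStringUntil('\n');
    if (line.startsWith("{")) {
      WiFiClient client;
      HTTPClient http;
      http.begin(client, ENDPOINT);
      http.addHeader("Content-Type", "application/json");
      http.POST(line);
      http.end();
    }
  }
}
```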
This is the code for the actual web client and server. Ye olde bog-standard CoffeeScript plus DOM manipulation, nothing fancy because fanciness wasn't really required. Everything just used polling, so it's not really real-time, though I had it in the back of my head that maybe I should be using websockets. I knew I'd need polling as a fallback regardless, and I thought maybe polling would cache better, so I figured that might be enough. Both the graphics and the websockets ended up coming later, but that's for next week's prototypes.
It might seem like just after you've achieved something is the best time to relax, but in a sense I think it's actually the most dangerous. Sure, there's a nice space for a break because your workload has suddenly decreased, and if you've been pushing hard to get the thing done, that pressure is now gone. Maybe you need a bit of compensatory downtime to make up for the overwork. Those are all compelling reasons to take a break, but right after you finish something may nonetheless be exactly the wrong time to take one.
The first part of this issue is that if you feel an overwhelming need to relax because you've been working in a way that's been harmful to you, then that relaxation is masking an important question: why can't you keep going? Sure, for some things it's unavoidable, but I suspect fewer than we think. Sometimes you have to sprint, but mostly life is a marathon. If you have to go recover for a day every time you do something, maybe you're doing it wrong. If at the end of a project you wish you never had to do anything ever again, it's probably better to embrace that negativity and use it to reflect on how you can avoid it in future.
Another problem is the way the end-of-project break builds associations. There are two different good feelings in play: the good feeling of having accomplished something, and the good feeling of not doing anything. If you put those together too often, it becomes hard to disassociate them. Every time you do something is like a little signal saying "great, pack it up, we can stop now". Worse still, thanks to the perversity of two-way association, it might start to feel like not doing something is an accomplishment itself. That's a lot of impetus to stop even if you don't need to.
The last thing is that excessive relaxation is very disruptive to existing habits and patterns. I'm not talking about taking it easy for a few days, more like massive go-hide-in-a-cabin-somewhere breaks that sound suspiciously similar to burnout. You have to overcome a certain degree of inertia when you start doing something new, and it takes a particular and unique kind of effort to do that. It's much easier to maintain an existing effort. Even in a world where it was just as efficient to work twice as hard on odd weeks and take even weeks off, I still think you'd be worse off because of all the momentum you lose.
I think all of these factors combine to make relaxing just after you've finished dangerous. It's often a response to bad working habits, it encourages you to stop when you don't need to, and it kills your momentum. So what's the alternative? Never take breaks? Obviously that isn't going to work. I would suggest instead that breaks should happen in small doses while you're working on something, rather than as a big chunk at the end. Even if you're taking a holiday, I would suggest deliberately leaving things in a still-undone state before you go.
While this sounds counterintuitive and perhaps goes against some fundamental desire to tie things up neatly, I think it is much more compelling as a model. You're not taking a break because you're done forever, you're taking a break because you want to come back to what you're working on refreshed. Unless you're totally done with this project and anything coming after it, it's better not to end it too cleanly. Leave a little bit left to do, or if you've finished this part, start on the next one. That way it'll still be in the back of your head somewhere, and be much easier to get back to when you're ready.
Yesterday I wrote about the effect of transitioning from a two-party (you and your computer) to a three-party (you, your computer, and the agents running on your computer) trust model. I'd like to cover another angle, though, which is that the definition of data has also changed. I've said before that computing is special because it invented an abstract unit of operation. In maths you add numbers together, in economics you add dollars together, in computing you add actions together. We sometimes say "code is data", because the only difference between a computer program and a really long number is how you interpret it.
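As a tiny illustration of that last point (a made-up minimal example, not anything from the earlier posts): the very same byte can be read as a number, or, in the right place, executed as an instruction.

```cpp
// A minimal illustration that "code" and "data" are just two ways of
// interpreting the same bytes. 0xC3 is the number 195, and it also happens
// to be the x86-64 `ret` instruction if you were to execute it.
#include <cstdio>

int main() {
    unsigned char byte = 0xC3;
    printf("As a number: %d\n", byte);  // prints 195
    // The very same byte, placed in executable memory, is a one-instruction
    // program: return immediately.
    return 0;
}
```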
But there is one important difference between code and data: you need a certain amount of complexity in order for something to be a computer. That bar, called Turing completeness, is surprisingly low, and can be met by weaving machines and water tubes, among other strange examples. However low that bar is, though, it's not zero. An mp3 file is not a computer program; a jpeg is not a computer program. All code is data, but not all data is code.
This property is very important, because of the problem of undecidability. Basically, when you run a program, you don't necessarily know what it's going to do. I don't just mean you personally, rather that it's a fundamental mathematical reality; in general, you can't prove what a program is going to do without just running it and seeing what happens. Is this program going to stop eventually or keep running forever? Undecidable. Will this program calculate my tax return or will it secretly compile a dossier of all my activity and send it to the government? Nobody knows!
I should back down from that last statement a little, because it's important to clarify that this is only true in the general sense. I could hand you a program whose code consists entirely of printf("hello, world!"); and you'd have a pretty good idea of whether it prints "hello, world!" or deletes your hard drive. But the point is that you need to be able to know what every program does, not just the trivial ones. A computer program can be arbitrarily complicated, especially if it's designed to be, and expert human examiners are reliably fooled by even comparatively simple malicious programs.
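As a toy example of how little effort that takes (this is just an illustration I've made up, not real malware): the string a reviewer would be searching for never appears in the source at all.

```cpp
// A toy example of hiding intent from inspection: the string a reviewer
// would grep for ("rm -rf ~") never appears in the source or in the
// binary's static data, because it's rebuilt from shifted bytes at runtime.
// It's only printed here, never run. Real obfuscation is far more elaborate.
#include <cstdio>
#include <string>

int main() {
    const char encoded[] = {'s', 'n', '!', '.', 's', 'g', '!', '\x7f', 0};  // each byte is the target character + 1
    std::string cmd;
    for (int i = 0; encoded[i] != 0; ++i) {
        cmd += static_cast<char>(encoded[i] - 1);
    }
    printf("Would have run: %s\n", cmd.c_str());  // prints "Would have run: rm -rf ~"
    return 0;
}
```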
Since we can't necessarily tell whether code will try to hurt us, we retreat to the next available defense: limiting the consequences. By analogy, you don't necessarily know if a person wants to hurt you, but if they're handcuffed to a chair it's kind of a moot point. A program can intend to spy on you, but if it's running in the computer equivalent of a jail cell, only allowed to access your microphone when it asks the warden nicely, any spying it does is going to be pretty ineffectual. Of course, even the best wardens can be fooled, and it's very hard to make an inescapable prison. Still, it's the best we've got.
But even today in most (non-mobile, non-web) computing environments, programs run relatively unrestricted. Any program on an average Mac or PC can delete all your files, send spam, or steal your passwords and credit card information. The fact that most of them don't is really because of a lack of desire, not a lack of ability. Antivirus programs can catch or mitigate some of these, but in general the problem is, literally, unsolvable; undecidability is law. So we fall back on the next available defense: trust. Only run code written by people you trust, downloaded from somewhere you trust.
Unfortunately, I lied before when I said "not all data is code". What I should have said is "not all data is code... yet". An mp3 is not a program, but what about a Word document? A web page? A DVD? The answer to all three of those is "yes". Documents, websites and DVDs all contain embedded programming environments designed to allow interactivity of various kinds. And there's no such thing as a little bit of computing; once you have Turing completeness, that's code, no different from something written in Python or C. Each of these code-in-data environments has its own trust model and its own jail, and these are regularly subverted.
But the trend is for more code-in-data, more interactivity and more embedded programming languages. Why have a video when you can have a choose-your-own-adventure video? Why have a table when you can have a rich interactive spreadsheet with formulas and live graphs? Why have a dumb document when you can have something that updates as you read it, shows comments in real time, or, uh, expands? People are coming up with new kinds of expression that are fundamentally active rather than passive, and almost all of them pass that agonisingly low bar for computation.
So what we're left with is Turing's nightmare: hundreds of different mini-computers running thousands of different mini-programs, and all the while we're stuck trying to decide the undecidable: Is this interactive video spying on me? Is this game going to delete my data? Is this document secretly trying to convince people it's a Nigerian prince? Sure, we can make up for our fundamental inability to tackle that problem by relying on a combination of trust and restrictions. But every one of those mini-computers starts from scratch, making its own mistakes and increasing the already elephantine vulnerability surface area of the average computer.
This to me seems like the obvious reason that web and mobile software have trounced desktop software. Unlike the programs of old, web and mobile apps are explicitly designed to be untrusted, built around one centrally-administered jail rather than lots of little ones with their own rules. They are really the pioneers of data-as-code, the idea that when someone sends you a funny image, or a thing to look at, they're not sending you some inert piece of paper, they're sending a tiny person into your house who might trash it on their way out.
I would argue that the inevitable end to all of this, and the solution to Turing's nightmare, is to just go all-in. Data-as-code is the future, whether we want it or not. Inert and trustworthy data is not only obsolete and boring, it's also demonstrably a lie. Every passive data format we've invented has eventually mutated into an active one, and in the process ruined everyone's Tuesday. Why not stop fighting it? Let's have executable mp3s, build all our movies out of code, rename not_a_virus.jpg.exe to just not_a_virus.exe and be done with the whole thing.
Once we can fully accept that modern computers are less like a sterile factory and more like a teeming petri dish of skin flora, it'll be easier to focus on building the centralised containment and immunity systems that we need. With a combination of good models for delegating trust to gatekeepers, and more prison architects behind fewer prisons, I think the next generation of data-as-code personal computing looks not just more secure, but more interesting as well.
I've been thinking a bit more about the paradoxes of online advertising, which I previously wrote about as an example of the importance of voluntary unfreedom. Key to that argument is the "your computer, your rules" doctrine, which is at the core of debates around DRM, the war on general-purpose computing, and other ideas around computational autonomy. Your home is your castle, and by analogy your computer is your internet castle, safe and under your control. Only, well, it's not.
To anyone who's been paying attention, there's no question that the locked-down world of mobile operating systems is a vastly superior experience for the average user. The time between a normal person getting their hands on a Windows machine and it being unusably riddled with crapware is measured in minutes. Sure, you can have antivirus programs that try to find the crap and remove it, but they're fighting a doomed battle, like a digital King Canute kicking against an endless tide of cyber-excrement. Meanwhile, in mobile land, spyware is comparatively rare, viruses don't exist, and most likely the only time you'd see an antivirus would be while reading an only-90s-kids-will-remember post.
The reason the mobile model is superior is simple: it doesn't trust the user. Or, more precisely, it recognises there's been a fundamental shift in the trust model since the early days of personal computing. Once upon a time, if a program was running on your computer, it was your program, doing something you had told it to do. But the combination of "just download and run it" software, everything running in the background, and the transition from computers being for experts to computers being for everyone means those days are dead and gone. The things running on your computer aren't your programs; they're someone else's programs, and you have no idea if they're doing what you want.
This is the classic principal-agent problem. Although you have nominal autonomy over your computer, you can't directly control the programs running on it, you don't have direct visibility over their actions, and most of the time if you did you wouldn't understand what they're doing anyway. These programs are your agents and, just like a financial manager or a political representative, they may decide to act against your interests if that's more lucrative. Worse still, you have no way of knowing if this is the case until long after they've done terrible damage.
So the great appification, far from reflecting a disregard for users, is a recognition of this new world where most people aren't, and can't be, masters of their own computers. And, in a sense, I think this is actually kind of liberating. Any time I'm at a command prompt I'm a few fat-fingered keystrokes from accidentally ruining whatever I'm working on. Every command I run could potentially contain some code that deletes every file in my home directory. But on a mobile OS? The system just doesn't assume that everything that happens is deliberate.
Now I should say that this isn't a completely rosy picture. The principal-agent problem hasn't gone away on mobile (or on the web), it's just been shunted to somewhere else. Just like you can hire a sales manager if you don't understand whether your salespeople are doing the right thing, the people who make your mobile device can make sure your apps are doing the right thing. But, y'know, quis applicat ipsos applicationes? What if your sales manager doesn't know what they're doing? Do you hire a sales manager manager? You eventually have to trust someone.
Unfortunately, "in the end you can only trust yourself" makes a good tagline for an apocalyptic western, but is lousy as a philosophy for user experience. Locked-down devices are safer and better for most people, even if they drive me nuts when I want to mess with their innards. What we need isn't to push back against appification, but to accept it and make sure we have viable and trustworthy options for gatekeepers. One that isn't run by a tech company 5 minutes off the West Valley Freeway would be a good start.