Today I'm releasing MeDB, a personal analytics project I've been working on in bits and pieces for some time. I originally posted last year about using industry-grade monitoring tools like Grafana and InfluxDB to gather stats about myself rather than servers or business data. There seems to be a lot more decent software in devops than in "quantified self", so it makes sense to piggyback on it.
Earlier this year I wanted to use stats more, so I started developing a plugin-based system as a series of prototypes. After I wrote the core, each individual plugin was pretty easy, which was really the goal: after you lock down all the hard bits, collecting some new information is just a matter of a little bit of code. I've been using it for a while to collect the stats for my stats page, which currently only displays intermediate values, but could also display graphs fairly easily.
Internally, MeDB is really simple: it just reads a config file, uses that to determine what plugins to load (and what configuration they need), loads them all, passes them the relevant part of the config, and writes the results into InfluxDB. Each plugin is also simple: it just accepts the config data and returns the stats. I've found this framework super lightweight and useful for my needs, so I figured it was time to let it out into the wild in case it could be useful for anyone else's.
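If you're curious, the core loop is roughly the sketch below. This is a simplified illustration rather than the real MeDB code: the config shape, the medb_ module naming and the collect() function are all made up for the example, and it assumes the InfluxDB 1.x Python client.

```python
# Simplified sketch of the MeDB core loop (illustrative names, not the real code).
import importlib
import json

from influxdb import InfluxDBClient  # InfluxDB 1.x Python client


def run(config_path="medb.json"):
    with open(config_path) as f:
        config = json.load(f)

    db = InfluxDBClient(database=config["database"])

    for name, plugin_config in config["plugins"].items():
        # Each plugin is just a module that takes its slice of the config...
        plugin = importlib.import_module("medb_" + name)
        # ...and returns a list of points ready to write to InfluxDB.
        db.write_points(plugin.collect(plugin_config))
```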
You can find MeDB on GitHub, and the plugin I use for getting my GitHub stats is on GitHub too. I'll release more plugins (at least the ones I'm using for Hacker News and Reddit stats) soon, but if this kind of thing interests you, feel free to write your own!
I've never been hugely fond of formal education, and I'm quite happy to see that the modern world has a lot of alternatives to offer. Many software developers are self-taught, and I would say even the ones who get a formal education are also self-taught if they are any good. This is perhaps the most obvious reason why a degree in computer science is not a great predictor of ability in software development. I've even heard people claim that there's no point in formal education at all, but that seems a bit too far to me.
It's easy to learn bits and pieces from tutorials or videos online, to build up a giant bag of facts and skills, but that's not the same as true understanding. Each fact in your bag is like a waypost planted somewhere in the knowledge space: as long as you're near it you know where you are and what you're doing. But what happens when you inadvertently wander off the path? Worse, what if you need to wander off the path because it's the only way to get where you're going? The bag of facts can't help you there.
The one thing formal education does do well is to define the limits of the space, to map its contours and dimensions. You may not know how to solve every individual problem, but you know that this one falls within a class of problems, which itself falls within the boundaries of the foundational models of the entire field. In computer science, we call this "theory", but I feel like this name doesn't really do justice to the concept: defining the bounds of what is known within the field.
In logic, there are two main quantifiers you use to describe the scope of a statement. You can say it holds true for one case (existence, or ∃), or that it holds for every case (universal, or ∀). To prove something exists requires only a single example, a waypost in the knowledge space. I can prove that there is a number greater than 5: it's 17. To prove that a statement holds for every single thing is much more difficult, because you need to map the entire space and reason about it.
You might think that's a bit ambitious. After all, how often do you actually need to reason about the entire space of things, or prove something universally? In practical terms, if I know lots and lots of facts and skills, isn't that good enough? Over the course of a lifetime of study you can build up a stunningly large bag of tricks. But there's still one thing you can't do: disprove anything. To say that something doesn't exist is to say that everything isn't that thing. You need universals for that, and without them you're limited to saying "well, I haven't seen it yet, but..."
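In symbols, the asymmetry looks like this (standard notation, just to make the contrast concrete):

```latex
\exists x\, (x > 5)
  % one witness, x = 17, settles it

\neg \exists x\, P(x) \;\equiv\; \forall x\, \neg P(x)
  % to say nothing is P, you have to say everything is not-P
```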
To say no to something, to understand what is possible and what isn't, or to reason about the unknown at all requires a particular kind of knowledge. I think of it as exhaustive knowledge: once you have it, you're done searching. I don't need to wonder if you can make a significantly faster comparison sort, or an infinite compression algorithm. I may not know the exact details of every minor new piece of knowledge that comes out of those spaces, but I know their limits.
I think formal education is good at giving exhaustive knowledge because that's its stated aim, and I don't think many self-directed resources share that aim. I'm hopeful that more such resources will come to exist over time, especially as more people come to realise the power of self-directed learning. In the meantime, though, it's easy to be led astray. Don't settle for a bag of facts; true understanding only comes from exhaustive knowledge.
When an actor is on stage, they don't speak like a normal person speaks; they project, using their diaphragm and abdominal muscles to increase their volume far above normal without sounding like they're shouting. They wear makeup, not normal makeup that you might wear to go out, but garish makeup that exaggerates every feature of the face. However, when you see them on stage, their voice sounds normal and they look normal. The projection cancels out the effects of distance, and the makeup cancels out the harsh stage lighting. If they had spoken normally or worn normal makeup, they would have sounded and looked abnormal.
After you set up a high-end sound system, it's common to calibrate it. The amplifier itself will distort the sound signal in some way, as will the speakers, the stands the speakers are on, the shape of the room and everything in it. You could say the sound is transformed by all these elements before it hits your ears. To fix this, you take a microphone (whose distortions you also have to account for, though they are usually known) and record the sound system sweeping through every frequency. From that, you can calculate another transform that, when you put sound through it, will cancel out the original transform. In other words, the reverse transform.
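In code, the idea looks roughly like this. It's a naive frequency-domain version with made-up function names; real calibration software is far more careful about noise, smoothing and phase, and this assumes all the signals are the same length and sample rate.

```python
# Rough sketch of deriving a "reverse transform" from a calibration measurement.
import numpy as np


def correction_filter(test_sweep, recording, eps=1e-3):
    """Per-frequency gains that undo the measured distortion."""
    # How the amp, speakers and room transformed the sweep, frequency by frequency.
    response = np.fft.rfft(recording) / (np.fft.rfft(test_sweep) + eps)
    # The reverse transform: dividing by the response cancels it out.
    # eps keeps us from exploding where the response is nearly zero.
    return 1.0 / (response + eps)


def apply_correction(audio, correction):
    # Pre-distort the audio so the room's distortion brings it back to flat.
    return np.fft.irfft(np.fft.rfft(audio) * correction, n=len(audio))
```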
A common piece of advice is to "be yourself", which is not great for a few reasons, but one of them is that it ignores that your actions and words are transformed before they reach other people. Perhaps you see yourself as reserved and thoughtful, but in the eyes of other people you are awkward and shy. Assuming you want other people to see you the way you see yourself, acting like yourself is terrible advice. Instead, you should act the way that will cancel out the distortions between yourself, your thoughts, your actions, other people's perceptions, and their eventual opinion of you.
But, wait! That's not being true to yourself. Maybe. But you have to ask, is an actor being true to themselves when they speak softly or go on stage without makeup, knowing that the audience won't see or hear them? Is a sound system being true to itself by generating the most exquisitely accurate waveform only to have it immediately distorted by the environment? If a tree is true to itself in the woods where nobody can see it, what does it matter?
Besides, it's not like you have to change the way you see yourself. Your identity can (and should) be defined on your own terms, not by your effects on the things around you. But that doesn't mean those effects don't exist. If your aim is to achieve a certain result, to be seen a certain way, to be heard a certain way, you have to work with the mechanics of reality, and that includes its distortions. Figuring out and applying the reverse transform is how you make up for those distortions and project yourself as accurately as possible into reality.
A year ago, I wrote Continuous everywhere, about a philosophy of reducing discontinuities, the gaps between doing and done where you aren't sure whether you've made progress. Ideally, it would be possible to reduce them to nothing, such that your progress is always measurable and you always know whether you're improving. I've since written some related things, including Goalpost optimisation, Optimisation order, Short/long term and The fixed stars. I think it would be worth tying them together with a central metaphor.
I've often referred to brains as association machines, but a different way to think about them is as optimisation machines. What I mean by optimisation in this sense is the way it's used in mathematics or control theory: trying to achieve a desired output from a function by controlling the input. Specifically, we are very good at making constant small adjustments to our actions, which we use (and presumably evolved) for movement-related things like tracking moving objects, balancing, and controlling tools.
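In toy form, that kind of optimiser is just a loop: no model of the function, only fast feedback and constant small adjustments. The sketch below is a deliberately crude illustration (a hill climber with made-up numbers), not a claim about how the brain actually does it.

```python
# Toy version of the optimisation loop: no model of the function, just rapid
# feedback and constant small adjustments in whichever direction helps.
def optimise(f, target, x=0.0, step=0.1, iterations=1000):
    for _ in range(iterations):
        error = abs(f(x) - target)
        # Try a small nudge in each direction and keep whichever reduces the error.
        if abs(f(x + step) - target) < error:
            x += step
        elif abs(f(x - step) - target) < error:
            x -= step
        else:
            step /= 2  # close enough; make the adjustments finer
    return x


# e.g. find the input that makes x**2 come out near 20
print(optimise(lambda x: x * x, target=20.0, x=1.0))
```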
Where this optimisation machine falls down, then, is with functions that are too complex for our optimiser to predict, where we are lacking information (such as being unable to measure their outputs or inputs) or, crucially, where the feedback is too slow. These are limitations that come up fairly often, and make us unable to use our optimisation powers effectively. We can, of course, fall back on our analytical system, but it's such a waste to ignore our built-in optimiser, which is much faster and more efficient.
So what we need to do is find ways to reframe our problems so they can be in that optimisation sweet spot: simple and well-understood with rapid feedback. A good way to make something simpler and easier to understand is to reduce its entanglements, which I wrote about in Dependency hell and Stateless. There are lots of other ways to make something simpler (including not making it complex in the first place). Most of what I've said about measurement is about different ways to better understand and gather feedback.
But the last piece, the rapid feedback part, is particularly tricky, because it requires fundamentally changing the way you structure things. That was the point I was making in Continuous everywhere: you have to rethink your system entirely to make it optimisation-friendly in this way. Bret Victor's breathtaking Stop Drawing Dead Fish, and most of his work for that matter, is about making computer tools continuous in the same way that physical ones are. We can feel (and optimise) every minute movement of a paintbrush in real time, so why can't we do that with an animation engine, data visualisation, or even a programming environment?
Of course, this idea goes beyond programming or even software. Continuous everywhere is a general philosophy about reworking both your goals and your systems so that they can be easily worked on by your powerful optimisation machine. Doing that is not easy; in fact, it likely requires tearing down and rebuilding substantial parts of the way we do things now. And, if we do it, it's likely to get worse before it gets better. Some bravery will be required.
It's always exciting to figure out a new way of doing things; your old way had all these flaws and downsides that you really hated, but this new way is much better! At first, it seems amazing and flawless, but slowly the cracks appear and you eventually realise that your new way, like the old way, has flaws and downsides as well. And then the hunt begins again for the perfect system so that you can finally be safe in the knowledge you're doing everything right.
In many cases, though perhaps not all cases, there is no perfect system. Even if one exists, it might be too difficult or costly to achieve, you might not know what it is, or it might take some time to get there. So while there's nothing wrong with striving for perfection, it would definitely be a mistake to consider a lack of perfection a failure. Realistically, if you're unhappy with having flaws, you're just going to be unhappy.
But, either explicitly or implicitly, it seems fairly common to talk about a system with flaws as unacceptable, as if something must be done. What shall we do about the risk of dying in an airline crash or terrorist attack, or from carcinogenic bacon? Well, probably nothing. The truth is there is some chance you will die from those things, and the effort to avoid those risks is not really worth it. Assuming you continue to fly, eat bacon or go out in public, your system has flaws – fatal flaws – that you are just going to have to accept.
It can be hard sometimes, when someone points out a flaw, or you notice one yourself, to say "yes, that is a flaw I have". But that's exactly what you have to be able to do if you want to make reasonable tradeoffs. You only have a certain amount of time and energy per day, and the more you put into your relationships the less you have for your work and vice versa. You can pursue those in any proportion you like, but you can't use more than you have. Maybe you resolve to focus on doing what you enjoy and then someone else points out that they have much more money than you. That's true, and you have to be able to own it.
If you can't embrace the downsides, you risk putting yourself in a position where you try to optimise for everything at once, and constantly change back and forth between systems depending on whatever downside you most want to avoid at any given moment. Of course, the outcome of constantly changing your system is no system at all. Acting with no intention is bad, but the worst thing is that unless you can accept the negative consequences, you're never going to be happy with your decisions.