Sam Gentle.com

Groupwork

I'm sure everyone's had a bad groupwork experience. They're something of a rite of passage for high school and university students. There's always one or two people who do all the work, either because they're egomaniacs or because nobody else can be bothered (or both). Nobody is organised, everyone resents working together, and the final output is always mediocre. Teachers seem to like these projects, but honestly the only thing they teach is how bad working with other people can be.

The obvious reason behind this is that, in the real world, people (usually) get fired for doing nothing; in a school group, they can't be. More generally, though, the members of the group don't control the group's composition. This isn't a group of cohesion, where everyone has decided to gather together for a common purpose; it's a group of adhesion, a bunch of people smooshed together by circumstance who have to make the best of it. The goal isn't to build an exceptional team so much as to survive the crushing mediocrity.

The problem isn't just the inefficiency of 20% of people carrying the other 80%. It's that having bad people, or people who just don't care, fundamentally changes the way you operate a team. You can't rely on any assumption of shared goals or common purpose; you can't even assume that the people you're working with won't actively work against the best interests of the group, because there's no mechanism to remove them if they do. So the strategy changes completely. The biggest win in such a group is in what's euphemistically called "managing": the process of limiting the damage of bad actors.

I find these strategies particularly interesting, and they show up in a lot of places. You can (usually) choose your friends, so mostly these are relationships of mutual benefit. Families, on the other hand, are groups of adhesion, and it's fairly common to have strategies for managing certain family members. Don't get grandpa going about immigrants, but if he starts you can distract him with questions about football. Your brother-in-law's going to keep offering you a drink until you accept one, so you may as well just let him.

Probably the most common group of adhesion is the workplace. Obviously people can and do get fired, but the bar for that tends to be relatively high. It's pretty tough to get fired just for mediocrity, especially in countries with strong labour laws. So a substantial amount of the work in a larger company is just damage control, designing structures to stop mediocrity from becoming catastrophic failure. Much like in software, mediocrity can't be redeemed, only removed or contained. The entire middle management layer of most large companies is a CORBA-like tentacle monster of runaway abstraction.

Why are we so ready to accept this? I mean, why does it seem so natural to form a group where negative influences are managed rather than ejected? I think it comes back to schools. Sure, schools have groupwork projects, but in a larger sense they are groupwork projects, 12-year-long ones where we force disinterested kids to see each other every day whether they want to or not. The strategies for surviving socially at school are so different from real life that parents and teachers end up laughably out of touch. "If someone's bullying you, just walk away" – and go where? This isn't a world where you can choose not to see people.

By the time we're done at school, we're packing an arsenal of strategies for managing groups of adhesion, to the point where perhaps it seems more natural than any other way. Of course, every group has useless people in it. Of course, you need to round off your rough edges to maintain compatibility. Of course, part of life is just learning to get along with people whether you want to or not. Why wouldn't it be? We've been doing it so long it seems like a law of nature.

But imagine what the world would look like if we eliminated groupwork. Not just the projects, but any environment where we can't choose who to associate with. Sure, there's a risk of filter bubbling ourselves, of misusing that control to avoid discomfort and growth, or of removing interesting outliers to the group's detriment, and those are serious concerns.

However, if we can avoid those pitfalls I think the rewards could be enormous. Perhaps we can trade our tragedy-of-the-commons "management" for genuine coordination between people with compatible goals. Perhaps we can focus entirely on achieving a shared vision rather than building abstractions to pretend one exists. Perhaps it's only when we stop optimising ourselves to survive mediocrity that we can reach towards excellence.

Sensory satiation

I was thinking about background music and other kinds of background entertainment recently. In a sense, it's very counter-intuitive that we enjoy these things. Take a comparable computer system, some kind of post-writing machine or something. Now you tell that machine, hey, also process this music, or this podcast, or this moving nature scene. The machine would work less well because now it's doing more things with the same resources.

But, of course, we're not computers. I think the main difference in this case is to do with the nature of our processing. A computer is at its most efficient when it is processing one task at a time, but we can be more efficient by adding more parallel streams of information. Still, there's a big difference between saying that it's useful to consume more relevant data in parallel and saying that it's useful to add extra irrelevant data. What's that about?

I don't think it's just that our processing is parallel. Even in a multiprocessing-capable computer, the control system is still centralised. The firmware runs the bootcode, runs the OS, runs the services and user shell, runs the applications. Everything is top-down. But for us? It's kind of a mess. You can be trying to concentrate on the single task in front of you, but it seems that whatever part of your brain isn't engaged in that task can't be content simply sitting idle. Hey! Since I'm here and not doing anything, would you like to know what I'm thinking about?

So perhaps the inefficiency of background entertainment isn't so counter-intuitive after all. In a sense, you are deliberately wasting brainpower, but it's brainpower that would have been distracting anyway. By simultaneously working and listening to music you can saturate your brain to the point where there's no extra processing left over for distractions. You could think of it less like a supercomputer wasting a general-purpose processing unit, and more like a company finding busywork for a special-purpose employee who has nothing to do at the moment.

That notion of special-purpose is key to this idea, though. Although I've found music is good for focusing, it's better the less interesting it is. Words are distracting unless I know the song well enough to ignore them, and podcasts or anything with informational content is right out. I've tried background TV before, and it's always been a total disaster. On the other hand, when I was doing my recent round of bird drawings, I listened to podcasts the whole time and found the experience very flowy. It was like the words and the drawing occupied two entirely separate processing paths.

I think there's an interesting direction there in figuring out combinations of activities that satiate your brain without overloading it. My evidence so far seems to suggest that visual and language work well together. Music and writing, or music and code, seem to be imperfect pairs, because I sometimes find the music distracting. I suspect music and drawing might not be saturating enough. I haven't experimented much with writing and visuals, but maybe some expeditions to The Outside World would be interesting to try.

There are definitely some limitations to this approach. One thing I've noticed is that when I'm really focused, any external stimulation becomes distracting, even music. I would take this to mean that I'm operating at the limit of my capacity already, which kinda contradicts the special-purpose idea. Perhaps a better model is general-purpose processing with a special-purpose preference. It might be easier to distract those errant processes than convince them to do something against their preference, but when there's long division to do everyone has to come to the party.

The most peculiar thing is that what seems to be most productive for me is to listen to music and then have the music stop without me realising. Maybe it's just a matter of keeping the extra processing occupied until the task builds up enough to fill it.

Swastika emoji

The rise of emoji is an interesting thing. Pictographs have a long history, but a concerning future. I've heard the idea that emoji represent a kind of dumbing down of language, but if anything I've noticed the opposite. Our alphabet is evolving into a mixed symbolic/representational one, which I think is even richer than what we had before. But an important question arises: if we're getting a new alphabet, who controls what goes in it?

As companies become like governments on the internet, a real issue is that what would be public spaces or public infrastructure in the physical world become private spaces built on private infrastructure. That has some unfortunate consequences for concepts like freedom of speech or due process, which aren't rights you have in private spaces, and thus aren't rights you have on the internet.

Beyond that, there's a question of responsibility. The government's not responsible for what you say or do in public, at least up until the point where it's breaking the law. You're free to be a jerk all day every day until the day you die. But what about online? Is Twitter responsible for hateful tweets? Is YouTube responsible for offensive videos? Is your browser responsible for letting you view them or upload them? Is your keyboard responsible for letting you create them? I mean, the answer seems like it's no, but more and more often it's becoming yes.

Internet companies have, on many occasions, invoked the "dumb pipe" defence. Oh, sure, that mean tweet is just another text field in our database; we have no particular responsibility for what it says. But increasingly these companies don't want to be dumb pipes, they want to be smart infrastructure: they want to provide algorithmically-curated experiences, encourage specific user behaviour through careful design, and build a virtual space that is both compulsion-inducing and monetisation-friendly. And once you're exercising that level of authorship, responsibility isn't far behind.

So there's no tank emoji, no rifle emoji, no drunk emoji and, at least on iPhones soon, no gun emoji of any kind. I should be clear, some of these symbols may still be part of Unicode, but phone platforms don't want them and they won't provide you with a way to type them. While these things are obviously parts of the human experience that people might want to express, the point is that Apple feels responsible for the emoji it provides. Who wants to be the phone manufacturer that shipped the twin towers emoji? Who wants to be the next q33ny?

And many users actually encourage this idea. They want platforms to be responsible for what you can say. Apple's decision to remove the gun emoji may well have been a response to a Twitter campaign linking it to gun violence. I mean, gun violence is bad, sure; that's not a terribly controversial position. But are we ready to say that guns shouldn't be something we can express? I should hope the irony is obvious: expressing a sentiment about guns whose end result is to make it harder to express sentiments about guns.

It is a strange time in the co-development of society, government, and corporations. I'm reminded of the Amusing Ourselves to Death comic, a comparison of 1984 and Brave New World. The argument is that Huxley was right, but I wonder if they both were. There will be an engine of control, but it will be one of our own design, embraced willingly. Rather than the boot stamping on a face, it will be a face lovingly caressing the boot, like a kitten with its master. People in the streets shouting "please, remove our ability to express harmful things, it's for our own good!"

That might seem farfetched, but if companies want to be responsible for what we express to achieve their business goals, and we want them to be responsible for what we express to achieve our social goals, what other outcome can we expect? A fun Fisher-Price world with no sharp edges or dangerous ideas, no sex in the App Store, no nipples on Instagram, nothing but a virtual Neverland ruled by a smiling corporate Peter Pan.

Everything in modulation

Food is never more delicious than after you've gone a while without eating. Happiness after a period of sadness is more intense, and vice versa. That feeling of relaxation after working hard is so good, but when you relax too much, doing work again can seem impossible. The transitions between these things seem in some ways more important than the things themselves. Why is this?

Modulation is the idea of varying one thing in relation to another thing to transmit information. It's the "mo" in "modem", and every wireless system uses modulation of some kind. Radios use analogue modulation, where the thing you vary is the frequency (FM) or amplitude (AM) of a carrier waveform at a certain frequency. Modern digital devices tend to use more complex systems, encoding information using a fixed set of particular frequencies (FSK), phases (PSK), or combinations of amplitude and phase (QAM).
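To make the analogue case a bit more concrete, here's a minimal numpy sketch of AM and FM. The sample rate, carrier frequency and 440 Hz "message" tone are just illustrative numbers I've picked for the example, not anything from a real radio standard.

```python
# A minimal sketch of analogue modulation: the message signal varies either
# the amplitude (AM) or the instantaneous frequency (FM) of a carrier wave.
# All parameters here are illustrative.
import numpy as np

fs = 48_000                              # sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)           # 10 ms of samples
carrier_hz = 10_000                      # carrier frequency
message = np.sin(2 * np.pi * 440 * t)    # a 440 Hz "message" tone

# AM: the message scales the carrier's amplitude
am = (1 + 0.5 * message) * np.cos(2 * np.pi * carrier_hz * t)

# FM: the message shifts the carrier's frequency, so it gets
# integrated into the carrier's phase
deviation_hz = 1_000
inst_freq = carrier_hz + deviation_hz * message
fm = np.cos(2 * np.pi * np.cumsum(inst_freq) / fs)
```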

One thing that all of these systems have in common is that they depend on a fixed reference called a carrier wave. Usually that's just a simple sine wave at a given frequency. In the old days, you would transmit the carrier along with the signal, but that's pretty inefficient. Instead, if the receiver knows the frequency and phase, it can just generate its own carrier wave locally.

But what happens when your reference isn't fixed? If the transmitter is moving relative to the receiver you get a Doppler shift, and even without that there are often small variations between the carrier waves generated by different hardware. There's every chance your wave and their wave will be slightly out of sync. Compensating for this is known as carrier recovery and is, uh, fairly complicated.

There's a pretty neat technique that makes this much easier called differential coding, where instead of looking at the absolute value of the signal, you look at the difference between its current and previous value. Or, to put it another way, you use the signal as its own carrier. It's still the same modulation idea, varying one thing in relation to another, but the two things are the same signal at different points in time.
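As a toy sketch of the idea (the function names, the bit pattern and the 0.7-radian offset are all made up for the example, not taken from any real standard), here's differential BPSK-style coding in numpy. The decoder only ever compares consecutive symbols, so it never needs to recover the carrier's phase:

```python
# Differential coding: each bit is encoded as a phase *change* rather than an
# absolute phase, so the decoder compares consecutive symbols instead of
# comparing against a reconstructed carrier.
import numpy as np

def diff_encode(bits):
    """Map bits to complex symbols: a 1 flips the phase, a 0 keeps it."""
    symbols = [1 + 0j]                      # arbitrary starting reference
    for bit in bits:
        symbols.append(symbols[-1] * (-1 if bit else 1))
    return np.array(symbols)

def diff_decode(symbols):
    """Recover bits from the phase difference between consecutive symbols."""
    changes = symbols[1:] * np.conj(symbols[:-1])
    return (changes.real < 0).astype(int)   # phase flipped => bit was 1

bits = np.array([1, 0, 1, 1, 0, 0, 1])
tx = diff_encode(bits)

# Simulate a receiver whose locally generated carrier is out of phase by some
# unknown constant amount: every received symbol gets rotated the same way.
offset = np.exp(1j * 0.7)
rx = tx * offset

assert np.array_equal(diff_decode(rx), bits)   # the offset cancels out
```

The point of the final assert is that the whole received signal can be rotated by an unknown amount and the decoded bits don't change: the carrier-recovery headache from the previous paragraph quietly disappears, because the signal is acting as its own carrier.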

It seems to me that, since we tend to lack any kind of global fixed reference, we also look for meaning in difference. It would be great if there were some kind of absolute reference that we could measure everything by, and in certain circumstances we can create one, but everything is relative in the end. It may be that our greatest strength as a species is our flexibility, so in some sense it shouldn't be surprising that we are optimised for change.

So perhaps it is better to aim for, instead of a life of constancy, a life of constant transition. If we concentrate on meaningful transitions between high and low, busy and relaxed, over and under-achievement, we can avoid the impossible task of maintaining one particular situation in perpetuity. And although that might be difficult to deal with, it is in its own way quite liberating. Rather than relying on some external signal to give us meaning, we get to make it for ourselves, riding our own carrier wave out to wherever it might take us.

A return to form

It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.
Edsger Dijkstra

Most programmers, even very good programmers, start by writing terrible code. Dijkstra called BASIC programmers "mentally mutilated"; he may have been thinking of the code I would someday write as a beginner. I hadn't ever heard of indentation, so I didn't use it. Subroutines scared me, so I just put all my code in one giant file. I didn't know how to make the body of an IF statement more than one line, so when I needed more lines... I just used more IFs. Needless to say, my code was a Lovecraftian horror I could barely understand even as I was writing it.
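To give a sense of what that looked like (my actual BASIC is mercifully lost, so this is a hypothetical rendering of the same anti-pattern in Python):

```python
# The "more lines means more IFs" anti-pattern: repeat the condition for every
# statement instead of putting several statements inside one block.
score = 150
level = 1

if score > 100: print("You win!")
if score > 100: score = 0          # reset for the next round
if score > 100: level = level + 1  # never runs: the line above already reset score

# What was meant, of course, was one condition guarding one block:
# if score > 100:
#     print("You win!")
#     score = 0
#     level = level + 1
```

Even at three lines the copies of the condition drift out of sync; at a few hundred lines, the horror writes itself.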

These days I've got if statements pretty well under control, and my first instinct when talking to a beginner programmer is to save them from the mental mutilation I inflicted on myself. Learn the form! Learn indentation and style! Learn code structure and comments and tests! Learn from my mistakes, young pup. Respect my long grey beard and glittering eye. Listen to my grizzled warnings, lest ye find yerself becalmed in a sea of spaghetti.

But the thing is, it wasn't all bad. At the time I was too new to understand that anything was wrong. My ambition hadn't yet scaled up to the point where it mattered whether my code was readable; I was happy if I could just get it to print some stuff. In fact, my ignorance was a source of great joy, because it made knowledge that much sweeter. When I found out about indentation, it was like an epiphany. Wow, I can understand code after I've written it! Having the chance to do things the wrong way made me appreciate the right way on a visceral level.

The problem with learning the right way first, then, is that it seems arbitrary. You learn the form, sure, but you don't know why the form exists. I mean, sure, someone can handwave at you and say "badly-structured code is a nightmare", but until you've run out of single-letter variable names you won't really understand what "nightmare" means. It's hard to be motivated by someone else's assurance that this is the right way when it doesn't feel any different from the wrong way.

On the other hand, it's obvious that learning the right way is more efficient. We don't teach athletes to run badly before we teach them to run well. We don't teach gymnasts to do shoddy backflips with bad form and then hope they eventually come to understand why good ones are important. What a waste of time! Just teach them the right way from the beginning and they'll figure out why it was right when they win. I think this is roughly the attitude we use with beginners as well, and it does make some kind of sense.

But athletics and gymnastics aren't for everyone. For starters, only a very small number of very motivated people manage to get very far in either. These people aren't motivation-constrained; they're skill-constrained. There's enormous time pressure: you've got about 10 years to win your medals and then that's it, so there's no room for screwing around. Most of all, you're facing off against other people who can out-compete you, so any inefficiency is a weakness your competition may not have. Plus these disciplines have been around a long time, and the form is pretty settled at this point.

And that last aspect is worth considering, because in programming the form is nowhere near as settled. Learning the form without understanding where it came from, without getting a visceral understanding of the problems it solves, makes it difficult to stray off the path of your existing knowledge. What if you need new form? What if there's a situation where the old form isn't relevant? Or what if you think you have a good reason to stop using the form, but really you just didn't understand it?

So, sure, if you're a professional athlete, or if you're trying to go from zero to a software development job in six months, learn the form. It'll be less fun, but you're not here to have fun, you're here to win. On the other hand, if you have the time, if this is a passion thing where you want to optimise for motivation, or if you're in this for the long haul and you want to end up with the long grey beard, glittering eye, and mental mutilation of hard-won experience, maybe it's not so bad to do it the wrong way first.

If nothing else, it'll rapidly reduce your desire to do it the wrong way in future.