It's difficult to overstate how good Daniel Kahneman's Thinking, Fast and Slow is. It has so much to offer for understanding human behaviour and decision-making. One particularly eye-opening phenomenon described early in the book is attribute substitution, where you take a hard question and replace it with an easier, hopefully equivalent one.
If you're asked "is Dave nice?", a good answer would involve some kind of deep analysis of Dave's character, but maybe that's too hard to do on the spot. Instead, you substitute an easier question, like "can you easily remember a time when Dave did something nice?". So you answer the substitute question but, crucially, you still feel like you're answering the original one. This optimisation works well most of the time, but can lead to some pretty wacky results when it fails.
A similar idea I've been thinking about is goal substitution. Let's say you want to have fun. One good way to do that is to call up some friends, go out and have a good time. But maybe you don't do that, and instead lie on the couch all evening watching TV or reading junk internet. If you tried both options, you'd rate the first one as much more enjoyable. So why don't you do it? My theory is that you replaced the goal of "do something fun" with "do a leisure activity". The second goal is easier than the first, so you achieve it instead.
This also happens with work. You want to get some good work done, but it's easy to swap that goal for "do something that feels like work". The problem is, lots of things can feel like work even if they aren't work. Reading emails, checking up on news, researching some technology you might use – these things definitely feel like work, in the sense that they're mentally stimulating and related to work, but may not actually move you closer to your real goal.
In both cases, the issue isn't necessarily redefining your goals. Often that can be useful, to avoid overloading yourself or taking on an unnecessary amount of context. It's fair to redefine "go to space" as "assemble and manage the best team of people for going to space". The problem is when this happens subconsciously. You still feel like you're achieving the original goal when in fact you're doing something different. Much like with attribute substitution, this works well when it works, but can misfire badly.
I think goal substitution is a particular issue because a lot of entertainment doesn't have to be fun, merely compelling. Entertainment creators mostly measure consumption, not enjoyment, so in a sense entertainment has evolved to target the "feels like leisure" substitute goal very effectively. With such a glut of fun-like and work-like activities available, it's very easy not to notice whether you're actually having fun or doing work.
As for solutions, the best I can offer is the same advice I've heard about attribute substitution: make sure the decision takes time. Substitution works because it takes a hard problem and gives you an easy alternative. If you force yourself to spend five minutes thinking about "is Dave a nice guy?" you won't feel the urge to substitute because there is no easy alternative.
Similarly, taking time before doing something to figure out if it meets your actual goal should remove the allure of the easier substitute goal. I admit that's easier said than done; mostly the decision of what to do next happens on autopilot. We seem particularly prone to optimisation, even when it does us more harm than good.
There's a trap I fall into at times, particularly when I've agreed or planned to do something without really thinking the consequences through. What happens is I intend to, say, reply to an email, but I don't intend to reply to it right now. However, later on I still don't intend to reply to it right now. In fact, through a series of decisions not to do the action now, I never do it at all, and all the while I'm still convinced that I will. I call this paradoxical state of affairs induction-blindness.
Induction, in the mathematical sense, was best described to me as a three-step dance: if I eat one banana [1], and every time I eat a banana I eat another banana [n->n+1], then I will eat all the bananas [all n]. It's a kind of sister to recursion, in the sense that you can build a proof for any n by recursively applying the second step. What makes induction interesting is that it's more than just repeatedly applying that step; it's a proof from the fact that you could. In a sense it's a kind of meta-proof, a statement about the system itself.
So, applied to a goal, induction-blindness is a failure to go meta. It's thinking about the goal and the steps, but not realising that your system for getting to the goal from the steps doesn't work. If I don't feel like replying now [1], and later I'll still feel like I do now [n->n+1], then I will never write the email [all n]. Despite those steps being trivial and obvious, I often miss that crucial step and fail to induct appropriately.
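Spelled out in the usual notation (this is just the textbook induction schema, nothing specific to this post):

```latex
% Induction schema: a base case plus a self-perpetuating step
% licenses a conclusion about every n.
P(1) \land \bigl(\forall n.\ P(n) \Rightarrow P(n+1)\bigr) \Rightarrow \forall n.\ P(n)
```

Take P(n) to be "on day n I put off the email". The base case holds (I'm putting it off today), the inductive step holds (nothing about tomorrow will be different from today), and the conclusion follows: the email never gets written. The blindness is to that third step, the one the first two jointly license.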
Perhaps this is an inevitable weakness caused by the mismatch between formal logic and fleshy brain reasoning, but I still think there are ways to recognise it. Most crucial is the inductive step, the n->n+1. Notice that it is perfectly reasonable to put off the email if I'm currently being eaten by an alligator. Trying to write an email then would be distracting and counterproductive. But the difference is that the alligator situation doesn't recurse. There's no reason to think that one alligator attack will mean another alligator attack, unless you're some kind of alligator farmer or in one of those infinite Greek mythological punishments.
The key, then, is to recognise when a situation is self-perpetuating in that same way. Induction-blindness comes from the mistaken belief that the passage of time will, by itself, make things different. But if tomorrow is going to be the same as today, then anything you don't do today you're not going to do ever.
I started wondering something while messing around with those little ESP chips: how hard would it be to make a server that responds with a particular message on every protocol? I mean, you'd set it up to say something like "robots rule OK", or do its best to deliver this hilarious dog picture. And then you could connect to it via anything: web, mail, telnet, ssh, dns, gopher...
Obviously every protocol would be pretty difficult, because there are just so many, and a lot of them you wouldn't even be able to find documentation for. But maybe the entire well-known port range is doable. Depending on where you look, that's between 250 and 1000 services, which, at first blush, seems pretty difficult. But I bet a bunch of them would be trivial, and after a while you'd find a lot of similarities.
A silly project, maybe, but it'd be pretty fun to go spelunking through the docs for all these old protocols and understand them well enough to pull together a hello world style implementation. Plus, who knows, maybe someday you'll be in a situation that desperately needs the ability to transmit a dog picture over pop2.
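For a flavour of how cheap the trivial cases could be, here's a rough sketch in Python – purely illustrative, not a real project. It answers every TCP connection on a handful of well-known ports with the same fixed message. (Binding ports below 1024 needs root, and anything with a real handshake, like ssh, would need far more than this.)

```python
import socket
import threading

MESSAGE = b"robots rule OK\r\n"

def serve(port):
    """Answer every TCP connection on `port` with the same message."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.listen(5)
    while True:
        conn, _addr = sock.accept()
        try:
            conn.sendall(MESSAGE)
        finally:
            conn.close()

# A few well-known ports where a bare text reply is at least plausible:
# echo (7), daytime (13), qotd (17), finger (79).
for port in (7, 13, 17, 79):
    threading.Thread(target=serve, args=(port,), daemon=True).start()

threading.Event().wait()  # park the main thread forever
```

Point netcat at any of those ports (`nc localhost 13`) and the message comes back. Daytime and qotd genuinely are about this simple; most of the other 250-odd services would take more work, but probably less than you'd think.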
It's no secret that developers, especially web developers, are often trend-obsessed, and every bit as flighty as fashionistas or foodies. I can't think of a single library or framework I use that's older than a few years, and some are only months or weeks old. Developer communities tend to move quickly too. For a long time developers hung out on BBSes, then newsgroups, then Slashdot, Reddit, and these days Hacker News. I think whether there'll be a successor to Hacker News says a lot about how trend-driven developers are.
Many people would point to all this churn as pointless change for change's sake, much like any fashion: just chasing novelty. Others would say that it's just a fast-moving field, and these are genuine improvements to the state of the art. I actually think both of those are true, and I'd like to suggest a third factor: trendiness as a personality test. Constant change provides a fertile environment for testing how adaptable, and how comfortable with risk, a developer is.
Web development and startup culture are often predicated on the idea of rapid, unpredictable change at the business level. This is the culture that brought us the "pivot", a restructure executed at the speed of a ballet turn, and the "lean startup", a business that figures out what product to build after selling it to customers. A developer who can succeed in that environment would have to be adaptable and embrace risk almost to the point of parody.
You could filter for that kind of person by asking them, of course, or by hoping that the natural sieve of the industry would filter out people who weren't suited to it. But why rely on that when you can do so much with just your choice of programming language or framework? Pick a language that changes quickly and only developers who can adapt to rapid change will work for you. Pick a framework that requires ongoing learning to keep up with and you'll get developers who constantly learn.
If you're lucky, your technology will mature as your company does and you'll be able to keep using it. But not always. Twitter famously made the transition from a Rails backend to Java and Scala as their company got too big to failwhale. This leads to the reverse situation: as a larger, more risk-averse company you can use the same filter to reject the hotheads and fashionistas. They're not going to want to work with your ten-year-old mission-critical Java stack.
And a good thing too. I'm a big fan of Node.js, but the day I see it on the ISS is the day I uninstall Kerbal Space Program.
I learned an interesting thing recently, which is that surgeons don't do surgeries that they don't want to. I mean, obviously you can choose to just not do your job at any job, but surgeons don't get fired for refusing to do a surgery. In fact, it's an important bit of dialogue between physicians and surgeons: what would it take for you to be willing to do the surgery? Nobody orders a surgeon around because, when it comes down to it, they're the ones who put the knife in, and whatever happens afterwards is on their conscience. Nobody else can take that burden, so nobody else can tell them what to choose.
The same is true of pilots. A pilot is considered in command of the plane; they are directly responsible for every life on board. A pilot's decision trumps anyone and everyone else's. Air Traffic Control can say "don't land yet" and the pilot can say "it's my plane and I'm landing, figure it out". Doing that without a good reason is likely to lose you your pilot's licence. However, it's not only acceptable but obligatory if the situation merits it. As a pilot, those are your lives in the back of the plane, and nobody else can absolve you for what happens to them.
But software does not have this same sense of sacred responsibility. More often the conversation looks like developers saying "we shouldn't do it this way", the management or client saying "well we want you to do it that way", and the developers saying "okay, your funeral". Usually that is a figurative rather than literal funeral, and just means losing money or time. But there are famous examples of the other kind too. As a developer, can you really say you are not responsible for the bad decisions you accept? Are you not wielding the knife or holding the controls?
The current industry consensus says no, developers are not like pilots or surgeons. The responsibility for bad decisions lies with management, and you can feel safe in the knowledge that someone else is liable for the bad code that results. Perhaps this makes sense in the classical programmer-as-overpaid-typist environment, where your job was not to think but to turn someone else's thoughts into code. How can you be responsible if you are just one of a hundred thousand code monkeys banging away at Big Blue's infinite typewriter farm?
But modern software development is not like that. Developers are expected to be autonomous: to understand requirements, plan, design, make decisions, build, test, rebuild, deploy and demonstrate. Today's developers are more like pilots or surgeons than anyone cares to admit. They have particular professional knowledge and skills that nobody else has, which gives their decisions a moral weight – a responsibility. If that professional knowledge says "this decision is a bad decision", that developer is every bit as obligated to stand up for their profession and refuse to do the work.
Perhaps that seems overdramatic, but software is growing faster and doing more than any industry in the last century. It's hard to even find something that can't be ruined by bad software. The software in your batteries can burn down your house. The software in your smoke alarm can turn your life into a dystopian horror film. The software in your phone can monitor every sound and movement you make. The software in your car can stop your brakes from working. The software in the cloud can leak your naked photos, arbitrarily remove your data or lock you out of it, and reveal your personal information to repressive governments.
The question isn't whether the people who make these things should be considered as professionally responsible as a pilot or surgeon. The question is: how can you even sleep at night knowing that they aren't?