Occasionally, in the course of my work, or just for fun, I end up learning about something new. I figured in the spirit of student teaching I would summarise the things that I learned. This week: transistors!
Most of my exposure to transistors has been in the context of "and that's how you get digital logic and then computers". I was surprised to learn they are substantially more complicated than that. The first rule of transistor club is that a transistor is not just a fancy switch. The second rule of transistor club is that what transistors actually are is way too complicated to keep in your head, so it's okay to think of them as fancy switches as long as you always remember how wrong you are for thinking it.
The most common kind of transistor is called a BJT, for bipolar junction transistor. They come in two varieties: NPN and PNP. P and N are layers of different kinds of semiconductor, so it's similar to a chicken sandwich vs a KFC Double Down. I never bothered to figure out what you use PNP for, but I'm sure they're pretty useful for something. NPN is the one you usually use if you want to make a fancy switch, so that's the one I used. There are various mnemonics for remembering which is which; I use "look it up on Google".
Theory break! This article does a good job of introducing the basics of transistors from scratch. Basically, the N and P layers form variable resistive barriers between them that change how much current will flow through one part of the transistor based on the voltage you apply to another part. For reasons that take two pages of ranting to get to, you can pretend that current through one part increases the current through another part. In other words, the transistor multiplies current by some amount, the "current gain", which is represented as hFE or β.
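To make that concrete, here's a toy model of that multiplication (my own sketch, with made-up numbers, not anything from the article). The one wrinkle worth showing: the collector current can't exceed what the load and supply actually allow, and once you hit that ceiling the transistor is saturated, i.e. fully "on".

```
#include <algorithm>
#include <cstdio>

// Toy model of a BJT used as a switch. The transistor "multiplies" base
// current by beta (hFE), but the collector current can never exceed what
// the load will actually draw -- at that point the transistor is
// saturated, i.e. fully on.
double collectorCurrent(double baseCurrent, double beta, double loadLimit) {
    return std::min(baseCurrent * beta, loadLimit);
}

int main() {
    double beta = 100.0;     // typical-ish hFE; varies wildly part to part
    double loadLimit = 0.1;  // say the load draws 100mA when fully on
    printf("%.3f A\n", collectorCurrent(0.0005, beta, loadLimit)); // 0.050 A: not saturated
    printf("%.3f A\n", collectorCurrent(0.0020, beta, loadLimit)); // 0.100 A: saturated, fully on
}
```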
The way you rig up a transistor to act as a switch is you follow this diagram. Or if you like words: hook the positive end of the thing you want to switch up to the positive end of your power supply, hook the negative end to the transistor's collector. Hook the transistor's emitter to the ground/negative of your power supply. Now to switch it on, you connect the transistor's base to a (smaller) positive voltage (like an I/O pin on an Arduino). Put a resistor in between the base and the voltage! If you don't do this your transistor will get really hot and then catch fire.
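On the microcontroller side there's nothing special going on. Assuming the wiring above (with the base resistor!), switching the load from an Arduino is just driving the pin. A minimal sketch, with an arbitrary choice of pin number:

```
// Minimal Arduino-style sketch. The NPN transistor's base is connected,
// through the base resistor, to this pin. HIGH turns the load on.
const int SWITCH_PIN = 9;  // arbitrary choice of I/O pin

void setup() {
    pinMode(SWITCH_PIN, OUTPUT);
}

void loop() {
    digitalWrite(SWITCH_PIN, HIGH);  // base current flows -> transistor conducts -> load on
    delay(1000);
    digitalWrite(SWITCH_PIN, LOW);   // no base current -> load off
    delay(1000);
}
```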
If you don't give the transistor enough current it won't switch on fully (aka, saturate), so you want a resistor that is large enough to stop the transistor from blowing up and small enough that it lets all the current through the thing you want to switch. How do you figure that out? You can do some maths, or just start with a high-ohm resistor and try smaller ones until it works.
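If you'd rather do the maths, it's short. This is my back-of-the-envelope version, not gospel: the base sits roughly 0.7V above the emitter when conducting, so the resistor sees your drive voltage minus 0.7V, and you want enough base current that β times it comfortably exceeds the load current. A common rule of thumb is a safety factor of 5-10, since hFE drops a lot in saturation:

```
#include <cstdio>

// Back-of-the-envelope base resistor sizing for an NPN low-side switch.
// Assumptions: V_BE ~= 0.7V when conducting, and we overdrive the base by
// a safety factor because hFE in saturation is much lower than the
// datasheet's headline number.
double baseResistor(double driveVolts, double loadAmps, double hfe,
                    double overdrive = 5.0) {
    double baseAmps = (loadAmps / hfe) * overdrive;  // base current we want
    return (driveVolts - 0.7) / baseAmps;            // Ohm's law across the resistor
}

int main() {
    // e.g. 5V Arduino pin, 100mA load, hFE of 100:
    // ~860 ohms, so round down to a standard value like 820.
    printf("R_B ~= %.0f ohms\n", baseResistor(5.0, 0.1, 100.0));
}
```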
You want to learn to read a datasheet for a transistor. Here's one for the 2N2222A. Specifically, look for VCEO and IC. These are the maximum voltage and current ratings for the thing you're switching. If you need more voltage or current than that, pick a different transistor or it will get really hot and catch fire. You also want to take your switching current (often 20-40mA for an I/O pin; check your board) and multiply it by the hFE value. That will be the maximum current you can supply to the thing you're switching. If it's not enough, you need a bigger transistor. If you need really big numbers, check out a Darlington pair, which is two transistors hooked together like a human centipede.
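Here's that datasheet sanity check as code. The figures below are in the ballpark of a 2N2222A but are illustrative only; check your actual datasheet:

```
#include <cstdio>

// Sanity-check a switching job against a transistor's datasheet numbers.
// The figures below are ballpark 2N2222A values, for illustration only.
struct Transistor {
    double vceoMax;  // max collector-emitter voltage, volts
    double icMax;    // max continuous collector current, amps
    double hfe;      // current gain (use a pessimistic value)
};

bool canSwitch(const Transistor& t, double loadVolts, double loadAmps,
               double driveAmps) {
    if (loadVolts > t.vceoMax) return false;        // too much voltage: fire
    if (loadAmps > t.icMax) return false;           // too much current: fire
    if (driveAmps * t.hfe < loadAmps) return false; // can't saturate: bigger transistor
    return true;
}

int main() {
    Transistor t2n2222a = {40.0, 0.8, 75.0};  // illustrative, not gospel
    // A 12V, 500mA load, driven with 20mA from an I/O pin:
    printf(canSwitch(t2n2222a, 12.0, 0.5, 0.02) ? "probably fine\n"
                                                : "pick another part\n");
}
```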
Lastly, there are many good reasons not to use BJTs for switching at all. If you can, use a MOSFET. A MOSFET is basically just a super buff ubertransistor that's better for switching in every way. The gate draws essentially no steady current, so you don't need the base resistor; they're more power efficient, and they can switch ridiculous amounts of voltage and current. If you're driving a motor, for example, a MOSFET is basically required because of the high current a motor needs. Also, if you have a motor with either a regular transistor or a MOSFET, don't forget a flyback diode across the pins of the motor, or else the energy stored in the motor's windings will produce a voltage spike that zaps your transistor to death when it switches off.
The only tricky thing about MOSFETs is that you have to make sure they will switch on at the voltage you can provide. In theory the datasheet specifies that voltage as a value called VGS(th), but that's just the point where the MOSFET barely starts conducting, so its maximum only tells you the minimum that you need, and you want to be a volt or two over that so that it turns on fully. If the datasheet specifies an RDS(on) with a VGS next to it, that means they tested it and it was fully on at that voltage. If that voltage isn't the one you want, you can use this technique to check the datasheet's "I-V curve": you want the voltage and current you're using to be on the flat part of the curve for your VGS. If it doesn't have a line for your voltage, don't use it. Keep in mind the graphs are "typical" (ie, optimistic).
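A quick, hedged version of that check in code. The two-volt margin is my own rule of thumb, not a datasheet number:

```
#include <cstdio>

// Rough check that a MOSFET will actually turn on hard at your gate voltage.
// vgsThMax is the datasheet's maximum VGS(th) -- remember, that's only where
// it *starts* conducting. rdsOnTestVgs is the gate voltage at which the
// datasheet's RDS(on) was measured; if your drive meets that, the part has
// been demonstrated to turn on fully there.
bool gateDriveOk(double driveVolts, double vgsThMax, double rdsOnTestVgs,
                 double marginVolts = 2.0) {
    if (driveVolts >= rdsOnTestVgs) return true;  // tested fully-on at this voltage
    return driveVolts >= vgsThMax + marginVolts;  // otherwise, threshold plus margin
}

int main() {
    // e.g. driving from a 5V pin, VGS(th) max of 2.5V, RDS(on) tested at 10V:
    printf(gateDriveOk(5.0, 2.5, 10.0) ? "should be okay (check the curves)\n"
                                       : "find a logic-level MOSFET\n");
}
```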
General advice: check everything you're told two or three times before you believe it. Lots of people know nothing about transistors, and even people who know some stuff about them are often working off information they learned in undergrad back before we beat the Soviets, or heard from someone's uncle who does electronics. Strive for either well-accepted theory or testable facts if you can. Don't trust any part recommendations that aren't backed up by the datasheet. Use a multimeter. Buy extras. Derate.
I promised to follow up on Red yellow green, a process I was using to visualise the status of my work. I put each area on a card and put the cards on columns representing succeeding comfortably (green), at risk of failure (yellow) and failing (red). I initially intended to do this followup after a week, but in the end it took a bit longer to get around to. The biggest complicating factor was that I ended up unexpectedly busy and most of the things I was working on ended up in the red.
That said, I feel that I can at least give it a positive review in that sense. I thought it would be a good way to visualise the ahead-vs-behind feeling and track how that changes as I work through tasks. Unfortunately, if you end up very behind on something, there's not much it can tell you. You look at your status and it's red. You think about what will change if you work on it, and it's still red.
But, in its own way, that has been quite useful. One of the biggest challenges is knowing when to cut your losses and move the goalposts. But if your goal is to move from red -> yellow -> green and it's clear that no feasible amount of work is going to get you out of the red, then the only way you're going to do it is to change the definition. In other words, the time to make a new plan is when the old one no longer contains a path to green. I feel like this visualisation made that insight much easier to reach.
The other thing is that, although it's pretty depressing to see a board full of red and realise most of it's going to stay red, I don't think it's any more depressing than the reality. In fact, distilling that situation into a visual form makes it much easier to reason about. Okay, so the whole board is red. What am I going to do about it? The board also makes it easier to visualise a solution.
One difficulty I've found is that you have to make an effort to keep the board maintained. As in, you have to remember to move the cards regularly as the status changes. That might not be such an issue if it were computerised, but with a physical board it's a bit inconvenient. The other thing is that, although I've been drawing arrows to indicate potential changes in status, it's not really possible to represent things on multiple time-scales. So if one thing is going to go yellow->red if I don't deal with it this week, and another thing is going to go yellow->red if I don't deal with it today, it's hard to represent both of those at once.
Still, all up I've found it useful, and it combines nicely with process pipelines, so I think I'm going to keep it going for a while longer. Next followup should be around the end of December.
A year ago, I wrote Decision hoisting, about a technique for connecting a decision-maker's actions with their consequences. Rather than saying "you can't do that", you can say "if you do that, it will have these consequences; if you do it this other way, it will have these different consequences". In other words, you hoist your technical decision up to a management level. Doing this lets you avoid being the reason why something can or can't be done. It can absolutely be done, it just depends what price you're willing to pay.
The interesting thing about this technique is that it's not really optimal. Ideally, you could just make the technical decision and be done with it. But often, in the real world, you need these constrained-rational solutions that are rational only because of the distorted landscape of business decision-making. Unfortunately, these strategies often end up being locally, rather than globally, optimal, which is something I think merits a bit more examination.
Normally, I'm an advocate of global optimisation; it's all very well to say "well, I did my bit", but if the outcome isn't one you wanted, and you could have done something more, what good does that do you? It may not have been your responsibility to do more, but if you had done more you would have gotten a better outcome. The responsibility is beside the point. You want to win, and that's hard to do when you create "not my fault" zones where you can safely and honourably lose.
But you have to be very careful with global optimisation, because it has none of the safety inherent in a more myopic attitude. If you're working on a project and you can tell that it won't be successful without a heroic effort from you, what do you do? The local-optimising answer is "I did my bit, not my problem", but the global-optimising answer is "I will do whatever I need to achieve the goal". And all of a sudden you've taken the entire project on your shoulders. You have made yourself responsible for its success or failure, no matter what is reasonable.
So let's back up a little. The real issue here isn't that you're attempting to globally optimise, it's what you're optimising for. Particularly in employment, it's easy to get mixed up about whose priorities you're satisfying. If for you, in your life, according to your preferences, this project is really the most important thing, then by all means do whatever you need to to get it done. But, if it's someone else's project, designed to meet their needs, this is rarely the case.
You have to be very wary of goal substitution, that pernicious process where you have a primary goal, realise you can achieve it by working towards a secondary goal, and you somehow end up chasing the secondary goal to the detriment of the primary one. You can often want a project to succeed for your own needs, but the thing to globally optimise is the success of your needs, not the success of the project.
These things naturally limit themselves if you just focus on your part. After all, you're limited to local consequences if you only take local responsibility. But once you start thinking globally, those limits fall away and you have to be very careful to make sure the global consequences you take responsibility for are really ones you care about. Otherwise you end up shouldering the burden of a global optimisation that doesn't even benefit you.
Attention is often considered to be a good thing. After all, if you want to do a good job at something, you should pay attention, right? I'd like to argue that attention is, if not a bad thing, at least a dangerous thing that can often be the harbinger of problems.
Something that needs your attention is something that is unstable (or, rather, negatively stable). That is to say, if you withdrew your attention from it, it wouldn't keep going the same way. That could be because it's a problem that you're paying attention to in order to fix, or an area you're paying attention to in order to improve. Either way, it only keeps going as long as you keep focusing on it.
By contrast, something that's stable doesn't much care whether you're paying attention to it. If you brush your teeth absent-mindedly while thinking about something else, you're not going to suffer a catastrophic breakdown in dental health. If you throw a frisbee around every now and again but don't take it seriously, you're not going to have your skills decline to the point where you may as well not bother. In some cases, something that's stable enough can be ignored entirely, though it's pretty difficult to make a system that stable over long periods of time.
The issue with attention is how limited it is. If you're only doing one thing and that thing requires constant attention, well, maybe that can work. But what about when another thing comes along? Not just work, but friends, family, personal development and general life tasks. When two things want attention, only one is going to get it. If they both need attention, then one of them is going to fail.
The other sense in which attention is limited is that it is taxing, and whatever resource it uses can run out. A task that you have to pay careful attention to the whole time you're doing it is a task you can only do for a tiny fraction of time compared to one you can just do inattentively. Compare how long you could spend stacking wood vs doing maths problems, or chatting vs talking through a complex idea. Things that require attention are harder and more fragile.
Put these parts together and you see how dangerous it is to rely on attention. Not only can you only do one thing that requires attention at a time, but you can't even do that thing very much. Which, again, isn't to say that attention is bad. But it is a cost that you pay for getting something done. If you pay that attention once, to gain some improvement or fix some problem, that's fine, but it's very easy to end up paying and paying again. And if all your attention's used up, how are you going to pay for the next thing?
I'm generally a big fan of learning by doing. Learning about something is very different from learning how to do something, and although your analytical self might be perfectly happy with understanding the way something works, you're never really going to know how to do it until your inner child understands it too. It makes sense to learn by doing real(ish) projects too, because that way your practice is as close to the performance as possible.
But there's an issue with this: learning by doing is serving two masters. One of them wants you to get the best outcome for the project, and the other wants you to get the best outcome for yourself. Consider: what if there's a slow, methodical way to do something that will teach you about proper form, or a fast and hacky way that will just get the job done? Sometimes (though not as often as people think), it makes sense to just do it the hacky way to get the project out the door, even though that's only teaching you bad habits.
Worse, learning by doing adds a kind of pressure to the project in the form of an external goal to satisfy. If you set out to learn, do a bunch of things, and learn about them... well, mission accomplished. But if you set out to learn by doing, you have the added complication of whether the project worked or not. After all, your project might have been successful in teaching you things, but if it also had the goal of doing something useful, you can't really feel like it succeeded if it doesn't do that thing.
I've noticed this issue in prototypes sometimes, and it's what led me to start thinking about the idea triangle and tire kickers and so on. I would often give in to the temptation of trying to learn something and achieve something at the same time. After all, that's twice as good! But you pay for it by being pulled in two directions. You're trying to serve two masters. One of them wants the best for you. The other wants what's best for the project.