It's instructive to think about how people's relationships with computers differ from their relationships with other machines. Car engines, for example, or typewriters, or mechanical watches. These are all machines that people don't necessarily understand, but there's something nonetheless fascinating about them. You can watch an engine at work, see the cams turning and pistons moving, hear the roar of combustion, and, though the details might escape you, come away with a certain level of understanding just from watching.
Electronics, on the other hand, don't generally move or make noise. Instead, they make electromagnetic waves, and where's the fun in that? You can't get any sense of what a circuit is doing because the whole thing runs on invisible magic pixies. Sure, you can rig up an oscilloscope or a multimeter, or just add some blinky lights, but you're still only indirectly observing the system. There's no equivalent to the direct observation you get with mechanical things, no equivalent to just watching everything work.
With computers, we had the chance to fix this. After all, a computer can be any kind of machine we want; it can be transparent like a mechanical device, or opaque like an electrical one. Unfortunately, perhaps because it was easier, perhaps because the early programmers were mathematicians and electrical engineers, we ended up going down the opaque path, and we've never really recovered. In theory, the internal operations of any program are easy to inspect at any level, from the raw instructions as they hit the CPU through to system calls, function calls, and application-level debug output. But none of these are really enough.
The various kinds of tracing and debugging are akin to putting a multimeter on your circuit. You're getting a proxy for what's going on, an indirect representation that you can use to infer the inner workings, but not the inner workings themselves. The raw instructions are probably closest to a direct representation, but here we run into a difficulty with the definition of a program. After all, in some sense a program is the instructions hitting the CPU, in another sense it's electrons bouncing around, in another still it's quantum probability spaces interacting in a surprisingly deterministic way. But these things are implementation details. The way a programmer thinks about a program is in abstractions of action, individual functions or modules that represent some behaviour. And what ability do we have to view these functional units in operation? None.
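To make the multimeter analogy concrete, here's a minimal sketch in Python (the language choice is mine, not the original's) using the standard library's `sys.settrace`. Even this, the most direct hook a tracer gets, only reports *events about* functions, call and return notifications, never the behaviour itself; you still have to infer the inner workings from the event log:

```python
import sys

log = []

def trace_calls(frame, event, arg):
    # A trace hook receives events (call, line, return, ...) about
    # each frame. We record only call/return: an indirect, event-level
    # proxy for the program's behaviour, not the behaviour itself.
    if event in ("call", "return"):
        log.append((event, frame.f_code.co_name))
    return trace_calls  # keep tracing inside this frame too

def square(x):
    return x * x

def sum_of_squares(xs):
    total = 0
    for x in xs:
        total += square(x)
    return total

sys.settrace(trace_calls)
result = sum_of_squares([1, 2, 3])
sys.settrace(None)
```

Afterwards, `log` holds a call to `sum_of_squares`, three call/return pairs for `square`, and a final return, a flattened stream of events from which the watcher must reconstruct what actually happened. That reconstruction step is exactly the mental simulation described below.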
I think of this as the silicon curtain, the barrier between the conceptual operation of a program and its observable behaviour. Programmers only get around this through years of training and practice. They learn how to turn indirect observations into hypotheses by mentally simulating the computer. Obviously, this significantly limits the scale of programming we can accommodate; there's such a high bar that the idea of casual programming is kind of a joke.
Programmers aren't the only ones affected. Users, too, suffer from the utter opacity of software. Sure, you might not be able to fix a mechanical device, but if you see a bit that used to move and it's not moving now, you have a pretty good idea of what's wrong. By contrast, with software even localising the problem is enormously difficult. The most commonly reported problem is "I dunno, it used to work and now it doesn't".
Beyond the practical concerns, I think the silicon curtain is a tragic waste of beauty. It's hard not to lose yourself in the beautiful and intricate operations of a mechanical watch. The coiled springs and spinning cogs, the exquisite engineering and precision of each part moving against the next. And, mounted atop that Rube-Goldbergian complexity, the simple result: one tick followed by another, and a hand sweeping relentlessly forward.
Computers have all this beauty and more, but behind the silicon curtain it can't be seen. We're like the EM-deaf Earthlings in Asimov's "The Secret Sense", straining to perceive something just out of reach.