Today I'm releasing Starboard, a fun demo I made to recreate the sound of Bag Raiders' sleeper internet hit Shooting Stars using Web Audio.
This was an interesting one to make, because the code was actually not that difficult, but trying to recreate a specific synth sound turned out to be way beyond my expertise. I originally intended it to be a 1-day project but it took a whole week to finish, of which around one day was spent writing most of the code, another papering over gaps in the Web Audio API, and one more making it work in Safari. The rest of my time was just fiddling with parameters over and over again trying to get it to sound right.
I tried a bunch of different effects, filters, EQ, compression, etc., but in the end the thing that sounded the best was just the basic synth, a bit of vibrato, and some reverb. It still doesn't quite have the thin cheesy sound of the original, but that may actually be for the best; a couple of times I had something that sounded close when I played it with the real track, but when I listened to it without any of the bass or backing synths it just came out hollow. In the end, I think just focusing on making it sound good and letting go of exact reproduction was what I needed to do to get a sound I was happy with.
Most of the internals proceeded fairly normally, though this was the first time I'd had to write an actual sequencer (as opposed to, say, the node-based scheduling code in Markov Technology). I just used timeouts, which was basically fine, except that the scheduler loses time under heavy CPU load, which can happen pretty easily on mobile. I was, however, really pleased with how I represented notes internally; I'd previously used frequencies or scientific pitch notation (like A4), but this time I used MIDI numbers and everything suddenly became way easier.
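MIDI numbers make things easier because pitch becomes plain integer arithmetic. As a rough illustration of the idea (a Python sketch, not code from the project):

```python
def midi_to_freq(note: int) -> float:
    """Convert a MIDI note number to frequency in Hz (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def transpose(notes: list[int], semitones: int) -> list[int]:
    """With MIDI numbers, transposition is just integer addition."""
    return [n + semitones for n in notes]
```

Each semitone is one integer step, so transposing a melody, building chords, or shifting octaves (12 steps) all reduce to addition, with frequency conversion deferred to the moment you actually schedule the oscillator.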
But the one thing I think I'm most proud of is the way I did the starfield animation. In Markov Contact I started trying something I think of as deterministic animation, where every frame is a pure function of time and a (usually random) initial state. I cheated for a few of the animations back then, but this time I really nailed it. The code is super small, loops seamlessly and, even better, if you hide and show the starfield (like I do a bunch), the stars "keep moving" even when the animation isn't running.
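The idea, sketched here in Python rather than the project's actual code (names and parameters are mine): generate the random state once from a seed, then compute every frame purely from that state and the current time. Hiding the animation costs nothing, because "resuming" is just calling the same function with a later time.

```python
import random

def make_stars(seed: int, count: int) -> list[tuple[float, float, int]]:
    """Random initial state, generated once: (x, y, wraps-per-loop) per star."""
    rng = random.Random(seed)
    # speed is a whole number of wraps per loop period, so the loop is seamless
    return [(rng.random(), rng.random(), rng.randint(1, 3)) for _ in range(count)]

def star_x(star: tuple[float, float, int], t: float, period: float = 10.0) -> float:
    """Horizontal position as a pure function of (initial state, time)."""
    x, _y, wraps = star
    return (x + wraps * t / period) % 1.0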
So all up, this was a bit of fun, and a good chance to refine some existing audio & animation techniques and re-remind myself of how hard audio production is. If you'd like to check out the code, it's on GitHub.
This is a write-up for Cardiograph, a social/technological experiment I ran as part of Culture at Work's event Science: A Body of Facts. Each entrant to the event had their cardiac activity measured by a homemade ECG and drawn in real-time on an ID badge that they wore throughout the event.
For a long time, I'd wanted to do something interesting with ECGs (as opposed to EEGs as in Project Brain or Mind Music). For one thing, collecting and processing ECG data is much less complicated, and for another, people feel a much more direct connection to their heartbeat than their brain activity; they can feel it, manipulate it fairly easily, and it has significant and obvious health implications.
To that end, I'd bought an Olimex EKG-EMG shield, basically a glorified differential amplifier with a huge gain and some filtering. The thing is super cheap, pin-compatible with Arduino, and works great with a little fiddling and smoothing on the software side. It comes with an example Arduino sketch which I used to stream the data over USB serial to my computer.
However, while I had the data, I didn't really have an application other than just gawking at my heart rate. The perfect opportunity presented itself when I got a chance to play with my friend Joseph's AxiDraw plotter. I used it for a tech demo with Scrawl, which gave me a pretty good idea of its capabilities. It turns out to be very easy to interface with via USB serial.
These two pieces together gave me the idea to make an ECG that would actually draw on paper. Although modern ECGs use digital displays and have fancy built-in analytical functions, the original ones were completely analogue: just some large electromagnets, conductive string, and a photographic plate. Reproducing this using the combined powers of modern digital amplifiers, high-precision stepper motors and an overpriced laptop seemed delightfully anachronistic.
I wrote the code entirely in Rust, a language I'd experimented with before but never built an entire project in. It was very nice to use compared to C, and extremely fast compared to Python – all up a good fit for the problem space of "low-level code that doesn't make you wake up in the night thinking about memory bugs".
Since I had to manage the plotter and ECG at the same time, I needed a way to manage concurrency. I could have used threads, which, at least in Rust, are moderately well-behaved. However, I'd heard a lot of interesting things about Tokio, a kind of Node.js-style event-driven programming framework. It tries to do a lot and, although the final code was pretty neat, I'm not sure the burden of the extra complexity was worth it.
The code itself needed to do three things: process the input from the ECG, turn that into smooth drawing commands for the plotter, and manage the plotter's command queue (too many commands would make it lag behind the ECG, too few and it would stutter while waiting). Of these, the input processing was the only easy one; I just did a rolling average of the ECG values which acted as a simple low-pass filter and got rid of basically all of the noise.
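A rolling average like that is just a crude low-pass filter: each output sample is the mean of the last few inputs, which smears out high-frequency noise. A minimal Python sketch of the idea (the window size here is arbitrary, not a value from the project):

```python
from collections import deque

def rolling_average(samples: list[float], window: int = 8) -> list[float]:
    """Moving average over the last `window` samples: a simple low-pass filter."""
    buf = deque(maxlen=window)  # oldest sample falls off automatically
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out
```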
Managing the command queue wasn't theoretically hard, just very annoying in practice. The plotter itself was actually quite well-behaved; it would send an OK in response to a command that it had received, and if you filled up the queue it would wait until the queue had space before sending OK. I just filled up the command buffer on start, then sent a new command every time I got an OK. I say just, but this was really tricky to get right.
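The flow-control logic boils down to: prime the plotter's buffer to some depth, then release exactly one queued command for each OK that comes back. A Python sketch of the pattern (the class and names are illustrative, not the actual Rust/Tokio code):

```python
from collections import deque

class CommandQueue:
    """Keep a fixed number of commands in flight; send one more per OK."""
    def __init__(self, send, depth: int = 4):
        self.send = send        # function that transmits one command
        self.depth = depth      # commands kept unacknowledged at once
        self.pending = deque()  # commands waiting to be sent
        self.in_flight = 0

    def enqueue(self, cmd):
        self.pending.append(cmd)
        # prime the buffer until `depth` commands are unacknowledged
        while self.in_flight < self.depth and self.pending:
            self.send(self.pending.popleft())
            self.in_flight += 1

    def on_ok(self):
        self.in_flight -= 1
        if self.pending:
            self.send(self.pending.popleft())
            self.in_flight += 1
```

The depth is the balance point from above: deep enough that the plotter never stutters waiting for a command, shallow enough that it doesn't lag behind the live ECG.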
However, that was nothing compared to the difficulty of figuring out the right drawing commands to send. The plotter only really accepts commands like "move x distance over y time", and if you make the distance too high, or the time too low, the motors will skip or shudder. And it's not just velocity, you have to think about maximum acceleration, and maybe even jerk.
With this, I had entered the dark and scary world of control theory. Too much acceleration or velocity and I got jerky, inaccurate movement. Too little and the plotter couldn't adjust fast enough to keep up with the ECG, leaving it oscillating wildly back and forth.
Truthfully, the right answer to this would have been to sit down for a week or two with some serious reference material, learn enough about control theory to really get a sense of the fundamentals of the problem, figure out how to apply that to the real-time inputs I was getting, maybe finally get good at differential equations... Anyway, I didn't do that. Instead, I wrote something that felt like roughly the right idea and tuned it until it seemed to work well enough. Some day, control theory, I'll come back for you.
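For the curious, "roughly the right idea" amounted to something like a clamped chase: every step, head toward the target position, but cap both how fast the velocity may change (acceleration) and the velocity itself. A Python sketch under those assumptions (the real code and its tuned limits differed):

```python
def step_toward(pos: float, vel: float, target: float, dt: float,
                v_max: float, a_max: float) -> tuple[float, float]:
    """One control step: chase `target` with clamped velocity and acceleration."""
    desired_vel = (target - pos) / dt
    # limit how much velocity can change this step (acceleration clamp)
    dv = max(-a_max * dt, min(a_max * dt, desired_vel - vel))
    # limit the velocity itself (so the motors never skip steps)
    vel = max(-v_max, min(v_max, vel + dv))
    return pos + vel * dt, vel
```

Tuning then becomes a trade-off on `v_max` and `a_max`: too high and the motion is jerky and inaccurate, too low and the pen can't keep up with the ECG.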
Ethically, it's worth mentioning that this ECG has very little medical validity. While it does definitely show a reliable signature generated from cardiac electrical activity, it's not a medical instrument, and unlikely to be useful for diagnosis unless you have a really obvious problem. The setup is just one lead (Lead I) compared to the 12 on a real ECG, and with the layers of smoothing and cowboy control theory, the waves end up squished at the extremes, making small fluctuations look larger.
One other point worth mentioning is safety. When you're putting electrodes on either side of someone's chest it's really important not to accidentally put high voltage through them. That means you need isolation, and almost all of the ways to do it for a serial device are expensive, inconvenient, or don't actually work. After a lot of research I can safely say that the only decent option for less than several hundred dollars is Adafruit's USB Isolator. Most USB isolators have different ratings for data and power; for example, Adafruit's is 5kV/2.5kV but a lot of the cheap ones on eBay were 5kV/1kV. I don't understand this; surely someone getting zapped to death doesn't care which wires the electricity came through?
Anyway, the end result worked great and didn't kill anyone. I set the plotter up on a whiteboard, using plain office paper to do practice runs and then drawing the real thing on perforated ID badge inserts. The whiteboard being metallic meant I could use magnets to hold the cards in place while the plotter did its thing. With one other operator fitting the electrodes and me lining up the cards for the plotter, things proceeded quite quickly; about 1-2 minutes per person, but I think we could have hit 30 seconds if we needed to.
The social element went even better than I anticipated. I figured the individual differences in heart rate and morphology would make for an interesting and distinct signature for each person, but I didn't realise how striking the differences in amplitude would be. Some participants had big, sweeping waves that took up most of the badge. Others had little more than a wiggly line. These were normal variations (I believe mostly due to differences in body resistance), but they turned out to be the most noticeable.
The people with very small waves took to calling themselves "flatliners" and commiserated together. One lady had an abnormally fast heart rate, which started a conversation about her heart condition and ongoing treatment. A few people came back later in the night to see if drinking had changed their graph much (it was about the same). People with particularly photogenic graphs got a lot of compliments, in a curious twist on the genetic lottery. Only one person declined; she'd had a scary ECG result before that turned out to be nothing, and didn't want to worry about what she'd see this time.
One of the most curious things about medicine is the way it turns people into data. Doctors aren't looking at you; they're looking through you to see what's wrong with your numbers. My goal with this project was to make that part of the conversation. What happens when you wear your heartbeat like a signature on your chest, putting your vital numbers on display for everyone to see?
It definitely made for some unique social experiences, but really drove home the tyranny of physical characteristics and the rapid onset of pseudoscience. In a nearby possible world, is it so hard to imagine the flatliners being stereotyped as wimpy and effete? Or looking for one weird trick to boost your cardiac amplitude? Or Ask Einthoven: I'm a regular dating a flatliner, can it work? The worst thing is, these aren't qualities you can develop, they're facts about you that you can't change.
Even if we restrict ourselves to the real universe and real science, we're not so far from a situation where DNA sequencing, facial recognition, social media, targeting databases and real name policies converge to eliminate the possibility of forging our own identities, leaving us with just one bio-authenticated persona, an indelible history tied to our very cells. The computer scientist part of me can see that this is quite clever, but the human part is horrified.
If you'd like to take a look at the code for this project, you can find it on GitHub, or check out my prototypes from July and August to see some drafts along the way.
If you're giving a presentation, making an argument, or proposing something, the most important thing to keep in mind is: what's the takeaway? It's not sufficient to just lay out information, you also need to provide the conclusion, reaction, or decision that the information implies.
Now, why is this? If you're providing information to someone, surely it's their job to decide how to respond? Perhaps that's true, but in reality, people don't tend to notice, or if they do they don't mind. Even if they disagree with your conclusion, it's much easier to build their response from yours than create one from scratch.
If you're providing information, your optimal strategy is to predigest that information so that your audience has to do less work digesting it themselves. Compared to undigested information, yours will spread further, evoke stronger responses, and be easier to understand.
However, as a consumer of information, this is deeply problematic. Firstly, the choice of predigested conclusion will frame your response, even if it doesn't dictate it. You may agree or disagree, but you're far less likely to find a totally unrelated conclusion or different angle on the idea once you have an existing conclusion to work from.
The second issue is that working out a conclusion for yourself is an important component of understanding. Much the same way that you can memorise the answers to a test, you can agree with a response without really understanding how that response follows from the information. In fact, you may not even realise when it doesn't.
This is particularly troubling in the context of modern journalism and social news, where the line between editorial and reporting is blurry, and the incentive structures of advertising and social media encourage making content as easy to digest and share as possible. You will very rarely find viral content that doesn't also tell you how to react to that content.
Even if the content itself doesn't contain a predigested reaction, the comments are a selection of ready-made reactions for you to choose from. It's a common pattern on popular social news sites to read the article, then read the comments, then decide what you think. But by that point it's more a matter of choosing who you agree with than forming your own opinion.
In conclusion, it may be better to stop reading before you reach the part where an author goes from providing information or making arguments into telling you how to respond. If the content is designed to make that impossible, perhaps it's not worth reading at all. And, furthermore, Carthage must be destroyed.
What do rationalists, atheists, egalitarians and polyamorists have in common? I mean, other than often ending up at the same parties?
These are all identities that come from not believing in something. For polyamorists, it's exclusivity in relationships. For egalitarians, it's differences in value between people. For atheists, it's gods. And for rationalists, it's anything that doesn't change your predictions.
However, the negative is a slippery beast. You could say that a religious person's identity also comes from not believing in something: atheism! So to describe this idea in more precise terms, let's return to the concept of additive and subtractive: you can call something additive if it tries to build from nothing to get to the desired result, and subtractive if it tries to start with some existing thing and cut away at it until the desired result is reached.
To see these different approaches in action with respect to belief, consider a scientific vs religious approach to truth. Science begins with a base of small, empirical truths obtained from observation, and attempts to build from that base to big truths about the universe as a whole. Conversely, religion begins with a big truth about the universe – there's a god and he does everything – and attempts to cut that exhaustive belief down into small everyday truths. If you ask why stars explode, a scientist might say "I don't know", while a religious person would be more likely to say "I know God did it, but I don't know why".
So to resolve that negative from earlier, "not believing in something" in this case means not accepting some particular subtractive truth you are expected to accept as a given and work backwards from. Instead, you attempt to start from nothing and build additively to that truth. And, in the case of these various non-beliefs, find you can't do it.
What god would develop in a society that never had a god? What hierarchy of human life? What sexual mores? The answers would depend mostly on the popular subtractive truths of the time. Not so with additive truths. In Ricky Gervais's words, if all the books were destroyed, a thousand years from now we'd have different holy books but the same science books.
Perhaps this outlook could be considered a feature of rationality, skepticism or the scientific method, but I think of it as the ur-discipline, the belief behind these various non-belief systems. Don't accept truths that you can't build to additively. Take as little as possible for granted. It is not possible to have a belief system without axioms, but treat all axioms with the utmost suspicion. If they cannot be removed, they should at least make the most modest claims possible.
There is something deeply appealing to me about this way of thinking. It's a kind of intellectual asceticism. A cosmic humbleness. Rather than treating the truth as a big book of mysteries given to us to decipher, we treat it as a structure of our own creation; small at the base, but expanding ever outward into the darkness.
This is a writeup for my work on Mind Music, a presentation at the inaugural Spotify Talks Australia. The talk was presented by Professor Peter Keller of the MARCS institute at Western Sydney University, organised by my friend and previous collaborator Peter Simpson-Young, and included a live visualisation/sonification of brain activity I built based on my prior Project Brain work.
The visualisation/sonification had two parts: one that would show general brain activity, and one that would show the brain responding specifically to music, synchronising with the beat in a process known as neural entrainment, sometimes more specifically called steady-state audio evoked potential (SSAEP), or audio steady state response (ASSR). Although the broad strokes of the data processing were similar to Project Brain, this new entrainment demonstration had some unique and challenging features.
We were attempting to reproduce a particular scientific result from Tagging the Neuronal Entrainment to Beat and Meter by Sylvie Nozaradan et al (pictured). Rather than the previous broad-spectrum visualisation, the goal here was to look for a rhythmic brain response that mirrors the BPM of a particular song. That is, if we're listening to 144 beats per minute, that's 2.4 beats per second, so we should find increased activity at 2.4Hz.
The original paper used data from multiple phase-locked trials averaged together. That is, the same experiment was tried multiple times, but crucially the timing was identical each time. You can imagine the equivalent situation with music: if you play ten copies of the same drum beat at the same time, you get a really loud drum beat; if you play them at different times, you just get an unintelligible jumble.
With our demo being live, we couldn't stop and start the recording. Instead, I attempted to come up with a live equivalent of this averaging process, which I called "epoch rolling average", or ERA. An ERA splits the data up into chunks of a certain size and averages them together, with the oldest chunks being discarded as new ones come in. The key part of this is that if the chunk size is a multiple of the frequency in question, then the chunks will end up synchronised with each other without requiring manual synchronisation.
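A Python sketch of the ERA idea (names and sizes are illustrative): if the chunk length is an exact number of periods of the target frequency, every chunk starts at the same phase, so averaging reinforces that frequency while uncorrelated noise cancels out.

```python
from collections import deque

class EpochRollingAverage:
    """Average fixed-size chunks of a live stream, discarding the oldest."""
    def __init__(self, chunk_size: int, max_chunks: int = 10):
        self.chunk_size = chunk_size
        self.chunks = deque(maxlen=max_chunks)  # oldest chunk falls off
        self.current = []

    def push(self, sample: float):
        self.current.append(sample)
        if len(self.current) == self.chunk_size:
            self.chunks.append(self.current)
            self.current = []

    def average(self) -> list[float]:
        """Element-wise mean across chunks; phase-aligned if chunk_size is a
        whole number of periods of the frequency of interest."""
        if not self.chunks:
            return []
        n = len(self.chunks)
        return [sum(c[i] for c in self.chunks) / n for i in range(self.chunk_size)]
```

The rolling part keeps it live: new chunks push out old ones, so the display tracks the last few seconds rather than the whole session.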
Another difficulty is that the Fast Fourier Transform, used in both the scientific paper and my original visualisation, has some restrictions that made it tricky to work with. It tells you all the different frequencies that make up a signal, but the frequencies are grouped into bins, whose precision depends on how long you collect data for. More data means more bins and thus more precision, but also more latency between when you start collecting data and when you see the output.
Complicating this further, the frequency each bin is centred on also depends on how much data you use. We could pick data sizes so that our frequency would be in the centre of a bin, but the "fast" in "Fast Fourier Transform" requires that we use a power of 2. We could make up for that by increasing the size until the bins were precise enough that one landed really close to our frequency, but that would, again, take longer and make it less real-time.
To get around this, I turned to a different kind of Fourier transform technique called the Goertzel algorithm. This is much less efficient than the FFT per frequency, but also allows you to pull out a single frequency at a time. Since in this case we only wanted a few, that meant I could ditch the power-of-2 restriction and make the frequency we wanted fall right in the centre of a bin.
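The Goertzel algorithm itself is only a few lines. Here's a Python sketch computing the power at a single frequency (this is the textbook form, not the exact code I used):

```python
import math

def goertzel_power(samples: list[float], target_freq: float, sample_rate: float) -> float:
    """Power of a single frequency bin via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)  # nearest bin for this block length
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2  # second-order resonator update
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
```

Because `n` can be any length, not just a power of 2, you can choose it so the target frequency sits exactly at a bin centre.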
Beyond the technological challenges, there were some interesting new parts to the visualisation. Most of the work for this was done in Unity by Alex Lemon at Rh7thm. I provided a specific data feed for the visualisation that included the colours, animation speed, amplitude and phase for each brain segment, and then he used those parameters to animate a 3d brain model. There was a lot of fine tuning involved, but the end result ended up looking really nice.
As for the sonification, a lot of that was based on my previous pentatonic mapping, but with a lot more tuning to make it sound less shrill and more chill. This pentatonic sonification was used for the first part of the presentation, where we talked about general brain activity, but we also wanted something less ethereal and more rhythmic for the beat detection demonstration.
What I ended up doing was a low bassy frequency with a kind of tremolo and wobble filter on top of it. To make that work properly, I needed to make sure the bass synced with the music, so I had to add some code to cue the music from within the sonification, and only on an even multiple of the beat. I used Web Audio for all of this and, although it got a bit tricky to keep everything orderly with so many components, the flexibility it gave me was totally worth it.
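The cueing logic boils down to quantising the start time to a beat grid. A Python sketch of the arithmetic (the real version worked in Web Audio's `AudioContext.currentTime` timebase, and the multiple here is illustrative):

```python
import math

def next_cue_time(now: float, beat_period: float, multiple: int = 4) -> float:
    """Earliest time >= now that lands on a multiple of the beat,
    so the music starts in phase with the bass sonification."""
    grid = beat_period * multiple
    return math.ceil(now / grid) * grid
```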
The goal for this ended up being an interesting mix of science and theatre; on the one hand, we were attempting to build on a genuine scientific discovery in an interesting and novel way. On the other, we were there in the spirit of showmanship, trying to make an entertaining audiovisual complement to Professor Keller's presentation.
So how well did we succeed? It definitely wouldn't rise to the level of real science, not least because rather than starting with "how do we test a hypothesis?" we were starting with "how do we make something that looks good?" The way the visualisation was designed and tuned basically guaranteed that something would happen, though it would be more intense the better the beat was detected. The signal from our consumer-grade EEG was pretty noisy, and it could be that what we visualised in the end was as much noise as it was neural entrainment. On the other hand, all of the processing we did was legitimate, just not provably so.
But I would say its value as an entertaining and informative visualisation was undeniable. The crowd had a good time, the presentation went smoothly, and the technology all survived a live stage demonstration, despite some terrifying last-minute wireless issues. I had a lot of backup options ready to go in case something failed and, even though I didn't need to use them, having them there really took the edge off the live stage performance.