I made this today while experimenting with cellular automata parameters, and I found it uniquely beautiful. After a while the patterns stabilise and start to look like a kind of constantly-moving circuit diagram. I call it Chip. The source is on GitHub if you're that way inclined.
Do you say "made" or "discovered" with cellular automata?
I've been thinking recently about issue tracking for my various projects. No doubt a lot of it is going to happen on GitHub because that's where most open source stuff happens, but I feel like pinning my projects exclusively to GitHub is a bad idea. On a small scale, having a little TODO file or something would be fine, but what about on a larger scale?
What I'm thinking about is making a special "issues" branch, disconnected from the rest of the tree. It has one file per issue in some kind of structured format, probably JSON. That file contains data about the issue - a description, tags, etc. Comments are commit messages with optional changes, with one separate thread of commits per issue file. All the threads are octopus merged at the end and that forms the HEAD of the issues branch.
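For concreteness, here's roughly what I imagine one of those issue files looking like, sketched in Python. Every field name here is invented on the spot; the real format would need a lot more thought.

```python
import json
from datetime import datetime, timezone

# A hypothetical single-issue record for the "issues" branch.
# All field names are placeholders, not a settled schema.
issue = {
    "id": "0001",
    "title": "Automata saturate to solid white with some parameters",
    "description": "Certain parameter combinations fill the whole grid within a few ticks.",
    "tags": ["bug", "parameters"],
    "status": "open",
    "created": datetime.now(timezone.utc).isoformat(),
}

# One file per issue, each on its own thread of commits,
# e.g. issues/0001.json on the disconnected branch.
with open("0001.json", "w") as f:
    json.dump(issue, f, indent=2)
```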
Then I could make a two-way gateway that converts GitHub issues into this format and back, and a couple of command line and web tools to use it natively. Issue nirvana!
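The GitHub-to-native direction of that gateway could be fairly mechanical. A minimal sketch, assuming only fields the GitHub issues API actually exposes (number, title, body, labels, state, created_at) and mapping them onto the made-up schema above:

```python
def github_issue_to_native(gh: dict) -> dict:
    """Map a GitHub REST API issue payload onto the made-up native format above."""
    return {
        "id": f"{gh['number']:04d}",
        "title": gh["title"],
        "description": gh.get("body") or "",
        "tags": [label["name"] for label in gh.get("labels", [])],
        "status": gh["state"],        # GitHub reports "open" or "closed"
        "created": gh["created_at"],
    }
```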
It's pretty convenient that brains seem a lot more tolerant of nonsense than computers. Specifically, it's fortunate that, however we're wired, "this statement is false" doesn't send us into paradoxical paroxysm. It's not clear that an artificial intelligence would necessarily fall foul of that problem either, but there are a lot of ways in which the inherent rigour of computers seems to make things more difficult than the fuzzy logic and forgiveness of our fleshy processing units.
On the other hand, I think there are some interesting parallels between the failure modes of formal systems and the failure modes of our informal equivalents. In (non-exotic) mathematics, division by zero is undefined. On a computer, division by zero will variously return infinity, blow up, or return a null. Luckily we are not affected by such trivial problems! Well, maybe...
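A quick illustration in Python, with NumPy standing in for raw IEEE 754 floats; which fate you get really does depend on the language and the number type:

```python
import numpy as np

# "Blow up": plain Python integers refuse outright.
try:
    1 / 0
except ZeroDivisionError as e:
    print("blew up:", e)

# "Infinity": IEEE 754 floats carry on with inf (NumPy warns but complies).
print(np.float64(1.0) / np.float64(0.0))   # inf

# And 0/0 gives nan, which is about as close to a null as numbers get.
print(np.float64(0.0) / np.float64(0.0))   # nan
```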
A wonderful reddit comment from a while back defined the "zero day", a day where you make no progress towards your goals. The author insists that zero days are the absolute number-one thing to avoid, and my experience agrees: they are uniquely demoralising. So why is this? Obviously to some extent it just feels bad to not achieve goals, but why does it feel so much worse to get nothing done than to get nearly nothing done?
My theory is that we extrapolate out from our present conditions when we think about the future. So our information about the future depends on our information about the present. The less we're doing in the present, the less information we have. If we work all day we have a fairly robust estimate of how much work we can achieve in a week. If we work for an hour that estimate is less reliable.
But if we don't do any work at all, we have no basis for a prediction. Our estimate is undefined: a divide-by-zero.
I've been prototyping some stuff for the ambient cellular automata game, and it never ceases to amaze me how much like alchemy messing with cellular automata is. Too much CREATE_THRESHOLD and everything explodes in brilliant white. Too much DESTROY_THRESHOLD and it all disappears.
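To make that concrete, here's a stripped-down sketch of the kind of rule I mean. CREATE_THRESHOLD and DESTROY_THRESHOLD are the only names taken from the prototype; how they're used here is a guess, and everything else is a stand-in.

```python
import numpy as np

# The two names below come from the prototype; the rule itself is a toy
# stochastic automaton, kept deliberately simple.
CREATE_THRESHOLD = 0.30   # chance a dead cell with live neighbours switches on each tick
DESTROY_THRESHOLD = 0.05  # chance a live cell switches off each tick

rng = np.random.default_rng(0)

def step(grid: np.ndarray) -> np.ndarray:
    """One tick of a toy stochastic automaton on a wrapping grid."""
    # Count the eight neighbours of every cell by rolling the grid around the edges.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    draw = rng.random(grid.shape)
    born = (grid == 0) & (neighbours > 0) & (draw < CREATE_THRESHOLD)
    died = (grid == 1) & (draw < DESTROY_THRESHOLD)
    return np.where(born, 1, np.where(died, 0, grid))

# Crank CREATE_THRESHOLD up and the grid saturates to solid white;
# crank DESTROY_THRESHOLD up and everything winks out.
grid = (rng.random((64, 64)) < 0.2).astype(int)
for _ in range(200):
    grid = step(grid)
```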
It reminds me a lot of the precarious nature of our own universe, where the various physical constants would prevent us from existing at all if they were only a little bit different. After messing around for a while, I got the above parameters to make something fairly lifelike and aesthetically appealing. It certainly wasn't easy, though!
From a game design perspective, I think a big challenge is going to be minimising the amount of arbitrary fiddling while still allowing for a lot of flexibility in changing the automata behaviour and eventual results. I can now fairly reliably explain why gods are so capricious in all our mythology: they're irritated that they spent so long tuning physical constants and the universe still doesn't work like they want.
Also, if you're the cellular automata I just made and you're sentient and coming to this page for answers, I'm really sorry.
A friend pointed me to an interesting concept a while back called bounded rationality. In short, it's a way of asking how optimally something with a limited capacity can make decisions. While a supercomputer could run complex calculations or simulations to determine the best course of action, our brains have a much smaller computational ability. Instead, we rely on various tricks and shortcuts that, although not strictly optimal, may be the best we'll get under the circumstances.
I've heard fairly often that democracy is a similar kind of best-we'll-get system: "the worst form of government, except for all the others," as the line usually attributed to Churchill goes. Although more autocratic systems can be more effective and powerful in the short term, they always seem to go bad after a generation or two. Or to put it another way, the best case of an autocracy can be better, but the worst case is much worse.
I've also heard it said about capitalism as a social system. We could certainly do some pretty amazing things if everything wasn't so focused on strict exchanges of value. At least, in the best case. Much like with political systems, though, the worst case of capitalism seems positively inviting compared to the alternatives. To me, what capitalism does best is turn a whole lot of individual selfish actions into a fairly workable system. It doesn't require a lot of trust.
From these examples and others, it seems evident to me that the way we design societal structures is a kind of bounded rationality writ large. Perhaps in an ideal world we could have a perfectly rational society that ensures the best possible outcome for each member, but our own limitations make that impossible. Instead, we have to try to make the best society we can on the back of those limitations.
That said, it's also worth considering that our limitations aren't permanent. As our technology and our culture develops, some of those assumptions will be invalidated and those old infeasible systems may stop seeming so infeasible after all.
At the rate our technology is moving, maybe it's already happening.