Metacomputer

An idea came up today that's been floating around in my head for a while. I keep running into situations where no single computer I have access to has the exact mix of resources I need, and I wonder why running things across machines is still so difficult.

An example: I was recently working on some large files. I had to copy them around, write some custom code to process them, and then turn them into DVD images and upload them somewhere. The problem is that my home internet connection is too crappy to upload that much data in anything close to a reasonable amount of time.

So instead, I provisioned a cheap little ARM-based cloud machine in France. Unlike Australia, Europe has good internet, so uploading and downloading were no longer the bottleneck. But the latency from here is really high, so I had to awkwardly shuttle things back and forth, writing code on my local machine and running it on the remote one.

During the whole process I remember thinking how cumbersome it all was. It's great that I could do it at all, but nobody would describe it as seamless. I think if the Glorious Cloud Future is to occur, we need something better.

What I'd like to see is a kind of metacomputer: a computer built out of other computers. It would automatically distribute computation depending on the kind of resources required and the cost of transferring data between resource locations. The end result would be that you can add lots of different kinds of resources, and even do it dynamically, and the system turns that into the best computer it can.

In my example, it would recognise that transferring the large files is expensive (my uplink has no bandwidth) and transferring my keystrokes is expensive (each one pays the full round-trip latency), but transferring code is cheap. So the file processing would be allocated to the remote server, while the process that turns keystrokes into code (my editor) would stay on my local machine. However, if the server were much closer to me (but I still had crappy internet), maybe it would move all the computation to the remote server and leave my local computer as a dumb terminal.
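To make that concrete, here's a minimal sketch of the placement decision in Python. Everything in it is made up for illustration (the machines, the link costs, the task list): pin each input to the machine where its data lives, then brute-force the assignment of tasks to machines that minimises total transfer cost.

```python
from itertools import product

MACHINES = ["local", "remote"]

# Made-up cost per unit of data crossing the slow link between machines.
LINK_COST = {("local", "remote"): 8.0, ("remote", "local"): 8.0}

def transfer_cost(src, dst, size):
    return 0.0 if src == dst else LINK_COST[(src, dst)] * size

# Each task pulls inputs pinned to particular machines: keystrokes and
# code live with me, the large files live in France.
TASKS = [
    ("editor",     {"local": 5.0}),
    ("processing", {"remote": 500.0, "local": 1.0}),
]

def best_placement(tasks):
    """Try every assignment of tasks to machines, keep the cheapest."""
    best, best_cost = None, float("inf")
    for assignment in product(MACHINES, repeat=len(tasks)):
        cost = sum(
            transfer_cost(src, machine, size)
            for (_, inputs), machine in zip(tasks, assignment)
            for src, size in inputs.items()
        )
        if cost < best_cost:
            best, best_cost = assignment, cost
    return dict(zip((name for name, _ in tasks), best)), best_cost

print(best_placement(TASKS))
# -> ({'editor': 'local', 'processing': 'remote'}, 8.0)
```

A real metacomputer would also have to weigh compute capacity, latency and colocation, and it couldn't brute-force anything at scale, but the shape of the decision is the same: minimise what crosses the slow links.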

What's even more exciting is how well such a system could integrate with cloud server platforms. If the metacomputer can automatically redistribute computation when resources become available, there's no reason it couldn't automatically add more resources when needed. You could even give it a value-of-time figure: the amount you'd be happy to spend for each hour of processing time it saves you.
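That decision is just an expected-value comparison. Here's a hedged sketch with made-up numbers (the function and its parameters are all hypothetical):

```python
def worth_provisioning(hours_saved, value_of_time, hours_billed, hourly_rate):
    """Provision an extra machine only if the time it saves is worth
    more to you than what the machine will cost."""
    return hours_saved * value_of_time > hours_billed * hourly_rate

# Saving 3 hours of waiting on a $0.05/hr instance billed for 4 hours
# is an easy yes if your time is worth even a dollar an hour.
print(worth_provisioning(hours_saved=3, value_of_time=1.0,
                         hours_billed=4, hourly_rate=0.05))  # True
```

The same comparison works in reverse, too: when the savings drop below the bill, the metacomputer could shrink itself back down.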

It's such a shame our computer architectures have not changed significantly in the last half-century, even as the context we use them in has changed enormously. At some point something's gotta give, though, and when it does I hope metacomputation is where we end up.