
User model

An interesting thought: as we've made our devices smaller and more integrated, we've also made them harder to interact with. Getting around these form-factor issues has meant branching out into all sorts of alternative input systems: keyboard swiping, predictive text, voice recognition, maybe even finger gestures in the air. At the core of all of these are probabilistic input systems that try to guess what you mean by making assumptions about you.

For text input, at least, the system starts with a pre-trained model of which words I most likely want to write. I can then train it further by adding new words and by using certain words more or less often. But the systems aren't integrated very well: words I type more often aren't any more likely to be chosen by voice recognition, even though that would be a valuable signal. More generally, I think there's the inkling of an idea here that could be really great if it were developed: having a model of the user using your system.
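
To make that concrete, here's a minimal sketch of what one shared word model might look like: a single store that every input method both feeds and consults. The names and the simple frequency-counting approach are just illustrative; real keyboards and speech recognisers obviously do something far more sophisticated.

```python
from collections import Counter


class SharedWordModel:
    """A toy user model of word frequencies, shared across input methods."""

    def __init__(self):
        self.counts = Counter()

    def observe(self, word):
        # Every word the user actually commits, from any input method,
        # updates the same shared model.
        self.counts[word.lower()] += 1

    def rank(self, candidates):
        # Given ambiguous candidates (from swipe, prediction, or speech),
        # prefer words the user has used before.
        return sorted(candidates, key=lambda w: self.counts[w.lower()], reverse=True)


model = SharedWordModel()
for word in "the quick brown fox jumps over the lazy fox".split():
    model.observe(word)

# Voice recognition hears something ambiguous; typing history breaks the tie.
print(model.rank(["fax", "fox", "fix"]))  # -> ['fox', 'fax', 'fix']
```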

A proper user model could go beyond just learning which words you use most and actually change the way your computer works to better suit you. For example, a model of my reaction time could tell that I didn't mean to click the button that appeared under my mouse 100 milliseconds ago. A model of my listening habits could tell that I only play music or video games, never both at the same time, and that I prefer different volume levels for different audio sources. A model of my waking hours could adjust my screen temperature, notification preferences and music preferences all at once.
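
As a toy illustration of the reaction-time example, something like the sketch below could sit between the UI and its click handlers. The 200 ms threshold and the class names are made up for the example; a proper user model would presumably learn the threshold from how I actually behave.

```python
import time

# Hypothetical threshold: a click that lands faster than human reaction time
# after a button appears was almost certainly aimed at whatever was there before.
REACTION_TIME_S = 0.2


class GuardedButton:
    def __init__(self, label):
        self.label = label
        self.appeared_at = time.monotonic()  # when the button was shown

    def on_click(self):
        elapsed = time.monotonic() - self.appeared_at
        if elapsed < REACTION_TIME_S:
            # Too fast to be deliberate: swallow the click instead of
            # firing an action the user never intended.
            print(f"Ignoring accidental click on '{self.label}' ({elapsed * 1000:.0f} ms)")
        else:
            print(f"Clicked '{self.label}'")


button = GuardedButton("Install update now")
button.on_click()  # clicked the instant it appeared, so it's ignored
```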

Obviously, anything that users interact with has some kind of user model if it stores any user information or changes its behaviour according to preferences or feedback. However, I think that making it explicit – and, more importantly, centrally managed – would be an amazing improvement in the way we interact with computers.

At least until the ad companies get ahold of it.