Sunday, April 26, 2015

Where is the User Interface going?

Image courtesy of Allan Ajifo -
Licensed under a CC BY 2.0 license.

Ah, that most exciting of things, the user interface.

User interfaces have come a long way - and in some ways, haven't.

Post World War 2, early computers were generally accessed by loading punched cards into the machine, which the computer would read, and then, over the course of hours (or even days), the responses would come out, generally via a printer. No screen, no direct user interface.

But after that came command-line interfaces, and with them, the keyboard.

And the keyboard has pretty much hung around ever since.  Supplemented with the mouse, it's been our mainstay for over forty years for getting information into the computer.  And as for output, since screens showed up in the 1970s, they've sat on our desks for a generation.

We've stuck with these peripherals ever since, and the reasons are:
a) they're pretty good, and;
b) they're what we're used to.

To go to a different system would require user retraining, which is difficult and unpopular - we all know that the QWERTY keyboard layout wasn't designed for speed, but to slow typewriter operators down (a necessity to stop the machines jamming). Imagine a computer you could wear on your wrist, which you can talk to and which responds to you the same way. This is certainly feasible for the leading technology companies, but, partially fuelled by our innate expectation that speech recognition always goes wrong, we still insist on a screen to look at and controls to press.

One of the reasons I suspect Google Glass is not currently available is because the ideal user interface isn't available yet.

And that ideal interface is thought.

You don't want to walk around talking to your own face every time you want to take a photo. You want to be able to think "take a picture" and have the technology kick into action.  No screens necessary - instead, an overlay of information is fed into the images carried via the optic nerve to the brain.

There's actually a lot more progress in this area than you might imagine.  Implants providing visual feedback to visually impaired people have been in use for a number of years now. Work is ongoing to allow people to control prostheses by thought.  And recent experiments have been undertaken to allow two people to "think" at each other - a direct brain-to-brain interface.

I can imagine a time, 30 years from now, when a newly qualified but inexperienced IT technician at his first job will be sent by his manager to clear out an old cupboard, and he'll discover a keyboard.
 "Hey - have you seen this? It's a keyboard!" he'll exclaim to one of his older colleagues, who'll take the opportunity to reminisce about the old days when you didn't just "know" the information you needed to undertake a task - you had to search the Internet for it by typing on a keyboard.

Of course, it'll be annoying having to "think-click" adverts to make them go away while you're walking around, and experts will tell us how healthy it is to restrict the time we spend connected, but it'll be a fascinating time.

In his spare time Mike blogs, makes videos, and intends to do lots of chores.  Check him out at