Fifty years ago, pioneering computer scientist Doug Engelbart showed off a series of breathtaking new technologies in one astonishing keynote that’s referred to as “The Mother of All Demos.” Demonstrating the computer mouse, the graphical user interface, hypertext, video conferencing and more, it was the equivalent of a modern Apple event unveiling the Macintosh, the iPhone, the iPad and the iPod all at the same time.
Half a century after Engelbart’s demo, we’re still relying on a lot of the computer interactions he helped to pioneer. But the means by which we interact with computers are changing, slowly but surely. So put down your mouse and keyboard, because here are seven of the ways we’ll interact with machines in the decades to come:
Voice control
We’ll start with an obvious one. Just a few years ago, voice control was incredibly limited. While it was decent enough for transcribing text, and useful as an accessibility tool for people with impaired vision, few folks were going to voluntarily give up their mouse to speak to their computer instead.
Today, the sci-fi dream of talking to our computers has finally come true. Aided by breakthroughs in artificial intelligence, smart speakers like Google Home and Amazon Echo not only understand what we are saying, but can make sense of it, too. Voice controls can greatly speed up our interactions with computers, and they free us from needing to be right in front of a machine to use it.
The technology also lowers the barrier to entry: asking a machine to perform a task using everyday words is a whole lot simpler than asking people to grapple with the layouts and quirks of different operating systems and software.
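To make that concrete, here’s a minimal sketch of voice control in the browser using the Web Speech API (exposed as webkitSpeechRecognition in Chromium-based browsers); the spoken commands and the calendar URL are purely illustrative:

```ts
// Minimal sketch: browser speech recognition via the Web Speech API.
// Availability and accuracy vary by browser; command phrases and the
// calendar URL below are hypothetical, for illustration only.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognizer = new SpeechRecognitionImpl();
recognizer.lang = "en-US";
recognizer.continuous = true;       // keep listening across utterances
recognizer.interimResults = false;  // only deliver final transcripts

recognizer.onresult = (event: any) => {
  const transcript: string =
    event.results[event.results.length - 1][0].transcript.trim().toLowerCase();

  // Map everyday phrases to actions, no OS-specific know-how required.
  if (transcript.includes("open my calendar")) {
    window.open("https://calendar.example.com"); // hypothetical URL
  } else if (transcript.includes("what time is it")) {
    console.log(new Date().toLocaleTimeString());
  }
};

recognizer.start();
```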
Emotion sensing
It’s great if a machine can do what you ask of it. Even better is when a machine can predict what you want before you even have to ask. That’s where emotion tracking technology could help change things.
It’s less an interface in its own right than a way of improving existing ones: emotion sensing could assist you by pulling up relevant suggestions based on how you’re feeling at that precise moment.
Pinpointing the optimal time for you to do work based on your productivity levels? Analyzing your typing to gauge your mood and pull up the right apps accordingly? Emotion sensing could help with all of this.
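As a toy illustration of the typing idea, the hypothetical sketch below treats fast, erratic typing with frequent corrections as a rough proxy for stress; the thresholds are invented, and real affect models are far more involved:

```ts
// Hypothetical sketch: infer a rough "mood" signal from typing rhythm.
// Inter-key intervals and backspace rate are crude proxies for stress;
// the thresholds below are invented for illustration.
type Mood = "calm" | "stressed";

const intervals: number[] = [];
let backspaces = 0;
let keystrokes = 0;
let lastKeyTime = 0;

document.addEventListener("keydown", (event: KeyboardEvent) => {
  const now = performance.now();
  if (lastKeyTime > 0) intervals.push(now - lastKeyTime);
  lastKeyTime = now;
  keystrokes++;
  if (event.key === "Backspace") backspaces++;
});

function estimateMood(): Mood {
  if (intervals.length < 20) return "calm"; // not enough data yet
  const mean = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  const backspaceRate = backspaces / Math.max(keystrokes, 1);
  // Fast, erratic typing with lots of corrections => guess "stressed".
  return mean < 150 && backspaceRate > 0.15 ? "stressed" : "calm";
}

// An emotion-aware launcher might poll this and adjust its suggestions.
setInterval(() => console.log("estimated mood:", estimateMood()), 10_000);
```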
Gestural sensing
We already use gestures to control our devices, but there’s so much more that can be done in this area, such as machines that use image recognition to track hand and body motions even when we’re not physically in contact with a screen.
Devices like Microsoft’s Kinect have already brought this to gaming, but companies such as Apple have also explored it for (presumably) more serious, productivity-oriented applications.
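Under the hood, camera-based gesture control reduces to classifying motion from tracked body points. Here’s a hypothetical toy classifier that assumes some hand-tracking pipeline (not shown) supplies a short trail of normalized fingertip positions:

```ts
// Hypothetical sketch: classify a touch-free swipe from tracked fingertip
// positions. A camera-based hand tracker (not shown) is assumed to supply
// a trail of normalized (x, y) points, most recent last.
type Point = { x: number; y: number };
type Gesture = "swipe-left" | "swipe-right" | "none";

function classifySwipe(trail: Point[], minTravel = 0.3): Gesture {
  if (trail.length < 2) return "none";
  const dx = trail[trail.length - 1].x - trail[0].x;
  const dy = trail[trail.length - 1].y - trail[0].y;
  // Require mostly horizontal motion covering enough of the frame.
  if (Math.abs(dx) < minTravel || Math.abs(dx) < Math.abs(dy) * 2) return "none";
  return dx > 0 ? "swipe-right" : "swipe-left";
}

// Example: a right-to-left wave might dismiss a notification.
console.log(classifySwipe([
  { x: 0.8, y: 0.5 },
  { x: 0.5, y: 0.52 },
  { x: 0.2, y: 0.5 },
])); // -> "swipe-left"
```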
Aside from image recognition, embedded implants might be another way to let us interact with smart environments using little more than the wave of a hand. Don’t fancy getting a chip injected into your body? Then maybe consider technology like…
Touch surfaces everywhere
Remember rapper Trinidad James’ 2012 song “All Gold Everything”? Well, in the future, it seems that “all touch-sensitive everything” is going to be the name of the game.
Researchers at places like Carnegie Mellon have been working on ways to turn just about any surface you can think of — from desks to human limbs to entire walls of your home — into smart touch surfaces. Why limit your touch interactions to the tiny form factor of a smartwatch, or even a tablet computer, when virtually everything can be made smart with the right paint job?
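On the sensing side, Carnegie Mellon’s Electrick project, for instance, estimates touch position on surfaces coated with conductive paint by measuring electrical signals at electrodes around the edges. The deliberately simplified, hypothetical sketch below captures the gist with a signal-weighted centroid:

```ts
// Hypothetical, deliberately simplified sketch: estimate where a finger
// touched a painted surface from signal strengths at four corner
// electrodes, using a signal-weighted centroid. Real systems (e.g.
// electric field tomography) are far more sophisticated.
type Electrode = { x: number; y: number; signal: number };

function estimateTouch(electrodes: Electrode[]): { x: number; y: number } {
  const total = electrodes.reduce((sum, e) => sum + e.signal, 0);
  return {
    x: electrodes.reduce((sum, e) => sum + e.x * e.signal, 0) / total,
    y: electrodes.reduce((sum, e) => sum + e.y * e.signal, 0) / total,
  };
}

// A touch near the top-right corner yields the strongest reading there.
console.log(estimateTouch([
  { x: 0, y: 0, signal: 0.1 }, // bottom-left
  { x: 1, y: 0, signal: 0.3 }, // bottom-right
  { x: 0, y: 1, signal: 0.2 }, // top-left
  { x: 1, y: 1, signal: 0.9 }, // top-right
])); // -> roughly { x: 0.8, y: 0.73 }
```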
Particularly as the “smart home” comes of age, this tech will allow us to control our surroundings with assorted virtual buttons and the like. The result will be the most complete realization of the late computing visionary Mark Weiser’s observation that the most profound technologies are those that “weave themselves into the fabric of everyday life until they are indistinguishable from it.”
Pre-touch
In today’s busy world, who has time to actually touch a touchscreen? That’s right: nobody. Fortunately, smartphone makers everywhere — from Samsung to Apple — are actively investigating pre-touch sensing. (Samsung’s current Air Gesture tech is one early implementation.)
The idea is to track your fingers as they hover over a display, and then trigger interactions accordingly. Functionally, it could work a bit like Apple’s 3D Touch feature for the iPhone, with apps or files able to offer a sneak preview of what’s inside before you open them up. Except without the indignity of actually having to touch the display to do it.
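There’s no standard finger-hover API on the web yet, but today’s pointer events (which already report hover for mice and styluses) are enough to sketch the interaction model; showPreview and hidePreview are hypothetical helpers:

```ts
// Sketch of the pre-touch interaction model using standard pointer events,
// which already fire hover (pointerenter/pointerleave) for mice and styluses.
// True finger pre-touch would need hardware support; showPreview and
// hidePreview are hypothetical helpers.
const HOVER_DELAY_MS = 400;
let previewTimer: number | undefined;

function showPreview(target: HTMLElement): void {
  target.classList.add("previewing"); // e.g. expand a thumbnail of the file
}

function hidePreview(target: HTMLElement): void {
  target.classList.remove("previewing");
}

document.querySelectorAll<HTMLElement>(".file-icon").forEach((icon) => {
  icon.addEventListener("pointerenter", () => {
    // Only open a preview once the pointer has lingered, to avoid flicker.
    previewTimer = window.setTimeout(() => showPreview(icon), HOVER_DELAY_MS);
  });
  icon.addEventListener("pointerleave", () => {
    window.clearTimeout(previewTimer);
    hidePreview(icon);
  });
});
```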
Virtual and augmented reality
Virtual and augmented reality open up a whole new world of ways to interface with our devices. Want to surround yourself with infinite macOS screens for some bonkers multitasking? Fancy designing three-dimensional objects in the virtual world? Dream of being able to summon information about an object or device simply by looking at it? AR and VR will make all of this commonplace.
Add in a growing number of breakthrough haptic controllers that make the virtual experience even more lifelike, and this is one of the most exciting options on this list.
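On the web, at least, the plumbing for this already exists in the form of the WebXR Device API. A minimal sketch of starting an immersive VR session might look like this (the casts are needed because XR typings aren’t in the default DOM library, and per-frame rendering is omitted):

```ts
// Minimal sketch: starting an immersive VR session with the WebXR Device API.
// Casts to `any` because XR typings aren't in the default DOM lib; rendering
// each frame (via WebGL) is omitted for brevity.
async function enterVR(): Promise<void> {
  const xr = (navigator as any).xr;
  if (!xr || !(await xr.isSessionSupported("immersive-vr"))) {
    console.log("WebXR immersive VR not available on this device.");
    return;
  }

  const session = await xr.requestSession("immersive-vr");
  const refSpace = await session.requestReferenceSpace("local");

  session.requestAnimationFrame(function onFrame(time: number, frame: any) {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      // One view per eye; a real app would render a scene for each.
      console.log("views this frame:", pose.views.length);
    }
    session.requestAnimationFrame(onFrame);
  });
}

// Must be called from a user gesture, e.g. a click on an "Enter VR" button.
document.getElementById("enter-vr")?.addEventListener("click", enterVR);
```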
Brain interface
The ultimate computer interface would surely be one that requires nothing more of us than thinking about a task for it to be carried out immediately. Brain interfaces could perform certain tasks for us effortlessly, while also letting us tap into the devices around us to access enormous amounts of information.
Groups such as DARPA have investigated brain interfaces, while real-life Iron Man Elon Musk’s Neuralink aims to build consumer-facing neural implants that would, in effect, turn us all into cyborgs.