Before augmented reality becomes the next big thing, here’s what needs to happen

Everyone seems to be talking about augmented reality (AR). Last summer, the technology produced its first real success story in the form of Pokémon Go, which enthralled millions across the globe. Yet that may prove just a sideshow before the main event.

We’ve already seen how Apple’s ARKit can empower developers to produce wildly creative AR experiences – but rumors persist that Apple’s endgame involves an accessory not unlike Google Glass. Alongside Microsoft’s HoloLens, these head-mounted AR devices demand new ideas about how people use technology.

Done well, 3D gesture-tracking in AR could be as big a leap forward as the first generation of touchscreen interfaces ushered in by the original iPhone. But there’s plenty of work to be done first.

The Next Generation

“We firmly believe that virtual reality and AR is the next form of the computer, the next generation of smart devices,” Dr. Yue Fei, the co-founder and chief technology officer of Bay Area human-computer interaction specialists uSens, told Digital Trends. He and his team think AR’s world-changing potential exceeds even that of VR. Why? Because it’s not as immersive.

uSens 3D hand-tracking demonstration

AR is a filter through which we see the world. It lets you see your real surroundings and your device’s output as one, which can have powerful effects on the way we use our smartphones in any given environment. AR allows your computer to draw on information about what’s around you in real time, without being tied to a display or similar physical interface, and that potential is unique.

Dr. Fei blitzed through examples of how AR could benefit people in the workplace. These aren’t scenarios where tech is being used for the sake of it, but situations where augmented technology could streamline and improve commonplace tasks.

Construction workers might catalogue tools on the job site, as we saw in a recent Microsoft presentation at the company’s Build conference. Maintenance workers could have access to a contextual manual that gives them instructions tailored to the situation they’re working on, rather than a generic how-to. A doctor performing surgery could have access to information without ever having to take their eyes off the patient, thanks to a heads-up display.

Hands Free

Smartphone-based AR asks you to hold the device with one hand and use the other to interact with its touchscreen. That puts limitations on what the software can do.

Historically, we’ve seen that people don’t like wearing things on their heads, especially in public – the well-documented failure of Google Glass indicated to many that the public wasn’t ready for this form of hardware. However, a pair of glasses outfitted with an AR display does free up both hands and allow more direct interaction. Glass was too far ahead of its time, but its core concept was sound.

Right now, the best comparison is VR, which typically uses purpose-built controllers to allow people to interact with their surroundings. This solution works fine in most situations, but it does have its limitations.

The well-documented failure of Google Glass indicated to many that the public wasn’t ready.

“Although they use controllers, the design of the game itself is already trying to mimic the real hand in the real world,” explained Dr. Fei, citing Job Simulator and I Expect You to Die as two prime examples. “In their brain, the user wants to do complicated actions, like grab a cup. But on the controller, you need to use the index finger trigger to simulate the action, and that breaks down the immersion.”

Even if you’re completely invested in a VR experience, it’s difficult to accept that pulling a trigger on a plastic controller is the same as picking up an object. Dr. Fei and his team at uSens are working on hand-tracking technology that should allow users to interact with virtual objects directly.
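To make that idea concrete, here is a minimal sketch of how a hand tracker’s output might translate into a “grab.” It assumes a tracker that reports fingertip positions in 3D; the joint names and the two-centimeter threshold are illustrative guesses for this example, not uSens’ actual method.

```swift
// Illustrative sketch only: uSens has not published its gesture pipeline, so the
// joint names and the 2 cm threshold here are assumptions for demonstration.
struct TrackedHand {
    var thumbTip: SIMD3<Float>   // fingertip positions in meters, in the tracker's space
    var indexTip: SIMD3<Float>
}

// Returns true when the thumb and index fingertips are close enough to count as a grab.
func isGrabbing(_ hand: TrackedHand, threshold: Float = 0.02) -> Bool {
    let delta = hand.thumbTip - hand.indexTip
    let distance = (delta * delta).sum().squareRoot()
    return distance < threshold
}

// Example: fingertips roughly 1 cm apart register as a grab.
let hand = TrackedHand(thumbTip: SIMD3<Float>(0.10, 0.00, -0.30),
                       indexTip: SIMD3<Float>(0.10, 0.01, -0.30))
print(isGrabbing(hand))  // true
```

A production system would track every joint, filter out jitter, and debounce the gesture, but even this toy check shows why no trigger pull is needed.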

Given that VR hasn’t taken off as quickly as many would have hoped, there are concerns that AR could suffer the same fate. However, gesture-based control could offer a solution to this problem. In many ways, it’s the heir apparent to the touchscreen.

Shock of the New

It’s often difficult to get the public on board with new technology, but revolutionary technology seems to be the exception. A new idea, if it’s intuitive, can go from almost unheard of to commonplace in well under a decade.

Touchscreens were not the norm ten years ago. They weren’t unheard of, but they were nowhere near as ubiquitous as they are today. Now, even the youngest and oldest users are perfectly happy to pick up a smartphone or tablet and interact with it without being shown what to do. The team at uSens believes that gesture-based AR controls are similarly approachable.

“When the iPhone came out, the very first one, people could interact with games in a way they never had done before, because you’re just using your hands, and it’s very natural to people,” Will McCormick, the company’s marketing manager, told Digital Trends. “[With] traditional gaming, even when you’re playing Snake on a Nokia 3310, not everyone can do that. Because it’s still a game where you have to interact with a keyboard [or controller].”

Whether a game’s control scheme is straightforward or not, the very presence of a traditional video game controller can sometimes be off-putting to those who aren’t familiar with the hardware. You only need to look back to the bestselling Nintendo Wii for evidence that more natural methods of control can appeal to huge audiences.

Wii Sports bowling

McCormick told us of an AR tech demo based around painting, which uSens often shows visitors to its offices. “If you give someone a controller, if they’ve never used it before, there’s still some learning, they can be a bit intimidated, they’re not quite sure what to do,” he said. “But if you can just use your hands, and use your gestures to paint, everyone can do that. We’ve seen people from 18 to 80 years old in here try that out.”

Accessibility is going to be a crucial factor for AR. There’s a sense that once people try out the technology for themselves, its appeal will be obvious. Dropping the need for a controller accessory in favor of gesture-based control removes a barrier.

Back to Reality

Apple has made strides forward with its AR project this year, and the upcoming iPhone hardware refresh is rumored to introduce new sensors to benefit the technology. Microsoft, meanwhile, has been working on HoloLens for years, and its headsets look more impressive every time they’re shown off.

Even so, there are still major challenges at hand. “Right now, the HoloLens already has gesture recognition, but it’s pretty primitive,” observed Dr. Fei, noting that the hardware struggles to produce a steady, accurate 3D position of a moving hand. “We have talked to a number of developers that feel that this is one of the biggest limitations of HoloLens, besides the field of view.”
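What does “steady” mean in practice? One common way to tame jittery tracking data is to smooth each new sample against a running estimate. The exponential moving average below is a generic, illustrative sketch of that idea, not HoloLens’ or uSens’ actual filter.

```swift
// A generic smoothing sketch (exponential moving average), shown only to illustrate
// what "steadying" noisy 3D tracking data involves; it is not any headset's real filter.
struct PositionSmoother {
    private var estimate: SIMD3<Float>? = nil
    let alpha: Float   // 0...1; lower values smooth more but lag behind the hand

    init(alpha: Float = 0.3) {
        self.alpha = alpha
    }

    // Blend each raw sample with the running estimate and return the steadier position.
    mutating func update(with raw: SIMD3<Float>) -> SIMD3<Float> {
        guard let previous = estimate else {
            estimate = raw            // first sample: nothing to blend with yet
            return raw
        }
        let next = previous + alpha * (raw - previous)
        estimate = next
        return next
    }
}

// Example: three jittery samples of a hovering fingertip, smoothed in sequence.
var smoother = PositionSmoother()
for sample in [SIMD3<Float>(0.100, 0.200, -0.400),
               SIMD3<Float>(0.104, 0.197, -0.402),
               SIMD3<Float>(0.098, 0.203, -0.399)] {
    print(smoother.update(with: sample))
}
```

The trade-off is lag: the heavier the smoothing, the further the virtual hand trails behind the real one, which is why getting this right is harder than it looks.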

Dr. Fei also offered up some criticism of ARKit, which is set to appear in iOS 11. “Personally, I feel that the API is easy to use, and it’s good, but it’s only available for the iOS platform, and it’s only available for one language, Swift,” he said.

AR is in the same spot, struggling to find its ‘swipe to unlock’ moment.

The hand- and head-tracking technology being developed by uSens is highly accurate and designed to be platform-agnostic. It’s a more specialized solution than Apple’s or Microsoft’s, thanks to the company’s ability to focus on one element of AR infrastructure rather than building a platform and hardware side by side.

However, uSens is like Apple and Microsoft in that it’s providing tools for the development of AR experiences, and this is uncharted territory. Functional AR with gesture-based control requires a new design language.

Consider the smartphone’s ‘swipe to unlock.’ Though now common in all manner of touchscreens, it’s a recent invention and, before it, numerous methods were used to do the same thing. AR is in the same spot, struggling to find its ‘swipe to unlock’ moment. “Right now, there is not a unified language – everyone is exploring what the GUI [graphical user interface] should be,” said Dr. Fei. “It’s trial and error, trying to make a better GUI, and it’s ongoing.”

Great GUI design will make AR more user-friendly, and it can even cover up some of the limitations of current hardware. Dr. Fei mentioned that the HoloLens suffers from a limited field of view (FOV), but that can be mitigated by a smartly composed GUI.

For instance, one of the biggest problems with a narrow FOV is that virtual objects can seem to disappear as you move your head across the scene, and that can be very jarring. However, by providing appropriate on-screen feedback, the software can keep you apprised of where an object is in relation to you, even if it’s not actually on-screen.
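As a rough illustration of that kind of feedback, the sketch below checks whether a virtual object falls outside a headset’s horizontal field of view and, if so, works out which edge of the display an “it’s over here” arrow should sit on. The camera-space convention (x to the right, negative z straight ahead) and the 30-degree FOV are assumptions made for this example, not figures from HoloLens or uSens.

```swift
import Foundation

// Assumed camera-space convention: x to the right, negative z straight ahead.
// The 30-degree horizontal FOV is a placeholder value for illustration.
let horizontalFOVDegrees = 30.0

// Returns nil when the object is within the field of view, or the angle in degrees
// (positive = to the user's right) they would need to turn to bring it on-screen.
func offscreenHintAngle(objectInCameraSpace p: SIMD3<Float>) -> Double? {
    let yawDegrees = atan2(Double(p.x), Double(-p.z)) * 180.0 / Double.pi
    let halfFOV = horizontalFOVDegrees / 2.0
    return abs(yawDegrees) <= halfFOV ? nil : yawDegrees
}

// Example: an object one meter ahead but a meter off to the right sits at 45 degrees,
// well outside a 30-degree FOV, so the GUI should show a hint at the right edge.
if let angle = offscreenHintAngle(objectInCameraSpace: SIMD3<Float>(1.0, 0.0, -1.0)) {
    print("Show an arrow at the \(angle > 0 ? "right" : "left") edge (\(angle) degrees away)")
} else {
    print("Object is on-screen; no hint needed")
}
```

In a shipping headset the same calculation would drive an edge-of-display arrow or a spatial audio cue rather than a print statement, but the principle is the same: the GUI papers over what the hardware cannot yet show.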

AR technology has come a long way in a relatively short time, and now it’s almost ready for the masses. The next stage of its development will be the difficult one: turning capable hardware and promising APIs into features that people genuinely want to use.

Still, there’s plenty of reason to be excited about the possibilities. If AR supplemented by gesture controls is truly as game-changing as touch interfaces turned out to be a decade ago, we’re on the precipice of the biggest addition to consumer technology since the advent of the smartphone.

Brad Jones
Former Digital Trends Contributor
Brad is an English-born writer currently splitting his time between Edinburgh and Pennsylvania. You can find him on Twitter…