Elliptic Labs has been showing off a new application for its ultrasound-based gesture technology at MWC in Barcelona, and we caught up with the company to get a demo. The idea is that smart speakers with ultrasound virtual sensor technology inside can detect the presence of people and respond to a range of gestures.
Using a prototype consisting of a speaker with Amazon’s Alexa onboard and a Raspberry Pi, Elliptic Labs showed us how you can trigger Alexa with a double-tap palm gesture or cut it off mid-flow with a single palm tap. The gestures work from some distance away, letting you control your smart speaker without touching it or uttering a word.
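To give a rough sense of how a prototype like this might route gestures to actions, here’s a minimal Python sketch. Elliptic Labs hasn’t published its SDK, so the gesture labels and the handler below are our own illustrative stand-ins.

```python
# Illustrative sketch only: the gesture labels and routing here are our own
# stand-ins, not Elliptic Labs' actual (unpublished) SDK.

def handle_gesture(gesture: str) -> str:
    """Map a detected palm gesture to an assistant action."""
    actions = {
        "double_tap": "wake",  # double palm tap triggers the assistant
        "single_tap": "stop",  # single palm tap cuts it off mid-response
    }
    return actions.get(gesture, "ignore")

# Simulated stream of detected gestures, standing in for the ultrasound sensor.
for gesture in ["double_tap", "single_tap", "wave"]:
    print(gesture, "->", handle_gesture(gesture))
```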
If you’re unfamiliar with Elliptic Labs, we met up with the company a couple of years back when it first began to roll its ultrasound gestures out into phones. The hope was that ultrasound might replace proximity sensors in phones, and the technology was subsequently integrated into Xiaomi’s Mi Mix handsets, allowing the manufacturer to shrink the bezels right down. The ultrasound sensor can detect when your hand or face is near and turn the screen on or off accordingly. Specific gestures can also be used to scroll around, snap selfies, or even play games.
With more microphones, Elliptic Labs’ tech can detect more specific gestures or positioning. In a phone with two microphones, this might allow you to wave your hand to turn the volume up or down. Most smart speakers have several microphones now, so there’s a great deal of potential for more gesture controls, or even for triggering specific actions when someone enters or leaves a room.
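As a rough illustration of why extra microphones matter, here’s a small Python sketch using a standard cross-correlation time-difference-of-arrival estimate: with two mics, whichever one hears the echo first tells you which side of the device the hand is on. The signals and the five-sample offset are synthetic, and this is a textbook technique, not Elliptic Labs’ actual algorithm.

```python
import numpy as np

def lag_of_b_relative_to_a(sig_a: np.ndarray, sig_b: np.ndarray) -> int:
    """Estimate how many samples later an echo reaches mic B than mic A,
    using a standard cross-correlation time-difference-of-arrival estimate."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

# Synthetic test: the same echo pulse, reaching mic B 5 samples after mic A.
pulse = np.hanning(32)
sig_a = np.concatenate([np.zeros(100), pulse, np.zeros(100)])
sig_b = np.concatenate([np.zeros(105), pulse, np.zeros(95)])

lag = lag_of_b_relative_to_a(sig_a, sig_b)
print("lag (samples):", lag)                   # -> 5
print("hand side:", "A" if lag > 0 else "B")   # echo hit mic A first
```

A positive lag means the hand is nearer mic A; mapping that left/right decision onto volume up or down is then straightforward.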
Elliptic Labs sees ultrasound as free spectrum that’s not being exploited right now, and the company is very optimistic about the potential applications.
“Any space where there are humans is fair game,” Guenael Strutt, Elliptic Labs’ VP of Product Development, told Digital Trends. “The possibilities are infinite.”
In the second demonstration we saw at MWC, the smart speaker was hooked up to a light. Placing your hand on one side of the speaker and holding it there turned the light level up, while holding it at the other side dimmed the bulb. It’s easy to imagine the same gesture being used to tweak volume levels.
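A hold-to-adjust control like that could be as simple as ramping a value while the sensor keeps reporting a hand on one side. Here’s a toy Python version; read_hand_side() is a hypothetical stand-in for the real sensor feed, not anything from Elliptic Labs.

```python
from typing import Optional

brightness = 50  # percent

def read_hand_side(tick: int) -> Optional[str]:
    """Fake sensor feed: a hand held on the right for 5 ticks, then the left for 3."""
    if tick < 5:
        return "right"
    if tick < 8:
        return "left"
    return None

for tick in range(10):
    side = read_hand_side(tick)
    if side == "right":
        brightness = min(100, brightness + 10)  # hold right: brighten
    elif side == "left":
        brightness = max(0, brightness - 10)    # hold left: dim
    print(f"tick {tick}: hand={side}, brightness={brightness}%")
```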
We tested out both prototypes for ourselves and found them very easy and intuitive to use. The technology doesn’t require direct line of sight, because the sound can bounce off a wall, so even if your speaker is tucked behind a lamp or the arm of the couch, you can still use these gestures to control it. We think the stop gesture is the most potentially useful, because it can be tricky to use voice commands to stop Alexa when it starts speaking or plays the wrong song.
There’s no official support for ultrasound tech in smart speakers just yet, but Elliptic Labs has been talking to all the major players: Amazon, Google, and Apple. The company has also been working with chip manufacturers like Qualcomm, and with suppliers further up the smart speaker supply chain, to try to integrate the technology into the chipsets and components that go into smartphones and smart speakers.
Having tried it out, we expect more manufacturers to adopt the technology in the near future. Smart speakers may prove an easier sell than smartphones, though, unless Elliptic Labs can get its ultrasound technology into the chipsets that phone manufacturers buy.
One of the key challenges for smartphones is reducing the power draw of the ultrasound sensor and working out clever ways to determine when it should be listening. Advances in machine learning and processor speed could make an important difference here, and Elliptic Labs has been working to determine the optimal model for gesture detection.
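One common way to cut a sensor’s power draw, and perhaps the kind of approach Elliptic Labs is weighing, is duty-cycling: ping infrequently until an echo suggests someone is nearby, then switch to full-rate sensing. The rates and threshold in this Python sketch are illustrative guesses, not the company’s values.

```python
import random

LOW_RATE_HZ = 2       # occasional pings while the room looks empty
HIGH_RATE_HZ = 50     # full-rate sensing once presence is detected
PRESENCE_THRESHOLD = 0.5

def echo_strength() -> float:
    """Stand-in for a real echo measurement; returns a value in [0, 1]."""
    return random.random()

def choose_rate() -> int:
    """Pick the sensing rate based on the latest echo reading."""
    if echo_strength() > PRESENCE_THRESHOLD:
        return HIGH_RATE_HZ  # someone is close: listen for gestures
    return LOW_RATE_HZ       # no one nearby: stay in low-power mode

random.seed(0)
for _ in range(5):
    print("sensing at", choose_rate(), "Hz")
```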
We’re excited to see what these ultrasound pioneers come up with next.