
AI assistants will soon recognize and respond to the emotion in your voice

Konstantynov/123RF
You know how people say it’s not what you say, but how you say it that matters? Very soon, that principle could be built into smart assistants such as Amazon’s Alexa or Apple’s Siri. At least, it could if those companies decide to adopt new technology developed by emotion-tracking artificial intelligence company Affectiva.

Affectiva’s work has previously focused on identifying emotion in images by observing how a person’s face changes when they express particular sentiments. Its latest technology builds on that premise with a cloud-based application programming interface (API) that can detect emotion in speech. Built using deep learning, the system observes changes in tone, volume, speed, and voice quality, and uses those cues to recognize states such as anger, laughter, and arousal in recorded speech.
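Affectiva hasn’t detailed its API here, but to give a rough sense of how a cloud speech-emotion service like this is typically consumed, here is a minimal sketch. The endpoint URL, credential, and response fields are placeholders invented for illustration; they are not Affectiva’s actual interface.

```python
# Hypothetical sketch of calling a cloud speech-emotion API.
# The URL, headers, and response fields are invented for illustration
# and are NOT Affectiva's real interface.
import requests

API_URL = "https://api.example.com/v1/speech/emotion"  # placeholder endpoint
API_KEY = "your-api-key"                                # placeholder credential

def analyze_clip(path: str) -> dict:
    """Upload a recorded speech clip and return emotion estimates."""
    with open(path, "rb") as audio:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": audio},
        )
    response.raise_for_status()
    # Assumed response shape: per-emotion confidence scores derived from
    # the paralinguistic cues (tone, volume, speed) the article mentions.
    return response.json()

if __name__ == "__main__":
    result = analyze_clip("driver_sample.wav")
    print(result)  # e.g. {"anger": 0.82, "laughter": 0.03, ...}
```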


“The addition of Emotion AI for speech builds on Affectiva’s existing emotion recognition technology for facial expressions, making us the first AI company to allow for a person’s emotions to be measured across face and speech,” Rana el Kaliouby, co-founder and CEO of Affectiva, told Digital Trends. “This is all part of a larger vision that we have. People sense and express emotion in many different ways: Through facial expressions, voice, and gestures. We’ve set out to develop multi-modal Emotion AI that can detect emotion the way humans do from multiple communication channels. The launch of Emotion AI for speech takes us one step closer.”


Affectiva developed its speech emotion recognition system by collecting naturalistic speech data from a variety of sources, including commercially available databases. This data was then labeled by human experts for the occurrence of what the company calls “emotion events.” These human-generated labels were used to train and validate the team’s deep learning models, so that over time the models learned how particular shifts in a person’s voice can indicate a given emotion.
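To make that pipeline concrete, here is an illustrative sketch of the general approach: extract crude acoustic proxies for tone, volume, and speaking rate from labeled clips, then fit a classifier against human-provided emotion labels and validate on held-out data. Affectiva uses deep learning on a large annotated corpus; the simple classifier and placeholder file names below are stand-ins to show the idea, not the company’s actual method.

```python
# Illustrative sketch of a supervised speech-emotion pipeline:
# acoustic features from labeled clips -> model trained on human labels.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def acoustic_features(path: str) -> np.ndarray:
    """Summarize a clip with crude proxies for tone, volume, and speed."""
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=65, fmax=300, sr=sr)     # pitch contour ("tone")
    rms = librosa.feature.rms(y=y)[0]                  # loudness ("volume")
    onsets = librosa.onset.onset_detect(y=y, sr=sr)    # rough speaking-rate proxy
    rate = len(onsets) / (len(y) / sr)
    return np.array([np.nanmean(f0), np.nanstd(f0), rms.mean(), rms.std(), rate])

# In practice these would come from a large human-annotated corpus;
# the file names and labels here are placeholders.
clips = ["clip_001.wav", "clip_002.wav"]
labels = ["anger", "neutral"]

X = np.stack([acoustic_features(c) for c in clips])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.5)

model = RandomForestClassifier().fit(X_train, y_train)
print(model.score(X_test, y_test))  # held-out validation, as the article describes
```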


It’s impressive from a technology perspective but, like the best technology, it also has practical potential for users. One application could be a car navigation system that hears a driver starting to experience road rage and intervenes before they make a rash driving decision. It could similarly allow automated assistants to change their approach when they hear anger or frustration from a user, or to learn which kinds of responses elicit the best reactions and repeat those strategies, as in the sketch below.
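As a rough illustration of that last idea, an assistant that receives per-utterance emotion scores could branch its reply style on them. The thresholds and canned responses below are hypothetical, not a description of any shipping product.

```python
# Hypothetical sketch: adapt an assistant's reply based on detected emotion.
def choose_response(intent: str, emotions: dict) -> str:
    """Pick a reply style based on the estimated emotional state."""
    if emotions.get("anger", 0.0) > 0.7:
        # De-escalate: shorter, calmer phrasing.
        return "I'm sorry this is frustrating. Let me fix that right away."
    if emotions.get("laughter", 0.0) > 0.5:
        # Match the lighter mood.
        return "Glad that worked out! Anything else I can do?"
    return f"Sure, handling your request: {intent}."

print(choose_response("reroute navigation", {"anger": 0.82}))
```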

Luke Dormehl
Former Digital Trends Contributor