
Zoom’s A.I. tech to detect emotion during calls upsets critics

Zoom has reportedly begun developing A.I. technology that can scan users' faces and speech to determine their emotions, according to a report first published by Protocol.

While the technology appears to still be in its early phases of development, several human rights groups warn that it could be used for discriminatory purposes down the line, and they are urging Zoom to abandon the practice.


So far, Zoom has detailed plans to use the A.I. technology for sales and training. In a blog post shared last month, Zoom explained how its "Zoom IQ" concept is meant to help salespeople gauge the emotions of people on a call so they can improve their pitches.


The blog notes that Zoom IQ tracks such metrics as talk-listen ratio, talking speed, monologue, patience, engaging questions, next steps, step up, and sentiment and engagement.

Zoom also noted on its blog that the data it collects is “for informational purposes and may contain inaccuracies.”

“Results are not intended to be used for employment decisions or other comparable decisions. All recommended ranges for metrics are based on publicly available research,” the company added.

Nevertheless, more than 25 rights groups sent a joint letter to Zoom CEO Eric Yuan on Wednesday, urging the company to halt further research into emotion-detecting artificial intelligence, which they say could have harmful consequences for already disadvantaged groups. The signatories include Access Now, the American Civil Liberties Union (ACLU), and the Muslim Justice League.

Esha Bhandari, deputy director of the ACLU's Speech, Privacy, and Technology Project, told the Thomson Reuters Foundation that emotion A.I. is "a junk science" and "creepy technology."

Beyond its initial caveat in the April blog post, Zoom has yet to respond to the criticism, which began last week.

We’ve recently seen brands such as DuckDuckGo stand up against Google in the name of privacy. After claiming to get rid of invasive cookies on web browsers, Google has essentially replaced them with technology that can similarly track and collect user data.

Fionna Agomuoh
Fionna Agomuoh is a Computing Writer at Digital Trends. She covers a range of topics in the computing space, including…