Zoom’s A.I. tech to detect emotion during calls upsets critics

Zoom has begun developing A.I. technology that can reportedly scan users’ faces and speech to determine their emotions, as first reported by Protocol.

While the technology appears to still be in its early phases of development and implementation, several human rights groups warn that it could be used for discriminatory purposes down the line, and they are urging Zoom to abandon the practice.


Currently, Zoom has detailed plans for using the A.I. technology in sales and training. In a blog post shared last month, Zoom explained how its ‘Zoom IQ’ concept helps salespeople gauge the emotions of the people they are on a call with in order to improve their pitches.

The blog notes that Zoom IQ tracks such metrics as talk-listen ratio, talking speed, monologue, patience, engaging questions, next steps, step up, and sentiment and engagement.

Zoom also noted on its blog that the data it collects is “for informational purposes and may contain inaccuracies.”

“Results are not intended to be used for employment decisions or other comparable decisions. All recommended ranges for metrics are based on publicly available research,” the company added.

Nevertheless, more than 25 rights groups sent a joint letter to Zoom CEO Eric Yuan on Wednesday, urging the company to halt any further research into emotion-based artificial intelligence, which they say could have discriminatory consequences for disadvantaged groups. Signatories include Access Now, the American Civil Liberties Union (ACLU), and the Muslim Justice League.

Esha Bhandari, deputy director of the ACLU’s Speech, Privacy, and Technology Project, told the Thomson Reuters Foundation that emotion A.I. is “a junk science” and “creepy technology.”

Beyond the caveats in its April blog post, Zoom has yet to respond to the criticism, which began as early as last week.

We’ve recently seen brands such as DuckDuckGo stand up against Google in the name of privacy. After claiming to get rid of invasive cookies in its web browser, Google has essentially replaced them with technology that can similarly track and collect user data.

Fionna Agomuoh
Fionna Agomuoh is a Computing Writer at Digital Trends. She covers a range of topics in the computing space, including…