
Is that really you? More companies are turning to voice biometrics for security purposes

No amount of security is too much when it comes to our bank accounts, and now our own voices may serve as an added layer of protection. Technology known as voice biometrics seems to be the next big thing in keeping accounts safe and sound, especially with the alarming rise in call center fraud. In this latest form of trickery, criminals take advantage of human error and human emotion: they dial into a customer service line, describe a fictional situation that garners the representative’s sympathy, and subsequently gain access to sensitive data and, of course, money. $10 billion worth last year, in fact.

“[Banks] closed and locked the door online, but they left the window open with the call centers,” Vijay Balasubramaniyan, CEO of fraud detection company Pindrop Security, told CNN. And despite advances made with physical credit cards, like the Chip and PIN system, one step forward in the security realm sometimes means two steps back, as resourceful criminals find new vulnerabilities to exploit.


According to CNN, Erica Thomson, a sales specialist at data security company Nice Systems, has described fraudsters who call a bank more than 20 times a day, each time pretending to be someone else and claiming to be in a foreign country with a lost credit card, or in some other compromising situation in which a quick fix is desperately needed. And sometimes, in a rush to provide good customer service, representatives fall victim to these lies.

This makes technologies like voice biometrics all the more important — as CNN explains, companies like Nice Systems “can record call-in center conversations, verify the caller, and then convert the voice into a voiceprint to serve as a comparison the next time that person (or someone claiming to be that person) calls.” So when you’re told that your call is being recorded, sometimes it’s not just for quality assurance purposes — it’s for your own security.
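To make that process concrete, below is a minimal sketch of how a voiceprint system along those lines could enroll a verified caller and check later callers against the stored print. It is only an illustration: the `embed_voice` stub stands in for a real trained speaker-embedding model, and the 0.75 match threshold is an assumed value, not a figure from Nice Systems or Pindrop.

```python
# Hypothetical voiceprint enrollment and verification sketch.
# embed_voice() is a placeholder for a trained speaker-embedding model;
# real systems map audio to a learned vector, not a hash-seeded random
# one as in this stand-in.
import numpy as np

EMBEDDING_DIM = 256
MATCH_THRESHOLD = 0.75  # assumed; tuned per deployment in practice


def embed_voice(audio_samples: np.ndarray) -> np.ndarray:
    """Stand-in embedding: deterministic per audio buffer, unit length."""
    seed = abs(hash(audio_samples.tobytes())) % (2**32)
    vec = np.random.default_rng(seed).standard_normal(EMBEDDING_DIM)
    return vec / np.linalg.norm(vec)


def enroll(verified_call_audio: np.ndarray) -> np.ndarray:
    """Create a voiceprint from a recording of a verified caller."""
    return embed_voice(verified_call_audio)


def verify(stored_voiceprint: np.ndarray, new_call_audio: np.ndarray) -> bool:
    """Compare a new caller's voice against the stored voiceprint."""
    candidate = embed_voice(new_call_audio)
    similarity = float(np.dot(stored_voiceprint, candidate))  # cosine: both unit-norm
    return similarity >= MATCH_THRESHOLD


# Example: enroll from one call, then verify a later call.
first_call = np.random.default_rng(0).standard_normal(16000)  # stand-in for 1s of audio
voiceprint = enroll(first_call)
print(verify(voiceprint, first_call))  # True: identical audio matches its own print
```

In a real deployment the interesting work is in the embedding model and in choosing the threshold, which trades false accepts (fraudsters getting through) against false rejects (legitimate customers being challenged again).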

But beyond preventing fraud, banks are also turning to voice biometrics to allow larger transactions to take place via mobile devices. Because most people are still uncomfortable making large deposits anywhere other than at a physical bank, corporations like Wells Fargo are looking into mobile apps that make use of some serious James Bond-esque security measures.

Tarun Wadhwa of Forbes had the chance to test out Wells Fargo’s “experimental, biometric-based commercial banking app,” and recalled announcing, “My voice gives me access to proceed, please verify me,” whereupon the phone “scanned [his] face to see if [his] lips were moving.” Then, Wadhwa had to read a series of numbers out loud, and once the app verified a voice match, it finally unlocked itself.
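As a rough illustration of the layered check Wadhwa describes, the hypothetical sketch below combines a lip-movement liveness test, a spoken-digit challenge, and a voiceprint score; the data structure, function names, and 0.8 threshold are assumptions for illustration, not Wells Fargo’s actual app logic.

```python
# Hypothetical multi-factor unlock flow: liveness + challenge phrase + voice match.
from dataclasses import dataclass


@dataclass
class ChallengeResult:
    lips_moving: bool        # did the front camera see the lips move while speaking?
    digits_heard: str        # digits transcribed from the user's speech
    voice_similarity: float  # speaker-verification score in [0.0, 1.0]


def unlock_app(expected_digits: str, result: ChallengeResult,
               similarity_threshold: float = 0.8) -> bool:
    """Unlock only if every factor passes; any single failure blocks access."""
    if not result.lips_moving:                   # liveness: reject audio played at the phone
        return False
    if result.digits_heard != expected_digits:   # the challenge digits must match
        return False
    return result.voice_similarity >= similarity_threshold


# Example session: lips moved, digits matched, and the voice model scored 0.91.
session = ChallengeResult(lips_moving=True, digits_heard="4 8 1 5", voice_similarity=0.91)
print(unlock_app("4 8 1 5", session))  # True
```

Requiring all three factors at once is what makes the flow feel so James Bond-esque: a recording alone fails the lip check, and a live impostor still has to beat the voice match.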

Thanks to the improved technological capabilities of our smartphones, facial recognition and voice biometric methods are actually becoming more and more feasible for widespread use, and Wadhwa notes that “Visa, Mastercard, and American Express all have their own biometric initiatives underway.”

But of course, given that Siri sometimes still has trouble understanding us, don’t expect to do a whole lot of voice verification anytime soon.

Lulu Chang
BYD’s cheap EVs might remain out of Canada too

With Chinese-made electric vehicles facing stiff tariffs in both Europe and America, a pressing question has begun to arise for EV drivers: Can the race to make EVs more affordable continue if the world’s affordability leader is kept out of it?

China’s BYD, recognized as a global leader in terms of affordability, had to backtrack on plans to reach the U.S. market after the Biden administration in May imposed 100% tariffs on EVs made in China.

Tesla posts exaggerate self-driving capacity, safety regulators say

The National Highway Traffic Safety Administration (NHTSA) is concerned that Tesla’s use of social media and its website makes false promises about the automaker’s Full Self-Driving (FSD) software.
The warning dates back to May, but was made public in an email to Tesla released on November 8.
The NHTSA opened an investigation in October into 2.4 million Tesla vehicles equipped with the FSD software, following three reported collisions and a fatal crash. The investigation centers on FSD’s ability to perform in “relatively common” reduced visibility conditions, such as sun glare, fog, and airborne dust.
In these instances, it appears that “the driver may not be aware that he or she is responsible” to make appropriate operational selections, or “fully understand” the nuances of the system, NHTSA said.
Meanwhile, “Tesla’s X (Twitter) account has reposted or endorsed postings that exhibit disengaged driver behavior,” Gregory Magno, the NHTSA’s vehicle defects chief investigator, wrote to Tesla in an email.
The postings, which included reposted YouTube videos, may encourage viewers to see FSD-supervised as a “Robotaxi” instead of a partially automated, driver-assist system that requires “persistent attention and intermittent intervention by the driver,” Magno said.
In one of a number of Tesla posts on X, the social media platform owned by Tesla CEO Elon Musk, a driver was seen using FSD to reach a hospital while suffering a heart attack. In another post, a driver said he had used FSD for a 50-minute ride home. Meanwhile, third-party comments on the posts promoted the advantages of using FSD while under the influence of alcohol or when tired, NHTSA said.
Tesla’s official website also promotes conflicting messaging on the capabilities of the FSD software, the regulator said.
NHTSA has requested that Tesla revisit its communications to ensure its messaging remains consistent with FSD’s approved instructions, namely that the software is only a driver assist/support system that requires drivers to remain vigilant and ready to intervene at all times.
Tesla last month unveiled the Cybercab, an autonomous-driving EV with no steering wheel or pedals. The vehicle has been promoted as a robotaxi, a self-driving vehicle operated as part of a ride-hailing service, such as the one already offered by Alphabet-owned Waymo.
But Tesla’s self-driving technology has remained under the scrutiny of regulators. FSD relies on multiple onboard cameras to feed machine-learning models that, in turn, help the car make decisions based on what it sees.
Meanwhile, Waymo’s technology relies on premapped roads, sensors, cameras, radar, and lidar (a laser-based ranging sensor), which can be very costly but has met the approval of safety regulators.

Waymo, Nexar present AI-based study to protect ‘vulnerable’ road users

Robotaxi operator Waymo says its partnership with Nexar, a machine-learning tech firm dedicated to improving road safety, has yielded the largest dataset of its kind in the U.S., which will help inform the driving of its own automated vehicles.

As part of its latest research with Nexar, Waymo has reconstructed hundreds of crashes involving what it calls ‘vulnerable road users’ (VRUs), such as pedestrians walking through crosswalks, bicyclists on city streets, or high-speed motorcycle riders on highways.
