
An A.I. cracks the internet’s squiggly letter bot test in 0.5 seconds

How do you prove you’re human when communicating on the internet? It’s a tough question, but for years the answer has been your ability to successfully read a string of distorted characters that are unrecognizable to a machine. Called CAPTCHAs (“Completely Automated Public Turing test to tell Computers and Humans Apart”), this security tool is used for everything from blocking automated spammers to stopping bots from creating fraudulent profiles on social media sites. And for the past 20-odd years, it’s worked — possibly until now, that is.

In a joint effort by researchers from the U.K.’s Lancaster University and China’s Northwest University and Peking University, computer scientists have developed an artificial intelligence capable of cracking text CAPTCHA systems in as little as 0.5 seconds. It was successfully tested on 33 different CAPTCHA schemes, 11 of which came from the world’s most popular websites, including eBay and Wikipedia.


“We think our research probably has pronounced a death sentence for text CAPTCHA,” Zheng Wang, associate professor in the School of Computing and Communications at Lancaster University, told Digital Trends.

The attack developed by the researchers is based on a deep neural network image classifier. Deep neural networks have demonstrated impressive performance in image recognition, but successful models typically require millions of manually labeled images to learn from. The novelty of this latest work is that it uses a generative adversarial network (GAN) to create that training data. Instead of collecting and labeling millions of CAPTCHA examples, the system requires as few as 500 real ones to learn from. It can then generate millions or even billions of synthetic training examples to build its image classifier. The result? Higher accuracy than any CAPTCHA recognizer seen to date.
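The pipeline described above — a small labeled seed set expanded into a large synthetic corpus that then trains a classifier — can be sketched in miniature. This toy substitutes random distortions for the researchers’ trained GAN generator and a nearest-neighbor rule for their deep network; all names, sizes, and numbers here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny "seed" set: one 8x8 template per character class,
# standing in for the ~500 labeled real CAPTCHAs.
seeds = {c: rng.random((8, 8)) for c in "ABC"}

def synthesize(template, n):
    """Stand-in for the GAN generator: emit n distorted copies
    of a seed image (pixel noise plus a random horizontal shift)."""
    out = []
    for _ in range(n):
        img = template + rng.normal(0, 0.1, template.shape)
        img = np.roll(img, shift=int(rng.integers(-1, 2)), axis=1)
        out.append(img)
    return out

# Expand the seed set into a much larger synthetic training corpus.
train_x, train_y = [], []
for label, tpl in seeds.items():
    for img in synthesize(tpl, 200):
        train_x.append(img.ravel())
        train_y.append(label)
train_x = np.stack(train_x)

def classify(img):
    """Nearest-neighbor classifier trained purely on synthetic data
    (the actual attack uses a deep network; this keeps the sketch tiny)."""
    dists = np.linalg.norm(train_x - img.ravel(), axis=1)
    return train_y[int(np.argmin(dists))]

# A freshly distorted "B" the classifier has never seen
# should still be recognized from synthetic training alone.
probe = synthesize(seeds["B"], 1)[0]
print(classify(probe))
```

The point of the sketch is the economics, not the model: labeling 3 seed images bought 600 training examples, which is the same leverage the GAN approach applies at scale.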

This approach would be useful for any image recognition task requiring masses of training data. CAPTCHAs, however, are somewhat unique in that they keep evolving. The early text-based CAPTCHAs (as seen in the thumbnail picture for this article) were the first iteration of the technology; by now you’re probably more used to something like the widely used traffic sign-based CAPTCHAs. This constant shifting (versus, say, learning to recognize a dog, which remains broadly the same over lifetimes) makes collecting training data a pain.

“[It] means that by the time the attacker has collected enough training data, the CAPTCHA scheme would have already changed, which will invalidate the efforts,” Wang said. “Our work presents a new way to generate CAPTCHA recognizer at a much lower cost. As a result, it poses a real threat to CAPTCHA schemes as it can learn a CAPTCHA solver much quicker.”

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…