As sure as night follows day, scammers have been quick to take an interest in ChatGPT, the advanced AI-powered chatbot from Microsoft-backed OpenAI that burst onto the scene in November.
In a new security report posted on Wednesday, Meta, the company formerly known as Facebook, said that since March alone its security analysts have uncovered around 10 malware families posing as ChatGPT and similar AI tools, designed to compromise online accounts, especially those of businesses.
The scams can be delivered via web browser extensions, for example, some of which have been found in official web stores. These extensions offer ChatGPT-related tools and may even provide some ChatGPT-like functionality, Guy Rosen, Meta’s chief information security officer, wrote in the post. But they are ultimately designed to trick users into giving up sensitive information or accepting malicious payloads.
Rosen said his team has seen malware masquerade as ChatGPT apps and then, once detected, simply switch its lures to other popular products, such as Google’s AI-powered Bard tool, in a bid to avoid detection.
Rosen said Meta had detected and blocked more than 1,000 unique malicious URLs from being shared on its apps, and had reported them to the companies hosting the malware so they could take their own appropriate action.
Meta said it will continue to highlight how these malicious campaigns operate, share threat indicators with other companies, and introduce updated protections to address scammers’ evolving tactics. Its efforts also include the launch of a new support flow for businesses affected by malware.
Citing crypto scams as an earlier example, Rosen noted that this latest assault by cybercriminals follows a familiar pattern: exploiting the popularity of new or buzzy tech products to trick unsuspecting users into falling for their ruses.
“The generative AI space is rapidly evolving and bad actors know it, so we should all be vigilant,” Rosen warned.