
OpenAI never disclosed that hackers cracked its internal messaging system


A hacker managed to infiltrate OpenAI’s internal messaging system last year and abscond with details about the company’s AI design, according to a report from the New York Times on Thursday. The attack targeted an online forum where OpenAI employees discussed upcoming technologies and features for the popular chatbot; the systems where the actual GPT code and user data are stored, however, were not affected.

While the company disclosed the breach to its employees and board members in April 2023, it declined to notify either the public or the FBI, claiming that doing so was unnecessary because no user or partner data had been stolen. OpenAI does not consider the attack a national security threat and believes the attacker was a single individual with no ties to foreign powers.


Per the NYT, former OpenAI employee Leopold Aschenbrenner had previously raised concerns in a memo about the state of the company’s security apparatus, warning that its systems could be accessible to the intelligence services of adversaries like China. Aschenbrenner was summarily dismissed by the company, though OpenAI spokesperson Liz Bourgeois told the New York Times his termination was unrelated to the memo.

This is far from the first security lapse OpenAI has suffered. Since its debut in November 2022, ChatGPT has been repeatedly targeted by malicious actors, often resulting in data leaks. In February of this year, user names and passwords were leaked in a separate hack. In March 2023, OpenAI had to take ChatGPT offline entirely to fix a bug that revealed users’ payment information, including first and last names, email addresses, payment addresses, credit card types, and the last four digits of card numbers, to other active users. Last December, security researchers discovered that they could entice ChatGPT to reveal snippets of its training data simply by instructing the system to endlessly repeat the word “poem.”

“ChatGPT is not secure. Period,” AI researcher Gary Marcus told The Street in January. “If you type something into a chatbot, it is probably safest to assume that (unless they guarantee otherwise), the chatbot company might train on those data; those data could leak to other users.”

Since the attack, OpenAI has taken steps to beef up its security systems, including installing additional safety guardrails to prevent unauthorized access and misuse of the models, as well as establishing a Safety and Security Committee to address future issues.

Andrew Tarantola