
OpenAI never disclosed that hackers cracked its internal messaging system

Image: A concept image of a hacker at work in a dark room. (Microbiz Mag)

A hacker managed to infiltrate OpenAI’s internal messaging system last year and abscond with details about the company’s AI design, according to a report from the New York Times on Thursday. The attack targeted an online forum where OpenAI employees discussed upcoming technologies and features for the popular chatbot; however, the systems where the actual GPT code and user data are stored were not affected.

While the company disclosed that information to its employees and board members in April 2023, it declined to notify either the public or the FBI about the breach, claiming that doing so was unnecessary because no user or partner data was stolen. OpenAI does not consider the attack to constitute a national security threat and believes the attacker was a single individual with no ties to foreign powers.

Per the NYT, former OpenAI employee Leopold Aschenbrenner previously raised concerns about the state of the company’s security apparatus and warned that its systems could be accessible to the intelligence services of adversaries like China. Aschenbrenner was summarily dismissed by the company, though OpenAI spokesperson Liz Bourgeois told the New York Times his termination was unrelated to the memo.

This is far from the first time that OpenAI has suffered such a security lapse. Since its debut in November 2022, ChatGPT has been repeatedly targeted by malicious actors, often resulting in data leaks. In February of this year, user names and passwords were leaked in a separate hack. In March 2023, OpenAI had to take ChatGPT offline entirely to fix a bug that revealed users’ payment information — including first and last names, email addresses, payment addresses, credit card types, and the last four digits of card numbers — to other active users. Last December, security researchers discovered that they could entice ChatGPT to reveal snippets of its training data simply by instructing the system to endlessly repeat the word “poem.”

“ChatGPT is not secure. Period,” AI researcher Gary Marcus told The Street in January. “If you type something into a chatbot, it is probably safest to assume that (unless they guarantee otherwise), the chatbot company might train on those data; those data could leak to other users.”

Since the attack, OpenAI has taken steps to strengthen its security systems, including installing additional safety guardrails to prevent unauthorized access and misuse of the models, as well as establishing a Safety and Security Committee to address future issues.

Andrew Tarantola
Andrew has spent more than a decade reporting on emerging technologies ranging from robotics and machine learning to space…