
OpenAI never disclosed that hackers cracked its internal messaging system


A hacker infiltrated OpenAI's internal messaging system last year and absconded with details about the company's AI designs, according to a report from the New York Times on Thursday. The attack targeted an online forum where OpenAI employees discussed upcoming technologies and features for the popular chatbot; however, the systems where the actual GPT code and user data are stored were not affected.

While the company disclosed that information to its employees and board members in April 2023, it declined to notify either the public or the FBI about the breach, arguing that doing so was unnecessary because no user or partner data was stolen. OpenAI does not consider the attack a national security threat and believes the attacker was a single individual with no ties to foreign powers.


Per the NYT, former OpenAI employee Leopold Aschenbrenner previously raised concerns about the state of the company’s security apparatus and warned that its systems could be accessible to the intelligence services of adversaries like China. Aschenbrenner was summarily dismissed by the company, though OpenAI spokesperson Liz Bourgeois told the New York Times his termination was unrelated to the memo.

This is far from the first security lapse OpenAI has suffered. Since its debut in November 2022, ChatGPT has been repeatedly targeted by malicious actors, often resulting in data leaks. In February of this year, usernames and passwords were leaked in a separate hack. In March 2023, OpenAI had to take ChatGPT offline entirely to fix a bug that revealed users' payment information (including first and last names, email addresses, payment addresses, credit card types, and the last four digits of card numbers) to other active users. Last December, security researchers discovered that they could entice ChatGPT to reveal snippets of its training data simply by instructing the system to endlessly repeat the word "poem."

“ChatGPT is not secure. Period,” AI researcher Gary Marcus told The Street in January. “If you type something into a chatbot, it is probably safest to assume that (unless they guarantee otherwise), the chatbot company might train on those data; those data could leak to other users.” Since the attack, OpenAI has taken steps to beef up its security systems, including installing additional safety guardrails to prevent unauthorized access and misuse of the models, as well as establishing a Safety and Security Committee to address future issues.

Andrew Tarantola
Microsoft Copilot: how to use this powerful AI assistant

In the rapidly evolving landscape of artificial intelligence, Microsoft's Copilot is a powerful AI assistant designed to streamline and enhance your professional productivity. Whether you're new to AI or a seasoned pro, this guide will walk you through the essentials of Copilot, from understanding what it is and how to sign up to mastering the art of effective prompts and creating stunning images.

Additionally, you'll learn how to manage your Copilot account to ensure a seamless and efficient user experience. Dive in to unlock the full potential of Microsoft's Copilot and transform the way you work.
What is Microsoft Copilot?
Copilot is Microsoft's flagship AI assistant, built on an advanced large language model. It's available on the web and through iOS and Android mobile apps, and it integrates with apps across the company's Microsoft 365 suite, including Word, Excel, PowerPoint, and Outlook. The AI launched in February 2023 as a replacement for Cortana, Microsoft's retired digital assistant. It was initially branded as Bing Chat and offered as a built-in feature of Bing and the Edge browser, then officially rebranded as Copilot in September 2023 and integrated into Windows 11 through an update that December.

OpenAI uses its own models to fight election interference

OpenAI, the company behind the popular ChatGPT generative AI chatbot, released a report saying it has blocked more than 20 operations and deceptive networks worldwide so far in 2024. The operations differed in objective, scale, and focus, and were used to create malware as well as fake social media accounts, fake bios, and website articles.

OpenAI says it analyzed the activities it stopped and shared key insights from that analysis. "Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," the report says.

From OpenAI to hacked smart glasses, here are the 5 biggest AI headlines this week

We officially transitioned into spooky season this week and, between OpenAI's $6.6 billion funding round, Nvidia's surprise LLM, and some privacy-invading Meta smart glasses, we saw a scary number of developments in the AI space. Here are five of the biggest announcements.
OpenAI secures $6.6 billion in latest funding round

Sam Altman's charmed existence continues apace with news this week that OpenAI has secured an additional $6.6 billion in investment as part of its most recent funding round. Existing investors like Microsoft and Khosla Ventures were joined by newcomers SoftBank and Nvidia. The AI company is now valued at a whopping $157 billion, making it one of the most valuable private companies on Earth.
