

OpenAI gets called out for opposing a proposed AI safety bill

Ex-OpenAI employees William Saunders and Daniel Kokotajlo have written a letter to California Gov. Gavin Newsom arguing that the company’s opposition to a state bill that would impose strict safety guidelines and protocols on future AI development is disappointing but not surprising.

“We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing,” Saunders and Kokotajlo wrote. “But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems.”


The two argue that further development without sufficient guardrails “poses foreseeable risks of catastrophic harm to the public,” whether that’s “unprecedented cyberattacks or assisting in the creation of biological weapons.”

The duo also accused OpenAI CEO Sam Altman of hypocrisy on the matter of regulation, pointing to his recent congressional testimony calling for regulation of the AI industry and noting that “when actual regulation is on the table, he opposes it.”

Per a 2023 survey by the MITRE Corporation and The Harris Poll, only 39% of respondents believed that today’s AI tech is “safe and secure.”

The bill in question, SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would, “among other things, require that a developer, before beginning to initially train a covered model … comply with various requirements, including implementing the capability to promptly enact a full shutdown … and implement a written and separate safety and security protocol.” Notably, OpenAI has suffered multiple data leaks and system intrusions in recent years.

OpenAI, for its part, “strongly disagree[s] with the mischaracterization of our position on SB 1047,” a spokesperson told Business Insider. The company instead argues that “a federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the US to lead the development of global standards,” as Chief Strategy Officer Jason Kwon wrote in a letter to California state Sen. Scott Wiener in February.

Saunders and Kokotajlo counter that OpenAI’s push for federal regulations is not in good faith. “We cannot wait for Congress to act — they’ve explicitly said that they aren’t willing to pass meaningful AI regulation,” the pair wrote. “If they ever do, it can preempt CA legislation.”

The bill has found support from a surprising source as well: xAI CEO Elon Musk. “This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” he wrote on X on Monday. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk.” Musk, who recently announced the construction of “the most powerful AI training cluster in the world” in Memphis, Tennessee, had previously threatened to move the headquarters of his X (formerly Twitter) and SpaceX companies to Texas to escape industry regulation in California.

Update: This post has been updated to include the comments from Elon Musk.

Andrew Tarantola