Anthropic Supports California's AI Safety Bill SB 53

2025-09-09

On Monday, Anthropic announced its official endorsement of SB 53, a bill from California Senator Scott Wiener that would impose groundbreaking transparency requirements on developers of the world's largest AI models. The endorsement marks a rare and significant win for SB 53, which faces opposition from major tech lobbying groups such as the Consumer Technology Association (CTA) and the Chamber of Progress.

“While we believe that frontier AI safety issues are best addressed at the federal level rather than through a patchwork of state regulations, the rapid advancement of AI technology won’t wait for consensus in Washington,” Anthropic stated in a blog post. “The question isn’t whether we need AI governance, but whether we will develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former.”

If passed, SB 53 would require leading AI developers, including OpenAI, Anthropic, Google, and xAI, to establish safety frameworks and release public safety and risk reports before deploying powerful AI models. The bill would also extend whistleblower protections to employees who raise safety concerns.

Senator Wiener’s bill focuses specifically on limiting AI models’ contribution to “catastrophic risk,” defined as incidents resulting in at least 50 deaths or over $1 billion in losses. SB 53 targets extreme AI risks—such as AI being used to provide expert-level assistance in creating biological weapons or launching cyberattacks—rather than more immediate concerns like AI-generated deepfakes or misinformation.

The California Senate has already approved an earlier version of SB 53, but the bill still needs a final vote before it can head to the governor's desk. Governor Gavin Newsom has so far stayed silent on it, though he vetoed Senator Wiener's last AI safety bill, SB 1047.

Legislation targeting frontier AI developers has faced strong resistance from both Silicon Valley and the Trump administration, which argue that such measures could hinder U.S. innovation in the race against China. Investors such as Andreessen Horowitz and Y Combinator led the opposition to SB 1047, and the Trump administration has repeatedly threatened to block state-level AI regulations.

One of the most common arguments against AI safety bills is that such matters should be left to the federal government. In a blog post last week, Matt Perault, head of AI policy at Andreessen Horowitz, and Jai Ramaswamy, the firm's chief legal officer, argued that many current state-level AI bills risk running afoul of the Constitution's commerce clause, which restricts states from passing laws that reach beyond their borders and impair interstate commerce.

However, Jack Clark, co-founder of Anthropic, argued in a post on X that the tech industry will build powerful AI systems in the coming years and cannot afford to wait for federal action.

“We’ve always said we’d prefer a federal standard,” Clark said. “But in the absence of one, this creates a solid blueprint for AI governance that can’t be ignored.”

Chris Lehane, OpenAI’s chief global affairs officer, sent a letter to Governor Newsom in August urging him to avoid any AI regulations that might push startups out of California—though the letter did not specifically name SB 53.

Miles Brundage, OpenAI's former policy research lead, responded on X that Lehane's letter contained misleading claims about SB 53 and AI regulation more broadly. He noted that SB 53 is designed to regulate only the world's largest AI companies, specifically those generating more than $500 million in annual revenue.

Despite the criticism, policy experts say SB 53 is more moderate than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and former White House AI policy adviser, wrote in a blog post in August that he believes SB 53 now has a strong chance of becoming law. Ball criticized SB 1047 but praised SB 53’s drafters for showing “respect for technical realities” and a degree of “legislative restraint.”

Senator Wiener has previously said that SB 53 was heavily influenced by an expert policy group Governor Newsom convened to advise on how California should regulate AI, whose members include leading Stanford AI researcher and World Labs co-founder Fei-Fei Li.

Most AI labs already have some version of the safety policies SB 53 requires. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. But these companies are bound by nothing except their own commitments, and they sometimes fall short of them. SB 53 would codify these requirements into state law, with financial consequences for AI labs that fail to comply.

In early September, California lawmakers amended SB 53 to remove a section that would have required AI model developers to undergo third-party audits. Tech companies had fought such audit provisions in past AI policy battles, arguing they were overly burdensome.