Anthropic endorses California’s AI safety bill, SB 53

On Monday, Anthropic announced its official support for SB 53, a California bill from state Senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world's largest developers of AI models. Anthropic's endorsement marks a rare and major win for SB 53, at a time when major tech groups such as the Consumer Technology Association (CTA) and the Chamber of Progress are lobbying against the bill.

"While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won't wait for consensus in Washington," Anthropic said in a blog post. "The question isn't whether we need AI governance; it's whether we'll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former."

If passed, SB 53 would require frontier AI model developers such as OpenAI, Anthropic, Google, and xAI to develop safety frameworks, as well as to publish safety and security reports before deploying powerful AI models. The bill would also create whistleblower protections for employees who come forward with safety concerns.

Senator Wiener's bill focuses specifically on preventing AI models from contributing to "catastrophic risks," which the bill defines as incidents causing the death of at least 50 people or more than a billion dollars in damages. SB 53 targets the extreme end of AI risk (limiting the use of AI models to provide expert-level assistance in creating biological weapons, or their use in cyberattacks) rather than nearer-term concerns such as AI deepfakes or sycophancy.

The California Senate approved a prior version of SB 53, but still needs to hold a final vote on the bill before it can advance to the governor's desk. Governor Gavin Newsom has stayed silent on the bill so far, although he vetoed Senator Wiener's previous AI safety bill, SB 1047.

Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, which both argue that such efforts could limit America's innovation in the race against China. Investors such as Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.

One of the most common arguments against AI safety bills is that states should leave the issue to the federal government. Andreessen Horowitz's head of AI policy, Matt Perault, and its chief legal officer, Jai Ramaswamy, published a blog post last week arguing that many of today's state AI bills risk violating the Constitution's Commerce Clause, which limits state governments from passing laws that reach beyond their borders and impair interstate commerce.


However, Anthropic co-founder Jack Clark argued in a post on X that the tech industry will build powerful AI systems in the coming years and cannot wait for the federal government to act.

"We have long said we would prefer a federal standard," Clark said. "But in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored."

OpenAI's chief global affairs officer, Chris Lehane, sent a letter to Governor Newsom in August arguing that he should not pass any AI regulation that would push startups out of California, although the letter did not mention SB 53 by name.

OpenAI's former head of policy research, Miles Brundage, said in a post on X that Lehane's letter was "filled with misleading garbage about SB 53 and AI policy generally." Notably, SB 53 aims to regulate only the world's largest AI companies, specifically those with gross revenues of more than $500 million.

Despite the criticism, policy experts say SB 53 takes a more modest approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and a former White House AI policy adviser, said in an August blog post that he believes SB 53 now has a good chance of becoming law. Ball, who criticized SB 1047, said SB 53's drafters showed "respect for technical reality," as well as a "measure of legislative restraint."

Senator Wiener has previously said that SB 53 was heavily influenced by an expert policy panel convened by Governor Newsom, co-led by Stanford researcher and World Labs co-founder Fei-Fei Li, to advise California on how to regulate AI.

Most AI labs already have some version of the internal safety policy that SB 53 requires, and they publish safety reports for their models. However, these companies are bound only by their own commitments, and they sometimes fall behind on their self-imposed safety obligations. SB 53 aims to set these requirements into state law, with financial penalties if an AI lab fails to comply.

Earlier in September, California lawmakers amended SB 53 to remove a section of the bill that would have required AI model developers to be audited by third parties. Tech companies have fought these kinds of third-party audits in other AI policy battles, arguing that they are overly burdensome.
