California’s Controversial AI Bill: A Cheat Sheet

SB 1047 is inching closer to the legislative finish line, and will likely pass easily through the chamber. The only question that remains is whether Newsom will veto it.
Sanjana Friedman

  • California's AI safety bill (SB 1047), which would create a new government agency to regulate AI, has been widely criticized as impractical, onerous, and likely to stifle innovation
  • The bill requires AI researchers to sign a sworn statement affirming their models don’t pose an “unreasonable risk” of harm, even though there is no consensus — at all — on the theoretical harm cutting-edge models such as GPT-4 can cause
  • Many of the bill's requirements are so vague that even leading AI scientists would likely disagree about how to meet them
  • The scope of SB 1047, combined with California’s outsized importance in the burgeoning AI industry, means the bill would act as an extrajudicial compliance mechanism, with nationwide (and, likely, international) implications

--


Devs, prepare yourselves: California’s AI safety bill is inching closer to the legislative finish line. Earlier this month, SB 1047 (officially “The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”) passed the Assembly Judiciary Committee, a key legislative hurdle on its way to Newsom’s desk. The bill awaits further votes in the Assembly Appropriations Committee and on the chamber floor — both of which it will likely pass easily — and the only question that remains is whether the Governor will veto it.

The scope of SB 1047, combined with California’s outsized importance in the burgeoning AI industry, means the bill would have nationwide (and, likely, international) implications. As Senator Wiener smugly replied to claims the bill would push companies out of state: “An AI lab cannot simply relocate outside of California and avoid SB 1047’s safety requirements because… the bill applies when a model developer is doing business in California, regardless of where the developer is headquartered.” This long-arm statute, he further noted, is exactly how California has successfully gotten businesses nationwide to comply with its strict data privacy and health and safety regulations.

The bill has been substantially amended since this past May, when news of its passage in the state Senate sparked a flurry of tech infighting online (for a recap of the drama, see Solana’s “Bugman vs. The Robots”), but today’s version is unlikely to undergo significant further modification. Below, we provide a cheat sheet for understanding exactly what’s at stake — an overview of the regulations, key players, and broader legislative context.

THE KEY PLAYERS

SB 1047 was introduced by state Senator Scott Wiener (D-San Francisco) and co-authored by Senators Richard Roth (D-Riverside), Susan Rubio (D-Baldwin Park), and Henry Stern (D-Los Angeles). Its co-sponsors are the Center for AI Safety (CAIS) Action Fund, which primarily receives funding from Dustin Moskovitz’s Open Philanthropy nonprofit; Encode Justice, an organization composed of people under the age of 25 looking to “immediately establish guardrails for AI” (this is how those teenagers you may have seen defending the bill on X are connected to it); and Economic Security Action California, a 501(c)(3) affiliate of the Economic Security Project, a nonprofit that mainly advocates for “checking corporate power” and “championing guaranteed income.”

As Nirit Weiss-Blatt pointed out on X, other registered supporters of the bill include a German think tank studying “AI risks and impacts,” a Swedish startup, and a handful of graphic design firms, along with various other AI safety groups. (See a full list of registered supporters on page 35 of this document.)

THE REGULATIONS

For those interested, the text of SB 1047 (view here) is fairly short and worth reading in full. Broadly, the bill’s provisions fall into one of two categories: those that regulate developers of AI models, and those that designate government agents to enforce these regulations.

We’ll start with the first category. As of this writing, a “covered model” under SB 1047 is one that meets either of the following criteria:

  1. “An artificial intelligence model trained using a quantity of computer power greater than 10^26 integer or floating-point operations [FLOPs], the cost of which exceeds $100,000,000 when calculated using average market prices.”
  2. “An artificial intelligence model created by fine-tuning a covered model using a quantity of computer power equal to or greater than three times 10^25 FLOPs.”

The quantity of computing power used to train a model is a rough proxy for its capability, since complex or large-scale tasks (like natural language processing or video content analysis) generally require relatively high computing power. No existing model currently meets the bill’s threshold — the most compute used to train a model to date is the roughly 5×10^25 FLOPs used for Google’s Gemini Ultra, half the bill’s threshold — but compute used in frontier model training runs has been growing by around 4-5x annually, so we can expect several models to reach the threshold within the next year or so.
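For readers who want the arithmetic spelled out, here is a minimal, purely illustrative Python sketch. The thresholds are taken from the bill’s text, but the function names, the example cost figure, and the 4x growth assumption are ours, not anything in SB 1047. It encodes the two coverage criteria and projects how quickly today’s largest training run would cross the 10^26 mark:

```python
import math

# Thresholds quoted from SB 1047's "covered model" definition (illustrative only)
TRAINING_FLOP_THRESHOLD = 1e26          # training compute threshold, in FLOPs
TRAINING_COST_THRESHOLD = 100_000_000   # $100M, at average market prices
FINE_TUNE_FLOP_THRESHOLD = 3e25         # fine-tuning compute threshold, in FLOPs

def is_covered(training_flops: float, training_cost_usd: float,
               fine_tune_flops: float = 0.0) -> bool:
    """Rough reading of the bill's two coverage criteria."""
    trained_over = (training_flops > TRAINING_FLOP_THRESHOLD
                    and training_cost_usd > TRAINING_COST_THRESHOLD)
    fine_tuned_over = fine_tune_flops >= FINE_TUNE_FLOP_THRESHOLD
    return trained_over or fine_tuned_over

def years_until_threshold(current_flops: float = 5e25,
                          growth_per_year: float = 4.0) -> float:
    """Years until frontier training compute crosses 1e26 at a fixed growth rate."""
    return math.log(TRAINING_FLOP_THRESHOLD / current_flops) / math.log(growth_per_year)

# A Gemini Ultra-scale run (~5e25 FLOPs) is below the compute threshold today
# (the cost figure here is a made-up example)...
print(is_covered(training_flops=5e25, training_cost_usd=130_000_000))  # False
# ...but at 4x annual growth, the frontier crosses 1e26 in about half a year.
print(round(years_until_threshold(), 2))  # 0.5
```

At 4x annual growth the crossing takes roughly half a year, and at 5x slightly less, which is consistent with the expectation that several frontier models will qualify within the next year or so.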

Incidentally, from 2027 onward, the bill designates a state-run “Frontier Model Division” (discussed below) to modify these thresholds as it sees fit.

Before beginning training, developers of covered models would have to do the following, among other requirements:

  • Implement “administrative, technical, and physical” cybersecurity protections to prevent “unauthorized access to, misuse of, or unsafe post-training modifications of, the covered model and all covered model derivatives that are controlled by the developer…in light of the risks associated with the covered model, including from advanced persistent threats or other sophisticated actors.”
  • Implement the capability to “promptly enact a full shutdown” of either the model’s training or operation (or that of any developer-controlled model derivatives); a rough sketch of what such a kill switch could look like follows this list.
  • Identify specific tests that would provide “reasonable assurance” that the covered model (and derivative models, whether these are controlled by the original model developer or otherwise) does not pose an “unreasonable risk of causing or enabling a critical harm.”
  • Annually re-evaluate the capabilities of their model pursuant to the bill and, starting in 2028, retain a third-party auditor to conduct compliance checks.
  • Annually submit a report to the state — under penalty of perjury — of ongoing compliance with SB 1047.
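On the “full shutdown” requirement referenced above: the bill does not prescribe any particular mechanism, but the minimum it implies is a developer-controlled kill switch that halts training or serving. The toy loop below is entirely our own construction, not anything specified in SB 1047 or drawn from any lab’s actual infrastructure; it only shows the general shape of such a control.

```python
import threading

# Hypothetical kill switch: a flag the developer (or an automated monitor)
# can set to halt training or serving. SB 1047 specifies no implementation;
# this is only a toy illustration of the general shape of such a control.
shutdown_event = threading.Event()

def request_full_shutdown() -> None:
    """Flip the switch; the next loop iteration stops work."""
    shutdown_event.set()

def training_loop(total_steps: int) -> None:
    for step in range(total_steps):
        if shutdown_event.is_set():
            print(f"Full shutdown requested; halting at step {step}.")
            return
        # ... one training (or inference-serving) step would run here ...

# Example: a monitor thread could call request_full_shutdown() while
# training_loop() runs on the main thread.
```

A real frontier deployment would of course need far more than a flag (revoking serving capacity, disabling API access, and so on), which is part of why developers find the requirement hard to pin down.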


Per the bill, a “model derivative” is either an unmodified copy of a covered model, a copy of a covered model that has been subjected to “post-training modifications unrelated to fine-tuning” (i.e. adding tools or updating safeguards), or a copy of a covered model that has been fine-tuned using computing power not exceeding 3×10^25 FLOPs (or a quantity determined by the Frontier Model Division, once it is operational). A “critical harm” includes, among other things:

  • The creation or use of a chemical, biological, radiological or nuclear weapon that results in mass casualties.
  • At least $500,000,000 of damage from cyberattacks on critical infrastructure.
  • Mass casualties or at least $500,000,000 of damage resulting from an AI model “autonomously engaging in [deleterious] conduct.”

Developers would also have to report any “AI safety incident” to the Frontier Model Division within 72 hours, or potentially face civil action from the Attorney General. An “AI safety incident” could include, among other things:

  • A model “autonomously engaging in behavior other than at the request of a user,” regardless of whether this behavior is deleterious or not.
  • Theft, misappropriation, or “inadvertent release” of the model weights.
  • The “critical failure” of model controls, including those limiting the ability to modify a model.

As Andrew Ng, who founded Google Brain and teaches computer science at Stanford, notes, many of these requirements are vague, and leading AI scientists would likely disagree about how to meet them. How, Ng asks, can researchers sign a sworn statement (under penalty of perjury) affirming their models don’t pose an “unreasonable risk” of harm if they disagree about what harms those models could theoretically enable?

On the state agencies side, SB 1047 would create two new government bodies: a five-person Board of Frontier Models, which would operate within the state’s Government Operations Agency and whose members would all be unelected appointees, and the Frontier Model Division, which would operate under the direct supervision of the board. These unelected government employees would, collectively, be responsible for reviewing annual compliance reports, publishing anonymized safety incident reports, and — most controversially — advising the Governor about when to “proclaim a state of emergency relating to AI,” tipping the Attorney General off to violations of SB 1047, and drafting “model jury instructions” for related trials.

As the Context Fund, an AI policy workgroup, points out in its analysis of SB 1047, this seems to essentially amount to affording an unelected, non-term-limited cabal of bureaucrats “unilateral power” to set and enforce AI safety standards statewide.

Elsewhere, the bill would also establish “CalCompute,” a state-owned and hosted cloud platform whose primary focus would be to facilitate research into the “safe and secure deployment” of AI and to foster “equitable innovation.”

THE BROADER CONTEXT

SB 1047 is not the first legislation of its kind in the US. President Biden signed an executive order last October subjecting all American AI models trained using computing power exceeding 10^26 FLOPs to reporting requirements overseen by the Secretary of Commerce, and Colorado recently passed its own “AI safety” bill, SB-205, which regulates developers of “high-risk AI systems.”

Still, it remains unclear whether Newsom will actually sign SB 1047 into law. He has previously vetoed legislative attempts to regulate tech (as, for example, when he rejected a bill last year that would have effectively banned autonomous trucks), and warned of the perils of “over-regulating” AI at a recent event in San Francisco. But the recent budget deal Newsom reached with legislators includes language about coordinating “resources…to implement or procure certain generative AI projects,” suggesting the Governor may be open to SB 1047. We can expect a final answer by September 30.

— Sanjana Friedman
