Groundswell of Opposition to CA’s AI Bill as it Nears Vote

Criticisms of SB 1047 have reached a fever pitch, with academics and politicians joining Silicon Valley in rejecting the ambiguous regulatory regime it would impose on the industry
Brandon Gorrell and Riley Nork

Leaders from Silicon Valley, academia, and D.C. came out in opposition to California’s AI Safety bill (SB 1047) last week, publishing critical white papers, op-eds, open letters, and letters to Sen. Scott Wiener.

The bill, which Wiener introduced in February, would create a new government agency to regulate AI (read our primer here). Critics argue its “strategically ambiguous” language, criminal penalties, kill switch provision, and other aspects will stifle innovation in the AI sector and hurt California’s economy, the largest in the US.

Last week, Andreessen Horowitz Chief Legal Officer Jaikumar Ramaswamy sent Wiener a 14-page letter, arguing that Wiener's July defense of the bill “made certain legal claims inconsistent with a plain reading” of its text. Specifically, the letter says, 1047 privileges closed source models over open source, will stifle innovation, and is “troublingly vague,” among other criticisms.

Dr. Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), who has been called the “godmother of AI” for her early research in the field, penned an op-ed in Fortune last week arguing that 1047 “will have significant unintended consequences, not just for California, but for the entire country.” Like Ramaswamy, Li worries the bill’s “kill switch” provision would discourage developers from working on models, because the models may ultimately be deleted.

Zoe Lofgren, ranking member of the House Committee on Science, Space, and Technology (Science Committee), also sent a letter to Wiener, arguing that while she supports AI regulation, 1047 creates “unnecessary risks for both the public and California’s economy.” The Science Committee has jurisdiction over artificial intelligence, and wrote the first federal law concerning AI, the National Artificial Intelligence Initiative Act.

And on Tuesday, Rep. Ro Khanna, the Democrat who represents California's 17th district, which includes Silicon Valley, published a statement condemning the bill. "[T]he bill as currently written would be ineffective, punishing of individual entrepreneurs and small businesses, and hurt California’s spirit of innovation," he said.

An open letter from UC faculty and students was also circulated last week, soliciting signatures in opposition to the bill, which they describe as “a catastrophically bad law attempting to regulate ‘AI Safety’ that may significantly impact our ability to conduct research.”

A number of authoritative voices in AI have also recently voiced their concerns with 1047. Anima Anandkumar, the Bren Professor of Computing at Caltech and former senior director of machine learning research at Nvidia, posted on Wednesday that life-saving innovations in AI such as extreme weather prediction will be endangered if SB 1047 passes.

Russell Wald — deputy director at Stanford HAI — critiqued the bill on grounds that “[SB 1047] does not offer the regulation California needs and instead attempts [to] combat unsubstantiated X risk.” Elsewhere, David Hinkle — VP of Software Engineering at Securly — tweeted that “SB 1047 is basically a different bill every time I read it… The AG will decide if they think your model is bad. If they think it’s bad they’ll take you to court and if the judge also thinks it’s bad you’re a criminal.”

“From the scientific community, the builders, the investors, and broadly — across the board — there's been a universal rejection of the ambiguous regulatory regime that SB 1047 imposes,” Chris Lengerich, founder of the AI open-source community Context Fund, told Pirate Wires on a phone call last week.

“In terms of the support for 1047 that I've seen, it hasn’t been very substantive,” he said. “A lot of the support has been very high-level claims, which relate to the idea that intelligence is unsafe, and also that 1047 is the right way to go about managing intelligence, without actually specifying how 1047 works. And I think when you look at the people in support of the bill, the two threads that stand out to me are that they have some type of funding from EA or Open Philanthropy.”

Context Fund has been involved with 1047 “from the very beginning,” Lengerich said. “Back in late March, we were talking to Senator Wiener's office. We spent about a month with them, expressing concerns and trying to get amendments into the bill, but that didn't happen.”

Wiener has said only “the loudest voices” are critiquing SB 1047. But a16z General Partner Martin Casado pointed out that in a thread defending the bill, Wiener cited support from just two startups, one of which is based outside of California, while a letter in opposition to the bill circulated by Y Combinator and a16z was signed by more than 140 AI startup founders. Casado also posted an extensive thread of opposition to 1047 from startup founders, policymakers, and academics.

The bill will be considered at the Assembly Committee on Appropriations suspense meeting on Thursday, where it could be further amended.
