The Forces Behind Scott Wiener's Steadfast Defense of SB 1047

in the face of overwhelming resistance to sb 1047, scott wiener continues to defend the effective altruists’ position while mischaracterizing opponents as just the “loudest voices”
Dean W. Ball

SB 1047, California’s effort to regulate frontier AI models, is nearing the finish line. It will soon either die a quiet death in the California Assembly or proceed to Governor Newsom’s desk for signature or veto. Though the bill has undergone many revisions since its introduction six months ago, its basic structure remains the same: unelected bureaucrats are appointed to set nebulous AI safety “guidelines,” and AI model developers are held legally liable for major crimes that third parties beyond their direct control commit using their models.

Ultimately, SB 1047 sets the stage for the gradual death of frontier open-source AI. The liability provisions will increase the marginal risk of releasing powerful models as open source. The government-issued guidelines envisioned by the bill will likely be incompatible with open-source models, similar to how the federal US AI Safety Institute’s recent AI misuse mitigation guidelines made recommendations that are impossible for open-source developers to adopt.

This should come as no surprise: the Effective Altruism-based Center for AI Safety (CAIS) co-sponsored 1047, drafting it in all but name, and many Effective Altruists believe open-source AI is an existential threat to every living thing on Earth, and that AI development should instead be centralized under a monopoly lab or the government itself. Some have proposed de facto bans of any open-source model larger than GPT-3: earlier this year, CAIS proposed stringent regulations on models trained with more than 10^23 FLOPs, while others, like Gladstone AI, have suggested making it a felony to release open models far smaller than Llama 3.

But those supporters are a small part of the AI world. Indeed, almost all other relevant parties outside the Effective Altruism community have been opposed to SB 1047 for months. The bill has been described as unworkable by hundreds of academic researchers, including senior figures such as Andrew Ng, Fei-Fei Li, and Yann LeCun. The venture capital community has said it will harm startups across America, since the bill’s author, State Senator Scott Wiener, intends for SB 1047 to apply throughout the United States by regulating companies “doing business” in California rather than just companies based in California. Startup founders themselves have, for the most part, strongly criticized the bill. So has Big Tech. Most recently, prominent Congressional Democrats have also voiced their opposition, including Silicon Valley representatives Zoe Lofgren and Ro Khanna and former House Speaker Nancy Pelosi.

It is uncommon for senior federal officials to weigh in on state legislation; that they chose to do so for SB 1047 reflects not only the seriousness of the bill, but also the implications it carries for the entire American AI field.

Despite the chorus of opposition from virtually all concerned parties, Scott Wiener continues to insist that SB 1047 is “light-touch” regulation supported by the vast majority of Californians and opposed only by a vocal minority of billionaire accelerationists.

What explains this disconnect? Why has Scott Wiener, a powerful California politician with big ambitions, chosen to alienate the tech community (recently dismissing opposition to the bill as just the “loudest voices”) — arguably the most important interest group for a man who wishes to represent San Francisco in Congress? More critically, why does he favor a relatively small group of advocates while ignoring — even mischaracterizing — the much larger groups of people who are also his constituents? The answer lies in a fusion of two toxic phenomena: Effective Altruists’ longstanding influence over Wiener, who had been working with them on YIMBY initiatives for years before 1047, and the California legislature’s conception of itself as America’s regulator-in-chief.

When ChatGPT launched in late 2022, policymakers quickly became convinced they had to do something about it. No one was quite sure what, exactly, but there was near-universal agreement in the policymaking community that action was necessary. We were told there was a race to regulate — and that the United States was losing it, first to China, then to the European Union. “In the coming years,” wrote Anu Bradford in Foreign Affairs, “there will be clear winners and losers not only in the race to develop AI technologies but also in the competition among the regulatory approaches that govern those technologies,” and in this competition, “the United States cannot afford to sit on the sidelines.” The question of what we were regulating, and what that regulation should be, was, evidently, of secondary importance. We were losing the race, and that was all that mattered.

At the time, almost all the groups doing policy work on AI were led by doomers — largely drawn from and funded by the Effective Altruist community. These groups bore nonpartisan and official-sounding names: the Center for AI Policy, the Center for the Governance of AI, and, importantly for our story, the Center for AI Safety, whose leader, Dan Hendrycks, is the intellectual driving force behind SB 1047.

Legislators, not knowing any better, looked to these groups for guidance, if only because they were the only ones who showed up. Much of the initial policy work on AI at the state and federal levels bore the mark of doomer thought: aggressive regulation of AI models (the Biden Executive Order’s compute-based reporting thresholds, for instance) and moves to ban or de facto ban frontier open-source AI (such as the frontier AI regulation framework announced by Senators Romney, Reed, Moran, and King). Elsewhere, the broader AI policymaking community has proposed efforts to centralize AI development under one player, or a small handful of them, such as the Center for the Governance of AI’s recent suggestion that Western governments should acquire all advanced GPUs, or the startup Conjecture’s proposal for the Multinational AGI Consortium (MAGIC), outside of which all AI development would be illegal.

In Washington, these ideas were soon put into competition with those of other experts. The US Senate, for example, led by Majority Leader Chuck Schumer (D-NY), convened “Insight Forums” starting in the fall of 2023, attended by more than 150 AI leaders from academia, startups, venture capital, and Big Tech — as well as non-AI experts such as teachers’ and writers’ unions, the NAACP, and others. Doomers, such as MIT’s Max Tegmark, and non-doomers, such as Stanford’s Andrew Ng, had their voices heard.

These meetings helped calm the rhetoric coming out of DC. In May 2024, a few months after the Insight Forums, the Senate’s AI working group released its report, which largely advocated against rushing into onerous regulation of AI models. The process had worked: Congress encountered an emerging technology, got scared, met with a diverse range of experts, and came out of the process with more informed and nuanced views. Notably, the world has yet to end.

No such thing happened in the ideological monoculture of Sacramento. Instead, Wiener — one of California’s most powerful and ambitious politicians — turned to Hendrycks and CAIS, which at that point had received nearly $10 million from Open Philanthropy, EA’s money arm. CAIS even set up a distinct lobbying group, the Center for AI Safety Action Fund, after “getting lots of inquiries from policymakers, including Senator Wiener... to have a vehicle that could do more direct policy work,” per Nathan Calvin, CAIS senior policy counsel. Then CAIS co-sponsored 1047, with Hendrycks as its principal architect. (For the full primer on Hendrycks, CAIS, and 1047, read Pirate Wires’ reporting here.)

By the time Wiener introduced the intent bill for 1047, lines of communication between him and EA had been open for years. The Senator has been a champion of YIMBY initiatives since at least 2018, and Open Philanthropy was the “first institutional funder of the movement,” per its Wikipedia page. As of late last year, it had donated around $5 million to YIMBY efforts, $500,000 of which had gone to the nonprofit California YIMBY by the time it sponsored Wiener’s SB 10, a housing bill that passed and was ultimately signed into law by Gavin Newsom in 2021.

In light of Wiener and EA’s long-running relationship, it’s not surprising that his stance on AI mirrors EA’s. But the trouble with groups like CAIS — and Effective Altruism broadly — is that they occupy a parallel universe. They are part of a community that has been worrying about AI risk for decades, and has built an intellectual world on top of load-bearing ideas like deceptively aligned AI (where the AI pretends to be your friend but in fact wants to kill you) and the intelligence explosion (where AI begins to improve itself recursively). They paint a compelling picture of the future for the uninformed, and they do so with ample quasi-scientific jargon. A handful of academics, including senior AI figures like Geoff Hinton and Yoshua Bengio, as well as an even smaller handful of startups (often EA-aligned), lend credibility to this rhetoric.

The insular nature of this group has contributed to intellectual myopia. Their network of nonprofits and intellectual outlets is so robust that they have created a kind of doomer echo chamber. Most of their policy recommendations boil down to a basic idea: “My friends and I have been thinking about AI risk for years, so put us in charge of AI development and AI chips.” One OpenAI employee, writing in a personal capacity on the rationalist (and heavily EA-populated) website LessWrong, described the community as “structurally power-seeking.”

They also stand to gain more directly from AI regulation: people who draft and advise on legislation are often well-positioned for a bureaucratic or other gatekeeping role in its enforcement. Other conflicts of interest are possible, too: while drafting SB 1047, Hendrycks, who conducts AI safety and security research, co-founded a startup that may well end up providing SB 1047 compliance services if the bill passes. After Pirate Wires reported on the arrangement, Hendrycks announced his intention to divest from the startup; still, the episode highlights how easily potential conflicts of interest can emerge when one is simultaneously pushing forward academic research, public policy, and startups.

Another factor in Wiener’s steadfast defense of 1047 against overwhelming opposition is that he and his colleagues in the California legislature see themselves as America’s only grown-ups when it comes to technology regulation. In Wiener’s view, the federal government is hopelessly incapable of taking action on much of anything important, so California must bear the burden of regulating AI for all Americans. He seems to believe this is his duty. Wiener summarized his perspective nicely in a recent tweet about net neutrality: “Congress won’t act. It’s why we handle tech regulation in CA.” This explains why SB 1047 is meant to apply nationally, and why other major AI bills in California this year would also regulate the technology nationwide.

In this way, Sacramento is emerging as America’s Brussels. Like the European Union, California is seeking to leverage the unique characteristics of the internet and software to exercise outsized control over those technologies. Because software (particularly frontier AI) is expensive to build but cheap to distribute globally, it is susceptible to control by the first sufficiently large regulator to take action. This is why many provisions of European Union laws like the GDPR and the AI Act regulate technology globally, a phenomenon known as the “Brussels Effect.” With SB 1047, Wiener is aiming for a similar “Sacramento Effect.”

It should come as no surprise, then, that Brussels regulators have collaborated with the California legislature on AI regulation. Referring to the trio of AB 2930 (an “algorithmic bias” bill), AB 3211 (deepfake regulation), and SB 1047, European Union tech bureaucrat Gerard de Graaf said, “if you take these three bills together, you’re probably at 70-80% of what we cover in the AI Act.” The European Union even opened an office in San Francisco in 2022 to help coordinate transnational regulatory efforts.

Wiener and his colleagues in the California legislature are now thoroughly convinced that they, along with their peers in Brussels, can enact a world-spanning regulatory regime for AI, whether the federal government likes it or not. There are no laws preventing a state from coordinating its policy with another country, and outside of Congress’ authority to preempt states from regulating in a given field, there is probably nothing stopping California from regulating AI for all of America.

This kind of power is understandably tempting for an avowed statist like Scott Wiener, who believes that “California literally made” Elon Musk through public policy. It is hard for Wiener to imagine much of anything good happening in the world without the guiding hand of the state somehow involved. In the end, then, the AI safety community told Wiener exactly what he wanted to hear: that everyone agreed AI model regulation is urgently necessary, and that he would be greeted as a hero by nearly everyone, save for a few “accelerationists” who wish to crush the common people beneath the wheels of progress. Both Wiener and the doomers are unreliable narrators here, starkly divorced from the facts on the ground. The result is a headless horseman riding a headless horse.

We should expect better from our political institutions. Washington, for its part, has been better. Perhaps this is because our federal government still bears at least some features of a democratic republic: robust ideological competition among different interest groups. California, on the other hand, is a de facto one-party state — a democracy in name only.

Whether or not SB 1047 ultimately succeeds, this toxic mixture of regulatory ambition and “structurally power-seeking” doomerism is likely to remain a threat to AI — and technology as a whole — for some time to come. If the bill fails, there is a chance Wiener will learn the hard way that he miscalculated badly. But more likely, he and the doomers will chalk the loss up to the efforts of shadowy billionaires and “dark money,” and we’ll end up right where we started. Given the circumstances, that might be the best outcome one can reasonably expect.

— Dean W. Ball
