Bugman vs. the Robots

SB 1047, California’s bill to ban open-source AI, the effective altruists behind the bugman politician responsible, and tech’s strange bedfellows
Mike Solana


First strike. The California Senate just passed a bill that will effectively ban open-source AI development in the state unless bimbo governor Gavin Newsom checks his own party with a veto. Am I shocked? Even by today’s gutter policy standards, I’ll admit I was a bit surprised. But in a sprawling, late-stage bureaucracy, in which only the worst people alive are incentivized to work in government, such self-immolating stupidity is the power of even one committed bugman. Today, that bugman’s name is Scott Wiener, a State Senator and the official architect of SB 1047.

But are California bugmen solely responsible for this first-strike attack against the thinking robots, or is there another figure pulling the strings from the shadows? Then there’s the question of the legislation itself. Is it really beyond the pale? Among people working in the field, casual use of religious language to describe what’s being built is not uncommon. If software engineers are actually creating a synthetic God, more decisive in battle than the most powerful nuclear weapons on the planet, do we really want Him (or Her) open-sourced for any hostile foreign government to clone? And why are any of these questions, on matters so fundamentally new, considered off-limits by attention-starved venture capitalists on social media?

To this final age-old question, I leave speculation to the reader. But I’ve put some thought into the rest. Our story begins with the drama, which exploded early last week —

After his successful vote, Wiener took a victory lap on X, I guess assuming all the AI doomers in his ear would show him some support. But if they did, it was only in their silly little group chats, as industry sentiment shared in public was overwhelmingly and furiously opposed to the bill. By the following morning, Wiener was already walking back his celebration. We have months of sausage-making left in the Assembly before this thing becomes law, he assured his constituents, and fear not: the state is actively discussing the issue with a “range of stakeholders,” including — I’m sure — a wacky cast of self-proclaimed “AI safety experts,” which is a euphemism for racially regressive political activists and “rationalist” members of an AI doomsday cult.

Fantastic.

You can check out Wiener’s full bill here. But to make a long story short: Californians working in AI will soon be blessed with a range of expensive new safety measures, including an evocative “kill switch” mandate, annual compliance check-ins with the state, and a requirement to report every time a project experiences one of the bill’s as-yet-undefined “safety incidents.” Most of this will be run by a new state org, the (I’m sorry to report, extremely cool sounding) “Frontier Model Division.” We don’t yet know who will be running the division, but since the small handful of people who actually understand artificial intelligence are all busy working in one of the few relevant AI labs, it’s probably safe to assume some incredibly tedious charlatan like Gary Marcus will find his way to power.

In terms of damage to the tech community, it looks like huge companies will survive the legislation, while smaller companies will have a more difficult time, and open-source projects, which are generally not centralized and therefore not built to manage this kind of scrutiny, will find the regulations impossible to navigate. In fact, their destruction appears to be the bill’s intention. But why?

Believe it or not, Wiener actually qualifies as a California “moderate,” which means he’s mostly more annoying than he is psychotic. He’s generally known as the YIMBY guy — his least offensive position — and an advocate for building housing basically anywhere he can (a position to which I’d be a little more amenable were the YIMBYs not so dogmatically obsessed with reworking our cities into an endless, lifeless field of Borg-like shipping containers).

But beyond his focus on housing, and despite the fetishistic leather play he encourages in public, Wiener is fundamentally a conservative nanny. His career highlights include a tax on sugary drinks (later blocked by the state), a ban on menthol cigarettes, and, as of last week, legislation that will make it nearly impossible to drive more than 10 MPH over the speed limit. Misguided and obnoxious as he is, this is a man who wants to keep you safe (though notably not from HIV). Is dogmatic AI Safetyism really so far a policy stretch for the over-estrogenized substitute teacher kind of guy who would have felt at home in Demolition Man? Does this ambitious bugman State Senator just really care about x-risk? I’m not convinced.

Consider the following line from SB 1047:

‘Computing cluster’ means a set of machines transitively connected by data center networking of over 100 gigabits per second that has a theoretical maximum computing capacity of at least 10^20 integer or floating-point operations per second and can be used for training artificial intelligence.
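
For a sense of what that definition actually captures in practice, here’s a minimal sketch (mine, not the bill’s, and the per-chip throughput numbers are purely illustrative assumptions) of the threshold test as I read it:

    # Illustrative sketch: would a hypothetical cluster meet SB 1047's
    # "computing cluster" definition? The two thresholds come from the bill
    # text; everything else here is an assumption for illustration.

    THRESHOLD_OPS_PER_SEC = 1e20   # statutory floor: at least 10^20 int/float ops per second
    MIN_INTERCONNECT_GBPS = 100    # statutory floor: networking of over 100 Gbps

    def is_covered_cluster(num_accelerators: int,
                           peak_ops_per_accelerator: float,
                           interconnect_gbps: float) -> bool:
        """One rough reading: machines transitively connected by >100 Gbps
        networking whose theoretical max compute is >= 10^20 ops/sec."""
        theoretical_max = num_accelerators * peak_ops_per_accelerator
        return (interconnect_gbps > MIN_INTERCONNECT_GBPS
                and theoretical_max >= THRESHOLD_OPS_PER_SEC)

    # Assuming roughly 1e15 ops/sec (a petaFLOP) per modern datacenter GPU
    # at low precision (a ballpark, not a spec sheet), you'd need on the
    # order of 100,000 such chips before the state starts counting.
    print(is_covered_cluster(100_000, 1e15, 400))  # True
    print(is_covered_cluster(512, 1e15, 400))      # False

Note the ambiguity a lawyer would love: “theoretical maximum” isn’t pinned to any particular numerical precision, and a chip’s peak throughput can vary by an order of magnitude or more depending on which precision you count.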

Scott Wiener is a career politician. He hasn’t worked in tech a day in his life. He hasn’t even seemed particularly interested in tech beyond the checks cut to him by a handful of industry leaders. There is simply no way in hell he even understands that definition, let alone wrote it. So who did?

SB 1047 was “co-authored by Senators Roth and Stern,” two men who also almost certainly don’t understand the words they allegedly wrote. But it was “sponsored by” the Center for AI Safety, which is run by Dan Hendrycks, an effective altruist, and here the pieces start to snap together. Wiener’s pro-housing politics align him with effective altruists in tech, who generously support YIMBY politicians through organizations such as Dustin Moskovitz’s Open Philanthropy. Dustin, along with most of the effective altruists interested in local housing, is a well-documented AI doomer with associations, if not a smoking-gun link, all over the attempted OpenAI coup, in which Sam Altman nearly lost his job in the name of “AI Safety.”

My sense is Wiener probably took a lot of money from effective altruists who liked his housing positions. Later, they sold him on legislation covering another subject he couldn’t possibly understand, beyond the fact that it meant doing nanny shit, which he enjoys. But altogether, for the bill’s language alone, I think SB 1047 almost has to be a tech-on-tech crime. Which, when you think about it, isn’t really that surprising.

Tension over the nebulous subject of AI safety has grown in San Francisco for years. Now, sentiment for and against regulation seems to be coalescing into two chaotic camps.

First, tech figures in favor of regulation, or apparently in favor: doomer effective altruists who believe a runaway, self-improving artificial general intelligence, or even just a very powerful AI, poses a non-trivial risk of human extinction; racially regressive “Sam Altman is doing genocide” zealots (in the vein of Timnit Gebru), who simply want the government to put them specifically in charge; China hawks who don’t want a powerful foreign adversary to clone our technology, and are therefore open to nuking open-source AI development beyond some ambiguous threshold; and (I’m sorry, none of us were born yesterday) corporate executives making a play for regulatory capture.


Among tech figures against regulation, or at least against it for now, we have: members of the e/acc community (which I’m still pretty sure is mostly just a fun, uplifting meme); open-source libertarians and earnest, thoughtful anarchists (the purest souls of either camp, tech’s True North, our endangered elves of Middle Earth); venture capitalists late to the AI party, still investing in a space that, they increasingly worry, won’t work beyond a few giants; and literal Chinese spies.

With George W. Bush-era Republicans siding with genderqueer Burning Man philosophers, and staunch libertarians siding with Chinese state agents, the conversation has become a rat’s nest of strange bedfellows and conflicting interests.

But, separate from whatever opinion you may or may not have concerning artificial intelligence specifically, SB 1047 appears to be a sharp departure from safety legislation in general.

Until recently, spaces like quantum and cryptography largely lived inside the government, where it’s easy to manage security clearances. But there is no shortage of private companies building in both of these spaces today, and provided they aren’t actively working with the government, they have no requirement of secrecy — which is effectively what an end to open-source AI development would impose. In satellite technology, government applications are classified, much as we see with rocketry and weapons sold to the government. Companies, however, are free to develop whatever they want. In biology, we see a little more precedent for 1047-style regulation, but precautions fall far short of what the EA community (I’m pretty sure!) just worked through the California Senate.

For example, every company or academic lab is overseen by an Institutional Biosafety Committee (IBC), which grants permission for work with recombinant DNA, pathogens, and, in the words of one founder I spoke with, “really anything they feel nervous about.” But institutions shape their own IBC. Labs also undergo biosafety inspections, which seem basically in keeping with SB 1047, but rules tend more toward stickers on lab equipment than proscriptions on what a lab can or can’t work on, and certainly nobody is prohibited from publishing.

Probably the closest comparison to legislation targeting AI is the regulation surrounding nuclear, whether weapons or power, which is generally restricted not by schematics but by materials required to do the work. In America, enriched uranium is illegal to possess without a license. Now, we’re looking at a similar proposal in AI for GPUs, which would sort of be like saying AI labs need a license to work with metal. In any case, just on a gut level, the notion GPUs are in some way equivalent to enriched uranium doesn’t sit right, and restrictions on compute in the name of security would constitute a kind of law we’ve never before seen.

In the pursuit of something truly new, as with artificial intelligence, new requirements in safety should be expected. But in order to justify legislation of a kind that would actively suppress new work in a field, the state should at least offer a compelling argument that artificial intelligence — not just potentially, but right now — is existentially dangerous. And exciting, unworkable thought experiments of a kind popular among EA enthusiasts don’t count.

In terms of AI’s potential, I guess the truth is I still just don’t know what will happen, or where this entire project takes us, and I don’t trust anyone who says they do. But I don’t see any evidence we’re standing on the precipice of waking up an AI monster, and if we are, if all of that evidence has been very cautiously and deliberately concealed, we’re not looking at a threat from open-source models. We’re looking at a threat from one of the major companies presently insulated from the most damaging effects of SB 1047. Because the thing about open source? We can see it.

I understand it’s not a very exciting position, but my sense is we should put off the draconian Luddite overreach for a minute, and proceed with caution. If Cthulhu does begin to rumble, the hysterical AI doomsday cultists will see it coming. And God knows we’ll hear about it.

-SOLANA

