California’s Controversial AI Bill: A Cheat Sheet
Jul 17
SB 1047 is inching closer to the legislative finish line, and will likely pass easily through the chamber; the only remaining question is whether Newsom will veto it.
Sanjana Friedman
--
Dan Hendrycks is the Executive & Research Director at the Center for AI Safety (CAIS), whose lobbying arm co-sponsored California’s controversial AI safety bill. He also co-founded Gray Swan, an AI safety compliance company that announced its public launch on Tuesday. CAIS, which is closely associated with Effective Altruism, having received around $10 million in grants from the movement’s philanthropic arm Open Philanthropy, believes AI poses a risk of human extinction.
A closer look at the connections between Hendrycks, CAIS, SB 1047, and Gray Swan reveals what could be a significant conflict of interest for Hendrycks. The sequence runs roughly as follows: after Senator Scott Wiener contacted CAIS about AI regulation, CAIS created a lobbying arm, the Center for AI Safety Action Fund (CAIS AF). Then, after Wiener asked the Action Fund to co-sponsor the bill, it wrote the bill in all but name. As co-founder of the AI safety compliance company Gray Swan, Hendrycks, who appears to have been deeply involved in each of these steps and has publicly advocated for the bill, could stand to benefit financially from the market the bill would create, and could gain outsized power over the AI sector by setting best-practice safety standards and controlling the mechanism by which those standards are enforced.
SB 1047, which has been criticized as impractical and likely to throttle innovation in the sector (read our backgrounder on the bill), would create a new government agency called the “Frontier Model Division” to regulate AI. Among other requirements, the bill mandates audits of large AI models to assess their safety and potential harms, and allows third parties to perform those audits.
In May, Nathan Calvin, senior policy counsel at CAIS AF, appeared on The Cognitive Revolution podcast to provide detail on the creation of the Action Fund, which, as a co-sponsor, helped craft SB 1047. He said:
The [Center for AI Safety Action Fund] was created partially because we were getting lots of inquiries from policymakers, including Senator Wiener, and we wanted to have a vehicle that could do more direct policy work, which is something that 501(c)(3)s aren’t able to do…
Senator Wiener put out a [bill of intent] mid-last year, talking about being interested in doing a bill on these issues, and he approached us and the other co-sponsors for help fleshing that out.
When the show’s co-host Nathan Labenz asked Calvin, “When you refer to the ‘author’ of the bill, is that referring to state senator Wiener?,” Calvin responded, “We provided technical advising, but ultimately Wiener and his staff made the final call on all of the inclusions and the direction of the bill.”
Whether or not that’s an issue of semantics, Hendrycks clearly has a deep understanding of SB 1047. Shortly after it was introduced, Hendrycks posted a detailed thread explaining and defending its provisions, and linking to a page, paid for by CAIS’ lobbying arm, that encourages visitors to support the bill by contacting their state legislators. On July 2, he advocated for the bill in testimony before the California State Senate Judiciary Committee.
In parallel with his involvement in the bill, Gray Swan, where Hendrycks is co-founder and Chief Strategy Advisor, worked in stealth mode to create AI safety tools that may be positioned to capture a portion of the compliance market SB 1047 would create. On Tuesday, Gray Swan came out of stealth and announced its public launch. Its first two products, Shade and Cygnet, could be well-positioned for an SB 1047 regulatory environment.
On Wednesday, Hendrycks posted, “While 1047 requires external audits, they are more PWC [PricewaterhouseCoopers] type audits (did you actually do the stuff you said you did) than other type audits (was your testing good enough?),” in response to suggestions he has the conflicts of interest outlined above.
“[It] is not the intention or the plan for Gray Swan AI [to offer the kinds of audits that SB 1047 mandates],” Hendrycks said in another reply. “As an advisor I am not involved in any such efforts at Gray Swan AI nor am I involved in other efforts that would offer SB 1047-relevant auditing capabilities.”
These comments are unclear at best, disingenuous at worst. A firm like PwC would provide auditing services, but only on the basis of data that would first need to be collected by a tool like Shade. In other words, Gray Swan wouldn’t be the independent auditor, but it could offer the kinds of services companies would look to include in the safety and security protocols SB 1047 requires.
Also, Gray Swan’s website lists him as “Co-founder & Chief Strategy Advisor,” not simply an “advisor,” as he calls himself in his post. And Hendrycks doesn’t actually deny Gray Swan will offer the kinds of audits the bill mandates, only that it’s not the company’s intent or plan to do that. Oddly, he adds he is not involved in “any such efforts” at the company.
Gray Swan has already won a roughly £129,000 contract with the UK government, under which the company will provide “robust safeguards and offensive cyber capability measurement.” Arguably, this already puts the company in a good position, close to regulators, to push for similar legislation in the UK, and to capitalize on a market for similar AI compliance should relevant regulation pass there.
With Gray Swan and SB 1047, will Hendrycks financially benefit from a market that he worked with regulators to create? In such a market, his company might be seen as an attractive partner to both regulators and potential customers, because its co-founder helped draft the regulation that governs it. And if his company's compliance tools and assessments set best practices for AI safety, Gray Swan — and Hendrycks — could gain enormous power over the sector itself by way of having created the standards by which AI models are judged safe, and the mechanisms by which to enforce those standards.
— Brandon Gorrell
Editor's note #1: article updated to correct an error that said the bill would allow the Frontier Model Division to designate third-party auditors. The bill allows third-party audits and compliance partners.
Editor's note #2: Nathan Calvin, Senior Policy Counsel of CAIS AF told us CAIS AF was created after Wiener "approached us about being a co-sponsor." Per above, in a May podcast appearance, Calvin said the CAIS AF was "created partially because we were getting lots of inquiries from policymakers, including Senator Wiener, and we wanted to have a vehicle that could do more direct policy work." We've updated the piece to clarify this additional step in the sequence of Wiener's communication with CAIS AF: Wiener contacted the CAIS, then it set up the Action Fund, then Wiener approached the Action Fund about being a co-sponsor.