The Conflict of Interest at the Heart of CA’s AI Bill

Dan Hendrycks, an executive at a firm that co-sponsored Scott Wiener's AI bill, co-founded an AI safety compliance company that launched on Tuesday
Brandon Gorrell

Subscribe to The Industry

  • A close look into the people and entities who helped draft and advocate for California’s controversial AI safety bill (SB 1047) reveals that one of the bill’s co-sponsors could have a significant conflict of interest
  • Dan Hendrycks, an executive at Center for AI Safety — the firm whose lobbying arm helped draft the bill — co-founded a company called Gray Swan that offers AI safety compliance tools that seem positioned to provide the type of auditing data the bill would require; Gray Swan publicly launched on Tuesday
  • The Center's lobbying arm was set up "partially" because Sen. Scott Wiener contacted them, "and we wanted to have a vehicle that could do more direct policy work," per the lobbying arm's senior policy counsel

--

Dan Hendrycks is the Executive & Research Director at the Center for AI Safety (CAIS), whose lobbying arm co-sponsored California’s controversial AI safety bill. He also co-founded Gray Swan, a company that announced its public launch on Tuesday. CAIS, which is closely associated with Effective Altruism — having received around $10m in grants from its philanthropy arm Open Philanthropy — believes AI poses a risk of human extinction, and Gray Swan is an AI safety compliance company.

A closer look into the connections between Hendrycks, CAIS, SB 1047, and Gray Swan reveals what could be a significant conflict of interest for Hendrycks. It looks something like the following: after Senator Scott Wiener contacted CAIS about AI regulation, CAIS created its lobbying arm, the Center for AI Safety Action Fund (CAIS AF). Then, after Wiener reached out to the Action Fund to co-sponsor the bill, it — in all but name — wrote the bill. As co-founder of AI safety compliance company Gray Swan, Hendrycks — who appears to have been deeply involved in each of the previous steps, as well as being a public advocate for the bill — could stand to benefit financially from the market the bill would create, and gain outsized power over the AI sector by setting best practice safety standards and controlling the mechanism by which those standards are enforced.

SB 1047, which has been criticized as impractical and likely to throttle innovation in the sector (read our backgrounder on the bill), would create a new government agency called the “Frontier Model Division” to regulate AI. Among other requirements, the bill mandates third-party audits of large AI models that assess their safety and potential harms. The bill would allow third parties to perform the audits.

In May, Nathan Calvin, senior policy counsel at the CAIS AF, appeared on the Cognitive Revolution podcast to provide detail on the creation of the Action Fund which, as a co-sponsor, helped craft SB 1047. He said:

The [Center for AI Safety Action Fund] was created partially because we were getting lots of inquiries from policymakers, including Senator Wiener, and we wanted to have a vehicle that could do more direct policy work, which is something that 501(c)(3)s aren’t able to do…
Senator Wiener put out a [bill of intent] mid-last year, talking about being interested in doing a bill on these issues, and he approached us and the other co-sponsors for help fleshing that out.

When the show’s co-host Nathan Labenz asked Calvin, “When you refer to the ‘author’ of the bill, is that referring to state senator Wiener?,” Calvin responded, “We provided technical advising, but ultimately Wiener and his staff made the final call on all of the inclusions and the direction of the bill.”


Whether or not that’s an issue of semantics, Hendrycks clearly has a deep understanding of 1047. Shortly after it was introduced, Hendrycks posted a detailed thread explaining and defending its provisions, and linking to a page paid for by CAIS’ lobbying arm that encourages visitors to support 1047 by contacting their state legislators. On July 2, he advocated for the bill in testimony to the CA state Judiciary Committee.

In parallel to his involvement with the bill, Gray Swan, where Hendrycks is co-founder and Chief Strategy Advisor, worked in stealth mode to create AI safety tools that may be positioned to capture a portion of the compliance market SB 1047 would create. On Tuesday, Gray Swan came out of stealth to announce its public launch. Its first two products — Shade and Cygnet — could be well-positioned for a SB 1047 regulatory environment:

  • Shade seems designed to provide the data for audits that would be mandated by SB 1047. According to the company’s press release and product page, Shade is “a comprehensive AI evaluation... tool” that “gives companies the ability to continually assess the risks and vulnerabilities of their AI components,” and offers “detailed reports that offer a clear picture of [an] AI’s vulnerability and resilience under a range of stressors.” Shade will also "[ensure] regulatory compliance."
  • Cygnet is a Llama-3-based custom LLM engineered for “maximal safety,” to be “more resilient to powerful forms of attack than existing [state-of-the-art] LLMs.” Gray Swan is already offering paid API access to Cygnet, which puts it in competition with enterprise versions of major LLMs, such as GPT. In the context of 1047, Cygnet’s competitive advantage over GPT and others could be that it’s the only LLM made by a company whose co-founder helped draft the legislation with which potential customers will need to comply.

On Wednesday, Hendrycks posted, “While 1047 requires external audits, they are more PWC [PricewaterhouseCoopers] type audits (did you actually do the stuff you said you did) than other type audits (was your testing good enough?),” in response to suggestions he has the conflicts of interest outlined above.

“[It] is not the intention or the plan for Gray Swan AI [to offer the kinds of audits that SB 1047 mandates],” Hendrycks said in another reply. “As an advisor I am not involved in any such efforts at Gray Swan AI nor am I involved in other efforts that would offer SB 1047-relevant auditing capabilities.”

These comments are unclear at best, disingenuous at worst. A firm like PwC would provide auditing services, but only on the basis of data that would first need to be collected by a tool like Shade. In other words, Gray Swan wouldn’t be the independent auditor, but it could offer the kinds of services companies would look to include in the safety and security protocols SB 1047 requires.

Also, Gray Swan’s website lists Hendrycks as “Co-founder & Chief Strategy Advisor,” not simply an “advisor,” as he calls himself in his post. And Hendrycks doesn’t actually deny Gray Swan will offer the kinds of audits the bill mandates, only that it’s not the company’s intent or plan to do so. Oddly, he adds that he is not involved in “any such efforts” at the company.

Gray Swan has already won a ~£129,000 contract with the UK government, under which the company will provide “robust safeguards and offensive cyber capability measurement.” Arguably, this already puts the company in a good position — close to regulators — to sponsor similar legislation in the UK, and to capitalize on a market for similar AI compliance should relevant regulation pass there.

With Gray Swan and SB 1047, will Hendrycks financially benefit from a market that he worked with regulators to create? In such a market, his company might be seen as an attractive partner to both regulators and potential customers, because its co-founder helped draft the regulation that governs it. And if his company's compliance tools and assessments set best practices for AI safety, Gray Swan — and Hendrycks — could gain enormous power over the sector itself by way of having created the standards by which AI models are judged safe, and the mechanisms by which to enforce those standards.

— Brandon Gorrell

Editor's note #1: article updated to correct an error that said the bill would allow the Frontier Model Division to designate third-party auditors. The bill allows third-party audits and compliance partners.

Editor's note #2: Nathan Calvin, Senior Policy Counsel of CAIS AF told us CAIS AF was created after Wiener "approached us about being a co-sponsor." Per above, in a May podcast appearance, Calvin said the CAIS AF was "created partially because we were getting lots of inquiries from policymakers, including Senator Wiener, and we wanted to have a vehicle that could do more direct policy work." We've updated the piece to clarify this additional step in the sequence of Wiener's communication with CAIS AF: Wiener contacted the CAIS, then it set up the Action Fund, then Wiener approached the Action Fund about being a co-sponsor.

