America is Sleepwalking Into a Permanent DEI Bureaucracy That Regulates AI

Regardless of who’s president, America still risks stumbling into an AI policy regime that’s worse than the one in the European Union — here’s how
Dean W. Ball

  • Even though Trump has vowed to rescind Biden’s AI executive order, an onslaught of state bills around AI is coming
  • These bills will use the AI Risk Management Framework (RMF), an outdated and little-known document from the National Institute of Standards and Technology (NIST), written before the release of ChatGPT
  • This document is overly focused on DEI, calling for never-ending DEI-style impact assessments to “mitigate” the “harms” of bias, which it describes as “omnipresent” in the “design, implementation, operation, and maintenance” of any AI system
  • Without serious action, state-level bills on AI will have America sleepwalking into an arcane bureaucratic regime that imposes broad government intervention with DEI-style policies on the AI sector

--

On paper, the United States seems to be adopting a “light-touch” approach to AI regulation. Congress has passed no major AI-related legislation since 2020. President Biden’s Executive Order (EO) on AI introduces almost no regulations, save a modest reporting requirement on companies training the very largest models. California’s SB 1047 — the most ambitious AI regulation proposed in America to date — was vetoed by Gavin Newsom. And now, with President Trump’s election (and the GOP’s promise to repeal even the Biden EO) alongside a commanding Republican mandate in Congress, it seems as though America is on the cusp of a golden age of AI innovation.

Yet just beneath the surface, an altogether different story is unfolding. Without a single law passed by Congress, without a dollar of federal funding appropriated, a sweeping regulatory regime for AI is being quietly assembled, going as far as to create from scratch a quasi-regulator for both the development and use of AI. It seeks to regulate AI using principles familiar to anyone who’s observed the modern left: “mitigating systemic bias” and “disparate impact,” ensuring “equity,” and — of course — creating mandates for the “responsible” and “ethical” use of AI.

It is a regime not wholly dissimilar from the European Union’s much-bemoaned AI Act. Except there are no celebratory tweets from regulators, and no public debate to scrutinize. Instead, it’s composed of a maze of decidedly social media-unfriendly guidance documents from federal agencies, NGO-led “workshops,” and state government legislation. A splashy bill from a well-known legislator like Scott Wiener — right in the AI industry’s backyard — may garner attention on social media, but Office of Management and Budget guidance does not. Because of this, many fundamental pieces of this new regime are already in place.

Without serious action — and to an extent, regardless of the impending Republican sweep of the White House and Congress — America risks sleepwalking into a regime that permits broad government intervention into not just AI model development, but into how AI is used by businesses throughout the economy. If you thought the DEI-inflected government actions of the last decade were intrusive, you haven’t seen anything yet. This time, the goal is to get such policies in on the ground floor of the next industrial revolution.

--

The story begins with an anodyne, even positive, bill passed by Congress in the closing days of the Trump administration — the National Artificial Intelligence Initiative Act (NAIIA). This bill did many things, but for our purposes, its most important mandate was to direct the National Institute of Standards and Technology (NIST) — the agency that creates voluntary technical standards for everything from steel to quantum-proof encryption — to develop “a risk mitigation framework for deploying artificial intelligence systems.”

Given that the NAIIA became law in January 2021 — just days before President Trump left office — its implementation was left up to the Biden administration, which, two years later, published NIST’s AI Risk Management Framework (RMF). While the document was published a few months after the release of the first version of ChatGPT and other generative AI systems, it was written before those systems were publicly available (the administration’s first draft of the RMF was released in March 2022; ChatGPT was released in November 2022).

Unsurprisingly, then, the RMF is unsuited to the generalist language models that predominate in AI discussions today (it is worth noting that there are NIST risk management guidelines for generative AI, but they simply apply the principles of the RMF). Generative AI is mentioned only twice in this 42-page document, and many of the risks it envisions are from the era of “narrow” machine learning systems — models used to make predictions about whether a loan applicant will repay their loan, or computer vision models trained to identify signs of parasites in produce. It’s also replete with the left’s pre-vibe-shift policy priorities: sustainability, corporate social responsibility, and, most of all, identity politics-based equity.

These two themes — documents that are out of date before they are even published, and Woke-Era Democratic Party policy priorities — will appear again and again in our story.

The RMF’s goal is to help both developers and deployers of AI systems mitigate the “impacts” and “consequences” of AI, in general, for “individuals, groups, organizations, society, the environment, and the planet.” It’s an abstract guide for mitigating harms to all conceivable parties from all conceivable uses of a technology whose exact contours are far from understood (and were even less so a few years ago).

As one can imagine, this makes the document broad to the point of uselessness. For example, one of hundreds of the RMF’s recommendations is engaging with “trade associations, standards developing organizations, researchers, advocacy groups, environmental groups, civil society organizations, end users, and potentially impacted individuals and communities” to, among other things, “promote discussion of the tradeoffs needed to balance societal values and priorities related to civil liberties and rights, equity, the environment and the planet, and the economy.” So, talk to everyone about everything that could possibly go wrong with your AI model. Sounds time-consuming, does it not?

Then there is mitigating AI bias — or in the RMF’s phraseology, ensuring fairness “with harmful bias managed.” This means more than simply avoiding overt racism (setting aside how we might define that term in this context). “Systems in which harmful biases are mitigated are not necessarily fair,” says the RMF, because they may still “exacerbate existing disparities” (emphasis added). The RMF then taxonomizes bias: there are systemic biases (“the organizational norms, practices, and processes across the AI lifecycle, and the broader society that uses AI systems”), computational or statistical biases (basically, biased data), and human-cognitive biases (“how an individual or group perceives AI system information to make a decision or fill in missing information, or how humans think about purposes and functions of an AI system”). The latter, by the way, are “omnipresent” in the “design, implementation, operation, and maintenance” of any AI system — though all three “can occur in the absence of prejudice, partiality, or discriminatory intent.”

In short, the RMF lays out a staggeringly broad range of work for both AI developers and deployers in service of a progressive cultural agenda that is all-too-familiar to anyone who’s lived through the past decade of American politics.

“So what?” a skeptic might respond. A government agency published a vague and mealy-mouthed “framework.” It happens all the time, and this one is voluntary. What could be the harm? Unfortunately, the reality is that this document and the mentality that produced it are at the heart of American AI policy today. Viewed in that light, the RMF’s vagueness takes on an almost sinister undertone — it’s a document so broad it’s impossible to comply with. There will always be nits to pick and, with the right laws in place, lawsuits to file (during the drafting of this essay, the Electronic Privacy Information Center, a privacy NGO, filed a Federal Trade Commission complaint against OpenAI for, among other things, not complying with the RMF).

--

Ten months after the release of the RMF came President Biden’s EO on AI, which, in addition to creating modest reporting requirements for frontier model developers, attempts to coordinate all facets of the federal government to consider how they will regulate AI. It mentions the RMF half a dozen times, for example by directing the Secretary of Homeland Security to tell “owners of critical infrastructure” (a term of art comprising healthcare and financial services institutions, telecommunications providers, apartment and office buildings, malls, casinos, and much else) how they should incorporate the RMF. The RMF is cited repeatedly as an example to guide both an agency’s own use of AI and that agency’s regulation of private-sector AI.

One of the many tasks assigned to the federal bureaucracy by the Biden EO was setting rules for how federal agencies should use AI. This weighty job fell to the White House’s Office of Management and Budget (OMB) and took the form of a memo called “M-24-10.” The document covers all parts of the federal bureaucracy except the military and intelligence community, and just as one might expect, it takes a broad approach. Each agency use of AI that is either “rights-impacting” or “safety-impacting” (terms whose definitions comprise two-and-a-half single-spaced pages) requires an “AI impact assessment” — in short, a mountain of compliance work.

If a government agency wants to procure and deploy an AI system that can, say, perform facial recognition at airports, perhaps this tedious process makes some sense. But when you consider all the ways that a government agency might use a language model to improve its internal and external operations — and all the ways each of those uses could conceivably impact “rights” or “safety” according to the OMB’s broad definitions — the amount of paperwork becomes mind-boggling. And this is to say nothing of the fact that these rules also require constant monitoring, repeating the impact studies periodically, and redoing the studies whenever the underlying AI models are updated.

Indeed, the entire notion of an “algorithmic impact assessment” emerged in the mid-2010s, from some combination of legal and policy scholarship, advocacy organizations, and the drafting of Europe’s General Data Protection Regulation (GDPR), which required “data protection impact assessments.” Some of the seminal scholarship proposing the idea of these assessments explicitly cites the National Environmental Policy Act (NEPA), the US law which famously mires new federally funded construction projects in years, sometimes decades, of “environmental impact studies” and related litigation. The EU’s AI Act similarly mandates algorithmic impact assessments for a wide range of use cases of AI. What is most important is this: all of these ideas were developed before the rise of general-purpose AI systems. But whether algorithmic impact assessments and the like are appropriate policy tools for general-purpose AI doesn’t matter to the bureaucrats: this is the idea the industry of regulators, consultants, lawyers, lobbyists, auditors, and advocates agreed to, and that industry does not turn on a dime.

In addition to the OMB’s compliance demands, federal agency employees are vaguely directed to “incorporate, as appropriate, additional best practices” from the RMF and from the AI Bill of Rights (another Biden administration doozy, also focused on the maximally broad application of civil rights legal frameworks to AI development and use). This highlights how the RMF is deployed, by other bureaucrats, as a kind of passive-aggressive cudgel. “We told you to use the RMF as appropriate,” one can imagine a bureaucrat saying — ignoring the fact that the RMF directs its readers to do approximately every possible thing to avoid every kind of conceivable risk from AI. Taken together, these documents create a culture obsessed with the risks of AI — particularly the risks to civil rights-protected demographic groups — before even understanding how to use AI, much less understanding the benefits of AI.

Setting aside the question of whether this sets up government agencies for anything approximating success in their efforts to increase government efficiency using AI (it does not), the OMB guidance has a more troubling potential use. Current laws give the government the ability to impose the OMB’s rules on all government contractors (which would include, incidentally, OpenAI and Google, if not also Anthropic and Meta), as well as all recipients of federal grants, loans, and other financial support — organizations that in total employ easily over 20% of the American workforce. And the president can simply tell the OMB to do precisely this; no Congressional authorization is needed. A report by the influential center-left think tank Center for American Progress even recommends applying these exact OMB rules in this way.

--

Policy items like the example above would almost certainly have been on the agenda of a potential Harris administration (alongside countless other AI-related administrative actions, to be sure). But of course, Kamala Harris didn’t win, and overbroad regulations like these are precisely the kind Donald Trump and J.D. Vance have railed against. Unfortunately, we’re not in the clear: though DC policy wonks sometimes seem to forget it, there are fifty state governments, too.

Many readers will be familiar with SB 1047, California State Senator Scott Wiener’s ambitious bill to regulate against catastrophic risks from AI. The bill would have applied to large and expensive-to-train models like the next-generation versions of OpenAI’s GPT series, Anthropic’s Claude, or Meta’s Llama.

But other bills flew under the radar. Chief among them was a trio of bills from Virginia, Connecticut, and Colorado.

Though the bills differed from one another, they shared a remarkably similar structure: when a “deployer” (an EU-inflected term meaning “a corporate user of AI”) or a developer makes or uses an AI system for “consequential decisions” (decisions that “materially” affect access to things like financial services, housing, employment, healthcare, legal services, insurance, and more), they must write long algorithmic impact assessments designed to assess and mitigate the possibility that their AI could ever have a differential negative impact on any civil rights-protected demographic group.

Each bill also requires deployers of “high-risk” (another EU similarity) AI systems to implement a “risk management framework.” And the explicit standard for this framework is — you guessed it — the RMF. Sometimes, this remarkably broad framework is described as a minimum standard for compliance.

The Virginia and Connecticut bills didn’t become law, but Colorado’s version — SB 205 — did, although even Governor Jared Polis admitted he signed it “with reservations” about its “complex compliance regime.”

Even the language of these bills is suspiciously similar, sometimes with word-for-word matches between provisions. Clearly, some degree of communication was happening here. It turns out the authors of each bill had been leading members of something called the Multistate AI Policymaker Working Group, convened by the Future of Privacy Forum (FPF). FPF is a DC-based nonprofit that advocates for privacy legislation, with corporate membership representing a large swath of America’s most powerful corporations (all of Big Tech, Anthropic, and OpenAI are members).

We know little about which lobbyists, scholars, activists, law firms, consultants, auditors, and other “stakeholders” were invited to participate in this working group alongside the state legislators. But we do know the intent: to get enough states to pass similar laws to do an end-run around any federal AI policy efforts. And we know the other state legislators leading this working group.

One of them, Representative Giovanni Capriglione, a Republican from Texas, has already shared a draft of a bill he intends to introduce early next year: the Texas Responsible AI Governance Act (TRAIGA). TRAIGA even borrows more directly from the EU’s AI Act, identifying a list of “prohibited uses” of AI, some of which seem to ban many or all current language models from the market (this is likely the result of sloppy drafting rather than legislative intent).

Other members of this working group hail from Florida, Minnesota, New York, California, Maryland, and Alaska. And there’s no reason the failed bills from Connecticut and Virginia won’t make a second appearance. There’s an onslaught of state bills coming, likely regardless of who is president. All of them use ideas that started showing their age years ago — if they ever made sense at all. And all of them use the ostensibly “voluntary” NIST standards as a key part of their foundation.

--

None of these government actions are sexy. None of them are about Skynet, or killer robots, or exotic bioweapons. Instead, they represent perhaps the ugliest thing American domestic statecraft has to offer: the immune system of the status quo — not just the bureaucracy, but the attendant mix of lobbyists, lawyers, compliance experts, consultants, and auditors that undergird and profit from the bureaucracy — seeking to devour a powerful new general-purpose technology. The same ideas that make it near-impossible to build new things in the physical world, coming now for the digital. And it is, at least for now, a bipartisan effort.

In some ways, this is simply the result of bureaucratic momentum. Many of the ideas in these documents predate generalist AI models, and the machine that is known as “the policymaking community” moves far too slowly to keep up with the AI industry. The identity politics-inflected priorities, too, seem far more threadbare today than they would have even two years ago, when they were first conceived. But no matter: the bureaucrats have settled on their frameworks, and so now, the ship barrels ahead, engines at full tilt.

But in other ways, this is something more base: a power grab, plain and simple. A circuitous process like this, far more than a singular bill that draws the world’s attention, is how you assert control over the most promising emerging technology in a generation. You do it before it’s popular, before people or businesses will notice too much. You do it quietly, behind closed doors in working groups and workshops and steering committees with trays of stale brownies lining the wall and a hotel-branded water bottle at every seat. You do it with the active participation of every “stakeholder” you can think of — except for the startups too new to earn a seat at the table, or the ones that haven’t even been founded yet. You do it through endless paragraphs of meandering gobbledygook, through flow charts and Venn Diagrams. This is how the technocracy rolls. This is how you kill an industrial revolution.

— Dean W. Ball
