Editor's Note: Pirate Wires began in Miami, a vibrant city in the endless summer of South Florida, the ancestral land of right-wing Cubans and elderly northeastern Jews. Today, we are connected online via a different system: a vast array of servers, cables, and computer devices maintained by human actors. In the United States, much of this infrastructure sits on stolen land acquired under the extractive logic of San Francisco yuppie expansion. As an organization, we recognize this history and uplift the sovereignty of Indigenous people, data, and territory. We commit, beyond symbolic rhetoric, to dismantling all ongoing settler-colonial practices and their material implications on our digital worlds.
Our website www.piratewires.com runs on servers located in unceded native lands (Little Havana).
---
People who talk like this are literally determining government policy on AI, the technology that will either propel our society into a new age of general prosperity or destroy it completely.
The above was published by Data & Society, a liberal NGO dedicated to “the social implications of data and automation [and] producing original research to ground informed, evidence-based public debate about emerging technology.” Here’s a taste: the latest academic article linked on Data & Society’s website accuses Facebook of perpetuating “racial capitalism” because the company has failed to make its content moderation policies as censorious in the third world as they are here in the West.
Data & Society is headed by Janet Haven, who previously spent a decade in a number of director roles at George Soros’ Open Society Foundations. There, according to her Data & Society bio, “she oversaw funding strategies and grant-making related to technology’s role in strengthening civil society and played a substantial role in shaping the field of data and technology governance.” She now also sits on the congressionally mandated National Artificial Intelligence Advisory Committee (NAIAC), which has been advising the National AI Initiative Office (NAIO) and the President on AI issues since its April 2022 inception. NAIAC’s function is to shape AI policy, and the decisions its members make will help determine the future of the technology and — given the revolutionary potential of AI — the future of our civilization itself.
Haven isn’t the only NAIAC member whose career relies on people taking DEI-type issues in AI seriously. Also on the committee is Navrina Singh, founder and CEO of Credo AI, a company selling AI products that purportedly help reduce bias in hiring and healthcare, among other things. As part of her prepared testimony to the House Subcommittee on Research and Technology in September of last year, Singh said:
When data scientists are evaluating whether their AI systems are fair, they look at specific technical measures of bias in their AI systems — to understand if these systems are perpetuating harmful societal biases. There are two primary ways that we measure bias in our AI systems: evaluating parity of performance and parity of outcomes.
Parity of performance is about evaluating whether your ML model performs equally well for all different groups that interact with it. For example, does your facial recognition system detect Black women’s faces at the same or similar accuracy rate that it detects white men’s faces?
Parity of outcomes is about evaluating whether your ML model confers a benefit to different groups at the same rate. For example, does your candidate ranking system recommend Black women get hired at the same or similar rate as it recommends white men?
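In plain terms, the two metrics Singh describes reduce to a few lines of arithmetic. Here is a minimal sketch, with invented predictions and hypothetical group labels (this is not code from Singh’s testimony or from any Credo AI product):

```python
# Toy illustration of the two fairness metrics described above.
# All predictions and group labels are invented.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # ground truth
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])  # model output
group = np.array(["a"] * 5 + ["b"] * 5)             # demographic group

for g in np.unique(group):
    mask = group == g
    # Parity of performance: is the model equally *accurate* per group?
    accuracy = (y_true[mask] == y_pred[mask]).mean()
    # Parity of outcomes: does the model confer the positive outcome
    # (e.g., "recommend for hire") at the same *rate* per group?
    positive_rate = y_pred[mask].mean()
    print(f"group {g}: accuracy={accuracy:.2f}, positive rate={positive_rate:.2f}")
```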
Concern over parity of performance in AI may be reasonable, but concern over parity of outcome leads to some troubling ethical questions. In her congressional testimony, Singh said that a credit risk prediction system that judged black women as less creditworthy than white men would be an example of an unfair parity of outcome. But creditworthiness is usually determined by non-racial factors such as income, debt-to-income ratio, and past credit history, among others. So rectifying this hypothetical unfair parity of outcome would result in black women who are not creditworthy enough qualifying for loans at the expense of anyone who’s more creditworthy. In other words, an AI-generated violation of the Equal Credit Opportunity Act.
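To make this concrete: a toy sketch, with invented score distributions and a hypothetical group-blind cutoff, of what enforcing outcome parity actually requires:

```python
# Invented numbers: two applicant pools whose credit scores differ for
# non-racial reasons (income, debt-to-income ratio, credit history).
import numpy as np

rng = np.random.default_rng(0)
scores_a = rng.normal(700, 50, 100_000)
scores_b = rng.normal(650, 50, 100_000)

cutoff = 690  # one group-blind approval threshold for everyone
rate_a = (scores_a >= cutoff).mean()
rate_b = (scores_b >= cutoff).mean()
print(f"approval rates under a single cutoff: a={rate_a:.0%}, b={rate_b:.0%}")

# Forcing parity of outcomes means lowering group b's cutoff until its
# approval rate matches group a's, i.e., approving lower-scoring
# applicants purely because of group membership.
cutoff_b = np.quantile(scores_b, 1 - rate_a)
print(f"cutoff group b would need: {cutoff_b:.0f} (vs. {cutoff} for group a)")
```

With these invented distributions, matching approval rates means dropping one group’s bar by roughly fifty points, which is exactly the kind of disparate treatment the Equal Credit Opportunity Act forbids.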
In the same testimony, Singh cited what she claimed was a real-world case of a biased AI system assigning different lines of credit to a husband and wife, the wife being granted a much lower line of credit by the AI than the husband. Though she didn’t say so, it’s clear Singh was referring to a 2019 incident in which Apple credit cards, issued by Goldman Sachs, were alleged to discriminate against women. It started when a Danish entrepreneur named David Heinemeier Hansson tweeted that his wife was denied a credit limit increase even though his limit was 20 times hers, despite the fact that she had a higher credit score than he did and the couple filed joint tax returns. Apple co-founder Steve Wozniak responded to Hansson, saying his wife had also been issued a lower credit limit. This led to an outcry and a formal investigation by the New York State Department of Financial Services (NYSDFS). In March 2021, NYSDFS completed its investigation and confirmed what Goldman Sachs had said at the outset of the controversy: there was no bias in its algorithm, and the alleged discrepancies were due to a misconception among married consumers that joint income, rather than individual income, is used in calculating credit limits. In short, the women in question had lower credit limits because they made less money than their husbands.
Nevertheless, more than a year later, Singh repeated this debunked story in prepared testimony to Congress, strategically omitting any mention of Hansson, Apple, or Goldman Sachs. “In this scenario, there is no reason a wife should have less credit than her husband,” she said. But, as everyone who’s applied for a credit card knows, lenders use income — quite reasonably so — in determining credit limits, whether AI is involved or not. And even if, as claimed but not proven, an AI had actually been trained to give equally creditworthy women smaller lines of credit, the obvious solution would be simply to prevent the AI from processing information relating to sex on applications entirely.
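That fix is trivial to express. A minimal sketch, assuming a hypothetical pandas table of application features (all column names and rows invented):

```python
# Hypothetical application data; in this sketch the model simply never
# sees the "sex" column.
import pandas as pd

applications = pd.DataFrame({
    "income": [85_000, 42_000, 120_000],
    "debt_to_income": [0.21, 0.35, 0.18],
    "credit_score": [740, 690, 810],
    "sex": ["F", "M", "F"],
})

# Features the credit model is allowed to train and predict on:
model_inputs = applications.drop(columns=["sex"])
print(model_inputs.columns.tolist())  # ['income', 'debt_to_income', 'credit_score']
```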
DEI-type concerns at NAIAC go all the way to the top. It’s chaired by Miriam Vogel, President and CEO of EqualAI, which calls itself “a movement focused on reducing unconscious bias in the development and use of artificial intelligence” (emphasis added). In addition to Vogel’s chairmanship, two EqualAI board members, Reggie Townsend and Susan Gonzales, have also landed seats at NAIAC, as has EqualAI senior advisor Victoria Espinel. Though fairly bare, EqualAI’s website does offer a downloadable Algorithmic Impact Assessment tool, which it claims will help reduce bias and ensure that AI systems are inclusive. And on its infrequently updated blog is a link to a 2020 post by Vogel warning about how bias in AI could determine who survives Covid-19. She wrote:
We are creating the perfect storm against persons of color and other underrepresented populations. They are the most at risk to contract COVID-19, most likely to lose their job and most vulnerable to biased AI denying them life-saving measures.
Her blog post, like much written by so-called “experts” during peak Covid, is mostly a work of speculative fiction. The one real issue it highlights — as far as I can tell — is facial recognition inaccuracy vis-a-vis minorities. A World Economic Forum whitepaper called “A Blueprint for Equity and Inclusion in AI,” co-authored by Susan Gonzales, also Vogel’s colleague at both NAIAC and EqualAI, highlights it as well. Facial recognition is an area of extreme focus for them, seemingly one of the very few concrete examples of AI-perpetuated discrimination they’ve been able to come up with. This specific problem could probably be fixed simply by training the AI on a larger, more representative dataset (a sketch of that fix follows below), but nobody makes a career as an AI “harms” expert by advocating simple solutions. In fact, the WEF whitepaper gives away the game by directly advocating that DEI people should be at the forefront of product development when it suggests that tech companies adopt “a mentorship programme for staff to be paired with culturally diverse experts in diversity, equity, and inclusion” and “[establish] partnerships with academic, civil society, [and] public sector institutions to embed equitable and inclusive processes into in-house AI capabilities.” In other words, Gonzales proposes a make-work program for the DEI-AI industry.
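To be clear about what “simple” means here: collecting more data is the real fix, but even a cheap stand-in, oversampling the underrepresented group in an existing training set, takes only a few lines. A sketch with invented data:

```python
# Hypothetical sketch: rebalance a face-image training set by
# oversampling whichever group is underrepresented. All data invented.
import pandas as pd

train = pd.DataFrame({
    "image_id": range(10),
    "group": ["a"] * 7 + ["b"] * 3,  # group b is underrepresented
})

target = train["group"].value_counts().max()  # size of the largest group
balanced = pd.concat([
    members.sample(n=target, replace=True, random_state=0)
    for _, members in train.groupby("group")
])
print(balanced["group"].value_counts())  # equal rows per group
```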
The economic self-interest is evident, but unfortunately, it’s worse than that. The worst thing about the DEI-AI people is that they take their fake jobs very seriously: they’re actively building a system by which social engineering becomes a feature in every AI product ever shipped. Democratic administrations have long handed factions of their support base, most notably labor, tokenistic positions on policy committees. But with AI, the Biden administration has given the DEI class complete control. On imaginary pretexts, overblown problems, and debunked stories, their project will use equality of opportunity — with which no American has a problem — as an alibi for equality of outcome: a more female, more ethnically balanced elite. Below it, the suppressed, tiptoeing masses, silenced in the name of promoting “equity” and fighting “racial capitalism,” and shielded from “dangerous misinformation,” which, don’t forget the four magic words, disproportionately affects marginalized communities.
When you hear about “equity and inclusion” in AI, remember that “expert” witness Navrina Singh, for complete lack of supporting evidence, had no choice but to use a debunked story about a discriminatory AI that denied a woman sufficient credit because of her gender rather than her income, a detail about which Singh was conspicuously silent. Remember how they suggested bias in AI would kill minorities during the Covid pandemic by denying them testing, ICU beds, and ventilators, something which never happened. Remember the absolutely moronic digital land acknowledgments. Remember that these are the people shaping AI policy — the centerpiece of a new industrial revolution. Most of all, remember that this is about power, and that they already have a great deal of it.
-River Page