Welcome back to the Pirate Wires weekly digest. Every week, we share a brief lead story at the crossroads of technology, politics, and culture, followed by a storm of links to catch you up on everything that’s happening. Subscribe, or die.
---
The first rule of AI safety is if it’s not a communist we’re all gonna die (women and minorities hit hardest). You probably didn’t realize Microsoft fired its entire AI “ethics and society” team on account of an AI “ethics and society” team is not a serious thing, and its departure doesn’t matter. But fyi they’re gone, and the tech press is not happy. Despite hysterics from the class of people voted ‘Most Likely to Suppress the Hunter Biden Laptop Story’ at their high school prom, not much has been reported about what Microsoft’s “ethics and society” team actually did, though we do know they published a “Responsible Innovation Practices Toolkit” designed to produce politically correct AI. The kit included a card game called Judgment Call, which was intended as a “safe space” “to cultivate empathy”; an exercise called “Harms Modeling”; and the positively Orwellian “Community Jury,” which was meant to “represent the diversity of the community the technology will serve and consider factors like age, gender identity, privacy index, introversion, race, ethnicity, and speech, vision, or mobility challenges.”
How a card game was expected to prevent AI catastrophe remains unclear, though I imagine people at Microsoft actually working on AI were supposed to play, and then intuit the political lessons? I don’t know. Who knows. But the “Community Jury” was more straightforward. The intention of the jury was to create a committee charged with ambiguous executive authority, pack the committee with a bunch of committed political leftists superficially differentiated by things like skin color, gender, or one would have an eye patch, maybe — thus constituting “diversity” — and then use the committee to blackmail Microsoft executives into producing a leftist AI (“if you don’t give us partisan political censorship, you are harming employees with eye patches” etc.). Presumably, if Microsoft failed to act as the group wanted, the group would leak their recommendations to the press, as in-house political organizations of this kind have leaked to the press for years.
Long story short, the “ethics and society” team was disbanded, we’re assuming because they never really did anything, and then they went to the press.
One of my favorite things about media coverage of artificial intelligence is the improbable fence journalists who hate the industry have to straddle — it’s all hype, in the first place, but it’s also replacing all human labor, which is weird because it’s basically just a chatbot, and who even cares about a dumb little chatbot?, but holy shit this chatbot just said it’s in love with me this is so dangerous. Then, on the question of speech: no, the average American, who spent the last seven years ducking the authoritarian whims of a sprawling censorship apparatus run by a small handful of the most influential corporate executives in history, doesn’t have to worry about censorship. We do however need to worry about “AI risk,” which we’re defining here as “AI that doesn’t do enough censorship.”
This all brings us, gloriously, to Casey Newton’s Platformer “coverage,” produced with Zoë Schiffer, who he hired after a couple years of groundbreaking Apple “reporting” (she knew a few of the team’s in-house political activists, and simply wrote whatever they asked her to). But let’s take a look at their “ethics and society” piece.
Microsoft, the Platformer reports, fired its “entire” “ethics and society” team. As “ethics” and “society” are both considered good things, firing these people was obviously very bad. After all, everyone knows it’s impossible for an organization to behave in an ethical manner without an “ethics” team. Look at Casey, for example. He hasn’t hired a single ethicist. Is it any wonder he sourced an entire piece about a firing from — it sure does look like! — a couple people who were just fired?
Anyway, what did these people do?
“Our job,” the source relayed, “was to show them and to create rules in areas where there were none.”
Their job was to “create rules.” Amazing. We love rules. But what rules specifically? Who knows, reports Casey. Who cares! Good rules, probably. Have I told you yet that these people were on the AI “ethics and society” team? It’s important not to look at this too closely. The rule people are always the good guys. There have famously never been any bad rules, or bad people who make rules.
“In 2020,” Casey reminds us, “Google fired ethical AI researcher Timnit Gebru,” which in the first place didn’t happen. Timnit refused the request of a manager to retract a paper that reflected poorly on the company, and threatened resignation if Google didn’t meet a list of her demands. Then, in what was perhaps the first boss move in Google’s history, the company simply accepted the crazy person’s resignation. Predictably, the Platformer frames Timnit’s departure as very bad, rather than deeply funny, further noting “the resulting furor resulted in the departures of several more top leaders within the department,” as if there were any such thing as a “top ethical AI researcher,” and getting rid of these people wasn’t also awesome. But these moves “diminished the company’s credibility on responsible AI issues,” the Platformer finally declares, before straight-up moving on as if that statement doesn’t need a citation.
Diminished credibility among whom? Journalists for the Platformer, including Casey Newton, last seen publicly announcing his saddened departure from Twitter on account of Elon was a Big Meany before not leaving, and a propagandist for Apple’s in-house political zealots? I’m honestly not even mad at these people, I’m just laughing.
After dodging the question of what Microsoft’s “ethics and society” team actually did for a thousand words or so, the Platformer finally lands on something I am charitably characterizing as ‘warned about potential copyright infringement’ in the context of generative AI. It’s true, copyright law is a very serious legal issue. Do you know who usually deals with very serious legal issues at a very large company? The very large company’s very large team of actual lawyers. Microsoft has over 1,000 of them. But go off, Anastasia from “ethics and inclusion” or whatever you’re calling your made-up department today, I’m sure your opinion is just as valuable as the trained professionals down the hall.
“The conflict underscores an ongoing tension for tech giants that build divisions dedicated to making their products more socially responsible,” Casey concludes.
It doesn’t underscore anything, of course, because there’s no conflict. Nobody cared when the pointless team that sounded nice but never worked was formed in a bull market, and nobody cares now that the pointless team that sounded nice but never worked has been laid off in a bear market. AI is still happening, and the risks remain as real as the rewards. None of this has anything to do with a card game. But congrats on your clicks, Casey.
Last week, while the world was focused on the question of whether or not venture capitalists should be euthanized (discussed at length in last Friday’s wire, which you should check out today before we lock it to paying subscribers), OpenAI launched GPT-4. And the kids went wild.
Reporting for Pirate Wires, Brandon Scott Gorrell has been following the launch closely.
NOTE: TikTok’s CEO Shou Zi Chew will be speaking before the House Energy and Commerce Committee this Thursday, March 23. I will of course be dispatching live from the circus online.
Philadelphia city council candidate proposes drone cops. The plan calls for two patrolling drones per police district, and aims to free up officers to respond to higher priority calls. (Axios)
Small drone manufacturer circumvents red tape to secure defense contract. The CEO of an LED lighting company that recently acquired a small drone manufacturer knew he had no chance to get the Pentagon’s attention on his own. So, working through a defense industry connection, he flew to Europe and personally demonstrated his company’s drone tech to the commander in chief of Ukraine’s military. Ukraine asked the Pentagon for the drones, and voila – the Pentagon ordered 1,000 of them. (WSJ)
Until next week.
– Solana