The Industry
--
Earlier this year, Meta pulled the plug on its US fact-checking program. Google now refuses to add fact-checks to Search and YouTube. Nearly a decade of work, hundreds of millions of dollars spent, and thousands of people hired: gone, essentially overnight, and for good reason.
My interviews with 20 trust and safety workers at Meta, TikTok, Snapchat, Google, and other major social media platforms reveal how a cottage industry of fact-checkers ballooned after 2016, bankrolled by tech companies' donations to nonprofit fact-checking networks. This fact-checking regime was not about truth. Instead, it evolved into a desperate stopgap and PR tactic that was doomed from the start.
At one company, inexperienced middle managers with irrelevant degrees were shaping sweeping fact-checking policies. At another, fact-checkers inadvertently overrode company guidelines, hand-selecting content to review based on what they perceived as "harmful" instead of reviewing what was flagged by users.
For nearly a decade, this system was sold to the public as a cure-all for false information. In fact, it was inconsistent, ineffective, and no match for the volume of content it was tasked with reviewing. The real scandal is that platforms spent years pretending this was ever a viable solution.
--
In 2016, after Trump's win and the commentariat's scapegoating of "misinformation," it was easy to think social media giants like Facebook simply began "tapping into" an already well-established fact-checking industry. But the reality is much different. Online fact-checking at the scale attempted during the Biden years wasn't an organic response from an established industry; it was a tech-backed creation. Meta, Google, TikTok, and major philanthropists poured hundreds of millions into a niche fact-checking ecosystem, inflating it into a global content moderation machine. These companies built the entire fact-checking infrastructure from scratch, turning a once-independent practice into a tool for platform governance. By 2023, what began as a PR-driven experiment had become an unsustainable bureaucracy, and by early 2025, the platforms began dismantling the very system they had engineered.
Online fact-checking was always distinct from journalistic fact-checking. The practice began in traditional journalism as a safeguard against error. At newspapers and magazines, traditional fact-checkers make sure articles are accurate before they go to print. References to proofreaders first appeared in U.S. periodicals in the early 1800s; copy editors emerged a century later with the rise of wire services, where space was at a premium. The concept of a fact-checking department was created in the 1920s and '30s.
But online fact-checkers have a very different job: challenging claims after they've been printed, putting themselves directly in the crosshairs of political disputes.
The first variation of online fact-checking emerged in the 1990s with the rise of blogs, which gave people an outlet to comment on mainstream stories and provide evidence with hyperlinks. Blogs offered a democratic and decentralized way for anyone to critique information. In 1995, a husband-and-wife team of "amateur folklorists" with no background in journalism launched the first standalone fact-checking site, Snopes, to counter online rumors, urban legends, and misinformation about political issues. Spinsanity, the first nonpartisan online fact-checking organization focused on US politics, went live in 2001, billing itself as "the nation's leading watchdog of manipulative political rhetoric." It was run by recent college grads.
The first dedicated nonpartisan US fact-checking site staffed by professional journalists, Factcheck.org, launched in 2003, followed by PolitiFact in 2007. In parallel, television pundits, major publications, and politicians themselves began embracing this kind of ex-post fact-checking. The Washington Post introduced a "Fact Check" section before the 2008 election, and in 2012, presidential debate moderator Candy Crowley fact-checked Republican Mitt Romney live, to the pleasure of President Barack Obama. "Fact checks" catapulted from Snopes and Spinsanity into the mainstream lexicon.
The 2016 US Presidential election was a turning point for online fact-checking. In response to public and political pressure after Trump's victory, tech platforms like Meta and Google began injecting millions of dollars into third-party fact-checking efforts and kicked off their own programs, supercharging the influence of the fact-checking process. In 2017, Google rolled out a fact-check feature in search and news results months after the company "faced criticism, along with Facebook, for spreading fake news and offensive misinformation." The accusations, both founded and unfounded, were wide-ranging and global. For example, reports blamed viral WhatsApp messages for mob lynchings that killed 33 people in India; others linked online misinformation in the UK to half a million children going without measles vaccines.
The International Fact-Checking Network (IFCN), a group of 145 independent fact-checking organizations, was established in 2015 with a grant from Omidyar Network as an outgrowth of the Poynter Institute, the longstanding non-profit journalism school. While board leadership at the Poynter Institute and its foundation included representatives from across the media landscape, the IFCN's funding came from technology platforms and philanthropists. Starting in 2016, Meta "contributed more than $100 million to programs supporting fact-checking efforts." In 2017, Google announced a formal partnership with the IFCN and a commitment to expand the number of verified fact-checkers. By 2018, Facebook and Google together had committed well over $500 million to journalism-related projects (including fact-checking, news literacy, and local news support).
The IFCN network's cash infusion is reflected in the rapid proliferation of fact-checking organizations. Duke Reporters' Lab counted 11 fact-checking sites in 2008. By 2023, that figure had jumped to 417, operating across 100 countries and 69 languages.
Tech didn't just fund fact-checking; it built the industry, scaling a bespoke internet-native practice into a global content moderation machine. The IFCN, bankrolled by tech giants and philanthropic cash, ballooned. Between 2016 and 2023, platforms made fact-checking their responsibility. And when it failed, they abandoned it.
But the problem wasn't just size; it was also mismanagement. After leaving Meta, I interviewed 20 trust and safety professionals in hopes of bringing more rigor to these important roles. Responses painted a picture of dysfunction: the people setting the rules often had limited relevant expertise. At one company, fact-checking policies were shaped by a middle manager with a degree in acting, trust and safety employees there told me. Elsewhere, junior PMs led cringeworthy pseudo-philosophical debates about "the meaning of truth" with tenured academics. It backfired when those same academics leaked embarrassing details to the press.
Inadequacies extended into the very language of leadership. One former employee told me, "During my first week, I was introduced to someone who said they were a Diplomacy Manager and boasted about hosting 'bilats' with civil society leaders and academics. I worked at the UN and have only heard that term in diplomatic and government contexts, where officials meet to discuss issues between two countries. There was a lot of posturing."
For many trust and safety workers on the inside, the disconnect between the system's lofty goals and its execution was obvious. Engineers built sophisticated systems and AI classifiers to process billions of posts, but the rules governing them were often left to inexperienced managers or, worse, employees playing politics. One former FBI agent, hired for their decades-long experience defending constitutional principles, put it bluntly: "Our North Star should have been protecting free speech or enlightenment values. Instead, decisions were driven by internal politics or vague corporate mandates. Just because you have [insert company] next to your name doesn't make you an expert in this."
Highly paid tech workers were in the position of trying to "solve" democracy. It was a disaster.
Before the 2020 US Presidential election, which brought us the infamous Hunter Biden laptop fact-check, one platform whose employees I interviewed developed dozens of new rules for how their independent fact-checkers should evaluate content. A critical part of their workflow was deciding which posts to fact-check and which to leave in the queue as-is. As they put it:
I realized fact-checkers were hand-selecting pieces of content based on their potential to do harm, rather than other signals the company wanted them to prioritize, like content filters or user reports. Our content policy lead confirmed the company did not want fact-checkers using potential harm as evaluation criteria. But the policy, business, and product teams didn't anticipate this fact-checker behavior, and it undermined a lot of the complicated rules they put in place. Fact-checkers were given power leadership explicitly was trying to prevent, but the team on the ground couldn't build for this. They were inept.
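The mismatch the employee describes, between the signals leadership wanted prioritized and the harm-based selection fact-checkers actually used, can be made concrete with a toy sketch. Every field name, weight, and threshold below is a hypothetical illustration, not any platform's actual system:

```python
# Toy triage model: which flagged posts reach fact-checkers first?
# All fields, weights, and thresholds are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    user_reports: int       # how many times users flagged the post
    filter_score: float     # 0-1 score from automated content filters
    perceived_harm: float   # 0-1 subjective "harm" judgment

def intended_priority(p: Post) -> float:
    # What leadership wanted: rank by user reports and filter scores.
    return 0.5 * min(p.user_reports / 10, 1.0) + 0.5 * p.filter_score

def observed_priority(p: Post) -> float:
    # What fact-checkers reportedly did: rank by perceived harm.
    return p.perceived_harm

posts = [
    Post("a", user_reports=12, filter_score=0.9, perceived_harm=0.2),
    Post("b", user_reports=0,  filter_score=0.1, perceived_harm=0.95),
]

by_intent = [p.id for p in sorted(posts, key=intended_priority, reverse=True)]
by_harm = [p.id for p in sorted(posts, key=observed_priority, reverse=True)]
print(by_intent)  # ['a', 'b']
print(by_harm)    # ['b', 'a']
```

The two orderings disagree on which post to review first, which is exactly the kind of silent divergence that undermined the platform's carefully designed rules.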
The collapse of social media fact-checking is an admission that top-down truth enforcement doesn't work at internet scale. Employee stories consistently reveal an inability to square sweeping goals with the reality of moderating user-generated content. While technology must always be governed, organizational and political culture shape governing principles, and how those principles play out in practice.
Platforms are now pivoting to new solutions: Community Notes, an experiment in crowdsourced content moderation, has shown early promise. AI-driven moderation tools are also evolving.
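The core idea behind Community Notes is "bridging": a note is surfaced only when raters who usually disagree both find it helpful. The production system infers viewpoints with matrix factorization over rating data; the sketch below replaces that with hand-assigned clusters and made-up thresholds, purely to illustrate the rule:

```python
# Simplified bridging rule: a note is shown only if raters from at
# least two opposing viewpoint clusters independently rate it helpful.
# Cluster labels and thresholds are simplified assumptions; the real
# Community Notes algorithm learns viewpoints via matrix factorization.

def note_is_shown(ratings, min_per_cluster=2, threshold=0.7):
    """ratings: list of (cluster_label, helpful: bool) pairs."""
    clusters = {}
    for cluster, helpful in ratings:
        clusters.setdefault(cluster, []).append(helpful)
    if len(clusters) < 2:
        return False  # no cross-viewpoint signal yet
    for votes in clusters.values():
        if len(votes) < min_per_cluster:
            return False  # not enough raters on this side
        if sum(votes) / len(votes) < threshold:
            return False  # this side doesn't find the note helpful
    return True

# Helpful to both sides: shown.
print(note_is_shown([("left", True), ("left", True),
                     ("right", True), ("right", True)]))   # True
# Helpful to only one side: not shown.
print(note_is_shown([("left", True), ("left", True),
                     ("right", False), ("right", False)])) # False
```

The design choice worth noticing is that one-sided enthusiasm is treated as a negative signal, which is what distinguishes this approach from the top-down review model the article describes.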
Whether these represent a real course correction or another failed experiment remains to be seen. But one thing is clear: if anything resembling truth survives in the AI era, it won't come from censors. It will come from democratic values, diversity of thought, and new systems designed to make sense of the chaos rather than control it.
– Lauren Wagner