Censorship for Profit

Inside the quietly growing industry of companies dedicated to fighting “misinformation,” “disinformation,” and “narrative attacks,” whose primary customers are Western governments and global brands
Ashley Rindsberg

  • Demand for countering misinformation has exploded since 2016, with startups having raised over $300 million, often with governments as their first and primary customers
  • The company NewsGuard, with over $21 million in funding, pressures advertisers, as well as third-party vendors, to blacklist outlets it deems untrustworthy
  • Blackbird.AI, which last year raised a $20 million Series B, says it protects 2,000 companies and “national security organizations” from “narrative attacks created by misinformation and disinformation”
  • The company Storyzy provides the UK government with “round-the-clock monitoring” of major social platforms, and will “track and identify disinformation trends and false actors”
  • There is little — if any — data showing a causal effect between misinformation and altered electoral outcomes

--

When Telegram founder Pavel Durov was arrested by French authorities at the end of August, the move came as a shock. But among the few who could (or, at least, should) have seen the arrest coming is Durov himself. For months, European Union rhetoric around the platform had been ramping up, especially around the June 2024 EU elections when officials began raising the alarm about being “flooded” with disinformation.

While the warning applied to all major platforms, Telegram received the kind of attention no company wants. A month before the election, the EU began investigating whether Telegram met the user threshold to be designated a “very large online platform” under the Digital Services Act (Pirate Wires explainer here), which took full effect last February. “Disinformation is spreading openly and completely unchecked on Telegram,” the prime minister of Estonia said in May. Her chief complaint — and, she said, that of “other [EU] member states” — was not that disinformation exists on the platform, but that Telegram refused to police it.

That, however, does not mean the content has gone un-policed. Over the past decade, a market segment has emerged to meet the demand from governments and, increasingly, brands for tools that can monitor, label, track and remove what some consider misinformation, disinformation or malinformation (information that happens to be true but is deemed “bad for you”) — collectively termed “MDM.” This segment — whose participants range from scrappy startups to major firms backed by rich government contracts — is quickly burgeoning into a full-blown industry, with top startups receiving upwards of $100 million in venture funding and well-connected firms securing billion-dollar contracts.

Just as the EU began investigating Telegram, in the UK a cross-government body called the Government Communication Service International contracted with a Paris-based OSINT (Open Source Intelligence) SaaS platform called Storyzy to provide “round-the-clock monitoring” of major social platforms, including Telegram. According to the contract, Storyzy will “track and identify disinformation trends and false actors,” at a cost of around $50,000 for a single seat. Storyzy was also tapped to participate in a $3.35 million EU project called ATHENA to identify threats of “foreign information manipulation and interference.”

The market for identifying, monitoring, and reporting on MDM has caught the attention of venture capital. London-based startup Logically, which “develops advanced AI to fight misinformation at scale,” has raised $37 million over four rounds. Fake news detection company Factmata, whose investors included Biz Stone, Craig Newmark and Mark Cuban, was acquired in 2022 for an undisclosed amount. Clarity, which identifies AI-generated deepfakes, raised $16 million earlier this year in a round led by Bessemer Venture Partners and Walden Catalyst. Reken, a startup “that protects against generative AI threats,” created by the former head of product trust and safety at Google, raised $10 million in a round led by Greycroft and FPV Ventures. And ActiveFence, which “empowers Trust & Safety and online security professionals,” has raised $100 million. According to Crunchbase, a group of just 16 misinformation startups has raised a combined total of more than $300 million over the past few years alone.

For most of these companies, government is both the first and primary customer. Logically had a number of contracts with the UK government worth a combined $1.3 million, awarded by its National Security Online Information Team (NSOIT), formerly known as the Counter Disinformation Unit. Among the offending posts identified by NSOIT using technology like that provided by Logically was a tweet by Dr. Alex de Figueiredo, a research fellow at the London School of Hygiene & Tropical Medicine, questioning whether the COVID-19 vaccine was necessary for children. In another instance, NSOIT flagged an interview by a well-known British journalist, Julia Hartley-Brewer, with a caller who was sharing their experience of suffering under Britain’s COVID-19 lockdowns. The purpose of NSOIT, which rebranded last year after public and official criticism, “is to understand disinformation narratives and attempts to artificially manipulate the information environment to ensure that the government understands the scope and reach of harmful mis and disinformation and can take appropriate action.”

According to UK free speech watchdog Big Brother Watch, Logically was found providing reports to NSOIT on British citizens — including Big Brother Watch’s own executive director — who post content, or even just like posts, deemed objectionable. Logically was also caught reporting Julia Hartley-Brewer to the government for a 2021 tweet citing statistics on cancer deaths during the pandemic — statistics supplied by the government’s own Office for National Statistics — which she shared because cancer charities had been warning of lockdowns’ negative impact on treatment.

“I do think there's a massive boom in the proliferation of these fact checking companies or counter disinformation, AI-based companies,” said Mark Johnson, an advocacy manager at Big Brother Watch, with whom I spoke in late August on Zoom. Johnson’s name was also flagged by a Logically report to NSOIT for tweeting a link to a parliamentary petition opposing vaccine passports.

“They are tapping into a wider kind of trend, which is essentially censoring — the platforms and other big players will say ‘moderating’ — but really censoring speech based on its perceived veracity and accuracy,” he said. “This is a trend that's happening across the western world at the moment.”

In the US, collaboration between for-profit MDM companies and government runs even deeper. In 2021, the Department of Defense awarded a $979 million contract to Peraton to “counter misinformation” on behalf of United States Central Command, the Department of Defense command responsible for the Middle East and Central Asia. Peraton was formed after its parent, Veritas Capital (which briefly owned Raytheon Aerospace in the early 2000s), acquired an IT services business from Northrop Grumman. In 2018, major social media platforms including Google, Facebook and Twitter began disabling thousands of accounts and pages that had been identified as purveyors of MDM by a company called FireEye, which likened itself to “a private-sector intelligence operation.” The analogy cut close to the bone: one of FireEye’s backers is In-Q-Tel (IQT), the CIA’s venture capital arm, tasked with developing innovation for the agency.

The trend of intensifying government and private sector synergy on MDM began in the wake of the 2016 US presidential election. The semi-official story — told in the media with the help of government, law enforcement and intelligence community officials — is that the election suffered massive interference by foreign adversaries, namely Russia. But research has since shown Russian interference had virtually no impact on the election. A study in the Journal of Economic Perspectives assembled a database of fake news stories from that period and found that “fake news in our database would have changed vote shares by an amount on the order of hundredths of a percentage point” — two orders of magnitude less than what would have been required to sway the election.
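To put that comparison in concrete terms, here is a rough illustration, assuming decisive swing-state margins on the order of a single percentage point (an assumption used for scale, not a figure from the study):

\[
\frac{\text{estimated fake-news effect}}{\text{margin needed to flip the result}} \;\approx\; \frac{0.01\ \text{percentage points}}{1\ \text{percentage point}} \;=\; 10^{-2}
\]

In other words, under that assumption the measured persuasion effect would have had to be roughly one hundred times larger before it could plausibly have changed the outcome.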

“A lot of the research on misinformation-disinformation is shoddy,” said Jacob Mchangama, who heads the Future of Free Speech institute at Vanderbilt University, whom I interviewed in late August by phone. “It tries to paint a more negative picture of misinformation-disinformation having a more direct impact on democracy than what I think is a more balanced view of it.” People who overestimate MDM’s impact on elections, Mchangama said, treat it as if it were delivered from a hypodermic needle, where it’s injected straight into the veins of a society, when, in reality, most of it is diffuse and ineffective.

Despite this, the effort to characterize MDM as a mega-threat reached a crescendo this year, with the World Economic Forum calling it the greatest short-term risk of 2024, despite wars raging in the Middle East and Ukraine. “2024 was this year where you had a lot of experts, and also governments, saying because there are two billion people eligible to vote, this is going to be sort of a super-year for elections, but there’s acute danger of democracy being drowned in AI generated disinformation,” Mchangama said. “While there are certainly examples of that, there’s never been a coordinated campaign or attempt to disrupt these elections that we know of.”

Nevertheless, the fact that the 2016 election outcome favored Trump — a figure seen by the establishment in government, media and business as precipitating a crisis of destabilization — set off alarm bells. In the wake of the election, searches for “disinformation” on Google hit levels ten times higher than the pre-election average. The spike was driven in part by news outlets framing disinformation as the most important political issue of the day, such as when The Intercept baldly asserted that “Disinformation, Not Fake News, Got Trump Elected.” Like much of the MDM-related discourse around Trump’s win, The Intercept claimed that a “vast campaign of disinformation” — an echo of Hillary Clinton’s claims of a “vast right-wing conspiracy” against her husband’s presidency — was what delivered the election to Trump.

But much of the new disinformation panic was spurred by Hillary Clinton herself. Weeks after her stunning electoral loss, Clinton declared a fake news “epidemic” and called for legislation to counter what she called foreign propaganda. Coincidentally, just two weeks later, President Obama signed into law the National Defense Authorization Act for FY 2017, which expanded the remit of a State Department body called the Global Engagement Center from countering terrorism to “counter[ing] foreign state and non-state propaganda and disinformation efforts.”

It was in this political environment that one of today’s most effective for-profit MDM enterprises was born. NewsGuard was founded in 2018 by Steven Brill, a lawyer and journalist who founded Court TV, and Gordon Crovitz, the former publisher of the Wall Street Journal. NewsGuard, which launched with $6 million in initial funding, has raised a total of $21.5 million. With the founders’ extensive networks and funding from deep-pocketed investors, including art collector Eijk van Otterloo and French multinational ad company Publicis Groupe, the company rose quickly in the then-emerging MDM commercial landscape.

NewsGuard is best known for its “Nutrition Label” of news sites, delivered via a browser extension with individual subscriptions running $4.95 per month. The Nutrition Label uses nine criteria to assign an outlet a score between 0 and 100, with a green checkmark badge to indicate a site can be “generally trusted” and a red exclamation-mark badge indicating the opposite. On its own, a ratings system of this type is fairly unobjectionable — it seems to be well-intentioned and clearly falls within the bounds of NewsGuard’s own right to free speech. The trouble is that NewsGuard does not simply produce and distribute ratings but works hard to enforce them by pressuring advertisers, as well as third-party vendors, to blacklist outlets that don’t meet NewsGuard’s standards.
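For readers curious about the mechanics, a weighted pass/fail checklist of this kind is simple to model. The sketch below is illustrative only: the criterion names, point weights, and the 60-point cutoff are assumptions made for the example, not NewsGuard’s published methodology.

```python
# Minimal sketch of a weighted-criteria "trust score": each criterion is
# pass/fail, passing adds its weight, and the 0-100 total maps to a green
# or red badge. Criterion names, weights, and the cutoff are hypothetical.

CRITERIA_WEIGHTS = {
    "does_not_repeatedly_publish_false_content": 25,
    "gathers_and_presents_information_responsibly": 15,
    "regularly_corrects_errors": 12,
    "separates_news_and_opinion": 12,
    "avoids_deceptive_headlines": 10,
    "discloses_ownership_and_financing": 8,
    "clearly_labels_advertising": 8,
    "reveals_conflicts_of_interest": 5,
    "lists_content_creators": 5,
}  # nine criteria, weights summing to 100

GREEN_THRESHOLD = 60  # hypothetical cutoff for a "generally trusted" badge


def rate_outlet(assessment: dict) -> tuple:
    """Return (score, badge) for a pass/fail assessment of each criterion."""
    score = sum(
        weight
        for criterion, weight in CRITERIA_WEIGHTS.items()
        if assessment.get(criterion, False)
    )
    badge = "green" if score >= GREEN_THRESHOLD else "red"
    return score, badge


# Example: an outlet that fails only two disclosure-related criteria.
example = {criterion: True for criterion in CRITERIA_WEIGHTS}
example["discloses_ownership_and_financing"] = False
example["reveals_conflicts_of_interest"] = False
print(rate_outlet(example))  # (87, 'green')
```

The point of the sketch is that the score itself is mechanical; the leverage described below comes from how the resulting red badge is used downstream by advertisers and vendors.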

NewsGuard is not simply screening misinformation, but is de facto working to remove content from the Internet that is damaging to the business interests of major advertisers — something that appears to be core to its business model. In 2021, the company began a partnership with IPG Mediabrands to test its Responsible Advertising for News Segments (RANS) solution, which provides advertisers with a “menu” of sites to exclude from their campaigns. The RANS test, conducted during the height of the pandemic, was focused largely on COVID-19-related content, especially about vaccines, with one feature listed as: “Exclude all websites flagged as untrustworthy by NewsGuard for publishing health hoaxes such as false cures, anti-vaccine misinformation, or other medical falsehoods.” This year, Pfizer named IPG Mediabrands as its lead creative agency and Publicis Groupe — one of NewsGuard’s original investors — as its media agency.

At the same time, NewsGuard aggressively pursued high-profile publishers whose content could damage these interests. In May 2021, conservative educational non-profit PragerU received an email from the general counsel of the organization’s video hosting vendor stating that the hosting company had received a warning from NewsGuard about PragerU’s rating. According to a PragerU representative, NewsGuard successfully pressured PragerU’s video hosting company, JW Player, to drop the organization, which was given 30 days to find another solution and migrate all its video content to a different provider. In an email responding to PragerU’s request for clarification, NewsGuard cited mostly COVID-related content on PragerU as the editorial offenses that led to its decision to take action against the site. NewsGuard questioned why a PragerU host claimed that COVID-19 did not kill children without, according to NewsGuard, citing a source. The company also questioned why the organization’s founder, radio host Dennis Prager, claimed in 2020 that the refusal to administer hydroxychloroquine was resulting in unnecessary deaths.

The tone of the correspondence from NewsGuard — which was legalistic and censorious, implying guilt — is as troubling as its content. PragerU supplied lengthy responses justifying its editorial decisions to NewsGuard’s “Editorial Director,” a journalist named Eric Effron who had formerly worked at Reuters Legal and, before that, for one of NewsGuard’s founders. But the lengthy back-and-forth, which stretched on for weeks, resulted only in further notifications of infractions. More recently, NewsGuard, through a partnership with the American Federation of Teachers, the second-largest teachers’ union in the US, began pressuring school districts to refuse partnerships with PragerU, which offers educational content to schools. (Disclosure: I was the subject of two videos for PragerU on the topic of media malfeasance.)

PragerU’s CEO, Marissa Streit, told me in a phone interview that after receiving the notification from NewsGuard she reached out to the heads of other conservative publishers, all of whom had a similar experience. “In the beginning [these publishers] wanted to make sure they didn’t fall into NewsGuard’s red zone because they didn’t want to have problems with vendors, so they would make certain changes and try to adapt things,” she said. These requests were for things like disclosing investors and donors, listing all contributors to the sites, changing content, and ensuring that readers who had already read an article would see a modified version of the piece. “[The outlets] would take all these steps until they realized it was just a show. It doesn’t really matter what you do. They put you in and out of the red zone to control you. In order to continue engaging them, they wouldn’t leave [outlets] in the red zone the entire time, so they would give them an arbitrary three more points [on the sites’ Nutrition Label]. That was their vehicle for control.”

But it’s NewsGuard’s coordination with major advertising bodies that’s lent the company its most widespread, and deepest, impact. NewsGuard was one of the primary MDM verification tools used by the Global Alliance for Responsible Media (GARM), a project of the World Federation of Advertisers, whose members account for roughly 90% of global advertising spend (Pirate Wires explainer on GARM here). Launched in 2019, GARM — which was recently disbanded after Elon Musk’s X sued its parent body, the World Federation of Advertisers — forged a partnership with the World Economic Forum within months of its establishment. “[L]everaging the Forum’s existing network of industry, academic, civil society and public-sector partners to amplify its work on digital safety,” GARM was able to partner with some of the world’s most powerful advertisers, including Facebook, Google, LEGO Group, Unilever, NBC Universal - MSNBC and Procter & Gamble.

According to a House Judiciary Committee report based, in part, on testimony by GARM initiative lead Robert Rakowitz, GARM “pushes its members to use news rankings organizations, like the Global Disinformation Index (GDI) and NewsGuard, that disproportionately label right-of-center news outlets as so-called misinformation.” With GARM channeling major advertisers to the company, NewsGuard was able to splice its rating system into the global advertising ecosystem. In one instance, Rakowitz sent members NewsGuard ratings on outlets publishing stories related to the Russia-Ukraine war to “ensure you’re working with an inclusion and exclusion list that is informed by trusted partners such as NewsGuard and GDI.”

In 2020, NewsGuard waded into still deeper waters, working with the State Department’s Global Engagement Center (GEC), which funded some of the company’s work — it had won $25,000 from a $3 million GEC grant. It was this funding that led to a lawsuit against the State Department by the State of Texas and two conservative news sites rated red by NewsGuard, The Daily Wire and The Federalist, accusing the department of “actively intervening in the news-media market to render disfavored press outlets unprofitable.” While the government’s involvement made the case a First Amendment issue, it was NewsGuard that served as the active mechanism. According to the complaint, NewsGuard and GDI “generate blacklists of ostensibly risky or unreliable American news outlets for the purpose of discrediting and demonetizing [them] and redirecting money and audiences to news organizations that publish favored viewpoints.”

“The GEC is giving money to these organizations to develop programs to censor American speech,” said Jenin Younes, a First Amendment lawyer at the New Civil Liberties Alliance — which is representing The Daily Wire and The Federalist in the suit — with whom I spoke on Zoom in late August. Younes explained that the GEC uses a cutout called Disinfo Cloud to award money to organizations like NewsGuard. She described Disinfo Cloud as “a government entity that’s pretending to be private.”

While the State Department grant was paltry, the real money would come months later. In September 2021, NewsGuard won a $750,000 contract from the Department of Defense to license the company’s Misinformation Fingerprints tool — though NewsGuard had previously described the contract as a “grant.”

The extent of NewsGuard’s government collaboration is, perhaps, no surprise. Among its earliest, and certainly most touted, board advisors is General Michael Hayden, the former director of the NSA and CIA. Other advisors include Tom Ridge, the first Secretary of Homeland Security, and Anders Fogh Rasmussen, the former Danish prime minister and secretary general of NATO. More recently, NewsGuard was sued by Consortium News, whose archive of 20,000 articles and videos stretching back to the 1990s was flagged red on account of issues identified by the Nutrition Label in just four articles.

Despite this, NewsGuard is not slowing down. It has forged partnerships across tech, including with startups like SafetyKit and Zefr. Much more importantly, the company partners with Microsoft, which includes the NewsGuard extension in all new versions of its Edge browser, the second most popular desktop browser with around 13% market share.

With the State Department’s GEC defining MDM as a foreign threat, the paradigm is now shifting into the arena of cybersecurity — and, with it, into the corporate sphere. Blackbird.AI is one of the startups capitalizing on the trend. The company says it protects 2,000 companies and “national security organizations” from what it calls “narrative attacks created by misinformation and disinformation.” Like so many companies in the space, Blackbird, which last year raised a $20 million Series B led by a cybersecurity VC, was founded in the wake of the 2016 election. (And, true to the industry norm, its first customer was the Department of Defense.)

“When you think about misinformation, disinformation and manipulation of public perception as a kind of cyberattack,” Blackbird’s CEO Wasim Khaled said in an interview earlier this year, “it helps frame it. With cyber attacks came cyber intelligence and with narrative attacks you’re essentially requiring narrative intelligence.” When asked how Blackbird establishes the ground truth against which its technology can identify MDM threats, Khaled admitted the discussion often becomes “a philosophical conversation” about what constitutes truth.

Just as buy-in from global brands brought DEI out of the shadows of academia and global governance and into the public consciousness (and markets), major brands are now piling into the MDM space. “Disinformation and misinformation are emerging as significant cyber threats to businesses worldwide,” Bank of America cautions customers. Software giant SAP warns ominously of “[misinformation’s] role in recent attempts to bring down governments around the world. Its newfound power comes from its ability to ride digital networks with overwhelming speed and scale.” Multinational consulting firm PwC announces, “Disinformation attacks have arrived in the corporate sector. Are you ready?”

The problem is that there is little — if any — data showing a causal effect, or even a correlation, between MDM and altered electoral outcomes. We know there is MDM out there, but that’s pretty much the extent of what’s been substantiated. If anything, the existing evidence suggests there is no connection. A 2019 study in Science found that just 1% of people consumed 80% of fake news related to the 2016 election on Twitter. Another study, published by the National Bureau of Economic Research, found that far from favoring Trump, 80% of political content on Twitter during the 2016 election cycle came from accounts that favored the Democrats — a drastic swing from the 2012 election, when 70% of tweets about Romney came from Republican accounts. “[T]hese findings suggest Twitter content had a clearly pro-Democratic slant around the 2016 presidential election,” the authors concluded. Another study, published in Nature Human Behaviour, found that, given the low consumption of MDM, “widespread speculation about the prevalence of exposure to untrustworthy websites has been overstated.”

In 2022, the Carnegie Endowment for International Peace — whose previous president currently heads the CIA — proposed the creation of a “CERN for information.” Modeled on the legendary organization that studies particle physics, the proposed Institute for Research on the Information Environment (IRIE) would be an intergovernmental organization dedicated to studying the information environment. “Where CERN ‘exists to understand the mystery of nature for the benefit of humankind,’ IRIE will exist to understand the mystery of the information environment for the benefit of democracies and their citizens,” the project proposal states. Lamenting the current state of information studies, the proposal’s authors write that “what is really needed is the [information] equivalent of a Large Hadron Collider.”

With Mark Zuckerberg’s August revelation that the White House had pressured Meta into removing COVID-19-related posts, we got a glimpse into the “mysteries” that the would-be CERN of disinformation seeks to unravel. Surely there will be further cases like this one. The question is, as we uncover more, will we like what we see?

— Ashley Rindsberg

Interviews have been edited for clarity.
