The Costs of Europe’s GDPR Regime

the eu’s approach to regulating tech discourages both innovation and competition — and now brussels has set its sights on ai
Brian Chau

In the past decade, China was a black sheep when it came to social media. Whatever the new app was, odds were it was banned in China. X/Twitter? Banned. Facebook? Banned. Instagram? Banned.

But over the past few months, a dark horse has emerged in the censorship olympics: the European Union.

Earlier this year, the government of Italy took down ChatGPT, citing the EU General Data Protection Regulation (GDPR), a monstrous 261-page law cracking down on the European web. By blocking OpenAI’s flagship app, Italy put itself in the same camp as China, Iran, Russia, and North Korea. Only after a few photo-ops and Sam Altman’s endorsement of the EU’s draconian licensing scheme did the bureaucrats back down.

Last month, Meta launched Threads, its competitor to X/Twitter. And it decided to skip the EU entirely — the new app is simply not available to EU citizens. It wasn’t just GDPR. A Meta spokesperson referenced the upcoming Digital Markets Act, a further crackdown on tech companies with language that is equally opaque, if not more so. If you live in the EU, expect to increasingly become a second-class digital citizen, just like China in the last decade. Load up your American VPNs and crypto wallets if you want to keep up with the tier one internet.

To understand how the EU got to this place, you have to understand the root of its regulatory state. In establishing its regulatory regime, the EU created a system that automatically rewards a constituency of full-time, well-paid bureaucrats for enforcing its rules.

In 2018, the European Union began enforcing GDPR to the uniform fanfare of legacy media. It would protect people’s privacy and cybersecurity, they said. At the time, economists and engineers predicted it would be a disaster for the European digital economy. They were right: for example, almost half of new app entries vanished overnight, and the number of new apps that would ultimately become successful was slashed by 40%.

European politicians advertised GDPR as a way to protect the privacy of European citizens. Did it increase the security of European companies and prevent hackers from stealing data? Nope. That would require technological innovations and code fixes, something bureaucrats are incapable of producing. Did it curtail state surveillance powers? Certainly not. Instead, their citizens got annoying cookie banners and useless compliance paperwork. GDPR ended up being a program that cut checks to bureaucrats, raised costs for companies, and killed half of new app entries.

The EU rugpulled its citizens. They promised one thing and delivered the opposite.

This is a systemic feature, not a bug, of how the EU operates. The European regulatory regime is best understood as a rent-seeking cartel. From a 2018 Federalist Society paper that accurately predicted the economic damage GDPR would cause:

“The European Commission’s GDPR website claims that the goals of the regulation are to give users more control of their data and to make business “benefit from a level playing field.” But the statute itself suggests another set of stakeholders: litigants, non-profit organizations, data protection professionals, and data regulatory authorities. Non-profit organizations are empowered with new rights to organize class actions, lodge complaints, and receive compensation from fines levied on firms’ annual revenue, as high as four percent of annual revenue.”

The bureaucrats and lawmakers extort companies, hitting them over and over until they burst like a piñata. They disburse the loot to their constituencies — not to the EU citizenry, of course, but to the NGOs, lawyers, and middle managers they use to further entrench their power.

But their game can’t continue forever. As geopolitical analyst Samo Burja puts it, “power is viewed by European Bureaucrats as mostly the ability to forbid things.” The EU regulatory regime is built on the lie that you can ban your way to prosperity. The costs of GDPR will irreparably poke a hole in that narrative.

The next step in the EU’s tech crackdown is the 144-page AI Act. Thanks to Technomancers for their summary.

First, the EU establishes arbitrary jurisdiction.

Projects will be required to register the anticipated functionality of their systems. Systems that exceed this functionality may be subject to recall. This will be a problem for many of the more anarchic open-source projects…

Risks Very Vaguely Defined: The list of risks includes risks to such things as the environment, democracy, and the rule of law. What’s a risk to democracy? Could this act itself be a risk to democracy? (pg 26).

By making its scope global and enforcement criteria arbitrary, the EU enables its friendly courts to extort companies in unpredictable ways. Next, it sets up the constituency.

Deployment Licensing. Deployers, meaning the people or entities using AI systems, are required to undergo a stringent permitting review process before launch. EU small businesses are exempt from this requirement. (pg 26).

Ability of Third Parties to Litigate. Concerned third parties have the right to litigate through a country’s AI regulator (established by the act). This means that the deployment of an AI system can be individually challenged in multiple member states. Third parties can litigate to force a national AI regulator to impose fines. (pg 71).

Very Large Fines. Fines for non-compliance range from 2% to 4% of a company’s gross worldwide revenue. For individuals, that can reach €20,000,000. European-based SMEs and startups get a break when it comes to fines. (pg 75).
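To make the scale of those fines concrete, here is a minimal sketch of the 2%-to-4% band applied to a hypothetical company. The revenue figure is invented for illustration; the actual Act layers conditions on top of these percentages.

```python
def fine_range(worldwide_revenue_eur: float) -> tuple[float, float]:
    # Non-compliance fine band described in the summary:
    # 2% to 4% of gross worldwide revenue.
    return 0.02 * worldwide_revenue_eur, 0.04 * worldwide_revenue_eur

# Hypothetical company with €10B in gross worldwide revenue.
low, high = fine_range(10_000_000_000)
print(f"€{low:,.0f} to €{high:,.0f}")  # €200,000,000 to €400,000,000
```

For a large tech company, a single adverse ruling can cost hundreds of millions of euros — which is precisely what makes third-party litigation rights so attractive to the regime’s constituencies.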

This process funds the regulatory regime’s allies. It creates a constituency to fight for the regulatory state, paid for directly by its victims.

Of course, it wouldn’t be an EU law without blanket bans on some of the most productive machine learning techniques:

API Essentially Banned. APIs allow third parties to implement an AI model without running it on their own hardware. Some implementation examples include AutoGPT and LangChain. Under these rules, if a third party, using an API, figures out how to get a model to do something new, that third party must then get the new functionality certified…

LoRA Essentially Banned. LoRA is a technique for incrementally adding new information and capabilities to a model at low cost. Open-source projects use it because they cannot afford billion-dollar computing infrastructure. Major AI models are also rumored to use it, as training is both cheaper and easier to safety-check than new versions of a model that introduce many new features at once (pg 14).
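For readers unfamiliar with why LoRA is cheap: it freezes the pretrained weight matrix and trains only a small low-rank correction on top of it. A minimal NumPy sketch, with all dimensions, names, and initialization chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 64   # model hidden size (illustrative)
r = 4    # LoRA rank: r << d, so the update is tiny

W = rng.standard_normal((d, d))          # frozen pretrained weights
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init

def forward(x):
    # Base model output plus the low-rank correction (B @ A) @ x.
    # At initialization B is zero, so behavior matches the frozen model.
    return W @ x + B @ (A @ x)

# Only A and B are trained: 2*d*r parameters vs. d*d for full fine-tuning.
print(2 * d * r, "trainable params vs.", d * d)
```

The point of the technique is that the adapter (A and B) is a small, separable artifact — which is exactly why a rule requiring re-certification of every incremental capability change hits open-source fine-tuners hardest.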

The European Union’s regulatory regime has totalitarian ambitions. It has made repeated and unequivocal attempts to pass laws granting itself Chinese-style control over the European internet. And it may very well succeed.

-Brian Chau
