From GDPR to the AI Act — how Europe became the world's digital regulator

In 2018, the European Union's General Data Protection Regulation came into force. Within months, American technology companies were sending emails to hundreds of millions of users asking for consent to data processing. Privacy policies were rewritten. Compliance departments were expanded. Some services were withdrawn from the European market rather than adapted.

None of this was the primary intention of GDPR. Its authors wanted to protect European citizens' privacy. What they got, additionally, was a demonstration that European regulation could reshape the behaviour of the world's largest technology companies — companies headquartered in California, governed by American law, operating global platforms.

That demonstration has become the foundation of Europe's digital strategy. If regulation could reach American tech companies through their European market access, regulation could be the instrument through which Europe shaped the global digital environment. A continent that had produced no global platform company of significance had discovered a form of technological power that did not require building platforms.

The Regulatory Stack

Europe's digital regulatory framework has been built in layers over the past decade, each addressing a different dimension of the digital economy.

GDPR addresses data — who can collect it, for what purposes, under what conditions, with what rights for individuals. It applies to any company processing the personal data of EU residents, regardless of where that company is headquartered. Its enforcement has been uneven — Ireland's data protection authority, which oversees most American tech companies whose European headquarters are in Dublin, was repeatedly criticised for slow enforcement — but its fines have been substantial. Meta has paid over €1 billion in GDPR fines. Google, Amazon, and others have paid hundreds of millions more.

The Digital Markets Act addresses market structure, specifically the dominance of large platform companies designated as "gatekeepers": companies so large that their platforms function as essential infrastructure for other businesses. Gatekeepers must allow interoperability, refrain from preferencing their own services over rivals', and allow businesses using their platforms to communicate directly with customers. The DMA is structural regulation: although it carries fines of up to 10% of worldwide turnover, its purpose is less to punish individual violations after the fact than to reorder the rules under which dominant platforms operate.

The Digital Services Act addresses content and liability — what platforms must do about illegal content, how they must handle advertising transparency, what rights users have over algorithmic recommendation systems. Very large platforms face additional obligations including independent audits and researcher access to data.

The AI Act, which entered into force in 2024 with obligations applying in stages through 2026, addresses artificial intelligence systems according to risk level. High-risk applications — in medical diagnosis, credit scoring, recruitment, critical infrastructure, and law enforcement — face strict requirements for documentation, testing, human oversight, and transparency. Certain applications are prohibited outright. General-purpose AI models above certain capability thresholds face additional transparency obligations.

Europe's Digital Regulatory Framework

GDPR (2018): Personal data protection — applies globally to EU resident data
Digital Markets Act (2022): Gatekeeper platform regulation — interoperability, self-preferencing
Digital Services Act (2022): Platform content and liability — algorithmic transparency, illegal content
AI Act (2024-2026): Risk-based AI regulation — prohibited uses, high-risk requirements, GPAI obligations
Data Act (2025): Rules on access to and use of data generated by connected devices
Cyber Resilience Act (2025): Security requirements for digital products and software
Key enforcers: national data protection authorities, coordinated by the European Data Protection Board (GDPR); European Commission (DMA, DSA, AI Act)

The Brussels Effect in Digital

The Brussels Effect — the mechanism by which European regulation becomes global regulation through the economics of product standardisation — operates particularly powerfully in digital markets.

Digital products and services are not like physical goods. A car sold in California and a car sold in France can meet different safety standards at relatively modest cost — you build different components. A digital platform is a single global system. Building separate versions for Europe and the rest of the world requires separate codebases, separate data architectures, and separate governance systems — or else compliance with the most demanding standard everywhere.

For most technology companies, global compliance with European standards is cheaper than building genuinely separate systems. This is why GDPR privacy controls appeared in products used by Americans who had never heard of the regulation. It is why AI governance frameworks developed for EU compliance are being applied globally by companies that could, technically, operate without them outside Europe.

The Brussels Effect in digital is not universal. Some companies have chosen to restrict European access rather than comply — most visibly certain American news publishers that blocked European users rather than implement GDPR consent requirements. But for companies with significant European revenue and global platforms, complying and applying that compliance globally is the rational economic choice.

What Europe Cannot Do

European digital regulation is powerful within its domain. It is equally important to be honest about its limits.

Europe has not produced a global social media platform, a global search engine, a global cloud provider, or a global AI model. The regulatory environment is sometimes cited as a cause — and there is evidence that GDPR compliance costs fall more heavily on smaller companies and startups than on established incumbents with compliance infrastructure already in place. But the absence of European tech giants predates GDPR and reflects deeper structural factors: the fragmented European capital market, the difficulty of building pan-European user bases across language barriers, and the network effects that allowed American platforms to achieve global scale before European regulation became relevant.

European regulation shapes how the digital economy works. It does not determine who builds the platforms that run it. That distinction matters for understanding what European digital policy can and cannot achieve.

The AI Act as a Global Precedent

The AI Act represents the most ambitious attempt yet to regulate artificial intelligence. Whether it works — whether its risk classification is accurate, whether its requirements are implementable, whether it drives AI development in a direction that is genuinely safer rather than simply more burdensome — will determine whether it becomes a global template or a cautionary tale.

The early signals are mixed. Several provisions remain contested in their interpretation. The compliance timelines are demanding. Some high-capability AI developers have expressed concern that the transparency requirements for general-purpose AI models create intellectual property risks.

What is not contested is that the AI Act will affect any company deploying AI systems in Europe, regardless of where those systems were built. The Brussels Effect will operate in AI as it has operated in data, platforms, and content. The question is whether the regulation is well-designed enough to make that global influence a positive one.

Europe in One Sentence

Europe became the world's digital regulator not by building platforms but by making access to its market conditional on compliance with its rules — and discovering that global companies found compliance cheaper than separation.

Looking Ahead to Friday

Friday's EuroTasteDaily Review examines what the Brussels Effect in digital actually means for American companies — and why the AI Act may be the most consequential piece of European regulation since GDPR for anyone building or deploying AI systems.
