Market Analysis · May 15, 2026 · 13 min read · Europe, Regulation, AI, Policy

Europe Regulated the Future Before It Built It

Europe’s most uncomfortable technology problem is not that Americans built better apps. It is that Europe spent two decades becoming the world’s most ambitious digital regulator while failing to become a world-class digital producer. The continent has excellent engineers, deep universities, sophisticated consumers, strong industrial companies and a rich tradition of rights-based law. Yet the platforms, clouds, AI models, chips, payment networks and operating systems on which European life now depends are overwhelmingly foreign, especially American.

This dependency is no longer just commercial. It is strategic. Apple and Google mediate smartphone access. Microsoft, Amazon and Google dominate cloud infrastructure. OpenAI, Anthropic, Meta and Google shape the frontier of generative AI. Visa and Mastercard remain central to payments. SpaceX has become vital to Western satellite launch capacity. Europe can fine, investigate and regulate these companies, but too often it cannot replace them.

That is the paradox. Europe became a superpower in writing rules for the digital world, but not in building the digital world. The problem is not that every European rule is foolish. Many are defensible in isolation: privacy matters, child protection matters, cybersecurity matters, fair competition matters, AI safety matters. The problem is accumulation. Taken together, Europe’s legal architecture has made scaling a technology company slower, costlier and riskier than it needs to be. It has created a compliance state before creating a growth state.

Mario Draghi’s 2024 report on European competitiveness put the matter bluntly. Europe has no company worth more than €100 billion that was created from scratch in the past 50 years, while all six American companies worth more than €1 trillion were created during that period. The same report found that EU firms spent €270 billion less on research and innovation than their American counterparts in 2021, and that around 30% of European unicorns founded between 2008 and 2021 moved their headquarters abroad, mostly to the United States. Draghi also identified regulatory inconsistency and restriction as barriers to commercialising innovation in Europe.

AI makes the gap impossible to ignore. Stanford’s 2025 AI Index reported that U.S. private AI investment reached $109.1 billion in 2024, almost 12 times China’s level and 24 times the United Kingdom’s. The same report found that U.S. institutions produced 40 notable AI models in 2024, compared with 15 from China and only three from Europe. OECD data for 2025 shows the same pattern: U.S. firms attracted roughly 75% of global AI venture-capital investment, while the EU27 attracted about 6%.

Europe did not lose this race only because America has more venture capital. It lost partly because America built first and regulated later, while Europe often regulated first and hoped companies would emerge anyway.

$109.1B: U.S. private AI investment, 2024 (Stanford AI Index 2025)

3: Notable AI models from Europe in 2024 (Stanford AI Index 2025)

~6%: EU27 share of global AI venture capital (OECD 2025)

The European rulebook has become an operating system of obligations before Europe has built enough of the systems those rules govern.

The European Rulebook Has Become an Industrial Burden

A European technology company today faces one of the most complex regulatory environments in the world. A startup handling user data must comply with the General Data Protection Regulation. A platform hosting user content must comply with the Digital Services Act. A large platform may fall under the Digital Markets Act. A company building AI systems must prepare for the AI Act. A connected-device company must account for the Data Act. A software or hardware producer must prepare for the Cyber Resilience Act. A company in a critical or important sector may fall under NIS2 cybersecurity rules. A fintech may also face DORA, anti-money-laundering rules and sector-specific supervision.

Each law has a policy rationale. Together they form a dense, expensive and often uncertain operating environment.

The GDPR changed the global privacy debate and gave users stronger rights over their data. But it also imposed obligations around lawful basis, transparency, data minimisation, consent, data-subject rights, breach notifications, data-processing agreements, cross-border transfers and recordkeeping. For Google, Meta or Microsoft, this is a cost of doing business. For a 12-person European startup trying to train a model, personalise a product or run analytics, it can become a structural tax on experimentation.

There is evidence that this cost fell especially hard on smaller European firms. A study published in Marketing Science found that venture investment in European companies declined after the GDPR relative to ventures in the United States and elsewhere, with the sharpest effects on newer, data-related and business-to-consumer companies. A 2025 NBER summary of research on GDPR and venture investment found that, after the GDPR rollout, monthly EU deals led by U.S. investors fell by 21% relative to comparable U.S. deals, the amount invested fell by 13%, and U.S. investment flows into the EU declined by an estimated $1.6 billion per year.

This is the central problem with Europe’s regulatory model: the largest incumbents can absorb rules that smaller challengers cannot. Compliance departments scale better than startups. Lawyers scale better than founders. The more complex the rulebook becomes, the more it protects those already big enough to manage it.

The Digital Services Act is a good example. It seeks to make online platforms safer, more transparent and more accountable. It gives users rights around content moderation, requires appeal mechanisms and transparency, imposes special duties on very large platforms, and pushes platforms to reduce risks to minors. The European Commission says the DSA includes enhanced protections for minors, including reduced exposure to age-inappropriate content and a ban on targeted advertising to children. These aims are understandable. But for any European company trying to build a consumer platform, the DSA means that legal and trust-and-safety architecture must be built almost from day one.

The Digital Markets Act is aimed mainly at “gatekeepers”: large platforms that control important digital chokepoints. It was designed to restrain the power of companies such as Alphabet, Apple, Amazon, Meta, Microsoft and ByteDance. The Commission describes it as one of the first comprehensive tools to regulate gatekeeper power in the digital economy. But even when the DMA applies directly only to giants, it reshapes the operating environment for everyone else. App stores, advertising markets, data-sharing systems and platform integrations all become legally contested terrain. A European startup does not simply build a product; it must build inside a moving regulatory battlefield.

Then comes the AI Act. The EU’s AI Act entered into force on August 1, 2024, with full application scheduled for August 2, 2026, subject to phased exceptions. Prohibited AI practices and AI-literacy obligations started earlier, while rules for general-purpose AI and high-risk systems follow different timelines. Again, the objective is not irrational. AI can cause harm. High-risk systems in health, employment, education, credit, law enforcement and critical infrastructure should not be deployed carelessly. But the Act also means that European AI builders must classify systems, document risks, maintain technical files, monitor outputs, manage data governance, design human oversight, and prepare for conformity assessments long before many have achieved product-market fit.

The Data Act adds another layer. It gives consumers and businesses greater control over data generated by connected products such as cars, machines and smart devices, and it became applicable from September 12, 2025. The Cyber Resilience Act adds mandatory cybersecurity requirements for products with digital elements, covering design, development, vulnerability handling and maintenance. NIS2 adds cybersecurity obligations across critical sectors.

No single one of these rules destroys innovation. The cumulative burden does. Europe has not created one digital law; it has created an expanding regulatory operating system. For a founder, that means slower launches, higher fixed costs, more legal uncertainty, more paperwork, more fear of fines and less room for informal experimentation. For an American giant, it means hiring another floor of lawyers in Dublin or Brussels.

The American Response: Deregulate and Accelerate

While Europe keeps layering rules, the United States has moved in the opposite direction, especially under the current Trump administration. In January 2025, President Donald Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” The order states that U.S. policy is to sustain and enhance America’s AI dominance and directs agencies to revoke AI policies that act as barriers to innovation.

In July 2025, the White House released an AI Action Plan with more than 90 federal policy actions. Its pillars include accelerating AI adoption, building AI infrastructure, streamlining data-center permitting, exporting American AI technology and removing what it calls onerous federal regulations.

The deregulatory turn went further in December 2025, when the White House issued an executive order aimed at creating a national AI policy framework. It criticised state-by-state AI regulation, called for a minimally burdensome national standard, created an AI Litigation Task Force to challenge some state AI laws, and directed federal agencies to consider funding consequences for states with AI rules seen as burdensome.

This does not mean America has no regulation. U.S. companies still face privacy litigation, state AI laws, sectoral rules, copyright disputes, competition cases and product-liability risk. Congress also failed to impose a sweeping federal moratorium on state AI regulation when the Senate removed such a provision from a budget bill by a 99-1 vote. But the direction of travel is clear. The United States is treating AI as a national industrial race. Europe is treating it as a regulatory risk category.

That difference matters. Frontier AI requires huge amounts of capital, energy, compute, data, talent and risk appetite. A government that says “build faster” creates one kind of market. A government that says “classify, document, assess, disclose and prepare for enforcement” creates another.

Europe’s defenders often say its rules will become the global standard. Sometimes that is true. The so-called Brussels Effect is real: companies may comply with EU rules globally because it is easier than running separate systems. But being the global standard-setter is not the same as being the global winner. A continent can export rules while importing the future.

How Regulation Protects the Giants Europe Wants to Weaken

The irony is that Europe’s rulemaking often strengthens the very American companies it wants to discipline. Big Tech can hire compliance teams, lobbyists, policy specialists, safety researchers, auditors and outside counsel. It can spread fixed costs across billions of users. It can treat fines as business risk. It can negotiate with regulators for years.

A European challenger cannot. A young AI company cannot spend months deciding whether its model is general-purpose, high-risk, systemic-risk, prohibited or merely experimental. A small marketplace cannot build Meta-level content-moderation infrastructure. A connected-device startup cannot easily absorb cybersecurity certification, data-sharing obligations, privacy compliance and consumer-law duties before its first meaningful revenue.

This is how well-intentioned regulation becomes anti-competitive. Not because it is designed to help incumbents, but because fixed costs always favour scale. When Europe raises the legal cost of starting, it raises the value of already being large.

The same logic applies to cloud. European governments complain about dependence on Amazon Web Services, Microsoft Azure and Google Cloud. Yet building a European cloud competitor requires enormous capital expenditure, cheap energy, permissive infrastructure policy, deep enterprise procurement and a unified market. Europe has rarely had all of these at once. Draghi’s report noted that EU companies face electricity prices two to three times higher than U.S. companies and natural-gas prices four to five times higher. Data centres, chips and AI clusters are not built on speeches about sovereignty. They are built on power, permits, capital and customers.

The same applies to chips. Europe has ASML, a world-class Dutch company essential to advanced semiconductor manufacturing, but Europe remains dependent on Asian fabrication and American design ecosystems. Draghi’s report warned that Europe relies heavily on imported digital technology and that 75-90% of global semiconductor fabrication capacity is in Asia.

Europe’s answer to dependency is often another strategy, another package, another regulation, another sovereignty slogan. But sovereignty is not a white paper. Sovereignty is the ability to build, finance, power, sell and scale.

The New Frontier: Social-Media Age Limits, Adult Content and VPNs

The next phase of European digital regulation is moving beyond platforms and data into age verification, online identity and access control. Again, the goals are understandable. Parents are worried about children’s exposure to addictive design, pornography, self-harm content, scams and algorithmic manipulation. Many of those concerns are real. But the policy direction risks expanding the regulatory perimeter from companies to users themselves.

The European Parliament has called for an EU-wide minimum age of 16 for social media, video-sharing platforms and AI companions, with parental consent for ages 13 to 16. The report was non-legislative, but it also called for bans on certain addictive practices, restrictions on engagement-based recommendation systems for minors, and stronger enforcement against platforms that fail to comply with EU rules.

In May 2026, Reuters reported that Commission President Ursula von der Leyen was considering stronger child-protection measures, including a possible age limit for social media. The same report said the Commission’s planned Digital Fairness Act would target addictive and harmful design practices, manipulative features, influencer marketing and certain uses of AI.

This is not yet an EU-wide social-media ban. That distinction matters. But the political direction is clear: Europe is moving toward age-gated access to major parts of the internet.

Adult content is already at the centre of this shift. The Commission’s age-verification work describes a privacy-preserving system that allows users to prove they are over 18 when accessing legally age-restricted content such as pornography, gambling or alcohol. The Commission said the age-verification solution was feature-ready in April 2026 and could later be adapted to other age ranges. In July 2025, the Commission announced an age-verification blueprint, piloted with countries including Denmark, France, Greece, Italy and Spain, and said it was being tested with adult-content providers.
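To make the idea concrete, here is a minimal sketch of how a privacy-preserving age attestation could work, assuming a trusted issuer (for example a national eID scheme) that signs a bare “over 18” claim and a relying party that checks only the signature. The names, token format and use of the Python cryptography package are illustrative assumptions, not the Commission’s actual blueprint.

```python
# Illustrative sketch only: a trusted issuer signs a claim that says "over 18"
# and nothing else; the adult site verifies the signature without ever seeing
# the user's name or birth date. Requires the third-party `cryptography` package.
import json
import secrets

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def issue_age_token(issuer_key: Ed25519PrivateKey, over_18: bool) -> dict:
    """Issuer signs a claim containing only the age threshold and a random nonce."""
    claim = json.dumps({"over_18": over_18, "nonce": secrets.token_hex(16)})
    return {"claim": claim, "signature": issuer_key.sign(claim.encode()).hex()}


def verify_age_token(issuer_pub: Ed25519PublicKey, token: dict) -> bool:
    """Relying party checks the issuer's signature and reads only the claim."""
    try:
        issuer_pub.verify(bytes.fromhex(token["signature"]), token["claim"].encode())
    except InvalidSignature:
        return False
    return json.loads(token["claim"])["over_18"]


issuer_key = Ed25519PrivateKey.generate()           # held by the attestation issuer
issuer_pub = issuer_key.public_key()                # published for relying parties

token = issue_age_token(issuer_key, over_18=True)   # stored in the user's wallet/app
print(verify_age_token(issuer_pub, token))          # True; no identity is disclosed
```

Even in this toy version the trade-off is visible: the site learns nothing about the user, but everything depends on who controls the issuer and whether tokens can be linked across services.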

The Commission has also opened proceedings against major adult platforms under the DSA over suspected failures to prevent minors from accessing pornography. Reuters later reported that the Commission charged four adult platforms with DSA violations over age-verification failures. In France, Aylo suspended access to Pornhub, YouPorn and Redtube in 2025 in protest against French age-verification rules, arguing that the system raised privacy risks.

Here, too, Europe faces a trade-off. Stronger age checks may reduce harm to minors. But they may also normalise identity checks for ordinary internet access. Once infrastructure exists to prove age for pornography, it can be extended to social media, video platforms, app stores, gaming, messaging, AI companions and eventually political content. Policymakers may promise privacy-preserving systems, but history suggests that identity infrastructure tends to expand.

The VPN debate shows how quickly this can happen. A January 2026 European Parliamentary Research Service briefing noted that VPN use had surged among young users seeking to bypass age-verification rules. The same briefing said some argue VPN access should be restricted above a “digital age of majority,” while also stressing that VPNs are important for corporate security, privacy, remote work and freedom of information under authoritarian regimes. The briefing itself notes that EPRS publications do not represent an official position of the European Parliament.

So, no: Europe has not enacted a broad VPN ban. But yes: VPN restrictions are now part of the policy conversation around age assurance. That should worry anyone who cares about cybersecurity and civil liberties. VPNs are not merely tools for teenagers bypassing age gates. They are also used by journalists, dissidents, companies, lawyers, travellers and ordinary citizens trying to protect themselves on hostile networks.

This is the pattern of European digital policy: a legitimate concern produces a broad regulatory mechanism; the mechanism creates circumvention; circumvention then becomes the next regulatory target.

Europe’s regulatory instincts come from a sincere place. Europeans are more sceptical of corporate power than Americans. They place greater emphasis on dignity, privacy, consumer protection and social order. They are less willing to accept the Silicon Valley view that society should simply tolerate disruption and clean up later.

That moral instinct is not wrong. The internet did produce harms. Social platforms did amplify manipulation and addiction. Data brokers did build invasive surveillance markets. AI does create risks around fraud, discrimination, misinformation, intellectual property and labour disruption. A serious society cannot ignore these problems.

But Europe has confused seriousness with pre-emption. It increasingly tries to regulate entire technological categories before markets mature. That means lawmakers are often writing rules for products whose business models, technical limits and competitive dynamics are still unknown. The result is regulation based not only on actual harms, but on anticipated harms, theoretical harms and politically salient fears.

This is especially dangerous in AI. The technology is moving too quickly for rigid legal categories. A model that looks general-purpose in one context may be narrow in another. A system that seems low-risk at launch may become high-impact through integration. Open-source models, fine-tuning, agents, synthetic data, retrieval systems and edge deployment all complicate neat classifications. The more Europe tries to freeze AI into legal boxes, the more it risks making the continent unattractive for frontier work.

Meanwhile, America is not waiting. It is building compute clusters, financing model labs, accelerating data-centre permits, exporting AI infrastructure and using federal policy to defend national AI leadership. China is doing the same through state-backed industrial strategy. Europe is still debating how to make compliance elegant.

What Europe Should Do Instead

The answer is not deregulation in the crude sense. Europe should not abandon privacy, child safety, cybersecurity or competition law. A lawless internet would be worse. But Europe needs regulatory discipline. It needs to treat compliance burdens as economic costs, not as moral free goods.

First, Europe should introduce a regulatory budget for digital companies. Every new obligation should be accompanied by removal, consolidation or simplification of old obligations. If policymakers want AI documentation, cyber reporting, data-access duties and content-safety systems, they should measure how many founder-hours and euros those requirements consume.

Second, Europe should create genuine startup and scale-up safe harbours. Companies below a certain size should not face the same process burden as trillion-dollar platforms unless they create demonstrable high-risk harm. The law should distinguish between a global gatekeeper and a 20-person company trying to test a product.

Third, Europe should shift from paperwork compliance to outcome enforcement. Regulators should punish actual harm, deception, negligence and systemic abuse, but they should avoid forcing young firms to produce corporate-scale documentation before they have corporate-scale impact.

Fourth, Europe should build the missing industrial foundations: cheap energy, fast permits, deep capital markets, procurement access, compute infrastructure and a true single market. Draghi’s report repeatedly emphasised that Europe’s problem is not only regulation but also fragmentation, high energy costs and weak innovation financing.

Fifth, Europe should be careful with identity-based internet controls. Age verification for minors may be necessary in some contexts, especially for pornography. But Europe should not casually drift into a world where access to ordinary digital life requires constant proof of identity, age or eligibility. Any age-assurance system should be narrow, decentralised, privacy-preserving, auditable and resistant to mission creep.

Finally, Europe should stop mistaking enforcement against American companies for technological sovereignty. Fining Meta does not create a European Meta. Investigating Apple does not create a European iPhone. Regulating OpenAI does not create a European frontier model. Sovereignty requires productive capacity.

Europe’s tragedy is that it has many of the ingredients required to compete: talent, universities, capital, sophisticated industries, wealthy consumers and a strong public sector. But it has wrapped those ingredients in a legal culture that treats risk as something to be eliminated before growth begins. That is not how technological revolutions work.

The future will be built by jurisdictions that can tolerate uncertainty, mobilise capital, build infrastructure and correct mistakes quickly. Europe can still be one of them. But it must choose. It can remain the world’s referee, writing rules for games played by others. Or it can become a player again.

Right now, Europe regulates like a superpower and builds like a dependency. That is the imbalance it must fix.

