Meta refuses to sign the EU's AI Code of Practice as August enforcement approaches
Photo: greensefa / Flickr / CC BY 2.0
Why it matters
  • The EU AI Act’s supervision powers over general-purpose AI model providers take effect on August 2, 2026. The Code of Practice is the primary compliance mechanism for that supervision, so a refusal to sign amounts to a declaration of intent to contest enforcement.
  • Fines under the AI Act reach up to €35 million or 7 percent of global annual revenue, whichever is higher. For Meta, a company with roughly $160 billion in 2025 revenue, the maximum would be approximately $11 billion.
  • OpenAI, Google, Midjourney, and Runway have all engaged with the Code. Meta’s refusal isolates it as the largest AI lab taking a directly confrontational posture toward European regulators.

Meta Platforms has formally refused to sign the European Union’s voluntary Code of Practice for general-purpose AI model providers, with Chief Global Affairs Officer Joel Kaplan stating in a public letter that the Code “introduces legal uncertainties for model developers and measures that go beyond the scope of the AI Act.” The refusal, disclosed in late April, positions Meta as the outlier among major AI labs that have engaged with Brussels: OpenAI, Google, Midjourney, Runway, and ElevenLabs have all entered the Code process, despite their objections to individual provisions.

The Code of Practice was developed by the European AI Office — the Commission body created to coordinate enforcement of the AI Act’s general-purpose AI provisions — in consultation with developers, civil society, and member state authorities. It operationalises requirements in Chapter V of the AI Act covering systemic-risk AI models: transparency obligations, incident reporting, adversarial testing, and cybersecurity measures. The AI Office acknowledged the Code is imperfect but described it as the established compliance pathway for the August 2 enforcement date.

What the August deadline means

On August 2, 2026, the Commission’s supervision and enforcement powers against general-purpose AI model providers formally activate. After that date, the AI Office can require providers to submit technical documentation, undergo external audits, and implement measures to reduce systemic risk. Non-compliance is subject to fines of up to €35 million or 7 percent of global annual revenue — whichever is higher. For Meta, with approximately $160 billion in 2025 revenue, the 7 percent ceiling would represent roughly $11 billion.
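The fine ceiling described above can be checked with a back-of-the-envelope calculation. The sketch below uses the article's approximate figures (Meta's ~$160 billion revenue and a rough USD conversion of the €35 million floor are the article's numbers, not official values):

```python
def max_fine_usd(global_annual_revenue_usd: float,
                 fixed_cap_usd: float = 38e6) -> float:
    """AI Act ceiling for GPAI providers: the higher of a fixed
    EUR 35M amount (roughly converted to USD here) or 7 percent
    of global annual revenue."""
    return max(fixed_cap_usd, 0.07 * global_annual_revenue_usd)

# Meta, per the article's ~$160 billion 2025 revenue figure:
print(round(max_fine_usd(160e9) / 1e9, 1))  # 11.2, i.e. roughly $11 billion
```

For any provider with revenue above about $543 million, the 7 percent prong dominates the fixed cap, which is why the revenue-based figure is the one that matters for Meta.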

By refusing to sign the Code, Meta does not automatically become non-compliant after August 2; it will still need to demonstrate compliance with the underlying AI Act requirements through some other mechanism. But it has made that demonstration more adversarial — the AI Office’s default position on Code signatories is cooperative scrutiny; its default position on non-signatories is investigation. The company’s legal team presumably regards that risk as preferable to the operational constraints the Code would impose on its Llama model series, which is deployed across Meta’s social platforms and licensed to third-party developers globally.

The broader regulatory exposure

Meta’s refusal arrives as EU antitrust regulators are already scrutinising the “entire” AI operations of major technology companies for potential market-distorting effects. Competition Commissioner Teresa Ribera has raised concerns about how AI enables large companies to “entrench corporate power” in ways that existing competition law was not designed to address. Pursuing an adversarial AI Act compliance path while simultaneously under competition scrutiny creates regulatory exposure wider than any single enforcement action.