Meta has rejected the European Union's new AI code of practice, warning that the voluntary guidelines pose legal risks and threaten innovation. The refusal marks another point of friction between U.S. tech giants and European regulators as the bloc works to implement its landmark AI Act.
It comes as Apple fights its own battles with the EU over the Digital Markets Act (DMA), having been hit with a €500 million fine in April for anti-steering violations. Apple has also appealed a separate EU order mandating broader iOS interoperability, and recently announced it would delay some iOS 26 features in the region, citing regulatory hurdles.
In a post on LinkedIn, Meta's head of global affairs, Joel Kaplan, stated, "Europe is heading down the wrong path on AI." He argued that the code "introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act."
Rolled out earlier this month, the EU's voluntary AI code of practice is designed to help companies comply with the bloc's sprawling AI Act, covering areas like copyright protection and transparency. While adherence is optional, signing can offer companies greater legal certainty.
The pushback isn't limited to U.S. companies. The Trump administration has previously criticized the EU's tech regulations, with the White House calling recent fines "economic extortion." Dozens of European firms, including ASML and Airbus, have also signed an open letter asking the European Commission to suspend the AI Act's implementation for two years.
According to the European Commission, companies that choose not to sign the code "will have to demonstrate other means of compliance" and may face greater regulatory scrutiny. The rules covering general-purpose AI models, such as those powering ChatGPT, are set to take effect next month, putting major U.S. developers under heightened EU scrutiny.