Meta has chosen not to endorse the European Union’s code of practice for its AI Act, just weeks before the implementation of the bloc’s regulations for providers of general-purpose AI models.
“Europe is heading down the wrong path on AI,” Joel Kaplan, Meta’s chief global affairs officer, wrote in a post on LinkedIn. After carefully reviewing the European Commission’s Code of Practice for general-purpose AI models, he said, Meta has decided not to sign it. Kaplan cited concerns about legal uncertainties for model developers and measures introduced by the Code that he believes extend far beyond the scope of the AI Act.
The EU’s code of practice, released recently as a voluntary framework, aims to assist companies in adhering to the bloc’s AI regulations. It requires companies to provide and update documentation on their AI tools, prohibits training AI on pirated content, and mandates compliance with content owners’ requests regarding their works in data sets.
Kaplan criticized the EU’s approach to implementing the legislation, labeling it as “over-reach.” He argued that the law could hinder the development and deployment of advanced AI models in Europe, impacting European companies seeking to leverage such technology.
The AI Act, a risk-based regulation for AI applications, prohibits certain “unacceptable risk” use cases like cognitive behavioral manipulation and social scoring. It also identifies “high-risk” applications such as biometrics, facial recognition, education, and employment, requiring developers to register AI systems and meet quality management obligations.
Major tech companies worldwide, including Alphabet, Meta, Microsoft, and Mistral AI, have been pushing back against the regulations, urging the European Commission to delay the rollout. Despite the opposition, the Commission has maintained its timeline for implementation.
In addition, the EU released guidelines for providers of AI models ahead of the upcoming rules taking effect on August 2. These guidelines will impact providers of “general-purpose AI models with systemic risk,” such as OpenAI, Anthropic, Google, and Meta. Companies with such models in the market before August 2 must comply with the legislation by August 2, 2027.