Tech Giants Divided on EU AI Code as Compliance Deadline Nears
The EU's General-Purpose AI Code of Practice has revealed stark differences among leading tech firms. Microsoft has expressed its intent to adopt the European Union's voluntary AI compliance framework, while Meta has firmly declined, labeling the guidelines as excessive regulation that could hinder innovation.
Microsoft President Brad Smith told Reuters on Friday, "We’re likely to sign after reviewing the documents." Smith highlighted his company’s cooperative stance, noting, "We aim to support the initiative while appreciating the AI Office’s direct engagement with the industry."
In contrast, Meta’s Chief Global Affairs Officer, Joel Kaplan, stated on LinkedIn, "Meta will not sign. The code creates legal uncertainties for developers and includes measures that exceed the AI Act’s scope."
Kaplan warned that "Europe’s approach to AI is misguided" and cautioned that the EU AI code could "slow the development and rollout of advanced AI models in Europe, hampering European companies building on these technologies."
Early Adopters vs. Resisters
The tech industry’s split response underscores varied approaches to European regulatory compliance. OpenAI and Mistral have embraced the Code, positioning themselves as early supporters of the voluntary framework.
OpenAI affirmed its commitment, stating, "Adopting the Code reflects our dedication to delivering powerful, accessible, and secure AI models for Europeans to fully benefit from the Intelligence Age."
OpenAI is the second major AI firm to join the EU code of practice for general-purpose AI models, following Mistral, according to industry observers monitoring voluntary commitments.
Earlier this month, over 40 of Europe’s largest companies, including ASML Holding and Airbus, signed a letter urging the European Commission to delay the AI Act’s implementation by two years.
Code Requirements and Timeline
Published on July 10 by the European Commission, the code of practice seeks to provide legal clarity for companies developing general-purpose AI models before mandatory enforcement begins on August 2, 2025.
Developed by 13 independent experts with input from over 1,000 stakeholders, including model providers, small and medium-sized enterprises, academics, AI safety experts, rights-holders, and civil society groups, the voluntary framework sets clear guidelines.
The EU AI code outlines requirements in three key areas. Transparency obligations mandate providers to maintain detailed technical documentation for models and datasets, while copyright compliance requires clear policies on how training data is sourced and used under EU copyright laws.
For the most advanced models, classified as "GPAI with Systemic Risk" (GPAISR), such as OpenAI’s o3, Anthropic’s Claude 4 Opus, and Google’s Gemini 2.5 Pro, additional safety and security obligations apply.
Signatories must publish summaries of training content for their general-purpose AI models and establish policies to comply with EU copyright law. The framework also requires documenting training data sources, conducting thorough risk assessments, and creating governance structures to address potential AI system risks.
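To make these obligations concrete, a provider's internal compliance tracker might record them as a simple checklist per model. The sketch below is illustrative only: the field and duty names are assumptions for this article, not the Code's actual Model Documentation Form.

```python
from dataclasses import dataclass

# Hypothetical record of a signatory's duties under the Code; field names
# are illustrative assumptions, not the official Model Documentation Form.
@dataclass
class GPAIComplianceRecord:
    model_name: str
    training_data_summary: str         # public summary of training content
    copyright_policy_url: str          # policy for EU copyright compliance
    technical_docs_complete: bool      # model/dataset documentation on file
    systemic_risk: bool = False        # GPAISR classification
    risk_assessment_done: bool = False

    def outstanding_duties(self) -> list[str]:
        """List obligations not yet satisfied for this model."""
        duties = []
        if not self.training_data_summary:
            duties.append("publish training content summary")
        if not self.copyright_policy_url:
            duties.append("adopt EU copyright policy")
        if not self.technical_docs_complete:
            duties.append("complete technical documentation")
        if self.systemic_risk and not self.risk_assessment_done:
            duties.append("conduct systemic-risk assessment")
        return duties

record = GPAIComplianceRecord(
    model_name="example-model",
    training_data_summary="",
    copyright_policy_url="https://example.com/copyright-policy",
    technical_docs_complete=True,
    systemic_risk=True,
)
print(record.outstanding_duties())
```

For a GPAISR-classified model missing its training content summary, the tracker would flag the summary and the risk assessment as outstanding.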
Enforcement and Penalties
Non-compliance with the AI Act carries hefty penalties: the most serious violations can draw fines of up to €35 million or 7% of global annual turnover, whichever is higher. For GPAI model providers specifically, the European Commission may impose fines of up to €15 million or 3% of worldwide annual turnover.
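The "whichever is higher" rule means the effective ceiling depends on company size, which can be sketched as simple arithmetic. This is illustrative only; actual fines are set case by case by regulators.

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper bound of an AI Act fine: the greater of a fixed amount
    or a percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, pct * turnover_eur)

# General ceiling for the most serious violations: €35M or 7% of turnover.
# For a company with €1bn annual turnover, the percentage branch dominates.
print(fine_ceiling(1_000_000_000, 35_000_000, 0.07))  # 70000000.0

# GPAI-provider ceiling: €15M or 3% of turnover.
print(fine_ceiling(1_000_000_000, 15_000_000, 0.03))  # 30000000.0
```

For smaller firms the fixed amount is the binding cap: at €100 million turnover, 7% is only €7 million, so the €35 million figure applies.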
The Commission has indicated that adherence to an approved Code of Practice will streamline compliance, with the AI Office and national regulators focusing on verifying Code commitments rather than auditing every AI system. This encourages early adoption for companies seeking regulatory stability.
The EU AI code is part of the broader AI Act framework. Under the Act, obligations for providers of GPAI models (Chapter V, Articles 51–56) apply from August 2, 2025, twelve months after the Act entered into force. Providers of GPAI models already on the market before that date have until August 2, 2027 to comply.
Industry Impact and Global Implications
The varied responses signal that tech companies are pursuing distinct strategies for navigating global regulatory landscapes. Microsoft’s collaborative approach contrasts sharply with Meta’s defiant stance, potentially shaping how major AI developers engage with international regulations.
Despite the opposition, the European Commission remains firm, maintaining that the AI Act is vital for consumer safety and trust in emerging technologies and rejecting calls for a delay.
The EU AI code’s voluntary phase offers companies a chance to shape regulatory development through participation. However, mandatory enforcement starting in August 2025 will require compliance regardless of voluntary adoption.
For companies operating globally, the EU framework could influence worldwide AI governance standards, aligning with initiatives like the G7 Hiroshima AI Process and various national AI strategies, potentially positioning European standards as global benchmarks.
Looking Ahead
In the near term, EU authorities, including the European Commission and Member States, will review the Code’s adequacy, with a final endorsement expected by August 2, 2025.
The regulatory framework poses significant implications for global AI development, as companies balance innovation with compliance across jurisdictions. The differing responses to the voluntary code signal potential challenges as mandatory requirements take effect.
See also: Navigating the EU AI Act: Implications for UK businesses
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.