Tech Giants Divided on EU AI Code as Compliance Deadline Nears
The EU's AI General-Purpose Code of Practice has revealed stark differences among leading tech firms. Microsoft has expressed its intent to adopt the European Union's voluntary AI compliance framework, while Meta has firmly declined, labeling the guidelines as excessive regulation that could hinder innovation.
Microsoft President Brad Smith told Reuters on Friday, "We’re likely to sign after reviewing the documents." Smith highlighted his company’s cooperative stance, noting, "We aim to support the initiative while appreciating the AI Office’s direct engagement with the industry."
In contrast, Meta’s Chief Global Affairs Officer, Joel Kaplan, stated on LinkedIn, "Meta will not sign. The code creates legal uncertainties for developers and includes measures that exceed the AI Act’s scope."
Kaplan warned that "Europe’s approach to AI is misguided" and cautioned that the EU AI code could "slow the development and rollout of advanced AI models in Europe, hampering European companies building on these technologies."
Early Adopters vs. Resisters
The tech industry’s split response underscores varied approaches to European regulatory compliance. OpenAI and Mistral have embraced the Code, positioning themselves as early supporters of the voluntary framework.
OpenAI affirmed its commitment, stating, "Adopting the Code reflects our dedication to delivering powerful, accessible, and secure AI models for Europeans to fully benefit from the Intelligence Age."
OpenAI is the second major AI firm to join the EU code of practice for general-purpose AI models, following Mistral, according to industry observers monitoring voluntary commitments.
Earlier this month, over 40 of Europe’s largest companies, including ASML Holding and Airbus, signed a letter urging the European Commission to delay the AI Act’s implementation by two years.
Code Requirements and Timeline
Published on July 10 by the European Commission, the code of practice seeks to provide legal clarity for companies developing general-purpose AI models before mandatory enforcement begins on August 2, 2025.
Developed by 13 independent experts with input from over 1,000 stakeholders, including model providers, small and medium-sized enterprises, academics, AI safety experts, rights-holders, and civil society groups, the voluntary framework sets clear guidelines.
The EU AI code outlines requirements in three key areas. Transparency obligations require providers to maintain detailed technical documentation for models and datasets, while copyright compliance demands clear policies on how training data is sourced and used under EU copyright law.
For the most advanced models, classified as "GPAI with Systemic Risk" (GPAISR), such as OpenAI’s o3, Anthropic’s Claude 4 Opus, and Google’s Gemini 2.5 Pro, additional safety and security obligations apply.
Signatories must publish summaries of training content for their general-purpose AI models and establish policies to comply with EU copyright law. The framework also requires documenting training data sources, conducting thorough risk assessments, and creating governance structures to address potential AI system risks.
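The documentation duties above can be pictured as a structured record per model. The sketch below is purely illustrative: the field names are hypothetical and do not reproduce the Commission's official template, but they show the kind of machine-readable summary a provider might maintain.

```python
from dataclasses import dataclass, asdict

@dataclass
class TrainingContentSummary:
    """Hypothetical record sketching information a public training-content
    summary might capture. Field names are illustrative assumptions, not
    the European Commission's official documentation form."""
    model_name: str
    data_sources: list  # e.g. broad categories of datasets used in training
    copyright_policy_url: str
    risk_assessment_completed: bool = False

# Example entry for a fictional general-purpose model
summary = TrainingContentSummary(
    model_name="example-gpai-1",
    data_sources=["licensed text corpora", "publicly available web data"],
    copyright_policy_url="https://example.com/copyright-policy",
)
record = asdict(summary)  # plain dict, ready to serialize for publication
```

A real summary would follow whatever template the AI Office ultimately prescribes; the point is that the Code pushes providers toward documentation that is systematic enough to audit.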
Enforcement and Penalties
Non-compliance with the AI Act carries hefty penalties: for the most serious violations, fines of up to €35 million or 7% of global annual turnover, whichever is higher. For GPAI model providers specifically, the European Commission may impose fines of up to €15 million or 3% of worldwide annual turnover.
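Both fine caps follow the same "whichever is higher" rule, which can be expressed as a one-line calculation. The turnover figure below is an arbitrary example, not drawn from any company in this article.

```python
def fine_cap(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Maximum applicable fine: the fixed amount or the given percentage
    of worldwide annual turnover, whichever is higher."""
    return max(fixed_eur, pct * turnover_eur)

# For a hypothetical firm with EUR 1 billion annual turnover:
general = fine_cap(1_000_000_000, 35_000_000, 0.07)  # serious violations cap
gpai = fine_cap(1_000_000_000, 15_000_000, 0.03)     # GPAI provider cap
```

For large firms the percentage dominates; the fixed amounts bind only when turnover is small (here, the 7% cap kicks in above €500 million in turnover).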
The Commission has indicated that adherence to an approved Code of Practice will streamline compliance, with the AI Office and national regulators focusing on verifying Code commitments rather than auditing every AI system. This encourages early adoption for companies seeking regulatory stability.
The EU AI code is part of the broader AI Act framework. Under the AI Act, obligations for GPAI models, outlined in Articles 51–55, become applicable on August 2, 2025, twelve months after the Act entered into force. Providers of GPAI models already on the market must comply by August 2, 2027.
Industry Impact and Global Implications
The varied responses signal that tech companies are pursuing distinct strategies for navigating global regulatory landscapes. Microsoft’s collaborative approach contrasts sharply with Meta’s defiant stance, potentially shaping how major AI developers engage with international regulations.
Despite opposition, the European Commission remains firm, rejecting calls for a delay and emphasizing that the AI Act is vital for consumer safety and trust in emerging technologies.
The EU AI code’s voluntary phase offers companies a chance to shape regulatory development through participation. However, mandatory enforcement starting in August 2025 will require compliance regardless of voluntary adoption.
For companies operating globally, the EU framework could influence worldwide AI governance standards, aligning with initiatives like the G7 Hiroshima AI Process and various national AI strategies, potentially positioning European standards as global benchmarks.
Looking Ahead
In the near term, EU authorities, including the European Commission and Member States, will review the Code’s adequacy, with a final endorsement expected by August 2, 2025.
The regulatory framework carries significant implications for global AI development, as companies balance innovation with compliance across jurisdictions. The differing responses to the voluntary code signal potential challenges as mandatory requirements take effect.
See also: Navigating the EU AI Act: Implications for UK businesses
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.