EU AI Act Moves Forward as Google and xAI Embrace Code of Practice Before Compliance Deadline

European Commission

The EU’s landmark AI Act gains momentum as obligations for general-purpose AI models take effect on August 2, 2025. Google has formally signed the full voluntary Code of Practice, while Elon Musk’s xAI has agreed to sign only its safety and security chapter. Meta still refuses to participate. (Reuters; Reuters)

Google joins OpenAI and Anthropic in endorsing the Code, which is intended to support the AI Act’s transparency, copyright, and safety standards. Even so, company executives caution that certain provisions could hamper innovation and expose trade secrets. (Financial Times; Reuters)


What the Code of Practice Means

The voluntary Code offers a compliance roadmap for the AI Act provisions that take effect on August 2. It outlines obligations covering transparency documentation, lawful training data sourcing, model safety, and incident reporting. (Reuters; Financial Times)

xAI’s decision to sign only the safety and security chapter amounts to a partial endorsement: the company focuses on risk mitigation while rejecting transparency and copyright terms it considers overly restrictive. (Reuters)


Timeline and Enforcement

The AI Act entered into force on August 1, 2024. Its obligations for general-purpose AI (GPAI) models are now in effect, with full enforcement for new models starting August 2, 2026. Rules for high-risk AI, including conformity assessments, will apply in later stages. (arXiv analysis; Reuters)

The European Commission has firmly ruled out any delay in implementation, despite lobbying from tech companies seeking a pause. (Nemko Digital report)


Systemic Risk Model Guidelines

On July 18, the Commission issued guidelines specifying obligations for models deemed to carry systemic risk. Companies must conduct adversarial testing, report incidents, and undergo cybersecurity assessments, with a one-year grace period for compliance running until August 2026. (Reuters)


Industry Divides and Implications

The industry response is split. Meta rejects the Code entirely, citing uncertainty and scope creep. Google and OpenAI support it as a way to demonstrate trustworthy AI development. xAI’s selective signing reflects a focus on safety while keeping its distance from the broader obligations. (Reuters; Reuters)

Policy experts warn that organizations using or integrating AI in EU markets may be indirectly affected by these rules, even if they are not model providers themselves. (ITPro)


What Comes Next

Companies must now finalize their compliance roadmaps by August 2. Signing the Code may offer greater legal certainty under the AI Act and signal alignment with EU governance expectations. (EU policy release)

As enforcement unfolds, businesses and regulators worldwide will be watching closely, with the EU setting a precedent for global AI oversight and governance standards. (arXiv review)

Sources: Reuters (Google signs Code), Reuters (xAI partial sign), Financial Times, Reuters (systemic risk guidelines), Nemko Digital (delay ruled out), arXiv analysis of AI Act, EU Commission release
