
EU AI Act Compliance: A Practical Checklist

• Onyx Team
Tags: EU AI Act, Compliance, Regulation, Risk Management

The EU AI Act is now in force, with obligations phasing in between 2025 and 2027 and most high-risk requirements applying from 2026. If you’re deploying AI systems in Europe, you need a compliance strategy: not just legal review, but operational changes to how you build, document, and monitor AI.

Here’s a practical breakdown.

🎯 Risk Categories

The Act classifies AI systems into four tiers:

🚫 Unacceptable Risk (Banned)

  • Social scoring by governments
  • Subliminal manipulation
  • Exploitation of vulnerabilities
  • Real-time biometric identification in public spaces (with narrow exceptions for law enforcement)

Most enterprises won’t touch these.

⚠️ High Risk

  • Employment/recruitment decisions
  • Credit scoring
  • Law enforcement tools
  • Critical infrastructure control
  • Educational assessment
  • Biometric identification/categorization

These require conformity assessment, documentation, and ongoing monitoring.

πŸ‘οΈ Limited Risk (Transparency Obligations)

  • Chatbots and deepfakes (must disclose AI use)
  • Emotion recognition systems
  • Biometric categorization

Lighter requirements, but disclosure is mandatory.

✅ Minimal Risk

  • Spam filters, inventory management, recommendation engines

No specific obligations beyond general product liability.

✅ Compliance Checklist for High-Risk Systems

If you’re in the high-risk category, you must:

1️⃣ Risk Management System

  • 📊 Document potential harms and mitigation strategies
  • 🔍 Test for bias across protected attributes (gender, ethnicity, age, etc.); a minimal check is sketched after this list
  • 📈 Establish monitoring for drift and performance degradation
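
To make the bias-testing item concrete, here’s a minimal sketch of a demographic parity check over a log of automated decisions. The column names, data, and threshold are all illustrative; a real audit would cover additional metrics (equalized odds, calibration) and statistical significance tests.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in positive-outcome rates between any two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decision log: one row per automated decision.
decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "M"],
    "approved": [1,    1,   0,   1,   1,   1,   0,   1],
})

gap = demographic_parity_gap(decisions, "gender", "approved")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # threshold is illustrative; document yours in the risk file
    print("WARNING: gap exceeds documented threshold - investigate before release")
```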

2️⃣ Data Governance

  • 🎯 Ensure training data is representative and free of bias
  • 📝 Document data sources, preprocessing, and validation
  • ✓ Implement data quality checks throughout the lifecycle (one such check is sketched below)
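
As one example of a lifecycle data-quality check, the sketch below validates a batch of records against a hypothetical schema (applicant_id, income, age). Dedicated tools such as Great Expectations or pandera cover this ground more thoroughly; the point is that checks run automatically and produce reviewable findings.

```python
import pandas as pd

REQUIRED_COLUMNS = ("applicant_id", "income", "age")  # hypothetical schema

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of findings; an empty list means all checks passed."""
    findings = []
    # Completeness: required fields present and non-null.
    for col in REQUIRED_COLUMNS:
        if col not in df.columns:
            findings.append(f"{col}: column missing")
        elif df[col].isna().any():
            findings.append(f"{col}: {int(df[col].isna().sum())} missing values")
    # Validity: values inside plausible ranges.
    if "age" in df.columns and not df["age"].between(18, 120).all():
        findings.append("age: values outside the 18-120 range")
    # Uniqueness: one record per applicant.
    if "applicant_id" in df.columns and df["applicant_id"].duplicated().any():
        findings.append("applicant_id: duplicate records")
    return findings
```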

3️⃣ Technical Documentation

  • πŸ—οΈ Architecture diagrams and design choices
  • πŸƒ Model cards (training data, performance metrics, limitations)
  • πŸ”„ Version control and reproducibility
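
A model card can be as simple as a structured record committed alongside the model artifact. The sketch below uses a plain dataclass, and every field value is hypothetical; published model-card templates add further sections (ethical considerations, caveats).

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    model_name: str
    version: str
    training_data: str                                # sources, time range, preprocessing
    intended_use: str
    performance: dict = field(default_factory=dict)   # metric name -> value
    limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="credit-risk-scorer",
    version="2.3.1",
    training_data="EU consumer loan applications, 2019-2023, PII removed",
    intended_use="Pre-screening of consumer credit applications",
    performance={"auc": 0.87, "demographic_parity_gap": 0.03},
    limitations=["Not validated for business loans",
                 "Lower accuracy for thin-file applicants"],
)

# Commit the card next to the model artifact so every version is documented.
with open(f"model_card_{card.version}.json", "w") as f:
    json.dump(asdict(card), f, indent=2)
```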

4️⃣ Transparency & Human Oversight

  • 📖 Clear instructions for users
  • 👀 Human-in-the-loop for high-stakes decisions (a routing sketch follows this list)
  • 🔓 Ability to override or contest AI decisions
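
Human oversight is easiest to enforce when it’s built into the decision path itself. Here’s a sketch of confidence-based routing: the model decides only when it’s confident, and everything else queues for a reviewer who can override. The threshold and field names are illustrative.

```python
from dataclasses import dataclass

REVIEW_FLOOR = 0.90  # confidence below this goes to a human; value is illustrative

@dataclass
class Decision:
    outcome: str       # "approve", "reject", or "pending_review"
    confidence: float
    decided_by: str    # "model" or "human"

def route(score: float, threshold: float = 0.5) -> Decision:
    """Let the model decide only when it is confident; otherwise escalate."""
    confidence = max(score, 1.0 - score)
    if confidence < REVIEW_FLOOR:
        # Queue for a reviewer, who can also override or annotate the decision.
        return Decision("pending_review", confidence, "human")
    outcome = "approve" if score >= threshold else "reject"
    return Decision(outcome, confidence, "model")

print(route(0.97))   # confident -> decided_by='model'
print(route(0.55))   # low confidence -> routed to human review
```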

5️⃣ Accuracy, Robustness, Cybersecurity

  • 🎯 Performance benchmarks and ongoing validation; a release-gate sketch follows this list
  • 🛡️ Adversarial testing and security audits
  • 📋 Logging and auditability
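
Ongoing validation works best as an automated gate. The sketch below blocks a release when any metric drops below the floors documented in your conformity assessment; the metric names and values are illustrative.

```python
# Performance floors from the conformity assessment file (values illustrative).
THRESHOLDS = {"accuracy": 0.90, "recall": 0.85, "auc": 0.88}

def validate_release(metrics: dict[str, float]) -> None:
    """Block the release pipeline if any metric falls below its documented floor."""
    failures = [
        f"{name}: {metrics.get(name, 0.0):.3f} < {floor:.3f}"
        for name, floor in THRESHOLDS.items()
        if metrics.get(name, 0.0) < floor
    ]
    if failures:
        raise SystemExit("Release blocked:\n" + "\n".join(failures))

# Runs in CI on every candidate model; this call passes silently.
validate_release({"accuracy": 0.93, "recall": 0.88, "auc": 0.91})
```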

6️⃣ Record-Keeping

  • 💾 Automatic logging of system events (inputs, outputs, decisions); see the sketch after this list
  • ⏱️ Retention periods aligned with regulatory requirements
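
A minimal approach to both items is an append-only JSON Lines log plus a scheduled retention job. Everything here is a sketch: the Act sets minimum retention for high-risk system logs while GDPR can cap how long you hold personal data, so pick the window with counsel, and a production system would archive rather than delete.

```python
import json
import time
from pathlib import Path

LOG_PATH = Path("audit_log.jsonl")
RETENTION_DAYS = 365  # set from your legal team's retention analysis

def log_event(inputs: dict, output: str, model_version: str) -> None:
    """Append one decision record; JSON Lines keeps the log append-only."""
    record = {
        "ts": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

def purge_expired(path: Path = LOG_PATH) -> None:
    """Drop records past the retention window (a real system would archive first)."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    kept = [line for line in path.read_text().splitlines()
            if json.loads(line)["ts"] >= cutoff]
    path.write_text("\n".join(kept) + ("\n" if kept else ""))

log_event({"income": 52000, "age": 34}, "approve", "2.3.1")
```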

🤝 What This Means for Sovereign AI

The Act’s requirements align naturally with sovereign AI principles:

  • 🏢 On-premise/hybrid deployments - Make auditing and data governance easier
  • 💡 Explainable models - High-risk systems must be transparent enough for users to interpret and act on their output
  • 🔐 Local control - Simplifies compliance compared with third-party APIs that have opaque internals

🚀 Getting Started

  1. πŸ—‚οΈ Classify your systems - Map AI use cases to risk tiers (this is often non-obvious, consult legal counsel)
  2. πŸ”Ž Gap analysis - Compare current practices to Act requirements
  3. πŸ› οΈ Technical roadmap - Implement missing controls (logging, bias testing, documentation), build it into your development lifecycle

At Onyx, we help organizations navigate both the regulatory and technical sides of AI Act compliance. Reach out if you’d like a readiness assessment.