EU AI Act Compliance: A Practical Checklist
The EU AI Act is now in force, with obligations phasing in between 2025 and 2027. If you're deploying AI systems in Europe, you need a compliance strategy: not just legal review, but operational changes to how you build, document, and monitor AI.
Here's a practical breakdown.
Risk Categories
The Act classifies AI systems into four tiers:
Unacceptable Risk (Banned)
- Social scoring by governments
- Subliminal manipulation
- Exploitation of vulnerabilities
- Real-time remote biometric identification in publicly accessible spaces (with narrow exceptions for law enforcement)
Most enterprises won't touch these.
High Risk
- Employment/recruitment decisions
- Credit scoring
- Law enforcement tools
- Critical infrastructure control
- Educational assessment
- Biometric identification/categorization
These require conformity assessment, documentation, and ongoing monitoring.
Limited Risk (Transparency Obligations)
- Chatbots and deepfakes (must disclose AI use)
- Emotion recognition systems
- Biometric categorization
Lighter requirements, but disclosure is mandatory.
Minimal Risk
- Spam filters, inventory management, recommendation engines
No specific obligations beyond general product liability.
Compliance Checklist for High-Risk Systems
If you're in the high-risk category, you must:
1. Risk Management System
- Document potential harms and mitigation strategies
- Test for bias across protected attributes (gender, ethnicity, age, etc.; sketch below)
- Establish monitoring for drift and performance degradation
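As one way to make the bias-testing item concrete, here is a minimal sketch using pandas to compare selection rates across a protected attribute. The column names (`gender`, `approved`) and the 80% disparate-impact heuristic are illustrative assumptions, not thresholds defined by the Act.

```python
import pandas as pd

def selection_rate_report(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Selection rate (share of positive outcomes) per protected group."""
    report = df.groupby(group_col)[outcome_col].mean().rename("selection_rate").to_frame()
    # Ratio vs. the most-favored group; the 80% "four-fifths rule" is a
    # common fairness heuristic, not a threshold set by the AI Act.
    report["ratio_vs_max"] = report["selection_rate"] / report["selection_rate"].max()
    return report

# Hypothetical loan decisions with an illustrative protected attribute.
decisions = pd.DataFrame({
    "gender":   ["f", "m", "f", "m", "f", "m", "f", "m"],
    "approved": [1, 1, 0, 1, 0, 1, 1, 1],
})

report = selection_rate_report(decisions, "gender", "approved")
print(report)
flagged = report[report["ratio_vs_max"] < 0.8]
if not flagged.empty:
    print("Potential disparate impact for groups:", list(flagged.index))
```

The same per-group breakdown, recomputed on production data over time, doubles as a simple starting point for the drift monitoring in the third item.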
2. Data Governance
- Ensure training data is representative and free of bias
- Document data sources, preprocessing, and validation
- Implement data quality checks throughout the lifecycle (sketch below)
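One hedged sketch of what "data quality checks throughout the lifecycle" can look like in practice: a small validation function run at ingestion and before each retraining. The schema, column names, and thresholds below are assumptions for illustration.

```python
import pandas as pd

def validate_training_data(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues; an empty list means all checks passed.

    Schema and thresholds are illustrative, not mandated by the Act.
    """
    issues = []
    required_cols = {"applicant_id", "income", "gender", "label"}
    missing = required_cols - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
    if "applicant_id" in df.columns and df["applicant_id"].duplicated().any():
        issues.append("duplicate applicant_id values")
    for col, rate in df.isna().mean().items():
        if rate > 0.05:  # illustrative 5% null tolerance
            issues.append(f"{col}: {rate:.1%} nulls exceeds tolerance")
    # Representativeness check: protected groups should all be present.
    if "gender" in df.columns and df["gender"].nunique() < 2:
        issues.append("training data covers only one gender group")
    return issues
```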
3. Technical Documentation
- Architecture diagrams and design choices
- Model cards (training data, performance metrics, limitations; sketch below)
- Version control and reproducibility
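Model cards don't need heavy tooling; a versioned JSON record stored next to the model artifact is often enough to start. All field names and values below are hypothetical and follow common model-card practice rather than a format prescribed by the Act.

```python
import json
from datetime import date

# Hypothetical model card; fields follow common model-card practice,
# not a format prescribed by the AI Act.
model_card = {
    "model_name": "credit-scoring-v3",
    "version": "3.2.0",
    "training_data": {
        "sources": ["internal loan applications 2019-2023"],
        "preprocessing": "deduplicated, nulls imputed, see pipeline v1.4",
    },
    "performance": {"auc": 0.87, "evaluated_on": "holdout-2024Q1"},
    "limitations": [
        "not validated for applicants under 21",
        "performance degrades on thin-file applicants",
    ],
    "intended_use": "pre-screening only; final decisions require human review",
    "date": date.today().isoformat(),
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```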
4. Transparency & Human Oversight
- Clear instructions for users
- Human-in-the-loop for high-stakes decisions (sketch below)
- Ability to override or contest AI decisions
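A common pattern for human-in-the-loop oversight is confidence-based routing: the model decides only when it is confident, and everything else goes to a reviewer who can also override automated outcomes. The threshold and the `Decision` type below are assumptions for illustration.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90  # illustrative; calibrate per use case

@dataclass
class Decision:
    outcome: str        # "approved", "rejected", or "pending_review"
    confidence: float
    decided_by: str     # "model" or "human"

def route_decision(score: float) -> Decision:
    """Auto-decide only at high confidence; route the rest to human review."""
    confidence = max(score, 1 - score)
    if confidence >= REVIEW_THRESHOLD:
        outcome = "approved" if score >= 0.5 else "rejected"
        return Decision(outcome, confidence, decided_by="model")
    # Low confidence: a human reviewer makes (and can always override) the call.
    return Decision("pending_review", confidence, decided_by="human")

print(route_decision(0.97))  # decided by the model
print(route_decision(0.62))  # routed to human review
```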
5. Accuracy, Robustness, Cybersecurity
- Performance benchmarks and ongoing validation (sketch below)
- Adversarial testing and security audits
- Logging and auditability
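For ongoing validation, one lightweight approach is a release gate that fails CI whenever a candidate model drops below agreed benchmarks. The metric names and floors here are assumptions; pick ones that match your risk analysis.

```python
import sys

# Illustrative release gate: fail CI if a candidate model regresses below
# agreed benchmarks. Metric names and floors are assumptions.
BENCHMARKS = {"auc": 0.85, "recall_at_fpr_1pct": 0.60}

def validate(metrics: dict[str, float]) -> list[str]:
    """Return a human-readable failure for every benchmark not met."""
    return [
        f"{name}: {metrics.get(name, 0.0):.3f} < required {floor:.3f}"
        for name, floor in BENCHMARKS.items()
        if metrics.get(name, 0.0) < floor
    ]

candidate_metrics = {"auc": 0.87, "recall_at_fpr_1pct": 0.55}  # from your eval job
failures = validate(candidate_metrics)
if failures:
    print("Benchmark validation failed:")
    print("\n".join(failures))
    sys.exit(1)
print("All benchmarks met.")
```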
6. Record-Keeping
- Automatic logging of system events (inputs, outputs, decisions; sketch below)
- Retention periods aligned with regulatory requirements
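A minimal sketch of automatic event logging: append-only JSON lines with UTC timestamps, one file per day, plus a retention sweep. The file layout and the two-year retention period are assumptions; align the real retention period with your regulatory obligations.

```python
import json
import time
from datetime import datetime, timezone
from pathlib import Path

LOG_DIR = Path("ai_event_logs")   # illustrative layout
RETENTION_DAYS = 365 * 2          # assumption: confirm with counsel

def log_event(inputs: dict, output: dict, model_version: str) -> None:
    """Append one decision event as a JSON line, one file per day."""
    LOG_DIR.mkdir(exist_ok=True)
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    day_file = LOG_DIR / f"{datetime.now(timezone.utc):%Y-%m-%d}.jsonl"
    with day_file.open("a") as f:
        f.write(json.dumps(event) + "\n")

def sweep_expired_logs() -> None:
    """Delete log files older than the retention period."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    for path in LOG_DIR.glob("*.jsonl"):
        if path.stat().st_mtime < cutoff:
            path.unlink()

log_event({"income": 52000}, {"score": 0.91, "decision": "approved"}, "credit-scoring-v3")
```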
What This Means for Sovereign AI
The Act's requirements align naturally with sovereign AI principles:
- On-premise/hybrid deployments - Make auditing and data governance easier
- Explainable models - High-risk systems must be transparent enough for users to interpret their outputs
- Local control - Simplifies compliance vs. relying on third-party APIs with opaque internals
Getting Started
- Classify your systems - Map AI use cases to risk tiers; this is often non-obvious, so consult legal counsel (a sketch of a basic inventory follows this list)
- Gap analysis - Compare current practices to Act requirements
- Technical roadmap - Implement missing controls (logging, bias testing, documentation) and build them into your development lifecycle
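To make the classification step concrete, here is a minimal sketch of a risk-tier inventory. The use-case-to-tier mapping is a rough heuristic based on the categories above; a real inventory needs per-system legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Rough heuristic mapping from use case to tier, based on the Act's
# categories. A real inventory needs legal review per system.
INVENTORY = {
    "resume screening":         RiskTier.HIGH,     # employment decisions
    "credit scoring":           RiskTier.HIGH,
    "customer support chatbot": RiskTier.LIMITED,  # disclosure required
    "spam filtering":           RiskTier.MINIMAL,
}

for use_case, tier in INVENTORY.items():
    print(f"{use_case:<25} -> {tier.value}")
```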
At Onyx, we help organizations navigate both the regulatory and technical sides of AI Act compliance. Reach out if you'd like a readiness assessment.