Framework
The EU AI Act in Practice
Regulation (EU) 2024/1689 establishes the EU's harmonized rules on artificial intelligence. It classifies AI systems as prohibited, high-risk, limited-risk, or minimal-risk and imposes obligations proportionate to each tier.
Risk tiers
- Prohibited
- Social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and practices that exploit the vulnerabilities of specific groups.
- High-risk
- AI systems used in critical infrastructure, education, employment, essential services, law enforcement, migration, the administration of justice, and democratic processes. Also covers AI in products already regulated under EU safety law (e.g., medical devices, autonomous vehicles).
- Limited-risk
- Chatbots, deepfakes, and emotion-recognition systems. Transparency obligations apply: users must be informed they are interacting with or viewing AI output.
- Minimal-risk
- Most AI applications. No specific obligations beyond existing law.
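The four tiers above can be sketched as a simple lookup. A minimal illustration, assuming keyword categories drawn only from the summary above; a real classification requires legal analysis of the Act's Annexes, not string matching:

```python
# Sketch of the EU AI Act's four-tier risk model.
# Keyword sets are illustrative, mirroring the bullet lists above.
PROHIBITED = {"social scoring", "exploitative manipulation"}
HIGH_RISK = {"critical infrastructure", "education", "employment",
             "essential services", "law enforcement", "migration",
             "justice", "democratic processes"}
LIMITED_RISK = {"chatbot", "deepfake", "emotion recognition"}

def risk_tier(use_case: str) -> str:
    """Map a use-case label to its EU AI Act risk tier."""
    label = use_case.lower()
    if any(k in label for k in PROHIBITED):
        return "prohibited"
    if any(k in label for k in HIGH_RISK):
        return "high-risk"
    if any(k in label for k in LIMITED_RISK):
        return "limited-risk"
    # Default tier: most applications carry no new obligations.
    return "minimal-risk"

print(risk_tier("CV screening for employment"))  # high-risk
print(risk_tier("spam filter"))                  # minimal-risk
```

Note the ordering: prohibited checks run first, since a system matching a banned practice cannot be rescued by also fitting a lower tier.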
High-risk obligations
- Risk-management system
- Data governance
- Technical documentation
- Record-keeping
- Transparency to deployers and users
- Human oversight
- Accuracy, robustness, and cybersecurity
- Conformity assessment
- Registration in EU database
- Post-market monitoring
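The obligation list above lends itself to a gap-analysis checklist. A minimal sketch, where the obligation names mirror the bullets above and `compliance_gaps` is a hypothetical helper, not part of any official tooling:

```python
# Checklist of high-risk provider obligations, taken from the list above.
HIGH_RISK_OBLIGATIONS = [
    "risk-management system",
    "data governance",
    "technical documentation",
    "record-keeping",
    "transparency to deployers and users",
    "human oversight",
    "accuracy, robustness, and cybersecurity",
    "conformity assessment",
    "EU database registration",
    "post-market monitoring",
]

def compliance_gaps(completed: set[str]) -> list[str]:
    """Return the obligations not yet evidenced, in checklist order."""
    return [o for o in HIGH_RISK_OBLIGATIONS if o not in completed]

gaps = compliance_gaps({"data governance", "record-keeping"})
print(f"{len(gaps)} obligations outstanding")  # 8 obligations outstanding
```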
Enforcement timeline
Prohibited-practice bans: applicable February 2025. General-purpose AI obligations: August 2025. Most high-risk obligations: August 2026. Full applicability, including high-risk AI embedded in regulated products: August 2027.
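The staged timeline above can be checked programmatically against a given date. A small sketch using the milestone months from the timeline (the day-of-month values are assumptions for illustration):

```python
from datetime import date

# Enforcement milestones per the timeline above; exact days are
# illustrative assumptions, months match the staged schedule.
MILESTONES = {
    date(2025, 2, 2): "prohibited-practice bans apply",
    date(2025, 8, 2): "general-purpose AI obligations apply",
    date(2026, 8, 2): "most high-risk obligations apply",
    date(2027, 8, 2): "full applicability",
}

def applicable_milestones(today: date) -> list[str]:
    """Return every milestone already in force on the given date."""
    return [desc for d, desc in sorted(MILESTONES.items()) if d <= today]

print(applicable_milestones(date(2026, 1, 1)))
```

For a date in early 2026, only the first two milestones are in force; by late 2027 all four apply.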
Posture Check checkpoint
All three Posture Check dimensions (governance, data, and runtime) are relevant to this framework. Article-by-article gap mapping is available in the paid Standard Audit.
Score yourself against this framework.
The AI Posture Check is a free 30-question self-assessment that maps your gaps directly to OWASP LLM Top 10, NIST AI RMF, and ISO 42001.
Take the AI Posture Check
Need help operationalizing this?
Talk to a CWS engineer about your AI security program.
Schedule a Discovery Call to scope a Standard Audit or Enterprise Program.