The Deadline
August 2, 2026. That's when the EU AI Act's general-purpose AI provisions take full effect. If your AI agents serve EU users — or if your organization operates in the EU — you need to comply.This isn't optional. This isn't guidance. This is regulation with enforcement teeth.
Penalties for non-compliance: up to 35 million euros or 7% of global annual revenue, whichever is higher. For context, GDPR's maximum penalty is 4% of global revenue. The EU AI Act goes further.
And unlike GDPR, which took years for enforcement to ramp up, the EU AI Office has been publicly signaling that it intends to pursue early enforcement actions to establish precedent. The first fines will be front-page news.
Who Does It Apply To?
The EU AI Act applies to:
- Providers — organizations that develop or place AI systems on the EU market
- Deployers — organizations that use AI systems in their operations
- Importers and distributors — organizations that bring third-party AI systems into the EU market
If you're building AI agents that EU-based companies use, you're a provider. If you're using AI agents in your EU operations, you're a deployer. Both have obligations.
The Act uses a risk-based classification system:
- Unacceptable risk — banned outright (social scoring, real-time biometric surveillance)
- High risk — subject to the full requirements (AI in healthcare, law enforcement, critical infrastructure, education, employment)
- Limited risk — transparency obligations (chatbots, deepfakes)
- Minimal risk — no specific requirements
Most AI agents fall into the limited risk or high risk category, depending on their use case. An AI agent that helps developers write code is limited risk. An AI agent that screens job applications is high risk. An AI agent that manages critical infrastructure is high risk.
Regardless of risk category, general-purpose AI systems (which includes most LLM-based agents) have baseline requirements under Articles 52-55 of the Act.
What's Required
The EU AI Act establishes requirements across 5 key articles for AI systems. Here's what each one means for AI agent developers, with specific technical requirements:
Article 9: Risk Management
You must have a documented risk management system that identifies, analyzes, and mitigates risks from your AI system throughout its lifecycle. For AI agents, this means:
- Threat identification — scanning tools and skills for injection, exfiltration, and manipulation risks before deployment. You need evidence that you looked for threats and have a process for finding new ones.
- Runtime monitoring — continuous monitoring of agent behavior for anomalies, policy violations, and attack indicators. Point-in-time assessments are not sufficient.
- Documented coverage — your threat detection must cover known attack categories. The OWASP LLM Top 10 is the de facto standard for LLM risk categorization. You should be able to map your defenses to each category.
- Mitigation evidence — when threats are detected, you need documented evidence of mitigation: what was found, when, what action was taken, and what the outcome was.
Article 11: Technical Documentation
You must maintain documentation of your AI system's design, development, and testing. This includes:
- Security architecture — what security measures are in place, how they work, and what they protect against
- Threat detection methodology — how threats are detected and mitigated, including detection rates and false positive rates
- Audit trails — timestamped, tamper-evident logs of all administrative actions, policy changes, and security events
- Testing records — evidence of security testing, including scan results, penetration test findings, and remediation actions
The documentation must be updated when significant changes occur and must be available to regulatory authorities upon request.
How Scandar helps: Overwatch maintains a complete audit log of all administrative actions (policy changes, agent quarantine, role assignments). Scan results are stored with timestamps and can be exported. The compliance report maps your documentation to specific Article 11 requirements.Article 13: Transparency
Users must be informed that they're interacting with an AI system. For agent-based systems:
- AI disclosure — clear indication to users that they're interacting with an AI agent, not a human
- Decision logging — when AI agents make or influence decisions, the reasoning must be logged and reviewable
- Explainability — ability to explain why an agent took a specific action, especially when that action affected a user
- Data usage transparency — users must understand what data the AI agent processes and how
Article 14: Human Oversight
Your AI system must allow for meaningful human intervention. This means:
- Quarantine capabilities — ability to immediately stop an agent that behaves unexpectedly, without waiting for an automated system to catch it
- Kill switches — manual override for all automated agent processes, accessible to authorized personnel
- Approval gates — human review required for high-risk decisions before they're executed
- Escalation paths — clear procedures for when automated monitoring detects anomalies that require human judgment
Article 15: Accuracy, Robustness, Cybersecurity
Your AI system must be resilient to attacks and operate reliably. For AI agents:
- Injection protection — defense against prompt injection, tool poisoning, and adversarial inputs across all input channels
- Adversarial robustness — evidence that your defenses work against known attack techniques, including encoding evasion, multi-turn injection, and indirect injection via tool results
- Detection accuracy — documented detection rates and false positive rates for your security monitoring. You need to prove your defenses actually work, not just that they exist. See our accuracy benchmark for how we measure this.
- Incident response — documented procedures for security incidents, including detection, containment, eradication, and recovery
How Scandar Maps to EU AI Act
| Article | Requirement | Scandar Coverage | Evidence Produced |
|---|---|---|---|
| Art. 9 | Risk management | scandar-scan (pre-deployment) + Guard (runtime) | Scan reports, threat findings, runtime alerts |
| Art. 11 | Technical documentation | Audit log + compliance reports | Timestamped audit trail, exportable PDF reports |
| Art. 13 | Transparency | Session logging + finding reports | Full session history, decision traces |
| Art. 14 | Human oversight | Agent quarantine + enforcement gate | Quarantine logs, policy enforcement records |
| Art. 15 | Cybersecurity | 140+ detection rules + behavioral monitoring | Detection rate benchmarks, incident timelines |
The Compliance Score
Scandar Overwatch generates a composite compliance score for your agent fleet across four frameworks:
- EU AI Act — mapped to Articles 9, 11, 13, 14, 15
- SOC 2 — mapped to Trust Services Criteria (CC6, CC7, CC8)
- ISO 42001 — mapped to AI management system requirements
- NIST AI RMF — mapped to Govern, Map, Measure, Manage functions
Each framework check is scored as passing, partial, or failing, with specific remediation steps for partial and failing checks. The composite score gives you an at-a-glance view of your compliance posture.
Getting Started
The deadline is August 2, 2026. That's roughly 4 months away. Here's a realistic timeline:
Week 1: AssessmentCommon Questions
"We only serve US customers. Does this apply to us?"If any of your customers have EU operations that use your AI agents, yes. The Act follows the data and the users, not the provider's headquarters.
"Our agents are low-risk. Do we still need to comply?"General-purpose AI provisions apply regardless of risk classification. You have baseline transparency and documentation obligations even for minimal-risk systems.
"Can we just block EU users?"You can, but you're leaving revenue on the table. EU compliance also tends to satisfy other frameworks (SOC 2, ISO 42001) — so the work has compounding returns.
"How long does compliance take?"With Scandar, the technical implementation takes days, not months. The organizational work (policies, procedures, training) takes longer — budget 4-8 weeks total.
The EU AI Act is coming. The good news: with the right tools, compliance is achievable — and it makes your AI agents more secure in the process. Start your assessment today at scandar.ai/eu-ai-act.