COMPLIANCE

EU AI Act Compliance: What Developers Need to Know Before August 2026

Scandar Security Team
AI agent security research and product updates.
2026-03-20
11 min read

The Deadline

August 2, 2026. That's when the bulk of the EU AI Act's obligations become applicable and enforcement begins in earnest. If your AI agents serve EU users — or if your organization operates in the EU — you need to comply.

This isn't optional. This isn't guidance. This is regulation with enforcement teeth.

Penalties for the most serious violations: up to 35 million euros or 7% of global annual turnover, whichever is higher. For context, GDPR's maximum penalty is 4% of global turnover. The EU AI Act goes further.

And unlike GDPR, which took years for enforcement to ramp up, the EU AI Office has been publicly signaling that it intends to pursue early enforcement actions to establish precedent. The first fines will be front-page news.

KEY NUMBERS
Aug 2, 2026
Enforcement deadline
7%
Max penalty (global revenue)
25 min
Time to compliance with Scandar

Who Does It Apply To?

The EU AI Act applies to:

  • Providers — organizations that develop or place AI systems on the EU market
  • Deployers — organizations that use AI systems in their operations
  • Importers and distributors — organizations that bring third-party AI systems into the EU market

If you're building AI agents that EU-based companies use, you're a provider. If you're using AI agents in your EU operations, you're a deployer. Both have obligations.

The Act uses a risk-based classification system:

  • Unacceptable risk — banned outright (social scoring, real-time biometric surveillance)
  • High risk — subject to the full requirements (AI in healthcare, law enforcement, critical infrastructure, education, employment)
  • Limited risk — transparency obligations (chatbots, deepfakes)
  • Minimal risk — no specific requirements

Most AI agents fall into the limited risk or high risk category, depending on their use case. An AI agent that helps developers write code is limited risk. An AI agent that screens job applications is high risk. An AI agent that manages critical infrastructure is high risk.

Regardless of risk category, general-purpose AI models, which underpin most LLM-based agents, carry baseline obligations under Chapter V of the Act (Articles 51-56).

What's Required

The EU AI Act's core technical requirements live in five key articles (Chapter III, Section 2, written for high-risk systems but widely treated as the baseline for any serious deployment). Here's what each one means for AI agent developers, with specific technical requirements:

Article 9: Risk Management

You must have a documented risk management system that identifies, analyzes, and mitigates risks from your AI system throughout its lifecycle. For AI agents, this means:

  • Threat identification — scanning tools and skills for injection, exfiltration, and manipulation risks before deployment. You need evidence that you looked for threats and have a process for finding new ones.
  • Runtime monitoring — continuous monitoring of agent behavior for anomalies, policy violations, and attack indicators. Point-in-time assessments are not sufficient.
  • Documented coverage — your threat detection must cover known attack categories. The OWASP LLM Top 10 is the de facto standard for LLM risk categorization. You should be able to map your defenses to each category.
  • Mitigation evidence — when threats are detected, you need documented evidence of mitigation: what was found, when, what action was taken, and what the outcome was.

How Scandar helps: scandar-scan provides pre-deployment threat detection with 140+ rules mapped to OWASP LLM Top 10 categories. Every scan produces a detailed findings report with severity ratings, affected lines, and remediation guidance. scandar-guard provides continuous runtime monitoring. Overwatch aggregates this into a fleet-wide risk management view with exportable evidence.
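To make the documented-coverage requirement concrete, here's a minimal sketch of mapping a rule inventory to the 2023 OWASP LLM Top 10 and flagging uncovered categories. The rule names are made up for illustration, not actual Scandar rule IDs:

```python
# Map each detection rule to the OWASP LLM Top 10 (2023) category it
# covers, then flag any category with no coverage at all.

OWASP_LLM_TOP_10 = [
    "LLM01: Prompt Injection",
    "LLM02: Insecure Output Handling",
    "LLM03: Training Data Poisoning",
    "LLM04: Model Denial of Service",
    "LLM05: Supply Chain Vulnerabilities",
    "LLM06: Sensitive Information Disclosure",
    "LLM07: Insecure Plugin Design",
    "LLM08: Excessive Agency",
    "LLM09: Overreliance",
    "LLM10: Model Theft",
]

# Illustrative rule inventory: rule id -> covered category
rules = {
    "detect-direct-injection": "LLM01: Prompt Injection",
    "detect-encoded-injection": "LLM01: Prompt Injection",
    "block-secret-exfiltration": "LLM06: Sensitive Information Disclosure",
    "limit-tool-permissions": "LLM08: Excessive Agency",
}

covered = set(rules.values())
gaps = [cat for cat in OWASP_LLM_TOP_10 if cat not in covered]
for cat in gaps:
    print(f"NO COVERAGE: {cat}")
```

Output like this is exactly the kind of evidence an auditor will ask for: not "we scan for threats" but "here is each known category and the control that covers it."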

Article 11: Technical Documentation

You must maintain documentation of your AI system's design, development, and testing. This includes:

  • Security architecture — what security measures are in place, how they work, and what they protect against
  • Threat detection methodology — how threats are detected and mitigated, including detection rates and false positive rates
  • Audit trails — timestamped, tamper-evident logs of all administrative actions, policy changes, and security events
  • Testing records — evidence of security testing, including scan results, penetration test findings, and remediation actions

The documentation must be updated when significant changes occur and must be available to regulatory authorities upon request.
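What "tamper-evident" can mean in practice: a minimal hash-chained log, sketched below. Each entry embeds the hash of the previous one, so editing any past entry invalidates the rest of the chain. This illustrates the requirement, not Scandar's implementation:

```python
# Minimal tamper-evident audit trail: each entry stores the SHA-256
# of the previous entry, so retroactive edits break verification.
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log: list, actor: str, action: str) -> dict:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "admin@example.com", "policy_change: enable block mode")
append_entry(log, "admin@example.com", "agent_quarantine: agent-42")
assert verify_chain(log)

log[0]["action"] = "tampered"  # any edit breaks verification
assert not verify_chain(log)
```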

How Scandar helps: Overwatch maintains a complete audit log of all administrative actions (policy changes, agent quarantine, role assignments). Scan results are stored with timestamps and can be exported. The compliance report maps your documentation to specific Article 11 requirements.

Article 13: Transparency

Users must be informed that they're interacting with an AI system. For agent-based systems:

  • AI disclosure — clear indication to users that they're interacting with an AI agent, not a human
  • Decision logging — when AI agents make or influence decisions, the reasoning must be logged and reviewable
  • Explainability — ability to explain why an agent took a specific action, especially when that action affected a user
  • Data usage transparency — users must understand what data the AI agent processes and how

How Scandar helps: Guard logs every session with full message history, tool calls, and threat detections. Overwatch provides session replay (graph time-travel) so you can trace exactly what an agent did and why. These logs serve as your transparency evidence.
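A decision-logging sketch, with illustrative field names rather than any real Scandar schema: capture the tool call, its arguments, the stated reasoning, and the affected user at the moment the agent acts.

```python
# Record every agent decision as a structured, replayable log line.
# Field names here are hypothetical, chosen for illustration only.
import json
from datetime import datetime, timezone
from typing import Optional

def log_decision(session_id: str, tool: str, arguments: dict,
                 reasoning: str, affected_user: Optional[str] = None) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,
        "tool": tool,
        "arguments": arguments,
        "reasoning": reasoning,
        "affected_user": affected_user,
    }
    return json.dumps(record)  # in practice: ship to append-only storage

line = log_decision(
    session_id="sess-123",
    tool="send_email",
    arguments={"to": "user@example.com", "subject": "Your refund"},
    reasoning="User requested refund confirmation; policy allows notification.",
    affected_user="user@example.com",
)
```

The key design point is logging the reasoning at decision time: reconstructing "why" after the fact is exactly what Article 13 does not let you rely on.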

Article 14: Human Oversight

Your AI system must allow for meaningful human intervention. This means:

  • Quarantine capabilities — ability to immediately stop an agent that behaves unexpectedly, without waiting for an automated system to catch it
  • Kill switches — manual override for all automated agent processes, accessible to authorized personnel
  • Approval gates — human review required for high-risk decisions before they're executed
  • Escalation paths — clear procedures for when automated monitoring detects anomalies that require human judgment

How Scandar helps: Overwatch provides one-click agent quarantine, policy-based enforcement gates (block mode), and RBAC controls that ensure only authorized personnel can override security decisions. Alert routing ensures the right humans are notified immediately.
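An approval gate can be as simple as a wrapper that holds high-risk tool calls for a human decision and lets everything else run. The risk list and callbacks below are assumptions for illustration, not Scandar APIs:

```python
# Gate high-risk tool calls behind a human approval callback.
from typing import Callable

HIGH_RISK_TOOLS = {"delete_records", "transfer_funds", "deploy_code"}

def gated_call(tool: str, run: Callable[[], str],
               request_approval: Callable[[str], bool]) -> str:
    if tool in HIGH_RISK_TOOLS:
        if not request_approval(tool):
            return f"BLOCKED: {tool} denied by human reviewer"
    return run()

# Usage: a reviewer callback that denies everything by default.
result = gated_call(
    "transfer_funds",
    run=lambda: "transferred",
    request_approval=lambda tool: False,
)
print(result)  # BLOCKED: transfer_funds denied by human reviewer
```

Deny-by-default on the approval path matters: if the reviewer is unreachable, the safe outcome is a blocked action, not a silently executed one.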

Article 15: Accuracy, Robustness, Cybersecurity

Your AI system must be resilient to attacks and operate reliably. For AI agents:

  • Injection protection — defense against prompt injection, tool poisoning, and adversarial inputs across all input channels
  • Adversarial robustness — evidence that your defenses work against known attack techniques, including encoding evasion, multi-turn injection, and indirect injection via tool results
  • Detection accuracy — documented detection rates and false positive rates for your security monitoring. You need to prove your defenses actually work, not just that they exist. See our accuracy benchmark for how we measure this.
  • Incident response — documented procedures for security incidents, including detection, containment, eradication, and recovery

How Scandar helps: scandar-scan's 140+ rules cover the full OWASP LLM Top 10 attack surface. Guard's two-layer detection (pattern + LLM behavioral analysis) provides documented detection rates. Overwatch's kill chain engine traces attack paths for incident response.
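Here's how detection rate and false positive rate fall out of a labeled test corpus. The sample data below is made up for illustration:

```python
# Compute detection rate (recall over malicious inputs) and false
# positive rate from labeled results: (was_malicious, was_flagged).
results = [
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False), (False, False),
]

tp = sum(1 for actual, flagged in results if actual and flagged)
fn = sum(1 for actual, flagged in results if actual and not flagged)
fp = sum(1 for actual, flagged in results if not actual and flagged)
tn = sum(1 for actual, flagged in results if not actual and not flagged)

detection_rate = tp / (tp + fn)        # caught malicious / all malicious
false_positive_rate = fp / (fp + tn)   # flagged benign / all benign

print(f"detection rate: {detection_rate:.0%}")            # 75%
print(f"false positive rate: {false_positive_rate:.0%}")  # 25%
```

Whatever tooling you use, these two numbers, measured on a corpus you can show a regulator, are the minimum evidence that your defenses "actually work."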

How Scandar Maps to EU AI Act

ArticleRequirementScandar CoverageEvidence Produced
Art. 9Risk managementscandar-scan (pre-deployment) + Guard (runtime)Scan reports, threat findings, runtime alerts
Art. 11Technical documentationAudit log + compliance reportsTimestamped audit trail, exportable PDF reports
Art. 13TransparencySession logging + finding reportsFull session history, decision traces
Art. 14Human oversightAgent quarantine + enforcement gateQuarantine logs, policy enforcement records
Art. 15Cybersecurity140+ detection rules + behavioral monitoringDetection rate benchmarks, incident timelines

The Compliance Score

Scandar Overwatch generates a composite compliance score for your agent fleet across four frameworks:

  • EU AI Act — mapped to Articles 9, 11, 13, 14, 15
  • SOC 2 — mapped to Trust Services Criteria (CC6, CC7, CC8)
  • ISO 42001 — mapped to AI management system requirements
  • NIST AI RMF — mapped to Govern, Map, Measure, Manage functions

Each framework check is scored as passing, partial, or failing, with specific remediation steps for partial and failing checks. The composite score gives you an at-a-glance view of your compliance posture.
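One plausible way such a composite can be computed, assuming a simple pass = 1.0, partial = 0.5, fail = 0.0 weighting (Scandar's actual scoring may differ; the check counts below are invented):

```python
# Average per-check statuses within each framework, then average the
# framework scores into one composite number.
WEIGHTS = {"passing": 1.0, "partial": 0.5, "failing": 0.0}

checks = {
    "EU AI Act":   ["passing", "passing", "partial", "failing"],
    "SOC 2":       ["passing", "partial"],
    "ISO 42001":   ["passing", "passing"],
    "NIST AI RMF": ["partial", "partial", "passing"],
}

def framework_score(statuses: list) -> float:
    return sum(WEIGHTS[s] for s in statuses) / len(statuses)

scores = {fw: framework_score(s) for fw, s in checks.items()}
composite = sum(scores.values()) / len(scores)

for fw, score in scores.items():
    print(f"{fw}: {score:.0%}")
print(f"composite: {composite:.0%}")
```

The usefulness of a composite is the trend line, not the absolute number: a score that drops week over week is the signal to act on.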

Getting Started

The deadline is August 2, 2026, just over four months away. Here's a realistic timeline:

Week 1: Assessment

  • Sign up for Scandar and connect your agents with Guard. Check your compliance score in the Overwatch dashboard. This takes under 30 minutes and shows you exactly where you stand.

Weeks 2-3: Remediation

  • Address the gaps. The compliance report shows exactly which checks are passing and which need attention, with specific remediation steps for each. Start with Article 15 (cybersecurity) — it has the most technical requirements and the longest implementation timeline.

Week 4: Documentation

  • Document your security posture. Export the compliance report as PDF for your legal and compliance team. Supplement with your own documentation for areas Scandar doesn't cover (organizational policies, HR procedures, data governance).

Week 5+: Continuous Monitoring

  • Compliance isn't a one-time check. Use Overwatch to continuously monitor your fleet and track your score over time. Configure weekly compliance report exports to your compliance team.

Common Questions

"We only serve US customers. Does this apply to us?"

If any of your customers have EU operations that use your AI agents, yes. The Act follows the data and the users, not the provider's headquarters.

"Our agents are low-risk. Do we still need to comply?"

General-purpose AI provisions apply regardless of risk classification. You have baseline transparency and documentation obligations even for minimal-risk systems.

"Can we just block EU users?"

You can, but you're leaving revenue on the table. EU compliance also tends to satisfy other frameworks (SOC 2, ISO 42001) — so the work has compounding returns.

"How long does compliance take?"

With Scandar, the technical implementation takes days, not months. The organizational work (policies, procedures, training) takes longer — budget 4-8 weeks total.

The EU AI Act is coming. The good news: with the right tools, compliance is achievable — and it makes your AI agents more secure in the process. Start your assessment today at scandar.ai/eu-ai-act.

SCANDAR
Scan before you ship. Guard when you run.
140+ detection rules pre-deployment. 11 runtime detection layers. Fleet-wide security with Overwatch. Free to start.
Python · TypeScript · Go · Free on all plans