The Countdown to Full Enforcement
The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024. Its implementation is phased, and the phases are no longer hypothetical. Prohibited AI practices have been enforceable since February 2, 2025. General-purpose AI model obligations have applied since August 2, 2025. The AI Office is staffed, the GPAI Code of Practice is published, and the first enforcement action has already made headlines.
On August 2, 2026, the full requirements for high-risk AI systems become enforceable. Market surveillance authorities across the EU will have the power to investigate, fine, and order remediation. The penalty regime reaches up to EUR 35 million or 7% of global annual turnover.
For security teams, this regulation is not an abstract policy concern. Article 15 of the AI Act imposes specific cybersecurity requirements on high-risk AI systems. The regulation demands protection against data poisoning, model manipulation, adversarial attacks, and confidentiality breaches. It intersects directly with NIS2 and DORA obligations that many security teams are already managing.
This article explains what the AI Act requires, what has already happened in enforcement, where the regulation overlaps with existing cybersecurity frameworks, and what security teams should be doing now.
The Timeline: What Is Already Enforceable
| Date | What Applies | Status |
|---|---|---|
| 1 August 2024 | AI Act enters into force | Completed |
| 2 February 2025 | Prohibited AI practices (Art. 5) enforceable; AI literacy obligation (Art. 4) in effect | Active |
| 2 August 2025 | GPAI model provider obligations (Arts. 51–56) apply; AI Office operational | Active |
| 2 August 2026 | High-risk AI system requirements (Arts. 6–49) fully enforceable; full penalty regime; market surveillance authority enforcement | 6 months |
| 2 August 2027 | Final phase: AI in regulated products (Annex I) + legacy GPAI model compliance | Future |
The European Commission also proposed a "Digital Omnibus" package in late 2025 that could extend the high-risk deadline to December 2027. However, this has not been enacted. Organisations should treat August 2, 2026 as the binding deadline.
The First Enforcement Actions
Enforcement is no longer theoretical. On February 3, 2026, French prosecutors raided X's Paris offices to investigate Grok's deepfake generation capabilities under the AI Act's prohibited practices provisions. Elon Musk was summoned for questioning. This represents the AI Act's first high-profile enforcement action and signals that authorities are prepared to act on prohibited practices.
Beyond the X/Grok case, investigations are underway into workplace emotion recognition systems deployed by multinational corporations, and into predictive policing algorithms used by law enforcement agencies in multiple Member States. No formal penalties have been assessed yet, but the investigative machinery is operational.
Finland became the first EU Member State with full AI Act enforcement powers in December 2025. Other Member States are completing their national competent authority designations ahead of the August 2026 enforcement date.
The AI Office itself — housed within DG CONNECT — now has over 125 staff across five units, with a target of 140 or more. Its enforcement powers for GPAI model violations activate in August 2026.
Is Your AI High-Risk?
The AI Act classifies AI systems into four risk tiers: unacceptable (prohibited), high-risk, limited risk (transparency obligations), and minimal risk (no specific obligations). The high-risk category is where most compliance obligations — and most security team involvement — concentrate.
Annex III: The Eight High-Risk Domains
An AI system is classified as high-risk if it falls within one of eight domains defined in Annex III of the regulation:
- Biometrics — Remote biometric identification, biometric categorisation by sensitive attributes, emotion recognition
- Critical infrastructure — AI controlling digital infrastructure, road traffic management, water, gas, heating, or electricity supply
- Education — Student assessment systems, programme access decisions, academic placement
- Employment — Recruitment and candidate screening, performance evaluation, task allocation, termination decisions
- Essential services — Credit scoring, insurance risk assessment and pricing, emergency service dispatch, social benefit eligibility, housing allocation
- Law enforcement — Crime risk assessment, offending/reoffending prediction, evidence reliability evaluation, polygraph equivalents
- Migration and border control — Automated border systems, visa risk assessment, asylum eligibility determination
- Justice and democratic processes — Legal interpretation AI, sentencing probability assessment, democratic process interference detection
When an Annex III System Is Not High-Risk
Under Article 6(3), an AI system listed in Annex III is explicitly not considered high-risk if it only performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing human assessment, or performs only preparatory tasks for an assessment. One caveat: a system that performs profiling of natural persons is always high-risk, regardless of these exemptions.
This exemption mechanism is important for security teams: not every AI tool used in a listed domain triggers the full compliance regime. Classification requires a careful, documented assessment of how the system functions and what decisions it influences.
What Security Teams Own: Article 15
Article 15 of the AI Act — titled "Accuracy, Robustness and Cybersecurity" — defines the cybersecurity requirements that fall squarely within security team responsibility.
AI-Specific Threats to Address
High-risk AI systems must implement measures to protect against:
- Data poisoning — Contamination of training data to manipulate model behaviour
- Model poisoning — Direct manipulation of the model's parameters or architecture
- Adversarial examples — Inputs specifically crafted to cause the model to produce incorrect outputs (see the sketch after this list)
- Confidentiality attacks — Techniques to extract proprietary model information, training data, or sensitive outputs
- Model flaws — Implementation weaknesses and architectural vulnerabilities
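To make the adversarial-examples category concrete, here is a minimal sketch of the fast gradient sign method (FGSM) against a toy logistic-regression scorer, of the kind a credit or screening model might resemble. The model, weights, and epsilon are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM: move eps in the sign of the loss gradient w.r.t. x.

    For logistic regression with log-loss, dLoss/dx = (p - y) * w,
    where p = sigmoid(w.x + b).
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy scorer; in practice w and b come from the trained model.
rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1
x = rng.normal(size=8)   # a legitimate input's feature vector
y = 1.0                  # its true label

adv = fgsm_perturb(x, y, w, b, eps=0.25)
print(f"clean score:       {sigmoid(w @ x + b):.3f}")
print(f"adversarial score: {sigmoid(w @ adv + b):.3f}")
```

A small, bounded perturbation of every feature is enough to move the score materially, which is why input-level anomaly detection alone is a weak defence.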
What the Regulation Requires
The regulation requires detection mechanisms for these attacks, response and recovery procedures, and control and mitigation strategies proportionate to the risks and circumstances. The approach is principles-based rather than prescriptive — the AI Act does not mandate specific technical solutions but requires organisations to demonstrate that appropriate measures are in place.
For security teams accustomed to defending networks and endpoints, the AI threat landscape introduces unfamiliar attack surfaces. A poisoned training dataset is not a traditional vulnerability. An adversarial input that causes a credit scoring model to misclassify an applicant is not a conventional exploit. Defending AI systems requires extending security operations to cover the AI lifecycle — from data ingestion through model training, deployment, and ongoing inference.
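As one illustration of what a detection mechanism can look like, the sketch below screens a training set for label-flipping poisoning by checking each sample's label against its nearest neighbours. It is a cheap heuristic over assumed toy data, not a complete defence; production controls would layer provenance checks, outlier detection, and retraining audits on top.

```python
import numpy as np

def flag_label_outliers(X, y, k=5, min_agreement=0.4):
    """Flag training samples whose label disagrees with most of their
    k nearest neighbours -- a simple screen for label-flipping poisoning.
    """
    flagged = []
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                      # exclude the point itself
        neighbours = np.argsort(dists)[:k]
        agreement = np.mean(y[neighbours] == y[i])
        if agreement < min_agreement:
            flagged.append(i)
    return flagged

# Toy data: two clusters, with a handful of flipped labels injected.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[[3, 17, 60]] = 1 - y[[3, 17, 60]]            # simulated poisoned labels

print("suspicious sample indices:", flag_label_outliers(X, y))
```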
Beyond Article 15
Security teams also have responsibilities under several other provisions:
- Article 9 (Risk management) — Contributing to the risk management system that must operate throughout the AI system's lifecycle
- Article 12 (Record-keeping) — Ensuring automated logging systems capture events relevant to identifying risks and serious incidents
- Article 14 (Human oversight) — Supporting the design and operation of human oversight mechanisms
- Article 72 (Post-market monitoring) — Establishing monitoring systems that detect degradation, drift, and security incidents after deployment (a minimal drift check is sketched below)
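As a minimal sketch of the post-market monitoring idea, the following computes the Population Stability Index (PSI) between validation-time model scores and live production scores. The distributions and the 0.25 alert threshold are common illustrative assumptions, not values the regulation prescribes.

```python
import numpy as np

def population_stability_index(reference, production, bins=10, eps=1e-6):
    """PSI between a reference score distribution (e.g. validation-time
    outputs) and live production scores. Rule of thumb: PSI > 0.25
    signals significant drift worth investigating.
    """
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference) + eps
    prod_pct = np.histogram(production, bins=edges)[0] / len(production) + eps
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

rng = np.random.default_rng(2)
reference = rng.beta(2, 5, 10_000)   # scores at validation time
production = rng.beta(3, 3, 10_000)  # shifted production scores

psi = population_stability_index(reference, production)
print(f"PSI = {psi:.3f} -> {'investigate' if psi > 0.25 else 'ok'}")
```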
The Regulatory Triple Threat
For many organisations, the AI Act does not arrive in isolation. It intersects with GDPR, NIS2, and DORA — creating overlapping obligations that must be managed as a coordinated compliance programme rather than four independent workstreams.
AI Act + GDPR
Any high-risk AI system that processes personal data must comply with both frameworks simultaneously.
Data Protection Impact Assessments: A DPIA under GDPR Article 35 is mandatory for high-risk AI systems processing personal data. The AI Act's Fundamental Rights Impact Assessment (FRIA) complements — but does not replace — the DPIA. Organisations can streamline this by integrating the FRIA into their existing DPIA process.
Automated decision-making: GDPR Article 22 restricts solely automated decisions with legal or significant effects and requires meaningful human intervention. AI Act Article 86 separately requires deployers to provide clear explanations of the AI system's role in decision-making. The two provisions are complementary but impose distinct obligations.
Training data lawfulness: The European Data Protection Board's Opinion 28/2024 (December 2024) confirmed that legitimate interest can serve as a legal basis for AI model development, but established that unlawfully processed training data affects the lawfulness of the model's deployment — unless the model has been properly anonymised.
AI Act + NIS2
Organisations in NIS2's 18 critical sectors that also deploy high-risk AI face dual obligations with a critical operational conflict: different incident reporting timelines.
| Framework | Timeline | Recipient |
|---|---|---|
| NIS2 | 24-hour early warning; 72-hour detailed notification | CSIRT or national competent authority |
| AI Act | 15-day serious incident report | Market surveillance authority |
An AI-related cybersecurity incident in critical infrastructure triggers both reporting tracks. The 24-hour NIS2 window and the 15-day AI Act window are separate obligations to separate authorities. Security teams need pre-established workflows that satisfy both.
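A minimal sketch of the parallel clocks, assuming for simplicity that both start at detection (in practice each framework starts its clock on awareness under its own incident definition, so real workflows need per-framework clock-start rules):

```python
from datetime import datetime, timedelta, timezone

def reporting_deadlines(detected_at: datetime) -> dict:
    """Compute the parallel notification deadlines that a single
    AI-related incident in critical infrastructure can trigger.
    Simplification: both clocks start at detection time.
    """
    return {
        "NIS2 early warning (CSIRT)": detected_at + timedelta(hours=24),
        "NIS2 detailed notification": detected_at + timedelta(hours=72),
        "AI Act serious incident report (MSA)": detected_at + timedelta(days=15),
    }

detected = datetime(2026, 9, 1, 14, 30, tzinfo=timezone.utc)
for track, deadline in reporting_deadlines(detected).items():
    print(f"{track}: {deadline:%Y-%m-%d %H:%M %Z}")
```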
ENISA's NIS2 Technical Implementation Guidance, published in June 2025, explicitly addresses AI-specific risks including data poisoning, model extraction, and adversarial inputs within its 170 pages of practical cybersecurity measures.
AI Act + DORA
Financial institutions using AI face triple compliance: AI Act (for the AI system itself), DORA (for operational resilience), and GDPR (for personal data processing).
DORA's ICT third-party risk provisions (Article 28) extend to AI service providers. Financial institutions must maintain a register of information covering all AI service providers, conduct due diligence assessments, include contractual clauses for audit rights and incident notification, and, where designated for threat-led penetration testing, test operational resilience at least every three years.
ESMA has published guidance on AI in investment services under MiFID II, and the EBA's 2026 work programme includes mapping AI Act requirements against banking sector measures. The European Parliament passed a resolution in November 2025 expressing concern about regulatory overlaps and requesting the Commission to address inconsistencies.
The Compliance Mapping
| Requirement | AI Act | NIS2 Art. 21 | DORA |
|---|---|---|---|
| Risk management system | Art. 9 | Measure 1 | Pillar I |
| Cybersecurity measures | Art. 15 | Measure 5 | Pillar I |
| Incident management and reporting | Art. 72, 73 | Measure 2 | Pillar II |
| Third-party oversight | Art. 25, 26 (value chain and deployer obligations) | Measure 4 | Pillar IV |
| Human oversight / governance | Art. 14 | — | Pillar I |
| Testing and validation | Art. 9, 15 | Measure 6 | Pillar III |
| Record-keeping and logging | Art. 12 | — | Pillar II |
The overlap is substantial. Organisations that have invested in NIS2 or DORA compliance already have foundational capabilities that map to AI Act requirements. The additional work is in extending those capabilities to cover AI-specific risks and maintaining the documentation that demonstrates compliance with each framework independently.
The August 2026 Compliance Checklist
Based on the enforcement timeline and the requirements, here is what security teams should be driving within their organisations before August 2, 2026.
1. Build Your AI Inventory
You cannot classify what you have not catalogued. Over 50% of organisations lack a systematic inventory of AI systems in production or development. This is the single most critical gap.
Catalogue every AI system in use — including third-party AI services, embedded AI in vendor products, and internally developed models. Record the purpose, data inputs, decision outputs, and the domain in which each system operates. This inventory is the foundation for risk classification.
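A minimal sketch of what an inventory record might capture, in Python; the field names are illustrative assumptions rather than a mandated schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the AI inventory; fields are illustrative."""
    name: str
    owner: str                            # accountable business owner
    purpose: str
    provider: str                         # vendor name, or "internal"
    data_inputs: list[str] = field(default_factory=list)
    decision_outputs: list[str] = field(default_factory=list)
    annex_iii_domain: str | None = None   # e.g. "employment", or None
    risk_tier: str = "unclassified"       # filled in by step 2

inventory = [
    AISystemRecord(
        name="cv-screening-model",
        owner="HR Operations",
        purpose="Rank inbound job applications",
        provider="internal",
        data_inputs=["CV text", "application form"],
        decision_outputs=["shortlist ranking"],
        annex_iii_domain="employment",
    ),
]
```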
2. Classify by Risk
For each system in your inventory, determine whether it falls within an Annex III high-risk domain and whether any exemptions apply. Document the classification rationale. Where classification is ambiguous, the European Commission's guidelines on high-risk classification (due February 2, 2026) should provide practical examples.
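Continuing the inventory sketch above, a coarse classification helper can encode the Annex III and Article 6(3) logic and force the rationale to be recorded. The real assessment is a documented legal judgement, not a function call.

```python
def classify(record: AISystemRecord,
             exemptions_met: list[str] | None = None,
             performs_profiling: bool = False) -> tuple[str, str]:
    """Return (risk_tier, rationale). Coarse mirror of the Annex III /
    Article 6(3) logic; real classification needs legal review.
    """
    if record.annex_iii_domain is None:
        return "not-high-risk", "no Annex III domain applies"
    if performs_profiling:
        return "high-risk", "profiling of natural persons is always high-risk"
    if exemptions_met:
        return "not-high-risk", f"Art. 6(3) exemption: {'; '.join(exemptions_met)}"
    return "high-risk", f"Annex III domain '{record.annex_iii_domain}', no exemption"

tier, rationale = classify(inventory[0])
print(tier, "-", rationale)   # high-risk - Annex III domain 'employment', ...
```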
3. Assess Article 15 Cybersecurity Requirements
For each high-risk system, assess your current defences against the five threat categories: data poisoning, model poisoning, adversarial examples, confidentiality attacks, and model flaws. Identify gaps. This is where security team expertise is essential — risk owners and compliance teams cannot perform this assessment without technical security input.
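A simple way to make the gap visible is to map each high-risk system's implemented controls against the five Article 15 threat categories; the control names below are illustrative placeholders.

```python
ARTICLE_15_THREATS = [
    "data poisoning", "model poisoning", "adversarial examples",
    "confidentiality attacks", "model flaws",
]

# Controls in place per system; contents are illustrative placeholders.
controls = {
    "cv-screening-model": {
        "data poisoning": ["training-data provenance checks"],
        "adversarial examples": ["input sanitisation", "score monitoring"],
    },
}

for system, implemented in controls.items():
    gaps = [t for t in ARTICLE_15_THREATS if not implemented.get(t)]
    print(f"{system}: gaps -> {gaps or 'none'}")
```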
4. Establish AI Incident Response Workflows
Extend your existing incident response capability to cover AI-specific incidents. Define classification criteria for AI serious incidents. Build the dual reporting workflow: NIS2 (24-hour) track and AI Act (15-day) track. Ensure your teams know which authority receives which notification and what information each requires.
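A minimal triage sketch, assuming simplified yes/no inputs; real triage must apply the NIS2 significant-incident and AI Act Article 73 serious-incident definitions in full.

```python
def report_tracks(is_nis2_entity: bool, is_high_risk_ai: bool,
                  is_serious_incident: bool) -> list[str]:
    """Decide which reporting tracks an AI-related incident triggers."""
    tracks = []
    if is_nis2_entity:
        tracks.append("NIS2: 24h early warning, then 72h notification")
    if is_high_risk_ai and is_serious_incident:
        tracks.append("AI Act: serious incident report within 15 days")
    return tracks

print(report_tracks(is_nis2_entity=True, is_high_risk_ai=True,
                    is_serious_incident=True))
```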
5. Address Third-Party AI Risk
If you use third-party AI services or embed vendor AI in your products, you have deployer obligations. Review contracts for AI Act-mandated provisions: transparency of system operation, access to documentation, incident notification, and audit rights. If your organisation is also subject to DORA, align these requirements with your existing ICT third-party risk management framework.
6. Start the Conformity Assessment
For high-risk systems, conformity assessment can take 6 to 12 months. This includes preparing technical documentation, validating data governance practices, testing accuracy and robustness, and — for certain categories — engaging a notified body for third-party assessment. Starting this process in February 2026 leaves little margin for the August deadline.
7. Implement AI Literacy
Article 4's AI literacy requirement has been in effect since February 2025. Ensure that staff involved in AI operation and deployment have adequate training. Board members and senior management need to understand their governance obligations — directors cannot plead ignorance as a defence, and personal liability under D&O frameworks applies to inadequate AI oversight.
AI Literacy: The Obligation Already in Effect
Article 4 deserves specific attention because it is the one AI Act obligation that is already enforceable and is frequently overlooked.
The provision requires providers and deployers to ensure "a sufficient level of AI literacy" among staff and other persons dealing with AI systems, taking into account their technical knowledge, experience, and the context in which the AI system is used.
The definition of AI literacy in the regulation is precise: "skills, knowledge and understanding that allow providers, deployers and affected persons to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause."
This applies at every level:
- Technical staff must understand AI system capabilities, limitations, bias risks, and failure modes
- Deployers must understand how the system works, what decisions it supports, and when it can fail
- Customer-facing staff must be able to explain AI system use to end users
- Management and board members must understand governance obligations, risk implications, and oversight requirements
Enforcement of AI literacy obligations begins August 2, 2026 through national market surveillance authorities. The Centre for Information Policy Leadership (CIPL) published best practices for AI literacy in May 2025, providing a practical framework for implementation.
Standards and Guidance Available Now
Organisations do not need to wait for all harmonised standards to begin compliance work. Several frameworks are already available:
ISO/IEC 42001:2023 — The world's first international AI management system standard, published in December 2023. It specifies 38 controls covering policy development, training, technical safeguards, documentation, monitoring, and incident management. Certifiable and scalable from startups to enterprises. ISO 42001 is not a substitute for AI Act compliance, but it provides a structured management system that supports it.
prEN 18286 (October 2025) — The first harmonised standard for the AI Act, covering quality management systems for EU AI Act regulatory purposes. Currently in public enquiry phase. Specifically designed for high-risk AI system providers to demonstrate Article 17 compliance.
GPAI Code of Practice (July 2025) — The voluntary compliance tool for general-purpose AI providers. Three chapters covering transparency, copyright, and safety/security. Twenty-six organisations have signed, including OpenAI, Google, Anthropic, Microsoft, Amazon, Mistral AI, and IBM; Meta notably declined. Compliance is a mitigating factor in penalty assessments.
EDPB Opinion 28/2024 (December 2024) — Guidance on processing personal data in AI model development. Establishes the three-step legitimate interest test for AI training and clarifies the impact of unlawful training data on deployment lawfulness.
ENISA NIS2 Technical Implementation Guidance (June 2025) — 170 pages of practical cybersecurity measures that include AI-specific risk categories. Essential reading for security teams managing both NIS2 and AI Act compliance.
What Comes Next
The enforcement trajectory is clear. The AI Office will gain full enforcement powers in August 2026. National market surveillance authorities will begin active supervision. The Commission's first assessment of whether the AI Office has adequate powers and resources is due in June 2026 — signalling potential institutional strengthening.
Through the remainder of 2026, the Commission will publish guidelines on practical high-risk classification, transparency requirements under Article 50, serious incident reporting, and deployer obligations. CEN/CENELEC working groups continue developing harmonised standards for cybersecurity, testing, data governance, and risk management.
For security teams, the AI Act represents an expansion of scope — from defending traditional IT infrastructure to defending AI systems against threats that are fundamentally different in nature. Data poisoning is not a network attack. Model extraction is not a data breach. Adversarial inputs are not conventional exploits. Building the expertise to defend against these threats, and the processes to demonstrate that defence to regulators, is the work that must be completed before August 2, 2026.
The organisations that will navigate this successfully are those that recognise the AI Act not as a standalone compliance exercise but as the AI-specific extension of the cybersecurity and risk management frameworks they are already building under NIS2 and DORA. The foundations are the same. The attack surfaces are new.
Sources
Official EU Sources
- Regulation (EU) 2024/1689 — EUR-Lex
- EU AI Act Portal
- European AI Office
- AI Act Implementation Timeline
- GPAI Code of Practice
- EC Guidelines on GPAI Obligations
- AI Act Annex III — High-Risk AI Systems
Data Protection
- EDPB — Opinion 28/2024 on AI Models (December 2024)
Cybersecurity and Financial Services
- ENISA — NIS2 Technical Implementation Guidance (June 2025)
- ESMA — AI in Investment Services Guidance
- EBA — 2026 Work Programme
Standards
- ISO/IEC 42001:2023 — AI Management Systems
- CEN/CENELEC — prEN 18286 Quality Management Systems for EU AI Act Regulatory Purposes