AI, NIS2, and the EU AI Act: What Security and Compliance Teams Need to Do Before August 2026
NIS2, the EU AI Act, and DORA now converge on organisations deploying AI. This guide maps where the three frameworks overlap, what security and compliance teams need to deliver before August 2026, and how a single integrated assessment can satisfy all three.
TL;DR
Three major EU regulations now converge on organisations that deploy AI: NIS2 mandates cybersecurity risk management for essential and important entities, the EU AI Act sets specific cybersecurity requirements for high-risk AI systems with a key deadline of 2 August 2026, and DORA applies financial sector requirements to AI used in ICT processes. Most organisations are managing these separately, through different teams, with different timelines. That approach creates gaps. This guide maps where the three frameworks overlap, what security and compliance teams need to do before August 2026, and how to avoid duplicating effort.
Why these three frameworks are now a single problem
Until recently, cybersecurity compliance and AI governance sat in separate conversations. Cybersecurity teams handled NIS2 and DORA. Legal and compliance teams handled AI policy. The EU AI Act has ended that separation.
The EU AI Act explicitly requires high-risk AI systems to demonstrate cybersecurity resilience, accuracy, and robustness. The cybersecurity requirements in the Act are not met by a legal review or a policy document. They require technical evidence: architecture assessments, adversarial testing, documented vulnerability management. In other words, they require the same type of evidence that NIS2 and DORA have been demanding for years.
For any organisation that is both a NIS2-scoped entity and deploying AI systems, the evidence requirements overlap substantially. Building them separately is unnecessary. Building them together is faster, cheaper, and produces stronger compliance documentation.
What the EU AI Act requires from a security perspective
The EU AI Act classifies AI systems by risk level. The cybersecurity requirements apply most stringently to high-risk systems, defined under Annex III. These include AI used in critical infrastructure, employment and worker management, access to essential services, law enforcement, and education. The 2 August 2026 deadline requires that all such systems in scope demonstrate compliance before being placed on the market or put into service.
Cybersecurity resilience under Article 15
Article 15 of the EU AI Act requires high-risk AI systems to be resilient against attempts by unauthorised third parties to alter their use or performance. This covers adversarial attacks, data poisoning, model manipulation, and prompt injection for systems using language models. Demonstrating this resilience requires structured testing, not just policy assertions.
Technical documentation and audit trails
High-risk AI systems must maintain technical documentation throughout their lifecycle. This includes the data used to train and test the system, the measures taken to ensure accuracy and robustness, and the results of testing. For cybersecurity specifically, this means maintaining records of what was tested, when, what was found, and what was done about it. A penetration test report is among the most defensible forms of this evidence.
Where NIS2 and the EU AI Act overlap
NIS2 requires essential and important entities to implement risk management measures across a defined set of security domains. Several of these domains map directly onto EU AI Act requirements for high-risk systems.
Vulnerability management. NIS2 requires documented vulnerability handling. The EU AI Act requires ongoing monitoring of system performance and robustness, including identification and remediation of security weaknesses. A single vulnerability management process serves both.
Security testing. NIS2 requires security testing as part of risk management. The EU AI Act requires technical evidence of cybersecurity resilience. An AI-specific penetration test that covers both conventional application security and AI-specific attack vectors, prompt injection, model manipulation, RAG pipeline security, produces documentation that satisfies both frameworks simultaneously.
Incident handling. NIS2 requires a tested incident response capability with defined reporting obligations. The EU AI Act requires that high-risk AI systems have post-market monitoring in place to detect and respond to incidents. Both require the same operational capability: the ability to detect, contain, and report security events related to the AI system.
Supply chain security. NIS2 requires assessment of third-party and supplier security. The EU AI Act requires documentation of the data, models, and components used in high-risk AI systems, including third-party components. If your AI system relies on a third-party model provider or data pipeline, both frameworks require you to assess and document that dependency.
Where DORA adds financial sector specifics
Financial entities subject to DORA face a third layer of requirements when AI is involved in ICT processes. DORA requires threat-led penetration testing for significant financial institutions, documented resilience testing for critical ICT systems, and a register of all ICT third-party dependencies including AI providers.
For a bank or insurer deploying an AI system in credit scoring, fraud detection, or customer interaction, DORA applies to the ICT security of that system, the EU AI Act applies to its AI-specific cybersecurity properties, and NIS2 may apply to the organisation as an essential or important entity. All three frameworks require documentation of the same security properties. The most efficient path is a single integrated assessment that produces evidence covering all three.
The August 2026 deadline in practical terms
The 2 August 2026 deadline means that high-risk AI systems must comply before being placed on the market or put into service from that date. Systems already in service have until 2 August 2027 under transitional provisions in most cases, but systems classified as high-risk that were placed on the market before August 2026 will face scrutiny at their next update or significant modification.
Practically, this means the window for organisations to conduct an assessment, identify gaps, remediate, and produce documentation is now. A security assessment of an AI system takes weeks, not days. Remediation of findings takes additional time. Regulatory documentation takes time to prepare. Organisations waiting until June 2026 to start this process will not finish in time.
A practical compliance roadmap
Step 1: Scope your AI systems. Identify which AI systems your organisation uses or deploys. For each, determine whether it falls within the EU AI Act's high-risk categories under Annex III. Be conservative: if there is genuine uncertainty, assume in-scope and verify.
Step 2: Map your regulatory obligations. Determine which frameworks apply to your organisation and to each AI system. A NIS2-scoped organisation deploying high-risk AI under the EU AI Act has overlapping obligations. Document the overlap and identify which evidence satisfies multiple requirements.
Step 3: Conduct a security assessment of each in-scope AI system. This must cover AI-specific attack surfaces including adversarial inputs, prompt injection, model manipulation, and retrieval pipeline security, as well as the underlying application and infrastructure. The output is a test report that serves as technical documentation for both the EU AI Act and NIS2.
Step 4: Remediate findings and document. Findings from the assessment feed into your vulnerability management process. Remediation actions and their outcomes become part of the technical documentation required under both frameworks. This creates an audit trail that demonstrates ongoing, rather than point-in-time, security management.
Step 5: Establish ongoing monitoring. Both NIS2 and the EU AI Act require ongoing security management, not a one-time assessment. Define a recurring testing cadence, integrate AI system monitoring into your incident detection capability, and update technical documentation when the system changes.
FAQ
What does the EU AI Act require for cybersecurity?
The EU AI Act requires high-risk AI systems to achieve appropriate levels of cybersecurity resilience, accuracy, and robustness throughout their lifecycle. Article 15 specifically requires resilience against attempts by third parties to alter the system's use or performance, covering adversarial attacks, data poisoning, and model manipulation. This must be demonstrated through technical documentation including test results. The deadline for high-risk AI systems is 2 August 2026.
How does NIS2 apply to organisations that deploy AI?
NIS2 applies to essential and important entities across a wide range of sectors. If your organisation is NIS2-scoped and deploys AI systems, NIS2 requires you to include those systems within your risk management framework, vulnerability management process, and incident response capability. AI systems that process sensitive data, support critical processes, or are integrated with critical infrastructure are particularly likely to be in scope.
Can a single penetration test satisfy both NIS2 and EU AI Act requirements?
A single AI security assessment can produce evidence relevant to both frameworks if it covers the full scope of both. For NIS2, the assessment must cover the application security and infrastructure of the AI system. For the EU AI Act, it must additionally cover AI-specific attack vectors including adversarial inputs, prompt injection, model manipulation, and retrieval pipeline security. An assessment covering all of these produces a single report that documents compliance-relevant security evidence for both frameworks simultaneously.
What is the EU AI Act deadline for high-risk AI systems?
The EU AI Act entered into force on 1 August 2024. The requirements for high-risk AI systems under Annex III apply from 2 August 2026. Organisations deploying high-risk AI systems must demonstrate compliance with all applicable requirements including cybersecurity before placing systems on the market or putting them into service from that date. Systems already deployed before August 2026 face scrutiny at their next significant modification.
How does DORA interact with the EU AI Act for financial organisations?
Financial entities subject to DORA must treat AI systems used in ICT processes as ICT assets subject to DORA's resilience, testing, and third-party management requirements. When those AI systems also fall within the EU AI Act's high-risk categories, the same system faces both DORA's ICT security requirements and the EU AI Act's AI-specific cybersecurity requirements. An integrated assessment covering both ICT security testing and AI-specific adversarial testing produces documentation relevant to both DORA and the EU AI Act simultaneously.
Related services and resources
Sectricity provides AI systems penetration testing that covers both AI-specific attack vectors and conventional application security, producing a single report relevant to EU AI Act, NIS2, and DORA documentation requirements. For the regulatory background, see our guides on the EU AI Act and NIS2 compliance. For the technical detail on AI attack surfaces, see our guides on prompt injection and how to test AI system security. Start with a free security scan.