EU AI Act: What Does It Mean for Your Security and Compliance after the Omnibus Agreement (May 2026)?
The May 2026 Digital Omnibus agreement moved EU AI Act high-risk deadlines to 2 December 2027 (Annex III) and 2 August 2028 (Annex I). What does that mean for your AI security, compliance, and governance?
TL;DR
The EU AI Act requires organizations to ensure AI systems are secure, transparent, and controllable. The Digital Omnibus agreement of 7 May 2026 moved the deadline for stand-alone high-risk AI systems from 2 August 2026 to 2 December 2027. AI embedded in regulated products under Annex I (medical devices, machinery, toys) shifts to 2 August 2028. Transparency obligations for AI-generated content apply already from 2 December 2026. Companies must identify AI risks, perform security testing, implement monitoring, and provide audit evidence.
The EU AI Act (Regulation EU 2024/1689), published in the Official Journal on 12 July 2024, is the world's first comprehensive AI regulation. The Digital Omnibus agreement of 7 May 2026 moved most high-risk obligations to 2 December 2027 (stand-alone systems, Annex III) and 2 August 2028 (AI embedded in regulated products, Annex I). Transparency obligations for AI-generated content apply from 2 December 2026.
For organizations already subject to NIS2 (Directive EU 2022/2555) or DORA (Regulation EU 2022/2554), the AI Act adds a layer of AI-specific obligations on top of existing cybersecurity requirements.
The ENISA Threat Landscape 2025 confirms that adversarial use of AI, including prompt injection and model poisoning, is already a documented threat in the EU. The MITRE ATT&CK framework now includes AI-specific attack techniques that organizations should validate through security testing.
For AI systems classified as high-risk under Annex III of the AI Act, security testing follows methodologies aligned with OWASP Top 10 for LLM Applications and established penetration testing standards.
What is the EU AI Act and why it matters for organizations
The EU AI Act is the first comprehensive regulation focused specifically on artificial intelligence. While GDPR focused on personal data, the AI Act focuses on the safety, reliability, and control of AI systems.
Under the Digital Omnibus agreement of May 2026, organizations must demonstrate by 2 December 2027 that stand-alone high-risk AI is designed and used according to clear risk frameworks. For AI embedded in regulated products this deadline shifts to 2 August 2028. This means AI is no longer just an innovation initiative, but a full governance and security responsibility.
For companies using AI in processes, products, or decision-making, the risk profile changes significantly. AI introduces new attack surfaces and new compliance obligations that go beyond traditional cybersecurity.
Why the EU AI Act is fundamentally a security and risk management topic
Although often seen as regulatory compliance, the core of the AI Act is risk management and security. Organizations must demonstrate that their systems are robust and protected against manipulation and misuse.
Key security requirements include:
- protection against data poisoning
- protection against prompt injection
- robustness against adversarial inputs
- logging and traceability
- access control
- monitoring of model behavior
This makes AI security testing a necessary step for organizations deploying AI.
Which organizations fall under the EU AI Act
The AI Act applies to both developers and organizations that use AI. Companies implementing AI tools also have responsibilities related to risk and governance.
High-risk AI systems
High-risk systems fall under the strictest requirements. These are systems that impact critical processes or fundamental rights.
Examples include:
- HR and recruitment systems
- fraud detection
- financial models
- critical infrastructure
- healthcare
- identity verification
These systems require extensive risk assessments, security testing, and monitoring.
General-purpose AI and internal applications
Organizations using AI for internal automation must also evaluate risks and implement appropriate controls.
What changes concretely for security and compliance teams
The EU AI Act introduces expectations that align with existing cybersecurity frameworks but with additional focus on AI behavior.
Mandatory AI risk assessments
Organizations must identify and document risks.
Security testing of AI systems
AI must be tested for vulnerabilities such as manipulation and data leakage.
Continuous monitoring and logging
Behavior and output must be monitored.
Incident management and reporting
AI incidents fall under reporting obligations.
Governance and documentation
Organizations must demonstrate that controls are in place.
How the EU AI Act relates to NIS2 and DORA
The AI Act does not exist in isolation. For many organizations, compliance will involve multiple frameworks.
- NIS2 requires risk management and incident reporting
- DORA focuses on resilience and testing
- ISO 27001 requires governance and controls
The AI Act adds specific requirements around model risks and AI behavior.
The biggest AI security risks for organizations
AI creates a new attack surface. Autonomous systems and agents can be misused if not properly secured.
Agent compromise
Agents can be compromised and perform unauthorized actions.
Prompt injection
Manipulated input can change AI behavior.
Data leakage
AI may expose sensitive information through outputs.
Model manipulation and poisoning
Attackers may influence models to change behavior.
These risks show why AI security testing is becoming essential.
How to prepare your organization for the new deadlines (Dec 2026, Dec 2027, Aug 2028)
Organizations preparing for the EU AI Act should follow a structured approach.
1. Inventory all AI systems
Identify where AI is used.
2. Perform an AI risk assessment
Analyze impact and threats.
3. Test AI systems for security vulnerabilities
Perform specialized testing such as prompt injection testing.
4. Implement governance and monitoring
Establish logging and policies.
5. Document controls and evidence
Audit trails will become essential.
Why AI security testing is becoming a new standard
Just as pentesting became standard practice for applications, AI security testing is becoming necessary for organizations using AI.
Companies must demonstrate that systems are secure against misuse and manipulation. This requires a combination of technical testing, governance, and continuous validation.
Conclusion
The EU AI Act makes it clear that artificial intelligence is becoming a regulated technology where safety, transparency, and control are central. Organizations must demonstrate that their AI systems are securely designed and managed, and that risks are actively mitigated.
AI security testing and governance will therefore become essential components of modern cybersecurity strategies. Companies that start risk assessments and testing today will reduce compliance stress and build a strong security foundation.
If you want concrete insight into the risks of your AI systems and how to test and secure them, explore our specialized approach to AI systems security testing.
Frequently Asked Questions about the EU AI Act
When will the EU AI Act fully apply after the May 2026 Omnibus agreement
The Digital Omnibus agreement of 7 May 2026 sets three deadlines: 2 December 2026 for transparency obligations on AI-generated content, 2 December 2027 for stand-alone high-risk AI under Annex III, and 2 August 2028 for high-risk AI embedded in regulated products under Annex I. Fines remain up to 30 million euros or 6% of global turnover.
Does the AI Act apply if we only use AI tools
Yes, deployers also have obligations related to governance and risk assessment.
Is AI security testing mandatory
For high-risk systems you must demonstrate that risks are controlled, making testing necessary in practice.
What is the difference between AI compliance and traditional cybersecurity
AI introduces new risks such as prompt injection, model manipulation, and unpredictable output behavior, requiring additional controls and monitoring.
Related services and resources
Sectricity specialises in AI systems penetration testing, including prompt injection testing, data leakage assessment, and guardrail validation for LLMs and chatbots. Our broader penetration testing services cover web applications, cloud and API environments, and network infrastructure.
For organisations navigating both AI regulation and cybersecurity compliance, explore our audit-ready pentesting mapped to NIS2, ISO 27001, and DORA, or check your readiness with our NIS2 compliance program.