EU AI Act Specific Requirements Broken Down: How Versa Universal SASE Secures GenAI & Ensures Compliance

By Dhiraj Sehgal
Senior Director, Product Marketing
August 14, 2025

As we rapidly adopt generative AI (GenAI) tools to boost productivity, we must also brace for a new wave of regulatory compliance. With enforcement timelines approaching and security obligations tightening, our teams need security platforms that can implement, enforce, and maintain controls and provide continuous reporting as evidence. This blog explains how you can achieve the visibility, assessment, and mitigation of security risks required by the EU AI Act.

Understanding the EU AI Act: Scope and Security Requirements

The EU AI Act, finalized in 2024 and entering phased enforcement from 2025 onwards, introduces a comprehensive regulatory framework for AI technologies. It applies to any organization that operates in the EU or offers AI-based products or services to EU citizens—regardless of where the organization is based.

Key Security and Compliance Requirements:

The Act defines a risk-based categorization of AI systems and imposes strict obligations for high-risk AI use cases, particularly those involved in:

  • Biometric identification
  • Critical infrastructure management
  • Credit scoring or employment decisions
  • Educational performance evaluations

For all covered AI systems, the EU AI Act mandates the following core security and governance controls:

  • Cybersecurity measures to prevent data breaches, model poisoning, and adversarial attacks.
  • Human oversight to ensure that decisions can be monitored and overridden.
  • Transparency and explainability of AI model outputs.
  • Compliance with data privacy regulations, including data minimization and purpose limitation.
  • Documentation and logging of AI interactions and decision logic.
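
To make the documentation and logging obligation concrete, here is a minimal sketch of what a structured audit record for a single GenAI interaction could look like. The field names (user_id, risk_category, action_taken, and so on) are illustrative assumptions for this example, not a schema defined by the EU AI Act or by Versa.

```python
# A minimal sketch of a structured GenAI interaction audit record.
# All field names are illustrative assumptions, not an EU AI Act or Versa schema.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class GenAIAuditRecord:
    timestamp: str        # when the interaction occurred (UTC, ISO 8601)
    user_id: str          # who initiated the request
    tool: str             # which GenAI tool was accessed
    risk_category: str    # e.g. "Trustworthy" through "High Risk"
    action_taken: str     # "allow", "ask", or "block"
    data_classes_matched: list = field(default_factory=list)  # e.g. ["PII", "source_code"]

record = GenAIAuditRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    user_id="employee-4211",
    tool="chat.example-genai.com",
    risk_category="Moderate Risk",
    action_taken="ask",
    data_classes_matched=["PII"],
)

# Emit the record as JSON so it can be retained as audit evidence.
print(json.dumps(asdict(record), indent=2))
```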

Timeline: When Does the EU AI Act Take Effect?

The EU AI Act follows a staged implementation schedule:

  • Mid-2025: Prohibited AI systems must be withdrawn from use.
    • AI systems banned by the EU—such as social scoring, manipulative behavior, and real-time biometric ID in public spaces—must be removed. Use beyond this date risks enforcement action.
  • Late 2025: Requirements for high-risk systems begin to apply (documentation, risk management, human oversight).
    • High-risk AI systems (e.g., in healthcare, education, hiring, or critical infrastructure) must comply with rules for documentation, risk management, and human oversight to ensure safety and accountability.
  • 2026 and beyond: Broader enforcement and market surveillance; fines for non-compliance start to be levied (up to €35 million or 7% of global revenue).

Versa Universal SASE Platform: Compliance-Aligned Security Controls

The Versa platform maps to the relevant cybersecurity requirements of the EU AI Act and provides out-of-the-box capabilities to secure, govern, and report on GenAI usage across the enterprise.

Key Capabilities:

1. Shadow GenAI Visibility & Control

  • Detects access to GenAI tools via DNS and URL inspection
  • Categorizes GenAI tools using a five-level reputation model: Trustworthy, Low Risk, Moderate Risk, Suspicious, and High Risk
  • Enables reporting on sanctioned, tolerated, and unsanctioned tools
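
As a rough illustration of the DNS-based discovery and reputation tagging described above, the sketch below maps observed domains onto a five-level scale. The domain names and the reputation table are hypothetical placeholders, not Versa's actual catalog, classifications, or API.

```python
# A minimal sketch of DNS-based GenAI tool discovery and risk tagging.
# The domains and reputation tiers below are hypothetical examples; a real
# deployment would rely on the platform's own, much larger catalog.
GENAI_REPUTATION = {
    "chat.example-genai.com": "Trustworthy",
    "assist.example-copilot.io": "Low Risk",
    "summarize.example-app.net": "Moderate Risk",
    "freeprompt.example-site.org": "Suspicious",
    "unvetted-llm.example.xyz": "High Risk",
}

def classify_dns_query(domain: str) -> str:
    """Return the reputation tier for a queried domain, or flag it as unclassified."""
    return GENAI_REPUTATION.get(domain, "Unclassified (Shadow GenAI candidate)")

# Classify a handful of observed DNS queries.
for query in ["assist.example-copilot.io", "unvetted-llm.example.xyz", "new-ai-tool.example.dev"]:
    print(f"{query}: {classify_dns_query(query)}")
```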

2. Data Leakage Prevention Aligned to Risk Profiles

  • Pre-built data leakage profiles (PII, financial data, source code)
  • Detects and blocks sensitive data exfiltration through GenAI tools
  • Matches data protection profiles during real-time traffic analysis and documents the results
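
The sketch below shows, in simplified form, how pattern-based checks can flag PII or financial data in an outbound GenAI prompt. The regular expressions are deliberately basic illustrations and do not represent Versa's DLP engine or its pre-built profiles.

```python
# A minimal sketch of pattern-based data leakage detection for outbound GenAI prompts.
# The regexes are simplified illustrations, not production DLP detection profiles.
import re

DLP_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str) -> list:
    """Return the names of the DLP patterns matched in an outbound prompt."""
    return [name for name, pattern in DLP_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize this ticket: customer jane.doe@example.com, card 4111 1111 1111 1111"
matches = scan_prompt(prompt)
print("Blocked" if matches else "Allowed", "- matched profiles:", matches)
```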

3. Security Actions with Human Oversight

  • Allows customizable actions: Block, Allow, Ask (end-user prompt)
  • Enables differentiated handling of high-risk and tolerated tools
  • Incorporates human-in-the-loop enforcement in line with AI Act guidelines
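
A minimal sketch of how the Block / Allow / Ask actions could be selected, with the Ask path handing the final decision to a human. The risk tiers and the decision table are assumptions based on the description above, not Versa's actual policy engine.

```python
# A minimal sketch of Allow / Ask / Block policy selection with a human-in-the-loop
# step for the "Ask" case. The tiers and rules are illustrative assumptions only.
def decide_action(risk_tier: str, sensitive_data_found: bool) -> str:
    if sensitive_data_found or risk_tier == "High Risk":
        return "block"
    if risk_tier in ("Moderate Risk", "Suspicious"):
        return "ask"      # prompt the end user to confirm or justify the request
    return "allow"        # Trustworthy / Low Risk tools pass through

def enforce(risk_tier: str, sensitive_data_found: bool, user_confirms) -> bool:
    """Return True if the GenAI request should proceed."""
    action = decide_action(risk_tier, sensitive_data_found)
    if action == "ask":
        # Human oversight: the end user (or an approver) makes the final call.
        return user_confirms()
    return action == "allow"

# Example: a tolerated, moderate-risk tool with no sensitive data triggers an end-user prompt.
print(enforce("Moderate Risk", False, user_confirms=lambda: True))
```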

4. Comprehensive Reporting for Compliance Audits

  • Drill-down dashboards showing:
    • High-risk users and tools
    • Sensitive data matched in GenAI interactions
    • Usage justification for tolerated tools (e.g., code generators)
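
To give a flavor of how such drill-down views can be derived from raw logs, the sketch below rolls a few audit records up into the three summaries listed above. The record layout is a hypothetical example rather than a Versa log format.

```python
# A minimal sketch of rolling GenAI audit records up into compliance-style summaries:
# high-risk users, sensitive-data matches, and tolerated-tool usage.
# The record layout is a hypothetical example, not a Versa log format.
from collections import Counter

records = [
    {"user": "employee-4211", "tool": "unvetted-llm.example.xyz", "risk": "High Risk", "data_matches": ["PII"]},
    {"user": "employee-4211", "tool": "code-helper.example.dev", "risk": "Moderate Risk", "data_matches": []},
    {"user": "employee-3090", "tool": "code-helper.example.dev", "risk": "Moderate Risk", "data_matches": ["source_code"]},
]

high_risk_users = Counter(r["user"] for r in records if r["risk"] == "High Risk")
sensitive_matches = Counter(m for r in records for m in r["data_matches"])
tolerated_tool_usage = Counter(r["tool"] for r in records if r["risk"] == "Moderate Risk")

print("High-risk users:", dict(high_risk_users))
print("Sensitive data matched:", dict(sensitive_matches))
print("Tolerated tool usage:", dict(tolerated_tool_usage))
```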

The following table highlights specific areas where Versa can support compliance efforts—particularly around secure access, data protection, threat detection, and continuous risk monitoring.

| EU AI Act Requirement | Relevant EU AI Act Articles | Versa Universal SASE Capability | Explanation |
| --- | --- | --- | --- |
| Monitoring and logging of AI usage to ensure oversight and prevent unauthorized systems | Art. 15 (Logging Capabilities); Art. 29 (Post-Market Monitoring) | Shadow GenAI Tool Discovery: Detects GenAI tools accessed by employees (14K+ tools, incl. third-party SaaS AI apps) | AI systems must log usage and enable oversight. Versa provides context and logs of unsanctioned tools and helps build a database. |
| Risk-based access enforcement to mitigate cybersecurity threats | Art. 9 (Risk Management System); Art. 10 (Data and Data Governance) | GenAI Tool Classification by Risk: Tags tools using a 5-level risk score: Trustworthy → Low Risk, Limited Risk, High Risk, or Block | Enables enforcement of differentiated controls on high-risk AI use cases. |
| Prevent use of AI systems that risk unauthorized data processing or leakage | Art. 10 (Data Governance); Art. 27 (General Obligations of Providers) | Data Protection Profiles: Identifies PII, source code, or financial data exfiltrated to GenAI tools | Supports data governance by ensuring that sensitive data is not exposed to unapproved AI systems or tools. |
| Ensure human oversight and the ability to override or block AI-based operations | Art. 14 (Human Oversight) | Real-Time Policy Enforcement: Allow/Ask/Block based on risk and usage context | Users are prompted before action (Ask), providing human-in-the-loop oversight; admins can block or allow access. |
| Limit access based on user role, authorization level, and use-case legitimacy | Art. 9 (Risk Management); Art. 26 (Access Control) | Identity-Aware Enforcement: Policies linked to user, device, risk profile, and role | Enforces access control aligned to risk category and user profile, reducing the chance of unauthorized or discriminatory use. |
| Prevent usage of AI for prohibited purposes, such as biometric surveillance or social scoring | Title II (Prohibited Practices); Art. 5 | Prohibited Use Enforcement: Blocks use of tools violating IP rights or enabling illegal activity | Versa can block tools or use cases that fall under prohibited categories. |
| Prevent unauthorized personal data processing; mandate explicit consent | GDPR + Art. 10 (Data Governance) | Consent Enforcement and Privacy Protections: Restricts uploading non-public or customer data to GenAI tools | Enforces CASB and DLP rules for authorized AI applications, especially for training on customer data, and provides auditing of non-compliant uploads. |
| AI systems must be auditable and subject to post-market monitoring and compliance verification | Art. 15 (Logging Capabilities); Art. 61–63 (Market Surveillance, Penalties) | Unified Logging and Audit Trails: Captures full usage patterns, decisions, and violations | Helps enterprises generate verifiable logs and evidence to support internal audits or EU AI Act compliance inspections. |
| AI governance requires transparency and reporting on incidents, misuse, and operational data | Art. 29 (Post-Market Monitoring); Art. 54 (Incident Reporting) | Dashboard Reporting and Alerts: Insight into high-risk users, data flows, and tool access | Versa provides reports on anomalies, violations, and usage of unapproved GenAI tools that may trigger mandatory reporting under the Act. |
| Ensure granular, least-privilege access to AI services and data over the network | Art. 14 (Human Oversight); Art. 26 (Access Control Mechanisms) | Identity-Aware ZTNA Policies: Per-user, per-session access control to AI tools and data | Prevents over-permissive access to AI endpoints or APIs; access policies are enforced per user identity, device risk, and location. |
| Prevent hidden data leakage via encrypted traffic to external AI services | Art. 10 (Data Governance); Art. 15 (Logging Capabilities) | Encrypted Traffic Inspection (SSL Decryption): Inspects encrypted GenAI traffic at the edge | Ensures encrypted communications with AI tools don't bypass compliance or DLP rules, which is critical for visibility and transparency. |
| Prevent exfiltration of regulated or personal data during AI usage | Art. 10 (Data Governance); GDPR + AI Act interoperability | Inline DLP + CASB for AI Traffic: Controls sensitive data exposure in SaaS/AI APIs | Versatile DLP profiles control which files, fields, and patterns can traverse the network to GenAI tools, enforcing data governance. |
| Segregate traffic for high-risk AI systems or network segments | Art. 17 (Quality Management); Art. 28 (Third-Party AI Systems) | Application-Aware SD-WAN Routing: Routes high-risk or GenAI traffic via secure gateways | Enables isolation of untrusted or high-risk AI traffic, complying with network segmentation and vendor risk mitigation guidance. |
| Contain breaches and reduce the attack surface for AI workloads | Art. 9 (Risk Management); NIS2-aligned provisions (Recital 84) | Microsegmentation for AI Workloads: Limits lateral movement across AI services, APIs, and URLs | Versa supports fine-grained segmentation between AI services, infrastructure, and users, which is core to risk containment. |
| Discover unauthorized GenAI access at the network layer | Art. 15 (Logging); Art. 29 (Monitoring) | DNS Layer Visibility and Control: Detects GenAI tools via DNS requests and applies policy | DNS telemetry allows discovery of Shadow GenAI tools and enforcement before a full connection is established. |
| Maintain traceable logs of AI-related communications | Art. 15 (Logging Capabilities); Art. 29 (Post-Market Monitoring) | Real-Time Telemetry & Flow Logs: Captures complete session context for GenAI access | Versa generates unified flow logs per user and per tool, helping build auditable records for investigations and compliance audits. |
| Apply security policies uniformly regardless of location or network | Art. 27 (General Provider Obligations); Art. 54 (Incident Reporting) | Resilient Global Policy Enforcement: Cloud-native architecture ensures consistent control | Enforces consistent AI access policies across branches, cloud, and remote users, meeting global control requirements. |
| Adjust network privileges based on live risk evaluation | Art. 9 (Risk Management); Art. 14 (Oversight & Control) | Continuous Risk-Adaptive Networking: Dynamic policy changes based on user risk or tool reputation | Network policies adapt in real time as tool reputations or user behaviors change, supporting proactive enforcement. |
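
As one way to picture how several of the rows above (identity-aware ZTNA, risk-based enforcement, and human oversight) combine in a single decision, here is a minimal sketch of per-session access evaluation. The attribute names, roles, and rules are illustrative assumptions, not the Versa policy model.

```python
# A minimal sketch of identity-aware, per-session access evaluation for a GenAI tool,
# combining user role, device risk, and tool reputation. The attributes and rules
# are illustrative assumptions, not the Versa policy model.
def evaluate_session(user_role: str, device_risk: str, tool_reputation: str) -> str:
    """Return "allow", "ask", or "block" for a single GenAI session."""
    if tool_reputation == "High Risk" or device_risk == "compromised":
        return "block"
    if user_role not in ("engineering", "marketing"):    # least-privilege default: deny
        return "block"
    if tool_reputation in ("Moderate Risk", "Suspicious"):
        return "ask"                                     # human-in-the-loop confirmation
    return "allow"

print(evaluate_session("engineering", "healthy", "Low Risk"))      # allow
print(evaluate_session("engineering", "healthy", "Suspicious"))    # ask
print(evaluate_session("finance", "healthy", "Trustworthy"))       # block (role not entitled)
```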
