When Words Become Weapons: How Versa SASE Helps Mitigate Prompt Injection 

dhiraj-sehgal
By Dhiraj Sehgal
Senior Director, Product Marketing
October 28, 2025
in
Share
Follow

Introduction 

Prompt injection is increasingly recognized as a rising class of risk in AI systems. This is a critical threat vector in which attackers craft natural language inputs to subvert model instructions, bypass guardrails, or leak sensitive data. 

In this blog we’ll expand on a particular dimension of AI risk: prompt engineering. Where the prior blog was about the raw material (unstructured data), this blog is about the instructions (prompt injection). Together, they form the inputs that power AI workflows. Securing both is essential to protect intellectual property, maintain compliance, and ensure AI delivers value safely. 

Why Prompt Engineering Matters for Security 

AIs can be manipulated to override their intended behaviors, with prompts being able to extract sensitive information, or trigger unauthorized actions. Even well-meaning employees may inadvertently craft prompts that can cause an AI to expose a company’s unstructured data. 

This makes prompt engineering not just a developer skill, but an attack vector that needs to be given appropriate security considerations. Just as careless handling of unstructured data can lead to data leaks, careless or malicious prompt design can compromise AI safety and security. 

Risks in Prompt Engineering 

Prompt engineering expands the AI attack surface by transforming natural language into executable logic. Unlike conventional software vulnerabilities, these risks arise from adversarial inputs and instruction manipulation, creating new vectors for data exfiltration, policy bypass, and privilege escalation. 

  • Instruction Injection: Attackers embed hidden instructions into content (e.g., a support ticket or uploaded document) that trick the AI into revealing information or bypassing guardrails. 
  • Data Leakage via Prompts: Employees unknowingly paste sensitive code, designs, or documents into an AI assistant. Without controls, this becomes a compliance violation. 
  • Role Confusion: Prompts may override system or developer instructions, causing AI agents to act outside of policy. 
  • Shadow AI Inputs: Unmonitored AI tools create an uncontrolled surface where sensitive prompts and data intersect outside governance. 

Connecting Risks to Controls 

The risks outlined above—whether it’s instruction injection, data leakage through prompts, or role confusion—share a common theme: sensitive information can be exposed, manipulated at the point of input, held hostage with potential backdoor access creating ransomware scenarios. Prompts are not just instructions for AI; they can also become a vehicle for attackers or careless users to bypass security policies intended to protect the business’s assets, such as its confidential data. 

This is why extending security controls directly into prompt flows is critical. By applying regex, keyword matches, proximity rules, and custom identifiers to prompt traffic, organizations gain the ability to: 

  • Detect and block sensitive information before it is submitted to AI services. 
  • Flag adversarial instructions that attempt to override intended behavior. 
  • Enforce consistent, policy-driven guardrails across all AI interactions. 

How Versa DLP capabilities can help reduce prompt engineering risks  

Data Loss Prevention (DLP) has long been a foundational control for securing enterprise data. Traditionally, DLP is applied to monitor and govern how sensitive information—such as PII, financial data, or source code—moves across applications, endpoints, and user interactions. It helps prevent unintentional sharing, insider misuse, or exfiltration through email, cloud apps, collaboration platforms, or web uploads. 

When applied to AI, the same challenges emerge in new ways: employees may paste sensitive information into prompts, or adversarial instructions may attempt to coax models into exposing confidential data. This is where Versa’s DLP capabilities become critical—extending familiar detection methods like regex, keyword matching, and proximity rules directly into prompt traffic to reduce these risks. 

  • Content analysis with regex & keywords (header/body/payload), plus proximity windows to reduce false positives. 
  • Custom data identifiers & profiles, alongside Exact Data Match(EDM)/Indexed Document Matching (IDM) and pre-canned profiles (PCI, HIPAA, GDPR). 
  • Custom URL categories (regex or fixed strings) for grouping GenAI tools and applying policy. 
  • TLS/SSL inspection so DLP sees what’s inside encrypted GenAI/API traffic. 
  • AI/ML for security augmenting legacy pattern matching with document fingerprinting and User End Behavior Analytics (UEBA). 

Four common prompt risk scenarios and how to stop them with Versa DLP 

As an example, we’ll take four common scenarios and demonstrate simple policy controls to secure your data. In each scenario we’ll show: what to detect, sample regex/keywords, and a Versa policy example you can run with today’s controls. Each example assumes that TLS decryption is enabled within Versa for traffic sent to an AI application. 

1) Prevent PII leakage inside prompts 

Goal: Stop employees from pasting customer PII (such as SSN/CC) into AI assistants. 

Detect: 

  • Keywords: social security number (SSN), credit card number (CC), card verification value (CVV), date of birth (DOB), medical record number (MRN)

Versa policy recipe: 

  1. If needed create Custom Data Patterns using regex + keywords and add proximity (e.g., “SSN” within 40 bytes of a match of regex pattern \b(?!000|666|9\d\d)\d{3}-?\d{2}-?\d{4}\b). 
  1. Add the data patterns, or leverage Versa’s large number of predefined data patterns (e.g. Social Security, PCI DSS, and more) to a DLP Rule and associated DLP Profile such as “PII-High”. Ensure the action is set to Block and that you select the appropriate protocol, context, and input file types (e.g. DOCX, PDF, TXT, CSV). 
  1. Configure an Internet Protection Policy and specify the AI application (e.g. ChatGPT), and enable the DLP Security Profile to block when the DLP profile (e.g. PII-High) matches a data pattern. 

2) Stop secret keys / code exfiltration in prompts 

Goal: Prevent sharing credentials, keys, or internal code in AI chats. 

Detect 

  • Keywords: api_key, secret, token, private key, confidential, your internal codenames 
  • Regex starters: 
  • Private keys: 

—–BEGIN (?:RSA|EC|DSA|OPENSSH) PRIVATE KEY—–[\s\S]+?—–END \1 PRIVATE KEY—– 
 

  • AWS Access Key ID: 

\bAKIA[0-9A-Z]{16}\b 
 

  • Google API key: 

\bAIza[0-9A-Za-z\-_]{35}\b 
 

  • Slack token (classic): 

\bxox[baprs]-[0-9A-Za-z]{10,48}\b 
 

Versa policy recipe 

  1. Build a “Secrets” Custom Data Pattern with the appropriate regex above + keywords (do not share, internal) and add proximity.  
  1. Add a document fingerprint for your internal codebase or config files if feasible; complement with EDM for known secret patterns. 
  1. Add the data patterns to a DLP Rule and associated DLP Profile such as “SecretsDLPProfile”. Ensure the action is set to Block and that you select the appropriate protocol, context, and input file types (e.g. DOCX, PDF, TXT, CSV). 
  1. Configure an Internet Protection Policy and specify the AI application (e.g. ChatGPT), and enable the DLP Security Profile to block when the DLP profile (e.g. SecretsDLPProfile) matches a data pattern. 

3) Catch prompt-injection “bypass” phrases before they hit the model 

Goal: Filter adversarial phrasing that tries to override system policies. 

Detect 

  • Keywords (case-insensitive): ignore previous instructions, disregard above, reveal system prompt, bypass, exfiltrate, leak, override policy, escalate privileges 
  • Regex starters (case-insensitive): 
  • Ignore/override: 

(?i)\b(ignore|disregard)\s+(all|previous|prior)\s+(instructions|rules)\b 
 

  • Reveal internal instructions: 

(?i)\b(reveal|print|show)\s+(the\s+)?(system|developer)\s+prompt\b 
 

  • Exfiltration intent: 

(?i)\b(exfiltrate|leak|send)\s+(data|information)\b 
 

Versa policy recipe 

  1. Create a “Prompt-Bypass” Custom Data Pattern with the appropriate regex above and keywords. 
  1. Add the data patterns to a DLP Rule and associated DLP Profile. Ensure the action is set to Block and that you select the appropriate protocol, context, and input file types (e.g. DOCX, PDF, TXT, CSV). 
  1. Configure an Internet Protection Policy and specify the AI application (e.g. ChatGPT), and enable the DLP Security Profile to block when the DLP profile matches a data pattern. Use the same profile for uploads to knowledge base applications used by RAG (retrieval-augmented generation) systems to prevent embedding deceptive instructions. 

4) Govern Shadow AI usage (unsanctioned tools) 

Goal: Discover and control AI sites/apps that bypass policy. 

Detect 

  • Build a regex-driven URL category “Shadow-AI” (catch new/long-tail AI tools). Add URL string or URL data regex patterns representing each site you wish to detect so you can update the list quickly. An AI site may use various website URLs to process AI prompts, for example. 

Versa policy recipe 

  1. Create a “Shadow-AI-Deny” Custom URL Category with the appropriate URL regex patterns or URL strings above for each site you want to block. 
  1. Configure an Internet Protection Policy and apply the URL category above to the corresponding policy rule. Ensure the security enforcement is set to Deny or Reject for traffic that is sent to Shadow-AI-Deny URLs, thus allowing only sanctioned AI (whitelist category) sites.  
  1. Pair with DLP profiles from scenarios 1–3 so even sanctioned usage is guarded.  

Summary 

Prompts are a powerful way to interact with AI using natural language instructions, but they also create new threat opportunities for data extraction, data loss, and potential new vectors for corporate infiltration. Versa’s DLP capabilities—regex and keyword detection, proximity rules, and custom identifiers—can extend directly into prompt flows, enabling organizations to block unsafe inputs and protect sensitive information in real time. 

Recent Posts













Gartner Research Report

2025 Gartner® Magic Quadrant™ for SASE Platforms

Versa has for the third consecutive year been recognized in the Gartner Magic Quadrant for SASE Platforms and is one of 11 vendors included in this year's report.