Part 2. GenAI Control: Who Can Use What, With What Data, and Under What Rules 

By Kumar Mehta
Founder and CDO, Versa Networks
February 25, 2026

Control means setting clear rules for AI usage and enforcing them in a way that does not break the business. This is the point where many companies get stuck. Some teams over-block and kill adoption. Other teams do nothing and accept silent risk. The goal is neither. The goal is safe adoption by default. 

Start with one principle: AI needs “least privilege” 

Most security programs already understand least privilege for humans and applications. AI needs the same logic. Least privilege simply means: the system should have only the access it needs to do its job, and nothing more. An AI assistant should not automatically have access to your production database, your internal documents, your ticketing system, your code repositories, or your customer records. 

Instead, AI access should be scoped to: 

  • The minimum model or models needed 
  • The minimum data needed 
  • The minimum tools needed 
  • The minimum actions needed 

Imagine you give an AI assistant access to your entire CRM “because it might be useful.” A week later, someone asks the assistant, “show me all customers who churned this month,” and the assistant returns a full list including personal details. That is not a clever feature. That is an unintentional data export. Least privilege prevents this by restricting the assistant to the smallest set of fields and records needed for the task. 
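The CRM scenario above can be sketched in a few lines. This is a hypothetical illustration, not a real CRM API: the field names and the allowlist are invented for the example.

```python
# Hypothetical sketch: restrict an assistant's CRM access to an explicit
# field allowlist instead of exposing whole records (least privilege).
# Field names are illustrative.
ALLOWED_FIELDS = {"account_name", "plan_tier", "renewal_date"}

def scope_record(record):
    """Return only the fields the assistant is permitted to see."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "account_name": "Acme Corp",
    "plan_tier": "enterprise",
    "renewal_date": "2026-06-01",
    "contact_email": "jane@acme.example",  # personal data: stripped
    "churn_notes": "internal only",        # sensitive: stripped
}
print(scope_record(record))
```

Deny-by-default is the important property: a new CRM field is invisible to the assistant until someone deliberately adds it to the allowlist.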

The two types of control you must build

In GenAI systems, there are two different areas you need to control. These areas are related, but they are not the same. If you only secure one of them, you still have a gap. I see these as the two core tenets of your control framework. 

A) Controlling model access 

Model access is about how users, applications, and agents send requests to AI models and receive responses back. You need to control things like: 

  • Which teams can use which models 
  • Which applications are allowed to call external model providers 
  • Whether customer data or sensitive internal data can be included in a prompt 
  • Whether model outputs must be logged, monitored, or redacted 

As an example, a marketing team might be allowed to use a public model for rewriting website copy. A customer support team might be allowed to use an approved enterprise model to draft responses. A finance team might be required to use a private or internally hosted model because the prompts contain pricing, contracts, or customer account details. 
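The team-to-model mapping above can be expressed as a simple deny-by-default lookup. A minimal sketch; the team names and model identifiers are illustrative, not real provider names:

```python
# Hypothetical sketch: map each team to the model tier its data
# sensitivity allows, denying by default. Names are illustrative.
MODEL_POLICY = {
    "marketing": "public-model",               # non-sensitive copy editing
    "support":   "approved-enterprise-model",  # drafting customer replies
    "finance":   "private-hosted-model",       # pricing and contract data
}

def select_model(team):
    """Return the approved model for a team, or refuse outright."""
    model = MODEL_POLICY.get(team)
    if model is None:
        raise PermissionError(f"No approved model for team '{team}'")
    return model
```

Raising on an unknown team, rather than falling back to a default model, is what makes the mapping a control instead of a suggestion.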

B) Controlling tool access (agent actions) 

Tool access is about what the AI can do beyond answering a question. This becomes important as soon as an AI system can “take actions” instead of just generating text. You need to control things like: 

  • Whether the AI can read Slack messages 
  • Whether it can open GitHub pull requests 
  • Whether it can query databases 
  • Whether it can create Jira tickets 
  • Whether it can run scripts, workflows, or automation jobs 

It is one thing for an assistant to suggest a database query. It is another for it to run the query automatically, save the results to a file, and upload them to a chat channel. Tool access turns AI into an operator. That is why most major failures happen when tool access is not governed. 
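One way to govern tool access is to separate read tools from write tools and deny anything not on either list. A minimal sketch with invented tool names:

```python
# Hypothetical sketch: an agent tool registry that auto-allows read
# tools, requires explicit grants for write tools, and denies unknown
# tools by default. Tool names are illustrative.
READ_TOOLS  = {"slack.read", "db.query_readonly", "jira.search"}
WRITE_TOOLS = {"github.open_pr", "jira.create_ticket", "scripts.run"}

def authorize_tool(tool, granted_writes):
    """Decide whether the agent may call this tool."""
    if tool in READ_TOOLS:
        return True
    if tool in WRITE_TOOLS:
        return tool in granted_writes  # write needs an explicit grant
    return False  # unknown tools are denied by default
```

The unknown-tool branch matters most: when a new integration appears, the agent cannot use it until someone classifies it.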

What “policy” means in real life (not buzzwords)

A policy is simply a rule that answers five basic questions: 

  • Who can do something? 
  • What are they allowed to do? 
  • What data can they use? 
  • Which model or tool are they allowed to use? 
  • What should happen if the rule is violated (log, block, redact, or require approval)? 
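The five questions map directly onto a rule record plus an evaluator. A minimal sketch; the field names and role/data-class labels are illustrative:

```python
# Hypothetical sketch: the five policy questions as one rule record.
# Field names and values are illustrative.
from dataclasses import dataclass

@dataclass
class Policy:
    who: set           # which roles can act
    what: set          # which actions are allowed
    data: set          # which data classes may be used
    models: set        # which models or tools are allowed
    on_violation: str  # "log", "block", "redact", or "approve"

def evaluate(policy, role, action, data_class, model):
    """Return 'allow' only if all checks pass, else the violation action."""
    allowed = (role in policy.who and action in policy.what
               and data_class in policy.data and model in policy.models)
    return "allow" if allowed else policy.on_violation
```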

Policies sound abstract until you write a few examples that teams can actually follow. Below are examples that work well in real enterprises. 

Policy example #1 — No sensitive data to public models 

Employees can use public chatbots for general questions, brainstorming, or rewriting. However, they are not allowed to paste sensitive information into public tools. Sensitive information includes customer personal data, health information, payment data, source code, confidential internal documents, and credentials. 

As an example, a support rep pastes an error log into a chatbot to troubleshoot faster. The log contains a customer email address and an API token. A good policy plus enforcement would detect the token and block or redact it before it leaves the company. 
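The detect-and-redact step can be approximated with pattern matching before any text leaves the company. A minimal sketch; real deployments use proper classifiers and secret scanners, and these two regexes are illustrative, not exhaustive:

```python
# Hypothetical sketch: redact obvious emails and API tokens from text
# before it is sent to a public model. Patterns are illustrative only.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
TOKEN = re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b")

def redact(text):
    """Replace detected emails and tokens with placeholders."""
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    return TOKEN.sub("[REDACTED_TOKEN]", text)

log_line = "auth failed for jane@acme.example with sk_ABCDEF1234567890abcd"
print(redact(log_line))
```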

Policy example #2 — Only approved models for customer-facing workflows 

Customer-facing AI features must use approved model providers or approved private deployments. Prompts and outputs must be logged and monitored so the company can investigate incidents and prove compliance. 

As an example, a product team builds an “AI assistant” feature that answers customer questions. If that feature uses an unapproved model endpoint, you may not have visibility into what data is being sent, where it is stored, or how it is retained. An approved model list and a gateway enforce the rule automatically. 
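At the gateway, the approved-model rule reduces to an endpoint allowlist checked before any prompt is forwarded. A minimal sketch with invented URLs:

```python
# Hypothetical sketch: a gateway check that rejects calls to model
# endpoints missing from the approved list. URLs are illustrative.
APPROVED_ENDPOINTS = {
    "https://models.internal.example/v1",
    "https://api.approved-provider.example/v1",
}

def gateway_check(endpoint):
    """Raise before any prompt leaves for an unapproved endpoint."""
    if endpoint not in APPROVED_ENDPOINTS:
        raise PermissionError(f"Unapproved model endpoint: {endpoint}")
```

Because the check runs in the gateway rather than in each product team's code, a new feature cannot quietly route traffic to an unvetted provider.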

Policy example #3 — Write actions require approvals 

AI can draft actions, but it should not execute high-risk actions without explicit approval. For example: 

  • The AI can draft a pull request, but it cannot merge code 
  • The AI can create a Jira ticket, but it cannot close incidents 
  • The AI can suggest a database query, but it cannot run destructive queries 
  • The AI can draft an email, but it cannot send it automatically to a customer list 

As an example, an engineering agent is connected to GitHub. A prompt injection in a document tells the agent to “fix security quickly by removing authentication checks.” If the agent has permission to merge, it can push a dangerous change. If approvals are required for write actions, the change stops at the review step. 
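The draft-versus-execute split can be enforced with a small approval gate: high-risk actions land in a pending queue instead of running. A minimal sketch; the action names are illustrative:

```python
# Hypothetical sketch: high-risk write actions are queued for human
# approval instead of executing directly. Action names are illustrative.
HIGH_RISK = {"merge_pr", "close_incident",
             "run_destructive_query", "send_customer_email"}

pending = []  # drafts waiting for a human reviewer

def execute(action, payload):
    # Stand-in for the real integration (GitHub, Jira, etc.)
    return f"executed {action}"

def submit(action, payload):
    """Queue high-risk actions for approval; run routine ones directly."""
    if action in HIGH_RISK:
        pending.append({"action": action, "payload": payload})
        return "pending_approval"
    return execute(action, payload)
```

In the prompt-injection scenario above, the malicious merge would stop in `pending`, where a reviewer sees an authentication check being removed and rejects it.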

The minimum controls every company should implement

If you want a simple “starter pack,” implement these controls first. Even a lightweight version of these controls will put you ahead of most companies. 

  1. An approved model list. This defines which models and providers are allowed for different types of work. 
  2. An approved tool list. This defines which tools agents are allowed to use and which are forbidden. 
  3. Sensitive data rules. This defines what data is never allowed to leave the enterprise and what must be redacted. 
  4. Logging and audit trails. This gives you an answer to “who did what, when, and why” for AI usage. 
  5. Rate limits and cost controls. This prevents runaway usage, abuse, and unexpected bills. 
  6. Rules for write actions. This prevents AI from taking destructive steps without review or approval. 
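Even the rate-limit item can start small. A minimal per-identity token bucket sketch; the rate and burst parameters are illustrative:

```python
# Hypothetical sketch: a per-identity token bucket that caps AI request
# rates to prevent runaway usage and surprise bills.
import time

class TokenBucket:
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec          # refill rate, tokens per second
        self.capacity = burst             # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        """Spend one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

One bucket per user, workload, or agent identity keeps a single runaway caller from exhausting a shared budget.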

The 30-day Control playbook (Operator version) 

The fastest way to build real control is to implement it in phases. The goal is to improve safety without stopping productivity. 

Week 1: Write an “AI allowed use” policy people can follow 

You need something simple and human that employees can remember. 

Allowed: 

  • Brainstorming and general questions 
  • Rewriting and summarization of non-sensitive text 
  • Drafting internal emails or documents that do not include sensitive data 

Not Allowed: 

  • Customer personal data 
  • Credentials or API keys 
  • Proprietary source code (unless explicitly approved) 
  • Confidential internal documents (unless using an approved secure model) 

Week 2: Enforce identity on AI usage 

Avoid shared keys and anonymous usage. You want to know who is using AI and which system is using AI. 

You should be able to distinguish: 

  • User identity for employees 
  • Workload identity for applications 
  • Agent identity for automated systems and bots 

As an example, if ten teams share one API key, you cannot investigate an incident properly. When something goes wrong, you need to know which service made the request and which user triggered it. 
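In practice this means every request carries its own identity record instead of riding on a shared key. A minimal sketch; the field names are illustrative:

```python
# Hypothetical sketch: every AI request carries a distinct identity
# (user, workload, or agent) so an incident can be traced to the caller.
# Field names are illustrative.
VALID_IDENTITY_TYPES = {"user", "workload", "agent"}

def make_request(identity_type, identity_id, prompt):
    """Build a model request attributable to a specific caller."""
    if identity_type not in VALID_IDENTITY_TYPES:
        raise ValueError(f"Unknown identity type: {identity_type}")
    return {
        "identity_type": identity_type,  # user, workload, or agent
        "identity_id": identity_id,      # unique per employee/service/bot
        "prompt": prompt,
    }
```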

Week 3: Add logging and basic guardrails 

At minimum, log: 

  • Who used AI 
  • Which model was used 
  • Which tool was called (if any) 
  • Whether sensitive content was detected 
  • Whether a policy decision allowed, blocked, or redacted content 

As an example, if a customer later reports that an AI assistant shared internal information, you need logs to determine whether the issue was a bad prompt, an unsafe tool call, or a data classification failure. 
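The minimum log fields above fit in one structured record per AI call. A minimal sketch; the field names are illustrative:

```python
# Hypothetical sketch: one structured audit record per AI call, covering
# the minimum fields listed above. Field names are illustrative.
import json
from datetime import datetime, timezone

def audit_log(user, model, tool, sensitive_detected, decision):
    """Serialize a single AI-usage event for the audit trail."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,                     # who used AI
        "model": model,                   # which model was used
        "tool": tool,                     # which tool was called, or None
        "sensitive": sensitive_detected,  # was sensitive content detected
        "decision": decision,             # "allow", "block", or "redact"
    }
    return json.dumps(entry)
```

Structured JSON (rather than free-text log lines) is what lets you later filter for, say, every blocked request that touched a given tool.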

Week 4: Restrict high-risk actions first 

The first thing to control is write actions, especially actions that can change state in your environment. You should block or require approval for actions like: 

  • “Create user” 
  • “Delete record” 
  • “Merge pull request” 
  • “Change production configuration” 
  • “Download customer list” 

As an example, a safe agent can draft a change. A safe system requires a human to approve the change. 

End-of-post takeaways 

CISO takeaway 

Control does not mean “ban AI.” Control means defining what safe usage looks like and enforcing it in a way that protects data, reduces risk, and still allows teams to move quickly. 

Architect takeaway 

Treat AI like a production system. Identity, policy, and audit are not optional. They are the foundation that makes model calls and tool actions safe at enterprise scale. 

Next up 

In Part 3, we will cover prompt inspection and show how prompt injection leads to real-world incidents, especially when agents have access to tools. 
