Part 1 explained discovery: you cannot secure AI if you cannot see it. In this post, we cover the next step: control.
Control means setting clear rules for AI usage and enforcing them in a way that does not break the business. This is the point where many companies get stuck. Some teams over-block and kill adoption. Other teams do nothing and accept silent risk. The goal is neither. The goal is safe adoption by default.
Most security programs already understand least privilege for humans and applications. AI needs the same logic. Least privilege simply means: the system should have only the access it needs to do its job, and nothing more. An AI assistant should not automatically have access to your production database, your internal documents, your ticketing system, your code repositories, or your customer records.
Instead, AI access should be scoped to the task at hand: the smallest set of systems, records, and fields the AI needs to do its job, and nothing more.
Imagine you give an AI assistant access to your entire CRM “because it might be useful.” A week later, someone asks the assistant, “show me all customers who churned this month,” and the assistant returns a full list including personal details. That is not a clever feature. That is an unintentional data export. Least privilege prevents this by restricting the assistant to the smallest set of fields and records needed for the task.
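The scoping described above can be sketched in a few lines. This is a minimal illustration, not a real CRM integration: the field names and record shape are assumptions, and a production system would enforce this at the data layer rather than in application code.

```python
# Hypothetical sketch: restrict an assistant's view of a CRM record
# to an approved subset of fields. Field names are illustrative.
ALLOWED_FIELDS = {"ticket_id", "product", "issue_summary"}

def scope_record(record: dict) -> dict:
    """Return only the fields the assistant is permitted to see."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "ticket_id": "T-1042",
    "product": "Gateway",
    "issue_summary": "Login loop after update",
    "email": "user@example.com",   # personal data: stripped
    "churn_date": "2024-05-01",    # not needed for the task: stripped
}
scoped = scope_record(record)
```

The assistant never receives the email address or churn data, so it cannot leak what it was never given.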
In GenAI systems, there are two different areas you need to control. These areas are related, but they are not the same, and if you only secure one of them, you still have a gap. I see two core tenets for building your control framework.
A) Controlling model access
Model access is about how users, applications, and agents send requests to AI models and receive responses back. You need to control things like which users and applications can call which models, which model providers are approved, and what data is allowed to appear in prompts.
As an example, a marketing team might be allowed to use a public model for rewriting website copy. A customer support team might be allowed to use an approved enterprise model to draft responses. A finance team might be required to use a private or internally hosted model because the prompts contain pricing, contracts, or customer account details.
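A gateway can enforce this kind of team-to-model mapping mechanically. The sketch below is a simplified illustration; the team names and model identifiers are assumptions, and a real gateway would resolve teams from authenticated identity rather than a plain string.

```python
# Hypothetical sketch: route each team to its approved model tier.
# Teams and model names are illustrative assumptions.
MODEL_POLICY = {
    "marketing": "public-model",          # low-sensitivity copywriting
    "support": "enterprise-model",        # approved enterprise deployment
    "finance": "private-hosted-model",    # prompts contain pricing/contracts
}

def approved_model(team: str) -> str:
    """Return the model a team may use; unknown teams are denied."""
    model = MODEL_POLICY.get(team)
    if model is None:
        raise PermissionError(f"No approved model for team: {team}")
    return model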
B) Controlling tool access (agent actions)
Tool access is about what the AI can do beyond answering a question. This becomes important as soon as an AI system can “take actions” instead of just generating text. You need to control things like which tools an agent can invoke, which actions are read-only versus state-changing, and which actions require a human in the loop.
It is one thing for an assistant to suggest a database query. It is another for it to run the query automatically, save the results to a file, and upload them to a chat channel. Tool access turns AI into an operator. That is why the most damaging failures happen when tool access is not governed.
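The read/write distinction above can be made explicit in code. This is a minimal sketch with made-up tool names; real frameworks attach this metadata to tool definitions, but the decision logic is the same.

```python
# Hypothetical sketch: classify agent tools as read or write, and
# deny write tools unless a human has approved the specific call.
READ_TOOLS = {"suggest_query", "search_docs"}
WRITE_TOOLS = {"run_query", "save_file", "upload_to_chat"}

def authorize_tool(tool: str, approved: bool = False) -> bool:
    """Allow reads freely; allow writes only with explicit approval."""
    if tool in READ_TOOLS:
        return True
    if tool in WRITE_TOOLS:
        return approved
    return False  # unknown tools are denied by default
```

Note the default-deny posture: a tool the policy has never seen is treated as dangerous, not as harmless.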
A policy is simply a rule that answers five basic questions:
• Who can do something?
• What are they allowed to do?
• What data can they use?
• Which model or tool are they allowed to use?
• What should happen if the rule is violated (log, block, redact, or require approval)?
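The five questions above map naturally onto a policy object. The sketch below is one possible shape, not a standard; the field names and the sample rule are illustrative assumptions.

```python
# Hypothetical sketch: one policy rule answering the five questions,
# plus a tiny evaluator. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class PolicyRule:
    who: set           # which roles the rule applies to
    action: str        # what they are allowed to do
    data_classes: set  # which data classifications they may use
    models: set        # which models/tools they may use
    on_violation: str  # "log", "block", "redact", or "approve"

RULE = PolicyRule(
    who={"support"},
    action="draft_reply",
    data_classes={"public", "internal"},
    models={"enterprise-model"},
    on_violation="block",
)

def evaluate(rule: PolicyRule, role, action, data, model) -> str:
    ok = (role in rule.who and action == rule.action
          and data in rule.data_classes and model in rule.models)
    return "allow" if ok else rule.on_violation
```

Writing rules as data rather than prose means the gateway can enforce them, and auditors can read them.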
Policies sound abstract until you write a few examples that teams can actually follow. Below are examples that work well in real enterprises.
Policy example #1 — No sensitive data to public models
Employees can use public chatbots for general questions, brainstorming, or rewriting. However, they are not allowed to paste sensitive information into public tools. Sensitive information includes customer personal data, health information, payment data, source code, confidential internal documents, and credentials.
As an example, a support rep pastes an error log into a chatbot to troubleshoot faster. The log contains a customer email address and an API token. A good policy plus enforcement would detect the token and block or redact it before it leaves the company.
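Detection of the token and email in that error log can be as simple as pattern matching at the gateway. The sketch below is intentionally minimal: the regex patterns are illustrative assumptions and a real deployment would use a broader classifier, but the redact-before-egress flow is the point.

```python
# Hypothetical sketch: redact obvious secrets and personal data
# before a prompt leaves the company. Patterns are illustrative
# examples, not a complete detection set.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_TOKEN": re.compile(r"\b(?:sk|tok)_[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text
```

Whether the right response is redact or block is itself a policy decision: redaction keeps the troubleshooting workflow alive, blocking forces the user to think.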
Policy example #2 — Only approved models for customer-facing workflows
Customer-facing AI features must use approved model providers or approved private deployments. Prompts and outputs must be logged and monitored so the company can investigate incidents and prove compliance.
As an example, a product team builds an “AI assistant” feature that answers customer questions. If that feature uses an unapproved model endpoint, you may not have visibility into what data is being sent, where it is stored, or how it is retained. An approved model list and a gateway enforce the rule automatically.
Policy example #3 — Write actions require approvals
AI can draft actions, but it should not execute high-risk actions, such as merging code or changing production state, without explicit approval.
As an example, an engineering agent is connected to GitHub. A prompt injection in a document tells the agent to “fix security quickly by removing authentication checks.” If the agent has permission to merge, it can push a dangerous change. If approvals are required for write actions, the change stops at the review step.
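The approval gate in that scenario can be expressed very simply. This is a hypothetical sketch, not a real GitHub integration: the function and field names are assumptions, and the essential property is that an empty approval list can never result in a merge.

```python
# Hypothetical sketch: an agent may draft a change, but the merge
# (a write action) stops at a human review step. Names are
# illustrative; the draft dict stands in for a real pull request.
def request_merge(change: dict, approvals: list) -> str:
    """Write actions require at least one explicit human approval."""
    if not approvals:
        return "held_for_review"  # the draft exists, nothing merges
    return "merged"

draft = {"repo": "payments", "diff": "remove auth check"}  # injected change
status = request_merge(draft, approvals=[])
```

In the prompt-injection scenario above, the malicious diff sits in review where a human can see "remove auth check" and reject it.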
If you want a simple “starter pack,” implement the controls in the playbook below first. Even a lightweight version of these controls will put you ahead of most companies.
The 30-day Control playbook (Operator version)
The fastest way to build real control is to implement it in phases. The goal is to improve safety without stopping productivity.
Week 1: Write an “AI allowed use” policy people can follow
You need something simple and human that employees can remember.
| Allowed | Not Allowed |
| --- | --- |
| Public chatbots for brainstorming, general questions, and rewriting | Pasting customer data, source code, credentials, or confidential documents into public tools |
| Approved enterprise or private models for customer-facing work | Unapproved model endpoints in customer-facing features |
| AI-drafted changes that go through human review | AI-executed write actions without approval |
Week 2: Enforce identity on AI usage
Avoid shared keys and anonymous usage. You want to know who is using AI and which system is using AI.
You should be able to distinguish which human user made a request, which application or service made it, and which agent was acting on whose behalf.
As an example, if ten teams share one API key, you cannot investigate an incident properly. When something goes wrong, you need to know which service made the request and which user triggered it.
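One key per service is enough to make that investigation possible. The sketch below is illustrative: the registry would live in your secrets or identity platform, not a dict, and the keys shown are fake.

```python
# Hypothetical sketch: issue one API key per service instead of a
# shared key, so every AI request resolves to a caller. Keys and
# service names are fake, for illustration only.
KEY_REGISTRY = {
    "key-billing-01": {"service": "billing", "owner_team": "finance"},
    "key-support-01": {"service": "helpdesk", "owner_team": "support"},
}

def identify_caller(api_key: str) -> dict:
    """Resolve a request's key to a service and an owning team."""
    caller = KEY_REGISTRY.get(api_key)
    if caller is None:
        raise PermissionError("Unknown key: reject, do not guess")
    return caller
```

With this in place, an incident question like "which service sent the prompt?" has a deterministic answer instead of ten suspects.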
Week 3: Add logging and basic guardrails
At minimum, log the identity behind each request, the prompt, any tool calls, and the response.
As an example, if a customer later reports that an AI assistant shared internal information, you need logs to determine whether the issue was a bad prompt, an unsafe tool call, or a data classification failure.
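A structured log entry makes that kind of investigation tractable. The field set below is a minimal assumption of what "at minimum" looks like, not a standard schema; in practice you would ship this to your SIEM or log pipeline.

```python
# Hypothetical sketch: a minimal structured log entry per AI request,
# capturing who asked, what model ran, what tools fired, and what
# came back. Field names are illustrative.
import json
import time

def log_entry(user, model, prompt, tool_calls, response) -> str:
    entry = {
        "ts": time.time(),
        "user": user,
        "model": model,
        "prompt": prompt,
        "tool_calls": tool_calls,  # empty list for pure text generation
        "response": response,
    }
    return json.dumps(entry)
```

When the customer complaint arrives weeks later, these entries let you replay the exchange and see whether the failure was the prompt, a tool call, or a data classification gap.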
Week 4: Restrict high-risk actions first
The first thing to control is write actions, especially actions that can change state in your environment. You should block or require approval for actions like merging code, running queries against production systems, modifying records or configuration, and saving or uploading files outside the company.
As an example, a safe agent can draft a change. A safe system requires a human to approve the change.
CISO takeaway
Control does not mean “ban AI.” Control means defining what safe usage looks like and enforcing it in a way that protects data, reduces risk, and still allows teams to move quickly.
Architect takeaway
Treat AI like a production system. Identity, policy, and audit are not optional. They are the foundation that makes model calls and tool actions safe at enterprise scale.
Next up
In Part 3, we will cover prompt inspection and show how prompt injection leads to real-world incidents, especially when agents have access to tools.
Subscribe to the Versa Blog