By now you’ve seen the building blocks. This post ties those pieces into one system that a real enterprise can run.
AI security becomes easier to understand when you separate “thinking” from “doing.”
That means you need enforcement in both places. If you only secure model calls but ignore tool calls, your AI can still take unsafe actions. If you only secure tools but ignore model access, you can still leak sensitive data to an unapproved model provider.
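The two enforcement points can be sketched as two independent checks: one on model calls (which providers are approved) and one on tool calls (which actions the agent may take). This is a minimal sketch; the model IDs and tool names are illustrative assumptions, and real deployments would enforce these checks in a proxy or gateway rather than in application code.

```python
# Assumption: example allow-lists; in practice these come from central policy.
APPROVED_MODELS = {"internal-llm-v2"}
APPROVED_TOOLS = {"search_kb", "create_ticket"}

def check_model_call(model_id: str) -> bool:
    """Model-side control: block calls to unapproved model providers."""
    return model_id in APPROVED_MODELS

def check_tool_call(tool_name: str) -> bool:
    """Tool-side control: block actions the agent is not authorized to take."""
    return tool_name in APPROVED_TOOLS
```

Securing only one of these checkpoints leaves the other path open, which is why both exist.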
A practical reference architecture works like this:
As an example, a user asks an AI assistant to “summarize this customer escalation.” The assistant retrieves Slack messages and internal notes. Those tool outputs may contain sensitive details. Before the assistant uses them to generate a response, the system inspects them and redacts anything that should not leave the company.
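The redaction step above can be illustrated with a minimal sketch. The regex patterns here are simplistic assumptions for demonstration; production systems use DLP classifiers and context-aware detection, not regexes alone.

```python
import re

# Assumption: toy patterns for emails and 16-digit card numbers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive matches in tool output before the model sees them."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text
```

Running the retrieved Slack messages and notes through a filter like this means the assistant can still summarize the escalation without exposing the underlying identifiers.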
Here is a common incident chain:
1. A user asks the assistant to summarize an email or document.
2. The document contains hidden malicious instructions.
3. The model attempts to follow those malicious instructions.
4. The agent tries to call tools.
5. Sensitive data is exposed or unsafe actions are taken.
Instead, the secure architecture breaks this chain at each step:
- Prompts are inspected for suspicious instruction patterns.
- Tool access is restricted through the MCP Gateway unless explicitly authorized.
- Sensitive data is redacted before any output leaves the system.
- Every step is logged so security teams can quickly investigate and respond.
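The first two controls can be sketched as a pair of gateway checks. The injection phrase list and tool policy below are illustrative assumptions, not a complete detection ruleset; real inspection engines combine pattern matching with model-based classifiers.

```python
# Assumption: a few well-known injection phrases, for demonstration only.
INJECTION_PATTERNS = [
    "ignore previous instructions",
    "disregard your system prompt",
    "reveal your instructions",
]

# Assumption: per-task tool policy; only this tool is explicitly authorized.
AUTHORIZED_TOOLS = {"summarize_doc"}

def inspect_prompt(text: str) -> bool:
    """Return True if the prompt matches a suspicious instruction pattern."""
    lowered = text.lower()
    return any(p in lowered for p in INJECTION_PATTERNS)

def authorize_tool(tool_name: str) -> bool:
    """Gateway check: deny tool calls unless explicitly authorized."""
    return tool_name in AUTHORIZED_TOOLS
```

Even if a malicious instruction slips past inspection, the tool-authorization check still blocks the unsafe action, which is the point of layering these controls.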
Phase 1 (first 30 days):
Phase 2 (days 31–60): Control
Phase 3 (days 61–90): Runtime enforcement
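Runtime enforcement only pays off if every decision is auditable. A minimal sketch of a structured audit record is below; the field names are illustrative assumptions and should be aligned with your SIEM’s schema.

```python
import json
import datetime

def audit_event(actor: str, action: str, decision: str, detail: str) -> str:
    """Emit one JSON audit record per enforcement decision."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "decision": decision,
        "detail": detail,
    }
    return json.dumps(record)
```

Structured records like this let security teams reconstruct the full chain of model calls, tool calls, and redactions when they investigate an incident.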
A mature AI security program provides clear visibility into which models are used and by whom, prevents sensitive data from being sent to unapproved models, inspects prompts and responses for injection attacks and data leakage, governs what tools agents can access and the actions they can take, and ensures all significant AI activity is logged and auditable.
GenAI security becomes manageable when you build it like a real system:
Secure the brain (models). Secure the hands (tools). Secure the memory (data).
In Part 7, we’ll show the full end-to-end traffic flow diagram that makes “north–south” and “east–west” enforcement intuitive.