Beyond Automation: 3 Real-World Use Cases Where MCP Servers Redefine SASE

By Gerardo Melesio
Systems Engineer
June 11, 2025

What is MCP?

“Just as USB provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.” (Model Context Protocol documentation)

MCP operates by allowing AI tools – in this context referred to as MCP clients – to access data from different applications, as shown in Figure 1. Within this framework, the MCP server acts as an exchange layer between your LLM and other applications.  

Figure 1 – Illustration of a client communicating with multiple servers over the Model Context Protocol (MCP) to access local and remote data sources.

Building on the USB analogy, if MCP is the protocol that enables communication, the MCP server is like a cable that connects two services: an application and an LLM.  

Through an MCP server, an LLM can access an app to perform basic troubleshooting, generate actions, validate configurations, and make decisions intelligently and automatically. In the context of Versa technology, it acts as a broker between your AI and Versa Director, our virtualization and service creation platform, allowing the model to dynamically query elements on your network without gaining direct access to your infrastructure. 

Compared to other integration methods, MCP’s open standard lets you connect a third-party application to multiple LLMs – like ChatGPT, Claude, or Ollama – simply by updating configuration settings instead of rewriting code. At the same time, the MCP server enforces application boundaries: the AI agent can only invoke the specific endpoints the server exposes. This keeps all operations within pre-approved boundaries and prevents any direct access to your infrastructure. 
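To make the "configuration instead of code" point concrete, here is a sketch of the kind of client-side entry that registers an MCP server with a desktop LLM client. The `mcpServers` layout follows the format Claude Desktop uses; the server label, launch command, and path below are placeholders, not Versa's actual values.

```python
import json

# Hypothetical client configuration entry for an MCP server.
# The "mcpServers" key follows Claude Desktop's config format; the
# server label, command, and script path are placeholders.
config = {
    "mcpServers": {
        "versa-director": {                        # arbitrary server label
            "command": "python",                   # how the client launches the server
            "args": ["/path/to/versa_mcp_server.py"],
        }
    }
}

# Pointing a different MCP-capable client at the same server means
# writing this same entry into that client's config file - no code changes.
print(json.dumps(config, indent=2))
```

Swapping the LLM on the other side of the cable leaves this entry, and the server itself, untouched.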

Sample architecture and deployment steps 

The diagram below illustrates our reference setup. In this architecture, the MCP server acts as a proxy connecting our LLM and the Versa Director: 

  • LLM → MCP Server: The model consumes human-language instructions, then uses MCP to translate them into specific actions. 
  • MCP Server → Versa Director: The MCP server performs those actions using predefined API calls – either read or (where permitted) write operations. 
  • Pre-approved actions: The Versa MCP Server only exposes a whitelisted set of endpoints, so the AI can’t access unintended resources. 
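The "pre-approved actions" idea boils down to a dispatch table: the server maps a small set of named tools to specific API calls, and anything outside that table is refused. The sketch below is illustrative only – it is not the Versa MCP Server's actual code, and the endpoint paths are hypothetical.

```python
# Illustrative whitelisting sketch: the server only dispatches tool
# names it explicitly knows about. Endpoint paths are hypothetical,
# not real Versa Director API routes.
ALLOWED_TOOLS = {
    "list_appliances": ("GET", "/vnms/appliance/appliance"),
    "get_tenants":     ("GET", "/vnms/organization/orgs"),
    # No write operations are exposed unless explicitly added here.
}

def dispatch(tool_name: str) -> tuple[str, str]:
    """Resolve an AI-requested tool to a pre-approved API call."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool {tool_name!r} is not whitelisted")
    return ALLOWED_TOOLS[tool_name]

print(dispatch("list_appliances"))   # a permitted, read-only call
```

Anything the model improvises that is not in the table – a delete, an arbitrary URL – never reaches the infrastructure.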
Figure 2 – Reference architecture for a Versa MCP Server deployment.

Some disclaimers 

Before we explore the use cases, please note that this article is not an endorsement of any AI model or specific LLM technology. We’re using Claude for Desktop simply because it is the one we happen to be working with at this very moment. You should evaluate the pros and cons of any LLM solution, including privacy, security, and cost considerations. Furthermore, you must ensure its use is authorized by your organization.  
 
Also, none of the examples below work out-of-the-box – each demo combines multiple prompt-engineering and customization techniques. Remember that the MCP server itself contains no built-in logic; it merely enables LLMs to interact securely with our Versa infrastructure. 

Use Case No. 1: Automate tedious tasks in a much simpler way than APIs 

Versa offers two orchestration and monitoring tools to manage security and networking operations: Versa Director and Versa Concerto. But even with such advanced tools, every engineer faces repetitive and tedious tasks. To speed things up, engineers who work with Versa technologies use our APIs to build scripts that perform those tasks automatically. 

With an MCP server, IT professionals have a new tool to automate tasks that would take a lot of effort for a human to accomplish. Instead of building a Python script to make API calls to Versa Director and then scrape the data, an engineer can just pose a question to their LLM of choice. The LLM will get the data through the MCP server and then process it accordingly. 

Imagine you are a Managed Service Provider that has customers deployed in different SASE gateways. Your boss asks you to present them with a list of customers per region. Instead of spending time retrieving the data from your SASE portal, you can simply ask your LLM the question. The AI will then know exactly which API calls are required to gather the information. An additional benefit of using this technique is that you can ask the AI model to present the output in any way you want – which means you get to spend more time deriving valuable insights from the data, instead of spending it gathering the information. 
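For comparison, this is roughly the post-processing a hand-written script would need once the API data came back: grouping tenants by gateway region. The data shape below is invented for illustration; a real Director response will look different.

```python
from collections import defaultdict

# Invented sample of what a tenant listing might look like after an
# API call to the SASE portal; field names are illustrative only.
tenants = [
    {"name": "AcmeCorp", "region": "EMEA"},
    {"name": "Globex",   "region": "AMER"},
    {"name": "Initech",  "region": "EMEA"},
]

# Group customer names by the region of their SASE gateway.
by_region = defaultdict(list)
for t in tenants:
    by_region[t["region"]].append(t["name"])

for region, names in sorted(by_region.items()):
    print(f"{region}: {', '.join(names)}")
# AMER: Globex
# EMEA: AcmeCorp, Initech
```

With the MCP server in place, both the retrieval and this reshaping are handled by the model in response to a plain-language question.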

Below is an example of how this is achieved in Claude Desktop.  

Figure 3 – Example prompt with Claude Desktop, showcasing its applicability for use case no. 1.

Use Case No. 2: Act as a troubleshooting assistant to NOC or SOC engineers 

Support roles are the backbone of any technology-driven organization, keeping systems running smoothly and outages at bay. Having worked in support for many years, I’ve found that the toughest challenge is knowledge transfer. Engineers must quickly learn new platforms, architectures, and the specifics of each customer environment. Even with thorough deployment guides, new hires still need time to absorb it all. Working in a network operations center (NOC), I’ve often wished I could clone a colleague whose deep understanding of a project or technology feels irreplaceable. 

The good news is that, with AI, we can capture an expert’s knowledge and use it in real time to debug a network. We can achieve this by leveraging a technique called prompt engineering.  Prompt engineering works by providing the LLM with context, instructions, or examples of what to do in certain scenarios. A senior engineer might create a troubleshooting guide that lists common commands and outlines the network’s topology, so the AI model has the necessary context. Prompts can also include guardrails or special notes. For example, “avoid suggesting a topology change” or “remember the network uses a full-mesh topology.” 
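Mechanically, this amounts to prepending the expert's guide and its guardrails to the user's question before it reaches the model. A schematic sketch – the guide text and guardrail wording below are placeholders, not an actual Versa runbook:

```python
# Schematic prompt-engineering sketch: combine an expert-written guide,
# explicit guardrails, and the engineer's question into one prompt.
# The guide content and guardrails are placeholder examples.
TROUBLESHOOTING_GUIDE = """\
Topology: full-mesh SD-WAN between all branch appliances.
Common checks: tunnel status, BGP sessions, interface errors.
"""

GUARDRAILS = [
    "Avoid suggesting a topology change.",
    "Remember the network uses a full-mesh topology.",
]

def build_prompt(question: str) -> str:
    """Assemble context + guardrails + question for the LLM."""
    rules = "\n".join(f"- {g}" for g in GUARDRAILS)
    return f"{TROUBLESHOOTING_GUIDE}\nGuardrails:\n{rules}\n\nQuestion: {question}"

print(build_prompt("Branch-3 lost connectivity to the hub. What should I check?"))
```

In practice you rarely write this glue yourself – attaching the document to the chat, as shown next, achieves the same effect.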
 
In order to leverage this knowledge, a user can simply submit this document at the beginning of a new chat. The AI model will use it to suggest possible troubleshooting steps and act as an assistant to the engineer performing the task, applying in real time all the knowledge from the existing documentation.  

Figure 4 – Demonstration of Claude Desktop acting as a troubleshooting aid. Claude performs a series of troubleshooting steps using the guidelines passed in the versa-director-commands.md file attached at the beginning of the chat.
Figure 5 – Demonstration of Claude Desktop acting as a troubleshooting aid. After the troubleshooting process, Claude provides a possible Root Cause Analysis (RCA) and some recommended actions. Remember to always verify AI results – they can make mistakes!

To unlock even greater value, you can develop a bespoke model by creating a custom GPT. Such a model would be fine-tuned to your own runbooks, operational procedures, and historical incidents. This dedicated assistant will surface context-aware recommendations tailored to your network’s architecture and team’s workflows. 

Use Case No. 3: Perform security policy audits and checks 

Our final example focuses on auditing security rules. As networks expand in a Zero-Trust environment, it’s critical to continuously audit rules for shadowing and gaps. 

For this use case, prompt engineering also brings a lot of value. We can use it to give the model our audit requirements to identify rule conflicts, spot unused entries, and highlight overly permissive access. 
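To make "shadowing" concrete: a broad earlier rule can make a later, more specific rule unreachable, which is exactly the kind of conflict an audit prompt asks the model to find. A toy check follows – the rule format is invented for illustration and is not Versa's policy schema.

```python
# Toy shadowing check: a later rule is shadowed if an earlier rule
# already matches everything it matches. The rule format is invented
# for illustration and is not Versa's actual policy schema.
rules = [
    {"name": "allow-any-web", "src": "any",    "dst": "any",    "port": 443},
    {"name": "allow-hq-web",  "src": "hq-lan", "dst": "dc-web", "port": 443},
]

def covers(field_a: str, field_b: str) -> bool:
    """'any' covers everything; otherwise fields must match exactly."""
    return field_a == "any" or field_a == field_b

def shadowed(earlier: dict, later: dict) -> bool:
    return (covers(earlier["src"], later["src"])
            and covers(earlier["dst"], later["dst"])
            and earlier["port"] == later["port"])

for i, later in enumerate(rules):
    for earlier in rules[:i]:
        if shadowed(earlier, later):
            print(f"{later['name']} is shadowed by {earlier['name']}")
```

Given precise audit criteria like these in the prompt, the model can apply them across an entire rulebase instead of the two-rule toy above.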

We can also create custom methods for the Versa MCP Server. Although our Versa engineering team has provided ready-to-go functions for core operations, customization is often necessary to cover specific use cases. Since Versa’s MCP is open source, you can add your own methods to fill any gaps. Just check your organization’s AI usage policies and security guidelines before doing so and include detailed comments in new methods so the AI knows when and how to invoke them. 
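In the style shown in Figure 6, the comments that guide the model are typically the function's docstring. Here is a hedged sketch of what a custom method might look like; the decorator and registry below stand in for the real MCP server's tool registration, and the function name, data, and behavior are illustrative.

```python
# Sketch of registering a custom method with a descriptive docstring.
# The registry/decorator stands in for the real MCP server's tool
# registration; function name and placeholder data are illustrative.
TOOLS: dict = {}

def tool(fn):
    """Minimal stand-in for an MCP tool-registration decorator."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def audit_unused_rules(days: int = 30) -> list:
    """Return security rules with zero hits in the last `days` days.

    Use this when the user asks which firewall rules are unused or are
    candidates for cleanup. Read-only: it never modifies policy.
    """
    # Placeholder data; a real method would query Director here.
    hit_counts = {"allow-legacy-ftp": 0, "allow-web": 15234}
    return [name for name, hits in hit_counts.items() if hits == 0]

print(TOOLS["audit_unused_rules"]())   # -> ['allow-legacy-ftp']
```

The docstring does double duty: it documents the method for humans and tells the LLM when and how to invoke it.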

Figure 6 – Example of how to create a new MCP method. The comments you add here tell the LLM when and how to use this method.

Once configured, the AI model can produce a thorough, professional-quality audit report in seconds, saving you hours while strengthening your security posture.

Figure 7 – Claude Desktop working as a security auditor, as described in use case no. 3. The left side shows the “thought” process that Claude went through. On the right, Claude created an artifact – a sample document that you can quickly export and share.

Conclusions

In this article, we’ve shown how Versa’s MCP server empowers the modern network and highlighted example use cases that simplify your day-to-day operations. These are just a handful of the possible applications, and we can’t wait to see what you’ll build next.  
