Data centers are at the heart of any enterprise, serving as centralized points to host the servers, applications, workloads, and compute that power everything from transactional systems to security and analytics. Today, as AI becomes embedded in every workflow, from customer service to software development, these data centers have also become the home of AI tools and data.
AI use cases are growing rapidly: existing applications are being retrofitted with AI, and new AI-based applications emerge almost daily, each with great promise.
Enterprises in regulated industries, or those holding critical intellectual property or sensitive information, are unwilling to place their sensitive data in third-party clouds or AI applications hosted elsewhere. This data may be business data, technical data, or both.
Shadow IT discovery and management, DLP solutions, and generative AI firewalls are also employed to discover and control how enterprise users access third-party AI tools on the Internet and how they use sanctioned ones.
Beyond sovereignty, industry-specific mandates in finance, healthcare, and government may require that sensitive data be kept on-premises to maintain auditability and compliance. In many cases, this translates to running AI locally in the enterprise’s own data center, protected with tight access controls and a comprehensive cybersecurity and data protection stack.
With data sovereignty in focus, AI data centers must process sensitive datasets locally, in compliance with security and sovereignty regulations, and training may happen in the same data center. Once trained, AI models can be localized and deployed regionally for on-premises inference, closer to the user.
An enterprise’s data, as well as its AI models, whether developed in-house or acquired from third parties, resides in AI data centers. One may think of the AI data center as the brain and central nervous system of the enterprise IT infrastructure. To prevent accidental leakage, enterprises employ zero-trust access control and comprehensive security stacks to protect AI apps and data within their own data centers.
As AI gains popularity, the need for the compute, networking, storage, and AI infrastructure required to train large-scale AI models and to serve inference continues to accelerate. The rise of generative AI, chatbots, vision systems, AI-based applications, growing use of big data for AI, and autonomous operations requires purpose-built AI data centers. Enterprises must understand how to address the connectivity and security needs of AI data centers, given the ever-increasing importance of their data centers, so they can architect and implement solutions accordingly.
Given these new requirements, today’s AI data centers need performance and scale while preserving sovereignty, as AI’s importance to the business accelerates. Industry reports project that the global AI inference market will grow to US $254.98 billion by 2030, a CAGR of roughly 19.2%.
Within the AI data center, AI requires accelerated compute and ultra-fast connectivity that supports high-performance Remote Direct Memory Access (RDMA) over a non-blocking, non-oversubscribed architecture delivering consistently low latency for both training setups and high-performance inferencing. These needs arise because networked GPUs access each other’s distributed memory using RDMA, whether to run very large matrix computations or to retrieve data spread across the memory banks of many GPUs. If any packet is dropped due to oversubscription, or arrives with jitter or unexpected delay, model training can stall or suffer, and user or application experience can degrade.
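As a back-of-the-envelope illustration of why oversubscription matters (a generic sketch, not anything Versa-specific), the Python snippet below computes the oversubscription ratio of a hypothetical leaf switch; the port counts and speeds are assumptions chosen for the example.

```python
# Back-of-the-envelope check: is a leaf switch non-oversubscribed?
# A fabric layer is non-oversubscribed when uplink capacity >= downlink capacity.

def oversubscription_ratio(downlink_ports: int, downlink_gbps: float,
                           uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of server-facing bandwidth to spine-facing bandwidth."""
    downlink_bw = downlink_ports * downlink_gbps
    uplink_bw = uplink_ports * uplink_gbps
    return downlink_bw / uplink_bw

# Hypothetical leaf: 32 x 400G ports toward GPU servers, 8 x 800G uplinks.
ratio = oversubscription_ratio(32, 400, 8, 800)
print(f"oversubscription ratio: {ratio:.1f}:1")  # 2.0:1 -> can block under load
# An RDMA-heavy AI fabric targets 1:1, so this leaf would need more or faster
# uplinks (or fewer server-facing ports) to avoid drops that stall training.
```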
Traditional enterprise data centers, while fast, did not need to tackle such demanding problems: each compute or workload instance was largely self-contained and talked to others using APIs or other traditional means across the data center network. An underlying network with reasonable oversubscription could handle the demand, and if some packets were dropped or arrived a bit late, the application layers would usually cope, since applications were built around that assumption. Building the kind of high-bandwidth, low-latency, non-blocking architecture AI data centers require was never a goal.
New requirements are also emerging around access to AI data centers. As the role and criticality of data centers increase further with AI, zero-trust access to AI models, apps, and data, data leak protection, and AI-specific compliance and governance all gain importance.
It is time to understand how these needs can be addressed in this new world of on-prem AI for training and inferencing, whether in centralized or distributed locations. It is worth noting that only about one-third of data center operators report running any AI training or inference workloads, which indicates that many legacy data centers are not yet set up for AI-scale operations.
Flexible, Scalable Architecture for AI Inference Workloads
Scalability in AI training and inferencing data centers depends on high-performance, converged networking that seamlessly provides connectivity for compute and storage. For intra-data-center traffic, Versa Ethernet switches support the Data Center Bridging (DCB) standards, so IT teams can build an Ethernet-based data center that meets the storage and data-movement needs inside the AI data center.
Furthermore, Versa Ethernet switches now support Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCEv2). Thanks to this key capability, IT teams can build AI data centers with low latency, high throughput, and efficient GPU interconnectivity for distributed training and inference at scale. Versa Networks can now help build enterprise AI data centers in which AI clusters sit adjacent to classical enterprise applications and data repositories, with each part of the data center provisioned according to its needs. This approach helps enterprises leverage their existing investment while expanding it for the AI age.
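To make this concrete, here is a minimal Python sketch of the classification logic a lossless fabric applies, assuming RoCEv2 traffic is identified by its well-known UDP destination port (4791); the queue and DSCP values are illustrative assumptions, not Versa configuration.

```python
# Illustrative classifier for a lossless AI fabric (not Versa-specific code).
# RoCEv2 encapsulates RDMA in UDP with well-known destination port 4791;
# DCB features (PFC/ECN) are then applied to the queue carrying it.

ROCEV2_UDP_PORT = 4791

# Hypothetical queue plan: the queue and DSCP values are example assumptions.
LOSSLESS_QUEUE = {"queue": 3, "dscp": 26, "pfc": True, "ecn": True}
BEST_EFFORT_QUEUE = {"queue": 0, "dscp": 0, "pfc": False, "ecn": False}

def classify(flow: dict) -> dict:
    """Map a flow (proto, dst_port) to a fabric queue."""
    if flow.get("proto") == "udp" and flow.get("dst_port") == ROCEV2_UDP_PORT:
        return LOSSLESS_QUEUE  # RDMA traffic: must not be dropped
    return BEST_EFFORT_QUEUE   # everything else tolerates loss/retransmit

print(classify({"proto": "udp", "dst_port": 4791}))  # lossless queue
print(classify({"proto": "tcp", "dst_port": 443}))   # best-effort queue
```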
Software-Defined Data Center Architecture for Agility
Versa delivers a software-defined datacenter architecture with application-aware, SLA-driven intelligence for high-performance, secure connectivity. At the datacenter edge, Versa OS (VOS) unifies L2–L7 services including routing, segmentation, security, and traffic engineering.
While connecting AI racks over a RoCEv2-based non-blocking Ethernet fabric, the Versa solution can connect AI racks or compute systems to the rest of the enterprise data center using standards-based EVPN/VXLAN overlays with built-in on-ramp/off-ramp functionality. This overlay connectivity can start from Versa’s smart NICs (SD-NICs) placed within the AI compute systems themselves, or from on-ramp/off-ramp gateways connected to the servers’ standard Ethernet NICs. This hybrid approach allows enterprise applications and workloads residing in the same data center to make use of AI tools and apps located in a separate part of that data center. One may think of the boundary between the enterprise racks and the AI racks as an AI on-ramp/off-ramp edge. This AI edge may sit on a dedicated Ethernet switch or on Versa SD-NICs running VOS natively inside the AI compute platforms. If the AI edge is on the Ethernet switch, one side communicates with the classical part of the data center using VXLAN-based overlays while the other uses RoCEv2-based Ethernet to communicate with the AI clusters.
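For illustration, the short Python sketch below builds the 8-byte VXLAN header defined in RFC 7348, whose 24-bit VNI identifies the overlay segment; the VNI value is a made-up example, and this is not Versa code.

```python
# Illustrative VXLAN encapsulation (RFC 7348), not Versa-specific code.
# VXLAN carries an L2 frame inside UDP (well-known dst port 4789) behind an
# 8-byte header whose 24-bit VNI identifies the overlay segment.
import struct

VXLAN_UDP_PORT = 4789

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given 24-bit VNI."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    flags = 0x08000000           # I-flag set: the VNI field is valid
    vni_field = vni << 8         # VNI occupies the upper 24 bits of word 2
    return struct.pack("!II", flags, vni_field)

# Hypothetical VNI for an AI on-ramp/off-ramp segment.
header = vxlan_header(vni=10042)
print(header.hex())  # '0800000000273a00'
```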
Access to AI racks within the data center can be controlled by the inline micro-segmentation and ZTNA capabilities of the Versa solution at the AI on-ramp/off-ramp edge.
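A minimal sketch of what identity-aware, default-deny segmentation looks like in principle follows; every role, segment, and rule below is a hypothetical example, not Versa policy syntax.

```python
# Minimal sketch of identity-aware micro-segmentation at the AI edge.
# All names and rules are illustrative assumptions, not Versa policy syntax.
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str       # e.g. "ml-engineer", "finance-app"
    device_posture: str  # e.g. "compliant", "unknown"
    dst_segment: str     # e.g. "gpu-nodes", "model-store"
    dst_port: int

# Default-deny: only explicitly allowed (role, segment, port) tuples pass.
ALLOW_RULES = {
    ("ml-engineer", "gpu-nodes", 22),       # tuning jobs over SSH
    ("ml-engineer", "model-store", 443),    # model artifact access
    ("finance-app", "inference-api", 443),  # inference only, no GPU access
}

def authorize(req: Request) -> bool:
    if req.device_posture != "compliant":   # posture check comes first
        return False
    return (req.user_role, req.dst_segment, req.dst_port) in ALLOW_RULES

print(authorize(Request("ml-engineer", "compliant", "gpu-nodes", 22)))  # True
print(authorize(Request("finance-app", "compliant", "gpu-nodes", 22)))  # False: lateral movement blocked
```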
Versa views the edges of the network as critical locations for policy enforcement; these edges include WAN edges, LAN edges, cloud edges, and now AI on-ramp/off-ramp edges. The Versa DC solution provides a rich set of policy-based access controls and comprehensive security at each of these edges, including consistent policy across multi-vendor fabrics.
This edge-centric, elastic model enables IT teams to scale effortlessly on either side, whether it is AI racks or enterprise application racks, by adding links and ports. Meanwhile, the logical gateway functionality, whether on VOS-based switches or in SD-NICs, provides consistent functionality.
Open, non-proprietary approaches to solving customer needs
The entire Versa experience is supported by best-of-breed, industry-standard hardware, protocols, and encapsulations, with automated operations through APIs and workflow templates. Versa provides democratized platforms built on open reference designs, eliminating the need for the proprietary chassis, cables, and software constructs that ultimately constrain data center and IT operators.
Today, many legacy data center solutions are vendor-proprietary, relying on rigid chassis-based designs or proprietary Ethernet stacks and encapsulations that create deep vendor lock-in; the same pattern is now clearly visible in AI data centers. Such approaches can also create islands of single fault domains that limit scalability, agility, resiliency, and security, and that are very hard to troubleshoot and operate in real life.
In contrast, Versa’s software-defined, standards-based, horizontally scalable architecture replaces costly forklift upgrades with open, programmable, and resilient infrastructure designed for modern enterprise and AI inference datacenters.
Figure 1: Chassis-Based vs. Hardware-Agnostic Platform for Datacenter Appliances
| Category | Proprietary Platform | Software-Defined Platform |
|---|---|---|
| Architecture | Proprietary, monolithic chassis or stack with fixed hardware modules | Modular, software-defined design that allows selection from a range of stackable appliances |
| Scalability | Vertical scaling only, by adding modules/blades within chassis or stack limits | Horizontal scaling, with the ability to add more nodes on demand |
| Performance Optimization | Optimized for dedicated ASICs but limited adaptability to AI workloads | Distributed, software-based building blocks allow performance and scale to increase linearly and predictably |
| Vendor Lock-in | High: proprietary backplane, OS, and management tools | Minimal: open architecture that supports multiple vendors and hardware options |
Versa ensures high-performance connectivity from branches and remote users to data centers
Versa delivers high-performance, path-intelligent WAN connectivity between enterprise branches and data centers via its Secure SD-WAN. It supports application-aware, SLA-driven routing to dynamically select the optimal path across broadband, MPLS, or 5G links, ensuring consistent performance for AI workloads, SaaS, and critical business applications. It also provides WAN optimization and Quality of Experience (QoE) with hierarchical QoS to preserve a superior user experience even when WAN links degrade.
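As a simplified illustration of SLA-driven path selection (a sketch of the general idea, not Versa’s actual algorithm), the snippet below picks the lowest-latency link that meets an application’s thresholds; all measurements and threshold values are assumed.

```python
# Minimal sketch of SLA-driven path selection (illustrative only).
# Each WAN link is measured continuously; the app's SLA picks the best
# compliant path, falling back gracefully when all links are degraded.

PATHS = {  # hypothetical live measurements per link
    "broadband": {"latency_ms": 38, "jitter_ms": 6, "loss_pct": 0.4},
    "mpls":      {"latency_ms": 22, "jitter_ms": 2, "loss_pct": 0.0},
    "5g":        {"latency_ms": 55, "jitter_ms": 12, "loss_pct": 1.1},
}

# Example SLA for an interactive AI application (thresholds are assumptions).
AI_APP_SLA = {"latency_ms": 50, "jitter_ms": 10, "loss_pct": 0.5}

def select_path(paths: dict, sla: dict) -> str:
    """Pick the lowest-latency path that meets every SLA threshold."""
    compliant = [name for name, m in paths.items()
                 if all(m[k] <= sla[k] for k in sla)]
    if not compliant:  # all degraded: fall back to the least-bad latency
        return min(paths, key=lambda n: paths[n]["latency_ms"])
    return min(compliant, key=lambda n: paths[n]["latency_ms"])

print(select_path(PATHS, AI_APP_SLA))  # "mpls"
```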
For remote and hybrid users, the Versa SASE extends the same enterprise-grade connectivity and protection directly to the endpoint. Users connect securely to data centers through Versa Secure Service Edge (SSE) PoPs, gaining zero-trust access, inline threat prevention, and AI-driven traffic steering for optimal performance. Whether users are working from home, at a branch, or on the move, Versa ensures low-latency access to data center resources with unified policies, continuous posture checks, and consistent user experience.
Extending Zero Trust to the datacenter
Versa brings zero-trust, least-privilege access to data centers, including AI data centers, whether power users are connecting to train or tune AI models or everyday enterprise users are connecting to consume AI inferencing.
Versa ZTNA solutions come with a rich set of inline security capabilities, such as NGFW, IPS/IDS, threat prevention, malware detection, and DNS security, to extend Zero Trust to AI models, including LLMs, NLP models, and other generative and deep-learning models. With inline micro-segmentation, IT leaders can isolate GPU nodes, storage tiers, orchestrators, and services, preventing lateral movement and providing fine-grained visibility into which internal and external sources are accessing which resources.
Protecting data across data centers with Advanced DLP
A robust data leak protection (DLP) posture for modern data centers must protect data in motion and data at rest equally. Versa achieves this by combining deep content inspection with contextual analysis for each user, device, and application. Traffic moving across datacenters, SD-WAN and cloud is inspected inline, while cloud apps and storage are scanned for sensitive data exposure. The capability is available as a cloud-delivered service or as an on-prem deployment and can run standalone or alongside other Versa security controls to unify enforcement. Most importantly, Versa utilizes a single policy engine, broad protocol coverage, and support for all file formats to extend rich DLP criteria and actions, allowing IT leaders to avoid the complexity and gaps associated with configuring DLP across multiple, incompatible consoles.
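To illustrate the kind of deep content inspection involved (a generic sketch, not Versa’s DLP engine), the snippet below detects candidate credit-card numbers in a traffic payload with a pattern match plus a Luhn checksum, then returns a block/allow action.

```python
# Minimal sketch of inline content inspection for DLP (illustrative only;
# the pattern set and actions are assumptions, not Versa's engine).
import re

# Candidate card numbers: 13-19 digits with optional space/hyphen separators.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: filters random digit runs from real card numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def inspect(payload: str) -> str:
    """Return an action for a traffic payload: 'block' or 'allow'."""
    for match in CARD_RE.finditer(payload):
        if luhn_valid(match.group()):
            return "block"  # sensitive data in motion: enforce policy inline
    return "allow"

print(inspect("order id 1234567890123"))              # allow: fails Luhn
print(inspect("card 4111 1111 1111 1111 exp 12/27"))  # block: valid test PAN
```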
Securing datacenters and branch networks with GenAI firewalls
Protect AI workloads in the data center, and branch users accessing AI applications, with a GenAI firewall. The Versa GenAI firewall discovers and classifies usage, separating sanctioned, tolerated, and unsanctioned tools. It delivers shadow-GenAI visibility, monitors prompts and responses, and routes traffic to approved applications while blocking or sandboxing unknown apps. It also provides built-in workflows for approved exceptions, applies data protection to prevent sensitive uploads, and delivers centralized management, monitoring, and reporting so IT leaders can prove how GenAI is used, by whom, and with what guardrails.
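In principle, this classification and enforcement can be pictured as a simple lookup, as in the hedged sketch below; the catalog, categories, and actions are invented for illustration and are not Versa’s implementation.

```python
# Minimal sketch of GenAI app classification and enforcement (illustrative;
# the catalog, categories, and actions are assumptions, not Versa's).

APP_CATALOG = {  # hypothetical per-enterprise classification
    "approved-llm.example.com": "sanctioned",
    "chat.popular-ai.example":  "tolerated",
}

ACTIONS = {
    "sanctioned":   "allow+inspect_prompts",  # monitor prompts/responses
    "tolerated":    "allow+dlp",              # allow, but stop sensitive uploads
    "unsanctioned": "block_or_sandbox",       # unknown apps: block or sandbox
}

def enforce(domain: str) -> str:
    """Map a destination domain to a GenAI policy action (default-deny)."""
    category = APP_CATALOG.get(domain, "unsanctioned")
    return ACTIONS[category]

print(enforce("approved-llm.example.com"))  # allow+inspect_prompts
print(enforce("new-ai-tool.example"))       # block_or_sandbox (shadow GenAI)
```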
Unified observability & AI-Ops
Versa’s Digital Experience Monitoring (DEM) and streaming telemetry provide second-level fidelity from user to WAN to data center. IT leaders gain visibility into real-time telemetry, can correlate anomalies across LAN, WAN, and security, and can tie them to business-intent policies. AI-enabled operations baseline normal behavior, flag outliers, and can trigger actions such as path reselection, QoS adjustments, or safe rollbacks to minimize mean time to resolution (MTTR).
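As a toy example of baselining and outlier detection (not Versa’s AI-Ops pipeline), the sketch below flags a latency sample that deviates several standard deviations from its baseline and triggers a hypothetical remediation step.

```python
# Minimal sketch of telemetry baselining and outlier-triggered action
# (illustrative only; thresholds and the remediation hook are assumptions).
from statistics import mean, stdev

def detect_outlier(history: list[float], sample: float,
                   z_max: float = 3.0) -> bool:
    """Flag a sample more than z_max standard deviations from baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return sample != mu
    return abs(sample - mu) / sigma > z_max

# Hypothetical per-path latency baseline (ms) from streaming telemetry.
baseline = [21.0, 22.5, 20.8, 21.9, 22.1, 21.4, 22.0, 21.7]

new_sample = 48.3
if detect_outlier(baseline, new_sample):
    # Hypothetical remediation: re-run path selection against the SLA.
    print("latency anomaly: trigger path reselection / QoS adjustment")
```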
AI-optimized data centers are now a prerequisite for competitive advantage. Building them means combining accelerator-dense compute, high-speed fabrics, modern storage, and advanced cooling with an edge-to-cloud networking and security fabric. With Versa’s solutions, organizations can connect, secure, and monitor these environments as one scalable system, delivering the performance, protection, and agility that next-generation AI demands. Learn more about how Versa can help build next-generation data centers fit for AI here.