AI-Optimized Data Centers: The Non-Negotiable Blueprint for What’s Next

By Dogu Narin
VP of Product Management, Versa Networks
November 13, 2025

Data centers are at the heart of any enterprise, serving as centralized points to host servers, applications, workloads, and compute that power everything from transactional systems to security and analytics. Today, as AI becomes embedded in every workflow, from customer service to software development, these data centers have evolved into the locations that host AI tools and data.

Today’s reality of AI anywhere and the role of data centers

Use cases for AI are growing rapidly. Existing applications are adopting AI, and new AI-based applications emerge almost daily, each with great promise.

Enterprises in regulated industries, or those with critical intellectual property or sensitive information, are unwilling to place that data in third-party clouds or AI applications hosted elsewhere. The data may be business data, technical data, or both.

Shadow IT discovery and management, DLP solutions, and generative AI firewalls are also employed to discover and control how enterprise users access third-party AI tools on the Internet and how they use sanctioned ones.

Beyond sovereignty, industry-specific mandates for finance, healthcare, and government may require that sensitive data be kept on-premises to maintain auditability and regulatory compliance. In many cases, this translates to running AI locally in the enterprise's own data center, protected by tight access controls and a comprehensive cybersecurity and data protection stack.

With data sovereignty in focus, AI data centers must process sensitive datasets locally, in compliance with security and sovereignty regulations, and model training may happen in the same data center. Once trained, AI models can be localized and deployed regionally for on-premises inference, closer to the user.

The enterprise's data, as well as the AI models themselves, whether developed in-house or acquired from third parties, are stored in AI data centers. One can think of the AI data center as the brain and central nervous system of enterprise IT infrastructure. To prevent accidental leakage, enterprises apply zero-trust-based access control and comprehensive security stacks to protect the AI applications and data within their own data centers.

As AI gains popularity, the need for the compute, networking, storage, and AI infrastructure required to train large-scale AI models and to serve inference continues to accelerate. The rise of generative AI, chatbots, vision systems, AI-based applications, the growing use of big data for AI, and autonomous operations all require purpose-built AI data centers. Enterprises need to understand how to address the connectivity and security needs of these increasingly important data centers so that they can architect and implement solutions accordingly.

With these new requirements, today's AI data centers must deliver performance and scale while preserving sovereignty, as AI's importance to the business accelerates. Industry reports project that the global AI inference market will grow to US $254.98 billion by 2030, at a CAGR of roughly 19.2%.

The Limitations of Traditional Data Centers in the AI Era

Within the AI data center, AI requires accelerated compute and ultra-fast connectivity to support high-performance Remote Direct Memory Access (RDMA), with a non-blocking, non-oversubscribed architecture that provides consistently low latency for both training and high-performance inference. These needs arise because networked GPUs use RDMA to access each other's distributed memory, whether to run very large matrix computations or to retrieve data spread across GPU memory banks. If a packet is dropped due to oversubscription, or arrives with unexpected jitter or delay, model training can stall or degrade, and user or application experience can suffer.
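As a rough, back-of-the-envelope illustration of why a single stall matters, the Python sketch below estimates the time of one synchronous ring all-reduce across networked GPUs and the relative cost of a single stalled flow. The gradient size, link speed, and stall duration are assumed values for illustration, not measurements from any specific fabric.

```python
# Back-of-the-envelope estimate of all-reduce time on a GPU fabric.
# All numbers below are illustrative assumptions, not measurements.

def ring_allreduce_seconds(grad_bytes: float, n_gpus: int, link_gbps: float) -> float:
    """Ideal ring all-reduce time: each GPU sends and receives
    2 * (N - 1) / N of the gradient volume over its own link."""
    bytes_on_wire = 2 * (n_gpus - 1) / n_gpus * grad_bytes
    return bytes_on_wire / (link_gbps * 1e9 / 8)

grad_bytes = 1e9     # ~1 GB gradient bucket per step (assumed)
n_gpus = 64          # GPUs participating in the collective (assumed)
link_gbps = 400      # per-GPU link speed in Gb/s (assumed)
stall_s = 0.050      # one 50 ms stall caused by a drop and retransmission (assumed)

ideal = ring_allreduce_seconds(grad_bytes, n_gpus, link_gbps)
print(f"ideal all-reduce: {ideal * 1e3:.1f} ms")
# The collective is synchronous, so one stalled flow delays every GPU:
print(f"with one {stall_s * 1e3:.0f} ms stall: {(ideal + stall_s) * 1e3:.1f} ms "
      f"(+{stall_s / ideal * 100:.0f}%)")
```

Because every GPU waits for the slowest participant, even one lossy or jittery link inflates the step time for the entire cluster, which is the core argument for a lossless, non-oversubscribed fabric.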

Traditional enterprise data centers, while fast, did not need to tackle such demanding problems: each compute or workload instance was largely self-contained and talked to others over APIs or traditional means across the data center network. An underlying network with reasonable oversubscription could handle the demand, and if some packets were dropped or arrived a little late, the application layers would usually absorb it, since applications were built with that assumption. Traditional enterprise data centers were never designed for the high-bandwidth, low-latency, non-blocking architecture that AI data centers require.

New requirements are also emerging on the access front. As the role and criticality of data centers grow further with AI, zero-trust-based access to AI models, applications, and data, data leak protection, and AI-specific compliance and governance all gain importance.

How Versa can help build next-generation AI-ready data centers

Flexible, Scalable Architecture for AI Inference Workloads

Scalability in AI training and inference data centers depends on high-performance, converged networking that seamlessly connects compute and storage. For intra-data-center traffic, Versa Ethernet switches support the Data Center Bridging (DCB) standards, so IT teams can build an Ethernet-based data center that meets the storage and data-movement needs inside the AI data center.
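As a quick illustration of what "non-oversubscribed" means in such a fabric, the sketch below checks the oversubscription ratio of a hypothetical leaf switch; the port counts and speeds are assumptions for the example, not a Versa reference design.

```python
# Quick check of leaf-switch oversubscription in a leaf-spine fabric.
# Port counts and speeds are illustrative assumptions.

def oversubscription_ratio(down_ports: int, down_gbps: float,
                           up_ports: int, up_gbps: float) -> float:
    """Downlink capacity divided by uplink capacity; 1.0 means the leaf is non-blocking."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# Example leaf: 32 x 400G ports facing GPU/storage servers, 16 x 800G uplinks to spines.
ratio = oversubscription_ratio(down_ports=32, down_gbps=400,
                               up_ports=16, up_gbps=800)
print(f"oversubscription ratio: {ratio:.2f}:1")  # 1.00:1 -> non-oversubscribed
```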

Furthermore, Versa Ethernet switches now support Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCEv2). With this key capability, IT teams can build AI data centers with low latency, high throughput, and efficient GPU interconnectivity for distributed training and inference at scale. Versa Networks can now help build enterprise AI data centers in which AI clusters sit adjacent to classical enterprise applications and data repositories, with each part of the data center served according to its needs. This approach helps enterprises leverage their existing investment while expanding it for the AI age.
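For context, here is a minimal sketch of the kind of distributed training job such a fabric serves: PyTorch processes running NCCL collectives, which can use an RDMA-capable (for example, RoCEv2) fabric when one is present. The NIC and interface names and the torchrun-based launch are illustrative assumptions, not part of the Versa solution itself.

```python
# Minimal sketch: distributed training processes using NCCL, whose collectives
# can run over an RDMA-capable (e.g., RoCEv2) fabric when available.
import os
import torch
import torch.distributed as dist

def main() -> None:
    # Hint NCCL toward the RDMA NICs; device/interface names are assumed and
    # would need to match the actual hardware.
    os.environ.setdefault("NCCL_IB_HCA", "mlx5_0,mlx5_1")
    os.environ.setdefault("NCCL_SOCKET_IFNAME", "eth0")

    # Rank, world size, and rendezvous info come from the torchrun environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # One synchronous all-reduce: this is the traffic pattern that benefits
    # from a lossless, low-latency, non-blocking fabric.
    grad = torch.ones(1024, device="cuda")
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=8 train.py` on each node, every synchronous collective like the one above traverses the GPU fabric, which is why loss and jitter on that fabric translate directly into slower training.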

Software-Defined Data Center Architecture for Agility

Versa delivers a software-defined datacenter architecture with application-aware, SLA-driven intelligence for high-performance, secure connectivity. At the datacenter edge, Versa OS (VOS) unifies L2–L7 services including routing, segmentation, security, and traffic engineering.

In addition to connecting AI racks over a RoCEv2-based, non-blocking Ethernet fabric, the Versa solution can connect AI racks or compute systems to the rest of the enterprise data center using standards-based EVPN/VXLAN overlays with built-in on-ramp/off-ramp functionality. This overlay connectivity can originate from Versa smart NICs (SD-NICs) placed in the AI compute systems themselves, or from on-ramp/off-ramp gateways connected to the servers' standard Ethernet NICs. This hybrid approach lets enterprise applications and workloads residing in the same data center make use of AI tools and apps located in a separate part of that data center. One can think of the boundary between the enterprise racks and the AI racks as an AI on-ramp/off-ramp edge. This AI edge may live on a dedicated Ethernet switch or on Versa SD-NICs that run VOS natively inside the AI compute platforms. If the AI edge is on the Ethernet switch, one side communicates with the classical part of the data center over VXLAN-based overlays while the other uses RoCEv2-based Ethernet to reach the AI clusters.
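To make the overlay concept concrete, here is a small Scapy sketch of standard VXLAN (RFC 7348) encapsulation as it might occur at an on-ramp/off-ramp gateway. The addresses and the VNI are made-up values, and the sketch illustrates the standard encapsulation generally, not Versa's specific implementation.

```python
# Illustrative sketch of VXLAN encapsulation at an on-ramp/off-ramp gateway:
# an inner (tenant/AI-rack) frame is wrapped in an outer UDP/IP header that
# carries a VXLAN Network Identifier (VNI) selecting the overlay segment.
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner frame: an enterprise workload talking to a service in the AI racks.
inner = (Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02")
         / IP(src="10.1.1.10", dst="10.2.2.20"))

# Outer frame: VTEP-to-VTEP transport across the data center underlay.
vxlan_frame = (Ether()
               / IP(src="192.0.2.1", dst="192.0.2.2")   # VTEP endpoints (assumed)
               / UDP(sport=49152, dport=4789)           # 4789 = IANA VXLAN port
               / VXLAN(vni=10100)                        # assumed VNI for the AI segment
               / inner)

vxlan_frame.show()
```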

Access to AI racks within the data center can be controlled by the inline micro-segmentation and ZTNA capabilities of the Versa solution at the AI on-ramp/off-ramp edge.

Versa views the edges of the network as critical locations for policy enforcement; these edges include WAN edges, LAN edges, cloud edges, and now AI on-ramp/off-ramp edges. The Versa data center solution provides rich policy-based access control and comprehensive security at each of these edges, including consistent policy across multi-vendor fabrics.

This edge-centric, elastic model enables IT teams to scale effortlessly on either side, AI racks or enterprise application racks, simply by adding links and ports, while the logical gateway, whether on VOS-based switches or in SD-NICs, behaves consistently.

Open, Non-Proprietary approaches to solving customer needs

The entire experience provided by Versa is built on best-of-breed, industry-standard hardware, protocols, and encapsulations, with automated operations through APIs and workflow templates. Versa provides democratized platforms built on open reference designs, eliminating the need for proprietary chassis, cables, and software constructs that ultimately constrain data center and IT operators.

Today, many legacy data center solutions are vendor-proprietary, relying on rigid chassis-based designs or proprietary Ethernet stacks and encapsulations that create vendor lock-in. The same pattern is clearly visible in AI data centers. Such approaches can also create islands of single fault domains that limit scalability, agility, resiliency, and security, and that are very hard to troubleshoot and operate in practice.

In contrast, Versa’s software-defined, standards-based, horizontally scalable architecture replaces costly forklift upgrades with open, programmable, and resilient infrastructure designed for modern enterprise and AI inference datacenters.

Figure 1: Chassis-Based vs. Hardware-Agnostic Platform for Datacenter Appliances

| Category | Proprietary Platform | Software-Defined Platform |
| --- | --- | --- |
| Architecture | Proprietary, monolithic chassis or stack with fixed hardware modules | Modular, software-defined design that allows selection from a range of stackable appliances |
| Scalability | Vertical scaling only, by adding modules/blades within chassis or stack limits | Horizontal scaling, with the ability to add more nodes on demand |
| Performance Optimization | Optimized for dedicated ASICs but limited adaptability to AI workloads | Distributed, software-based building blocks allow performance and scale to increase linearly and predictably |
| Vendor Lock-in | Yes, with proprietary backplane, OS, and management tools | Minimal, with an open architecture that supports multiple vendors and hardware options |

Versa ensures high-performance connectivity from branches and remote users to data centers

Extending Zero Trust to the datacenter

Versa ZTNA solutions come with a rich set of inline security capabilities, such as NGFW, IPS/IDS, threat prevention, malware detection, and DNS security, to extend Zero Trust to AI models, including LLMs, NLP models, and other generative and deep-learning models. With inline micro-segmentation, IT leaders can isolate GPU nodes, storage tiers, orchestrators, and services, preventing lateral movement and providing fine-grained visibility into which services are accessing resources from both internal and external sources.
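As a conceptual illustration of micro-segmentation (not Versa's policy engine), the sketch below models a default-deny allow-list between labeled segments of an AI cluster; the segment names and ports are hypothetical.

```python
# Hypothetical illustration of micro-segmentation for an AI cluster:
# a small allow-list evaluated between labeled segments, default-deny otherwise.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_segment: str
    dst_segment: str
    dst_port: int

ALLOW = {
    Rule("gpu-nodes",     "storage-tier", 2049),  # NFS access to training data (assumed)
    Rule("orchestrator",  "gpu-nodes",    6443),  # cluster control-plane API (assumed)
    Rule("inference-api", "gpu-nodes",    8000),  # model-serving endpoint (assumed)
}

def is_allowed(src_segment: str, dst_segment: str, dst_port: int) -> bool:
    """Default-deny: traffic passes only if an explicit rule matches."""
    return Rule(src_segment, dst_segment, dst_port) in ALLOW

print(is_allowed("gpu-nodes", "storage-tier", 2049))  # True: sanctioned data path
print(is_allowed("gpu-nodes", "orchestrator", 22))    # False: lateral SSH blocked
```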

Protecting data across data centers with Advanced DLP

Securing datacenters and branch networks with Gen AI firewalls

Unified observability & AI-Ops

Conclusion
