Key Considerations for Implementing Modern Virtualization
The enterprise virtualization market has reached a definitive inflection point. Following Broadcom’s acquisition of VMware, organizations are facing a new reality characterized by rising costs, licensing uncertainty, and a shrinking ecosystem of legacy support. Industry data suggests that 74% of enterprises are actively taking steps to reduce their VMware footprint, looking for alternatives that offer greater control and economic predictability.
Treating this transition as a simple “rip and replace” of a hypervisor is a strategic risk. Modernization is not just about changing vendors; it is about fundamentally reshaping your operational model to unify legacy virtual machines (VMs) with modern, containerized workloads on a single, future-proof platform.
For IT leaders, the challenge is no longer just keeping the lights on; it is designing an infrastructure that eliminates silos between “traditional” and “cloud-native” operations. This requires a deep understanding of Kubernetes-native architectures, new approaches to storage resilience, and a clear migration strategy that mitigates risk for mission-critical applications.
This three-part series of articles is designed to guide infrastructure architects and platform engineers through this transformation. Rather than jumping straight into specific solutions, we explore the foundational requirements for building a modern virtualization stack.
- Part 1: Strategic Shift: Why Orgs Should Choose Modern Virtualization. We examine why the industry is moving toward Kubernetes-native virtualization, the architectural differences between legacy HCI and modern stacks, and the business case for unifying your infrastructure to support AI and Edge workloads.
- Part 2: Key Considerations for Implementing Modern Virtualization: Designing for Resilience & Day 2 Operations. We dive into the technical “must-haves” for a production-grade environment, addressing critical challenges like storage topology awareness, preventing “boot storms,” and ensuring zero-data-loss disaster recovery (DR) without legacy SANs.
- Part 3: Choosing the Right Technology Partners for Your Modern Virtualization Journey. Finally, we apply these architectural principles to a real-world solution, demonstrating how SUSE Virtualization and Portworx by Pure Storage address these specific challenges to deliver a seamless, enterprise-ready platform.
Why Modernize Your Virtualization Stack Now?
The virtualization market has arrived at an inflection point: following Broadcom’s acquisition of VMware, enterprises face rising costs and licensing uncertainty. Industry surveys, such as The Voice of Kubernetes Experts Report 2025, show that about three-quarters of enterprises are actively taking steps to reduce their VMware footprint, and nearly sixty percent are exploring Kubernetes-based alternatives. Organizations are no longer just looking for a replacement; they are seeking a platform that unifies traditional virtual machines (VMs) with modern containerized workloads.
Moving to a modern virtualization stack is driven by three economic and operational imperatives:
- TCO Reduction & Licensing Freedom: Organizations report 30–50% cost savings by moving from legacy virtualization to open-source, Kubernetes-based platforms that avoid proprietary licensing traps, and by optimizing storage consumption through intelligent thin provisioning and capacity management.
- Operational Consolidation: Managing separate silos for VMs, containers, and storage is expensive. By managing VMs as Kubernetes objects, a single team can manage both legacy and cloud-native applications in a unified infrastructure platform.
- Future-Proofing for AI & Edge: Traditional platforms struggle with the dynamic demands of AI/ML. A Kubernetes-native foundation allows for the efficient orchestration of GPU resources and data-intensive workloads alongside standard VMs.
Core Pillars of a Modern Hyperconverged Stack
To address the imperatives of cost reduction and operational consolidation, organizations require a platform that bridges the gap between traditional enterprise reliability and cloud-native agility. This is achieved by combining two critical technological pillars into a single, modern hyperconverged stack.
- Cloud-Native Compute (The Hypervisor): A modern, open-source hypervisor powered directly by Kubernetes. Unlike legacy platforms that simply wrap Kubernetes in proprietary layers, a truly modern solution manages virtual machines (VMs) as first-class Kubernetes objects (leveraging standards like KubeVirt). This allows operations teams to manage legacy VMs and modern containers side-by-side through a single pane of glass, centralizing observability, security, and orchestration across the entire IT estate.
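To make the idea of “VMs as first-class Kubernetes objects” concrete, the sketch below shows a minimal KubeVirt `VirtualMachine` manifest. The VM name, image, and resource sizes are illustrative assumptions, not taken from any specific product:

```yaml
# Minimal KubeVirt VirtualMachine — managed with kubectl like any other
# Kubernetes object (name, image, and sizes are hypothetical examples).
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app-vm
spec:
  running: true                # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio    # paravirtualized disk bus for performance
        resources:
          requests:
            cpu: "2"
            memory: 4Gi
      volumes:
        - name: rootdisk
          containerDisk:       # ephemeral root disk pulled as a container image
            image: quay.io/containerdisks/fedora:latest
```

Because the VM is just another Kubernetes resource, teams can apply the same GitOps pipelines, RBAC policies, and observability tooling to it that they already use for containers.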
- Software-Defined Storage (The Data Layer): A purpose-built, Kubernetes-native data services platform. This layer acts as the software-defined storage backbone, abstracting the underlying physical hardware—whether local disks or external enterprise arrays—to deliver a resilient, elastic storage pool. It provides the essential “Day 2” enterprise capabilities that vanilla Kubernetes natively lacks, such as synchronous disaster recovery, automated capacity management, and high availability.
By unifying these compute and data layers, organizations can successfully eliminate infrastructure silos and escape the burden of legacy licensing taxes, establishing a clear, sustainable path to infrastructure modernization.
Building for Performance and Resilience
Building a modern virtualization stack requires shifting from a legacy “static appliance” mindset to a distributed systems approach. To prioritize performance and resilience, organizations must address key considerations for their infrastructure. These include the cluster topology, network strategy, and storage strategy.
Cluster Topology and Failure Domains
The physical layout of your nodes dictates your resilience against hardware failure.
- Quorum and Capacity: While a minimum of three nodes is required to establish a basic quorum, a six-node cluster is the recommended baseline for production virtualization environments to minimize the “blast radius” (capacity loss) during a node failure.
- Fault Domains: Furthermore, a common architectural pitfall is choosing storage and tooling that do not understand Kubernetes topology. Your underlying storage, backup, and disaster recovery tools must natively respect cluster fault domains (such as racks, zones, and regions). If a solution assumes a flat network layout — which is typical of legacy SANs — it risks saturating cross-zone links during replication. A truly modern storage architecture must automatically manage intelligent replica placement distributed across these physical fault domains to guarantee true high availability.
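One way topology awareness surfaces in practice is through standard Kubernetes zone labels and a topology-constrained `StorageClass`. The fragment below is a hedged sketch: the provisioner name and `replicas` parameter are hypothetical stand-ins for whatever your storage vendor exposes, while `allowedTopologies` and the `topology.kubernetes.io/zone` label are standard Kubernetes constructs:

```yaml
# Hypothetical topology-aware StorageClass: replicas are constrained to
# spread across the listed zones rather than a flat, topology-blind layout.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zone-aware-replicated
provisioner: example.csi.vendor.com   # placeholder CSI driver name
parameters:
  replicas: "3"                       # vendor-specific parameter (assumed)
volumeBindingMode: WaitForFirstConsumer  # bind after scheduling, so zone is known
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values: ["zone-a", "zone-b", "zone-c"]
```

With `WaitForFirstConsumer`, volume placement is deferred until the workload is scheduled, letting the storage layer place replicas relative to the node’s actual fault domain.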
Network Fabric Strategy
In a hyperconverged environment, the network acts as the storage backplane.
- Traffic Isolation: A common architectural mistake is sharing a single network interface for both management traffic and storage replication. Architects should design for dedicated network interfaces to isolate management traffic from heavy data replication traffic, preventing control plane lockups during peak loads.
- Capacity: For virtualization workloads, a minimum of 10 Gbps is required, but 25 Gbps or higher is strongly recommended.
- Throughput: Configuring Jumbo Frames (MTU 9000) can improve replication throughput and reduce CPU overhead.
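As a hedged illustration of the isolation and MTU recommendations above, the netplan fragment below dedicates a separate interface to storage replication with jumbo frames enabled. The interface name and addressing are assumptions for the sketch; your NIC names and subnets will differ:

```yaml
# /etc/netplan/60-storage.yaml — example only; interface name and
# addresses are hypothetical. Keeps replication traffic off the
# management network and enables jumbo frames on the storage NIC.
network:
  version: 2
  ethernets:
    ens2f0:                      # dedicated 25 GbE storage interface (assumed name)
      mtu: 9000                  # jumbo frames for replication throughput
      addresses:
        - 192.168.50.11/24       # isolated storage subnet (assumed)
```

Note that jumbo frames only help if every hop on the storage path (NICs and switches) is configured for MTU 9000; a single mismatched device can cause fragmentation or silent drops.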
Enterprise Storage Strategy
While default or built-in distributed storage solutions may suffice for basic, non-critical needs, mission-critical enterprise workloads require an advanced, purpose-built data services platform to meet strict SLAs for high performance and disaster recovery.
- RWX (ReadWriteMany) Requirement: To support live migration (the ability to move a running VM between nodes without downtime), the underlying storage must support shared access. The storage architecture must facilitate this via a shared filesystem or shared block layer that enables ReadWriteMany (RWX) access modes. This is what allows both the source and destination hosts to access the VM disk simultaneously during the migration handoff, ensuring seamless mobility.
- Performance and Intelligent Data Locality: To maximize performance and prevent network bottlenecks, the storage architecture must integrate directly with the Kubernetes scheduler. It must ensure that when a virtual machine starts or restarts, it is placed on a physical node where a replica of its data already exists locally. This “data locality” eliminates unnecessary network hops for read operations, delivering near-native I/O performance essential for database and performance-sensitive workloads.
- Backing Storage Configuration: For bare-metal deployments, a modern software-defined storage layer typically consumes raw block devices. When integrating this environment with external enterprise storage arrays, specific configurations are necessary:
- Multipathing: This must be configured on the host nodes to ensure highly available, redundant connectivity to the backend storage array via protocols like iSCSI or Fibre Channel.
- Metadata Journaling: A dedicated high-speed volume (such as an SSD or NVMe device) should be allocated specifically for storage metadata journaling. This prevents write latency bottlenecks and ensures smooth operations in high-transaction environments.
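The RWX requirement above translates into a straightforward claim definition. The sketch below shows a PersistentVolumeClaim for a VM disk; the storage class name is a hypothetical placeholder for whatever RWX-capable class your data layer provides:

```yaml
# PVC for a VM root disk. ReadWriteMany lets both the source and
# destination nodes attach the volume during a live migration.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-root-disk
spec:
  accessModes:
    - ReadWriteMany            # required for live migration
  volumeMode: Block            # raw block device, typical for VM disks
  resources:
    requests:
      storage: 50Gi
  storageClassName: rwx-block-storage   # hypothetical RWX-capable class
```

If the backing storage class only supports ReadWriteOnce, live migration requests will fail at volume attach time, so validating RWX support is a sensible pre-production check.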
Summary
By implementing modern virtualization design considerations, you can seamlessly modernize your infrastructure to support increasingly cloud-native, data-intensive, and edge-focused workloads without disrupting existing applications. Moving away from legacy platforms and prioritizing critical architectural pillars, such as resilient cluster topologies, dedicated network fabrics, and intelligent data locality, eliminates unnecessary complexity and unlocks true agility.
SUSE Virtualization with Portworx by Pure Storage provides an enterprise solution that aligns with these recommended design considerations. Together, they deliver a unified, Kubernetes-native foundation that runs VMs and containers side-by-side. This powerful combination simplifies day-to-day operations and provides resilient, workload-aware data services, ensuring your enterprise is well positioned to meet today’s demands and adapt to future growth.
Take Your Next Steps
Learn more about how SUSE and Portworx deliver efficient and cost-effective solutions to enterprises navigating the complexities of Kubernetes and VM management with the following resources:
- Register for a Hands-On Lab (HOL): Experience the SUSE and Portworx solution firsthand by joining an interactive session.
- Get the white paper: The Business Value of SUSE Rancher Prime with Virtualization.
- Schedule a demo or reach out to us at isv-cosell@suse.com to discuss your specific needs.