Harvester v1.5.x
Harvester is adopting a new lifecycle strategy that simplifies version management and upgrades. This strategy includes the following:
- Four-month minor release cadence
- Two-month patch release cadence (best effort)
- Component adoption policy
The following table outlines the supported upgrade paths. Before upgrading to Harvester v1.5, you must first upgrade Rancher to a supported patch version of Rancher v2.11. For more information about upgrades, see the documentation.
Installed Version | Supported Upgrade Versions | Rancher Version Before Harvester Upgrade |
---|---|---|
v1.4.2 | v1.5 | v2.11 |
v1.4.3 | v1.5 | v2.11 |
v1.5.x | v1.5.y (where y is greater than x) | v2.11 |
Product Lifecycle & Support Phases
In earlier versions, the terms Latest and Stable were used to provide guidance for upgrades. Starting with v1.5, the terms Community and Prime are used to differentiate the releases intended for community users and SUSE customers.
NOTE: "Prime" refers to SUSE Rancher Prime, which is SUSE's enterprise container management platform. SUSE Virtualization is the cloud-native virtualization product that is part of SUSE Rancher Prime.
Product | Version | Type | Released |
---|---|---|---|
SUSE Virtualization | v1.5.1 | Prime | 30 June 2025 |
Harvester | v1.5.0 | Community | 25 April 2025 |
The listed versions all have an EOM date of 30 December 2025 and an EOL date of 30 December 2026.
PHASES | DESCRIPTION |
---|---|
GA to EOM | Upon General Availability of all major and minor releases, products are fully supported and maintained until the End of Maintenance date. Support entails general troubleshooting of a specific issue to isolate potential causes and pursue issue resolution. |
EOM to EOL | After a product release reaches its End of Maintenance date, no further code-level maintenance is provided, except for critical security-related fixes on a request basis. The product continues to be supported until it reaches its End of Life date. |
EOL | Once a product release reaches its End of Life date, customers may continue to use the product within the terms of the product licensing agreement. Support plans from SUSE do not apply to product releases past their EOL date. |
Scope of Support
A Harvester support entitlement includes support for the components that are bundled with the Harvester appliance. Certain components, such as Longhorn storage, are subject to an additional usage-based entitlement; see the SUSE terms and conditions or speak to your Account Executive for details.
- Hypervisor Host OS
- Storage (Longhorn)
- Harvester cloud provider (CCM) and storage interface (CSI)
- Terraform provider
- Windows VMDP Drivers
Support for guest Kubernetes distributions (for example, RKE2 and K3s) or for Rancher Manager requires a separate entitlement in addition to Harvester.
Harvester & Rancher Support Matrix
Rancher is an open-source multi-cluster management platform. Harvester has integrated Rancher by default starting with Rancher v2.6.1.
Harvester Version | Rancher Version | Supported Kubernetes Versions (Harvester Node Driver) |
---|---|---|
v1.5 | v2.11 | RKE2 1.30, 1.31, 1.32 |
Rancher Manager Supported Deployment
If Rancher Manager must be deployed on a Harvester cluster, the following configuration is supported:
- A three-node RKE2 or K3s cluster (one node per VM)
- Installed using the Rancher Helm chart according to the Rancher documentation
- Single-node Rancher Manager deployments are not supported
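As a rough sketch of the Helm-based installation described above (following the Rancher Helm chart documentation; the hostname is a placeholder, and prerequisites such as cert-manager are omitted for brevity):

```shell
# Add the Rancher chart repository and install Rancher into the
# cattle-system namespace on the three-node RKE2/K3s cluster.
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com   # placeholder hostname
```

Refer to the Rancher documentation for the full list of chart options and prerequisites.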
Harvester CCM & CSI Drivers
The CCM and CSI drivers support the RKE1, RKE2, and K3s distributions, unless otherwise noted.
For Harvester CCM and CSI driver integration with RKE1, install or upgrade to the latest chart manually from the Catalog Apps. Refer to the Harvester documentation for more details.
Harvester Cloud Provider | Harvester CSI Driver | RKE2 Version | Feature Upgrade Support |
---|---|---|---|
v0.2.10 | v0.1.23 | >=v1.30.13+rke2r1, >=v1.31.9+rke2r1, >=v1.32.5+rke2r1 | Yes |
v0.2.9 | v0.1.23 | >=v1.29.14+rke2r1, >=v1.30.10+rke2r1, >=v1.31.6+rke2r1 | Yes |
v0.2.9 | v0.1.22 | >=v1.29.13+rke2r1, >=v1.30.9+rke2r1, >=v1.31.5+rke2r1 | Yes |
v0.2.6 | v0.1.21 | >=v1.29.12+rke2r1, >=v1.30.8+rke2r1, >=v1.31.4+rke2r1 | Yes |
v0.2.4 | v0.1.18 | >=v1.27.16+rke2r2, >=v1.28.13+rke2r1, >=v1.29.8+rke2r1, >=v1.30.4+rke2r1 | Yes |
v0.2.4 | v0.1.17 | >=v1.26.15+rke2r1 | Yes |
v0.2.3 | v0.1.17 | >=v1.26.6+rke2r1 | Yes |
v0.2.2 | v0.1.16 | >=v1.26.6+rke2r1 | Yes |
v0.2.2 | v0.1.16 | >=v1.25.11+rke2r1 | Yes |
v0.2.2 | v0.1.16 | >=v1.24.15+rke2r1 | Yes |
v0.1.14 | v0.1.15 | >=v1.23.17+rke2r1 | No |
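To make the table's version floors easier to read, here is a hypothetical helper (not part of Harvester) that checks whether an installed RKE2 version meets the minimum listed for its minor-version line, using the `vX.Y.Z+rke2rN` format shown above:

```python
import re

def parse_rke2(version: str):
    """Split 'v1.30.13+rke2r1' into ((1, 30, 13), 1)."""
    m = re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)\+rke2r(\d+)", version)
    if not m:
        raise ValueError(f"unrecognized RKE2 version: {version}")
    major, minor, patch, rev = map(int, m.groups())
    return (major, minor, patch), rev

def satisfies(installed: str, minimums: list) -> bool:
    """True if `installed` meets or exceeds the floor for its minor line."""
    ver, rev = parse_rke2(installed)
    for floor in minimums:
        fver, frev = parse_rke2(floor)
        if ver[:2] == fver[:2]:  # same 1.xx minor-version line
            return (ver, rev) >= (fver, frev)
    return False  # minor line not covered by this row

# Floors from the v0.2.10 / v0.1.23 row above
row = ["v1.30.13+rke2r1", "v1.31.9+rke2r1", "v1.32.5+rke2r1"]
print(satisfies("v1.31.10+rke2r1", row))  # True
print(satisfies("v1.31.8+rke2r1", row))   # False
```

The comparison is done per minor line because each row lists a separate floor for each supported Kubernetes minor version.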
Other Dependency Versions
Harvester Version | Harvester Terraform Provider |
---|---|
v1.5.1 | v0.6.7 |
v1.5.0 | v0.6.7 |
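As a minimal configuration sketch for the Terraform provider version listed above (the provider source follows the public Terraform Registry naming; the kubeconfig path is a placeholder):

```hcl
terraform {
  required_providers {
    harvester = {
      source  = "harvester/harvester"
      version = "0.6.7"
    }
  }
}

provider "harvester" {
  kubeconfig = "~/.kube/harvester.yaml" # placeholder path to your Harvester kubeconfig
}
```

See the provider's registry documentation for the full set of supported resources and arguments.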
Guest Operating System Support
The following guest operating systems have been validated to run on SUSE Virtualization during release testing:
OS Family | Versions |
---|---|
SLES | 15 SP6 |
OpenSUSE Leap | 15.6 |
SLE Micro | 6 |
Ubuntu | 24.04 |
RHEL | 9.4 |
Windows Server | 2022 |
Windows (workstation) | 11 |
A broad range of operating systems should function well in virtual machines on SUSE Virtualization. SUSE Virtualization supports x86_64 and ARM64 physical host servers, and each can run guest operating systems for their respective architecture. Generally, older versions of those operating systems will continue to function, even if they have not been validated during the latest release testing. Any known issues with guest operating systems will be provided in the release notes.
Hardware Requirements
For best results, SUSE recommends using hardware that is YES certified for SLES 15 SP3 or SP4 with Harvester: https://www.suse.com/yessearch/. Harvester is built on SLE technology, and YES certified hardware has undergone additional validation of driver and system board compatibility.
To get the Harvester server up and running, the following minimum hardware is required:
Type | Requirements |
---|---|
Cluster | Minimum of 3 servers in each cluster |
CPU | x86_64 only. Hardware-assisted virtualization is required. 8-core processor minimum for testing; 16-core or above is required for production. |
Memory | 32 GB minimum for testing; 64 GB or above is required for production. |
Disk Capacity | 200 GB minimum for testing (180 GB minimum when using multiple disks); 500 GB or above is required for production. |
Disk Performance | 5,000+ random IOPS per disk (SSD/NVMe). Management nodes (the first three nodes) must be fast enough for etcd. Only local disks or hardware RAID are supported. |
Network Card | 1 Gbps Ethernet minimum for testing; 10 Gbps Ethernet minimum is required for production. |
Network Switch | Trunking of ports required for VLAN support |
Testing environments with a 1 Gbps NIC and a 140 GB disk are not supported.
We recommend server-class hardware for the best results. Laptops and nested virtualization are not commercially supported.