Product Lifecycle & Support Phases
Every Harvester release moves through three stages: In Development, Latest, and finally Stable. Do not upgrade your production cluster to anything other than a listed Stable version; use Latest versions for testing purposes only.
Please see the 1.1.x release page for information on the current stable release, 1.1.2.
All versions listed below have an EOL date of 08 September 2024.
|Version |Status |Release Date
|1.2.2 |In Development |26 Oct 2023
|1.2.1 |Stable |07 Sep 2023
Harvester follows an N-1 support policy (the two most recent minor versions receive security and bug fixes) along with a 26-week release cycle. This results in each release being supported for 14 months (12 months of support plus a 2-month upgrade period).
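The 12-month and 2-month arithmetic above can be sketched as a small date calculation (a minimal example using GNU `date`; the GA date shown is a hypothetical placeholder, not an actual release date):

```shell
# Derive lifecycle milestones from a General Availability (GA) date.
# Assumes GNU date is available for relative date arithmetic.
GA="2024-01-01"                         # hypothetical GA date
EOM=$(date -d "$GA +12 months" +%F)     # End of Maintenance: GA + 12 months of support
EOL=$(date -d "$EOM +2 months" +%F)     # End of Life: EOM + 2-month upgrade period
echo "GA=$GA EOM=$EOM EOL=$EOL"
```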
GA to EOM
Upon General Availability (GA) of all major and minor releases, products are fully supported and maintained until the End of Maintenance (EOM) date. Support entails general troubleshooting of a specific issue to isolate potential causes and pursue resolution.
EOM to EOL
After a product release reaches its End of Maintenance (EOM) date, no further code-level maintenance is provided, except for critical security-related fixes on a request basis. The product continues to be supported until it reaches End of Life (EOL).
Once a product release reaches its End of Life (EOL) date, customers may continue to use the product within the terms of the product licensing agreement.
- Support Plans from SUSE do not apply to product releases past their EOL date.
Scope of Support
A Harvester support entitlement includes support for the components bundled with the Harvester appliance. Certain components, such as Longhorn storage, are subject to additional usage-based entitlements; see the SUSE terms and conditions or speak to your Account Executive for details. Covered components include:
- Hypervisor Host OS
- Storage (Longhorn)
- Harvester cloud provider (CCM) and storage interface (CSI)
- Terraform provider
- Windows VMDP Drivers
Support for guest Kubernetes distributions (e.g., RKE2, k3s) or Rancher Manager requires a separate entitlement in addition to Harvester.
Harvester & Rancher Support Matrix
Rancher is an open-source multi-cluster management platform. Harvester integrates with Rancher by default starting with Rancher v2.6.1.
|Harvester Node Driver |Supported K8s Versions
|RKE1 & RKE2 |v1.23, v1.24, v1.25, v1.26
Rancher Manager Supported Deployment
If Rancher Manager needs to be deployed on a Harvester cluster, the following configuration is supported:
- 3 node/VM RKE2 or k3s cluster
- Installed using the Rancher Helm chart according to the Rancher Documentation
- Single node Rancher managers are not supported
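Following the Rancher documentation, an installation via the Rancher Helm chart can be sketched as below (the hostname is a placeholder; per the Rancher docs, a certificate source such as cert-manager must also be configured, which is omitted here for brevity):

```shell
# Add the stable Rancher Helm repository.
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update

# Rancher is installed into the cattle-system namespace.
kubectl create namespace cattle-system

# Install Rancher with 3 replicas, one per node of the RKE2/k3s cluster.
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set replicas=3
```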
Harvester CCM & CSI Drivers
CCM and CSI drivers support RKE1, RKE2 and k3s distributions, unless otherwise noted.
For Harvester CCM and CSI driver integration with RKE1, install or upgrade the latest chart manually from the Catalog Apps. Refer to the Harvester documentation for more details.
Starting with Harvester v1.2.0 and Rancher v2.7.6, use one of the following RKE2 versions for the best Harvester cloud provider and CSI driver capability support.
|Harvester Cloud Provider |Harvester CSI Driver |Feature Upgrade Support
For the RKE2 cluster versions supported with Harvester v1.1.2, refer to the v1.1.x support matrix for more details.
Other Dependency Versions
|Harvester Terraform Provider
Guest Operating System Support
The following guest operating systems have been validated to run in Harvester and are fully supported:
All other x86 operating systems are supported on a "best effort" basis.
For best results, SUSE recommends using hardware that is YES certified for SLES 15 SP3 or SP4 with Harvester: https://www.suse.com/yessearch/. Harvester is built on SLE technology, and YES-certified hardware has additional validation of driver and system board compatibility.
To get the Harvester server up and running, the following minimum hardware is required:
|Component |Requirement
|Server |Minimum of 3 servers in each cluster
|CPU |x86_64 only; hardware-assisted virtualization is required. 8-core processor minimum for testing; 16-core or above is required for production
|Memory |32 GB minimum for testing; 64 GB or above is required for production
|Disk capacity |200 GB minimum for testing (180 GB minimum when using multiple disks); 500 GB or above is required for production
|Disk performance |5,000+ random IOPS per disk (SSD/NVMe). Management nodes (first three nodes) must be fast enough for etcd. Only local disks or hardware RAID are supported
|Network card |1 Gbps Ethernet minimum for testing; 10 Gbps Ethernet is required for production
|Network switch |Trunking of ports required for VLAN support
Testing environments with a 1 GB NIC / 140 GB disk are not supported.
We recommend server-class hardware for the best results. Laptops and nested virtualization are not commercially supported.