Product Lifecycle & Support Phases
All Harvester releases go through three stages: In Development, Latest, and finally Stable. Do not upgrade a production cluster to anything other than a listed Stable version; use Latest versions for testing purposes only.
|Version|GA Date|EOM Date|EOL Date|
|---|---|---|---|
|1.0.3|09 Aug 2022|09 Aug 2023|20 Feb 2023|
|1.0.2|07 May 2022|07 May 2023|20 Feb 2023|
|1.0.1|07 Apr 2022|07 Apr 2023|20 Feb 2023|
|1.0.0|21 Dec 2021|20 Dec 2022|20 Feb 2023|
Harvester follows an N-1 support policy (meaning that the two most recent minor versions receive security and bug fixes) along with a 26-week release cycle. This results in each release being supported for 14 months (12 months of support plus a 2-month upgrade period).
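Assuming EOM falls 12 months after GA and EOL 14 months after GA, as the policy describes, the v1.0.0 dates above can be reproduced with a short sketch. The function names here are illustrative only and not part of any Harvester tooling:

```python
import calendar
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day when needed."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def lifecycle_dates(ga: date):
    """EOM = GA + 12 months, EOL = GA + 14 months (less one day each)."""
    eom = add_months(ga, 12) - timedelta(days=1)
    eol = add_months(ga, 14) - timedelta(days=1)
    return eom, eol
```

For Harvester 1.0.0 (GA 21 Dec 2021) this yields an EOM of 20 Dec 2022 and an EOL of 20 Feb 2023, matching the table row above; the dates of later patch releases may be adjusted by SUSE and need not follow this formula exactly.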
GA to EOM
Upon General Availability (GA) of each major and minor release, the product is fully supported and maintained until its End of Maintenance (EOM) date. Support entails general troubleshooting of a specific issue to isolate potential causes. Issue resolution is pursued through one or more of the following:
EOM to EOL
After a product release reaches its End of Maintenance (EOM) date, no further code-level maintenance is provided, except for critical security-related fixes on a request basis. The product continues to be supported until it reaches End of Life (EOL), in the form of:
Once a product release reaches its End of Life (EOL) date, customers may continue to use the product within the terms of the product licensing agreement.
- Support Plans from SUSE do not apply to product releases past their EOL date.
Scope of Support
A Harvester support entitlement includes support for the components that are bundled with the Harvester appliance. Certain components, such as Longhorn storage, are subject to an additional usage-based entitlement; see the SUSE T&C or speak to your Account Executive for details.
- Hypervisor Host OS
- Storage (Longhorn)
- Harvester cloud provider (CCM) and storage interface (CSI)
- Terraform provider
- Windows VMDP Drivers
Support for guest Kubernetes distributions (e.g. RKE2, k3s) or Rancher Manager requires a separate entitlement in addition to Harvester.
Harvester & Rancher Support Matrix
Rancher is an open-source multi-cluster management platform. Harvester is integrated with Rancher by default starting with Rancher v2.6.1.
|Harvester Version|Rancher Version|Harvester Node Driver Supported K8s Versions|
|---|---|---|
|v1.0.2-v1.0.3|v2.6.8|RKE1 & RKE2 v1.22, v1.23, v1.24|
|v1.0.2-v1.0.3|v2.6.7|RKE1 & RKE2 v1.22, v1.23, v1.24|
|v1.0.2-v1.0.3|v2.6.6|RKE1 & RKE2 v1.22, v1.23, v1.24|
|v1.0.1-v1.0.2|v2.6.5|RKE1 & RKE2 v1.22, v1.23, v1.24|
|v1.0.1|v2.6.4|RKE1 & RKE2 v1.22, v1.23, v1.24|
|v1.0.0|v2.6.3|RKE1 & RKE2 v1.22, v1.23, v1.24|
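The compatibility matrix above can be encoded as a simple lookup for scripting version checks. This is an illustrative sketch, not an official Harvester or Rancher API; the data structure and function names are assumptions:

```python
# Supported Harvester version range per Rancher release, taken from the
# matrix above (illustrative helper, not official tooling).
SUPPORT_MATRIX = {
    "v2.6.8": ("v1.0.2", "v1.0.3"),
    "v2.6.7": ("v1.0.2", "v1.0.3"),
    "v2.6.6": ("v1.0.2", "v1.0.3"),
    "v2.6.5": ("v1.0.1", "v1.0.2"),
    "v2.6.4": ("v1.0.1", "v1.0.1"),
    "v2.6.3": ("v1.0.0", "v1.0.0"),
}

def _key(version: str):
    """Turn 'v1.0.2' into a comparable tuple (1, 0, 2)."""
    return tuple(int(p) for p in version.lstrip("v").split("."))

def is_supported(harvester: str, rancher: str) -> bool:
    """Check whether a Harvester version falls in the range for a Rancher release."""
    rng = SUPPORT_MATRIX.get(rancher)
    if rng is None:
        return False
    lo, hi = rng
    return _key(lo) <= _key(harvester) <= _key(hi)
```

For example, `is_supported("v1.0.3", "v2.6.7")` is true, while `is_supported("v1.0.0", "v2.6.7")` is not.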
Rancher Manager Supported Deployment
If a Harvester cluster requires Rancher Manager to be deployed on it, we support the following configuration:
- 3 node/VM RKE2 or k3s cluster
- Installed using the Rancher Helm chart according to the Rancher Documentation
- Single-node Rancher Manager deployments are not supported
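As a sketch of the supported configuration, Rancher can be installed onto an existing three-node RKE2 or k3s cluster with its Helm chart roughly as follows; the hostname is a placeholder, and the repository channel and values shown are assumptions to verify against the Rancher Documentation:

```shell
# Add the Rancher chart repository (the "latest" channel is one of several options)
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update

# Rancher is installed into the cattle-system namespace
kubectl create namespace cattle-system

# Install with one replica per node of the 3-node cluster;
# rancher.example.com is a placeholder hostname
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set replicas=3
```

Depending on the certificate option chosen, prerequisites such as cert-manager may also need to be installed first; the Rancher Documentation covers these steps.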
Harvester CCM & CSI Driver
CCM and CSI drivers support the RKE1, RKE2, and k3s distributions unless otherwise noted.
For Harvester CCM and CSI driver integration with RKE1, install or upgrade to the latest chart manually from the Catalog Apps. Refer to the Harvester documentation for more details.
|Harvester Cloud Provider|Harvester CSI Driver|RKE2 Version|Feature Upgrade Support|
|---|---|---|---|
Other Dependency Versions
|Harvester Version|Harvester Terraform Provider|
|---|---|
Guest Operating System Support
Full support is provided for the following guest operating systems, which have been validated to run in Harvester:
All other x86 operating systems are supported on a "best effort" basis.
For best results, SUSE recommends using hardware that is YES certified for SLES 15 SP3 or SP4 with Harvester: https://www.suse.com/yessearch/. Harvester is built on SLE technology, and YES certified hardware has additional validation of driver and system board compatibility.
To get the Harvester server up and running, the following minimum hardware is required:
|Hardware|Requirements|
|---|---|
|Memory|32 GB minimum; 64 GB or above preferred|
|Disk Capacity|140 GB minimum for testing; 500 GB or above recommended for production. Custom partitioning is not supported.|
|Disk Performance|5,000+ random IOPS per disk (SSD/NVMe). Management nodes (the first three nodes) must be fast enough for etcd. Only local disks or hardware RAID is supported.|
|Network Card|1 Gbps Ethernet minimum for testing; 10 Gbps Ethernet recommended for production.|
|Network Switch|Trunking of ports required for VLAN support|
|CPU|x86_64 only. Hardware-assisted virtualization is required. 8-core processor minimum; 16-core or above preferred|
Environments with a 1 Gbps NIC / 140 GB disk are suitable for testing only and are not supported for production use.
We recommend server-class hardware for the best results. Laptops and nested virtualization are not officially supported.
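As an illustration only (not an official Harvester tool), the hardware table above can be turned into a simple preflight check; the profile names and spec keys are assumptions:

```python
# Hypothetical preflight check against the hardware requirements table above.
# Spec keys and profile names are illustrative, not Harvester tooling.
MINIMUM = {"cpu_cores": 8, "memory_gb": 32, "disk_gb": 140, "nic_gbps": 1}
PREFERRED = {"cpu_cores": 16, "memory_gb": 64, "disk_gb": 500, "nic_gbps": 10}

def below_profile(specs: dict, profile: dict) -> list:
    """Return the names of any specs that fall below the given profile."""
    return sorted(k for k, v in profile.items() if specs.get(k, 0) < v)
```

A node meeting only the testing minimums would pass `below_profile(node, MINIMUM)` with an empty list but show every shortfall against `PREFERRED`, mirroring the testing-versus-production split in the table.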