How SUSE Virtualization engineering teams leverage support-bundle-kit
SUSE Virtualization is a platform for running virtual machines, whether they host legacy applications, cloud-native workloads, or a combination of the two.
Whenever a user runs into an issue specific to their environment, a common request is for the issue reporter to provide a Support Bundle.
What happens under the hood?
When a user triggers a support bundle from the UI, a SupportBundle custom resource (CR) is created.
(⎈|local:harvester-system)➜ ~ kubectl get supportbundle -o yaml
apiVersion: v1
items:
- apiVersion: harvesterhci.io/v1beta1
  kind: SupportBundle
  metadata:
    creationTimestamp: "2025-09-15T08:29:52Z"
    generation: 3
    name: bundle-local-v1.6.0-6wtbg
    namespace: harvester-system
    resourceVersion: "39724494"
    uid: 441cea18-7423-4a67-b227-9b6b45ed1d89
  spec:
    description: test support bundle
    expiration: 30
    issueURL: sample
    nodeTimeout: 30
    timeout: 50
  status:
    progress: 16
    state: generating
kind: List
metadata:
  resourceVersion: ""
This is picked up by the supportbundle controller, which creates a supportbundle-manager deployment in the harvester-system namespace.
(⎈|local:harvester-system)➜ ~ kubectl get deployment
NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
supportbundle-manager-bundle-local-v1.6.0-6wtbg   1/1     1            1           80s
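The manager pod's logs are a convenient way to watch what the collection is doing; for example, using the deployment name shown above:

kubectl -n harvester-system logs -f \
  deploy/supportbundle-manager-bundle-local-v1.6.0-6wtbg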
The deployment runs the support-bundle-kit manager subcommand, which in turn triggers the creation of a support-bundle-kit agent daemonset.
(⎈|local:harvester-system)➜ ~ kubectl get daemonset
NAME                                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                  AGE
supportbundle-agent-bundle-local-v1.6.0-6wtbg   1         1         1       1            1           harvesterhci.io/managed=true   43s
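Since the daemonset uses the node selector shown above, the agent only runs on managed nodes. A quick way to confirm which nodes and pods are involved (only the selector and pod name prefix visible in the output above are assumed):

kubectl get nodes -l harvesterhci.io/managed=true
kubectl -n harvester-system get pods -o wide | grep supportbundle-agent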
The support-bundle-kit ships with a number of predefined collectors. The manager is responsible for collecting information from the k8s apiserver; in the case of SUSE Virtualization, the following is collected (a rough kubectl sketch follows the list below):
- Cluster-level resources, including all available CRDs
- Non-sensitive SUSE Virtualization settings
- All namespaced resources, excluding secrets, from a list of predefined namespaces:
  - cattle-dashboards
  - cattle-fleet-system
  - cattle-provisioning-capi-system
  - fleet-local
  - local
  - cattle-fleet-clusters-system
  - cattle-logging-system
  - cattle-system
  - harvester-system
  - longhorn-system
  - cattle-fleet-local-system
  - cattle-monitoring-system
  - default
  - kube-system
- Pod logs for all pods in the predefined and any additional namespaces
Note: Additional namespaces can be defined by using the SUSE Virtualization setting support-bundle-namespaces.
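To get a feel for what the manager gathers, the collection can be roughly approximated with plain kubectl. This is only an illustrative sketch (the real collectors are implemented inside support-bundle-kit), using harvester-system as an example namespace and assuming the setting above is exposed through the settings.harvesterhci.io CRD:

# Cluster-level resources, including all CRDs
kubectl get crds -o yaml > crds.yaml

# Namespaced resources, excluding secrets, for one of the predefined namespaces
mkdir -p harvester-system
kubectl api-resources --verbs=list --namespaced -o name | grep -v '^secrets$' |
while read -r res; do
  kubectl -n harvester-system get "${res}" -o yaml > "harvester-system/${res}.yaml"
done

# Pod logs for the same namespace
for pod in $(kubectl -n harvester-system get pods -o name); do
  kubectl -n harvester-system logs --all-containers "${pod}" > "harvester-system/${pod#pod/}.log"
done

# Extra namespaces configured via the support-bundle-namespaces setting (assumed resource name)
kubectl get settings.harvesterhci.io support-bundle-namespaces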
The support-bundle-kit agent runs on each node and collects the following node-level information (a manual approximation follows this list):
- rke2-agent / rke2-server logs
- kernel logs
- kubelet logs
- containerd logs
- supportconfig
- Harvester config from /oem
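The same node-level data can be pulled by hand if needed. A rough sketch, run as root on a node, assuming an RKE2-based host where the kubelet and containerd logs sit under the usual /var/lib/rancher/rke2 paths (the agent's actual collectors may differ):

# Service and kernel logs; use rke2-agent instead of rke2-server on worker nodes
journalctl -u rke2-server --no-pager > rke2-server.log
journalctl -k --no-pager > kernel.log

# Kubelet and containerd logs from their typical RKE2 locations (assumed paths)
cp /var/lib/rancher/rke2/agent/logs/kubelet.log .
cp /var/lib/rancher/rke2/agent/containerd/containerd.log .

# SUSE supportconfig and the Harvester config shipped under /oem
supportconfig
cp -r /oem ./oem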
Once the agent process on each node has collected the above information, it is zipped up and uploaded to
the support-bundle-kit manager deployment pod.
The support-bundle-kit manager waits until either the job timeout is reached or zip files from all nodes are available, then packages all the collected information into a predefined structure.
All the cluster- and node-specific information ends up in a single zip file that is made available for download via the browser session.
Analyzing the support bundle
To review the contents of the support bundle, users can simply unzip the downloaded file and inspect the k8s resources and logs it contains.
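For a quick manual look, unzipping the file and listing its directories is usually enough to locate the resource dumps and logs (the file name below is a placeholder for the generated one):

unzip supportbundle_<cluster-uid>_<timestamp>.zip -d bundle
find bundle -maxdepth 2 -type d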
An easier method exists via the support-bundle-kit simulator:
support-bundle-kit simulator -h
Simulate a support bundle by loading into an empty apiserver
The simulator will run an embedded etcd, apiserver and a minimal virtual kubelet.
It will then load the support bundle into this setup, allowing users to browse and interact with
support bundle contents using native k8s tooling like kubectl
Usage:
  support-bundle-kit simulator [flags]

Flags:
      --bundle-path string    location to support bundle. default is . (default ".")
      --client-burst int      Burst for the kubernete client to push objects to the api server (default 100)
      --client-qps float32    QPS for the kubernete client to push objects to the api server (default 100)
  -h, --help                  help for simulator
      --reset                 reset sim-home, will clear the contents and start a clean etcd + apiserver instance
      --sim-home string       default home directory where sim stores its configuration. default is $HOME/.sim (default "/Users/username/.sim")
      --skip-load             skip load / re-load of bundle. this will ensure current etcd contents are only accessible

Global Flags:
      --config string   config file (default is $HOME/.support-bundle-utils.yaml)
      --debug           set logging level to debug
      --trace           set logging level to trace
      --version version[=true]   --version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version
Users can simply point to the extracted zip file directory and run the support-bundle-kit simulator subcommand:
(⎈|local:harvester-system)➜ supportbundle_0a8dba22-92dc-4e4c-ac39-6af9c0ba2a8b_2025-09-15T08-29-53Z support-bundle-kit simulator --reset .
INFO[0000] Creating embedded etcd server
{"level":"warn","ts":"2025-09-15T19:03:41.884485+1000","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"warn","ts":"2025-09-15T19:03:41.991070+1000","caller":"etcdserver/metrics.go:224","msg":"failed to get file descriptor usage","error":"cannot get FDUsage on darwin"}
INFO[0000] Client will be configured with QPS: 100.000000, Burst: 100
This runs an embedded etcd instance, wires it up with a k8s apiserver and a fake kubelet. It then proceeds to load the contents of the bundle back into this fake cluster.
An admin kubeconfig is generated at $HOME/.sim/admin.kubeconfig.
Within a few minutes, once the contents have been processed, users can use any k8s tooling of their choice to interact with this simulated cluster.
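For example, pointing kubectl at the generated kubeconfig is enough to start browsing (the queries below are just illustrations of what the loaded bundle makes available):

export KUBECONFIG=$HOME/.sim/admin.kubeconfig
kubectl get nodes
kubectl get crds | head
kubectl -n harvester-system get pods
kubectl -n longhorn-system get pods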