Unlocking FinOps: Integrating OpenCost with SUSE Observability
IT Observability has become a strategic pillar that drives efficiency, resilience, and security across the organization, as highlighted in our blog post, “Observability Across DevOps, SRE, SecOps, & FinOps” by Vishal Ghariwala. In today’s complex IT landscape, integrating observability is crucial for optimizing FinOps operations and ensuring your cloud spending is as efficient as possible.
Cloud costs are continuously rising, making their optimization paramount. That’s where tools like OpenCost become indispensable. OpenCost, an open-source and vendor-neutral project, offers a real-time solution for measuring and allocating infrastructure and container costs in Kubernetes environments. By combining the power of SUSE Observability with OpenCost, you gain unprecedented insight into your expenditures, enabling smarter decisions and significant savings.
Now that we’ve grasped the importance of merging observability with cost management, let’s delve into the practical side. Our SUSE Observability already provides a unified platform for metrics, logs, and traces, offering comprehensive visibility into your environments. Integrating OpenCost will extend this visibility, adding detailed, actionable cost data directly within your observability dashboard. In the following sections, we’ll provide a step-by-step guide on how to implement this integration, empowering your FinOps team with the data needed to optimize your cloud resources and spending.
Before You Begin
Important: Before proceeding with the installation, ensure you have proper access to the Application Collection catalog at https://apps.rancher.com. Log in with your user account and verify your access. If you don’t have access, please contact SUSE support.
To check for access, after signing in, click on your profile picture at the top right, then navigate to Settings -> Profile. You must have the “Prime” entitlement under your organization to successfully install both SUSE Observability and OpenCost.
On the same page, go to Settings -> Service Accounts. If you don’t already have a service account created for your Organization, please create one here.
Crucial: Make sure to save your username and the service account token in a secure place. This token cannot be retrieved again once created; you’d need to delete and create a new service account if it’s lost!
Using Application Collection
The Application Collection is a curated set of artifacts from SUSE, built with supply chain security best practices in mind. It’s the recommended source for all components used within Rancher Prime. Leveraging the Application Collection not only streamlines your installation processes by offering pre-validated artifacts but also significantly enhances your operational security by minimizing the risk of compromised software. It’s a critical prerequisite for gaining access to and deploying key solutions like SUSE Observability and OpenCost, ensuring that your cloud-native environments are built on a foundation of integrity and performance.
To leverage it, you’ll need to create a Kubernetes Secret object in the namespace where you plan to deploy your applications. This secret must contain the necessary credentials. Without this, you’ll encounter errors when trying to download referenced images.
Important: Kubernetes requires that Secret objects be defined in the same namespace where they will be used for each deployment. You cannot reference a Secret from another namespace.
For example, if you plan to deploy applications within an opencost namespace, here’s how you’d set it up:
First, create the opencost namespace if it doesn’t already exist:
kubectl create namespace opencost
Next, create the shared secret to access the Application Collection. All subsequent component installations that reference this namespace will use this secret:
kubectl create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-username=<USERNAME FROM APP COLLECTION SERVICE ACCOUNT> --docker-password=<SERVICE ACCOUNT TOKEN FROM APP COLLECTION> -n opencost
Then, log in to Helm’s registry:
helm registry login dp.apps.rancher.io -u <USERNAME FROM APP COLLECTION SERVICE ACCOUNT> -p <SERVICE ACCOUNT TOKEN FROM APP COLLECTION>
With these steps complete, you should be ready to start deploying Helm charts from the Application Collection!
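As a quick sanity check (assuming kubectl is already pointed at the target cluster), you can confirm the pull secret exists and has the expected type before deploying anything:

```shell
# Confirm the pull secret was created in the opencost namespace.
# The fallback message keeps this snippet harmless if kubectl is not
# yet configured for the cluster.
kubectl get secret application-collection -n opencost \
  -o jsonpath='{.type}' 2>/dev/null \
  || echo "kubectl is not configured for this cluster yet"
```

A correctly created registry secret reports the type kubernetes.io/dockerconfigjson.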
Important Note on Image Pull Errors: All Helm charts within the Application Collection expect the credentials to be stored in a Secret object named application-collection in the same namespace. If you encounter errors like “ErrImagePull” or “ImagePullBackOff,” you can specify which Secret to use in your Helm command with an additional parameter:
--set "global.imagePullSecrets[0].name=application-collection"
Alternatively, if you’re editing a values YAML file, add the following section (note that each entry is an object with a name key, matching the --set form above):
global:
  imagePullSecrets:
    - name: application-collection
Installing Rancher Prime
Important: Rancher must be installed on the “local” cluster ONLY; run the following commands on a master node of that cluster.
Install cert-manager
On the master node of your “local” Kubernetes cluster, execute the following commands:
kubectl create namespace cert-manager
kubectl create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-username=<USERNAME FROM APP COLLECTION SERVICE ACCOUNT> --docker-password=<SERVICE ACCOUNT TOKEN FROM APP COLLECTION> -n cert-manager
helm registry login dp.apps.rancher.io -u <USERNAME FROM APP COLLECTION SERVICE ACCOUNT> -p <SERVICE ACCOUNT TOKEN FROM APP COLLECTION>
helm upgrade --install cert-manager oci://dp.apps.rancher.io/charts/cert-manager -n cert-manager --set "global.imagePullSecrets[0].name=application-collection" --set crds.enabled=true
After a short while, verify that cert-manager is successfully deployed:
kubectl get pods --namespace cert-manager
Install Rancher
On the master node of your “local” cluster, run the following:
helm repo add rancher-prime https://charts.rancher.com/server-charts/prime
helm repo update
kubectl create namespace cattle-system
kubectl create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-username=<USERNAME FROM APP COLLECTION SERVICE ACCOUNT> --docker-password=<SERVICE ACCOUNT TOKEN FROM APP COLLECTION> -n cattle-system
helm install rancher rancher-prime/rancher --namespace cattle-system --set hostname=<FQDN of your Rancher management node> --set bootstrapPassword=admin
Monitor and wait for the rollout to complete:
kubectl -n cattle-system rollout status deploy/rancher
Once the rollout is complete, open your web browser and navigate to the FQDN you specified for your Rancher management node. Supply the bootstrap password (in this case, “admin”) to finish the setup.
Import Your Downstream Cluster into Rancher
From your Rancher main screen, click on “Home” (the house icon). At this stage, you should only see the “local” cluster.
- Click on “Import Existing“.
- Select “Generic” and name the cluster “downstream”.
- Click “Create“.
- Follow the instructions on the “Registration” page and run the provided commands on a master node of your downstream cluster.
The cluster will be imported and become available in the Rancher UI in a few minutes.
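To confirm the registration from the downstream side (assuming your kubeconfig points at the downstream cluster), you can check that the Rancher agent pods are running; the label selector below matches the agent deployment Rancher creates during import:

```shell
# The cattle-cluster-agent deployment appears in cattle-system once the
# registration command has been applied; its pods should reach Running state.
kubectl -n cattle-system get pods -l app=cattle-cluster-agent 2>/dev/null \
  || echo "kubectl is not configured for the downstream cluster yet"
```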
Installing SUSE Observability
Before Starting
Please ensure you meet the following requirements:
- A resolvable DNS entry for your Observability main UI ingress (e.g., suse-observability.mydomain).
- A resolvable DNS entry for your OpenTelemetry collector ingress (e.g., suse-observability-otlp.mydomain).
- To avoid errors related to Docker sockets, ensure no file or directory named /var/run/docker.sock exists on the host OS of each node. Otherwise, the node-agent might get confused and not use the correct RKE2/containerd socket.
- The promtail pod might fail due to the number of inotify instances on some host OSes. To prevent this, configure a higher number of inotify instances via sysctl:

# Edit the file:
sudo vi /etc/sysctl.d/50-inotify-promtail.conf
# Add the following line:
fs.inotify.max_user_instances = 512
# Apply the changes:
sudo sysctl -p /etc/sysctl.d/50-inotify-promtail.conf
# Verify the change:
sysctl fs.inotify.max_user_instances
You should see the output:
fs.inotify.max_user_instances = 512
Installation
Add the SUSE Observability Helm repository:
helm repo add suse-observability https://charts.rancher.com/server-charts/prime/suse-observability
helm repo update
Create the values files:
export VALUES_DIR=.
helm template --set license='<SUSE Observability License key>' --set baseUrl='<FQDN for your Observability main UI>' --set sizing.profile='10-nonha' suse-observability-values suse-observability/suse-observability-values --output-dir $VALUES_DIR
In this example, we’re using the smallest profile for a standalone Observability server in a non-HA configuration. Refer to the official Observability documentation for more sizing options.
Run the installation:
helm upgrade --install \
  --namespace suse-observability \
  --create-namespace \
  --values $VALUES_DIR/suse-observability-values/templates/baseConfig_values.yaml \
  --values $VALUES_DIR/suse-observability-values/templates/sizing_values.yaml \
  suse-observability suse-observability/suse-observability
Creating Ingresses for SUSE Observability with Valid Certificates
For the recently imported downstream cluster, you need a cert-manager installation as well. Return to the “Install cert-manager” section above and repeat those steps on this cluster.
First, create the cert-manager issuer. This example uses Let’s Encrypt for production certificates. Remember to replace <YOUR EMAIL ADDRESS> with your actual email.
# cat issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <YOUR EMAIL ADDRESS>
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
Create the Issuer object:
kubectl apply -n suse-observability -f issuer.yaml
You should see output similar to: issuer.cert-manager.io/letsencrypt-prod created
Verify that it’s been created correctly:
kubectl describe issuer letsencrypt-prod -n suse-observability
Under the “Status:” section, you should see:
Reason:  ACMEAccountRegistered
Status:  True
Type:    Ready
Now, let’s create the main UI ingress. Note the annotations, which are essential for automatic certificate creation and for allowing larger POST sizes. Replace <FQDN to the Observability UI> with your actual FQDN.
# cat observability-ui.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
  name: observability-ui
  namespace: suse-observability
spec:
  ingressClassName: nginx
  rules:
    - host: <FQDN to the Observability UI>
      http:
        paths:
          - backend:
              service:
                name: suse-observability-router
                port:
                  number: 8080
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - <FQDN to the Observability UI>
      secretName: observability-tls-secret
Apply the Ingress:
kubectl apply -n suse-observability -f observability-ui.yaml
If everything deploys correctly, you should now be able to access your SUSE Observability UI via your web browser by navigating to its FQDN.
Check if the certificate has been successfully issued:
kubectl get certificate -n suse-observability
Installing the Observability Agent
First, retrieve the administrator password from the suse-observability-values/templates/baseConfig_values.yaml file. You’ll usually find it mentioned in the last comment within that file.
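A quick way to surface that comment (assuming the values were generated into the current directory, as in the helm template step earlier) is to grep the file for the password hint:

```shell
# Show any line mentioning the generated admin password; the path matches
# the --output-dir used in the "helm template" step above.
grep -i "password" ./suse-observability-values/templates/baseConfig_values.yaml 2>/dev/null \
  || echo "values file not found - re-run the helm template step first"
```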
Access the FQDN of your SUSE Observability ingress and authenticate with the username “admin” and the password you just retrieved.
Open the menu at the top-left corner, select “StackPacks“, then select “Kubernetes“. Click on the “Install” button and follow the provided instructions. You’ll find a Helm command there to install the agent to your cluster with the proper API key and values.
Installing OpenCost
Create OpenCost Namespace
If you already created the opencost namespace in the Application Collection example above, you can skip this step:
kubectl create namespace opencost
Create Application Collection Secret and Install OpenCost
Create the application collection secret inside the opencost namespace and then install OpenCost using Helm:
kubectl create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-username=<USERNAME FROM APP COLLECTION SERVICE ACCOUNT> --docker-password=<SERVICE ACCOUNT TOKEN FROM APP COLLECTION> -n opencost
helm registry login dp.apps.rancher.io -u <USERNAME FROM APP COLLECTION SERVICE ACCOUNT> -p <SERVICE ACCOUNT TOKEN FROM APP COLLECTION>
helm upgrade --install opencost oci://dp.apps.rancher.io/charts/opencost --version 2.1.2 -n opencost --set "global.imagePullSecrets[0].name=application-collection"
Installing OpenCost Exporter
Install the OpenCost exporter (the target namespace must exist first):
kubectl create namespace opencost-exporter
kubectl apply --namespace opencost-exporter -f https://raw.githubusercontent.com/opencost/opencost/develop/kubernetes/exporter/opencost-exporter.yaml
Installing Prometheus
Prometheus is a prerequisite for OpenCost, as OpenCost relies on Prometheus for scraping metrics and data storage.
To mirror your Prometheus metrics in SUSE Observability, you need to look up the API key used to send metrics into SUSE Observability. The API key can be found in the description of the installed Kubernetes StackPack in SUSE Observability. Save your <SUSE OBSERVABILITY API KEY> for the next step. You can find more details in the SUSE Observability docs.
Before installing Prometheus, create a configuration file named values-prometheus.yaml with the following content. Remember to replace <SUSE OBSERVABILITY API KEY> with your actual SUSE Observability API key.
vim values-prometheus.yaml
alertmanager:
  enabled: false
extraScrapeConfigs: |-
  - job_name: opencost
    honor_labels: true
    scrape_interval: 1m
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    dns_sd_configs:
      - names:
          - opencost.opencost
        type: 'A'
        port: 9003
prometheus-pushgateway:
  enabled: false
prometheus:
  prometheusSpec:
    remoteWrite:
      - url: http://suse-observability-router.suse-observability.svc.cluster.local:8080/receiver/prometheus/api/v1/write
        basicAuth:
          username: apikey
          password: "<SUSE OBSERVABILITY API KEY>"
For the installation of Prometheus, use the following command:
helm install prometheus --repo https://prometheus-community.github.io/helm-charts prometheus \
--namespace prometheus-system --create-namespace \
--set prometheus-pushgateway.enabled=false \
--set alertmanager.enabled=false \
-f values-prometheus.yaml
This command will install Prometheus in the prometheus-system
namespace with settings tailored for use with OpenCost. You can also configure your own Prometheus instance if you prefer; refer to the OpenCost documentation on “Providing your own Prometheus” for more details.
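Before moving on, it is worth checking (assuming kubectl access to the cluster) that the Prometheus workloads came up in the new namespace:

```shell
# The chart deploys prometheus-server plus supporting components;
# all pods in the namespace should reach Running state.
kubectl get pods -n prometheus-system 2>/dev/null \
  || echo "kubectl is not configured for this cluster yet"
```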
Verifying Your OpenCost and Prometheus Integration
After successfully installing OpenCost, the opencost-exporter, and prometheus-server with remote-write configured to point to your SUSE Observability server, verify the setup with the following steps:
Configure Ingresses
In the Rancher Prime UI, navigate to Service Discovery > Ingresses in your Kubernetes environment and create the following Ingresses:
- OpenCost Exporter Ingress:
  - Path: /metrics
  - Target Service: opencost
  - Port: 9003
- Prometheus Server Ingress:
  - Path: /
  - Target Service: prometheus-server
  - Port: 80
- SUSE Observability Ingress:
  - Path: /
  - Target Service: suse-observability-router
  - Port: 8080
Confirm OpenCost Metrics in Prometheus
Access the Prometheus Server Ingress you just created. In the query interface, search for a metric sent by OpenCost, such as container_cpu_allocation{job="opencost"}.
(For a complete list of all OpenCost metrics, refer to the official OpenCost documentation).
If this metric returns data, it confirms that OpenCost is successfully sending metrics to Prometheus.
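If you prefer the command line over the web UI, the same check can be done against the Prometheus HTTP API. The hostname below is a placeholder for whatever FQDN you assigned to the Prometheus Server Ingress:

```shell
# Query the instant value of an OpenCost metric via the Prometheus HTTP API.
# <PROMETHEUS INGRESS FQDN> is a placeholder for your own ingress hostname;
# the fallback keeps the snippet harmless if Prometheus is unreachable.
curl -sG "http://<PROMETHEUS INGRESS FQDN>/api/v1/query" \
  --data-urlencode 'query=container_cpu_allocation{job="opencost"}' \
  || echo "Prometheus is not reachable at that hostname"
```

A non-empty "result" array in the returned JSON confirms the metric is present in Prometheus.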
Validate Prometheus Remote Write to SUSE Observability
To ensure that Prometheus is forwarding OpenCost metrics to SUSE Observability via remote_write, access the SUSE Observability Ingress you created.
In the SUSE Observability UI, from the “sandwich menu” (three horizontal bars in the top-left corner), click on “Metrics“. In the “Query” field, search for the same example metric: container_cpu_allocation{job="opencost"}.
If this query returns information, it confirms that the remote export was successful, and your OpenCost metrics are now flowing into SUSE Observability.
Next Steps
At this point, you can access all the metrics collected by OpenCost directly through the SUSE Observability UI.
The next steps involve configuring Monitors with your desired rules and metrics. This will enable real-time observability and allow you to set up customized alerts, ensuring you’re always on top of your cloud cost optimization efforts.