Grafana Alloy – Part 1: Replacing Promtail

For a long time, Promtail was the go-to tool for collecting and sending logs to Grafana Loki. As Promtail has officially entered its Long-Term Support (LTS) phase, the future of log collection now lies with Grafana Alloy, a single, unified agent built for logs, metrics, and traces.
Starting with SLES 15 SP7, Grafana Alloy is part of the official repositories, alongside the existing monitoring tools. The complete shift comes with SLES 16, where Alloy fully replaces the older collectors.

 

Why Grafana Alloy

Grafana Alloy offers a lot more flexibility. Its OTLP-compatibility (OpenTelemetry Protocol) prepares the agent for the future of logging, metrics, and tracing, while still maintaining connectivity with existing Prometheus and Loki backends.
This flexibility is achieved through a modular dataflow model, which is the core concept of Alloy.
A typical dataflow is characterized by the following sequence of components:

 

Sources → Processors → Writers

 

This means you can build customized pipelines by connecting individual components depending on your observability goals. Data can be captured, filtered, manipulated, and pushed to different backends. This makes Alloy much more flexible than Promtail and offers significant advantages: instead of running many different agents (Promtail, Node Exporter, etc.) on the monitored host, a single Alloy instance can cover a variety of services.
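For example, the same Alloy instance that ships journal logs could also take over the job of Node Exporter. Here is a minimal sketch of such a metrics pipeline, assuming a Prometheus-compatible backend reachable at the placeholder URL http://prometheus-host:9090/api/v1/write:

```alloy
// Source: built-in node-exporter-style host metrics,
// replacing a separately installed Node Exporter
prometheus.exporter.unix "default" { }

// Scrape the exporter's targets ...
prometheus.scrape "node" {
  targets    = prometheus.exporter.unix.default.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

// ... and push them to the backend (placeholder URL)
prometheus.remote_write "default" {
  endpoint {
    url = "http://prometheus-host:9090/api/v1/write"
  }
}
```

Note that this metrics pipeline follows the very same Sources → Processors → Writers pattern as the log pipeline discussed below.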

 
This first article will discuss the migration from Promtail to Alloy.

 

From Promtail to Alloy – How to Migrate

As already mentioned, Grafana Alloy is shipped starting with SLES 15 SP7. This means it can be installed directly from the SUSE repositories:

# zypper in alloy

 

The configuration file can be found under:

/etc/alloy/config.alloy

 
Grafana Alloy has a built-in convert command which helps you migrate to the new configuration style.
This utility is designed to translate your existing Promtail configuration into the new Alloy syntax. The command is easy to use: all you need is the Promtail config file and an output path.

alloy convert --source-format=promtail --output=<OUTPUT_CONFIG_PATH> /etc/loki/promtail.yaml

 

Since this automated conversion is performed on a best-effort basis, it is a good idea to familiarize yourself with the new Alloy configuration concept. So let us take a look at the configuration in detail.

 

Comparing the configurations

First of all, let us take a look at a typical Promtail configuration. In our example, we use a simple one that only captures the system’s journald logs:

/etc/loki/promtail.yaml:

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /var/lib/promtail/positions.yaml

# Writer (3)
clients:
  - url: http://loki-host:3100/loki/api/v1/push

# Source (1)
scrape_configs:
- job_name: journal
  journal:
    max_age: 24h
    labels:
      job: loki_messages

  # Processor (2)
  relabel_configs:
    - source_labels: ['__journal__systemd_unit']
      target_label: 'unit'

    - source_labels: ['__journal__hostname']
      target_label: 'host'

    - source_labels: ['__journal__pid']
      target_label: 'pid'

    - source_labels: ['__journal__kernel_device']
      target_label: 'kernel_device'

    - source_labels: ['__journal__priority']
      target_label: 'prio'
Although Promtail's sections don't map perfectly onto Alloy's dataflow parts, I've marked the three sections with comments as discussed above. Let us now look at each of the components and compare the Promtail config with the Alloy config.

 

Source

In our example we will read from the systemd journal.

 

On Promtail

In Promtail, the scrape_configs block configures how Promtail reads logs from the system.
# Source (1)
scrape_configs:
- job_name: journal
  journal:
    max_age: 24h
    labels:
      job: loki_messages

 

On Alloy

On Alloy the component loki.source.journal performs the same job.
// Source (1)
loki.source.journal "journal" {
  max_age="24h0m0s"
  relabel_rules=discovery.relabel.journal.rules
  forward_to=[loki.write.default.receiver]
  labels={
    job="loki_messages",
  }
}
However, two important additional attributes are defined here.
  • `relabel_rules` points to the relabel component (Processors).
  • `forward_to` points to the next component (Writers).
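This explicit chaining is what makes Alloy pipelines composable: an additional processing stage can be wired between source and writer simply by adjusting forward_to. A minimal sketch using the loki.process component (the drop expression is just an illustration):

```alloy
loki.source.journal "journal" {
  // send to the processor instead of directly to the writer
  forward_to = [loki.process.filter.receiver]
}

// drop noisy debug lines before they reach Loki
loki.process "filter" {
  stage.drop {
    expression = ".*DEBUG.*"
  }
  forward_to = [loki.write.default.receiver]
}
```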

 

Processors

The Processors part is used to relabel the systemd journal's internal labels into human-readable names.

 

On Promtail

In Promtail, this is done in the relabel_configs block, which is configured directly under the job_name within the scrape_configs block.
# Processor (2)
relabel_configs:
- source_labels: ['__journal__systemd_unit']
  target_label: 'unit'

- source_labels: ['__journal__hostname']
  target_label: 'host'

- source_labels: ['__journal__pid']
  target_label: 'pid'

- source_labels: ['__journal__kernel_device']
  target_label: 'kernel_device'

- source_labels: ['__journal__priority']
  target_label: 'prio'

 

On Alloy

Alloy does this in a separate component (see relabel_rules=discovery.relabel.journal.rules above).
// Processor (2)
discovery.relabel "journal" {
  targets=[]

  rule {
    source_labels=["__journal__systemd_unit"]
    target_label="unit"
  }

  rule {
    source_labels=["__journal__hostname"]
    target_label="host"
  }

  rule {
    source_labels=["__journal__pid"]
    target_label="pid"
  }

  rule {
    source_labels=["__journal__kernel_device"]
    target_label="kernel_device"
  }

  rule {
    source_labels=["__journal__priority"]
    target_label="prio"
  }
}

 

Writers

The Writers component is the final stage in the dataflow pipeline and also the easiest component to configure. It simply requires the URL of the backend you are sending data to. In our case, the Loki URL.

 

On Promtail

# Writer (3)
clients:
- url: http://loki-host:3100/loki/api/v1/push

 

On Alloy

// Writer (3)
loki.write "default" {
  endpoint{
    url="http://loki-host:3100/loki/api/v1/push"
  }
  external_labels = {}
}
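In real deployments the Loki endpoint is often protected. loki.write supports authentication inside the endpoint block; here is a sketch with basic auth, where the username and the password file path are placeholders:

```alloy
loki.write "default" {
  endpoint {
    url = "http://loki-host:3100/loki/api/v1/push"

    // placeholder credentials; reading the password from a
    // file keeps the secret out of the config itself
    basic_auth {
      username      = "alloy"
      password_file = "/etc/alloy/loki.password"
    }
  }
  external_labels = {}
}
```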

 

The complete Alloy config

// Source (1)
loki.source.journal "journal" {
  max_age="24h0m0s"
  relabel_rules=discovery.relabel.journal.rules
  forward_to=[loki.write.default.receiver]
  labels={
    job="loki_messages",
  }
}

// Processor (2)
discovery.relabel "journal" {
  targets=[]
  rule {
    source_labels=["__journal__systemd_unit"]
    target_label="unit"
  }

  rule {
    source_labels=["__journal__hostname"]
    target_label="host"
  }

  rule {
    source_labels=["__journal__pid"]
    target_label="pid"
  }

  rule {
    source_labels=["__journal__kernel_device"]
    target_label="kernel_device"
  }

  rule {
    source_labels=["__journal__priority"]
    target_label="prio"
  }
}

// Writer (3)
loki.write "default" {
  endpoint{
    url="http://loki-host:3100/loki/api/v1/push"
  }
  external_labels = {}
}

 

Conclusion

Both configurations above are very basic. They simply capture logs from the systemd journal, rename some labels, and push them to Loki. In reality, configurations are often more complex, and the Alloy migration tool may not work as smoothly as in our example. However, understanding the main concept makes it much easier to migrate and to write more complex configurations. The important point is that all components must be explicitly chained together, forming a continuous dataflow pipeline: every component's output must point directly to the input of the next component in the sequence.

 

Alloy can do so much more, and collecting logs is only a small part of it. Watch out for the next article in this series!

 

Links to documentation

Please also take a look at the official Grafana Alloy documentation: https://grafana.com/docs/alloy/latest/
As mentioned, Promtail is deprecated: it has been in LTS since February 2025 and is expected to reach end of life (EOL) on March 2, 2026.
This is documented in the following link: https://grafana.com/docs/loki/latest/send-data/promtail/

 
