Self Hosting

Observability Stack on K3s: Grafana + Prometheus + Loki + Alloy

July 1, 2025
30 min read
intermediate
kubernetes
grafana
prometheus
loki
monitoring
observability
self-hosted

Cluster Provisioning and Alert Manager through Grafana

Your cluster is running, your pods are spinning, and you have absolutely no idea what’s happening inside. Welcome to the blind spot of self-hosting. The Grafana-Prometheus-Alloy-Loki stack is your escape from this darkness—a unified observability platform that collects metrics, aggregates logs, and screams at you before your users do.

Prometheus scrapes and stores time-series metrics from your nodes, pods, and applications. Loki does the same for logs, but without indexing the content (making it lightweight and cheap). Alloy acts as the collection agent, shipping logs from your pods to Loki. Grafana ties everything together with dashboards, queries, and alerting rules.

What is Alloy?

Alloy is Grafana's unified telemetry collector—the successor to Promtail, Grafana Agent, and a dozen other single-purpose collectors. Instead of running separate agents for logs, metrics, and traces, Alloy handles all three through a single binary with a declarative configuration language. It replaced Promtail in early 2024, so if you're following older tutorials that mention Promtail, Alloy is the modern equivalent with broader capabilities.

Why not just use managed observability?

You absolutely could. Datadog, New Relic, Grafana Cloud—they all work beautifully. They also charge per host, per metric, per GB ingested, per alert, and occasionally per dream you had about Kubernetes. A modest 3-node cluster with reasonable log volume can easily cost $200-500/month in managed observability. This stack? Zero. The only cost is your time and the compute resources you're already paying for.

Rather configure than operate?

This stack takes 30 minutes to deploy and a lifetime to maintain. ZipOps clusters ship with Prometheus, Loki, and Grafana pre-configured—same stack, zero YAML. If observability is a means to an end, not the end itself, see what we're building.

In production Kubernetes environments, this stack handles millions of metrics and gigabytes of logs daily. It’s what powers observability at companies that decided their cloud bills were getting ridiculous.

Init

What you need

Before diving in, ensure you have:

  • A running k3s cluster from the Hetzner Terraform setup

  • kubectl configured against your cluster with a working kubeconfig file

  • [optional] a domain for accessing Grafana without port-forwarding (strongly recommended)

How the Stack Fits Together

The observability pipeline flows in two parallel streams:

Metrics path: Prometheus scrapes /metrics endpoints from your pods and nodes every 15-30 seconds, stores time-series data locally, and exposes it to Grafana for dashboards and alerting rules.

Logs path: Alloy runs as a DaemonSet on every node, tails container logs from /var/log/pods, enriches them with Kubernetes metadata (namespace, pod name, labels), and ships them to Loki. Loki indexes only metadata—not log content—keeping storage costs low.

Grafana sits at the query layer, pulling from both Prometheus and Loki to correlate metrics spikes with log entries. When CPU hits 90%, you can jump directly to logs from that pod at that timestamp.
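
For example, when a CPU alert fires you might pair a PromQL panel with a LogQL panel over the same time range (the namespace and pod names here are illustrative; swap in your own workload labels):

PromQL
rate(container_cpu_usage_seconds_total{namespace="myapp", pod=~"myapp-.*"}[5m])

LogQL
{namespace="myapp", pod=~"myapp-.*"} |= "error"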

[Diagram: observability data flow]

Apply

We’re deploying four Kubernetes resources: a namespace for isolation, then three HelmChart resources that leverage k3s’s built-in Helm controller. This approach means no local Helm CLI required—just apply the manifests and let k3s handle the rest.

Namespace

Every good monitoring stack deserves its own room. This namespace isolates all observability components from your application workloads.

namespace.yaml

YAML
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    name: monitoring

Nothing to edit here—monitoring is the conventional namespace name that other tools expect. Apply it:

Shell
kubectl apply -f namespace.yaml

Grafana + Prometheus Stack

The kube-prometheus-stack chart is an all-in-one deployment that bundles Prometheus, Grafana, Alertmanager, and a collection of pre-configured dashboards and alerting rules. This is where most of your configuration lives.

The serviceMonitorSelector settings below tell Prometheus to scrape all ServiceMonitors across all namespaces—not just those created by this Helm chart. Without this, your application metrics won’t be discovered.
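
The full values file runs long, so the sketch below is trimmed to the HelmChart wrapper and the fields this section calls out. Treat the exact value paths and chart version as assumptions to verify against the kube-prometheus-stack chart you pin; it is an outline of the shape, not a complete drop-in config.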

prometheus-stack.yaml

YAML
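# Trimmed sketch: field paths assume the prometheus-community/kube-prometheus-stack
# chart. Verify them against the chart version you actually pin.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: kube-prometheus-stack
  namespace: kube-system        # the conventional home for k3s HelmChart resources
spec:
  repo: https://prometheus-community.github.io/helm-charts
  chart: kube-prometheus-stack
  targetNamespace: monitoring
  valuesContent: |-
    prometheus:
      prometheusSpec:
        # Scrape ServiceMonitors from every namespace, not just this release's
        serviceMonitorSelectorNilUsesHelmValues: false
        podMonitorSelectorNilUsesHelmValues: false
        storageSpec:
          volumeClaimTemplate:
            spec:
              storageClassName: longhorn   # change to match your provisioner
              resources:
                requests:
                  storage: 10Gi            # size to your retention needs
    grafana:
      adminPassword: ChangeMe123!          # initial password only
      grafana.ini:
        server:
          root_url: https://grafana.example.com   # your domain, or drop if port-forwarding
    # k3s runs these as embedded processes, so there is nothing to scrape
    kubeControllerManager:
      enabled: false
    kubeScheduler:
      enabled: false
    kubeProxy:
      enabled: false
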
Lines requiring your attention
  • storageClassName: longhorn — change if you're using a different storage provisioner
  • storage: 10Gi — adjust based on cluster size and retention period
  • adminPassword: ChangeMe123! — this is the initial password only (more on this below)
  • root_url: https://grafana.example.com — set your actual domain or remove if using port-forward

The node-exporter configuration looks aggressive with those runAsUser: 0 and privilege escalation settings. This is intentional—node-exporter needs deep access to host metrics that unprivileged containers cannot reach. The security context is scoped appropriately.

k3s doesn’t expose kube-controller-manager or kube-scheduler metrics the way kubeadm clusters do. The defaultRules section disables alerts that would otherwise fire constantly on non-existent endpoints. The disabled components (kubeProxy, kubeControllerManager, kubeScheduler) reflect k3s architecture, where these run as embedded processes rather than standalone pods. Enabling them would just generate scrape errors.

Alloy Log Collector

Alloy replaces the older Promtail agent for log collection. It collects container logs from every node and ships them to Loki with a consistent set of labels for filtering.
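
As with the Prometheus stack, the sketch below is trimmed to the HelmChart shape and a handful of values. It assumes Grafana's k8s-monitoring chart (which deploys Alloy for log collection); the field names and the in-cluster Loki URL are assumptions to check against the chart version you deploy.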

alloy.yaml

YAML
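# Trimmed sketch: value paths assume the grafana/k8s-monitoring chart. Confirm them
# (and the Loki push URL) against the chart version you deploy.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: alloy-logs
  namespace: kube-system
spec:
  repo: https://grafana.github.io/helm-charts
  chart: k8s-monitoring
  targetNamespace: monitoring
  valuesContent: |-
    cluster:
      name: my-cluster                     # purely a label attached to every log line
    destinations:
      - name: loki
        type: loki
        url: http://loki.monitoring.svc.cluster.local:3100/loki/api/v1/push
    podLogs:
      enabled: true
      labelsToKeep:                        # keep this list short to control cardinality
        - app_kubernetes_io_name
        - namespace
        - pod
        - container
      extraDiscoveryRules: |
        rule {
          source_labels = ["__meta_kubernetes_pod_label_app"]
          target_label  = "service_name"
        }
    alloy-logs:
      enabled: true
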
Lines requiring your attention
  • name: my-cluster — purely a label, pick whatever identifies this cluster to you

The labelsToKeep array is curated to balance queryability with cardinality. High-cardinality labels (like pod UIDs) would explode Loki’s index size. The extraDiscoveryRules ensure your apps get a consistent service_name label regardless of whether they use app or app.kubernetes.io/name in their manifests.

Loki Log Storage

Loki stores logs in a cost-efficient way by only indexing metadata (labels), not log content. This single-binary deployment mode is perfect for small-to-medium clusters.

Loki’s query performance depends heavily on caching. The chunksCache and resultsCache sections below store recent chunks and query results in memory. The 512Mi allocation handles moderate query loads—increase if dashboards feel sluggish.
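
The sketch below again trims the HelmChart down to the values discussed here. Field paths follow the grafana/loki chart's single-binary mode; treat them, and the schema and storage sections omitted for brevity, as assumptions to verify before applying.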

loki.yaml

YAML
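# Trimmed sketch: value paths assume the grafana/loki chart in SingleBinary mode.
# A working install also needs loki.schemaConfig and loki.storage settings omitted here.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: loki
  namespace: kube-system
spec:
  repo: https://grafana.github.io/helm-charts
  chart: loki
  targetNamespace: monitoring
  valuesContent: |-
    deploymentMode: SingleBinary
    loki:
      auth_enabled: false
      commonConfig:
        replication_factor: 1
      limits_config:
        retention_period: 168h             # keep 7 days of logs
      compactor:
        retention_enabled: true            # actually delete chunks past retention
        delete_request_store: filesystem
    singleBinary:
      replicas: 1
      persistence:
        enabled: true
        size: 5Gi                          # log storage volume
        storageClass: longhorn             # change to match your provisioner
    chunksCache:
      enabled: true
      allocatedMemory: 512                 # MB; raise this if dashboards feel sluggish
    resultsCache:
      enabled: true
    # Components used only by the scalable deployment modes
    read:
      replicas: 0
    write:
      replicas: 0
    backend:
      replicas: 0
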
Lines requiring your attention
  • retention_period: 168h — how long logs are kept (7 days)
  • size: 5Gi — persistent volume size for log storage
  • storageClass: longhorn — change if using different storage provisioner

The SingleBinary deployment mode bundles all Loki components into one pod. For larger clusters (50+ nodes, heavy log volume), you’d switch to the SimpleScalable mode with separate read/write/backend replicas. But for most self-hosted scenarios, single binary keeps resource usage reasonable.

When to Scale Beyond SingleBinary

SingleBinary Loki works well for clusters under 50 nodes ingesting less than 100GB/day of logs. Beyond that, you'll hit memory pressure and query timeouts.

Signs you've outgrown SingleBinary:

  • Loki OOMKilled during large queries
  • Dashboard queries timing out after 30s
  • Ingestion lag visible in Alloy metrics

The upgrade path: Set deploymentMode: SimpleScalable and configure read, write, and backend replica counts. This separates ingestion from querying, letting you scale each independently. The config in this article is structured to make that transition straightforward—just uncomment the replica settings and set singleBinary.replicas: 0.
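
In values terms, the switch looks roughly like this (replica counts are illustrative starting points, and keep in mind that the scalable modes generally expect object storage rather than a local filesystem backend):

YAML
deploymentMode: SimpleScalable
singleBinary:
  replicas: 0
read:
  replicas: 2
write:
  replicas: 2
backend:
  replicas: 2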

Deploy the Stack

Apply everything in order:

Shell
kubectl apply -f namespace.yaml
kubectl apply -f prometheus-stack.yaml
kubectl apply -f alloy.yaml
kubectl apply -f loki.yaml
