
Self-host K3s on Hetzner Cloud with Terraform: Under $25/month Production Kubernetes

December 18, 2025
20 min read
Easy
K8s
Hetzner
Self-host
Fundamentals

Managed Kubernetes costs 5-10x what raw compute costs. You’re paying for someone else to run kubectl for you.

After pricing out a basic cluster on AWS and GKE—a few private services, a couple public endpoints, nothing exotic—the estimates landed north of $200/month before traffic costs. That moved managed Kubernetes out of my budget immediately. So I built the stack myself on Hetzner Cloud. Here’s the exact setup.

This guide gets you a production-ready K8s cluster on Hetzner Cloud for under $25/month. You’ll own every layer—compute, networking, storage, ingress. Spin up a private music server, self-hosted cloud storage, a side project API—whatever you want, whenever you want, no permission required. No vendor lock-in. No surprise bills.

Difficulty: Easy – Time to deploy: ~20 minutes

Init

What You Need

Shell
# Install everything via Homebrew (macOS/Linux/WSL)
brew install terraform kubectl k9s hcloud packer coreutils
  • Hetzner Cloud account — Sign up / log in here

  • Terminal fluency — You live here now

  • 20 minutes — That’s it
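
Before going further, it's worth a quick sanity check that the toolchain landed on your PATH:

Shell
# Each command should print a version, not "command not found"
terraform version && kubectl version --client && hcloud version && packer version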

Following along?

The ZipOps newsletter publishes the next articles in this series: TLS automation, observability stack, and deploying real applications. No spam, just infrastructure guides for developers who'd rather own their stack.

Get Your API Key

Hetzner needs to trust your Terraform commands. Create a project in Hetzner Cloud Console, go to Security > API Tokens, generate a token with Read & Write permissions.

[Image: generating a Read & Write API token in the Hetzner Cloud Console]

This key is root access to your cloud. Treat it like a password.
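
Optionally, since the hcloud CLI is already installed, you can verify the token works before handing it to Terraform (the context name here is arbitrary):

Shell
# Paste the token when prompted; it lands in the CLI config, not your shell history
hcloud context create k3s-cluster
hcloud server list   # an empty list means the token works; an auth error means it doesn't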

Build Base Images

Hetzner spins up nodes from snapshots. This script creates them—a temporary VM builds the OS image, then self-destructs. Takes ~6 minutes. Costs pennies.

Bash cli

Shell
tmp_script=$(mktemp) && \
curl -sSL -o "${tmp_script}" https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/create.sh && \
chmod +x "${tmp_script}" && \
"${tmp_script}" && \
rm "${tmp_script}"

Fish cli

Shell
set tmp_script (mktemp) && \
curl -sSL -o "$tmp_script" https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/create.sh && \
chmod +x "$tmp_script" && \
bash "$tmp_script" && \
rm "$tmp_script"

Expected output

Shell
==> Builds finished. The artifacts of successful builds are:
--> hcloud.microos-x86-snapshot: 'OpenSUSE MicroOS x86 by Kube-Hetzner' (ID: *********)
--> hcloud.microos-arm-snapshot: 'OpenSUSE MicroOS ARM by Kube-Hetzner' (ID: *********)
⚠️ If the VM tier or location is unavailable

Sometimes the selected VM tier is no longer offered, or it isn't available in the predefined cluster location. After the process completes, double-check Server --> Snapshots and make sure you have exactly 2 images:

  • OpenSUSE MicroOS x86 by Kube-Hetzner
  • OpenSUSE MicroOS ARM by Kube-Hetzner

[Image: the two MicroOS snapshots listed in the Hetzner Console]

If something is missing, update kube.tf as well as hcloud-microos-snapshots.pkr.hcl, setting location and/or server_type to values that are actually available, then run the same script again.

If the process completed successfully and both snapshots are in your project, it's safe to delete the temporary files:

Shell
rm hcloud-microos-snapshots.pkr.hcl kube.tf

Generate SSH Keys

You need passwordless SSH access to your nodes. This is your backdoor when things break. Do not lose this key.

Shell
# Press Enter twice (no passphrase)
mkdir -p cred/terra && \
ssh-keygen -t ed25519 -f cred/terra/hetzner-key -C "your-email@example.com"
Guard This Key

Lose this key and you lose SSH access to your nodes. I've seen teams locked out of production because someone ran rm -rf in the wrong directory. Store it in a password manager. Back it up encrypted. This is your break-glass access.
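
One way to do that encrypted backup, assuming gpg is installed (the passphrase you choose becomes part of your break-glass secret):

Shell
# Symmetric-encrypt the private key, then move the .gpg file somewhere safe
gpg --symmetric --cipher-algo AES256 cred/terra/hetzner-key

# Restore it later with:
gpg --output hetzner-key --decrypt cred/terra/hetzner-key.gpg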

Your file tree now looks like this

Shell
project-folder/
└── cred/
    └── terra/
        ├── hetzner-key        # Private. Never share. Don't lose.
        └── hetzner-key.pub    # Public. Goes to Hetzner.

Apply

Define Your Cluster

This is where you decide: how many nodes, what size, which datacenter. The kube.tf file IS your infrastructure. Version control it.

📐 Quick Architecture Context

Control Plane Nodes — Run the API server, scheduler, and etcd (cluster state database). Lose etcd without backups = lose your cluster. Run 3 for production HA, 1 is fine for dev & budget projects.

Worker Nodes — Run your application pods. Scale these horizontally. One dying shouldn't cripple you.

Cilium — Replaces kube-proxy with eBPF. Faster networking, built-in ingress, egress control. This config is production-ready.

Longhorn — Distributed storage. Pods request volumes, Longhorn replicates across nodes. Node dies, data survives in volumes (a minimal claim example follows the diagram).

[Image: cluster architecture, control plane and worker nodes]
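
To make the Longhorn point concrete: once the cluster is up, a pod asks for replicated storage with nothing more than a claim. A minimal sketch (the claim name and size are placeholders; longhorn is the StorageClass name Longhorn installs by default):

Shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
EOF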

Create kube.tf in your project root
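
The full file from the kube-hetzner module runs a few hundred lines; below is a minimal sketch assuming that module's documented inputs (start from the project's kube.tf.example for the real thing). The server types, locations, node counts, and the native-routing CIDR here are assumptions to adapt.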

HCL
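# Minimal sketch: not the full file. Values marked "assumption" need adapting.
terraform {
  required_providers {
    hcloud = {
      source = "hetznercloud/hcloud"
    }
  }
}

variable "hcloud_token" {
  type      = string
  sensitive = true
}

provider "hcloud" {
  token = var.hcloud_token
}

module "kube-hetzner" {
  source = "kube-hetzner/kube-hetzner/hcloud"
  providers = {
    hcloud = hcloud
  }

  hcloud_token    = var.hcloud_token
  ssh_public_key  = file("cred/terra/hetzner-key.pub")
  ssh_private_key = file("cred/terra/hetzner-key")

  network_region = "eu-central" # assumption: match this to your chosen locations

  control_plane_nodepools = [
    { name = "control-plane", server_type = "cax11", location = "fsn1", labels = [], taints = [], count = 1 },
  ]
  agent_nodepools = [
    { name = "worker", server_type = "cax21", location = "fsn1", labels = [], taints = [], count = 1 },
  ]

  cni_plugin         = "cilium"
  disable_kube_proxy = true   # Cilium's kubeProxyReplacement takes over
  ingress_controller = "none" # Cilium's eBPF ingress replaces Traefik
  enable_longhorn    = true

  cilium_values = <<EOT
kubeProxyReplacement: true
routingMode: native
ipv4NativeRoutingCIDR: "10.0.0.0/8"
ingressController:
  enabled: true
  default: true
EOT
}

output "kubeconfig" {
  value     = module.kube-hetzner.kubeconfig
  sensitive = true
}

# A floating_ip output (referenced in the apply step below) depends on how you
# allocate the floating IP; it's omitted from this sketch.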
Why These Specific Cilium Settings?

This config isn't arbitrary. kubeProxyReplacement: true eliminates iptables overhead—measurable latency improvement under load. routingMode: native with ipv4NativeRoutingCIDR avoids VXLAN encapsulation between nodes. The ingressController block replaces Traefik (k3s default) with Cilium's eBPF-based ingress. I burned a month debugging packet drops before landing on this combination.

Server tier and location

The same rule applies as before: sometimes a tier is no longer available, or isn't offered in a given location. If that happens, update rows 8 to 10 (cluster location, control plane tier, and worker node tier), then apply again.

K8s will recover from the interrupted installation on its own.

Set Your API Token

Never hardcode secrets. Export the token so Terraform can read it without storing it in any file.

Shell
# bash
export TF_VAR_hcloud_token="your-api-token-here"

# fish
set -xg TF_VAR_hcloud_token "your-api-token-here"

Deploy

One command. Terraform reads your config, diffs against reality, and creates what’s missing.

Shell
terraform init && terraform apply

Review the plan. Type yes to confirm. Watch your cluster materialize.

Shell
Plan: 36 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
  Enter a value: yes

[Image: Hetzner Console while the nodes spin up]

~8 minutes later

Shell
Apply complete! Resources: 36 added, 0 changed, 0 destroyed.

Outputs:
kubeconfig = <sensitive>
floating_ip = "X.X.X.X"

Note your floating IP

This is your door to the internet; you'll use it in your ingress/gateway configuration as well as in your DNS setup.

If you ever lose track of it, just extract it again from Terraform:

Shell
terraform output floating_ip
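
Once you point a DNS A record at it, verifying takes two commands (app.example.com stands in for your own domain):

Shell
# Both should print the same address
dig +short app.example.com
terraform output -raw floating_ip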

Real-World Application

Connect to Your Cluster

kubectl needs credentials. Extract them from Terraform output.

Shell
mkdir -p cred/kubectl && \
terraform output -raw kubeconfig > cred/kubectl/kubeconfig

Point kubectl at it by setting the KUBECONFIG environment variable

bash

Shell
export KUBECONFIG="$(pwd)/cred/kubectl/kubeconfig"

fish

Shell
set -xg KUBECONFIG "$PWD/cred/kubectl/kubeconfig"

Verify you’re connected

Shell
kubectl get nodes

You should see your control plane and worker nodes in Ready state

Shell
NAME                STATUS   ROLES                       AGE     VERSION
control-plane-qeb   Ready    control-plane,etcd,master   5m45s   v1.31.14+k3s1
worker-rqk          Ready    <none>                      5m11s   v1.31.14+k3s1
Want the cluster without the Terraform?

You just provisioned infrastructure that takes most teams a sprint to configure. ZipOps runs this exact stack—Hetzner, Cilium, Longhorn—but you deploy with a simple ./zipops instead of terraform apply. If you need this done quickly: see what we're building

Explore with K9s

Raw kubectl works. K9s is faster. It’s a terminal UI that lets you navigate resources in real-time.

Shell
k9s

Type :pods to see running pods. :services for services. :nodes for node status. This is your K8s IDE.

[Image: K9s browsing the test cluster's pods]


Deploy Your First App

Theory is worthless without proof. Let’s put nginx on your cluster and hit it from a browser.
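
A minimal smoke test, assuming the Cilium ingress class from the kube.tf sketch above (the names are placeholders):

Shell
# Run nginx and give it an in-cluster service
kubectl create deployment nginx --image=nginx:alpine
kubectl expose deployment nginx --port=80

# Route external traffic to it through the Cilium ingress
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: cilium
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
EOF

# Then open http://<your-floating-ip>/ in a browser; the nginx welcome page is your proof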

