
K3s Up and Running with k3sup

Deploying K3s on a dedicated server with k3sup in minutes — from bare metal to a working cluster, plus the practical next steps to make it useful.

Why K3s on Bare Metal

I migrated my blog and side projects from a cloud VM running docker-compose to a dedicated server running K3s. The economics were obvious: Hetzner’s server auction regularly has machines with 64GB RAM and enterprise SSDs for around 30 EUR/month. That is more compute than most people will ever need for personal infrastructure, and it costs less than a mid-tier VPS.

K3s is the right tool here. It is a single-binary Kubernetes distribution from Rancher that strips out the components you do not need for a small cluster (cloud controller, in-tree storage drivers, etc.) while keeping full Kubernetes API compatibility. One node, one binary, real Kubernetes.

Provisioning the Server

Hetzner delivered the server in about 4 minutes. I reinstalled with Ubuntu 24.04 LTS using their automated provisioning tools — straightforward, no surprises.

Deploying K3s with k3sup

k3sup (pronounced “ketchup”) handles the SSH-based install of K3s on remote machines. No Ansible playbooks, no manual SSH sessions. Just point it at a server and it does the rest.

brew install k3sup
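
If you are not on macOS, the k3sup project also documents a curl-based install script (as always, inspect anything before piping it to a shell):

```shell
# Fetch the k3sup binary via the project's install script,
# then move it onto your PATH
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/
```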

Then deploy:

export SERVER_IP=<your-server-ip>
export USER=root

k3sup install \
  --ip $SERVER_IP \
  --user $USER \
  --ssh-key ~/.ssh/my_secret_key \
  --k3s-extra-args '--disable traefik'

k3sup writes a kubeconfig file to your current directory. Point your local kubectl at it:

export KUBECONFIG=$(pwd)/kubeconfig
kubectl config use-context default
kubectl get node -o wide

You should see your node in Ready state. That is a working Kubernetes cluster.
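
Before deploying anything, it is worth confirming the system pods came up cleanly. A quick sanity check:

```shell
# Block until the node reports Ready, then inspect the bundled components
kubectl wait --for=condition=Ready node --all --timeout=120s
kubectl get pods -n kube-system
```

With Traefik disabled you should see something like coredns, metrics-server, and the local-path provisioner running, and no traefik pods.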

A Note on Container Runtimes

The original version of this post used --docker in the k3sup flags. K3s has since dropped Docker as a supported runtime — containerd is the default and the only sensible choice now. Containerd is lighter, faster, and has been the industry standard runtime for Kubernetes for years. If you are migrating from Docker workflows, your container images work exactly the same; only the runtime changed.
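
You can confirm which runtime the node is using straight from the node status (the version string below is illustrative):

```shell
# Prints the runtime the kubelet reports, e.g. containerd://1.7.x
kubectl get nodes \
  -o jsonpath='{.items[0].status.nodeInfo.containerRuntimeVersion}'
```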

Why I Disable Traefik

K3s ships with Traefik as the default ingress controller. I disable it with --disable traefik because I prefer to manage my own ingress stack. If you are fine with the bundled Traefik, drop that flag and you get an ingress controller out of the box.

What Comes Next

A bare K3s node is useful but not production-ready for hosting services. Here is what I set up immediately after the initial install.

cert-manager for TLS

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml

Then create a ClusterIssuer for Let’s Encrypt:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
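
With the issuer in place, any Ingress annotated with cert-manager.io/cluster-issuer gets a certificate requested and renewed automatically. A sketch, with the hostname and service name as placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - blog.example.com
      secretName: blog-tls   # cert-manager creates and renews this Secret
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog
                port:
                  number: 80
```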

Ingress Controller

I use ingress-nginx. Install via Helm:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace
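
On K3s, the bundled ServiceLB satisfies LoadBalancer Services by binding to the node's own IP, so the controller's Service should come up with your server's address as its external IP (ingress-nginx-controller is the chart's default Service name for this release name):

```shell
# EXTERNAL-IP should show the server's public address
kubectl get svc -n ingress-nginx ingress-nginx-controller
```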

Persistent Storage

K3s bundles the local-path provisioner by default, which is fine for a single-node setup. It creates PersistentVolumes backed by local disk. No additional setup required — just create PVCs and they work.
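
For example, a claim against the bundled local-path storage class (the name and size here are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blog-data
spec:
  accessModes:
    - ReadWriteOnce        # local-path volumes are single-node by nature
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
```

The backing volume is created lazily, when a pod first consumes the claim, under /var/lib/rancher/k3s/storage on the node.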

The Result

From a fresh Hetzner server to a working K3s cluster with TLS and ingress takes about 15 minutes. Most of that is waiting for Helm charts to pull images. The actual k3sup deployment is under 60 seconds.

For a homelab or personal infrastructure, this is the sweet spot: real Kubernetes semantics, minimal operational overhead, and a fraction of the cost of managed Kubernetes or cloud VMs.
