How to Self-Host k0s with Docker

What Is k0s?

k0s is a lightweight, CNCF-certified Kubernetes distribution packaged as a single binary with zero external dependencies. Created by Mirantis, it bundles the control plane plus etcd, CoreDNS, kube-proxy, Metrics Server, and Konnectivity into that one binary. It is a self-hosted alternative to managed Kubernetes services like EKS, GKE, and AKS, giving you a production-grade cluster on your own hardware for free.

Prerequisites

  • A Linux server (Ubuntu 22.04+ recommended)
  • Docker and Docker Compose installed
  • 1 GB of free RAM minimum (2 GB+ recommended for controller + worker)
  • 2 GB of free disk space
  • Root/sudo access (privileged mode required for nested containers)
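A quick way to sanity-check the RAM and disk requirements above (a minimal sketch; the thresholds mirror this guide's recommendations, and the paths assume a standard Linux layout):

```shell
#!/bin/sh
# Rough prerequisite check; thresholds follow the recommendations above.
avail_mb=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
disk_mb=$(df -Pm / | awk 'NR==2 {print $4}')
echo "Available RAM: ${avail_mb} MB (1024+ recommended)"
echo "Free disk on /: ${disk_mb} MB (2048+ recommended)"
[ "$avail_mb" -ge 1024 ] || echo "WARNING: less than 1 GB of RAM available"
[ "$disk_mb" -ge 2048 ] || echo "WARNING: less than 2 GB of free disk"
```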

Docker Compose Configuration

k0s is a Kubernetes distribution, not a typical web app. Running it in Docker is best suited for testing, development, and single-node homelabs. For production multi-node clusters, the binary installation method (covered below) is recommended.

Create a docker-compose.yml file:

services:
  k0s-controller:
    image: docker.io/k0sproject/k0s:v1.32.1-k0s.0
    container_name: k0s-controller
    hostname: k0s-controller
    privileged: true  # Required for kubelet to manage pods
    restart: unless-stopped
    volumes:
      - k0s-data:/var/lib/k0s      # Cluster state and etcd data
      - pods-log:/var/log/pods      # Pod logs
    tmpfs:
      - /run                         # Runtime data
    ports:
      - "6443:6443"                  # Kubernetes API server
    command: ["k0s", "controller", "--single"]  # Single-node mode (controller + worker)

volumes:
  k0s-data:
  pods-log:

The --single flag runs both controller and worker in one container — ideal for homelabs.

Start the cluster:

docker compose up -d

Initial Setup

After the container starts, wait 30–60 seconds for all internal components to initialize.
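Rather than sleeping for a fixed interval, you can poll until the controller responds. A small sketch (the wait_for helper and its retry values are hypothetical; the docker exec check uses the container name from the compose file above):

```shell
#!/bin/sh
# wait_for <command> <attempts> <sleep-seconds>:
# retry the command until it succeeds or attempts run out.
wait_for() {
  i=0
  while [ "$i" -lt "$2" ]; do
    if sh -c "$1" >/dev/null 2>&1; then return 0; fi
    i=$((i + 1))
    sleep "$3"
  done
  return 1
}

# Poll k0s status inside the container for up to ~2 minutes:
# wait_for "docker exec k0s-controller k0s status" 24 5 && echo "k0s is up"
```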

Get your kubeconfig to interact with the cluster:

# Extract kubeconfig from the running container
mkdir -p ~/.kube
docker exec k0s-controller k0s kubeconfig admin > ~/.kube/config

# Verify the cluster is running
kubectl get nodes

You should see one node in Ready state. Check that system pods are running:

kubectl get pods -A

Expected output includes pods for CoreDNS, kube-proxy, kube-router, and metrics-server.

Configuration

k0s uses a YAML config file at /etc/k0s/k0s.yaml inside the container. Generate and customize it:

# Generate default config
docker exec k0s-controller k0s config create > k0s.yaml

# Edit the config, then mount it
# Add to docker-compose.yml volumes:
# - ./k0s.yaml:/etc/k0s/k0s.yaml:ro

Key configuration sections:

| Section | What It Controls |
| --- | --- |
| spec.network.podCIDR | Pod IP range (default: 10.244.0.0/16) |
| spec.network.serviceCIDR | Service IP range (default: 10.96.0.0/12) |
| spec.network.provider | CNI plugin: kuberouter (default) or calico |
| spec.storage.type | Backend: etcd (default, multi-node) or kine (SQLite-backed, for single-node) |
| spec.api.address | API server bind address |
| spec.api.externalAddress | External address for multi-node setups |
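As an illustration, a trimmed k0s.yaml that switches to Calico and SQLite-backed storage might look like this (values are examples only; generate the full file with k0s config create as shown above):

```yaml
# Illustrative k0s.yaml fragment; omitted fields keep their defaults.
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  network:
    provider: calico          # default is kuberouter
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
  storage:
    type: kine                # SQLite-backed storage, suited to single-node setups
```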

Multi-Node Setup (Docker)

For a controller + separate worker setup:

services:
  k0s-controller:
    image: docker.io/k0sproject/k0s:v1.32.1-k0s.0
    container_name: k0s-controller
    hostname: k0s-controller
    privileged: true
    restart: unless-stopped
    volumes:
      - k0s-controller-data:/var/lib/k0s
    tmpfs:
      - /run
    ports:
      - "6443:6443"     # Kubernetes API
      - "9443:9443"     # k0s join API
      - "8132:8132"     # Konnectivity (controller-worker tunnel)
    command: ["k0s", "controller"]

  k0s-worker:
    image: docker.io/k0sproject/k0s:v1.32.1-k0s.0
    container_name: k0s-worker
    hostname: k0s-worker
    privileged: true
    restart: unless-stopped
    volumes:
      - k0s-worker-data:/var/lib/k0s
      - pods-log:/var/log/pods
    tmpfs:
      - /run
    depends_on:
      - k0s-controller

volumes:
  k0s-controller-data:
  k0s-worker-data:
  pods-log:

After starting, generate a join token and connect the worker:

# Generate worker join token
TOKEN=$(docker exec k0s-controller k0s token create --role=worker)

# Join the worker to the cluster
docker exec k0s-worker k0s worker "$TOKEN"
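If you are curious what the token contains: a k0s join token is a base64-encoded, gzip-compressed kubeconfig, so you can decode it to confirm which API address the worker will dial (read-only inspection; assumes the TOKEN variable from the step above):

```shell
#!/bin/sh
# Decode the join token (base64 + gzip) and show the embedded API server address.
if [ -n "$TOKEN" ]; then
  echo "$TOKEN" | base64 -d | gunzip | grep 'server:'
fi
```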

Binary Installation (Production)

For production self-hosting, install k0s as a native service:

# Download and install
curl -sSLf https://get.k0s.sh | sudo sh

# Install as single-node (controller + worker)
sudo k0s install controller --single

# Start the service
sudo k0s start

# Check status
sudo k0s status

# Get kubeconfig
sudo k0s kubeconfig admin > ~/.kube/config

This runs k0s under systemd with automatic restarts and proper resource management.
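Once installed this way, the usual systemd tooling applies (the unit is named k0scontroller for the controller role, k0sworker for workers):

```shell
# Inspect the service and follow its logs
sudo systemctl status k0scontroller
sudo journalctl -u k0scontroller -f
```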

Key Ports

| Port | Service | Direction |
| --- | --- | --- |
| 6443 | Kubernetes API (kube-apiserver) | Workers/clients → Controller |
| 9443 | k0s join API | Controller ↔ Controller |
| 8132 | Konnectivity tunnel | Workers → Controller |
| 2380 | etcd peer communication | Controller ↔ Controller |
| 10250 | Kubelet API | Controller → Workers |
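On multi-host setups, make sure these controller ports are reachable from the workers. A sketch with ufw (an assumed firewall; the worker subnet below is an example, so substitute your own):

```shell
# Allow worker -> controller traffic (192.168.1.0/24 is a placeholder subnet)
sudo ufw allow from 192.168.1.0/24 to any port 6443 proto tcp   # Kubernetes API
sudo ufw allow from 192.168.1.0/24 to any port 9443 proto tcp   # k0s join API
sudo ufw allow from 192.168.1.0/24 to any port 8132 proto tcp   # Konnectivity
```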

Reverse Proxy

If you want to expose the Kubernetes API externally behind a reverse proxy, use TCP passthrough (not HTTP termination) since the API uses mTLS:

# Nginx stream block (not http block)
stream {
    server {
        listen 6443;
        proxy_pass k0s-controller:6443;
    }
}

For most homelab setups, direct port exposure is simpler. See Reverse Proxy Setup for general guidance.

Backup

Back up the k0s data directory:

# Stop the cluster (if possible) for consistent backup
docker compose stop k0s-controller

# Back up the named volume
docker run --rm -v k0s-data:/source -v "$(pwd)":/backup alpine \
  tar czf /backup/k0s-backup-$(date +%Y%m%d).tar.gz -C /source .

# Restart
docker compose start k0s-controller

For the binary install, back up /var/lib/k0s/. See Backup Strategy for automated approaches.
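If you automate the archive step above with cron, a small rotation helper keeps the backup directory from growing unbounded (the keep_newest function is hypothetical; the filename pattern matches the backup command above):

```shell
#!/bin/sh
# keep_newest <dir> <n>: delete all but the <n> most recently modified archives.
keep_newest() {
  ls -1t "$1"/k0s-backup-*.tar.gz 2>/dev/null | tail -n +"$(( $2 + 1 ))" | xargs -r rm -f
}

# Example: keep the 7 newest backups in the current directory
# keep_newest . 7
```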

Troubleshooting

CoreDNS Pods Stuck in Pending

Symptom: kubectl get pods -A shows CoreDNS pods stuck in Pending.
Fix: This happens when using Docker’s user-defined networks. Use the default bridge network (remove any networks: configuration) or use host networking.

“Cannot run nested containers”

Symptom: Pods fail to start with permission errors.
Fix: Ensure privileged: true is set on the Docker Compose service. k0s needs elevated privileges to run containerd inside Docker.

etcd Performance Warnings

Symptom: Slow cluster responses; etcd logs show “took too long” warnings.
Fix: Use SSD-backed storage for the k0s-data volume. etcd is extremely sensitive to disk latency; network-attached storage is not suitable.

Worker Node Not Joining

Symptom: The worker starts but doesn’t appear in kubectl get nodes.
Fix: Ensure port 8132 (Konnectivity) is reachable from worker to controller. Regenerate the join token if needed; tokens expire after 24 hours by default.

Disk Pressure Eviction

Symptom: Pods are evicted with the DiskPressure condition.
Fix: The kubelet evicts pods when disk usage exceeds 85%. Keep at least 15% free space on the volume backing /var/lib/k0s.

Resource Requirements

  • Controller only: 1 vCPU, 1 GB RAM, 500 MB disk
  • Controller + worker (single-node): 2 vCPU, 2 GB RAM, 2 GB disk
  • Per additional worker: 0.5 vCPU, 512 MB RAM baseline (plus workload resources)
  • Storage: SSD recommended for etcd performance

Verdict

k0s is the best Kubernetes distribution for self-hosters who want a genuine, CNCF-certified cluster without the complexity of kubeadm. It’s lighter than k3s in some respects (zero host dependencies, single binary), though k3s has a larger community and more third-party guides. If you want to learn Kubernetes on real hardware or run production workloads at home, k0s is an excellent choice. If you just need to run containers without Kubernetes complexity, Docker Compose or Docker Swarm are simpler.
