Install Uptime Kuma on Proxmox VE

Why Proxmox for Uptime Kuma?

Running Uptime Kuma inside your Proxmox environment lets you monitor the entire cluster from within the network. You can track every VM, LXC container, and service with direct network access. The LXC footprint is tiny: 1 vCPU and 512 MB RAM. The main gotcha is Docker container monitoring: Uptime Kuma can only see the Docker socket on its own host, so if you want to monitor containers running in other VMs or LXCs, you need an alternative approach. This guide covers that and more.

Prerequisites

  • Proxmox VE 8.x installed and accessible
  • A CT template downloaded (Ubuntu 22.04 or Debian 12)
  • 512 MB RAM available on the Proxmox host
  • 4 GB disk space
  • Network access to the services you want to monitor

Create the LXC Container

Via Proxmox Web UI

  1. Click Create CT on your Proxmox node
  2. Configure:
    • Hostname: uptime-kuma
    • Password: set a root password
    • Template: Ubuntu 22.04 or Debian 12
    • Disk: 4 GB
    • CPU: 1 core
    • Memory: 512 MB
    • Swap: 256 MB
    • Network: DHCP or static IP on vmbr0
  3. Enable Nesting under Options > Features (required for Docker); on unprivileged containers, enable keyctl as well

Via CLI

pct create 111 local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
  --hostname uptime-kuma \
  --memory 512 \
  --swap 256 \
  --cores 1 \
  --rootfs local-lvm:4 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --features nesting=1,keyctl=1 \
  --unprivileged 1 \
  --start 1
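Once the container exists, it is worth confirming it is running and that the options took effect before installing anything (CT ID 111 as above):

```shell
# Confirm the container is running
pct status 111
# Review the applied configuration (features, memory, network)
pct config 111
```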

Install Docker Inside the LXC

pct enter 111

apt update && apt install -y ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list
apt update && apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

Verify:

docker run --rm hello-world

Docker Compose Configuration

mkdir -p /opt/uptime-kuma && cd /opt/uptime-kuma

Create docker-compose.yml:

services:
  uptime-kuma:
    image: louislam/uptime-kuma:2.2.1
    container_name: uptime-kuma
    restart: unless-stopped
    volumes:
      - uptime-kuma-data:/app/data
      # Docker socket available for monitoring containers in THIS LXC
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - "3001:3001"
    environment:
      - DATA_DIR=/app/data
    healthcheck:
      test: ["CMD-SHELL", "node -e \"const http = require('http'); const options = { hostname: '127.0.0.1', port: 3001, path: '/api/health', timeout: 2000 }; const req = http.request(options, (res) => { process.exit(res.statusCode === 200 ? 0 : 1); }); req.on('error', () => process.exit(1)); req.end();\""]
      interval: 60s
      timeout: 10s
      retries: 3
      start_period: 30s

volumes:
  uptime-kuma-data:

Start it:

docker compose up -d
docker compose ps

Access the web UI at http://lxc-ip:3001.
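Before opening a browser, a quick reachability check from the Proxmox host or another machine on the network (substitute your LXC's address for lxc-ip):

```shell
# Prints the HTTP status code; any 2xx/3xx response means the UI is up
curl -s -o /dev/null -w "%{http_code}\n" http://lxc-ip:3001
```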

First-Time Setup

  1. Navigate to http://lxc-ip:3001 and create your admin account
  2. Start adding monitors for your Proxmox infrastructure

Monitoring Your Proxmox Cluster

Monitor Proxmox Nodes

Add ping or HTTP monitors for each Proxmox host:

  • Type: Ping
  • Hostname: 192.168.1.10 (your Proxmox node IP)
  • Friendly Name: pve-node-1
  • Heartbeat Interval: 30 seconds

Monitor Proxmox Web UI

  • Type: HTTP(s) — Keyword
  • URL: https://pve-node-ip:8006
  • Keyword: Proxmox
  • Accepted Status Codes: 200
  • Ignore TLS Error: Yes (Proxmox uses a self-signed cert by default)
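You can preview what the keyword monitor will see with curl; -k skips the self-signed certificate check, mirroring the Ignore TLS Error setting:

```shell
# Count occurrences of the keyword in the login page; nonzero means the monitor will pass
curl -ks https://pve-node-ip:8006 | grep -c "Proxmox"
```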

Monitor VMs and LXC Containers

For each VM or LXC running a web service:

  • Type: HTTP(s)
  • URL: http://container-ip:port
  • Heartbeat Interval: 60 seconds

For non-web services, use:

  • Type: TCP Port
  • Hostname: container IP
  • Port: service port
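A TCP Port monitor performs essentially the same check as netcat's port scan mode, which is handy for verifying a target before adding it (the port here is an example for PostgreSQL):

```shell
# -z: scan without sending data, -v: report success/failure
nc -zv container-ip 5432
```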

Monitor Services via Proxmox API

Proxmox exposes a REST API. You can monitor node status, storage, and cluster health through HTTP keyword checks:

  • Type: HTTP(s) — JSON Query
  • URL: https://pve-node-ip:8006/api2/json/nodes
  • Headers: Authorization: PVEAPIToken=user@pam!tokenid=token-value
  • Ignore TLS Error: Yes
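To verify the API token works before adding the monitor, you can replay the same request with curl, using the header format shown above:

```shell
# Should return a JSON document listing the cluster's nodes
curl -ks -H "Authorization: PVEAPIToken=user@pam!tokenid=token-value" \
  https://pve-node-ip:8006/api2/json/nodes
```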

Docker Socket in LXC — The Workaround

The Docker socket mounted in the Compose file above only monitors containers inside this specific LXC. It cannot see containers in other VMs or LXCs because each Docker instance has its own socket.

To monitor Docker containers running in other VMs/LXCs, use HTTP or TCP monitors instead. This is often the better approach: it tests whether the service is responding, not just whether the container process is running.

If you genuinely need Docker socket monitoring across hosts, deploy a Docker socket proxy in each VM/LXC and point Uptime Kuma at them:

# In each remote VM/LXC that runs Docker:
docker run -d \
  --name docker-socket-proxy \
  --restart unless-stopped \
  -p 2375:2375 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e CONTAINERS=1 \
  -e POST=0 \
  tecnativa/docker-socket-proxy:0.3.0

Then in Uptime Kuma, add remote Docker hosts under Settings > Docker Hosts with the TCP connection string tcp://remote-vm-ip:2375. Note that the socket proxy is unauthenticated, so restrict port 2375 to trusted networks with a firewall.
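Before adding a proxy in Uptime Kuma, you can confirm it is reachable with a plain Docker Engine API call:

```shell
# Should return a JSON array of that host's running containers
curl -s http://remote-vm-ip:2375/containers/json
```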

Proxmox-Specific Optimization

Resource Limits

Uptime Kuma is lightweight. You can safely run it alongside other containers in the same LXC if needed. Monitor memory usage:

pct exec 111 -- free -m

If you see high memory usage, reduce the number of monitors or increase heartbeat intervals.
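If usage stays consistently near the limit, raising the allocation is a one-liner on the Proxmox host (the values here are examples, not requirements):

```shell
# Double RAM and swap for CT 111; LXC limits apply without a restart
pct set 111 --memory 1024 --swap 512
```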

Backup via Proxmox

Add CT 111 to your Proxmox backup schedule:

  1. Datacenter > Backup > Add
  2. Select CT 111
  3. Schedule: daily
  4. Retention: keep-daily=7, keep-weekly=4

This captures the entire Uptime Kuma dataset including monitoring history.
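The same backup can also be run ad hoc from the host CLI with vzdump (the storage name is an example; use one of your configured backup storages):

```shell
# Snapshot-mode backup of CT 111, zstd-compressed
vzdump 111 --mode snapshot --storage local --compress zstd
```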

High Availability (Optional)

If you run a Proxmox cluster with HA, you can make the Uptime Kuma LXC highly available:

  1. Go to Datacenter > HA > Add
  2. Select CT 111
  3. Set Max Restart and Max Relocate

If the node hosting the LXC fails, Proxmox HA restarts it on another node automatically. Your monitoring stays up even when hardware fails.
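The same steps can be sketched from the CLI with ha-manager (the retry limits shown are typical values, not requirements):

```shell
# Register CT 111 as an HA resource with restart/relocate limits
ha-manager add ct:111 --max_restart 2 --max_relocate 2
# Verify the resource is tracked by the HA stack
ha-manager status
```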

Troubleshooting

Docker won’t start inside LXC

Enable nesting:

# On Proxmox host
pct set 111 --features nesting=1
pct restart 111

Uptime Kuma can’t reach other VMs/LXCs

Verify network connectivity from inside the LXC:

pct exec 111 -- ping -c 3 <target-ip>

If pinging fails, check that the LXC and target are on the same bridge/VLAN, or that routing is configured between VLANs.

Self-signed certificate warnings for Proxmox monitors

Set Ignore TLS Error: Yes for any monitor pointing at a Proxmox web UI or API endpoint. Proxmox uses self-signed certificates by default.

Monitoring history grows too large

Go to Settings > General > Keep monitor history for and set a retention period. For a Proxmox lab, 90 days is usually enough. This keeps the SQLite database manageable within the 4 GB disk allocation.
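To see how much space the history is actually consuming, measure the data directory inside the container (container name as set in the Compose file above):

```shell
# Size of Uptime Kuma's data directory, including the SQLite database
docker exec uptime-kuma du -sh /app/data
```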

LXC clock drift causes false alerts

LXC containers share the host kernel's clock, so time inside the container normally tracks the host automatically. If you see time-related false alerts, check whether the host itself has drifted:

pct exec 111 -- date
# Compare with host:
date

If the time is wrong, install chrony or systemd-timesyncd on the Proxmox host; fixing the host fixes every container at once.
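On a Debian-based system (which includes Proxmox hosts), chrony can be installed, enabled, and checked like this:

```shell
apt update && apt install -y chrony
systemctl enable --now chrony
# Show sync status and current offset from the NTP sources
chronyc tracking
```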

Resource Requirements

  • LXC allocation: 1 vCPU, 512 MB RAM, 4 GB disk
  • Actual usage: ~80-150 MB RAM depending on monitor count, negligible CPU
  • Disk: ~50 MB for the app, SQLite grows slowly with monitoring history
