Uptime Kuma vs Healthchecks: Which Monitoring Tool?
Quick Verdict
These tools solve fundamentally different problems. Uptime Kuma actively polls your services and tells you when something goes down. Healthchecks passively listens for pings from your cron jobs and scripts, and alerts you when something fails to check in. Comparing them head-to-head is like comparing a smoke detector to a time clock — both keep you informed, but about entirely different things.
If you need to know whether your websites, APIs, and services are reachable: Uptime Kuma. If you need to know whether your backups, cron jobs, and scheduled tasks actually ran: Healthchecks. Most serious self-hosting setups benefit from running both.
The Core Difference
This is the single most important thing to understand before choosing between these tools.
Uptime Kuma is an active monitor. It reaches out to your services on a schedule — HTTP requests, TCP pings, DNS lookups, Docker container checks — and records whether they respond. The monitoring server initiates every check. If your Nextcloud instance stops responding to HTTP requests, Uptime Kuma catches it within seconds.
Healthchecks is a passive monitor (dead man’s switch). Your cron jobs and scripts ping Healthchecks when they complete successfully. Healthchecks tracks whether those pings arrive on schedule. If your nightly backup script was supposed to ping at 3:00 AM and it is now 3:30 AM with no ping, Healthchecks alerts you. The monitored systems initiate the check, not the monitoring server.
Neither tool replaces the other. A service can be “up” (Uptime Kuma sees HTTP 200) while its backup cron is silently failing (Healthchecks catches the missed ping). Conversely, all your cron jobs can be running fine while your web server is unreachable.
Feature Comparison
| Feature | Uptime Kuma | Healthchecks |
|---|---|---|
| Monitoring approach | Active (polls targets) | Passive (receives pings) |
| HTTP/HTTPS monitoring | Yes — status codes, response time, SSL expiry | No |
| TCP/UDP monitoring | Yes | No |
| DNS monitoring | Yes | No |
| Docker container monitoring | Yes (via Docker socket) | No |
| Cron job monitoring | No | Yes — core feature |
| Script/task monitoring | No | Yes — with start/success/fail signals |
| Status page | Built-in, customizable | Built-in badge endpoints |
| Notification channels | 90+ (Slack, Discord, Telegram, email, webhooks, etc.) | 20+ (Slack, Discord, Telegram, email, webhooks, etc.) |
| Web dashboard | Full-featured with graphs | Clean, focused on check status |
| API | Yes (WebSocket-based) | Yes (REST) |
| Multi-user support | Yes (v2+) | Yes, with teams and projects |
| Authentication | Built-in (username/password, 2FA) | Built-in (email/password) |
| Grace periods | N/A | Yes — configurable per check |
| Ping protocols | N/A | HTTP, email, and raw UDP/TCP |
| Maintenance windows | Yes | No |
| Certificate expiry alerts | Yes | No |
| Cron expression validation | N/A | Yes — validates schedules |
Docker Setup
Both tools are straightforward to self-host. Uptime Kuma is simpler since it has no external database dependency. Healthchecks uses PostgreSQL for data storage.
Uptime Kuma
Uptime Kuma uses an embedded SQLite database, so the entire stack is a single container.
Create a docker-compose.yml:
```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:2.2.1
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma-data:/app/data
      # Optional: monitor Docker containers directly
      # - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  uptime-kuma-data:
```
Start it:
```shell
docker compose up -d
```
Open http://your-server:3001, create an admin account, and start adding monitors. That is the entire setup.
Healthchecks
Healthchecks requires PostgreSQL. The official Docker image bundles uWSGI as the application server and runs background workers (alert sending, report generation, optional SMTP listener) automatically.
Create a .env file:
```shell
# Database
DB=postgres
DB_HOST=db
DB_NAME=healthchecks
DB_USER=postgres
DB_PASSWORD=change-this-strong-password
DB_PORT=5432

# Site settings
SITE_NAME=Healthchecks
SITE_ROOT=http://localhost:8000
ALLOWED_HOSTS=localhost,your-domain.example.com
SECRET_KEY=change-this-to-a-random-50-char-string

# Email (required for alerts)
[email protected]
EMAIL_HOST=smtp.example.com
EMAIL_PORT=587
EMAIL_HOST_USER=your-smtp-user
EMAIL_HOST_PASSWORD=your-smtp-password
EMAIL_USE_TLS=True

# Registration
REGISTRATION_OPEN=True
DEBUG=False
```
Create a docker-compose.yml:
```yaml
services:
  db:
    image: postgres:16-alpine
    container_name: healthchecks-db
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - healthchecks-db:/var/lib/postgresql/data

  web:
    image: healthchecks/healthchecks:v4.0
    container_name: healthchecks
    restart: unless-stopped
    env_file:
      - .env
    ports:
      - "8000:8000"
      # Uncomment to receive pings via email (SMTP listener)
      # - "2525:2525"
    depends_on:
      db:
        condition: service_started

volumes:
  healthchecks-db:
```
Start it:
```shell
docker compose up -d
```
The uWSGI server runs database migrations automatically on startup. Open http://your-server:8000 and create an account.
To add a check, create one in the dashboard, then have your cron job or script call the unique ping URL:
```shell
# Add to the end of your backup script
curl -fsS --retry 3 http://your-server:8000/ping/your-unique-uuid
```
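Beyond the success ping, Healthchecks accepts `/start` and `/fail` signals on the same URL, which lets it measure task duration and alert on explicit failures immediately. A minimal wrapper sketch (the ping URL and backup command are placeholders, not values Healthchecks generates for you):

```shell
# Sketch of a wrapper that reports start, success, and failure to
# Healthchecks. URL is a placeholder: paste the ping URL from your check.
URL="http://your-server:8000/ping/your-unique-uuid"

run_with_ping() {
  # /start lets Healthchecks measure how long the task took
  curl -fsS -m 10 --retry 3 "$URL/start" > /dev/null || true
  if "$@"; then
    # Plain ping = success
    curl -fsS -m 10 --retry 3 "$URL" > /dev/null
  else
    # /fail alerts immediately instead of waiting out the grace period
    curl -fsS -m 10 --retry 3 "$URL/fail" > /dev/null
    return 1
  fi
}

# Usage (hypothetical backup command):
# run_with_ping /usr/local/bin/backup.sh
```

The `-m 10` timeout keeps a slow monitoring server from blocking the cron job itself.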
Performance and Resource Usage
| Resource | Uptime Kuma | Healthchecks |
|---|---|---|
| RAM (idle) | ~70-100 MB | ~80-120 MB (app) + ~50 MB (PostgreSQL) |
| RAM (100 monitors) | ~120-180 MB | ~100-150 MB + ~80 MB (PostgreSQL) |
| CPU (idle) | Minimal | Minimal |
| Disk (application) | ~200 MB | ~300 MB (app + PostgreSQL) |
| Disk (data, 6 months) | ~100-500 MB (SQLite) | ~200-800 MB (PostgreSQL, depends on ping volume) |
| External dependencies | None (embedded SQLite) | PostgreSQL |
| Startup time | ~3 seconds | ~8-10 seconds (migrations + uWSGI workers) |
Uptime Kuma’s single-container architecture makes it lighter on total system resources. Healthchecks adds the overhead of PostgreSQL, but PostgreSQL gives it better concurrent write performance under heavy ping loads — relevant if you have hundreds of tasks reporting in simultaneously.
For a typical self-hosting setup with 20-50 monitors, both tools run comfortably on the smallest VPS or a Raspberry Pi 4.
Use Cases
Choose Uptime Kuma If…
- You want to monitor whether your websites, APIs, and services are online
- You need a public or internal status page showing uptime percentages
- SSL certificate expiry monitoring matters to you
- You want to monitor Docker containers directly via the Docker socket
- You prefer a zero-dependency single-container deployment
- You need maintenance windows to suppress alerts during planned downtime
- Response time tracking and latency graphs are useful for your setup
- You monitor external third-party services (not just your own)
Choose Healthchecks If…
- You run scheduled tasks (backups, database dumps, report generation, sync jobs) and need to know when they fail silently
- Your cron jobs currently fail without anyone noticing for days
- You want to validate that tasks complete within expected time windows
- You need to track task duration (Healthchecks records start and completion pings)
- You manage many servers with cron jobs and want a central dashboard for all of them
- You want a dead man’s switch — alerting on the absence of an expected signal
Run Both If…
- You self-host more than a handful of services (you almost certainly have both “is it up?” and “did it run?” monitoring needs)
- You want Uptime Kuma watching your services while Healthchecks watches your automation
- You can spare the minimal combined resource overhead (almost any setup can)
Alerting
Both tools support the major notification channels (Slack, Discord, Telegram, email, generic webhooks), but Uptime Kuma has a much larger selection — over 90 notification providers versus roughly 20 for Healthchecks. If you rely on a niche notification service, Uptime Kuma is more likely to support it natively.
Healthchecks compensates with tight cron-aware alerting. You configure a schedule and a grace period per check. If a ping is 5 minutes late, Healthchecks waits through the grace period before alerting — reducing false positives from tasks that occasionally run a few seconds slow. This cron-schedule awareness is something Uptime Kuma does not offer because it is not designed for that use case.
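Schedules and grace periods can also be set programmatically through the Healthchecks Management API, using a read-write API key from your project settings. A sketch (the host, key, and check name are placeholders; the endpoint and fields follow the v1 Management API):

```shell
# Create a check with a cron schedule and a 15-minute grace period (in
# seconds) via the Healthchecks Management API. URL and key are placeholders.
create_check() {
  curl -fsS "http://your-server:8000/api/v1/checks/" \
    -H "X-Api-Key: $1" \
    -H "Content-Type: application/json" \
    -d '{"name": "nightly-backup", "schedule": "0 3 * * *", "tz": "UTC", "grace": 900}'
}

# Usage:
# create_check "your-read-write-api-key"
```

This is handy when provisioning many servers: a config-management tool can create the checks instead of you clicking through the dashboard.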
Final Verdict
Stop thinking of these as competitors. They monitor orthogonal concerns.
Uptime Kuma is the best self-hosted active monitoring tool available. Single container, gorgeous dashboard, 90+ notification providers, status pages, certificate monitoring. If you self-host anything that listens on a port, you should be running Uptime Kuma.
Healthchecks is the best self-hosted cron/task monitoring tool. Clean interface, cron-expression-aware scheduling, grace periods, and a dead-simple ping API that takes one curl line to integrate. If you run scheduled tasks — and every self-hosting setup does — you should be running Healthchecks.
For most self-hosters: run both. Uptime Kuma on port 3001, Healthchecks on port 8000, total overhead under 400 MB of RAM. Your future self will thank you the first time Healthchecks catches a silently failing backup that Uptime Kuma had no way to detect.
FAQ
Can Uptime Kuma monitor cron jobs like Healthchecks?
Not natively. Uptime Kuma monitors whether services respond to active checks (HTTP, TCP, DNS). It has a “Push” monitor type where a script can POST to Uptime Kuma, which gives basic dead-man’s-switch behavior. However, it lacks Healthchecks’ cron-expression awareness, grace periods, and task duration tracking. For proper cron monitoring, use Healthchecks.
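For simple cases the Push monitor works like this: create a Push monitor in Uptime Kuma, copy the token it generates, and have the script hit the push URL on success (host and token below are placeholders):

```shell
# Hypothetical push-monitor ping; Uptime Kuma generates the token when you
# create a Push monitor. Host and token here are placeholders.
PUSH_URL="http://your-server:3001/api/push/your-push-token"

# Append to the end of a script or cron job:
# curl -fsS "$PUSH_URL?status=up&msg=OK" > /dev/null
```

If the ping stops arriving within the monitor's heartbeat interval, Uptime Kuma marks it down. What it cannot do is interpret a cron expression or distinguish "a few minutes late" from "failed".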
Can Healthchecks check if a website is up?
No. Healthchecks is a passive monitor — it waits for pings from your scripts and alerts when they don’t arrive. It does not actively check whether a URL responds. For website uptime monitoring, use Uptime Kuma or a similar active monitor. Many self-hosters run both tools side by side.
Do I need both tools for a typical homelab?
If you run any scheduled tasks (backups, database dumps, cleanup scripts), yes. Uptime Kuma tells you when a service is unreachable. Healthchecks tells you when a task fails silently. A backup script can return a successful HTTP response to Uptime Kuma while failing to actually write data — only Healthchecks catches that if the completion ping never arrives. The combined RAM overhead is under 400 MB.
Can I create a public status page with either tool?
Uptime Kuma has a built-in status page feature with customizable groups, custom CSS, and a public URL you can share. Healthchecks provides badge endpoints (SVG/JSON) showing check status, but does not have a full status page. For a public-facing status page, Uptime Kuma is the right choice. For dedicated status pages, also consider Gatus or Upptime.
How do I integrate Healthchecks with my backup script?
Add a single curl call at the end of your script: `curl -fsS --retry 3 http://your-server:8000/ping/your-uuid`. Healthchecks also supports start signals (the `/start` endpoint) to track task duration and failure signals (the `/fail` endpoint) to report explicit failures. Borgmatic has built-in Healthchecks support: just add the ping URL to its configuration file.
Can Uptime Kuma monitor Docker containers directly?
Yes. Mount the Docker socket (/var/run/docker.sock) into the Uptime Kuma container and use the “Docker Container” monitor type. It checks whether a container is running without needing an exposed port. This is useful for monitoring background workers, queue processors, and other containers that don’t have a web interface.
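The socket mount is a one-line addition to the volumes section of the compose file shown earlier; `:ro` (read-only) is sufficient because Uptime Kuma only needs to query container state:

```yaml
services:
  uptime-kuma:
    volumes:
      - uptime-kuma-data:/app/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
```

After restarting the container, register the socket as a Docker host in Uptime Kuma's settings and select it when creating a Docker Container monitor.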