Docker Resource Limits Explained

What Are Docker Resource Limits?

Docker containers share your host’s CPU, memory, and disk I/O by default. Without limits, a single misbehaving container can consume all available RAM and crash every service on your server — including SSH, making remote recovery impossible.

Resource limits let you cap what each container can use. They’re essential for any self-hosted server running multiple services.

Memory Limits

Memory limits are the most important resource constraint. A container that exceeds its memory limit gets killed by the OOM (Out of Memory) killer — which is better than it taking down your entire server.

Setting Memory Limits in Docker Compose

services:
  nextcloud:
    image: nextcloud:33.0.0
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
    restart: unless-stopped
  • limits.memory — Hard ceiling. Container is killed if it exceeds this.
  • reservations.memory — Soft guarantee. Docker tries to reserve this much for the container. Used for scheduling decisions.

Memory Limit Formats

Format   Meaning
512M     512 megabytes
1G       1 gigabyte
2048M    2048 megabytes (2 GB)
256K     256 kilobytes (rarely useful for containers)

What Happens When a Container Exceeds Its Memory Limit

When a container hits its memory limit, the kernel's OOM killer sends SIGKILL to the offending process inside it. The container exits with code 137 (128 + 9, the SIGKILL signal number). If you have restart: unless-stopped set, Docker restarts it immediately.

Check if a container was OOM-killed:

docker inspect --format='{{.State.OOMKilled}}' container_name
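To scan every container at once rather than checking one name at a time, a small loop over the Docker CLI works (a sketch; it covers stopped containers too, since OOM kills usually leave the container in an exited state):

```shell
# Print each container's name and whether it was OOM-killed,
# including stopped containers (-a).
for id in $(docker ps -aq); do
  printf '%s\t%s\n' \
    "$(docker inspect --format='{{.Name}}' "$id")" \
    "$(docker inspect --format='{{.State.OOMKilled}}' "$id")"
done
```

Any line ending in true is a container whose memory limit needs a second look.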

CPU Limits

CPU limits prevent a single container from monopolizing all processor cores. This is especially important on low-power hardware like mini PCs or Raspberry Pis where CPU is scarce.

Setting CPU Limits in Docker Compose

services:
  jellyfin:
    image: jellyfin/jellyfin:10.11.6
    deploy:
      resources:
        limits:
          cpus: "2.0"
        reservations:
          cpus: "0.5"
    restart: unless-stopped
  • limits.cpus — Maximum CPU cores the container can use. "2.0" means two full cores.
  • reservations.cpus — Minimum CPU guaranteed to the container.

CPU Limit Examples

Value    Meaning
"0.5"    Half of one CPU core
"1.0"    One full core
"2.0"    Two full cores
"0.25"   Quarter of one core

Unlike memory limits, exceeding a CPU limit doesn’t kill the container. Docker throttles it — the container slows down but keeps running.

CPU Shares (Relative Priority)

For relative CPU priority instead of hard limits, use cpu_shares:

services:
  # Higher priority — gets more CPU when contested
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:2026.3.1
    cpu_shares: 1024
    restart: unless-stopped

  # Lower priority — yields CPU to higher-priority containers
  freshrss:
    image: freshrss/freshrss:1.28.1
    cpu_shares: 256
    restart: unless-stopped

CPU shares only matter when containers compete for CPU. If no contention exists, both containers can use as much CPU as they need. The default value is 1024.
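The share values translate into proportions only under contention. A quick arithmetic sketch using the two services above shows how the split works when both containers are fully busy:

```shell
# Share values from the compose example above.
ha_shares=1024
rss_shares=256
total=$((ha_shares + rss_shares))   # 1280

# Each container's slice of contended CPU, as a percentage of the total shares.
echo "homeassistant: $((100 * ha_shares / total))%"   # 80%
echo "freshrss: $((100 * rss_shares / total))%"       # 20%
```

So a 1024:256 ratio gives the higher-priority container four times the CPU of the lower-priority one when they compete, and no restriction at all when they don't.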

Storage Limits

Docker doesn’t natively limit per-container disk usage through Compose in the same way as CPU and memory. But you can control storage growth:

Limit Container Log Size

Container logs are the most common source of unexpected disk growth. Limit them globally in /etc/docker/daemon.json (restart the Docker daemon afterward; the setting only applies to containers created after the change):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Or per-service in Docker Compose (per-service logging options override the daemon-wide default):

services:
  myapp:
    image: myapp:1.0
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    restart: unless-stopped

This keeps a maximum of 3 log files at 10 MB each — 30 MB total per container.
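To see how much space a container's log has already consumed, you can look up its log file directly (a sketch assuming the default json-file driver; reading the file needs root, since it lives under /var/lib/docker):

```shell
# Locate the container's JSON log file and report its size on disk.
log_path=$(docker inspect --format='{{.LogPath}}' nextcloud)
sudo du -h "$log_path"
```

If this number is in the gigabytes, log rotation is overdue.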

Limit Temporary Storage

Use tmpfs mounts with size limits for temporary data:

services:
  myapp:
    image: myapp:1.0
    tmpfs:
      - /tmp:size=100M
    restart: unless-stopped

Practical Sizing Guide for Self-Hosted Apps

These are starting points. Monitor actual usage and adjust.

App                          Memory Limit   Memory Reservation   CPU Limit
Vaultwarden                  128M           64M                  0.5
Pi-hole                      256M           128M                 0.5
Nextcloud                    512M           256M                 1.0
Jellyfin (no transcoding)    1G             512M                 1.0
Jellyfin (with transcoding)  4G             1G                   4.0
Home Assistant               512M           256M                 1.0
Immich                       2G             1G                   2.0
PostgreSQL                   512M           256M                 1.0
Redis                        128M           64M                  0.25
Nginx Proxy Manager          256M           128M                 0.5

Monitoring Resource Usage

Check real-time container resource usage:

docker stats

Output shows CPU %, memory usage/limit, network I/O, and disk I/O for every running container.

For a single container:

docker stats nextcloud

For a snapshot (non-streaming):

docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"

Use this data to tune your limits. Set memory limits to roughly 150% of observed peak usage to allow for spikes without unnecessary kills.
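The 150% rule is easy to apply as a back-of-the-envelope calculation. In this sketch the 340 MiB peak is a made-up observation and the 64 MiB rounding is just a tidiness choice, not Docker behavior:

```shell
# Hypothetical peak memory usage observed in `docker stats` (MiB);
# replace with your own measurement.
peak_mib=340

# 150% headroom, rounded up to the next 64 MiB for a clean limit value.
limit_mib=$(( (peak_mib * 3 / 2 + 63) / 64 * 64 ))
echo "memory: ${limit_mib}M"   # memory: 512M
```

Paste the result into deploy.resources.limits.memory for that service.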

Common Mistakes

Setting Limits Too Low

If you set a memory limit below what the application actually needs, the container enters a restart loop — it starts, allocates memory, gets OOM-killed, restarts, repeat. Check docker inspect for OOM kills if a container keeps restarting.
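docker inspect can show the relevant signals in one line (the container name here is a placeholder):

```shell
# A climbing restart count alongside oom=true points to a memory limit set too low.
docker inspect --format='restarts={{.RestartCount}} oom={{.State.OOMKilled}} exit={{.State.ExitCode}}' myapp
```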

Forgetting Database Containers

App containers are rarely the memory hog. PostgreSQL or MariaDB running alongside your app often consume more memory. Always set limits on database containers too.
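For example, the Nextcloud service from earlier paired with its database, both capped (the MariaDB image tag is illustrative; pin whichever version you actually run):

```yaml
services:
  nextcloud:
    image: nextcloud:33.0.0
    deploy:
      resources:
        limits:
          memory: 512M
    restart: unless-stopped
  db:
    image: mariadb:11.4
    deploy:
      resources:
        limits:
          memory: 512M
    restart: unless-stopped
```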

Using deploy Without Docker Compose v3+

The deploy.resources syntax requires Compose file format version 3+. If you’re using version 2 format, use the legacy syntax:

# Compose v2 syntax (legacy)
services:
  myapp:
    image: myapp:1.0
    mem_limit: 512m
    cpus: 1.0

With docker compose (v2 CLI), both syntaxes work. Use the deploy.resources syntax for consistency.

Not Setting Log Limits

A single noisy container can fill your disk with log data overnight. Always configure log rotation either globally in daemon.json or per-service.

FAQ

Do resource limits work with Docker Compose v2?

Yes. The docker compose CLI (v2) supports both the deploy.resources syntax and the legacy mem_limit/cpus syntax. Use deploy.resources for new projects.

Will my container crash if it hits the CPU limit?

No. CPU limits throttle the container — it runs slower but stays alive. Only memory limits cause the container to be killed when exceeded.

Should I set resource limits on every container?

Yes. At minimum, set memory limits on every container. A single container without limits can consume all available RAM and crash your entire server. CPU limits are less critical but recommended for compute-heavy services like media transcoding.

How do I know what limits to set?

Run docker stats for a few days under normal usage. Set memory limits to 150% of peak observed usage. Set CPU limits based on how many cores you want to dedicate to each service.

Do resource reservations guarantee resources?

Reservations are soft guarantees. Docker uses them for scheduling decisions but doesn’t strictly enforce them on a single-host setup. They’re more meaningful in Docker Swarm mode.
