Pi-hole: High Memory Usage — Fix

The Problem

Pi-hole’s memory usage climbs well beyond the expected 100-200 MB, sometimes reaching 500 MB or more. The web UI may become sluggish, DNS queries slow down, and on constrained devices like a Raspberry Pi, the system starts swapping.

Common symptoms:

pihole-FTL[1234]: WARNING: Ram usage is 85% (512 MB / 600 MB)

Or the container gets OOM-killed:

docker logs pihole | grep -i "killed"
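A quicker way to confirm an OOM kill is to ask Docker directly for the container's state. A minimal sketch, assuming the container is named pihole (adjust to your setup):

```shell
# Live memory footprint of the running container (optional check)
if command -v docker >/dev/null; then
  docker stats --no-stream pihole || true
fi

# Read the OOM-killed flag from Docker's container state; falls back to
# "unknown" when docker is unavailable or the container does not exist.
OOM_KILLED=$(docker inspect pihole --format '{{.State.OOMKilled}}' 2>/dev/null || echo "unknown")
echo "OOMKilled: $OOM_KILLED"
```

A value of true means the kernel killed the process for exceeding its memory limit, which is the strongest signal that the fixes below are needed.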

The Cause

Pi-hole’s memory usage comes from four sources, several of which can each consume hundreds of megabytes:

| Source | Typical Impact | Default Behavior |
| --- | --- | --- |
| Gravity database (blocklists) | 50-300 MB | Loads all domains into FTL memory |
| Query log (DNS queries) | 50-500 MB | Logs every query indefinitely |
| Long-term data (FTL database) | 50-200 MB on disk | Stores 365 days of query history |
| Network table | 10-50 MB | Tracks all clients ever seen |

The biggest culprit is usually oversized blocklists. The default lists block ~100K domains and use ~80 MB. Adding aggressive community lists (OISD, Energized, Hagezi) can push this to 1-2 million domains and 300+ MB.

The Fix

Method 1: Reduce Blocklist Size

Check your current gravity size:

docker exec pihole bash -c "sqlite3 /etc/pihole/gravity.db 'SELECT COUNT(*) FROM gravity;'"

If this returns more than 500,000 domains, your lists are oversized. Most users get excellent ad blocking with 200-300K domains.
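To see which lists contribute most to that total, you can query the per-list counters in the adlist table. A sketch, assuming the container is named pihole and a current gravity.db schema where the number column stores each list's domain count:

```shell
# List each enabled adlist with its domain count, largest first
QUERY="SELECT number, address FROM adlist WHERE enabled = 1 ORDER BY number DESC;"

if command -v docker >/dev/null; then
  docker exec pihole sqlite3 /etc/pihole/gravity.db "$QUERY"
fi
```

Lists at the top of the output are the first candidates for removal.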

Remove aggressive lists through the web UI: Adlists → disable or remove lists with millions of entries. Good defaults:

| List | Domains | Coverage |
| --- | --- | --- |
| Steven Black’s Unified | ~85K | Ads + malware |
| OISD (small) | ~70K | Ads + tracking |
| Hagezi Light | ~60K | Ads + tracking |

After removing lists, update gravity:

docker exec pihole pihole -g

Method 2: Limit Query Logging

By default, FTL holds 24 hours of queries in memory and reloads history from the on-disk database at every start. Keep the in-memory window short and skip the import:

environment:
  FTLCONF_maxlogage: "24"        # Hours to keep in-memory (default: 24)
  FTLCONF_dbimport: "false"      # Don't load full DB into memory on start

Or set the maximum privacy level, which stops per-query data from being recorded (saves the most memory but loses dashboard statistics):

environment:
  FTLCONF_privacylevel: "3"      # Anonymous mode: no per-query data recorded

Method 3: Trim the Long-Term Database

Pi-hole’s FTL database (pihole-FTL.db) stores 365 days of queries by default. On busy networks, this file grows to gigabytes.

Set a shorter retention period:

environment:
  FTLCONF_maxDBdays: "30"        # Keep only 30 days (default: 365)

Manually trim an existing oversized database:

docker exec pihole bash -c "
  sqlite3 /etc/pihole/pihole-FTL.db 'DELETE FROM queries WHERE timestamp < strftime(\"%s\", \"now\", \"-30 days\");'
  sqlite3 /etc/pihole/pihole-FTL.db 'VACUUM;'
"
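Before and after trimming, it helps to see how large the database file is and how far back it reaches. A sketch, again assuming the container is named pihole; the cutoff arithmetic mirrors the 30-day DELETE above:

```shell
# Compute the 30-day cutoff locally (GNU date, with a BSD/macOS fallback)
CUTOFF=$(date -d '-30 days' +%s 2>/dev/null || date -v-30d +%s)
echo "Rows older than epoch $CUTOFF are eligible for deletion"

if command -v docker >/dev/null; then
  docker exec pihole bash -c 'ls -lh /etc/pihole/pihole-FTL.db'   # file size on disk
  docker exec pihole sqlite3 /etc/pihole/pihole-FTL.db \
    "SELECT datetime(MIN(timestamp), 'unixepoch') FROM queries;"  # oldest stored query
fi
```

Note that the file size only shrinks after VACUUM completes, since VACUUM rewrites the database into a new compacted file.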

Method 4: Set Container Memory Limits

Prevent Pi-hole from consuming all available RAM:

services:
  pihole:
    image: pihole/pihole:2026.02.0
    deploy:
      resources:
        limits:
          memory: 512M

This caps the container at 512 MB. Note that when the limit is hit, the kernel’s OOM killer terminates the process rather than FTL gracefully shedding old query data, so pair the limit with the retention settings above and a restart policy (restart: unless-stopped).
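One caveat: the legacy standalone docker-compose tool only honors deploy.resources when run with its --compatibility flag. On such setups, the older top-level key achieves the same cap (a sketch; the image tag matches the one above):

```yaml
services:
  pihole:
    image: pihole/pihole:2026.02.0
    restart: unless-stopped   # recover automatically after an OOM kill
    mem_limit: 512m           # legacy equivalent of deploy.resources.limits.memory
```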

Prevention

Add these environment variables to your Docker Compose from the start:

environment:
  FTLCONF_maxlogage: "24"
  FTLCONF_maxDBdays: "90"
  FTLCONF_dbimport: "false"

Set up a monthly gravity database optimization:

# Add to crontab
0 3 1 * * docker exec pihole bash -c "pihole -g && sqlite3 /etc/pihole/gravity.db 'VACUUM;'"

Monitor memory usage with a simple health check (this reads the cgroup v2 memory.current file; on cgroup v1 hosts the path differs):

healthcheck:
  test: ["CMD", "bash", "-c", "[ $(cat /sys/fs/cgroup/memory.current 2>/dev/null || echo 0) -lt 536870912 ]"]
  interval: 60s
  timeout: 5s
  retries: 3
