How to Self-Host PostHog with Docker

What Is PostHog?

PostHog is an open-source product analytics platform that combines event tracking, session replays, feature flags, A/B testing, and user surveys in a single self-hosted package. It replaces a combination of Google Analytics, Mixpanel, Hotjar, and LaunchDarkly. The official site offers both cloud and self-hosted options.

Prerequisites

  • A Linux server (Ubuntu 22.04+ recommended)
  • Docker and Docker Compose v2 (v2.33.1+) installed
  • 30 GB of free disk space minimum
  • 8 GB of RAM minimum (16 GB recommended)
  • 4 vCPUs minimum
  • A domain name with DNS pointing to your server
  • Ports 80 and 443 available

PostHog’s self-hosted stack is significantly heavier than most self-hosted apps. It runs ~20 services including ClickHouse, Kafka (Redpanda), PostgreSQL, Redis, and Elasticsearch. Plan your hardware accordingly.
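
Before running the installer, it's worth sanity-checking the host against these requirements. A minimal preflight sketch, assuming a Linux host with GNU coreutils:

```shell
# Preflight check: report total RAM, vCPU count, and free disk on /
# so you can compare against the 8 GB / 4 vCPU / 30 GB minimums.
awk '/MemTotal/ {printf "RAM (GB):  %.1f\n", $2/1048576}' /proc/meminfo
echo "vCPUs:     $(nproc)"
echo "Disk free: $(df -h --output=avail / | tail -n 1 | tr -d ' ')"
```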

Installation

PostHog provides an official deploy script that handles the Docker Compose setup. This is the recommended approach — the compose file is too complex to maintain manually.

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/posthog/posthog/HEAD/bin/deploy-hobby)"

The script will:

  1. Prompt you for your domain name
  2. Generate a POSTHOG_SECRET and ENCRYPTION_SALT_KEYS
  3. Download docker-compose.hobby.yml and docker-compose.base.yml
  4. Create a .env file with your configuration
  5. Start all services

If you prefer to inspect everything first:

# Download the files manually
curl -fsSL https://raw.githubusercontent.com/posthog/posthog/HEAD/docker-compose.hobby.yml -o docker-compose.hobby.yml
curl -fsSL https://raw.githubusercontent.com/posthog/posthog/HEAD/docker-compose.base.yml -o docker-compose.base.yml
curl -fsSL https://raw.githubusercontent.com/posthog/posthog/HEAD/bin/deploy-hobby -o deploy-hobby
chmod +x deploy-hobby

Key Environment Variables

The deploy script generates a .env file. The critical values:

# Generated automatically by the deploy script
POSTHOG_SECRET=<auto-generated-56-char-secret>
ENCRYPTION_SALT_KEYS=<auto-generated-16-byte-hex>
DOMAIN=posthog.example.com
POSTHOG_APP_TAG=latest

PostHog recommends using the latest tag rather than pinning to a specific version. Their deploy script explicitly sets this.
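
If you are assembling the .env file by hand instead, you can generate values of the expected shape yourself. This is a sketch, not the deploy script's exact commands: it assumes openssl is available and only matches the documented lengths (56 hex characters for POSTHOG_SECRET, 16 bytes of hex for ENCRYPTION_SALT_KEYS).

```shell
# Generate secrets with the shapes the deploy script produces.
# Sketch only: deploy-hobby's actual generation commands may differ.
POSTHOG_SECRET=$(openssl rand -hex 28)        # 28 bytes -> 56 hex chars
ENCRYPTION_SALT_KEYS=$(openssl rand -hex 16)  # 16 bytes -> 32 hex chars
echo "POSTHOG_SECRET=$POSTHOG_SECRET"
echo "ENCRYPTION_SALT_KEYS=$ENCRYPTION_SALT_KEYS"
```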

Services in the Stack

The hobby deployment runs these services:

Service          Image                                      Purpose
web              posthog/posthog                            Main Django backend and frontend
worker           posthog/posthog                            Celery background task processor
plugins          posthog/posthog-node                       Node.js plugin system
capture          ghcr.io/posthog/posthog/capture            Rust event ingestion
replay-capture   ghcr.io/posthog/posthog/replay-capture     Session replay capture
db               postgres:15.12-alpine                      Primary database
redis7           redis:7.2-alpine                           Caching and queues
clickhouse       clickhouse/clickhouse-server:25.12.5.44    Analytics event storage
kafka            redpandadata/redpanda:v25.1.9              Message broker
zookeeper        zookeeper:3.7.0                            Coordination service
objectstorage    minio/minio                                S3-compatible object storage
temporal         temporalio/auto-setup:1.20.0               Workflow orchestration
proxy            caddy                                      Reverse proxy with automatic TLS

Initial Setup

  1. After the deploy script finishes, open https://your-domain.com in your browser
  2. Create your admin account on the first visit
  3. Set up your organization and first project
  4. Install the PostHog snippet on your website or app:
<script>
  !function(t,e){var o,n,p,r;e.__SV||(window.posthog=e,e._i=[],e.init=function(i,s,a){function g(t,e){var o=e.split(".");2==o.length&&(t=t[o[0]],e=o[1]),t[e]=function(){t.push([e].concat(Array.prototype.slice.call(arguments,0)))}}(p=t.createElement("script")).type="text/javascript",p.async=!0,p.src=s.api_host+"/static/array.js",(r=t.getElementsByTagName("script")[0]).parentNode.insertBefore(p,r);var u=e;for(void 0!==a?u=e[a]=[]:a="posthog",u.people=u.people||[],u.toString=function(t){var e="posthog";return"posthog"!==a&&(e+="."+a),t||(e+=" (stub)"),e},u.people.toString=function(){return u.toString(1)+".people (stub)"},o="init push capture register register_once register_for_session unregister opt_out_capturing has_opted_out_capturing opt_in_capturing reset isFeatureEnabled getFeatureFlag getFeatureFlagPayload reloadFeatureFlags group updateEarlyAccessFeatureEnrollment getEarlyAccessFeatures getActiveMatchingSurveys getSurveys onFeatureFlags".split(" "),n=0;n<o.length;n++)g(u,o[n]);e._i.push([i,s,a])},e.__SV=1)}(document,window.posthog||[]);
  posthog.init('YOUR_PROJECT_API_KEY', {api_host: 'https://your-domain.com'})
</script>

Replace YOUR_PROJECT_API_KEY with the key from your project settings.
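
Besides the browser snippet, any backend can send events to your instance by POSTing to its capture endpoint. A hedged curl sketch: the /capture/ path and the api_key/event/distinct_id payload follow PostHog's public capture API, while the posthog_capture function name and POSTHOG_API_KEY variable are made up for illustration.

```shell
# Send one event to a self-hosted PostHog instance over HTTP.
# Function name and env var are illustrative, not part of PostHog.
posthog_capture() {
  local event="$1" distinct_id="$2"
  curl -sS -X POST "https://your-domain.com/capture/" \
    -H 'Content-Type: application/json' \
    -d "{\"api_key\": \"${POSTHOG_API_KEY}\", \"event\": \"${event}\", \"distinct_id\": \"${distinct_id}\"}"
}

# Usage (not run here): POSTHOG_API_KEY=phc_xxx posthog_capture signup_completed user_123
```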

Configuration

Scaling Limits

The hobby deployment scales to approximately 100,000 events per month. Beyond that, PostHog recommends their cloud offering. For higher self-hosted volumes, you’d need to tune ClickHouse and Kafka settings manually.

Caddy Reverse Proxy

The stack includes Caddy as a reverse proxy with automatic Let’s Encrypt certificates. If you’re already running a reverse proxy, you can modify the docker-compose.hobby.yml to expose the web service directly:

# In docker-compose.hobby.yml, change the web service ports
web:
  ports:
    - "8000:8000"

Then configure your existing reverse proxy (Nginx Proxy Manager, Traefik, or Caddy) to forward to port 8000.
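
For a standalone Caddy, for example, a minimal Caddyfile entry might look like this (sketch; adjust the upstream address if Docker runs on a different host):

```
posthog.example.com {
    reverse_proxy 127.0.0.1:8000
}
```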

Environment Tuning

Edit the .env file to adjust:

# Increase if you need more workers
POSTHOG_CELERY_CONCURRENCY=4

# Adjust Redis memory limit (default 200MB)
REDIS_MAXMEMORY=512mb

Backup

PostHog stores data across multiple services. A complete backup requires:

  1. PostgreSQL database:
docker exec posthog-db-1 pg_dump -U posthog posthog > posthog-pg-backup.sql
  2. ClickHouse data: Back up the clickhouse-data Docker volume
  3. Object storage (MinIO): Back up the objectstorage Docker volume
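
The three steps above can be wrapped in a small script. A sketch, assuming the default container name (posthog-db-1) and volume names (clickhouse-data, objectstorage); verify yours with docker ps and docker volume ls first.

```shell
#!/usr/bin/env bash
# Sketch of a full PostHog backup: PostgreSQL dump plus the ClickHouse and
# MinIO Docker volumes. Container and volume names are assumptions.
set -eu

BACKUP_DIR="${BACKUP_DIR:-./posthog-backups/$(date +%F)}"
mkdir -p "$BACKUP_DIR"
ABS_BACKUP_DIR="$(cd "$BACKUP_DIR" && pwd)"

backup_postgres() {
  docker exec posthog-db-1 pg_dump -U posthog posthog \
    > "$BACKUP_DIR/posthog-pg-backup.sql"
}

backup_volume() {  # $1 = Docker volume name
  docker run --rm -v "$1":/data:ro -v "$ABS_BACKUP_DIR":/backup alpine \
    tar czf "/backup/$1.tar.gz" -C /data .
}

# Only run when the stack is actually up, so the script is safe to dry-run.
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q '^posthog-db-1$'; then
  backup_postgres
  backup_volume clickhouse-data
  backup_volume objectstorage
fi
```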

Also back up the .env file: it holds POSTHOG_SECRET and ENCRYPTION_SALT_KEYS, which a restored instance needs to decrypt existing data.

Troubleshooting

ClickHouse Out of Memory

Symptom: ClickHouse crashes or becomes unresponsive, and queries time out.

Fix: ClickHouse needs significant RAM. Increase your server to at least 16 GB RAM, or limit ClickHouse memory in the compose file:

clickhouse:
  deploy:
    resources:
      limits:
        memory: 4G

Events Not Appearing in Dashboard

Symptom: You’ve added the tracking snippet but no events show up.

Fix: Check that the api_host in your PostHog snippet points to your self-hosted instance URL, not app.posthog.com. Also verify the Kafka and capture services are running:

docker compose ps

Container Keeps Restarting

Symptom: The web or worker container restarts in a loop.

Fix: Check the logs for the specific service:

docker compose logs web --tail 100
docker compose logs worker --tail 100

Common causes: insufficient RAM (8 GB minimum), PostgreSQL not ready, or missing environment variables.

Temporal UI Not Loading

Symptom: The Temporal dashboard at port 8081 shows a blank page or a connection error.

Fix: Temporal is a workflow engine PostHog uses internally. If it’s not critical for your use case, you can ignore it. If you need it, check that Elasticsearch is running:

docker compose logs elasticsearch --tail 50

Upgrade Failures

Symptom: After pulling new images, services fail to start.

Fix: Run the deploy script again — it handles migrations:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/posthog/posthog/HEAD/bin/deploy-hobby)"

Resource Requirements

  • RAM: 8 GB minimum, 16 GB recommended
  • CPU: 4 vCPUs minimum
  • Disk: 30 GB minimum, grows with event volume (ClickHouse stores all raw events)
  • Network: Ports 80 and 443 for web access
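
To get a rough feel for the "grows with event volume" point, here is back-of-envelope math assuming roughly 1 KB per stored event after ClickHouse compression (a made-up but plausible figure; the real size depends on your properties payload, and session replays consume far more):

```shell
# Back-of-envelope disk growth estimate. BYTES_PER_EVENT is an assumption.
EVENTS_PER_MONTH=100000
BYTES_PER_EVENT=1024
echo "$(( EVENTS_PER_MONTH * BYTES_PER_EVENT / 1024 / 1024 )) MB/month"  # -> 97 MB/month
```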

PostHog is the heaviest self-hosted analytics platform. If you want something lighter, consider Plausible or Umami.

Verdict

PostHog is the most feature-rich self-hosted analytics platform available. If you need product analytics — funnels, cohorts, session replays, feature flags, and A/B testing — nothing else comes close in the self-hosted space. But it’s a resource hog. The ~20-service stack needs a dedicated server with at least 8 GB RAM, and it’s realistically capped at 100K events/month for the hobby deployment.

For most self-hosters who just want website analytics (pageviews, referrers, top pages), PostHog is overkill. Use Plausible or Umami instead — they’re lighter, simpler, and privacy-focused. But if you’re building a product and need deep user behavior insights without sending data to third parties, PostHog is the answer.

Frequently Asked Questions

How much RAM does PostHog actually need?

The hobby deployment needs 8 GB minimum. The ~20 services (ClickHouse, Kafka/Redpanda, PostgreSQL, Redis, Elasticsearch, Temporal, MinIO, and multiple application containers) share memory. 16 GB is recommended for stable operation.

Can PostHog replace Google Analytics?

PostHog does everything Google Analytics does (pageviews, events, funnels, retention) plus session replays, feature flags, and A/B testing. But it’s far heavier — 8 GB RAM minimum vs Plausible or Umami at ~300 MB. If you just need web analytics, use Plausible or Umami. PostHog is for product analytics.

Is PostHog free when self-hosted?

Yes. The hobby deployment has no event limits or usage caps. The MIT-licensed codebase includes all features. PostHog’s cloud offering has a free tier with paid usage-based pricing for high volumes.

Can I pin PostHog to a specific version?

PostHog officially recommends using the latest tag and their deploy script for updates. Pinning to a specific version is possible but not officially supported — migrations between versions may require the deploy script to run properly.

What’s the maximum event volume for self-hosted PostHog?

The hobby deployment handles approximately 100,000 events per month reliably. Beyond that, you need to tune ClickHouse and Kafka settings manually, or consider PostHog Cloud. ClickHouse’s memory usage grows with event volume.

Does PostHog capture session replays?

Yes. PostHog records DOM snapshots and replays user sessions — you can watch exactly what users did on your site. Session replays are stored in the MinIO (S3-compatible) container and consume significant disk space. Configure retention policies to manage storage growth.
