Privacy-Friendly Analytics Setup
Why Privacy-Friendly Analytics Matter
Google Analytics collects 72 data points per visitor. It tracks users across sites, builds advertising profiles, and requires cookie consent banners that annoy everyone and tank your conversion rates. Since GDPR enforcement began, running GA4 without a consent banner is a legal risk in the EU. Running it with a banner means 30-50% of visitors decline cookies and disappear from your data entirely.
Privacy-friendly analytics solve all three problems at once:
- No cookies. No consent banner needed. No GDPR headaches. You see 100% of your traffic.
- You own the data. Visitor data never leaves your server. No third party mines it for ad targeting.
- Lighter pages. Google’s gtag.js is ~28 KB gzipped and makes multiple network requests. Plausible’s script is under 1 KB.
The tradeoff is real: you lose user-level tracking, cross-site attribution, and the deep integration Google Analytics has with Google Ads. If you depend on those for paid acquisition campaigns, privacy-friendly analytics are a complement, not a replacement. For content sites, blogs, documentation, and most self-hosted projects, they are strictly better.
What Data You Actually Need
Most site owners use less than 5% of what GA4 collects. Here is what matters for a content site:
| Metric | Privacy Analytics | GA4 |
|---|---|---|
| Pageviews | Yes | Yes |
| Unique visitors (anonymized) | Yes | Yes |
| Traffic sources / referrers | Yes | Yes |
| Top pages | Yes | Yes |
| Country-level location | Yes (IP-based, not stored) | Yes (cookie-based) |
| Device / browser / OS | Yes | Yes |
| Bounce rate | Yes | Yes |
| Session duration | Varies by tool | Yes |
| UTM campaign tracking | Yes | Yes |
| User-level journey tracking | No | Yes |
| Cross-domain tracking | No | Yes |
| Conversion funnels | Limited | Yes |
| Google Ads integration | No | Yes |
If the “No” column does not affect your business, self-hosted privacy analytics give you everything you need with none of the baggage.
The Three Lightest Options Compared
Three tools dominate the self-hosted privacy analytics space. Each takes a different approach.
| Feature | Plausible CE | Umami | GoAccess |
|---|---|---|---|
| Approach | JavaScript snippet | JavaScript snippet | Server log parsing |
| Cookie-free | Yes | Yes | Yes (no JS at all) |
| GDPR compliant without consent | Yes | Yes | Yes |
| Real-time dashboard | Yes | Yes | Yes |
| Script size | <1 KB | ~2 KB | N/A (no script) |
| Database | PostgreSQL + ClickHouse | PostgreSQL | None (flat files) |
| RAM usage (idle) | ~500 MB | ~200 MB | ~50 MB |
| Custom events | Yes | Yes | No |
| API | Yes | Yes | Limited (JSON export) |
| Multi-site support | Yes | Yes | Yes (separate configs) |
| Docker support | Official | Official | Official |
| License | AGPL-3.0 | MIT | MIT |
| Latest version | v3.2.0 | v3.0.3 | 1.10.1 |
| Best for | Simplicity, drop-in GA replacement | Customization, multi-site dashboards | Zero-JS, minimal infrastructure |
The short answer: Use Plausible if you want the simplest path from GA4 to privacy analytics. Use Umami if you want more control over dashboards and event tracking. Use GoAccess if you want zero JavaScript on your site and already have access logs.
Prerequisites
- A Linux server with Docker and Docker Compose installed (Docker Compose Basics)
- A domain or subdomain for your analytics instance (e.g., analytics.example.com)
- 1 GB RAM minimum for Umami, 2 GB for Plausible (ClickHouse is hungry)
- A reverse proxy for HTTPS termination (Reverse Proxy Setup)
Option 1: Plausible Community Edition
Plausible is the closest thing to a drop-in GA4 replacement. The dashboard is clean, opinionated, and shows exactly what you need on a single page. No training required — anyone on your team can read it.
Plausible CE requires three services: the Plausible application, PostgreSQL for metadata, and ClickHouse for analytics event data.
Create a project directory and the required files:
mkdir -p /opt/plausible && cd /opt/plausible
ClickHouse Configuration
ClickHouse needs a few config tweaks for low-resource operation. Create these files before starting the stack.
Create clickhouse/clickhouse-config.xml:
<clickhouse>
<logger>
<level>warning</level>
<console>true</console>
</logger>
<listen_host>0.0.0.0</listen_host>
<http_port>8123</http_port>
<tcp_port>9000</tcp_port>
<profiles>
<default>
<log_queries>0</log_queries>
<log_query_threads>0</log_query_threads>
</default>
</profiles>
</clickhouse>
Create clickhouse/clickhouse-user-config.xml:
<clickhouse>
<listen_host>0.0.0.0</listen_host>
</clickhouse>
Docker Compose
Create docker-compose.yml:
services:
plausible_db:
image: postgres:16-alpine
restart: unless-stopped
volumes:
- db-data:/var/lib/postgresql/data
environment:
- POSTGRES_PASSWORD=plausible-db-password # Change this
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
plausible_events_db:
image: clickhouse/clickhouse-server:24.12-alpine
restart: unless-stopped
volumes:
- event-data:/var/lib/clickhouse
- event-logs:/var/log/clickhouse-server
- ./clickhouse/clickhouse-config.xml:/etc/clickhouse-server/config.d/logging.xml:ro
- ./clickhouse/clickhouse-user-config.xml:/etc/clickhouse-server/users.d/logging.xml:ro
ulimits:
nofile:
soft: 262144
hard: 262144
healthcheck:
test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:8123/ping || exit 1"]
interval: 10s
timeout: 5s
retries: 5
plausible:
image: ghcr.io/plausible/community-edition:v3.2.0
restart: unless-stopped
command: sh -c "/entrypoint.sh db createdb && /entrypoint.sh db migrate && /entrypoint.sh run"
depends_on:
plausible_db:
condition: service_healthy
plausible_events_db:
condition: service_healthy
ports:
- "8000:8000"
volumes:
- plausible-data:/var/lib/plausible
ulimits:
nofile:
soft: 65535
hard: 65535
environment:
- BASE_URL=https://analytics.example.com # Your analytics domain — MUST change
- SECRET_KEY_BASE=REPLACE_WITH_64_BYTE_SECRET # Generate with: openssl rand -base64 48
- DATABASE_URL=postgres://postgres:plausible-db-password@plausible_db:5432/plausible_db
- CLICKHOUSE_DATABASE_URL=http://plausible_events_db:8123/plausible_events_db
- DISABLE_REGISTRATION=invite_only # Set to 'true' after creating your account
      - MAILER_EMAIL=plausible@example.com # From address for emails
# Uncomment and configure for email (account creation, reports):
# - SMTP_HOST_ADDR=smtp.example.com
# - SMTP_HOST_PORT=587
# - SMTP_USER_NAME=your-smtp-user
# - SMTP_USER_PWD=your-smtp-password
# - SMTP_HOST_SSL_ENABLED=true
volumes:
db-data:
event-data:
event-logs:
plausible-data:
Generate the Secret Key
openssl rand -base64 48
Copy the output and replace REPLACE_WITH_64_BYTE_SECRET in the compose file.
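If you prefer not to paste the secret by hand, the substitution can be scripted. A minimal sketch, assuming docker-compose.yml is in the current directory and still contains the placeholder:

```shell
# Generate a secret and splice it into the compose file in one step.
# base64 output never contains '|', so it is safe as the sed delimiter.
SECRET=$(openssl rand -base64 48)
sed -i "s|REPLACE_WITH_64_BYTE_SECRET|${SECRET}|" docker-compose.yml
```

On macOS, BSD sed needs `sed -i ''` instead of `sed -i`.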
Start Plausible
docker compose up -d
The first startup takes 30-60 seconds as ClickHouse initializes and migrations run. Check logs:
docker compose logs -f plausible
Once you see [info] Running Plausible on port 8000, the instance is ready at http://your-server-ip:8000.
Initial Setup
- Open your Plausible instance in a browser
- Create your admin account (the first account becomes the owner)
- Add your site domain
- Copy the tracking script (covered below in “Adding the Tracking Script”)
- Set DISABLE_REGISTRATION=true in the compose file and restart to lock down signups
Option 2: Umami
Umami is lighter than Plausible (no ClickHouse), offers more dashboard customization, and has a built-in API for pulling data into other tools. It supports custom events, multiple dashboards per site, and ad-blocker evasion through script/endpoint renaming.
Umami needs two services: the Node.js application and PostgreSQL.
Create a project directory:
mkdir -p /opt/umami && cd /opt/umami
Docker Compose
Create docker-compose.yml:
services:
umami:
image: ghcr.io/umami-software/umami:v3.0.3
restart: unless-stopped
ports:
- "3000:3000"
environment:
DATABASE_URL: postgresql://umami:change-this-password@umami_db:5432/umami
APP_SECRET: REPLACE_WITH_RANDOM_STRING # Min 32 chars. Generate with: openssl rand -hex 32
# Rename tracking paths to bypass ad blockers (optional):
# TRACKER_SCRIPT_NAME: custom-script-name
# COLLECT_API_ENDPOINT: /api/custom-endpoint
depends_on:
umami_db:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:3000/api/heartbeat || exit 1"]
interval: 10s
timeout: 5s
retries: 5
init: true
umami_db:
image: postgres:15-alpine
restart: unless-stopped
volumes:
- umami-db-data:/var/lib/postgresql/data
environment:
POSTGRES_DB: umami
POSTGRES_USER: umami
POSTGRES_PASSWORD: change-this-password # Must match DATABASE_URL above
healthcheck:
test: ["CMD-SHELL", "pg_isready -U umami"]
interval: 10s
timeout: 5s
retries: 5
volumes:
umami-db-data:
Generate the App Secret
openssl rand -hex 32
Replace REPLACE_WITH_RANDOM_STRING in the compose file with the output.
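An alternative to hard-coding the secret: keep it in a .env file next to the compose file, which Docker Compose reads automatically. This assumes you change the compose line to APP_SECRET: ${APP_SECRET}.

```shell
# Write the secret to .env so it never appears in the compose file itself,
# then reference it in docker-compose.yml as: APP_SECRET: ${APP_SECRET}
echo "APP_SECRET=$(openssl rand -hex 32)" > .env
chmod 600 .env   # keep the secret readable only by the owner
```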
Start Umami
docker compose up -d
Umami starts faster than Plausible — typically under 15 seconds. Check logs:
docker compose logs -f umami
Once healthy, Umami is available at http://your-server-ip:3000.
Initial Setup
- Log in with the default credentials: admin / umami
- Change the admin password immediately (Settings > Profile)
- Add your website (Settings > Websites > Add Website)
- Copy the tracking script from the website settings page
Bypassing Ad Blockers
Many ad blockers target analytics scripts by URL pattern. Umami lets you rename the script and collection endpoint:
environment:
TRACKER_SCRIPT_NAME: custom-data
COLLECT_API_ENDPOINT: /api/custom-collect
With this configuration, the tracking script URL becomes /custom-data.js instead of the default /script.js, which ad blockers are less likely to block. Proxy the analytics subdomain through your main domain’s reverse proxy for even better results.
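One way to do that proxying with Nginx is to serve the renamed paths from the main site’s own server block. A sketch, assuming Umami runs on port 3000 of the same host and you kept the renamed paths above:

```nginx
# In the MAIN site's server block: the tracker and collection endpoint are
# served from your own domain, so requests never go to a separate analytics host.
location = /custom-data.js {
    proxy_pass http://127.0.0.1:3000/custom-data.js;
}
location = /api/custom-collect {
    proxy_pass http://127.0.0.1:3000/api/custom-collect;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```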
Option 3: GoAccess (No JavaScript)
GoAccess takes a fundamentally different approach: it parses your web server’s access logs instead of injecting JavaScript. Zero scripts on your site. Zero impact on page load. No data sent to any external server — not even your own analytics instance.
The tradeoff: GoAccess cannot track client-side events, single-page app navigation, or JavaScript-dependent metrics. It sees what your web server sees — HTTP requests. For static sites and server-rendered pages, this is often enough.
GoAccess runs as a single binary with no database. Feed it a log file and it produces a real-time HTML dashboard or terminal UI.
Quick Docker Setup
docker run --rm -v /var/log/nginx:/var/log/nginx:ro \
-v /opt/goaccess/data:/srv/data \
-v /opt/goaccess/html:/srv/report \
allinurl/goaccess:1.10.1 \
--log-file=/var/log/nginx/access.log \
--log-format=COMBINED \
--output=/srv/report/index.html \
--real-time-html \
--ws-url=wss://stats.example.com \
--port=7890
For persistent operation, create a docker-compose.yml:
services:
goaccess:
image: allinurl/goaccess:1.10.1
restart: unless-stopped
ports:
- "7890:7890"
volumes:
- /var/log/nginx:/var/log/nginx:ro # Mount your web server logs
- goaccess-data:/srv/data
- goaccess-html:/srv/report
command: >
--log-file=/var/log/nginx/access.log
--log-format=COMBINED
--output=/srv/report/index.html
--real-time-html
--ws-url=wss://stats.example.com
--port=7890
--persist
--restore
--db-path=/srv/data
volumes:
goaccess-data:
goaccess-html:
Serve the generated index.html from the goaccess-html volume through your reverse proxy, and use WebSocket passthrough for real-time updates.
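A sketch of the Nginx side, assuming you bind-mount the report directory at /opt/goaccess/html on the host (instead of the named volume) and the WebSocket listener from the compose file is on port 7890:

```nginx
server {
    listen 443 ssl;
    server_name stats.example.com;
    ssl_certificate /etc/letsencrypt/live/stats.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/stats.example.com/privkey.pem;

    # Static report generated by GoAccess
    root /opt/goaccess/html;
    index index.html;

    # WebSocket passthrough for --real-time-html updates
    location /ws {
        proxy_pass http://127.0.0.1:7890;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

With this layout, the --ws-url value in the compose file may need the port and path (e.g. wss://stats.example.com:443/ws) so the browser connects through the proxy rather than directly to port 7890.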
GoAccess is best suited for operators who want analytics without touching their frontend code. For most self-hosters running content sites, Plausible or Umami will be more practical.
Adding the Tracking Script
Both Plausible and Umami work by adding a single <script> tag to your site.
Plausible
Add this to the <head> of every page:
<script defer data-domain="yoursite.com" src="https://analytics.example.com/js/script.js"></script>
Replace yoursite.com with the domain you registered in Plausible and analytics.example.com with your Plausible instance URL.
Plausible offers script extensions for additional tracking:
<!-- Track outbound link clicks -->
<script defer data-domain="yoursite.com" src="https://analytics.example.com/js/script.outbound-links.js"></script>
<!-- Track file downloads -->
<script defer data-domain="yoursite.com" src="https://analytics.example.com/js/script.file-downloads.js"></script>
<!-- Combine multiple extensions -->
<script defer data-domain="yoursite.com" src="https://analytics.example.com/js/script.outbound-links.file-downloads.js"></script>
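Plausible can also record custom events without writing any JavaScript, via the tagged-events extension: elements carrying a plausible-event-name class report clicks as named events. A sketch, where Signup is an event name you choose:

```html
<!-- Load the tagged-events extension (combinable with other extensions) -->
<script defer data-domain="yoursite.com" src="https://analytics.example.com/js/script.tagged-events.js"></script>

<!-- Clicks on this element are reported as a "Signup" custom event -->
<a class="plausible-event-name=Signup" href="/register">Create an account</a>
```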
Umami
Add this to the <head> of every page:
<script defer src="https://analytics.example.com/script.js" data-website-id="YOUR-WEBSITE-ID"></script>
The data-website-id is a UUID generated when you add the site in Umami’s dashboard.
For custom event tracking:
// Track a button click
umami.track('signup-button-click');
// Track with properties
umami.track('download', { file: 'docker-compose.yml', format: 'yaml' });
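Umami can also fire events declaratively from HTML, with no extra JavaScript, using data attributes on the element. A sketch; the event and property names here are illustrative:

```html
<!-- Clicks on this button are recorded as a "signup-button-click" event -->
<button data-umami-event="signup-button-click">Sign up</button>

<!-- Additional data-umami-event-* attributes become event properties -->
<a data-umami-event="download" data-umami-event-file="docker-compose.yml"
   href="/docker-compose.yml">Download</a>
```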
Reverse Proxy Configuration
Run your analytics instance behind a reverse proxy on a subdomain like analytics.example.com or stats.example.com. This gives you HTTPS and keeps the analytics service isolated.
If you use Nginx Proxy Manager, create a new proxy host pointing to your analytics container’s port (8000 for Plausible, 3000 for Umami). Enable SSL with Let’s Encrypt.
For Caddy, add to your Caddyfile:
analytics.example.com {
reverse_proxy localhost:8000 # or :3000 for Umami
}
For Nginx:
server {
listen 443 ssl;
server_name analytics.example.com;
ssl_certificate /etc/letsencrypt/live/analytics.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/analytics.example.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:8000; # or :3000 for Umami
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# WebSocket support (needed for Plausible real-time)
location /api/live/websocket {
proxy_pass http://127.0.0.1:8000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
See Reverse Proxy Setup for detailed configuration guides.
Performance Impact
Privacy-friendly analytics scripts are dramatically smaller than Google’s.
| Script | Transfer Size | Requests | Blocking Time |
|---|---|---|---|
| Google Analytics (gtag.js) | ~28 KB gzipped | 2-3 | ~50-80 ms |
| Plausible | ~0.8 KB gzipped | 1 | <5 ms |
| Umami | ~2 KB gzipped | 1 | <5 ms |
| GoAccess | 0 KB (no script) | 0 | 0 ms |
Both Plausible and Umami use the defer attribute, so the script downloads in parallel with HTML parsing and executes only after the document is parsed, never blocking page rendering. The performance difference is negligible on modern connections but adds up across millions of pageviews: fewer bytes served, lower CDN costs, faster Time to Interactive.
Common Mistakes
Running ClickHouse on a 1 GB RAM server. Plausible requires ClickHouse, which allocates significant memory at startup. Budget at least 2 GB total RAM for a Plausible stack. If you only have 1 GB, use Umami instead.
Forgetting to set DISABLE_REGISTRATION after creating your account. Both tools allow open registration by default. Lock this down immediately after creating your admin account or anyone who finds your analytics URL can create an account.
Using the old Plausible Docker Hub image. The plausible/analytics image on Docker Hub is frozen at v2.0.0 (July 2023). The current image is ghcr.io/plausible/community-edition on GitHub Container Registry. Using the old image means missing two years of features and security fixes.
Not configuring IP anonymization. Both tools hash or discard IP addresses by default, but verify this in your configuration. Storing raw IP addresses, even on your own server, has GDPR implications.
Exposing the analytics port directly. Always put your analytics instance behind a reverse proxy with HTTPS. Running on a bare HTTP port means your tracking data transits the network unencrypted.
Next Steps
- Deploy your chosen analytics tool using the Docker Compose configs above
- Add the tracking script to your site and verify data is flowing
- Set up a reverse proxy with HTTPS for your analytics subdomain
- Configure email (SMTP) if you want weekly reports from Plausible
- Explore custom events in Umami for tracking specific user actions
- For a deeper comparison, see Plausible vs Umami