# Hoarder vs Wallabag: Read-Later Apps Compared

## Quick Verdict
Hoarder is the better choice if you want AI-powered automatic tagging and full-page archiving with a modern interface. Wallabag is the better choice if you want a mature, battle-tested Pocket replacement focused on clean article extraction and offline reading. For most people starting fresh in 2026, Hoarder’s AI features and modern stack make it the more compelling option — but Wallabag’s years of stability and lighter resource footprint still earn it a strong recommendation.
## Overview
Hoarder and Wallabag solve the same core problem — saving web content to read later on your own terms — but they approach it from different angles.
Hoarder is a newer project built on Next.js and TypeScript. Its headline feature is AI-powered automatic tagging: save a link, and Hoarder uses an LLM (OpenAI, Ollama, or any OpenAI-compatible API) to categorize and tag it for you. It also does full-page archiving, storing complete snapshots of pages so content survives link rot. The interface is modern and snappy, with browser extensions and mobile apps.
Wallabag has been around since 2013 (originally as poche). It is a direct self-hosted replacement for Pocket and Instapaper. Built on PHP with Symfony, it focuses on article extraction — stripping away ads, navigation, and clutter to give you clean, readable text. It has mature browser extensions, solid mobile apps, and import tools for migrating from Pocket, Instapaper, and Pinboard. Wallabag is proven, stable, and widely deployed.
## Feature Comparison
| Feature | Hoarder | Wallabag |
|---|---|---|
| AI Auto-Tagging | Yes — LLM-powered (OpenAI, Ollama, compatible APIs) | No |
| Full-Page Archiving | Yes — complete page snapshots | No — article text extraction only |
| Article Extraction | Basic — leans on full-page archives rather than a dedicated parser | Strong — Mozilla Readability-based parser |
| Browser Extension | Chrome, Firefox | Chrome, Firefox, Opera, Safari |
| Mobile Apps | iOS, Android | iOS, Android (mature, well-maintained) |
| Offline Reading | Via archived snapshots | Yes — dedicated offline mode with epub export |
| Tagging | Automatic (AI) + manual | Manual only |
| Search | Full-text via Meilisearch | Full-text built-in |
| Import from Pocket/Instapaper | Limited | Full import support (Pocket, Instapaper, Pinboard, browser bookmarks) |
| API | REST API | REST API (mature, well-documented) |
| RSS Feeds | No | Yes — per-tag and per-category RSS feeds |
| Annotations | No | Yes — highlight and annotate saved articles |
| Multi-User | Yes | Yes |
| License | AGPL-3.0 | MIT |
| Language/Framework | TypeScript / Next.js | PHP / Symfony |
| Primary Database | PostgreSQL + Meilisearch | PostgreSQL, MySQL, or SQLite |
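Both projects expose REST APIs (see the table above). As a hedged sketch of what Wallabag's looks like in practice: you first create an API client under "API clients management" in the web UI, then exchange those credentials for an OAuth2 token. The hostname, client ID/secret, and credentials below are all placeholders, and `jq` is assumed to be installed:

```shell
# Request an OAuth2 token (placeholders — substitute your own values)
TOKEN=$(curl -s https://wallabag.example.com/oauth/v2/token \
  -d grant_type=password \
  -d client_id=YOUR_CLIENT_ID \
  -d client_secret=YOUR_CLIENT_SECRET \
  -d username=wallabag \
  -d password=wallabag | jq -r .access_token)

# Save an article to your Wallabag instance
curl -s https://wallabag.example.com/api/entries.json \
  -H "Authorization: Bearer $TOKEN" \
  -d url=https://example.com/some-article
```

This is the same API the mobile apps and browser extensions use, so anything they can do can be scripted.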
## Docker Compose: Hoarder
Hoarder requires PostgreSQL for data storage and Meilisearch for full-text search. Optionally, connect an LLM for AI tagging.
Create a `docker-compose.yml`:
```yaml
services:
  hoarder:
    image: ghcr.io/hoarder-app/hoarder:0.21.0
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      # Required — database connection
      DATABASE_URL: "postgresql://hoarder:changeme-hoarder-db@hoarder-db:5432/hoarder"
      # Required — Meilisearch connection
      MEILI_ADDR: "http://meilisearch:7700"
      MEILI_MASTER_KEY: "changeme-meili-master-key" # Must match Meilisearch config
      # Required — encryption key for sessions (generate with: openssl rand -hex 32)
      NEXTAUTH_SECRET: "changeme-generate-a-random-64-char-hex-string"
      NEXTAUTH_URL: "http://localhost:3000"
      # Optional — AI tagging via OpenAI-compatible API
      # OPENAI_API_KEY: "sk-your-openai-key"
      # OPENAI_BASE_URL: "http://ollama:11434/v1" # Use this for local Ollama
      # INFERENCE_TEXT_MODEL: "gpt-4o-mini"
    volumes:
      - hoarder_data:/data
    depends_on:
      hoarder-db:
        condition: service_healthy
      meilisearch:
        condition: service_started
    networks:
      - hoarder-net

  hoarder-db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: hoarder
      POSTGRES_PASSWORD: changeme-hoarder-db # Change this — must match DATABASE_URL above
      POSTGRES_DB: hoarder
    volumes:
      - hoarder_pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U hoarder"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - hoarder-net

  meilisearch:
    image: getmeili/meilisearch:v1.12.3
    restart: unless-stopped
    environment:
      MEILI_MASTER_KEY: "changeme-meili-master-key" # Must match Hoarder config
      MEILI_ENV: "production"
    volumes:
      - hoarder_meili:/meili_data
    networks:
      - hoarder-net

volumes:
  hoarder_data:
  hoarder_pgdata:
  hoarder_meili:

networks:
  hoarder-net:
```
Start the stack:
```shell
docker compose up -d
```
Access Hoarder at http://your-server:3000. Create your account on first visit. AI tagging requires uncommenting and configuring the OpenAI environment variables — either with an OpenAI API key or a local Ollama instance.
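Both `NEXTAUTH_SECRET` and `MEILI_MASTER_KEY` should be long random values rather than the `changeme` placeholders. One way to generate them, assuming `openssl` is available on your host:

```shell
# Print a random 64-character hex string, suitable for NEXTAUTH_SECRET
# or MEILI_MASTER_KEY; run it once per secret you need
openssl rand -hex 32
```

Paste one generated value into each secret field before the first start — Meilisearch persists its master key, so changing it later is more disruptive.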
## Docker Compose: Wallabag
Wallabag works with PostgreSQL, MySQL, or SQLite. This config uses PostgreSQL with Redis for caching.
Create a `docker-compose.yml`:
```yaml
services:
  wallabag:
    image: wallabag/wallabag:2.6.10
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      # Required — database configuration
      SYMFONY__ENV__DATABASE_DRIVER: "pdo_pgsql"
      SYMFONY__ENV__DATABASE_HOST: "wallabag-db"
      SYMFONY__ENV__DATABASE_PORT: "5432"
      SYMFONY__ENV__DATABASE_NAME: "wallabag"
      SYMFONY__ENV__DATABASE_USER: "wallabag"
      SYMFONY__ENV__DATABASE_PASSWORD: "changeme-wallabag-db" # Must match PostgreSQL config
      # Required — application secret (generate with: openssl rand -hex 32)
      SYMFONY__ENV__SECRET: "changeme-generate-a-random-hex-string"
      # Required — public URL of your instance (update for production)
      SYMFONY__ENV__DOMAIN_NAME: "http://localhost:8080"
      # Optional — instance name shown in the web UI
      SYMFONY__ENV__SERVER_NAME: "selfhosting.sh Wallabag"
      # Required — Redis for caching and async imports
      SYMFONY__ENV__REDIS_HOST: "redis"
      SYMFONY__ENV__REDIS_PORT: "6379"
      # Optional — disable open registration and email confirmation
      SYMFONY__ENV__FOSUSER_REGISTRATION: "false"
      SYMFONY__ENV__FOSUSER_CONFIRMATION: "false"
    volumes:
      - wallabag_images:/var/www/wallabag/web/assets/images
      - wallabag_data:/var/www/wallabag/data
    depends_on:
      wallabag-db:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - wallabag-net

  wallabag-db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: wallabag
      POSTGRES_PASSWORD: changeme-wallabag-db # Change this — must match Wallabag config
      POSTGRES_DB: wallabag
    volumes:
      - wallabag_pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U wallabag"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - wallabag-net

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - wallabag_redis:/data
    networks:
      - wallabag-net

volumes:
  wallabag_images:
  wallabag_pgdata:
  wallabag_data:
  wallabag_redis:

networks:
  wallabag-net:
```
Start the stack:
```shell
docker compose up -d
```
Access Wallabag at http://your-server:8080. Default credentials are wallabag / wallabag — change these immediately after first login.
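Because registration is disabled in this config (`FOSUSER_REGISTRATION: "false"`), any additional accounts are created from the command line. A hedged sketch using Wallabag's Symfony console inside the running container — the console path matches the official image, and the username, email, and password below are placeholders:

```shell
# Create a second user inside the running wallabag container
docker compose exec wallabag /var/www/wallabag/bin/console \
  fos:user:create alice alice@example.com a-strong-password --env=prod
```

The same console exposes other FOSUserBundle commands (changing passwords, promoting users) if you need them later.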
## Installation Complexity
Hoarder has a straightforward Docker setup with three containers (app, PostgreSQL, Meilisearch). The main complexity is configuring AI tagging — you need either an OpenAI API key (costs money per request) or a local Ollama instance (needs a GPU or beefy CPU). Without AI, Hoarder still works for manual bookmarking and archiving, but you lose the headline feature.
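For fully local AI tagging, one approach is to add an Ollama service to the same compose file and point Hoarder's OpenAI-compatible settings at it (uncomment `OPENAI_BASE_URL: "http://ollama:11434/v1"` and set `INFERENCE_TEXT_MODEL` to a model you have pulled). A hedged sketch — the image tag and model choice are assumptions, not recommendations:

```yaml
services:
  ollama:
    image: ollama/ollama:0.5.7   # assumption — pin whatever current tag you prefer
    restart: unless-stopped
    volumes:
      - hoarder_ollama:/root/.ollama   # persist downloaded models
    networks:
      - hoarder-net

volumes:
  hoarder_ollama:
```

After starting the stack, pull a model into the container (for example `docker compose exec ollama ollama pull llama3.2`) and set `INFERENCE_TEXT_MODEL` to that model's name. Smaller models tag noticeably faster on CPU-only hosts.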
Wallabag also runs three containers (app, PostgreSQL, Redis) with a comparable setup. The Symfony-based configuration uses long environment variable names, which looks verbose but is actually well-documented. The initial database migration runs automatically on first start. Wallabag is more forgiving on hardware — no search engine or AI model to run.
Both are manageable for anyone comfortable with Docker Compose. Wallabag edges ahead on simplicity because it has no optional AI configuration to think about.
## Performance and Resource Usage
| Resource | Hoarder | Wallabag |
|---|---|---|
| App RAM | ~200 MB | ~150 MB |
| Search/Cache RAM | ~100 MB (Meilisearch) | ~30 MB (Redis) |
| Database RAM | ~50 MB (PostgreSQL) | ~50 MB (PostgreSQL) |
| Total RAM | ~350 MB | ~230 MB |
| CPU (idle) | Low | Low |
| CPU (archiving/extraction) | Medium — full-page screenshots and archiving | Low — text extraction only |
| Disk usage | Higher — stores full page archives | Lower — stores extracted article text |
Hoarder uses more resources across the board. Full-page archiving stores significantly more data than text extraction, and Meilisearch is heavier than Redis. If you’re running AI tagging locally via Ollama, add another 2-4 GB of RAM for the model.
Wallabag is the lighter option. It stores just the extracted article content, uses Redis for simple caching, and has no AI workload. On a Raspberry Pi 4 or low-end VPS, Wallabag runs comfortably. Hoarder is feasible on the same hardware but will feel the squeeze when archiving many pages simultaneously.
## Community and Support
Wallabag has the advantage of time. Active since 2013, it has a large user base, extensive documentation, and a well-established community. The project has over 10,000 GitHub stars, regular releases, and a strong track record. Documentation covers every feature, API endpoint, and configuration option. You will find answers to almost any Wallabag question on forums, Reddit, and the official docs.
Hoarder is newer but growing fast. The community is active on GitHub and Discord. Development pace is rapid, with frequent releases adding features. Documentation is good but not as comprehensive as Wallabag’s — some edge cases require reading GitHub issues. The project has strong momentum and an engaged contributor base.
For stability and long-term confidence, Wallabag wins. For active development pace and feature velocity, Hoarder leads.
## Use Cases

### Choose Hoarder If…
- You want AI-powered automatic tagging — save a link and let the LLM categorize it for you
- You care about full-page archiving — preserving complete snapshots of pages, not just extracted text
- You prefer a modern, polished UI built with contemporary web technologies
- You already run Ollama or have an OpenAI API key and want to leverage AI in your workflow
- You value visual bookmarking — Hoarder stores screenshots and previews of saved pages
- You are building a research archive where preserving the exact original page matters
### Choose Wallabag If…
- You want a direct Pocket/Instapaper replacement with mature import tools
- Offline reading is critical — Wallabag’s epub export and dedicated offline mode are strong
- You need annotations — highlighting and annotating articles within the app
- You want RSS feeds generated from your saved articles (per tag or category)
- You are on limited hardware — Wallabag’s lighter footprint fits small servers and Raspberry Pis
- You value proven stability — Wallabag has 10+ years of production use
- You prefer an MIT license over AGPL-3.0
## Final Verdict
Hoarder is the better pick for most new users in 2026. The AI auto-tagging is genuinely useful — it eliminates the friction of organizing saved content, which is the main reason most people’s bookmarks devolve into an unsearchable mess. Full-page archiving means you actually keep the content, not just a link that might die. The interface is clean and fast.
Wallabag remains the better choice for dedicated readers. If your primary workflow is “save article, read it later on the couch, maybe annotate it,” Wallabag’s superior article extraction, epub export, offline reading, and annotation features serve that use case better than Hoarder does. It is also the safer bet if you want something with a decade of stability behind it.
If you are unsure: start with Hoarder. The AI tagging will save you hours of manual organization, and you can always export your data if you decide to switch later.