How to Self-Host Open WebUI with Docker
What Is Open WebUI?
Open WebUI is a self-hosted web interface for interacting with large language models. It provides a ChatGPT-like experience — conversation history, model switching, file uploads, web search, and multi-user support — all running on your own hardware. It connects to Ollama for local models and supports OpenAI-compatible APIs for cloud models.
Updated March 2026: Verified with latest Docker images and configurations.
Prerequisites
- A Linux server (Ubuntu 22.04+ recommended)
- Docker and Docker Compose installed (guide)
- 4 GB of RAM minimum (8 GB+ recommended with Ollama)
- 5 GB of free disk space (plus model storage)
- Ollama running locally or accessible on the network
Docker Compose Configuration
Create a docker-compose.yml file:
Open WebUI + Ollama (Recommended)
```yaml
services:
  ollama:
    image: ollama/ollama:v0.18.2
    container_name: ollama
    volumes:
      # Stores downloaded LLM models
      - ollama_data:/root/.ollama
    restart: unless-stopped
    # Uncomment for NVIDIA GPU support
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:v0.8.10
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      # Stores database, uploads, and user data
      - open-webui_data:/app/backend/data
    environment:
      # Connect to the Ollama container by service name
      - OLLAMA_BASE_URL=http://ollama:11434
      # CHANGE THIS — used for JWT token signing
      - WEBUI_SECRET_KEY=change-this-to-a-random-64-char-string
      # Disable telemetry
      - SCARF_NO_ANALYTICS=true
      - DO_NOT_TRACK=true
      - ANONYMIZED_TELEMETRY=false
    depends_on:
      - ollama
    restart: unless-stopped

volumes:
  ollama_data:
  open-webui_data:
```
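WEBUI_SECRET_KEY signs session tokens, so replace the placeholder with a long random value before first launch. One way to generate a suitable 64-character key, assuming openssl is available on your server:

```shell
# Print a random 64-character hex string suitable for WEBUI_SECRET_KEY
openssl rand -hex 32
```

Paste the output into the WEBUI_SECRET_KEY line of your Compose file (or an .env file it references).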
Open WebUI Only (Connecting to Existing Ollama)
If Ollama is already running on the host or another server:
```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:v0.8.10
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - open-webui_data:/app/backend/data
    environment:
      # Point to Ollama on the host machine
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
      - WEBUI_SECRET_KEY=change-this-to-a-random-64-char-string
      - SCARF_NO_ANALYTICS=true
      - DO_NOT_TRACK=true
      - ANONYMIZED_TELEMETRY=false
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: unless-stopped

volumes:
  open-webui_data:
```
Start the stack:
```shell
docker compose up -d
```
Initial Setup
- Open http://your-server-ip:3000
- Click Sign Up to create the first account. The first user automatically becomes the admin.
- After signing in, pull a model if Ollama has none:
  - Go to Settings (gear icon) > Models
  - Enter a model name (e.g., llama3.1) and click the download icon
  - Wait for the download to complete
- Start chatting by selecting a model from the dropdown and typing a message
Configuration
User Management
- First user is admin. All subsequent users are regular users by default.
- Admin can promote users at Admin Panel > Users
- To pre-create an admin account on first startup, set:
```yaml
environment:
  - WEBUI_ADMIN_EMAIL=admin@example.com
  - WEBUI_ADMIN_PASSWORD=your-strong-password
```
Single-User Mode (No Login)
For personal servers where only you have network access:
```yaml
environment:
  - WEBUI_AUTH=False
```
Warning: This cannot be changed after first startup without resetting the database. Decide before your first launch.
Connecting to OpenAI API
Open WebUI can also use OpenAI or any OpenAI-compatible API alongside Ollama:
```yaml
environment:
  - OPENAI_API_BASE_URL=https://api.openai.com/v1
  - OPENAI_API_KEY=sk-your-api-key
```
This lets you use GPT-4, Claude (via compatible proxies), or any other OpenAI-compatible endpoint alongside your local models.
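If you want several OpenAI-compatible endpoints at once, Open WebUI also accepts the plural variables OPENAI_API_BASE_URLS and OPENAI_API_KEYS, with entries separated by semicolons and matched by position. The second endpoint below (a LiteLLM proxy) is only an example of what you might run:

```yaml
environment:
  # First key pairs with first URL, second with second, and so on
  - OPENAI_API_BASE_URLS=https://api.openai.com/v1;http://litellm:4000/v1
  - OPENAI_API_KEYS=sk-your-api-key;sk-your-proxy-key
```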
Database Configuration
By default, Open WebUI uses SQLite stored in the data volume. For production with multiple users, switch to PostgreSQL:
```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:v0.8.10
    environment:
      # Credentials must match the db service below
      - DATABASE_URL=postgresql://openwebui:change-this-strong-password@db:5432/openwebui
    depends_on:
      - db

  db:
    image: postgres:16-alpine
    container_name: open-webui-db
    environment:
      - POSTGRES_DB=openwebui
      - POSTGRES_USER=openwebui
      - POSTGRES_PASSWORD=change-this-strong-password
    volumes:
      - db_data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  db_data:
```
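Open WebUI can start before PostgreSQL is ready to accept connections, so it helps to add a healthcheck and gate startup on it. A sketch, with arbitrary interval values, that you can merge into the services above:

```yaml
  open-webui:
    depends_on:
      db:
        condition: service_healthy

  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U openwebui -d openwebui"]
      interval: 5s
      timeout: 3s
      retries: 10
```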
Key Environment Variables
| Variable | Default | Purpose |
|---|---|---|
| OLLAMA_BASE_URL | http://localhost:11434 | Ollama API URL. Use the service name in Compose |
| WEBUI_SECRET_KEY | t0p-s3cr3t | JWT signing key. Must be changed in production |
| WEBUI_AUTH | True | Set to False for single-user mode |
| CORS_ALLOW_ORIGIN | * | Restrict to your domain in production |
| OFFLINE_MODE | False | Disables outbound version checks |
| SAFE_MODE | False | Restricted operation mode |
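Put together, a production-leaning environment block based on the table above might look like this (the domain is a placeholder for your own):

```yaml
environment:
  - WEBUI_SECRET_KEY=your-generated-64-char-key
  # Only allow requests from the domain the UI is served on
  - CORS_ALLOW_ORIGIN=https://chat.example.com
  # Skip outbound version checks on an isolated server
  - OFFLINE_MODE=True
```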
Advanced Configuration (Optional)
RAG (Retrieval-Augmented Generation)
Open WebUI supports uploading documents and using them as context for conversations:
- In a chat, click the + button and upload a PDF, TXT, or other document
- The content is automatically chunked and indexed
- The model uses the document as context when answering questions
For large document collections, configure a vector database in the admin settings.
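Recent releases also expose a VECTOR_DB environment variable for switching the backing store away from the built-in Chroma database. The exact variable names vary by version, so treat this as a sketch and check the docs for your release; the Qdrant URL assumes a container named qdrant on the same network:

```yaml
environment:
  # Use an external Qdrant instance instead of the built-in Chroma store
  - VECTOR_DB=qdrant
  - QDRANT_URI=http://qdrant:6333
```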
Web Search Integration
Enable web search so models can access current information:
- Go to Admin Panel > Settings > Web Search
- Configure a search provider (SearXNG, Google, Brave, etc.)
- If you’re self-hosting SearXNG, point it to your instance
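For a self-hosted SearXNG instance, the same settings can also be supplied as environment variables. The names below follow older Open WebUI releases (newer ones shorten them to ENABLE_WEB_SEARCH and WEB_SEARCH_ENGINE), and the SearXNG URL is an assumption about your setup:

```yaml
environment:
  - ENABLE_RAG_WEB_SEARCH=True
  - RAG_WEB_SEARCH_ENGINE=searxng
  # <query> is substituted with the user's search terms
  - SEARXNG_QUERY_URL=http://searxng:8080/search?q=<query>
```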
Model Presets and Customization
Create custom model configurations:
- Go to Workspace > Models
- Click Create a Model
- Set a system prompt, temperature, and other parameters
- Save and use it as a named model in chats
Authentication with OAuth/OIDC
For enterprise setups, configure SSO:
```yaml
environment:
  - OAUTH_PROVIDER_NAME=Authentik
  - OPENID_PROVIDER_URL=https://auth.example.com/application/o/open-webui/.well-known/openid-configuration
  - OAUTH_CLIENT_ID=your-client-id
  - OAUTH_CLIENT_SECRET=your-client-secret
  - OAUTH_SCOPES=openid email profile
```
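By default, SSO only signs in users who already have local accounts. To let SSO users register on first login, Open WebUI provides an additional flag, shown here together with account merging by email, which you should only enable if you trust the provider's email claims:

```yaml
environment:
  - ENABLE_OAUTH_SIGNUP=true
  - OAUTH_MERGE_ACCOUNTS_BY_EMAIL=true
```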
Reverse Proxy
Forward port 3000 (or your chosen host port) through your reverse proxy.
Nginx config snippet:
```nginx
location / {
    proxy_pass http://localhost:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # WebSocket support for streaming responses
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # Increase timeout for long model responses
    proxy_read_timeout 300s;
}
```
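If you use Caddy instead of Nginx, the equivalent configuration is much shorter, since Caddy handles TLS, WebSocket upgrades, and proxy headers automatically (chat.example.com is a placeholder for your domain):

```
chat.example.com {
    reverse_proxy localhost:3000
}
```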
See Reverse Proxy Setup for full configuration.
Backup
Back up the data volume:
```shell
docker compose stop open-webui
docker run --rm -v open-webui_data:/data -v $(pwd):/backup alpine \
  tar czf /backup/open-webui-backup.tar.gz /data
docker compose start open-webui
```
The data volume contains the database (conversations, users, settings), uploaded files, and custom configurations.
See Backup Strategy for a comprehensive approach.
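To run the backup on a schedule, the three commands above can go into a single cron job. A sketch as a system crontab entry (the paths and time are examples, and note that % must be escaped in cron):

```
# /etc/cron.d/open-webui-backup: run nightly at 03:00
0 3 * * * root cd /opt/open-webui && docker compose stop open-webui && docker run --rm -v open-webui_data:/data -v /opt/backups:/backup alpine tar czf /backup/open-webui-$(date +\%F).tar.gz /data && docker compose start open-webui
```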
Troubleshooting
"Ollama is not reachable"
Symptom: Open WebUI shows Ollama connection error.
Fix: Verify OLLAMA_BASE_URL is set correctly:
- If Ollama is in the same Compose file: http://ollama:11434
- If Ollama is on the host: http://host.docker.internal:11434 (with extra_hosts configured)
- If Ollama is on another server: http://192.168.1.x:11434

Test from inside the container:
```shell
docker exec open-webui curl http://ollama:11434/api/tags
```
No models appear in dropdown
Symptom: Model selector is empty.
Fix: Models need to be downloaded into Ollama first:
```shell
docker exec ollama ollama pull llama3.1
```
Then refresh the Open WebUI page.
Login page keeps redirecting
Symptom: Sign-in redirects back to the login page.
Fix: This usually means WEBUI_SECRET_KEY changed between restarts, invalidating all sessions. Set a persistent key in your Compose file and restart:
```shell
docker compose down && docker compose up -d
```
Streaming responses not working behind proxy
Symptom: Responses appear all at once instead of streaming word-by-word.
Fix: Ensure your reverse proxy supports Server-Sent Events. In Nginx:
```nginx
proxy_buffering off;
proxy_cache off;
```
Data lost after container recreation
Symptom: Conversations and settings disappear.
Fix: The data volume must persist. Use a named volume (open-webui_data:/app/backend/data), not an anonymous volume. If you used docker run without -v, migrate to a Compose file with named volumes.
Resource Requirements
- RAM: ~500 MB for Open WebUI alone. Add Ollama’s requirements (model size + 2-4 GB).
- CPU: Low for the web UI. Model inference is handled by Ollama.
- Disk: ~2 GB for the application, plus conversation history and uploaded files.
Verdict
Open WebUI is the best self-hosted ChatGPT alternative. The interface is polished, feature-rich, and actively developed. Combined with Ollama, you get a fully local AI assistant with zero data leaving your network. Multi-user support, conversation history, document uploads, and web search make it genuinely useful for daily work.
If you want a simpler text-generation interface without the ChatGPT-style features, Text Generation WebUI gives more low-level control. But for most people, Open WebUI + Ollama is the right stack.
Frequently Asked Questions
Do I need a GPU to run Open WebUI?
No. Open WebUI is just the web interface — it doesn’t run AI models itself. It connects to Ollama or OpenAI-compatible APIs. A GPU is needed for Ollama to run models efficiently, but Open WebUI itself uses minimal resources.
Can I use Open WebUI with ChatGPT/OpenAI?
Yes. Set OPENAI_API_BASE_URLS and OPENAI_API_KEYS in your Compose file. Open WebUI supports any OpenAI-compatible API, including OpenAI, Anthropic (via proxy), and local alternatives.
Open WebUI vs text-generation-webui — which is better?
Open WebUI provides a ChatGPT-like experience with conversations, users, and document uploads. Text Generation WebUI is more technical, offering parameter control, model loading options, and advanced inference settings. Use Open WebUI for daily chat; use text-generation-webui for experimentation.
Can multiple users share one instance?
Yes. Open WebUI has built-in multi-user support with separate conversations, settings, and permissions per user. The first registered user becomes admin.
How do I add new models?
Models are managed through Ollama, not Open WebUI. Pull models via the Ollama CLI (ollama pull llama3.1) and they automatically appear in Open WebUI’s model dropdown.
Does Open WebUI support document uploads (RAG)?
Yes. Upload PDF, DOCX, TXT, and other documents through the chat interface. Open WebUI indexes them and uses retrieval-augmented generation to answer questions based on your documents.