How to Self-Host Flowise with Docker Compose
What Is Flowise?
Flowise is a visual drag-and-drop tool for building LLM-powered chatbots, agents, and RAG pipelines. Instead of writing code, you connect nodes on a canvas — LLM providers, vector databases, document loaders, and tools — to create AI workflows. Built on LangChain.js, Flowise can connect to OpenAI, Anthropic, Ollama, and dozens of other LLM providers.
Prerequisites
- A Linux server (Ubuntu 22.04+ recommended)
- Docker and Docker Compose installed (guide)
- 2 GB+ RAM
- 5 GB free disk space
- An LLM backend (Ollama, OpenAI API key, or similar)
Docker Compose Configuration
Create a docker-compose.yml file:
```yaml
services:
  flowise:
    image: flowiseai/flowise:3.0.13
    container_name: flowise
    ports:
      - "3000:3000"
    volumes:
      - flowise_data:/root/.flowise
    environment:
      # Authentication
      - FLOWISE_USERNAME=admin
      - FLOWISE_PASSWORD=changeme-use-strong-password
      # API key storage
      - APIKEY_PATH=/root/.flowise
      - SECRETKEY_PATH=/root/.flowise
      # Database (SQLite by default)
      - DATABASE_PATH=/root/.flowise
      # Optional: log level
      - LOG_LEVEL=info
      # Optional: execution timeout (ms)
      - EXECUTION_TIMEOUT=300000
    restart: unless-stopped

volumes:
  flowise_data:
```
For production with PostgreSQL:
```yaml
services:
  flowise:
    image: flowiseai/flowise:3.0.13
    container_name: flowise
    ports:
      - "3000:3000"
    volumes:
      - flowise_data:/root/.flowise
    environment:
      - FLOWISE_USERNAME=admin
      - FLOWISE_PASSWORD=changeme-use-strong-password
      - DATABASE_TYPE=postgres
      - DATABASE_HOST=flowise-db
      - DATABASE_PORT=5432
      - DATABASE_USER=flowise
      - DATABASE_PASSWORD=flowise-db-password-change-this
      - DATABASE_NAME=flowise
      - APIKEY_PATH=/root/.flowise
      - SECRETKEY_PATH=/root/.flowise
    depends_on:
      flowise-db:
        condition: service_healthy
    restart: unless-stopped

  flowise-db:
    image: postgres:16-alpine
    container_name: flowise-db
    volumes:
      - flowise_db_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=flowise
      - POSTGRES_PASSWORD=flowise-db-password-change-this
      - POSTGRES_DB=flowise
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U flowise"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

volumes:
  flowise_data:
  flowise_db_data:
```
Start the stack:
```bash
docker compose up -d
```
Initial Setup
1. Open `http://your-server:3000` in your browser
2. Log in with the credentials you set in `FLOWISE_USERNAME` / `FLOWISE_PASSWORD`
3. Click Chatflows → Add New to create your first workflow
4. Drag nodes from the sidebar to build your pipeline
Quick Start: Ollama Chatbot
1. Drag a ChatOllama node onto the canvas
2. Set Base URL to `http://host.docker.internal:11434` (or your Ollama URL)
3. Set Model Name to `llama3.2`
4. Drag a Conversational Agent node and connect it
5. Click Save, then Chat to test
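Before debugging the flow itself, it can help to confirm that Ollama is reachable from inside the Flowise container at all. A quick sanity check, assuming the container name `flowise` from the compose file above (the image is Alpine-based, so BusyBox `wget` should be available; adjust the URL to wherever your Ollama instance runs):

```shell
# List Ollama's installed models, as seen from inside the Flowise container.
# A JSON response means the ChatOllama node should be able to connect too.
docker exec flowise wget -qO- http://host.docker.internal:11434/api/tags
```

If this fails but `curl http://localhost:11434/api/tags` works on the host, the problem is container-to-host networking, not Ollama.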
Configuration
Key Environment Variables
| Variable | Default | Description |
|---|---|---|
| `FLOWISE_USERNAME` | (unset) | Username for UI authentication |
| `FLOWISE_PASSWORD` | (unset) | Password for UI authentication |
| `DATABASE_TYPE` | `sqlite` | Database type: `sqlite`, `postgres`, `mysql` |
| `DATABASE_PATH` | `/root/.flowise` | SQLite database location |
| `APIKEY_PATH` | `/root/.flowise` | API key storage path |
| `SECRETKEY_PATH` | `/root/.flowise` | Secret key storage path |
| `LOG_LEVEL` | `info` | Logging level: `error`, `info`, `verbose`, `debug` |
| `EXECUTION_TIMEOUT` | `300000` | Flow execution timeout in ms |
| `CORS_ORIGINS` | `*` | Allowed CORS origins |
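Rather than hard-coding credentials in `docker-compose.yml`, you can move them into a `.env` file next to the compose file; Docker Compose substitutes `${VAR}` references automatically. A minimal sketch:

```yaml
# docker-compose.yml excerpt: reference variables instead of literals.
# Put lines like FLOWISE_PASSWORD=a-long-random-password in a .env file
# next to this compose file (and keep .env out of version control).
services:
  flowise:
    environment:
      - FLOWISE_USERNAME=admin
      - FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
```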
Embedding a Chat Widget
Every chatflow gets an embed code. Click the </> Embed button to get a script tag you can paste into any website:
```html
<script type="module">
  import Chatbot from "https://cdn.jsdelivr.net/npm/flowise-embed/dist/web.js"
  Chatbot.init({
    chatflowid: "your-chatflow-id",
    apiHost: "https://flowise.example.com",
  })
</script>
```
Advanced Configuration
RAG Pipeline
Build a document-based Q&A chatbot:
- Document Loader → PDF, CSV, or web page loader
- Text Splitter → Recursive Character Text Splitter
- Embeddings → Ollama Embeddings or OpenAI Embeddings
- Vector Store → Chroma, Pinecone, or Qdrant
- Retrieval QA Chain → Connects to your LLM
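Once the flow is saved, ingestion doesn't have to happen through the UI: Flowise exposes a per-chatflow vector upsert endpoint, so documents can be pushed programmatically. A sketch, assuming a file-based document loader in the flow (`handbook.pdf` is a placeholder; the exact behavior depends on your loader node):

```shell
# Re-index a local document into the chatflow's vector store.
# "files" is the multipart form field used by file-based document loaders.
curl -X POST http://localhost:3000/api/v1/vector/upsert/your-chatflow-id \
  -F "files=@./handbook.pdf"
```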
API Access
Every chatflow is accessible via API:
```bash
curl -X POST http://localhost:3000/api/v1/prediction/your-chatflow-id \
  -H "Content-Type: application/json" \
  -d '{"question": "What is self-hosting?"}'
```
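If you generate an API key in the Flowise UI, pass it as a Bearer token. The request body also accepts an `overrideConfig` object to tweak node settings per request — a sketch, since the available override keys depend on the nodes in your flow:

```shell
# Authenticated prediction call with a per-request parameter override.
# The API key placeholder and the "temperature" key are illustrative.
curl -X POST http://localhost:3000/api/v1/prediction/your-chatflow-id \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-flowise-api-key" \
  -d '{
        "question": "Summarize the uploaded handbook.",
        "overrideConfig": { "temperature": 0.2 }
      }'
```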
Reverse Proxy
Configure your reverse proxy to forward to port 3000. WebSocket support is required for the chat interface. See Reverse Proxy Setup.
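As a sketch, an nginx server block forwarding to Flowise might look like this (assumes nginx on the same host, with TLS termination handled separately; the `Upgrade`/`Connection` headers provide the WebSocket support mentioned above):

```nginx
server {
    listen 80;
    server_name flowise.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        # Required for the chat interface's WebSocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```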
Backup
Back up the Flowise data volume:
```bash
docker run --rm -v flowise_data:/data -v $(pwd):/backup alpine \
  tar czf /backup/flowise-backup.tar.gz /data
```
This contains chatflows, credentials, API keys, and the SQLite database (if using SQLite). See Backup Strategy.
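To restore, reverse the operation into a stopped stack — a sketch, assuming the archive was created with the command above (tar stores the paths relative to `/`, so extracting at `/` recreates `/data`):

```shell
# Stop Flowise so nothing writes to the volume during the restore.
docker compose down

# Clear the volume and unpack the archive back into it.
docker run --rm -v flowise_data:/data -v $(pwd):/backup alpine \
  sh -c "rm -rf /data/* && tar xzf /backup/flowise-backup.tar.gz -C /"

docker compose up -d
```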
Troubleshooting
Cannot Connect to Ollama
Symptom: ChatOllama node fails to connect.
Fix: Use `http://host.docker.internal:11434` on Docker Desktop, or the Docker bridge IP (`http://172.17.0.1:11434`) on Linux. Ensure Ollama is listening on 0.0.0.0: set `OLLAMA_HOST=0.0.0.0:11434`.
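On Linux, with Ollama installed as a systemd service (the service name `ollama` assumes the official install script), the listen address can be set with a drop-in override — a sketch:

```shell
# Open a drop-in override file for the Ollama service.
sudo systemctl edit ollama
# In the editor, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
# Then apply the change:
sudo systemctl restart ollama
```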
Chatflow Execution Times Out
Symptom: Flow returns timeout error.
Fix: Increase the `EXECUTION_TIMEOUT` environment variable. The default is 300000 ms (5 minutes). Complex RAG pipelines with large documents may need more.
Authentication Bypass
Symptom: UI accessible without login.
Fix: Ensure both FLOWISE_USERNAME and FLOWISE_PASSWORD are set. If either is empty, authentication is disabled.
Resource Requirements
- RAM: 200-500 MB (Flowise itself is lightweight; LLM backends are separate)
- CPU: Low
- Disk: 1-5 GB depending on number of chatflows and stored documents
Verdict
Flowise is the easiest way to build AI chatbots and RAG pipelines without writing code. The drag-and-drop interface makes it accessible to non-developers while remaining powerful enough for complex workflows. The embeddable chat widget is a killer feature for deploying AI chatbots to websites.
Choose Flowise for no-code chatbot building. Choose Langflow if you need Python custom components and more advanced multi-agent orchestration.
Frequently Asked Questions
Does Flowise need its own LLM, or can I use OpenAI/Anthropic?
Flowise doesn’t include an LLM — it connects to external providers. You can use OpenAI, Anthropic, Ollama (local), Google AI, Azure OpenAI, HuggingFace, and dozens more. For fully local/private AI, pair Flowise with Ollama running on the same server.
Can I embed a Flowise chatbot on my website?
Yes. Every chatflow generates an embeddable script tag that you can paste into any website. The chat widget appears as a floating button that visitors can click to interact with your AI pipeline. You can customize the widget’s appearance, initial messages, and behavior.
How does Flowise compare to Langflow?
Both are visual LLM workflow builders. Flowise is built on LangChain.js (JavaScript), focuses on simplicity, and has a polished embeddable chat widget. Langflow is built on LangChain (Python), supports custom Python components, and is better for multi-agent orchestration. Choose Flowise for ease of use; choose Langflow for Python ecosystem access.
Does Flowise store conversation history?
Yes. Flowise stores chat histories in its database (SQLite or PostgreSQL). You can view past conversations in the UI. For production chatbots, use PostgreSQL to avoid SQLite’s concurrency limitations.
Can Flowise connect to my own documents for RAG?
Yes. The RAG pipeline is a core feature — drag in a document loader (PDF, CSV, web scraper), connect it to a text splitter, embeddings model, and vector store (Chroma, Pinecone, Qdrant, or others). Flowise handles the ingestion, chunking, embedding, and retrieval pipeline visually.
Is Flowise suitable for production chatbots?
Yes, with the PostgreSQL configuration. The embeddable widget, API access, and webhook integrations make it viable for customer-facing chatbots. For high-traffic deployments, ensure your LLM provider can handle the request volume — Flowise itself is lightweight but the LLM backend is the bottleneck.