Install Portainer on Proxmox VE
Why Proxmox for Portainer?
In a Proxmox homelab, you typically run Docker across multiple VMs and LXC containers. Portainer centralizes the management of all those Docker instances into a single web UI. Deploy it in a lightweight VM, install Portainer Agents on your other Docker hosts, and manage everything from one place — no SSH required.
This guide covers creating a minimal VM for Portainer, deploying agents across your Proxmox environment, and managing Docker in both VMs and LXC containers from the Portainer dashboard.
For Portainer’s features and general setup, see the main Portainer guide.
Prerequisites
- Proxmox VE 8.0+ installed and accessible
- Hardware: 1 vCPU and 1 GB RAM for the Portainer VM (minimal footprint)
- Storage: 8 GB for the VM OS disk
- Network: A bridge interface configured (default vmbr0)
- Other Docker hosts in your Proxmox environment (VMs or LXC containers running Docker) — optional, for multi-node management
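A quick sanity check on the Proxmox host can confirm these prerequisites before you begin; vmbr0 and local-lvm are the defaults and may differ on your setup:

```shell
# Run on the Proxmox host
pveversion            # expect pve-manager/8.x or newer
ip link show vmbr0    # the bridge the VM will attach to
pvesm status          # confirm your target storage (e.g. local-lvm) is active
```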
LXC vs VM: Which to Choose
| | LXC Container | VM |
|---|---|---|
| Resource usage | ~50 MB RAM overhead | ~200 MB RAM overhead |
| Docker inside | Requires nesting feature enabled | Works natively |
| Performance | Near-native | ~5% virtualization overhead |
| Portainer use case | Fine for managing a single Docker instance | Recommended. Better isolation, simpler Docker setup, and Portainer itself is lightweight enough that VM overhead is negligible. |
This guide uses a VM. Portainer in a VM “just works” with Docker. LXC requires enabling nesting (Options > Features > Nesting), and some Docker operations can behave unexpectedly in nested environments. Since Portainer uses minimal resources, the VM overhead is negligible.
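If you do choose LXC anyway, nesting can also be enabled from the Proxmox host shell; 200 below is a placeholder container ID, substitute your own:

```shell
# Enable nesting (and keyctl, which some Docker setups inside LXC need)
pct set 200 --features nesting=1,keyctl=1
# Restart the container for the feature change to take effect
pct stop 200 && pct start 200
```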
Platform Setup
Download the Ubuntu Cloud Image
SSH into your Proxmox host:
cd /var/lib/vz/template/iso/
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
Create the VM
Portainer is lightweight — a minimal VM is sufficient:
VM_ID=310
# Create a minimal VM — 1 vCPU, 1 GB RAM
qm create $VM_ID --name portainer --memory 1024 --cores 1 --cpu host \
--net0 virtio,bridge=vmbr0 --ostype l26 --agent enabled=1
# Import the cloud image
qm importdisk $VM_ID /var/lib/vz/template/iso/noble-server-cloudimg-amd64.img local-lvm
# Attach disk
qm set $VM_ID --scsihw virtio-scsi-single --scsi0 local-lvm:vm-${VM_ID}-disk-0,iothread=1,discard=on
# Resize to 8 GB (Portainer needs very little disk)
qm disk resize $VM_ID scsi0 8G
# Cloud-init
qm set $VM_ID --ide2 local-lvm:cloudinit
qm set $VM_ID --boot order=scsi0
qm set $VM_ID --ciuser portainer --cipassword YOUR_PASSWORD \
--ipconfig0 ip=dhcp --sshkeys ~/.ssh/authorized_keys
Start the VM:
qm start $VM_ID
Initial VM Configuration
SSH into the VM:
ssh portainer@VM_IP_ADDRESS
Install Docker and the QEMU guest agent:
sudo apt update && sudo apt upgrade -y
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER
newgrp docker
sudo apt install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent
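With the guest agent running inside the VM, the Proxmox host can query the VM's network details directly (310 is the VM ID used earlier):

```shell
# On the Proxmox host — requires the guest agent inside the VM
qm guest cmd 310 network-get-interfaces
# Or just confirm the agent responds
qm agent 310 ping && echo "guest agent is responding"
```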
Docker Compose Configuration
Create the project directory:
mkdir -p ~/portainer && cd ~/portainer
Create docker-compose.yml:
services:
portainer:
image: portainer/portainer-ce:2.39.1
container_name: portainer
restart: unless-stopped
security_opt:
- no-new-privileges:true
volumes:
# Docker socket for managing local Docker
- /var/run/docker.sock:/var/run/docker.sock
# Persistent data (users, settings, stacks, environment configs)
- portainer-data:/data
ports:
# HTTPS web UI (primary)
- "9443:9443"
# HTTP web UI (disable after confirming HTTPS works)
- "9000:9000"
# TCP tunnel for Edge Agents
- "8000:8000"
volumes:
portainer-data:
Start Portainer:
docker compose up -d
Verify:
docker compose ps
curl -sk https://localhost:9443/api/status | python3 -m json.tool
First-Time Setup
- Open https://vm-ip:9443 in a browser
- Accept the self-signed certificate warning
- Create your admin account within 5 minutes
- Click Get Started to connect to the local Docker environment
- The local Docker endpoint appears automatically
Connecting Docker Hosts Across Your Proxmox Environment
This is where Portainer on Proxmox becomes powerful — managing Docker on all your VMs and LXC containers from one place.
Deploy Portainer Agent on Other VMs
On each VM that runs Docker:
docker run -d \
--name portainer-agent \
--restart unless-stopped \
-p 9001:9001 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/docker/volumes:/var/lib/docker/volumes \
portainer/agent:2.39.1
Then in Portainer:
- Environments > Add environment
- Docker Standalone > Agent
- Enter the VM’s IP and port 9001
- Name it descriptively (e.g., media-server-vm, nextcloud-vm)
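Once environments are added, you can also verify them over Portainer's REST API. This sketch assumes an API token created under My account > Access tokens, and jq installed on the machine running the check:

```shell
# List connected environments (endpoints) by name and URL
# PORTAINER_KEY is an API token from the Portainer UI; vm-ip is the Portainer VM
curl -sk -H "X-API-Key: $PORTAINER_KEY" \
  https://vm-ip:9443/api/endpoints | jq '.[] | {Name, URL}'
```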
Deploy Agent in LXC Containers
If you run Docker inside LXC containers (with nesting enabled), the Agent works the same way:
# Inside the LXC container
docker run -d \
--name portainer-agent \
--restart unless-stopped \
-p 9001:9001 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/docker/volumes:/var/lib/docker/volumes \
portainer/agent:2.39.1
Edge Agent for Remote Proxmox Nodes
If you have Proxmox nodes on different networks (remote site, VPS), use the Edge Agent. It connects outbound to Portainer, so no port forwarding is needed:
- In Portainer: Environments > Add environment > Edge Agent
- Copy the generated command (includes EDGE_ID and EDGE_KEY)
- Run it on the remote Docker host
docker run -d \
--name portainer-edge-agent \
--restart unless-stopped \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/docker/volumes:/var/lib/docker/volumes \
-v /:/host \
-e EDGE=1 \
-e EDGE_ID=<your-edge-id> \
-e EDGE_KEY=<your-edge-key> \
-e EDGE_INSECURE_POLL=1 \
portainer/agent:2.39.1
Architecture Overview
A typical Proxmox + Portainer setup:
Proxmox Host
├── VM: portainer (this VM)
│ └── Portainer CE → manages all Docker below
├── VM: media-server
│ └── Docker + Portainer Agent → Jellyfin, *arr stack
├── VM: nextcloud
│ └── Docker + Portainer Agent → Nextcloud, Redis, PostgreSQL
├── LXC: pihole
│ └── Docker + Portainer Agent → Pi-hole
└── LXC: monitoring
└── Docker + Portainer Agent → Uptime Kuma, Beszel
All managed from https://portainer-vm-ip:9443.
Proxmox-Specific Tips
Start Portainer VM on Boot
Ensure the Portainer VM starts automatically when Proxmox boots:
In the Proxmox web UI: select the VM → Options > Start at boot → Enable.
Set a Start/Shutdown order of 1 so Portainer starts before your other Docker host VMs (which might depend on the Portainer API for GitOps deployments).
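The same settings can be applied from the Proxmox host shell; up=30 asks Proxmox to wait 30 seconds after starting this VM before starting the next one in the order:

```shell
# Start the Portainer VM at boot, first in the startup order
qm set 310 --onboot 1 --startup order=1,up=30
```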
Snapshot Before Upgrades
Before updating Portainer:
# On the Proxmox host
qm snapshot $VM_ID pre-portainer-update --description "Before Portainer CE update"
Update Portainer:
# Inside the VM
cd ~/portainer
docker compose pull
docker compose up -d
If something breaks, roll back:
qm rollback $VM_ID pre-portainer-update
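If you are unsure which snapshots exist before rolling back:

```shell
# On the Proxmox host — shows the snapshot tree for the VM
qm listsnapshot $VM_ID
```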
Firewall Rules
If you use Proxmox’s built-in firewall, allow Portainer’s ports:
In the Proxmox web UI: select the VM → Firewall > Add:
- TCP 9443 (HTTPS UI)
- TCP 9001 (Agent communication — allow from Docker host IPs only)
- TCP 8000 (Edge Agent tunnel — only if using Edge Agents)
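These rules can also be managed as a firewall config file on the Proxmox host; the filename uses the VM ID (310 here), and 192.168.1.0/24 is a placeholder for your LAN. Note that the 9001 rule belongs on the agent hosts' firewalls, since agents listen on that port:

```
# /etc/pve/firewall/310.fw — firewall rules for the Portainer VM
[OPTIONS]
enable: 1

[RULES]
IN ACCEPT -p tcp -dport 9443 # HTTPS web UI
IN ACCEPT -p tcp -dport 8000 # Edge Agent tunnel (only if using Edge Agents)
```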
Resource Monitoring
Portainer uses minimal resources. Expected usage in the Proxmox UI:
| Metric | Typical Value |
|---|---|
| CPU | 1-3% (spikes when loading UI) |
| RAM | 100-150 MB |
| Disk I/O | Negligible |
| Network | Negligible (agent communication is lightweight) |
If you see higher usage, check how many environments are connected — each Agent connection adds a small overhead.
Troubleshooting
“Your Portainer instance timed out” on first access
You have 5 minutes to create the admin account. If you miss it:
docker compose restart portainer
Agent connection shows “Disconnected” in Portainer
- Verify the Agent is running on the remote host: docker ps | grep portainer-agent
- Check network connectivity: curl -k https://agent-ip:9001/api/status
- If using Proxmox firewall, ensure port 9001 is allowed between the Portainer VM and the agent host
- Verify no IP address changes (use static IPs or DHCP reservations for all Docker hosts)
Cannot access Portainer from outside Proxmox network
Portainer’s HTTPS port (9443) must be reachable. If your Proxmox host is behind a router:
- Port forward 9443 on your router to the Portainer VM’s IP
- Or use a VPN (Tailscale, WireGuard) to access the Proxmox network directly
Docker socket permission denied in LXC
If the Portainer Agent in an LXC container cannot access the Docker socket:
# Verify Docker socket permissions inside the LXC
ls -la /var/run/docker.sock
Ensure the LXC container has the nesting feature enabled in Proxmox: Container > Options > Features > Nesting.
Resource Requirements
- VM allocation: 1 vCPU, 1 GB RAM, 8 GB disk
- Portainer container: ~100 MB RAM, negligible CPU
- Per Agent: ~30 MB RAM on each managed host
- Network: Minimal — agent polling is lightweight