Install Jellyfin on Proxmox VE

Why Proxmox for Jellyfin?

If you run a Proxmox homelab, Jellyfin belongs in a container alongside your other services rather than on a separate bare-metal box. Proxmox gives you snapshot backups, resource limits, and the ability to migrate Jellyfin between hosts — conveniences you don’t easily get running Docker on bare metal.

The main challenge is GPU passthrough. Jellyfin needs access to your GPU for hardware transcoding, and getting that GPU into an LXC container or VM requires Proxmox-specific configuration. This guide covers both approaches: Intel iGPU passthrough to an LXC container (simpler, recommended) and NVIDIA discrete GPU passthrough to a VM (more complex, required for NVIDIA).

For the general Jellyfin setup, see How to Self-Host Jellyfin with Docker. This guide focuses on the Proxmox-specific parts.

Prerequisites

  • Proxmox VE 8.0+ installed on the host
  • Docker and Docker Compose installed inside the LXC/VM
  • An Intel CPU with integrated graphics (for iGPU passthrough to LXC) or an NVIDIA GPU (for PCI passthrough to VM)
  • Media files accessible via local storage, NFS share, or CIFS/SMB mount
  • SSH access to the Proxmox host

LXC vs VM: Which to Choose

| Factor | LXC Container | VM |
| --- | --- | --- |
| Overhead | Near-zero (shared kernel) | Moderate (full OS, hypervisor layer) |
| Intel iGPU passthrough | Supported via cgroup device rules | Supported via PCI passthrough |
| NVIDIA GPU passthrough | Not reliably supported | Supported via PCI passthrough + IOMMU |
| Storage flexibility | Bind mounts from host, NFS | Virtio disk, NFS, CIFS |
| Snapshot/backup | Fast (Proxmox built-in) | Slower (full disk image) |
| Resource efficiency | Excellent | Good |
| Complexity | Low | Medium-High |

Recommendation: Use an LXC container with Intel iGPU passthrough unless you need NVIDIA GPU passthrough, which requires a VM.

LXC Container Setup (Intel iGPU)

This is the recommended approach for Intel CPUs with integrated graphics.

Create the LXC Container

From the Proxmox web UI or command line:

# On the Proxmox host
pveam update
pveam download local ubuntu-24.04-standard_24.04-2_amd64.tar.zst

pct create 110 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst \
  --hostname jellyfin \
  --memory 4096 \
  --swap 512 \
  --cores 2 \
  --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --features nesting=1 \
  --unprivileged 0 \
  --start 0

Key settings:

  • --memory 4096 — 4 GB RAM. Sufficient for Jellyfin with hardware transcoding. Increase to 8 GB if you have a large library (20,000+ items) or plan to run multiple concurrent transcodes.
  • --cores 2 — 2 vCPUs minimum. Bump to 4 if you expect software transcoding fallback.
  • --rootfs local-lvm:16 — 16 GB root disk for the OS, Docker, and Jellyfin config. Media is mounted separately.
  • --unprivileged 0 — Creates a privileged container. Required for straightforward /dev/dri access. Unprivileged containers can work but require additional UID/GID mapping that complicates the setup.
  • --features nesting=1 — Required for Docker to run inside the container.
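For reference, those flags land in the container's config file — the same file you will edit for GPU passthrough. Roughly (a sketch, not a verbatim dump; the rootfs volume name and the generated MAC address will differ on your system):

```
# /etc/pve/lxc/110.conf (approximate result of the pct create above)
arch: amd64
cores: 2
features: nesting=1
hostname: jellyfin
memory: 4096
net0: name=eth0,bridge=vmbr0,ip=dhcp
ostype: ubuntu
rootfs: local-lvm:vm-110-disk-0,size=16G
swap: 512
```

Privileged containers simply omit the unprivileged line; an unprivileged container would show unprivileged: 1 here.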

Configure Intel iGPU Passthrough

First, verify the iGPU is available on the Proxmox host:

# On Proxmox host
ls -la /dev/dri/
# Should show: card0, renderD128

Get the device major/minor numbers:

ls -la /dev/dri/card0 /dev/dri/renderD128
# Output example:
# crw-rw---- 1 root video  226, 0 Mar  4 10:00 /dev/dri/card0
# crw-rw---- 1 root render 226, 128 Mar  4 10:00 /dev/dri/renderD128
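If you'd rather not read the major and minor numbers off the ls output by eye, GNU stat can print them directly — a small sketch (stat reports %t/%T in hex, so the helper converts; demonstrated on /dev/null, which is 1:3 on every Linux system):

```shell
#!/bin/sh
# Print a device node's major:minor in decimal (GNU stat emits %t/%T in hex).
dev_numbers() {
  maj=$(( 0x$(stat -c '%t' "$1") ))
  min=$(( 0x$(stat -c '%T' "$1") ))
  printf '%d:%d\n' "$maj" "$min"
}

dev_numbers /dev/null    # → 1:3
# On the Proxmox host you would run: dev_numbers /dev/dri/renderD128
```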

The major number is 226. Now edit the LXC configuration on the Proxmox host:

nano /etc/pve/lxc/110.conf

Add these lines at the bottom:

# Intel iGPU passthrough
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file

This grants the container access to the Intel GPU device nodes and bind-mounts them into the container’s filesystem.

Mount Media Storage

You have two options for getting media into the container.

Option A: Bind mount from the Proxmox host

If your media is on a local disk or ZFS pool on the Proxmox host:

# Add to /etc/pve/lxc/110.conf
mp0: /mnt/data/media,mp=/mnt/media,ro=1

This mounts /mnt/data/media on the host to /mnt/media inside the container, read-only.

Option B: NFS mount inside the container

If your media is on a NAS:

# Inside the LXC container after starting it
apt install -y nfs-common
mkdir -p /mnt/media
echo "nas-ip:/volume1/media /mnt/media nfs defaults,ro 0 0" >> /etc/fstab
mount -a
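Before pointing Jellyfin at the share, it's worth confirming the mount is actually live. A small sketch using mountpoint (util-linux) with a bounded directory listing, since a stale NFS mount can hang an unbounded ls:

```shell
#!/bin/sh
# Return 0 if DIR is a live, listable mountpoint; 1 otherwise.
media_mount_ok() {
  mountpoint -q "$1" || return 1
  # Bound the listing to 5 seconds in case the mount has gone stale.
  timeout 5 ls "$1" >/dev/null 2>&1
}

# Intended usage: media_mount_ok /mnt/media || echo "media share not mounted"
```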

Start the Container and Install Docker

# On Proxmox host
pct start 110
pct enter 110

Inside the container:

apt update && apt upgrade -y
curl -fsSL https://get.docker.com | sh

# Verify GPU access
ls -la /dev/dri/
# Should show card0 and renderD128

# Install vainfo for verification
apt install -y vainfo
vainfo
# Should list supported VA-API profiles

If vainfo shows profiles, the iGPU passthrough is working. Proceed to the Docker Compose section.

VM Setup (NVIDIA GPU)

Use this approach if you have a discrete NVIDIA GPU and need PCI passthrough.

Enable IOMMU

On the Proxmox host, enable IOMMU in the bootloader:

# For Intel CPUs
nano /etc/default/grub
# Change: GRUB_CMDLINE_LINUX_DEFAULT="quiet"
# To:     GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

update-grub
reboot

After reboot, verify IOMMU is active:

dmesg | grep -e DMAR -e IOMMU
# Should show: DMAR: IOMMU enabled
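It also helps to see which devices share IOMMU groups, since a group passes through to a VM as a unit. A small sketch (the optional argument exists only so the function can be exercised against a fake sysfs tree):

```shell
#!/bin/sh
# List every device per IOMMU group; defaults to the real sysfs location.
list_iommu_groups() {
  root="${1:-/sys/kernel/iommu_groups}"
  [ -d "$root" ] || { echo "no IOMMU groups found"; return 1; }
  for dev in "$root"/*/devices/*; do
    [ -e "$dev" ] || continue
    group="${dev%/devices/*}"
    printf 'group %s: %s\n' "${group##*/}" "${dev##*/}"
  done
}

# On the Proxmox host: list_iommu_groups
```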

Blacklist Host GPU Drivers

Prevent the Proxmox host from claiming the NVIDIA GPU:

echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia_drm" >> /etc/modprobe.d/blacklist.conf

# Load vfio-pci instead
echo "vfio-pci" >> /etc/modules-load.d/vfio.conf

update-initramfs -u
reboot

Identify IOMMU Group

Find your NVIDIA GPU’s PCI address and IOMMU group:

lspci -nn | grep -i nvidia
# Example output:
# 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [GeForce RTX 3060] [10de:2503] (rev a1)
# 01:00.1 Audio device [0403]: NVIDIA Corporation GA106 High Definition Audio Controller [10de:228e] (rev a1)

Note the PCI IDs (e.g., 10de:2503,10de:228e). Both the GPU and its audio controller must be passed through together — they share an IOMMU group.
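To pull those bracketed IDs out programmatically instead of copying them by hand, a small filter works. A sketch (pci_ids is a hypothetical helper, written as a stdin filter so it is easy to test):

```shell
#!/bin/sh
# Extract every [vendor:device] ID pair from `lspci -nn` output, join with commas.
pci_ids() {
  grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]' | paste -sd, -
}

# Intended usage on the Proxmox host:
#   lspci -nn | grep -i nvidia | pci_ids
```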

Configure VFIO to grab these devices:

echo "options vfio-pci ids=10de:2503,10de:228e" >> /etc/modprobe.d/vfio.conf
update-initramfs -u
reboot

Create the VM

In the Proxmox web UI:

  1. Create a new VM (ID 111, for example).
  2. OS: Ubuntu Server 24.04 ISO.
  3. System: Machine type q35, BIOS OVMF (UEFI), add EFI disk.
  4. CPU: 4 cores minimum (type: host — required for PCI passthrough).
  5. Memory: 8192 MB (8 GB).
  6. Disk: 32 GB on local-lvm (for OS + Docker + Jellyfin config).
  7. Network: VirtIO, bridge vmbr0.

After creating the VM, add the GPU via Hardware > Add > PCI Device:

  • Select your NVIDIA GPU (01:00.0).
  • Check All Functions (passes through both GPU and audio controller).
  • Check PCI-Express.
  • Check Primary GPU if this is the only GPU in the VM.
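Those checkboxes translate to a single hostpci line in the VM's config file — roughly this (a sketch; your PCI address may differ):

```
# /etc/pve/qemu-server/111.conf — rough equivalent of the UI settings above
hostpci0: 0000:01:00,pcie=1,x-vga=1
```

Omitting the function suffix (.0) passes through all functions of the device, which is what the All Functions checkbox does.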

Install Ubuntu and NVIDIA Drivers in the VM

Boot the VM, install Ubuntu Server, then:

sudo apt update && sudo apt upgrade -y

# Install NVIDIA driver
sudo apt install -y ubuntu-drivers-common
sudo ubuntu-drivers autoinstall
sudo reboot

# Verify
nvidia-smi

Install Docker and the NVIDIA Container Toolkit:

curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER

# NVIDIA Container Toolkit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt update
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

Mount your media storage inside the VM via NFS or CIFS, then proceed to the Docker Compose section.
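If the NAS share is CIFS/SMB rather than NFS, an fstab entry along these lines works (a sketch — the share path and credentials file are placeholders; install cifs-utils first):

```
# /etc/fstab — example CIFS entry
//nas-ip/media  /mnt/media  cifs  ro,credentials=/root/.smbcred,iocharset=utf8  0  0
```

The credentials file holds username= and password= lines and should be readable by root only (chmod 600).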

Docker Compose Configuration

For LXC with Intel iGPU

services:
  jellyfin:
    image: jellyfin/jellyfin:10.11.6
    container_name: jellyfin
    user: "1000:1000"
    group_add:
      - "render"
      - "video"
    ports:
      - "8096:8096/tcp"
      - "7359:7359/udp"
    volumes:
      - jellyfin-config:/config
      - jellyfin-cache:/cache
      - /mnt/media/movies:/media/movies:ro
      - /mnt/media/tv:/media/tv:ro
      - /mnt/media/music:/media/music:ro
    environment:
      - JELLYFIN_PublishedServerUrl=http://CONTAINER_IP:8096
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
      - /dev/dri/card0:/dev/dri/card0
    restart: unless-stopped

volumes:
  jellyfin-config:
  jellyfin-cache:

Notes:

  • Both render and video groups are included — some Proxmox LXC setups assign GPU devices to one or the other. If a group name doesn’t resolve inside the image, use its numeric GID (e.g. "104") in group_add instead.
  • Both /dev/dri/card0 and /dev/dri/renderD128 are passed through. Some Intel GPUs require card0 for full functionality.
  • Replace CONTAINER_IP with the LXC container’s IP address.
  • Adjust /mnt/media/* paths to match your bind mount or NFS mount point.

For VM with NVIDIA GPU

services:
  jellyfin:
    image: jellyfin/jellyfin:10.11.6
    container_name: jellyfin
    user: "1000:1000"
    runtime: nvidia
    ports:
      - "8096:8096/tcp"
      - "7359:7359/udp"
    volumes:
      - jellyfin-config:/config
      - jellyfin-cache:/cache
      - /mnt/media/movies:/media/movies:ro
      - /mnt/media/tv:/media/tv:ro
      - /mnt/media/music:/media/music:ro
    environment:
      - JELLYFIN_PublishedServerUrl=http://VM_IP:8096
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu, video]
    restart: unless-stopped

volumes:
  jellyfin-config:
  jellyfin-cache:

Notes:

  • runtime: nvidia and the deploy.resources device reservation are two routes to the same NVIDIA Container Toolkit integration. Either alone usually works; specifying both covers older and newer Compose versions.
  • capabilities: [gpu, video] — the video capability enables NVENC/NVDEC. Without it, Jellyfin sees the GPU but cannot transcode.
  • Replace VM_IP with the VM’s IP address.

Start the Stack

mkdir -p /opt/jellyfin
cd /opt/jellyfin
# Save the appropriate docker-compose.yml above
docker compose up -d
docker compose logs -f jellyfin
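Recent Jellyfin versions expose a plain /health endpoint once the server is up, which is handy for scripting the wait. A small generic polling helper (a sketch — adjust the URL and attempt count to taste):

```shell
#!/bin/sh
# Poll a URL once per second until it answers or we run out of tries.
wait_for_url() {
  url="$1"; tries="${2:-30}"; i=0
  while [ "$i" -lt "$tries" ]; do
    curl -fsS --max-time 2 "$url" >/dev/null 2>&1 && return 0
    i=$((i + 1)); sleep 1
  done
  return 1
}

# Intended usage after `docker compose up -d`:
#   wait_for_url http://localhost:8096/health 60 && echo "Jellyfin is up"
```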

Hardware Transcoding Setup

Intel VAAPI (LXC)

After the container is running, verify GPU access from inside Docker:

docker exec -it jellyfin ls -la /dev/dri/
# Should show card0 and renderD128

In the Jellyfin web UI (http://CONTAINER_IP:8096):

  1. Go to Dashboard > Playback > Transcoding.
  2. Set Hardware acceleration to Video Acceleration API (VAAPI).
  3. Set VA-API Device to /dev/dri/renderD128.
  4. Enable hardware decoding for H.264, HEVC, and any other codecs your GPU supports.
  5. Enable Hardware encoding.
  6. Enable Tone mapping (Intel 10th gen and newer).
  7. Click Save.

Test by playing a file that requires transcoding. Check Dashboard > Active Sessions — you should see (HW) next to the transcode codec.

NVIDIA NVENC (VM)

Verify GPU access:

docker exec -it jellyfin nvidia-smi

In the Jellyfin web UI:

  1. Go to Dashboard > Playback > Transcoding.
  2. Set Hardware acceleration to NVIDIA NVENC.
  3. Enable hardware decoding for all codecs.
  4. Enable Hardware encoding.
  5. Enable Tone mapping (requires a Turing or newer GPU — RTX 2000+).
  6. Click Save.

Resource Allocation Guidelines

| Workload | vCPUs | RAM | Notes |
| --- | --- | --- | --- |
| Direct play only (1-3 streams) | 2 | 2 GB | No transcoding, minimal resources |
| Hardware transcoding (1-2 streams) | 2 | 4 GB | GPU does the heavy lifting |
| Hardware transcoding (3-5 streams) | 4 | 4-8 GB | More CPU headroom for metadata and overhead |
| Software transcoding fallback | 4+ | 8 GB | CPU-bound, resource-hungry |

Disk allocation:

  • Root disk: 16 GB for LXC, 32 GB for VM (OS + Docker + Jellyfin config and database).
  • Cache: Allocate 10-20 GB for transcoding cache. On LXC, the cache lives in the Docker volume on the root disk. For heavy transcoding workloads, bind-mount the cache to a fast SSD.
  • Media storage: Separate from the root disk. Use bind mounts (LXC) or NFS/CIFS mounts (VM).
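As an example of the fast-SSD suggestion above, a Compose override file can repoint the cache without touching the main compose file — a sketch, assuming /mnt/ssd/jellyfin-cache exists (Compose merges volume entries by container path, so this replaces the /cache mount):

```yaml
# docker-compose.override.yml — bind the transcode cache to a fast SSD
services:
  jellyfin:
    volumes:
      - /mnt/ssd/jellyfin-cache:/cache
```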

Community Helper Scripts

The community-scripts Proxmox VE helper scripts (the community continuation of tteck’s project) include a one-click Jellyfin LXC installer. It handles container creation, OS updates, and basic configuration:

# Run from the Proxmox host shell
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/jellyfin.sh)"

This creates a minimal LXC container with Jellyfin pre-installed (without Docker — it installs Jellyfin natively). It does not configure GPU passthrough. Use the script for a quick start, then add the GPU passthrough configuration from this guide manually.

If you prefer the Docker-based setup (recommended for consistency and easier updates), follow the manual LXC creation steps in this guide instead.

First-Time Setup

  1. Open http://CONTAINER_OR_VM_IP:8096 in your browser.
  2. Select language.
  3. Create admin account.
  4. Add media libraries pointing to your mounted media paths (/media/movies, /media/tv, /media/music).
  5. Set metadata language and country.
  6. Enable remote access if needed.
  7. Complete the wizard.

The initial library scan is CPU-intensive. On an LXC container with 2 vCPUs and a large library, it may take 20-40 minutes. Let it finish before testing transcoding performance.

Troubleshooting

/dev/dri Not Visible in LXC Container

Symptom: ls /dev/dri/ inside the container shows nothing or “No such file or directory.”

Fix: The cgroup device rules or bind mounts in the LXC config are wrong. On the Proxmox host, verify:

# Check the host has the devices
ls -la /dev/dri/

# Check the LXC config
cat /etc/pve/lxc/110.conf | grep -i dri

Ensure the lxc.cgroup2.devices.allow and lxc.mount.entry lines are present and the major/minor numbers match your actual devices (check with ls -la /dev/dri/ on the host). Restart the container after any changes:

pct stop 110 && pct start 110

NVIDIA GPU Not Visible in VM

Symptom: nvidia-smi inside the VM shows “No devices found” or the command is not found.

Fix: Check the PCI passthrough configuration:

  1. Verify IOMMU is enabled: dmesg | grep -e DMAR -e IOMMU on the Proxmox host.
  2. Verify the GPU is bound to vfio-pci: lspci -nnk -s 01:00 should show Kernel driver in use: vfio-pci.
  3. If it shows nvidia or nouveau, the blacklist did not work. Re-run update-initramfs -u and reboot the host.
  4. In the VM hardware config, ensure PCI-Express and All Functions are checked.

Docker Cannot Access GPU Inside LXC

Symptom: docker exec -it jellyfin ls -la /dev/dri/ shows the devices, but Jellyfin cannot use them for transcoding. Logs show permission errors.

Fix: The Docker container’s user does not have permission to access the GPU device. Inside the LXC container:

# Check device permissions
ls -la /dev/dri/renderD128
# If owned by root:root, the render/video groups don't exist in the container

# Create the groups and set permissions
# (GIDs 44 and 104 are typical on Debian/Ubuntu; use the owner GID that
# ls -la /dev/dri/renderD128 actually reports on your system)
groupadd -g 44 video 2>/dev/null
groupadd -g 104 render 2>/dev/null
chown root:render /dev/dri/renderD128
chmod 660 /dev/dri/renderD128

Then ensure the Docker Compose file includes group_add for both render and video. Restart the container.
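Rather than hardcoding a GID, you can read the correct one straight off the device node. A small sketch (gid_of is a hypothetical helper):

```shell
#!/bin/sh
# Print the numeric group ID that owns a file or device node.
gid_of() { stat -c '%g' "$1"; }

# Intended usage inside the LXC container:
#   groupadd -g "$(gid_of /dev/dri/renderD128)" render
```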

NFS Mount Drops or Becomes Stale

Symptom: Media files disappear from Jellyfin, or playback fails with I/O errors. ls /mnt/media hangs.

Fix: The NFS mount has gone stale, typically after a NAS reboot or network interruption. Force remount:

sudo umount -f /mnt/media
sudo mount -a

Prevent stale mounts by adding NFS options in /etc/fstab:

nas-ip:/volume1/media /mnt/media nfs defaults,ro,soft,timeo=30,retrans=3 0 0

The soft option returns an error instead of hanging forever on a stale mount. timeo=30 sets a 3-second timeout (timeo is measured in tenths of a second).

Container Runs Out of Disk Space

Symptom: Jellyfin stops working, Docker logs show disk-related errors.

Fix: The transcoding cache has filled the root disk. Check disk usage:

df -h /
du -sh /var/lib/docker/volumes/

Clear the transcoding cache:

docker compose stop jellyfin
# Match this stack's cache volume only, so other stacks' volumes are untouched
docker volume rm $(docker volume ls -q | grep jellyfin-cache)
docker compose up -d

To prevent this, either increase the LXC root disk size (pct resize 110 rootfs +10G) or bind-mount the cache directory to a larger storage pool.
