Proxmox VE System Requirements: Minimum & Recommended Hardware (2026)

Quick Recommendation

For most self-hosters running Proxmox VE 8.4, a used Dell OptiPlex 7050/7060 Micro with an Intel i5, 32 GB DDR4, and a 500 GB NVMe boot drive is the sweet spot. It costs $150–$200, draws 15–25W, and handles 5–10 VMs or 20+ containers without breaking a sweat. If you need more — GPU passthrough, ZFS with ECC, or 10+ VMs — step up to a used workstation or rack server.

Proxmox VE 8.4 Hardware Requirements

Proxmox VE 8.4 (the latest release, based on Debian 12.10 Bookworm) ships with Linux kernel 6.8 (stable) or 6.14 (opt-in), QEMU 9.2.0, and ZFS 2.2.7. Here are the official and practical hardware requirements:

| Component | Minimum | Recommended | Production |
| --- | --- | --- | --- |
| CPU | 64-bit (Intel/AMD), VT-x/AMD-V, 2 cores | 4+ cores with VT-d/AMD-Vi, Intel i5 or equivalent | 8+ cores, Xeon/EPYC with ECC support, VT-d for passthrough |
| RAM | 2 GB (Proxmox OS only) | 32 GB DDR4 (5–10 VMs) | 64–128 GB ECC DDR4/DDR5 (10+ VMs + ZFS ARC) |
| Boot Storage | 32 GB (SSD or NVMe) | 500 GB NVMe | 2x 500 GB NVMe in ZFS mirror |
| VM Storage | 100 GB SATA SSD | 500 GB–1 TB NVMe | 2+ TB NVMe + HDD pool for bulk |
| Network | 1x 1 Gbps Ethernet | 1x 1 Gbps Intel NIC | 2x 1 Gbps or 1x 10 GbE, Intel I210/I225/I226 |
| GPU | Not required | Optional (iGPU for transcoding passthrough) | Discrete GPU for AI/transcoding passthrough |
| Power Supply | Any | 80+ Bronze, 300W+ | 80+ Gold, 450W+, UPS recommended |

Key takeaways:

  • RAM is the primary bottleneck. VMs need dedicated memory allocations — unlike containers, you can't overcommit. Budget 2 GB per lightweight VM, 4–8 GB per heavy VM, plus 1 GB per TB of ZFS storage for ARC.
  • VT-x is required, VT-d is recommended. VT-x enables virtualization. VT-d (Intel) or AMD-Vi enables PCI passthrough (GPU, NIC, HBA). Check your BIOS — VT-d is sometimes disabled by default.
  • NVMe matters for VMs. Random I/O performance directly affects VM responsiveness. SATA SSDs work but NVMe is a significant upgrade for database VMs and containers.
  • Intel NICs are preferred. Realtek works but has known issues with certain Proxmox kernel versions. Intel I210/I225/I226 have native driver support.
  • Proxmox 8.4 new features: vGPU live migration (migrate running VMs with attached vGPUs), virtiofs directory passthrough (share host files with VMs without NFS overhead), and an official backup provider plugin API.

What Proxmox Needs from Hardware

Proxmox VE is a Type 1 hypervisor based on Debian Linux. It runs directly on bare metal, so hardware compatibility matters more than with Docker-only setups.

CPU

  • VT-x required, VT-d for passthrough. Intel VT-x (or AMD-V) enables virtualization; VT-d (or AMD-Vi) enables PCI passthrough. Every modern CPU supports VT-x, but check your BIOS — it’s sometimes disabled by default.
  • Core count matters more than clock speed. Each VM gets dedicated vCPUs. Plan 1–2 cores per lightweight VM, 4+ for heavy workloads (databases, transcoding).
  • Intel generally has better IOMMU grouping for PCI passthrough. AMD is fine for standard VM use but may need the ACS override patch for some passthrough scenarios.
  • Avoid Intel 13th/14th gen for 24/7 servers. These CPUs have documented stability issues (random freezes and reboots) in always-on workloads. If you go Intel desktop, 12th gen or older is safer for server use.

RAM

  • RAM is the primary bottleneck in Proxmox. Each VM needs its own allocation — you can’t share like containers do.
  • 32 GB minimum for a serious Proxmox host. 64 GB if you plan to run 10+ VMs.
  • ECC recommended for ZFS. Not strictly required, but ZFS without ECC means a memory error can corrupt your pool silently. If you’re storing important data on ZFS, use ECC.
  • Plan for ZFS ARC. Proxmox 8.1+ limits ARC to 10% of RAM (capped at 16 GB) on new installs. For best ZFS performance, allocate 2 GB base + 1 GB per TB of ZFS storage (a quick worked example follows this list).
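
To make the budgeting concrete, here is a minimal sketch of the arithmetic for a hypothetical host (the VM counts, sizes, and pool size are illustrative, not recommendations):

```bash
# Hypothetical host: 6 lightweight VMs at 2 GB, 2 heavy VMs at 6 GB, 4 TB ZFS pool
LIGHT_GB=$((6 * 2))   # lightweight VM allocations
HEAVY_GB=$((2 * 6))   # heavy VM allocations
ARC_GB=$((2 + 4))     # ZFS ARC: 2 GB base + 1 GB per TB of pool
HOST_GB=2             # Proxmox VE itself
echo "Plan for at least $((LIGHT_GB + HEAVY_GB + ARC_GB + HOST_GB)) GB of RAM"   # prints 32
```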

Storage

  • NVMe for boot and VM disks. Proxmox itself needs maybe 32 GB. VMs benefit enormously from NVMe speeds — random I/O matters.
  • Separate boot and data drives. Don’t run VMs off the same drive as Proxmox OS if you can avoid it.
  • ZFS or LVM-thin for storage pools. ZFS gives snapshots, checksums, and compression. LVM-thin gives thin provisioning with less RAM overhead.
  • SATA/SAS HDDs for bulk storage. NAS VMs, media libraries, backups — use spinning disks behind a ZFS mirror or RAID-Z.
  • Never boot from USB. USB drives wear out fast under Proxmox’s constant logging. Use an SSD or NVMe, even a cheap 128 GB one.

Networking

  • At least 1 Gbps. Ideally 2+ NICs for separating management traffic from VM traffic.
  • Intel NICs preferred. Realtek works but Intel I210/I225/I226 have better driver support in Proxmox.
  • 2.5 GbE is the new baseline for modern hardware. Most new motherboards and mini PCs ship with 2.5 GbE Intel I226-V NICs.
  • 10 GbE is worth it if you’re doing iSCSI, Ceph, or running storage-heavy VMs. See our 10GbE networking guide.

Budget Build — Under $200

Used Dell OptiPlex 7050 Micro

| Spec | Details |
| --- | --- |
| CPU | Intel Core i5-7500T (4C/4T, 2.7 GHz) |
| RAM | 32 GB DDR4 (2x16 GB) |
| Boot drive | 500 GB NVMe (add a 2242 or 2280 M.2) |
| Networking | 1x Gigabit Intel I219-LM |
| Power | 35W TDP, ~15W idle |
| Price | $120–$180 (used, eBay/refurb) |

What you can run: 5–8 lightweight VMs (Pi-hole, Home Assistant, Nextcloud, Vaultwarden) or 20+ LXC containers. No GPU passthrough — no discrete GPU slot.

Pros:

  • Incredibly cheap for the performance
  • Tiny form factor (fits on a shelf)
  • Low power draw
  • Reliable business-class hardware

Cons:

  • Max 32 GB RAM (2 DIMM slots)
  • No PCIe expansion (no GPU passthrough)
  • Single NIC

Mini PC Build — $250–$500

Minisforum MS-01 or Beelink SER8

| Spec | MS-01 (barebones) | Beelink SER8 |
| --- | --- | --- |
| CPU | Intel i5-12600H (12C/16T) or i9-13900H (14C/20T) | AMD Ryzen 7 8745H (8C/16T) |
| RAM | Up to 64 GB DDR5 (user-supplied) | 32 GB DDR5 (included) |
| Boot drive | 3x NVMe slots (incl. U.2) | 1x NVMe (500 GB–1 TB included) |
| Networking | 2x 10 GbE SFP+ + 2x 2.5 GbE | 1x 2.5 GbE |
| Expansion | PCIe 4.0 x16 slot, USB4 x2 | USB4 x2, no PCIe slot |
| Power | ~25–40W idle | ~15–25W idle |
| Price | $549 barebones (i9-12900H) | $350–$450 configured |

What you can run: 10–15 VMs comfortably. The MS-01 is the standout — dual 10 GbE SFP+ and a full PCIe x16 slot in a mini PC form factor. GPU passthrough works via the PCIe slot (MS-01) or eGPU via USB4 (both).

Why mini PCs are gaining ground:

  • Lower power draw than tower PCs (15–40W vs 25–90W)
  • Surprisingly expandable (MS-01 has 3 NVMe + PCIe x16)
  • Silent or near-silent under normal load
  • Modern CPUs with 8–14 cores handle serious VM workloads

Mid-Range Build — $300–$500

Used Dell OptiPlex 7080 Tower or HP EliteDesk 800 G6 Tower

| Spec | Details |
| --- | --- |
| CPU | Intel Core i7-10700 (8C/16T, 2.9 GHz) |
| RAM | 64 GB DDR4 (2x32 GB) |
| Boot drive | 500 GB NVMe |
| Data drive | 1 TB NVMe for VM storage |
| Networking | 1x Gigabit Intel + optional PCIe NIC |
| Power | 65W TDP, ~25W idle |
| Price | $300–$450 (used) |

What you can run: 10–15 VMs running concurrently. GPU passthrough with a low-profile card. Plex transcoding in a VM. TrueNAS VM with HBA passthrough if you add drives.

Pros:

  • 8 cores / 16 threads is serious VM capacity
  • 64 GB RAM handles many VMs
  • PCIe slot for GPU or 10 GbE NIC
  • iGPU available for Quick Sync transcoding passthrough

Cons:

  • Tower form factor takes more space
  • Single PSU (no redundancy)

High-End Build — $500–$1,000

Used Dell PowerEdge T340 or HP ProLiant ML350 Gen10

| Spec | Details |
| --- | --- |
| CPU | Intel Xeon E-2278G (8C/16T, 3.4 GHz) or Xeon Silver 4210 |
| RAM | 128 GB ECC DDR4 |
| Boot drive | 2x 500 GB NVMe (ZFS mirror) |
| Data storage | 4x 4 TB SATA in RAID-Z1 |
| Networking | 2x Gigabit Intel + optional 10 GbE |
| Power | ~80W idle |
| Price | $600–$1,000 (used) |

What you can run: Everything. 20+ VMs, Ceph cluster node, TrueNAS with ZFS, GPU passthrough for transcoding or AI workloads, multiple networks with VLANs.

Pros:

  • ECC RAM (critical for ZFS)
  • Hot-swap drive bays
  • iLO/iDRAC for remote management
  • Built for 24/7 operation
  • Multiple PCIe slots

Cons:

  • Louder (server fans)
  • Higher power draw (~80–120W idle)
  • Larger form factor
  • Higher electricity cost (~$85–$125/year at $0.12/kWh)

DIY Build — Custom

For maximum flexibility, build your own.

| Component | Recommendation | Price |
| --- | --- | --- |
| CPU | Intel Core i5-12400 (6C/12T) or AMD Ryzen 7 9700X (8C/16T) | $150–$250 |
| Motherboard | ASRock B660M with 4 DIMM slots, Intel I226-V NIC (Intel) or ASRock B650M (AMD) | $100–$150 |
| RAM | 64 GB DDR4 or DDR5 (ECC if your board supports it) | $80–$120 |
| Boot drive | 500 GB NVMe (WD SN770 or Samsung 980) | $40–$50 |
| Case | Fractal Design Node 304 or Jonsbo N2 | $80–$100 |
| PSU | Corsair SF450 or Seasonic 450W 80+ Gold | $60–$80 |
| Total | | $510–$750 |

Why Intel i5-12400 over i5-13500: The 12th gen avoids the documented 13th/14th gen stability issues (random freezes and reboots in 24/7 server environments). The i5-12400 provides 6 cores/12 threads with proven reliability. If you go AMD, the Ryzen 7 9700X is excellent — 8 cores at 65W TDP with ECC support on most AM5 boards.

Pros: Choose exactly what you need. Easy to upgrade. Better IOMMU grouping with consumer Intel boards for passthrough.

Cons: More work. No remote management (unless you add a BMC card). No hot-swap bays (unless your case supports it).

CPU Comparison for Proxmox

| CPU | Cores/Threads | TDP | Passmark (Multi) | Price | Best For |
| --- | --- | --- | --- | --- | --- |
| Intel N100 | 4C/4T | 6W | ~5,500 | $150 (mini PC) | Lightweight — 3-5 containers, not ideal for VMs |
| Intel N305 | 8C/8T | 15W | ~9,000 | $300 (mini PC) | Low-power Proxmox — 5-8 VMs with CPU pinning |
| Intel i5-7500T | 4C/4T | 35W | ~5,800 | $120 (used Micro) | Budget Proxmox — 5-8 VMs |
| Intel i5-10400 | 6C/12T | 65W | ~12,600 | $80 (used) | Mid-range — 8-12 VMs |
| Intel i7-10700 | 8C/16T | 65W | ~16,000 | $100 (used) | Solid all-rounder — 10-15 VMs |
| Intel i5-12400 | 6C/12T | 65W | ~19,500 | $140 (new) | Reliable mid-range — Quick Sync, no 13th/14th gen issues |
| Intel i5-13500 | 14C/20T | 65W | ~28,000 | $180 (new) | High-performance — 15-20+ VMs (check stability reports) |
| Xeon E-2278G | 8C/16T | 80W | ~15,500 | $200 (used) | ECC + iGPU — ZFS + transcoding |
| AMD Ryzen 5 5600 | 6C/12T | 65W | ~22,000 | $143 (new) | Budget performance — great value |
| AMD Ryzen 7 9700X | 8C/16T | 65W | ~28,500 | $250 (new) | Efficiency king — 65W with server-grade multi-core |

PCI Passthrough and IOMMU Setup

PCI passthrough lets you assign physical hardware (GPU, NIC, HBA, USB controller) directly to a VM. This is one of the primary reasons people choose Proxmox over Docker-only setups.

Requirements

  • CPU: VT-d (Intel) or AMD-Vi enabled in BIOS
  • Motherboard: IOMMU support (most consumer boards from 2016+ support this)
  • Kernel parameter: Add intel_iommu=on iommu=pt (Intel) or amd_iommu=on iommu=pt (AMD) to GRUB (see the example after this list)
  • Interrupt remapping: Must be supported and enabled — without it, passthrough fails with “Failed to assign device” errors
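
As a concrete example, here is roughly what enabling IOMMU looks like on an Intel host that boots with GRUB. On ZFS-root installs that boot via systemd-boot, the same parameters typically go in /etc/kernel/cmdline and are applied with proxmox-boot-tool refresh instead of update-grub.

```bash
# /etc/default/grub: add the IOMMU flags to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"   # use amd_iommu=on iommu=pt on AMD

# Apply the change and reboot
update-grub
reboot

# After reboot, confirm the IOMMU and interrupt remapping came up
dmesg | grep -e DMAR -e IOMMU
```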

Intel vs AMD for Passthrough

| Factor | Intel | AMD |
| --- | --- | --- |
| IOMMU grouping | Generally better — devices more often isolated | Can be poor — multiple devices grouped together |
| ACS support | More common on consumer boards | Often requires ACS override patch |
| GPU passthrough | Well-supported, iGPU can stay on host | Works but reset bug on some older GPUs (pre-RDNA2) |
| NIC passthrough | Excellent with Intel NICs | Works fine |

IOMMU Troubleshooting

  1. Check IOMMU is enabled: After adding kernel parameters, verify with dmesg | grep -e DMAR -e IOMMU. Look for “IOMMU enabled” or “DMAR-IR: Enabled IRQ remapping.”
  2. List IOMMU groups: Use find /sys/kernel/iommu_groups/ -type l to see device groupings (or the lspci loop after this list for a friendlier view). Each device in its own group is ideal.
  3. Bad grouping? Try moving the PCIe card to a different slot. Different slots often have different IOMMU groups.
  4. Last resort — ACS override patch: Splits large IOMMU groups into individual devices. Use only if you understand the security implications (weakened DMA isolation).
  5. GPU passthrough tip: Prevent the host from claiming the GPU at boot by binding it to vfio-pci early (see the example below). This avoids driver unbind/rebind issues.
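
Two quick helpers for the steps above: a loop that prints every PCI device with its IOMMU group (step 2 reads much more easily with lspci names), and an example of binding a GPU to vfio-pci at boot (step 5). The PCI IDs below are placeholders; substitute the vendor:device IDs that lspci -nn reports for your own card.

```bash
# Print each PCI device together with its IOMMU group number
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#*/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done

# Bind a GPU (video + audio function) to vfio-pci before the host drivers can claim it.
# 10de:1b80 and 10de:10f0 are example IDs only; use the ones from `lspci -nn`.
echo "options vfio-pci ids=10de:1b80,10de:10f0" > /etc/modprobe.d/vfio.conf
update-initramfs -u -k all   # rebuild the initramfs so the binding takes effect at boot
```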

RAM Sizing Guide

| Use Case | Minimum RAM | Recommended |
| --- | --- | --- |
| 3-5 LXC containers only | 8 GB | 16 GB |
| 5-10 lightweight VMs | 16 GB | 32 GB |
| 10-15 mixed VMs | 32 GB | 64 GB |
| 15+ VMs or ZFS with ARC | 64 GB | 128 GB |
| Ceph node | 64 GB | 128 GB+ |

ZFS Memory Planning

ZFS uses RAM for its adaptive replacement cache (ARC). The ARC dramatically improves read performance but consumes a significant chunk of your memory budget.

| ZFS Pool Size | ARC Recommended | Total RAM (with 10 VMs) |
| --- | --- | --- |
| 2 TB | 4 GB | 36 GB (32 GB VMs + 4 GB ARC) |
| 8 TB | 10 GB | 42 GB (32 GB VMs + 10 GB ARC) |
| 16 TB | 18 GB | 50 GB (32 GB VMs + 18 GB ARC) |
| 32 TB | 34 GB | 66 GB (32 GB VMs + 34 GB ARC) |

Formula: 2 GB base + 1 GB per TB of ZFS storage.

Proxmox 8.1+ default: New installations cap ARC at 10% of RAM (max 16 GB). You can increase this in /etc/modprobe.d/zfs.conf if you have RAM to spare.
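
For example, raising the ARC ceiling to 16 GB might look like this. The value is in bytes, and the 16 GiB figure is only an illustration; size it to your own RAM budget.

```bash
# /etc/modprobe.d/zfs.conf: allow ZFS ARC to grow to 16 GiB (16 * 1024^3 bytes)
echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf
update-initramfs -u -k all   # persist the setting into the initramfs (takes effect after reboot)

# Optionally apply the new limit immediately without rebooting
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
```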

SLOG, L2ARC, and Special vdevs

| Vdev type | Purpose | Size Needed | When to Use |
| --- | --- | --- | --- |
| SLOG (ZFS Intent Log) | Accelerates synchronous writes | 8–16 GB NVMe with power-loss protection | NFS, iSCSI, databases — anything doing sync writes |
| L2ARC (Level 2 ARC) | Extends read cache to SSD when ARC is full | 50–200 GB SSD | When ARC hit rate is below 80% and you can’t add more RAM |
| Special vdev | Stores metadata and small files on fast storage | 64–256 GB NVMe (mirrored) | Large HDD pools where metadata seeks are the bottleneck |

SLOG tip: Only matters for synchronous writes. If your workload is mostly async (media streaming, container images), SLOG won’t help. Use an NVMe with power-loss protection (e.g., Intel Optane, Samsung PM9A3) — consumer NVMe without PLP defeats the purpose.

L2ARC tip: Don’t bother unless your ARC hit ratio is below 80%. Check with arc_summary or arcstat. L2ARC also consumes some main RAM for its index (~100–200 bytes per cached block).
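
A quick way to check before buying an L2ARC device: watch the ARC hit ratio under a typical workload (exact field names vary a little between OpenZFS versions).

```bash
arc_summary | grep -i "hit ratio"   # cumulative ARC cache hit ratio since boot
arcstat 5                           # live ARC statistics, sampled every 5 seconds
```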

Storage Configuration Tips

Boot Drive

  • 500 GB NVMe is plenty for Proxmox OS + ISO storage + container templates
  • Mirror two NVMe drives (ZFS mirror) if you want boot drive redundancy
  • Don’t use USB drives for boot — they wear out fast under Proxmox’s logging

VM Storage

  • NVMe for performance-critical VMs (databases, Nextcloud, Gitea)
  • SATA SSD for general VMs (Pi-hole, Home Assistant, monitoring)
  • HDD for bulk storage VMs (media servers, backup targets)

ZFS vs LVM-Thin

| Feature | ZFS | LVM-Thin |
| --- | --- | --- |
| Snapshots | Yes (instant, efficient) | Yes |
| Checksums | Yes (data integrity) | No |
| Compression | Yes (lz4 is nearly free) | No |
| RAM overhead | High (ARC cache) | Low |
| ECC recommended | Yes | Not critical |
| Complexity | Medium | Low |
| Replication | Native (zfs send/recv) | Requires additional tools |

Recommendation: Use ZFS if you have 32+ GB RAM and care about data integrity. Use LVM-thin if RAM is tight or you just want simple thin provisioning.
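
The replication row in the table is worth a concrete illustration. A minimal sketch of ZFS's built-in send/receive, with placeholder dataset and host names:

```bash
# Snapshot a VM disk dataset and copy it to another ZFS host over SSH
zfs snapshot rpool/data/vm-100-disk-0@nightly
zfs send rpool/data/vm-100-disk-0@nightly | ssh backup-host zfs receive -F tank/backups/vm-100-disk-0

# Later runs only need to send the delta between two snapshots
zfs snapshot rpool/data/vm-100-disk-0@nightly2
zfs send -i @nightly rpool/data/vm-100-disk-0@nightly2 | ssh backup-host zfs receive tank/backups/vm-100-disk-0
```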

Power Consumption and Running Costs

| Build | Idle Power | Load Power | Annual Cost ($0.12/kWh) |
| --- | --- | --- | --- |
| OptiPlex Micro (budget) | 12–18W | 35–50W | $13–$19/year |
| Mini PC — MS-01 / SER8 | 15–25W | 50–80W | $16–$26/year |
| OptiPlex Tower (mid) | 20–30W | 80–120W | $21–$32/year |
| DIY build | 25–40W | 100–180W | $26–$42/year |
| PowerEdge T340 (high) | 60–90W | 200–350W | $63–$95/year |

3-year total cost comparison (hardware + electricity):

| Build | Hardware Cost | 3-Year Electricity | 3-Year Total |
| --- | --- | --- | --- |
| OptiPlex Micro | $150 | $48 | $198 |
| Mini PC (MS-01 barebones) | $549 | $63 | $612 |
| OptiPlex Tower | $400 | $80 | $480 |
| DIY build | $630 | $100 | $730 |
| PowerEdge T340 | $800 | $240 | $1,040 |

What Can You Run on Each Build?

Budget (4C/32GB)

  • Pi-hole (LXC, 512 MB)
  • Vaultwarden (LXC, 512 MB)
  • Home Assistant (VM, 2 GB)
  • Nextcloud (VM, 4 GB)
  • Uptime Kuma and a handful of other lightweight LXC containers

Mini PC (8-14C/32-64GB)

  • Everything above, plus:
  • Jellyfin with iGPU transcoding (VM — 4 GB)
  • Immich for photo management (VM — 4 GB)
  • Gitea (VM — 2 GB)
  • pfSense/OPNsense router VM (2 GB, NIC passthrough on MS-01)
  • 5+ additional lightweight services

Mid-Range (8C/64GB)

  • Everything above, plus:
  • Grafana + Prometheus (VM — 4 GB)
  • TrueNAS VM for NAS storage (VM — 8-16 GB)
  • GPU passthrough for Plex or Jellyfin transcoding
  • 5+ additional services

High-End (8C+/128GB)

  • Full homelab: 15-20+ VMs running simultaneously
  • Ceph storage cluster node
  • Windows VM for testing
  • GPU passthrough for Plex/Jellyfin transcoding or AI inference
  • Development environments
  • Kubernetes cluster (k3s across multiple VMs)

Proxmox vs Docker: When You Need a Hypervisor

Not sure if you need Proxmox at all? Here’s when it makes sense.

| Factor | Docker Directly | Proxmox VE |
| --- | --- | --- |
| Setup complexity | Low — install Docker, write Compose files | Medium — install Proxmox, create VMs/CTs, then install Docker inside |
| Resource overhead | Minimal (~50 MB) | Moderate (2-4 GB for Proxmox + per-VM overhead) |
| Isolation | Process-level (shared kernel) | Full VM isolation (separate kernels) |
| PCI passthrough | Not possible | Full support (GPU, NIC, HBA, USB) |
| Network isolation | Docker networks (software) | VLANs, bridges, firewalls (hardware-level) |
| Snapshots/backup | Volume-level only | Full VM snapshots with RAM state |
| Run non-Linux OS | No | Yes (Windows, FreeBSD, TrueNAS) |
| High availability | Swarm/K8s (complex) | Built-in HA with clustering |
| Best for | Single-purpose servers, simple stacks | Multi-tenant, mixed workloads, lab environments |

Use Docker directly if you’re running only containers and want simplicity. Use Proxmox if you need VMs (Windows, TrueNAS, pfSense), PCI passthrough, network isolation between workloads, or you want to run multiple isolated Docker hosts on one machine.

Many Proxmox users run Docker inside a VM — you get the best of both worlds: VM-level isolation and snapshots with Docker’s ease of container management.
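
If you go that route, creating the Docker host VM from the Proxmox shell might look roughly like the sketch below; the VM ID, storage name, disk size, and ISO path are placeholders for your environment.

```bash
# Create a 4-core / 8 GB Debian VM to act as a Docker host (IDs and names are examples)
qm create 200 \
  --name docker-host \
  --cores 4 --memory 8192 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-single \
  --scsi0 local-lvm:64 \
  --ide2 local:iso/debian-12-netinst.iso,media=cdrom \
  --boot order='scsi0;ide2' \
  --ostype l26 --agent enabled=1
qm start 200   # then install Debian and Docker inside the VM as usual
```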

FAQ

What are the minimum hardware requirements for Proxmox VE?

Proxmox VE 8.4 officially requires a 64-bit CPU with VT-x (Intel) or AMD-V support, 2 GB of RAM, and a boot drive with at least 32 GB. That is the bare minimum to boot Proxmox — not to do anything useful. For any practical self-hosting use — running VMs, Docker via LXC, or services like Nextcloud — you need at least 16 GB RAM and a 256 GB SSD. The recommended starting point is 32 GB RAM with an NVMe boot drive. See the hardware requirements table above for minimum, recommended, and production tiers.

How much RAM do I need for Proxmox?

It depends on how many VMs you plan to run. Proxmox itself uses about 1–2 GB. Each lightweight VM (Pi-hole, Vaultwarden, Uptime Kuma) needs 512 MB–2 GB. Each heavy VM (Nextcloud, Jellyfin, TrueNAS) needs 4–16 GB. If you use ZFS, reserve an additional 2 GB base + 1 GB per TB of storage for ARC (adaptive replacement cache). A practical breakdown:

| Workload | Minimum RAM | Recommended RAM |
| --- | --- | --- |
| 3–5 LXC containers only | 8 GB | 16 GB |
| 5–10 lightweight VMs | 16 GB | 32 GB |
| 10–15 mixed VMs | 32 GB | 64 GB |
| 15+ VMs with ZFS | 64 GB | 128 GB |
| Ceph cluster node | 64 GB | 128 GB+ |

Start with 32 GB. It handles 5–10 VMs comfortably and leaves room for ZFS. Upgrade to 64 GB when you start running out.

What is the minimum RAM for Proxmox with ZFS?

The absolute minimum is 8 GB, but you’ll have almost nothing left for VMs after ZFS claims its ARC. Practically, 32 GB is the minimum for ZFS + VMs. The formula is: 2 GB (ZFS base) + 1 GB per TB of storage + your VM memory needs. For a 4 TB ZFS pool running 5 VMs: 2 + 4 + 16 = 22 GB minimum, so 32 GB gives you headroom. See the ZFS memory planning table for detailed sizing.

What is the best CPU for Proxmox VE?

For most self-hosters, the Intel Core i5-12400 (6 cores/12 threads, ~$140) is the safest pick — Quick Sync support for transcoding passthrough, proven 24/7 stability, and enough cores for 8-12 VMs. For more cores, the AMD Ryzen 7 9700X (8C/16T, 65W, ~$250) is excellent with ECC support on AM5 boards. For budget builds, a used Intel Core i7-10700 (8C/16T, ~$100) is unbeatable value. For production with ECC, the Intel Xeon E-2278G (8C/16T, ECC, iGPU) is the standard choice. Avoid Intel 13th/14th gen for always-on servers due to documented stability issues.

Can I run Proxmox on an Intel N100 mini PC?

Technically yes, but it’s a poor fit for VMs. The N100 has only 4 cores and most mini PCs max out at 16 GB RAM. Proxmox VMs need dedicated RAM allocations, so you’ll run out after 3-4 lightweight VMs. The Intel N305 (8 cores, 15W) is a much better choice for a low-power Proxmox node — it supports CPU pinning and handles 5-8 VMs. If you only need containers, use Docker directly on an N100 instead — see our Intel N100 guide.

Do I need ECC RAM for Proxmox?

Not strictly. Proxmox runs fine on consumer non-ECC RAM. But if you’re using ZFS (which you should for data integrity), ECC is strongly recommended. A single bit flip in RAM can corrupt ZFS metadata, and without ECC there’s no detection. For a homelab with replaceable data, non-ECC is acceptable. For important data (family photos, documents, financial records on ZFS), get ECC. AMD AM5 boards often support ECC with Ryzen CPUs — check your specific board.

Should I use Proxmox or just run Docker directly?

Use Docker directly if you’re running only containers and want simplicity — it has almost zero overhead and a much simpler learning curve. Use Proxmox if you need: VMs (Windows, TrueNAS, pfSense), PCI passthrough (GPU or NIC to a VM), network isolation between workloads, full VM snapshots, or high availability clustering. See the comparison table above for a detailed breakdown. Many users run Docker inside a Proxmox VM to get both VM isolation and container convenience.

How much storage do I need for Proxmox?

Proxmox OS: 32 GB minimum, 500 GB NVMe recommended (leaves room for ISOs and templates). VM storage: 500 GB–2 TB NVMe depending on workloads. Bulk storage (media, backups): as much HDD space as you need. Start with 500 GB NVMe for boot + VMs, add HDDs for bulk data later. If using ZFS, plan for at least 2 drives (mirror) for redundancy — a single-drive ZFS pool has no protection against drive failure.

Can I add a GPU later for transcoding passthrough?

Yes, if your system has a PCIe slot. Desktop towers and rack servers support this — install the GPU, enable VT-d in BIOS, bind the GPU to vfio-pci, and assign it to a VM. Mini PCs generally don’t have PCIe slots (except the Minisforum MS-01, which has a full x16 slot). SFF systems usually have low-profile-only slots. Plan for this upfront if transcoding or AI inference matters to you.

What is the best mini PC for Proxmox?

The Minisforum MS-01 is the standout choice — dual 10 GbE SFP+, 2x 2.5 GbE, a PCIe 4.0 x16 slot, 3 NVMe slots, and up to 64 GB DDR5. It’s the only mini PC with proper expansion for a serious Proxmox setup. For budget options, the Beelink SER8 (Ryzen 7 8745H, 32 GB, 2.5 GbE) is solid but lacks PCIe expansion. For ultra-low-power, an N305-based mini PC handles lightweight Proxmox loads at under 15W idle.

How do I enable PCI passthrough on Proxmox?

  1. Enable VT-d (Intel) or AMD-Vi in your BIOS
  2. Add intel_iommu=on iommu=pt (or amd_iommu=on iommu=pt) to your GRUB kernel parameters in /etc/default/grub
  3. Run update-grub and reboot
  4. Verify with dmesg | grep -e DMAR -e IOMMU — look for “IOMMU enabled”
  5. Check IOMMU groups with find /sys/kernel/iommu_groups/ -type l
  6. Add the device to a VM in the Proxmox web UI under Hardware → Add → PCI Device (or from the CLI, as shown below)
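
Step 6 also works from the shell. A hedged example, assuming the GPU sits at PCI address 01:00 and the target VM is ID 100 (pcie=1 requires the q35 machine type):

```bash
# Attach the device at 01:00 (all functions) to VM 100 as a PCIe device
qm set 100 --hostpci0 01:00,pcie=1
# For a GPU acting as the VM's primary display, add x-vga=1:
# qm set 100 --hostpci0 01:00,pcie=1,x-vga=1
```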

See our PCI passthrough section for Intel vs AMD differences and troubleshooting tips.

Is used server hardware worth it for Proxmox?

Absolutely — used enterprise hardware is the best value for Proxmox. A used Dell PowerEdge T340 with Xeon E-2278G, 64 GB ECC, and hot-swap bays costs $400–$600 — comparable to a mid-range desktop but with ECC, iDRAC remote management, and 24/7-rated components. The tradeoff is higher power draw (60-90W idle vs 15-25W for a mini PC) and fan noise. For a quiet home setup, used OptiPlex or ThinkCentre are better. For a dedicated closet or garage, used servers are unbeatable.
