Best SSDs for Home Servers in 2026

Quick Recommendation

For your OS + Docker boot drive: Samsung 980 (500 GB, ~$35) or WD Blue SN580 (500 GB, ~$30). Both are reliable TLC NVMe drives with more than enough endurance for home server use.

For NAS cache (TrueNAS SLOG/L2ARC, Synology cache): Samsung 970 EVO Plus (500 GB, ~$35) or WD Red SN700 (500 GB, ~$45). The WD Red SN700 is specifically designed for NAS write-caching with higher endurance.

For bulk SSD storage (all-flash NAS): Crucial MX500 (2-4 TB SATA, ~$120-230) or Samsung 870 EVO (2-4 TB SATA, ~$130-250). SATA is fine for NAS arrays — at 1-2.5 GbE network speeds, the drives are never the bottleneck.

SSD Types Explained

NVMe (M.2)

Connects via PCIe. 2,000-7,000 MB/s sequential. Used for boot drives and NAS cache. The standard for any new build.

SATA SSD (2.5” or M.2)

Connects via SATA III. Max 550 MB/s sequential. Used for bulk SSD storage in NAS drive bays or as a budget boot drive. Perfectly fine for serving files over a 1-2.5 GbE network.

Key Specs

  • TLC vs QLC: TLC (Triple Level Cell) is more durable and faster for writes. QLC (Quad Level Cell) is cheaper but slower for sustained writes and has lower endurance. Prefer TLC for server use.
  • DRAM cache: SSDs with a DRAM cache maintain consistent performance. DRAMless SSDs slow down during sustained writes. For a boot drive that’s mostly read-heavy, DRAMless is fine. For NAS cache, get a DRAM drive.
  • TBW (Terabytes Written): The manufacturer’s rated endurance. 300-600 TBW for a 1 TB drive is standard. Home server workloads typically write 5-20 TB/year — decades of lifespan.

Top Picks

Boot / OS / Docker Drive (NVMe)

| Drive | Capacity | Speed (seq R/W) | TBW | DRAM | PCIe Gen | Price |
|---|---|---|---|---|---|---|
| Samsung 990 EVO Plus | 1 TB | 7,250/6,300 MB/s | 600 TBW | No (HMB) | 4x4 / 5x2 | ~$90 |
| WD Blue SN580 | 500 GB | 4,150/3,600 MB/s | 300 TBW | No (HMB) | 4x4 | ~$30 |
| Samsung 980 | 500 GB | 3,100/2,600 MB/s | 300 TBW | No (HMB) | 3x4 | ~$35 |
| Samsung 970 EVO Plus | 500 GB | 3,500/3,300 MB/s | 300 TBW | Yes | 3x4 | ~$35 |
| WD SN770 | 500 GB | 5,150/4,900 MB/s | 300 TBW | No (HMB) | 4x4 | ~$35 |
| Kingston NV2 | 500 GB | 3,500/2,100 MB/s | 160 TBW | No | 4x4 | ~$25 |

Recommendation: WD Blue SN580 for best value at 500 GB. If you want headroom, the Samsung 990 EVO Plus at 1 TB delivers 7,250 MB/s reads (over PCIe 4.0 x4 or 5.0 x2) and 600 TBW endurance — overkill for a boot drive, but a “buy once” choice built on Samsung’s 8th-gen V-NAND. The WD SN770 hits a sweet spot: Gen 4 speeds over 5,000 MB/s for ~$35.

For a home server boot drive, 256-500 GB is plenty. The OS, Docker images, and container volumes rarely exceed 100 GB unless you’re storing significant data locally. The 1 TB 990 EVO Plus only makes sense if you’re also storing database volumes on the boot drive.

NAS Cache Drive (NVMe)

| Drive | Capacity | Speed (seq R/W) | TBW | DRAM | Price |
|---|---|---|---|---|---|
| WD Red SN700 | 500 GB | 3,430/2,600 MB/s | 1,000 TBW | Yes | ~$45 |
| WD Red SN700 | 1 TB | 3,430/3,100 MB/s | 2,000 TBW | Yes | ~$70 |
| Samsung 970 EVO Plus | 500 GB | 3,500/3,300 MB/s | 300 TBW | Yes | ~$35 |
| Samsung 970 EVO Plus | 1 TB | 3,500/3,300 MB/s | 600 TBW | Yes | ~$60 |

Recommendation: WD Red SN700 for NAS write-caching (SLOG, Synology cache). Its 1,000 TBW endurance at 500 GB is 3x the Samsung 970 EVO Plus — important for write-intensive cache workloads. For read-caching (L2ARC) where writes are minimal, the cheaper Samsung 970 EVO Plus is fine.

Bulk SSD Storage (SATA)

| Drive | Capacity | Speed (seq R/W) | TBW | DRAM | Price |
|---|---|---|---|---|---|
| Crucial MX500 | 2 TB | 560/510 MB/s | 700 TBW | Yes | ~$120 |
| Samsung 870 EVO | 2 TB | 560/530 MB/s | 1,200 TBW | Yes | ~$130 |
| Crucial MX500 | 4 TB | 560/510 MB/s | 1,000 TBW | Yes | ~$230 |
| Samsung 870 EVO | 4 TB | 560/530 MB/s | 2,400 TBW | Yes | ~$250 |

Recommendation: Crucial MX500 for best value. Samsung 870 EVO for maximum endurance. Both are TLC with DRAM cache — the right combination for NAS array use.

Avoid for NAS arrays: Samsung 870 QVO and Crucial BX500. These are QLC drives with lower endurance and slower sustained writes. Fine for desktop use, not ideal for NAS workloads.

Real-World Latency: Why SSDs Transform Self-Hosting

Benchmark numbers are one thing. Here’s what the difference actually feels like in daily use:

| Operation | HDD (7200 RPM) | SATA SSD | NVMe SSD |
|---|---|---|---|
| Docker container startup | 8–15 s | 1–3 s | 0.5–1.5 s |
| Nextcloud page load (cold) | 3–6 s | 0.8–1.5 s | 0.3–0.8 s |
| Nextcloud file search (10K files) | 4–8 s | 0.5–1.5 s | 0.2–0.5 s |
| PostgreSQL query (1M rows) | 2–5 s | 0.1–0.3 s | 0.05–0.15 s |
| Immich photo thumbnail load | 1–3 s | 0.1–0.3 s | 0.05–0.1 s |
| Vaultwarden vault unlock | 2–4 s | 0.3–0.5 s | 0.1–0.2 s |
| docker compose up -d (10 containers) | 30–60 s | 5–10 s | 3–6 s |

The difference between HDD and NVMe for database-backed applications is not incremental — it’s a different class of experience. A $35 NVMe SSD makes every container feel instant.

SSD Endurance: Will It Last?

Worried about SSD wear? Here’s how long each drive lasts at different write workloads:

| Drive | Capacity | TBW Rating | Light Use (5 GB/day) | Moderate (20 GB/day) | Heavy (50 GB/day) |
|---|---|---|---|---|---|
| Kingston NV2 | 500 GB | 160 TBW | 87 years | 21 years | 8 years |
| Samsung 980 | 500 GB | 300 TBW | 164 years | 41 years | 16 years |
| WD Blue SN580 | 500 GB | 300 TBW | 164 years | 41 years | 16 years |
| WD Red SN700 | 500 GB | 1,000 TBW | 547 years | 136 years | 54 years |
| Samsung 970 EVO Plus | 1 TB | 600 TBW | 328 years | 82 years | 32 years |
| Crucial MX500 | 2 TB | 700 TBW | 383 years | 95 years | 38 years |
| Samsung 870 EVO | 2 TB | 1,200 TBW | 657 years | 164 years | 65 years |

Typical home server write volume: 2–10 GB/day (Docker logs, database writes, app data). Even the cheapest 500 GB SSD rated at 160 TBW will outlive the server by a decade. SSD endurance is not a real concern for home use — buy based on speed and price, not TBW.
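The lifespan figures above come from a simple division: rated TBW, converted to GB, divided by daily writes. A one-liner to run the numbers for your own drive — the TBW and write-rate values below are example figures, not measurements:

```shell
# Years of life at a given write rate: TBW (TB) * 1000 GB/TB / (GB per day * 365).
# Example values — substitute your drive's rated TBW and your actual daily writes.
tbw=300          # e.g. a 500 GB Samsung 980 or WD Blue SN580
gb_per_day=10
awk -v tbw="$tbw" -v gbd="$gb_per_day" \
  'BEGIN { printf "%.0f years\n", tbw * 1000 / (gbd * 365) }'
# prints: 82 years
```

To find your real write rate, record SMART’s “Data Units Written” (each NVMe data unit is 512,000 bytes) a day apart and take the difference.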

How Much SSD Do You Need?

| Use Case | Recommended SSD Size |
|---|---|
| OS + Docker (boot only) | 256-500 GB NVMe |
| OS + Docker + small app data | 500 GB - 1 TB NVMe |
| Synology NVMe cache | 2x 500 GB NVMe (mirrored) |
| TrueNAS SLOG | 16-64 GB NVMe (small but fast) |
| TrueNAS L2ARC | 500 GB - 2 TB NVMe |
| Unraid cache pool | 500 GB - 2 TB NVMe |
| All-flash NAS (small) | 2-4x 2 TB SATA SSD |

ZFS and Btrfs: Filesystem Considerations for SSDs

Your filesystem choice affects which SSDs make sense and how you configure them.

ZFS

ZFS is the gold standard for data integrity on home servers. SSD-specific concerns:

| ZFS Feature | SSD Role | What to Buy |
|---|---|---|
| SLOG (Sync Write Log) | Buffers synchronous writes | 16-64 GB NVMe partition, high endurance (WD Red SN700). Capacity doesn’t matter — IOPS and latency do. |
| L2ARC (Read Cache) | Caches frequently-read data | 500 GB - 2 TB NVMe. Endurance less critical (read-heavy). Samsung 980 or WD SN770 are fine. |
| Special VDEV (Metadata) | Stores ZFS metadata on fast media | 100-256 GB NVMe, mirrored pair. Dramatically speeds up file listings and scrubs on large pools. |
| All-flash pool | Every VDEV is SSD | TLC SATA SSDs (Crucial MX500, Samsung 870 EVO). Avoid QLC — write amplification under ZFS is higher than ext4. |

ZFS write amplification warning: ZFS’s copy-on-write design means every data change writes a full new block plus updated metadata. On a busy pool, actual drive writes can be 2-4x the logical data written. Factor this into TBW calculations — multiply your expected daily writes by 3 for ZFS.
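To fold that into the endurance math, divide the drive’s TBW by amplified writes rather than logical writes. A sketch with illustrative numbers (3x amplification, 10 GB/day logical):

```shell
# Effective lifespan under ZFS: rated TBW divided by logical writes times an
# assumed amplification factor. All figures here are illustrative examples.
tbw=700                  # e.g. a 2 TB Crucial MX500
logical_gb_per_day=10
amplification=3          # rough ZFS copy-on-write multiplier
awk -v tbw="$tbw" -v gbd="$logical_gb_per_day" -v amp="$amplification" \
  'BEGIN { printf "%.0f years\n", tbw * 1000 / (gbd * amp * 365) }'
# prints: 64 years
```

Even with 3x amplification a TLC drive lasts decades; the warning matters most for low-TBW QLC drives.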

Recordsize tuning for SSDs: ZFS defaults to 128K recordsize, which is fine for large sequential files (media). For databases (PostgreSQL, MariaDB), set recordsize=8K on the dataset to match the database page size. This reduces write amplification and improves random I/O — where SSDs excel.
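In practice that is one property per dataset. A sketch using hypothetical pool/dataset names:

```shell
# "tank" and the dataset names are placeholders — adjust to your layout.
# Match PostgreSQL's 8 KiB page size on the database dataset:
zfs create -o recordsize=8K tank/postgres
# Existing datasets can be changed too:
zfs set recordsize=8K tank/mariadb
# Leave media datasets at the 128K default, or raise it for large sequential files:
zfs set recordsize=1M tank/media
# Verify:
zfs get recordsize tank/postgres tank/media
```

Note that a recordsize change only applies to blocks written afterward — existing data keeps its old record size until rewritten.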

Btrfs

Btrfs is the default for Synology DSM and a popular choice on Linux. SSD considerations:

  • SSD cache in Synology: Synology uses Btrfs internally. NVMe cache SSDs accelerate random reads/writes for the Btrfs volume. Always mirror your cache SSDs — a single cache SSD failure without mirroring can corrupt the Btrfs volume.
  • Btrfs RAID5/6 on SSDs: Btrfs RAID5/6 has a long-standing write hole bug. Use RAID1 or RAID10 for Btrfs SSD arrays. This applies to both Synology and bare Linux.
  • SSD TRIM: Enable discard=async in your Btrfs mount options (or run fstrim weekly via cron) to maintain SSD performance over time. Without TRIM, SSDs gradually slow down as the controller runs out of pre-erased blocks.
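The TRIM options above look like this in practice (the mount point and UUID are placeholders):

```shell
# Option 1: async discard at mount time — /etc/fstab entry (placeholder UUID):
# UUID=xxxx-xxxx  /srv/data  btrfs  defaults,discard=async  0 0

# Option 2: periodic TRIM — most systemd distros ship a weekly timer:
systemctl enable --now fstrim.timer
# Or trim all mounted filesystems that support it, once, by hand:
fstrim -av
```

Either approach is sufficient on its own; a weekly fstrim is the simpler default for most home servers.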

ext4

For simple boot drives, ext4 with discard mount option is the pragmatic choice. No special SSD considerations — it just works. Most Docker setups use ext4 for the overlay2 storage driver.

SSD RAID and Array Planning

Planning an SSD array for your NAS? Here’s how different RAID levels affect cost, performance, and safety.

| RAID Level | Drives Needed | Usable Capacity | Read Speed | Write Speed | Fault Tolerance |
|---|---|---|---|---|---|
| RAID 0 (Stripe) | 2+ | 100% | Nx single | Nx single | None — one drive failure loses everything |
| RAID 1 (Mirror) | 2 | 50% | ~2x read | 1x write | 1 drive can fail |
| RAID 5 | 3+ | (N-1)/N | ~(N-1)x read | ~(N-2)x write | 1 drive can fail |
| RAID 10 (Mirror+Stripe) | 4+ | 50% | ~Nx read | ~(N/2)x write | 1 drive per mirror pair |
| ZFS Mirror | 2 | 50% | ~2x read | 1x write | 1 drive can fail |
| ZFS RAIDZ1 | 3+ | (N-1)/N | ~(N-1)x read | ~(N-2)x write | 1 drive can fail |
| ZFS RAIDZ2 | 4+ | (N-2)/N | ~(N-2)x read | ~(N-3)x write | 2 drives can fail |

For home server all-flash NAS: ZFS mirror (2 drives) or RAIDZ1 (3 drives) is the sweet spot. RAIDZ2 with 4+ drives is overkill for home use unless you’re storing irreplaceable data. Mirror gives the best random read IOPS — important for running database-backed apps off the array.

Cost example (2 TB usable):

| Config | Drives | Raw Capacity | Usable | Approx. Cost |
|---|---|---|---|---|
| 2x 2 TB SATA SSD (mirror) | Crucial MX500 | 4 TB | 2 TB | ~$240 |
| 3x 1 TB SATA SSD (RAIDZ1) | Crucial MX500 | 3 TB | 2 TB | ~$195 |
| 4x 1 TB SATA SSD (RAIDZ2) | Crucial MX500 | 4 TB | 2 TB | ~$260 |

Power Consumption

SSDs use significantly less power than HDDs:

| Drive Type | Idle | Active | Annual Cost ($0.12/kWh) |
|---|---|---|---|
| NVMe SSD | 0.5-2W | 3-8W | $0.53-2.10 |
| SATA SSD | 0.5-1W | 2-3W | $0.53-1.05 |
| 3.5” HDD (7200 RPM) | 5-8W | 7-10W | $5.26-8.41 |

An all-SSD NAS with 4 SATA SSDs idles at ~3W for the drives. The same NAS with 4 HDDs idles at ~24W. Over a year, that’s $22 in electricity savings — meaningful but not enough to offset the SSD price premium for large capacities.
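The annual figures are just watts × 8,760 hours ÷ 1,000 × rate. A quick check of the ~$22 savings claim (21 W difference at $0.12/kWh):

```shell
# Annual electricity cost of a constant load: W * 8760 h / 1000 * $/kWh.
watts=21        # 4 idle HDDs (~24 W) minus 4 idle SSDs (~3 W)
rate=0.12       # $/kWh, as above
awk -v w="$watts" -v r="$rate" \
  'BEGIN { printf "$%.2f/year\n", w * 8760 / 1000 * r }'
# prints: $22.08/year
```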

SSD Failure Modes: What Actually Goes Wrong

SSDs don’t fail like HDDs. Understanding how they fail helps you plan.

Wear-out (predictable). NAND flash cells degrade with each write cycle. TLC cells last ~1,000-3,000 write cycles. The controller tracks this and reports it via SMART attribute “Percentage Used” (NVMe) or “Wear Leveling Count” (SATA). When the SSD approaches 100% worn, it goes read-only — your data is safe, but you can’t write. Monitor with: smartctl -a /dev/nvme0n1 | grep "Percentage Used" — replace when it passes 90%.

Sudden death (rare but devastating). Controller failure or firmware bug causes the entire drive to become unresponsive. No SMART warning. No gradual degradation. The drive simply disappears from the system. This is why boot drives should be cheap and replaceable, and why you back up Docker volumes to a separate device.

Power loss data corruption. Consumer SSDs lack power-loss protection capacitors. If power cuts during a write, pending data in the SSD’s volatile write cache can be lost or corrupted. This rarely affects boot drives (the OS uses write barriers) but can corrupt databases. Mitigation: use a UPS for any server running databases — full power-loss protection is an enterprise SSD feature, and no consumer drive in this guide has it.

Firmware bugs. Some SSD models have shipped with firmware that causes data corruption under specific workloads. The Samsung 980 Pro had a widely-reported firmware bug in 2022 (since patched). Check manufacturer release notes and community forums before buying a new SSD model. Established, well-tested models (Samsung 970 EVO Plus, WD Blue SN580, Crucial MX500) are safer choices.

NAS-Specific SSD Caveats

Synology NVMe cache and power loss. If you use NVMe write caching on Synology and lose power without a UPS, cached writes that haven’t flushed to the HDD array are lost. Synology mitigates this with mirrored cache — mirror your NVMe cache SSDs to survive a single SSD failure.

TrueNAS SLOG sizing. ZFS SLOG (Separate Log) only buffers synchronous writes — it doesn’t need to be large. A 16-64 GB partition on a fast NVMe SSD is sufficient. Don’t waste money on a large SLOG device. The SSD’s IOPS and latency matter more than its capacity.
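Attaching a SLOG to an existing pool is a one-liner — the pool name and partition paths below are hypothetical:

```shell
# Attach a small NVMe partition as a dedicated log device ("tank" is a
# placeholder pool name; substitute your own partition path).
zpool add tank log /dev/nvme0n1p4
# Mirror the SLOG if you want to survive a log-device failure:
# zpool add tank log mirror /dev/nvme0n1p4 /dev/nvme1n1p4
zpool status tank   # the device appears under a separate "logs" section
```

Losing an unmirrored SLOG only risks the last few seconds of in-flight sync writes on a crash, which is why a single device is usually acceptable at home.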

QLC SSDs and RAID rebuild. If a QLC SSD in a RAID/ZFS array fails and the rebuild triggers sustained writes across remaining drives, QLC drives’ write speeds collapse under the sustained load (often to 50-100 MB/s vs 500 MB/s for TLC). This extends rebuild time dramatically. Stick with TLC for NAS arrays.

FAQ

NVMe or SATA SSD for my home server?

NVMe for boot/Docker drive (it’s in the M.2 slot anyway). SATA for bulk SSD storage in NAS drive bays. NVMe’s speed advantage over SATA only matters for boot drives and cache — file serving over 1-2.5 GbE can’t saturate even SATA speeds.

Will my SSD wear out?

Extremely unlikely for home server use. A 500 GB TLC SSD rated at 300 TBW lasts 15+ years at 50 GB/day of writes. Most home servers write 1-10 GB/day. Monitor SMART data (smartctl -a /dev/nvme0) to check remaining lifespan.

Do I need enterprise SSDs?

No. Consumer TLC NVMe and SATA SSDs have more than enough endurance for home use. Enterprise SSDs (Intel Optane, Samsung PM9A3) are designed for write-heavy datacenter workloads at 100x home server volume.

Should I mirror my NAS cache SSDs?

For Synology: yes, Synology recommends mirrored NVMe cache. If one cache SSD fails without mirroring, cached data may be lost. For TrueNAS SLOG: a single SSD is acceptable — SLOG only buffers sync writes temporarily. For Unraid cache: depends on your risk tolerance and what data lives on cache.

PCIe Gen 3 vs Gen 4 vs Gen 5 — does it matter for a home server?

Not much. Gen 3 NVMe (3,500 MB/s) is already 6x faster than SATA. Gen 4 (7,000 MB/s) and Gen 5 (14,000 MB/s) only help with sustained large-file transfers or database-heavy workloads. A Gen 3 Samsung 970 EVO Plus is indistinguishable from a Gen 4 Samsung 990 EVO Plus for Docker boot times. Buy whatever your motherboard supports — don’t upgrade your board for SSD speed.

Can I use a USB SSD as a boot drive?

Technically yes, but don’t. USB 3.0 maxes out at ~400 MB/s and adds latency, so boot times will be 2-3x slower than internal NVMe. Many USB-to-NVMe/SATA bridges also fail to pass TRIM commands through to the drive, degrading performance over time. Use USB SSDs for backups only.

How do I monitor SSD health on my server?

Install smartmontools and check periodically:

# NVMe health
smartctl -a /dev/nvme0n1 | grep -E "Percentage Used|Temperature|Data Units Written"

# SATA SSD health
smartctl -a /dev/sda | grep -E "Wear_Leveling|Temperature|Total_LBAs_Written"

Set up a cron job or use Uptime Kuma with the Docker agent to alert when “Percentage Used” exceeds 80%.
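A minimal cron-friendly sketch of that alert — the device path and 80% threshold are assumptions to adapt:

```shell
#!/bin/sh
# Warn when NVMe wear passes a threshold. Parses smartctl's
# "Percentage Used:  N%" line; device and threshold are placeholders.
DEV=/dev/nvme0n1
THRESHOLD=80
used=$(smartctl -a "$DEV" | awk -F: '/Percentage Used/ { gsub(/[ %]/, "", $2); print $2 }')
if [ "${used:-0}" -ge "$THRESHOLD" ]; then
    echo "WARNING: $DEV is ${used}% worn (threshold ${THRESHOLD}%)" >&2
    exit 1
fi
```

Drop it in /etc/cron.weekly/ (cron mails any non-empty output) or wire the exit code into your existing alerting.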

Should I over-provision my SSD for longer lifespan?

Over-provisioning (leaving 10-20% of capacity unpartitioned) helps maintain write performance and extends lifespan on SATA SSDs. NVMe drives handle this internally with their own reserved area. For NAS SATA arrays, leaving 10% unpartitioned is a reasonable precaution. For NVMe boot drives, don’t bother — modern controllers manage wear leveling well enough.

What about Intel Optane for ZFS SLOG?

Intel Optane (now discontinued) was the ideal SLOG device — low latency, extreme endurance, power-loss protection. If you can find a used Optane M10 or P1600X on eBay for under $50, grab it. Nothing in production today matches Optane’s write latency for sync-heavy ZFS workloads. For new purchases, the WD Red SN700 is the best current alternative.

SSD or HDD for Immich / Jellyfin media libraries?

HDDs. Media files (photos, videos, music) are large sequential reads — exactly what HDDs do well. An SSD won’t meaningfully speed up streaming a 4K movie. Use an NVMe SSD for the database and thumbnail cache (significant speedup), and HDDs for the bulk media storage. See our Immich setup guide and Jellyfin guide for storage configuration details.
