RAID Levels Explained for Home Servers

Want redundancy without a storage engineering degree? Here’s the shortcut.

If you’re building a home server NAS with 2–4 drives, you need exactly one decision: how many drive failures can you survive? The rest is implementation detail. This guide covers every RAID level you’ll encounter, with real hardware examples from Synology, TrueNAS, and Unraid setups.

Quick Reference

| RAID Level | Min Drives | Usable Space (4× 8 TB) | Drives Can Fail | Rebuild Time (8 TB) | Best For |
| --- | --- | --- | --- | --- | --- |
| RAID 0 | 2 | 32 TB (100%) | 0 — any failure = total loss | N/A | Scratch data only |
| RAID 1 | 2 | 8 TB (50%, with 2× 8 TB) | 1 | 4–8 hours | 2-drive NAS (Synology DS224+) |
| RAID 5 / RAID-Z1 | 3 | 24 TB (75%) | 1 | 8–16 hours | 3-drive NAS, budget builds |
| RAID 6 / RAID-Z2 | 4 | 16 TB (50%) | 2 | 8–16 hours | 4+ drive NAS, irreplaceable data |
| RAID 10 | 4 | 16 TB (50%) | 1 per mirror pair | 4–8 hours | Database servers, VMs |
| SHR (Synology) | 2 | ~24 TB (optimized) | 1 (SHR-1) or 2 (SHR-2) | 8–16 hours | Any Synology NAS |

The short answer: With 2 drives, use RAID 1 or SHR. With 3+ drives, use RAID-Z2 (TrueNAS) or SHR-2 (Synology) if your data matters. RAID 5/Z1 is acceptable for 3-drive setups where you can’t afford the capacity loss of double parity.
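
To sanity-check the quick-reference numbers, the capacity rules reduce to a few lines of arithmetic. A minimal sketch (the function name and structure are my own; SHR is omitted because its result depends on the drive-size mix):

```python
def usable_tb(level, n_drives, drive_tb):
    """Usable capacity for common RAID levels."""
    if level == "raid0":
        return n_drives * drive_tb            # no redundancy, full capacity
    if level == "raid1":
        return drive_tb                       # every drive holds the same copy
    if level in ("raid5", "raidz1"):
        return (n_drives - 1) * drive_tb      # one drive's worth of parity
    if level in ("raid6", "raidz2"):
        return (n_drives - 2) * drive_tb      # two drives' worth of parity
    if level == "raid10":
        return n_drives // 2 * drive_tb       # half the drives are mirrors
    raise ValueError(f"unknown level: {level}")

for level in ("raid0", "raid5", "raid6", "raid10"):
    print(level, usable_tb(level, 4, 8), "TB")  # matches the 4× 8 TB column
```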

RAID Is Not Backup

This is the single most important thing to understand about RAID. RAID keeps your server running when a drive dies. Backups save your data when everything else fails.

RAID does NOT protect against:

| Threat | What Happens | RAID Helps? | Backup Helps? |
| --- | --- | --- | --- |
| Drive failure | One drive dies | Yes — array continues | No (not needed yet) |
| Accidental deletion | You rm -rf the wrong folder | No — deleted on all drives | Yes |
| Ransomware | Files encrypted in-place | No — encrypted on all drives | Yes |
| Fire / flood / theft | NAS destroyed | No — all drives gone | Yes (offsite copy) |
| RAID controller failure | Corrupt metadata | No — can destroy array | Yes |
| Silent data corruption | Bit rot, undetected errors | Only ZFS detects this | Yes |
| Bad firmware update | NAS OS breaks | No | Yes |

You need both RAID (for uptime) and backups (for data protection). A Synology DS425+ with SHR keeps your Plex server running when a drive fails at 2 AM. A 3-2-1 backup strategy saves your family photos when the NAS falls off the shelf.

RAID Levels in Detail

RAID 0 — Striping (No Redundancy)

Data is split across drives for speed. Two 8 TB drives give you 16 TB usable and roughly 2× sequential read/write throughput.

Drive 1: [A1][A3][A5][A7]
Drive 2: [A2][A4][A6][A8]

If ANY drive fails, ALL data is lost. RAID 0 doubles your failure risk — two drives means twice the chance of losing everything.
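
The "doubles your risk" figure is simple compound probability. A quick check, assuming an illustrative 1% annual failure rate per drive (the AFR here is an assumption, not a measured figure):

```python
def p_raid0_loss(n_drives, afr=0.01):
    """RAID 0 loses everything if any drive fails: 1 - P(all drives survive)."""
    return 1 - (1 - afr) ** n_drives

print(f"1 drive:  {p_raid0_loss(1):.2%}")
print(f"2 drives: {p_raid0_loss(2):.2%}")  # roughly double the single-drive risk
```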

| Metric | Value |
| --- | --- |
| Usable capacity | 100% (N × drive size) |
| Read speed | ~2× single drive |
| Write speed | ~2× single drive |
| Fault tolerance | None |
| Use case | Video editing scratch, temp data |

Never use RAID 0 for data you want to keep. Not for media libraries, not for Docker volumes, not for anything.

RAID 1 — Mirroring

Data is duplicated identically on both drives. Either drive can fail without data loss.

Drive 1: [A1][A2][A3][A4]
Drive 2: [A1][A2][A3][A4]  (identical copy)

Real-world example: A Synology DS224+ ($370) with 2× 8 TB Seagate IronWolf drives ($160 each) in SHR-1 (which uses RAID 1 under the hood for 2 drives). Total cost: ~$690. Usable storage: 8 TB. One drive can fail; replace it and the array rebuilds automatically in 4–8 hours.

| Metric | Value |
| --- | --- |
| Usable capacity | 50% (1 × drive size) |
| Read speed | Up to 2× (reads from both drives) |
| Write speed | 1× (writes to both drives) |
| Fault tolerance | 1 drive |
| Rebuild time (8 TB HDD) | 4–8 hours |

Best for: 2-drive NAS setups, boot drive mirrors, anyone starting with just two drives.

RAID 5 — Striping with Single Parity

Data and parity are distributed across 3+ drives. The parity data allows reconstruction of any single failed drive.

Drive 1: [A1][B2][C_parity]
Drive 2: [A2][B_parity][C1]
Drive 3: [A_parity][B1][C2]

Real-world example: A DIY NAS with a Jonsbo N3 case, Intel N305 board, and 3× 12 TB Seagate IronWolf drives in RAID-Z1 on TrueNAS SCALE. Usable space: 24 TB. Cost: ~$690 (hardware) + ~$480 (drives) = ~$1,170.

| Drive Count | Usable Space (8 TB drives) | Space Efficiency |
| --- | --- | --- |
| 3 drives | 16 TB | 67% |
| 4 drives | 24 TB | 75% |
| 5 drives | 32 TB | 80% |
| 6 drives | 40 TB | 83% |

The rebuild risk problem: When an 8 TB drive fails in a RAID 5 array, the rebuild reads every sector of every remaining drive to reconstruct the missing data. This takes 8–16 hours on HDDs. During those hours, you have zero redundancy — a second failure kills the entire array. With modern 12–16 TB drives, rebuild times stretch to 16–36 hours. This is why RAID 5 is increasingly considered risky for large drives.
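
The quoted rebuild times follow from drive size divided by sustained throughput. A rough model (150 MB/s sustained is an assumed figure for a healthy 7200 rpm NAS drive; real rebuilds run slower under contention):

```python
def rebuild_hours(drive_tb, mb_per_s=150):
    """Best-case rebuild time: every sector of the failed drive is rewritten once."""
    return drive_tb * 1e12 / (mb_per_s * 1e6) / 3600

for tb in (8, 12, 16):
    print(f"{tb} TB: ~{rebuild_hours(tb):.0f} hours best case")
```

The best-case figures land near the low end of the article's ranges; background I/O and degraded-mode reads push real rebuilds toward the high end.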

Best for: 3-drive setups where capacity matters and you accept single-drive fault tolerance.

RAID 6 — Striping with Double Parity

Like RAID 5 but with two independent parity blocks per stripe. Survives any two simultaneous drive failures.

Real-world example: A Synology DS923+ (~$600) with 4× 16 TB Seagate IronWolf Pro drives in SHR-2. Usable space: 32 TB. During a drive failure, you can order a replacement, wait for shipping, and rebuild — and if a second drive fails during that rebuild, your data survives.

| Drive Count | Usable Space (8 TB drives) | Space Efficiency |
| --- | --- | --- |
| 4 drives | 16 TB | 50% |
| 5 drives | 24 TB | 60% |
| 6 drives | 32 TB | 67% |
| 8 drives | 48 TB | 75% |

Why RAID 6 matters with large drives: With 16 TB drives, a RAID 5 rebuild takes 16–36 hours. Annual failure rate (AFR) for NAS drives is roughly 0.5–1.5%. In a 4-drive array, the probability of a second failure during a 24-hour rebuild window is small (~0.003–0.01%) — but when it happens, you lose everything. RAID 6 eliminates this scenario entirely.
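
That estimate comes from scaling the annual failure rate down to the rebuild window. A sketch of the arithmetic (this assumes independent failures, which understates the real risk from same-batch drives):

```python
def p_second_failure(remaining_drives, window_hours, afr):
    """P(at least one remaining drive fails during the rebuild window)."""
    p_one = 1 - (1 - afr) ** (window_hours / 8760)  # annual rate scaled to window
    return 1 - (1 - p_one) ** remaining_drives

# 4-drive RAID 5: one drive failed, three remain, 24-hour rebuild
for afr in (0.005, 0.01, 0.015):
    print(f"AFR {afr:.1%}: {p_second_failure(3, 24, afr):.4%}")
```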

Best for: 4+ drive arrays with 8 TB+ drives. The standard recommendation for any serious home NAS in 2026.

RAID 10 — Mirrored Stripes

Combines RAID 1 (mirroring) and RAID 0 (striping). Data is mirrored in pairs, then the pairs are striped for performance.

Pair 1: Drive 1 [A1][A3] ↔ Drive 2 [A1][A3]  (mirror)
Pair 2: Drive 3 [A2][A4] ↔ Drive 4 [A2][A4]  (mirror)
         └── striped across pairs ──┘

| Metric | Value |
| --- | --- |
| Usable capacity | 50% (same as RAID 1) |
| Read speed | ~N× (reads from all drives) |
| Write speed | ~(N/2)× |
| Fault tolerance | 1 drive per mirror pair (can survive 2 if in different pairs) |
| Rebuild time | Fast — only mirror one drive, not full array |

Best for: Database servers (PostgreSQL, MariaDB) and VM storage where random I/O performance matters more than capacity. Rarely used in home NAS because the 50% space efficiency hurts when drives cost $150–300 each.
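
The "can survive 2 if in different pairs" caveat is easy to enumerate for a 4-drive array. A small sketch:

```python
from itertools import combinations

MIRROR_PAIRS = [(0, 1), (2, 3)]  # four drives grouped into two mirror pairs

def survives(failed_drives):
    """RAID 10 survives as long as no mirror pair has lost both members."""
    return all(not set(pair) <= set(failed_drives) for pair in MIRROR_PAIRS)

patterns = list(combinations(range(4), 2))
ok = sum(survives(f) for f in patterns)
print(f"{ok} of {len(patterns)} two-drive failure patterns survive")  # 4 of 6
```

So two simultaneous failures are survivable two times out of three, but you can only plan on one.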

ZFS RAID Levels

ZFS implements RAID at the filesystem level, not the block level. This is fundamentally better: ZFS checksums every block of data and metadata, detects silent corruption (bit rot), and self-heals using redundant copies.

Why ZFS RAID Is Better Than Traditional RAID

| Feature | Traditional RAID (mdraid, hardware) | ZFS RAID |
| --- | --- | --- |
| Data checksums | No — serves corrupt data silently | Yes — detects every corrupted block |
| Self-healing | No | Yes — reconstructs corrupt blocks from parity |
| Scrubbing | Basic surface scan | Full data integrity verification |
| Snapshots | Not included | Instant, space-efficient snapshots |
| Compression | Not included | Transparent LZ4 (1.5–2× space savings) |
| Bit rot detection | No | Yes |
| Copy-on-write | No | Yes — prevents write holes |

RAID-Z1 (Single Parity — ZFS Equivalent of RAID 5)

Same capacity formula as RAID 5: (N−1) × drive size. Survives one drive failure.

Real-world example: TrueNAS SCALE on an Intel N305 mini PC with 32 GB RAM and 3× 8 TB WD Red Plus drives in RAID-Z1. Usable: 16 TB. The 32 GB RAM feeds ZFS ARC cache, serving frequently-accessed data at RAM speed (~10,000+ random IOPS) instead of HDD speed (~150 IOPS).

Advantage over RAID 5: When a block is silently corrupted on disk (bit rot), ZFS detects the bad checksum and reconstructs the correct data from parity — automatically, transparently, without you knowing it happened. Traditional RAID 5 doesn’t know the data is corrupt. It just serves the bad block to your application.

RAID-Z2 (Double Parity — ZFS Equivalent of RAID 6)

Same capacity as RAID 6: (N−2) × drive size. Survives two drive failures.

This is the recommended ZFS layout for home servers with 4+ drives. Example: TrueNAS on a Jonsbo N3 build with 4× 12 TB IronWolf drives in RAID-Z2. Usable: 24 TB. Protected against two simultaneous failures.

RAID-Z3 (Triple Parity)

Survives three simultaneous failures. Capacity: (N−3) × drive size.

Only practical for 8+ drive arrays with very large drives (16+ TB) where rebuild times are measured in days. Most home servers don’t need this.

ZFS Mirror (2-Drive ZFS)

Identical to RAID 1 but with ZFS checksumming and self-healing. Best for 2-drive setups on TrueNAS. If a block on one drive is corrupt, ZFS reads the good copy from the mirror and repairs the corrupt drive automatically.

Synology SHR — The Practical Choice

Synology Hybrid RAID (SHR) is Synology’s RAID implementation built on Linux mdraid + LVM. Its killer feature: it uses all available capacity when drives are different sizes.

SHR vs Traditional RAID with Mixed Drives

This is where SHR shines. Suppose you start with 2× 4 TB drives, then add 2× 8 TB drives later:

| Configuration | Traditional RAID 5 | SHR-1 |
| --- | --- | --- |
| Drives | 2× 4 TB + 2× 8 TB | 2× 4 TB + 2× 8 TB |
| How it works | All drives treated as 4 TB | Creates optimized sub-arrays |
| Usable space | 12 TB (3 × 4 TB) | 16 TB (12 TB RAID 5 layer + 4 TB mirror layer) |
| Wasted space | 8 TB (extra capacity unused) | 0 TB |

SHR creates multiple RAID volumes under the hood to utilize the full capacity of larger drives. You don’t need to think about it — DSM handles everything. This is why SHR is the default on every Synology NAS.
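
The layering is mechanical enough to simulate. A simplified sketch of SHR-1's capacity math (my own approximation; real DSM also reserves space for system partitions and metadata, which this ignores):

```python
def shr1_usable(drives_tb):
    """Approximate SHR-1 usable space by layering sub-arrays across drives.

    Each layer slices every drive that still has capacity; a layer spanning
    k >= 2 drives contributes (k - 1) * slice of usable space (RAID 1 when
    k == 2, RAID 5 when k >= 3). A leftover single drive adds nothing.
    """
    remaining = sorted(drives_tb)
    usable = 0
    while len(remaining) >= 2:
        layer = remaining[0]                     # smallest remaining capacity
        usable += (len(remaining) - 1) * layer   # redundancy costs one share
        remaining = [r - layer for r in remaining if r > layer]
    return usable

print(shr1_usable([4, 4, 8, 8]))  # 12 TB RAID 5 layer + 4 TB mirror layer = 16
print(shr1_usable([8, 8, 8, 8]))  # equal drives: plain RAID 5 equivalent = 24
```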

SHR-1 vs SHR-2

| | SHR-1 | SHR-2 |
| --- | --- | --- |
| Equivalent to | RAID 5 (with mixed-size optimization) | RAID 6 (with mixed-size optimization) |
| Fault tolerance | 1 drive | 2 drives |
| Minimum drives | 2 | 4 |
| Recommended for | 2–3 drive setups, media that can be re-downloaded | 4+ drive setups, irreplaceable data |

Which SHR to use on your Synology:

| Synology Model | Bays | Recommended RAID | Why |
| --- | --- | --- | --- |
| DS224+ | 2 | SHR-1 | Only 2 bays — SHR-1 = RAID 1 |
| DS425+ | 4 | SHR-2 if data matters, SHR-1 for max space | SHR-2 survives 2 failures |
| DS925+ | 4 | SHR-2 | Flagship — protect the investment |
| DS923+ | 4 (+5 via DX517) | SHR-2 | ECC RAM + SHR-2 = maximum safety |
| DS1621+ | 6 | SHR-2 | No question with 6 drives |

How to Choose Your RAID Level

By Drive Count

2 Drives:

  • Synology → SHR-1 (uses RAID 1)
  • TrueNAS → ZFS Mirror
  • Unraid → 1 data + 1 parity
  • Result: 50% usable, 1 drive fault tolerance

3 Drives:

  • Synology → SHR-1 (uses RAID 5)
  • TrueNAS → RAID-Z1
  • Unraid → 2 data + 1 parity
  • Result: 67% usable, 1 drive fault tolerance

4 Drives (the sweet spot):

| Your Priority | Choose | Platform | Usable (4× 8 TB) | Fault Tolerance |
| --- | --- | --- | --- | --- |
| Maximum safety | RAID-Z2 / SHR-2 | TrueNAS / Synology | 16 TB (50%) | 2 drives |
| Balance | RAID-Z1 / SHR-1 | TrueNAS / Synology | 24 TB (75%) | 1 drive |
| Performance | RAID 10 | Any | 16 TB (50%) | 1 per pair |
| Flexibility | Unraid parity | Unraid | 24 TB (75%) | 1 drive |

Our recommendation for 4 drives: RAID-Z2 or SHR-2. You lose one drive of capacity compared to RAID-Z1/SHR-1, but you survive two drive failures. When each drive is 8–16 TB and costs $150–300, the extra capacity isn’t worth the risk.

6–8+ Drives:

  • RAID-Z2 / SHR-2 minimum
  • Consider RAID-Z3 for 8+ drives with 16+ TB each
  • TrueNAS: consider splitting into two vdevs of 3–4 drives each for faster rebuilds

By Data Value

| What You’re Storing | Recommended RAID | Why |
| --- | --- | --- |
| Media (movies, music) that can be re-downloaded | RAID-Z1 / SHR-1 | Replaceable data — single parity is fine |
| Family photos, documents, personal data | RAID-Z2 / SHR-2 | Irreplaceable — don’t gamble on rebuild survival |
| Business data, client files | RAID-Z2 / SHR-2 + offsite backup | Can’t afford ANY data loss |
| Docker volumes, app configs | RAID-Z1 / SHR-1 + backup | Reconstructable from backups |
| Temp/scratch (video editing, builds) | RAID 0 or single drive | Speed matters, data is disposable |

Rebuild Times: The Hidden Risk

When a drive fails, the array must reconstruct its data. During rebuild, every remaining drive is under heavy read load. If a second drive fails during this period (from the stress, an existing defect, or coincidence), you lose the array.

Real-World Rebuild Time Estimates

These are approximate times for consumer NAS hardware (Synology DS-series, DIY with N305/N100):

| Drive Size | RAID 5/Z1 Rebuild | RAID 6/Z2 Rebuild | Risk During Rebuild |
| --- | --- | --- | --- |
| 4 TB HDD | 4–8 hours | 4–8 hours | Low |
| 8 TB HDD | 8–16 hours | 8–16 hours | Moderate |
| 12 TB HDD | 14–28 hours | 14–28 hours | Elevated |
| 16 TB HDD | 20–40 hours | 20–40 hours | High |
| 20 TB HDD | 28–56 hours | 28–56 hours | Very high |

Why rebuilds are dangerous: A Synology DS425+ with 4× 16 TB IronWolf Pro drives in SHR-1 takes ~20–40 hours to rebuild. During those hours, a single remaining drive failure destroys the array. Drives that were purchased together (same batch, same age) have correlated failure rates — if one fails, the others are statistically more likely to fail soon. This is the core argument for RAID 6/Z2/SHR-2.

Rebuild tips:

  1. Keep a cold spare. Having a replacement drive on the shelf means you start the rebuild immediately instead of waiting 2–3 days for shipping.
  2. Don’t use the NAS heavily during rebuild. Reduce container workloads if possible. Rebuilds already stress every drive — adding Plex transcoding or Nextcloud syncs on top slows the rebuild and increases failure risk.
  3. Monitor with SMART. Run smartctl -a /dev/sdX or check your NAS health dashboard. A drive showing reallocated sectors or pending sectors is failing — replace it proactively before it takes the array down.
  4. Schedule monthly scrubs. ZFS scrubs (TrueNAS) and Synology data scrubs verify every block on every drive. They catch silent corruption and bad sectors before they cause rebuild failures.
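
Tip 3 can be automated. A sketch that scans `smartctl -A` output for the attributes most predictive of imminent failure (the sample text is illustrative, and some drives report composite raw values this simple parser won't handle):

```python
# Attributes whose raw value should stay at 0 on a healthy drive.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def failing_attributes(smartctl_output):
    """Return (attribute, raw_value) pairs with a nonzero raw value."""
    bad = []
    for line in smartctl_output.splitlines():
        fields = line.split()
        # smartctl -A attribute rows have 10 columns; RAW_VALUE is the last
        if len(fields) >= 10 and fields[1] in WATCH:
            raw = int(fields[9])
            if raw > 0:
                bad.append((fields[1], raw))
    return bad

SAMPLE = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
"""
print(failing_attributes(SAMPLE))  # [('Current_Pending_Sector', 8)]
```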

Cost Impact of RAID Choices

RAID level directly affects how much usable storage you get per dollar. Here’s the math with 8 TB Seagate IronWolf drives at ~$160 each (early 2026 pricing):

| Setup | Drives | Drive Cost | Usable Storage | Cost per Usable TB |
| --- | --- | --- | --- | --- |
| RAID 1 (2 drives) | 2× 8 TB | $320 | 8 TB | $40/TB |
| RAID 5/Z1 (3 drives) | 3× 8 TB | $480 | 16 TB | $30/TB |
| RAID 5/Z1 (4 drives) | 4× 8 TB | $640 | 24 TB | $27/TB |
| RAID 6/Z2 (4 drives) | 4× 8 TB | $640 | 16 TB | $40/TB |
| RAID 6/Z2 (6 drives) | 6× 8 TB | $960 | 32 TB | $30/TB |
| RAID 10 (4 drives) | 4× 8 TB | $640 | 16 TB | $40/TB |

Observation: RAID 6/Z2 with 4 drives has the same cost-per-TB ($40) as RAID 1 with 2 drives. The extra safety of surviving 2 failures costs you nothing per TB compared to mirroring — you’re just buying more drives. At 6 drives, RAID 6/Z2 drops to $30/TB while maintaining double redundancy.
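
The cost-per-TB figures generalize to any drive count and price:

```python
def cost_per_usable_tb(n_drives, drive_tb, drive_cost, parity_drives):
    """Total spend divided by usable capacity.

    parity_drives: 1 for RAID 5/Z1, 2 for RAID 6/Z2.
    """
    usable = (n_drives - parity_drives) * drive_tb
    return n_drives * drive_cost / usable

print(round(cost_per_usable_tb(4, 8, 160, 2)))   # RAID 6/Z2, 4x 8 TB:  40
print(round(cost_per_usable_tb(6, 8, 160, 2)))   # RAID 6/Z2, 6x 8 TB:  30
print(round(cost_per_usable_tb(6, 16, 300, 2)))  # RAID 6/Z2, 6x 16 TB: 28
```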

With 16 TB drives (~$300 each in early 2026):

| Setup | Drives | Drive Cost | Usable Storage | Cost per Usable TB |
| --- | --- | --- | --- | --- |
| RAID 5/Z1 (4× 16 TB) | 4× 16 TB | $1,200 | 48 TB | $25/TB |
| RAID 6/Z2 (4× 16 TB) | 4× 16 TB | $1,200 | 32 TB | $38/TB |
| RAID 6/Z2 (6× 16 TB) | 6× 16 TB | $1,800 | 64 TB | $28/TB |

Larger drives improve cost-per-TB but increase rebuild risk — another reason to prefer RAID 6/Z2 as drive sizes grow.

Unraid: A Different Approach

Unraid doesn’t use traditional RAID. It stores files intact on individual drives with a separate parity drive. Key differences:

| Feature | Traditional RAID | Unraid |
| --- | --- | --- |
| Drive mixing | All must match (except SHR) | Any size, any time |
| Adding drives | Rebuild or reshape required | Just add and assign |
| File recovery | Need full array to read data | Individual drives readable |
| Write speed | Limited by parity calculation | Limited by single drive (~180 MB/s) |
| Read speed | Striped (fast) | Single drive speed (unless cached) |
| Cache | Hardware dependent | Built-in SSD cache pool |

Unraid wins for: Gradually expanding storage over time with whatever drives you find on sale. Buy a 4 TB today, an 8 TB next month, a 16 TB when they go on sale.

Unraid loses for: Write-heavy workloads (every write goes through parity calculation on a single drive) and raw performance (no striping without cache).

FAQ

Does RAID replace backups?

No. RAID protects against drive failure only. You need a 3-2-1 backup strategy for real data protection: 3 copies, 2 different media types, 1 offsite.

Should I use hardware or software RAID?

Software RAID, always. Hardware RAID controllers are expensive ($200–500), create vendor lock-in (if the controller dies, you need the exact same model to read your drives), and offer no performance advantage for home NAS workloads. ZFS (TrueNAS), mdraid (Linux/Synology), and Unraid’s parity system are all software RAID and work excellently.

Can I add a drive to an existing RAID array?

| Platform | Can Add Drives? | How |
| --- | --- | --- |
| Synology SHR | Yes | Insert drive → DSM expands volume automatically |
| Unraid | Yes | Add any drive at any time, run parity sync |
| TrueNAS ZFS | Yes, one at a time (OpenZFS 2.3+) | RAID-Z expansion, or add a new vdev to the pool |
| Traditional RAID 5/6 | Sometimes | Lengthy reshape operation (hours to days) |

Vdev expansion was long ZFS's biggest practical limitation: if you started with 4× 8 TB in RAID-Z2, you couldn't add a 5th drive to that vdev; you had to add a second vdev (another set of drives) or replace all 4 drives with larger ones. OpenZFS 2.3 (shipped in TrueNAS SCALE 24.10 and later) added RAID-Z expansion, which grows a vdev one drive at a time, though data written before the expansion keeps its original parity ratio until rewritten. Synology SHR and Unraid still offer the smoothest incremental expansion, which is why some home users prefer them.

What happens when a RAID drive fails?

  1. The array enters “degraded” mode — all data is still accessible
  2. Performance drops (parity reconstruction on every read)
  3. You have zero additional redundancy (RAID 5/Z1) or one drive of redundancy remaining (RAID 6/Z2)
  4. Replace the failed drive → array rebuilds automatically
  5. During rebuild: don’t power off, minimize heavy I/O, monitor for SMART warnings on remaining drives

Replace failed drives immediately. Every hour in degraded mode is an hour closer to potential data loss.

Is ECC RAM required for ZFS?

Recommended but not required. ECC RAM detects and corrects single-bit memory errors that could corrupt data in transit to disk. For a home server, the risk of a memory error corrupting your ZFS pool is very low — but if your data is truly irreplaceable, the $20–40 premium for ECC RAM (on platforms that support it, like the Synology DS923+ or DIY builds with Intel Xeon/AMD EPYC boards) is cheap insurance.

JBOD — when does it make sense?

JBOD (Just a Bunch of Disks) means each drive is independent — no striping, no parity. One drive fails, you lose that drive’s data only.

JBOD makes sense when: every drive’s contents are backed up elsewhere, you want maximum capacity with zero overhead, and you don’t need uptime (you can tolerate the downtime of restoring a failed drive from backup).

Unraid’s parity system is essentially “JBOD with parity protection” — the best of both worlds for home use.
