RAID Levels Explained for Home Servers
Want redundancy without a storage engineering degree? Here’s the shortcut.
If you’re building a home server NAS with 2–4 drives, you need exactly one decision: how many drive failures can you survive? The rest is implementation detail. This guide covers every RAID level you’ll encounter, with real hardware examples from Synology, TrueNAS, and Unraid setups.
Quick Reference
| RAID Level | Min Drives | Usable Space (4× 8 TB) | Drives Can Fail | Rebuild Time (8 TB) | Best For |
|---|---|---|---|---|---|
| RAID 0 | 2 | 32 TB (100%) | 0 — any failure = total loss | N/A | Scratch data only |
| RAID 1 | 2 | 8 TB (50%) | 1 | 4–8 hours | 2-drive NAS (Synology DS224+) |
| RAID 5 / RAID-Z1 | 3 | 24 TB (75%) | 1 | 8–16 hours | 3-drive NAS, budget builds |
| RAID 6 / RAID-Z2 | 4 | 16 TB (50%) | 2 | 8–16 hours | 4+ drive NAS, irreplaceable data |
| RAID 10 | 4 | 16 TB (50%) | 1 per mirror pair | 4–8 hours | Database servers, VMs |
| SHR (Synology) | 2 | ~24 TB (optimized) | 1 (SHR-1) or 2 (SHR-2) | 8–16 hours | Any Synology NAS |
The short answer: With 2 drives, use RAID 1 or SHR. With 3+ drives, use RAID-Z2 (TrueNAS) or SHR-2 (Synology) if your data matters. RAID 5/Z1 is acceptable for 3-drive setups where you can’t afford the capacity loss of double parity.
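If you want to sanity-check the quick-reference numbers yourself, the capacity formulas behind the table reduce to a few lines of Python. This is an illustrative helper (not from any NAS vendor's tooling), assuming equal-size drives:

```python
def usable_tb(level: str, drives: int, size_tb: float) -> float:
    """Usable capacity for common RAID levels with equal-size drives."""
    if level == "raid0":
        return drives * size_tb            # striping, no redundancy
    if level in ("raid1", "mirror"):
        return size_tb                     # one full copy survives
    if level in ("raid5", "raidz1"):
        return (drives - 1) * size_tb      # one drive's worth of parity
    if level in ("raid6", "raidz2"):
        return (drives - 2) * size_tb      # two drives' worth of parity
    if level == "raid10":
        return drives // 2 * size_tb       # half the drives are mirrors
    raise ValueError(f"unknown level: {level}")

# Reproduce the "Usable Space (4x 8 TB)" column of the table:
for level in ("raid0", "raid5", "raid6", "raid10"):
    print(level, usable_tb(level, 4, 8), "TB")
```

Running it reproduces the table: 32, 24, 16, and 16 TB respectively.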
RAID Is Not Backup
This is the single most important thing to understand about RAID. RAID keeps your server running when a drive dies. Backups save your data when everything else fails.
RAID does NOT protect against:
| Threat | What Happens | RAID Helps? | Backup Helps? |
|---|---|---|---|
| Drive failure | One drive dies | Yes — array continues | No (not needed yet) |
| Accidental deletion | You rm -rf the wrong folder | No — deleted on all drives | Yes |
| Ransomware | Files encrypted in-place | No — encrypted on all drives | Yes |
| Fire / flood / theft | NAS destroyed | No — all drives gone | Yes (offsite copy) |
| RAID controller failure | Corrupt metadata | No — can destroy array | Yes |
| Silent data corruption | Bit rot, undetected errors | Only ZFS detects this | Yes |
| Bad firmware update | NAS OS breaks | No | Yes |
You need both RAID (for uptime) and backups (for data protection). A Synology DS425+ with SHR keeps your Plex server running when a drive fails at 2 AM. A 3-2-1 backup strategy saves your family photos when the NAS falls off the shelf.
RAID Levels in Detail
RAID 0 — Striping (No Redundancy)
Data is split across drives for speed. Two 8 TB drives give you 16 TB usable and roughly 2× sequential read/write throughput.
Drive 1: [A1][A3][A5][A7]
Drive 2: [A2][A4][A6][A8]
If ANY drive fails, ALL data is lost. RAID 0 doubles your failure risk — two drives means twice the chance of losing everything.
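The "twice the chance" claim is just compounding probabilities. A quick sketch, assuming independent failures at a given annual failure rate (optimistic for same-batch drives, which tend to fail together):

```python
def array_failure_prob(afr: float, drives: int) -> float:
    """Probability that at least one of `drives` fails within a year,
    assuming independent failures at annual failure rate `afr`."""
    return 1 - (1 - afr) ** drives

# At a 1% per-drive AFR, striping two drives nearly doubles the risk:
print(f"{array_failure_prob(0.01, 1):.4f}")   # 0.0100
print(f"{array_failure_prob(0.01, 2):.4f}")   # 0.0199
```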
| Metric | Value |
|---|---|
| Usable capacity | 100% (N × drive size) |
| Read speed | ~2× single drive |
| Write speed | ~2× single drive |
| Fault tolerance | None |
| Use case | Video editing scratch, temp data |
Never use RAID 0 for data you want to keep. Not for media libraries, not for Docker volumes, not for anything.
RAID 1 — Mirroring
Data is duplicated identically on both drives. Either drive can fail without data loss.
Drive 1: [A1][A2][A3][A4]
Drive 2: [A1][A2][A3][A4] (identical copy)
Real-world example: A Synology DS224+ ($370) with 2× 8 TB Seagate IronWolf drives ($160 each) in SHR-1 (which uses RAID 1 under the hood for 2 drives). Total cost: ~$690. Usable storage: 8 TB. One drive can fail; replace it and the array rebuilds automatically in 4–8 hours.
| Metric | Value |
|---|---|
| Usable capacity | 50% (1 × drive size) |
| Read speed | Up to 2× (reads from both drives) |
| Write speed | 1× (writes to both drives) |
| Fault tolerance | 1 drive |
| Rebuild time (8 TB HDD) | 4–8 hours |
Best for: 2-drive NAS setups, boot drive mirrors, anyone starting with just two drives.
RAID 5 — Striping with Single Parity
Data and parity are distributed across 3+ drives. The parity data allows reconstruction of any single failed drive.
Drive 1: [A1][B2][C_parity]
Drive 2: [A2][B_parity][C1]
Drive 3: [A_parity][B1][C2]
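The parity blocks are plain XOR: any one missing block in a stripe equals the XOR of all the surviving blocks in that stripe. A toy sketch with two data drives (byte values chosen arbitrarily for illustration):

```python
# Toy XOR parity: how RAID 5 reconstructs a missing block.
d1 = bytes([0x12, 0x34, 0x56])
d2 = bytes([0xAB, 0xCD, 0xEF])
parity = bytes(a ^ b for a, b in zip(d1, d2))   # written to the parity drive

# Drive 2 dies: XOR the survivors to reconstruct its block.
rebuilt = bytes(a ^ p for a, p in zip(d1, parity))
assert rebuilt == d2
print("reconstructed:", rebuilt.hex())          # abcdef
```

Real arrays do this per stripe across N−1 data blocks, which is exactly why a rebuild must read every sector of every surviving drive.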
Real-world example: A DIY NAS with a Jonsbo N3 case, Intel N305 board, and 3× 12 TB Seagate IronWolf drives in RAID-Z1 on TrueNAS SCALE. Usable space: 24 TB. Cost: ~$690 (hardware) + ~$480 (drives) = ~$1,170.
| Drive Count | Usable Space (8 TB drives) | Space Efficiency |
|---|---|---|
| 3 drives | 16 TB | 67% |
| 4 drives | 24 TB | 75% |
| 5 drives | 32 TB | 80% |
| 6 drives | 40 TB | 83% |
The rebuild risk problem: When an 8 TB drive fails in a RAID 5 array, the rebuild reads every sector of every remaining drive to reconstruct the missing data. This takes 8–16 hours on HDDs. During those hours, you have zero redundancy — a second failure kills the entire array. With modern 12–16 TB drives, rebuild times stretch to 16–36 hours. This is why RAID 5 is increasingly considered risky for large drives.
Best for: 3-drive setups where capacity matters and you accept single-drive fault tolerance.
RAID 6 — Striping with Double Parity
Like RAID 5 but with two independent parity blocks per stripe. Survives any two simultaneous drive failures.
Real-world example: A Synology DS923+ (~$600) with 4× 16 TB Seagate IronWolf Pro drives in SHR-2. Usable space: 32 TB. During a drive failure, you can order a replacement, wait for shipping, and rebuild — and if a second drive fails during that rebuild, your data survives.
| Drive Count | Usable Space (8 TB drives) | Space Efficiency |
|---|---|---|
| 4 drives | 16 TB | 50% |
| 5 drives | 24 TB | 60% |
| 6 drives | 32 TB | 67% |
| 8 drives | 48 TB | 75% |
Why RAID 6 matters with large drives: With 16 TB drives, a RAID 5 rebuild takes 16–36 hours. Annual failure rate (AFR) for NAS drives is roughly 0.5–1.5%. In a 4-drive array, the probability of a second failure during a 24-hour rebuild window is small (~0.003–0.01%) — but when it happens, you lose everything. RAID 6 eliminates this scenario entirely.
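That back-of-envelope probability is easy to reproduce. A sketch assuming independent failures at the AFR range above; correlated same-batch failures push the real number higher:

```python
def second_failure_prob(afr: float, remaining: int, rebuild_hours: float) -> float:
    """Chance that any of `remaining` drives fails during the rebuild
    window, assuming independent failures at annual rate `afr`."""
    p_window = afr * rebuild_hours / (365 * 24)   # per-drive risk in the window
    return 1 - (1 - p_window) ** remaining

# 3 surviving drives, 24-hour rebuild, AFR between 0.5% and 1.5%:
print(f"{second_failure_prob(0.005, 3, 24):.5%}")   # ~0.004 %
print(f"{second_failure_prob(0.015, 3, 24):.5%}")   # ~0.012 %
```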
Best for: 4+ drive arrays with 8 TB+ drives. The standard recommendation for any serious home NAS in 2026.
RAID 10 — Mirrored Stripes
Combines RAID 1 (mirroring) and RAID 0 (striping). Data is mirrored in pairs, then the pairs are striped for performance.
Pair 1: Drive 1 [A1][A3] ↔ Drive 2 [A1][A3] (mirror)
Pair 2: Drive 3 [A2][A4] ↔ Drive 4 [A2][A4] (mirror)
└── striped across pairs ──┘
| Metric | Value |
|---|---|
| Usable capacity | 50% (same as RAID 1) |
| Read speed | ~N× (reads from all drives) |
| Write speed | ~(N/2)× |
| Fault tolerance | 1 drive per mirror pair (can survive 2 if in different pairs) |
| Rebuild time | Fast — only mirror one drive, not full array |
Best for: Database servers (PostgreSQL, MariaDB) and VM storage where random I/O performance matters more than capacity. Rarely used in home NAS because the 50% space efficiency hurts when drives cost $150–300 each.
ZFS RAID Levels
ZFS implements RAID at the filesystem level, not the block level. This is fundamentally better: ZFS checksums every block of data and metadata, detects silent corruption (bit rot), and self-heals using redundant copies.
Why ZFS RAID Is Better Than Traditional RAID
| Feature | Traditional RAID (mdraid, hardware) | ZFS RAID |
|---|---|---|
| Data checksums | No — serves corrupt data silently | Yes — detects every corrupted block |
| Self-healing | No | Yes — reconstructs corrupt blocks from parity |
| Scrubbing | Basic surface scan | Full data integrity verification |
| Snapshots | Not included | Instant, space-efficient snapshots |
| Compression | Not included | Transparent LZ4 (1.5–2× space savings) |
| Bit rot detection | No | Yes |
| Copy-on-write | No | Yes — prevents write holes |
RAID-Z1 (Single Parity — ZFS Equivalent of RAID 5)
Same capacity formula as RAID 5: (N−1) × drive size. Survives one drive failure.
Real-world example: TrueNAS SCALE on an Intel N305 mini PC with 32 GB RAM and 3× 8 TB WD Red Plus drives in RAID-Z1. Usable: 16 TB. The 32 GB RAM feeds ZFS ARC cache, serving frequently-accessed data at RAM speed (~10,000+ random IOPS) instead of HDD speed (~150 IOPS).
Advantage over RAID 5: When a block is silently corrupted on disk (bit rot), ZFS detects the bad checksum and reconstructs the correct data from parity — automatically, transparently, without you knowing it happened. Traditional RAID 5 doesn’t know the data is corrupt. It just serves the bad block to your application.
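Conceptually, the self-heal path looks like this toy sketch. This is not ZFS code, just checksummed blocks backed by a mirror copy; SHA-256 stands in for ZFS's per-block checksums (fletcher4 by default):

```python
import hashlib

def read_with_heal(primary: dict, mirror: dict, key: str) -> bytes:
    """Return verified data; repair the primary copy if its checksum fails."""
    data, digest = primary[key]
    if hashlib.sha256(data).hexdigest() != digest:   # bit rot detected
        good, good_digest = mirror[key]
        assert hashlib.sha256(good).hexdigest() == good_digest
        primary[key] = (good, good_digest)            # heal the bad copy
        return good
    return data

block = b"family-photos.tar"
digest = hashlib.sha256(block).hexdigest()
primary = {"blk0": (b"family-photos.tXr", digest)}    # silently corrupted on disk
mirror = {"blk0": (block, digest)}

print(read_with_heal(primary, mirror, "blk0"))        # correct data returned
assert primary["blk0"][0] == block                    # bad copy repaired in place
```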
RAID-Z2 (Double Parity — ZFS Equivalent of RAID 6)
Same capacity as RAID 6: (N−2) × drive size. Survives two drive failures.
This is the recommended ZFS layout for home servers with 4+ drives. Example: TrueNAS on a Jonsbo N3 build with 4× 12 TB IronWolf drives in RAID-Z2. Usable: 24 TB. Protected against two simultaneous failures.
RAID-Z3 (Triple Parity)
Survives three simultaneous failures. Capacity: (N−3) × drive size.
Only practical for 8+ drive arrays with very large drives (16+ TB) where rebuild times are measured in days. Most home servers don’t need this.
ZFS Mirror (2-Drive ZFS)
Identical to RAID 1 but with ZFS checksumming and self-healing. Best for 2-drive setups on TrueNAS. If a block on one drive is corrupt, ZFS reads the good copy from the mirror and repairs the corrupt drive automatically.
Synology SHR — The Practical Choice
Synology Hybrid RAID (SHR) is Synology’s RAID implementation built on Linux mdraid + LVM. Its killer feature: it uses all available capacity when drives are different sizes.
SHR vs Traditional RAID with Mixed Drives
This is where SHR shines. Suppose you start with 2× 4 TB drives, then add 2× 8 TB drives later:
| Configuration | Traditional RAID 5 | SHR-1 |
|---|---|---|
| Drives | 2× 4 TB + 2× 8 TB | 2× 4 TB + 2× 8 TB |
| How it works | All drives treated as 4 TB | Creates optimized sub-arrays |
| Usable space | 12 TB (3 × 4 TB) | 16 TB |
| Wasted space | 8 TB (extra capacity unused) | 0 TB |
SHR creates multiple RAID volumes under the hood to utilize the full capacity of larger drives. You don’t need to think about it — DSM handles everything. This is why SHR is the default on every Synology NAS.
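As a rule of thumb, SHR-1 usable capacity works out to the total of all drives minus the largest drive, and SHR-2 subtracts the two largest. A sketch of that approximation (not Synology's actual allocator):

```python
def shr_usable_tb(sizes: list[float], redundancy: int = 1) -> float:
    """Approximate SHR usable capacity: total capacity minus the
    `redundancy` largest drives (SHR-1: 1, SHR-2: 2)."""
    if len(sizes) < redundancy + 1:
        raise ValueError("not enough drives for this redundancy level")
    return sum(sizes) - sum(sorted(sizes)[-redundancy:])

print(shr_usable_tb([4, 4, 8, 8]))       # 16 TB vs 12 TB for plain RAID 5
print(shr_usable_tb([4, 4, 8, 8], 2))    # 8 TB under SHR-2
print(shr_usable_tb([8, 8, 8, 8]))       # 24 TB, same as RAID 5 with equal drives
```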
SHR-1 vs SHR-2
| | SHR-1 | SHR-2 |
|---|---|---|
| Equivalent to | RAID 5 (with mixed-size optimization) | RAID 6 (with mixed-size optimization) |
| Fault tolerance | 1 drive | 2 drives |
| Minimum drives | 2 | 4 |
| Recommended for | 2–3 drive setups, media that can be re-downloaded | 4+ drive setups, irreplaceable data |
Which SHR to use on your Synology:
| Synology Model | Bays | Recommended RAID | Why |
|---|---|---|---|
| DS224+ | 2 | SHR-1 | Only 2 bays — SHR-1 = RAID 1 |
| DS425+ | 4 | SHR-2 if data matters, SHR-1 for max space | SHR-2 survives 2 failures |
| DS925+ | 4 | SHR-2 | Flagship — protect the investment |
| DS923+ | 4 (+5 via DX517) | SHR-2 | ECC RAM + SHR-2 = maximum safety |
| DS1621+ | 6 | SHR-2 | No question with 6 drives |
How to Choose Your RAID Level
By Drive Count
2 Drives:
- Synology → SHR-1 (uses RAID 1)
- TrueNAS → ZFS Mirror
- Unraid → 1 data + 1 parity
- Result: 50% usable, 1 drive fault tolerance
3 Drives:
- Synology → SHR-1 (uses RAID 5)
- TrueNAS → RAID-Z1
- Unraid → 2 data + 1 parity
- Result: 67% usable, 1 drive fault tolerance
4 Drives (the sweet spot):
| Your Priority | Choose | Platform | Usable (4× 8 TB) | Fault Tolerance |
|---|---|---|---|---|
| Maximum safety | RAID-Z2 / SHR-2 | TrueNAS / Synology | 16 TB (50%) | 2 drives |
| Balance | RAID-Z1 / SHR-1 | TrueNAS / Synology | 24 TB (75%) | 1 drive |
| Performance | RAID 10 | Any | 16 TB (50%) | 1 per pair |
| Flexibility | Unraid parity | Unraid | 24 TB (75%) | 1 drive |
Our recommendation for 4 drives: RAID-Z2 or SHR-2. You lose one drive of capacity compared to RAID-Z1/SHR-1, but you survive two drive failures. When each drive is 8–16 TB and costs $150–300, the extra capacity isn’t worth the risk.
6–8+ Drives:
- RAID-Z2 / SHR-2 minimum
- Consider RAID-Z3 for 8+ drives with 16+ TB each
- TrueNAS: consider splitting into two vdevs of 3–4 drives each for faster rebuilds
By Data Value
| What You’re Storing | Recommended RAID | Why |
|---|---|---|
| Media (movies, music) that can be re-downloaded | RAID-Z1 / SHR-1 | Replaceable data — single parity is fine |
| Family photos, documents, personal data | RAID-Z2 / SHR-2 | Irreplaceable — don’t gamble on rebuild survival |
| Business data, client files | RAID-Z2 / SHR-2 + offsite backup | Can’t afford ANY data loss |
| Docker volumes, app configs | RAID-Z1 / SHR-1 + backup | Reconstructable from backups |
| Temp/scratch (video editing, builds) | RAID 0 or single drive | Speed matters, data is disposable |
Rebuild Times: The Hidden Risk
When a drive fails, the array must reconstruct its data. During rebuild, every remaining drive is under heavy read load. If a second drive fails during this period (from the stress, an existing defect, or coincidence), you lose the array.
Real-World Rebuild Time Estimates
These are approximate times for consumer NAS hardware (Synology DS-series, DIY with N305/N100):
| Drive Size | RAID 5/Z1 Rebuild | RAID 6/Z2 Rebuild | Risk During Rebuild |
|---|---|---|---|
| 4 TB HDD | 4–8 hours | 4–8 hours | Low |
| 8 TB HDD | 8–16 hours | 8–16 hours | Moderate |
| 12 TB HDD | 14–28 hours | 14–28 hours | Elevated |
| 16 TB HDD | 20–40 hours | 20–40 hours | High |
| 20 TB HDD | 28–56 hours | 28–56 hours | Very high |
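These estimates follow from simple arithmetic: the replacement drive must be completely rewritten at the array's sustained rebuild throughput. A sketch assuming ~150 MB/s, a plausible figure for a consumer NAS HDD under rebuild load:

```python
def rebuild_hours(drive_tb: float, mb_per_s: float) -> float:
    """Best-case rebuild time: the whole replacement drive rewritten
    at the array's sustained throughput."""
    return drive_tb * 1e6 / mb_per_s / 3600

print(f"{rebuild_hours(8, 150):.1f} h")    # ~14.8 h, within the 8-16 h row
print(f"{rebuild_hours(16, 150):.1f} h")   # ~29.6 h, within the 20-40 h row
```

Real rebuilds run slower when the NAS is also serving files, which is how the upper ends of these ranges happen.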
Why rebuilds are dangerous: A Synology DS425+ with 4× 16 TB IronWolf Pro drives in SHR-1 takes ~20–40 hours to rebuild. During those hours, a single remaining drive failure destroys the array. Drives that were purchased together (same batch, same age) have correlated failure rates — if one fails, the others are statistically more likely to fail soon. This is the core argument for RAID 6/Z2/SHR-2.
Rebuild tips:
- Keep a cold spare. Having a replacement drive on the shelf means you start the rebuild immediately instead of waiting 2–3 days for shipping.
- Don’t use the NAS heavily during rebuild. Reduce container workloads if possible. Rebuilds already stress every drive — adding Plex transcoding or Nextcloud syncs on top slows the rebuild and increases failure risk.
- Monitor with SMART. Run `smartctl -a /dev/sdX` or check your NAS health dashboard. A drive showing reallocated sectors or pending sectors is failing — replace it proactively before it takes the array down.
- Schedule monthly scrubs. ZFS scrubs (TrueNAS) and Synology data scrubs verify every block on every drive. They catch silent corruption and bad sectors before they cause rebuild failures.
Cost Impact of RAID Choices
RAID level directly affects how much usable storage you get per dollar. Here’s the math with 8 TB Seagate IronWolf drives at ~$160 each (early 2026 pricing):
| Setup | Drives | Drive Cost | Usable Storage | Cost per Usable TB |
|---|---|---|---|---|
| RAID 1 (2 drives) | 2× 8 TB | $320 | 8 TB | $40/TB |
| RAID 5/Z1 (3 drives) | 3× 8 TB | $480 | 16 TB | $30/TB |
| RAID 5/Z1 (4 drives) | 4× 8 TB | $640 | 24 TB | $27/TB |
| RAID 6/Z2 (4 drives) | 4× 8 TB | $640 | 16 TB | $40/TB |
| RAID 6/Z2 (6 drives) | 6× 8 TB | $960 | 32 TB | $30/TB |
| RAID 10 (4 drives) | 4× 8 TB | $640 | 16 TB | $40/TB |
Observation: RAID 6/Z2 with 4 drives has the same cost-per-TB ($40) as RAID 1 with 2 drives. The extra safety of surviving 2 failures costs you nothing per TB compared to mirroring — you’re just buying more drives. At 6 drives, RAID 6/Z2 drops to $30/TB while maintaining double redundancy.
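The cost-per-TB column is straightforward to reproduce. Drive prices here are the assumed ~$160 figure from the table above:

```python
def cost_per_usable_tb(drives: int, price: float, size_tb: float,
                       parity_drives: int) -> float:
    """Total drive cost divided by usable capacity for parity RAID."""
    return drives * price / ((drives - parity_drives) * size_tb)

# RAID 6/Z2 at 4 vs 6 drives (8 TB IronWolf, ~$160 each):
print(round(cost_per_usable_tb(4, 160, 8, 2)))   # 40  ($/TB)
print(round(cost_per_usable_tb(6, 160, 8, 2)))   # 30  ($/TB)
```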
With 16 TB drives (~$300 each in early 2026):
| Setup | Drives | Drive Cost | Usable Storage | Cost per Usable TB |
|---|---|---|---|---|
| RAID 5/Z1 (4× 16 TB) | 4× 16 TB | $1,200 | 48 TB | $25/TB |
| RAID 6/Z2 (4× 16 TB) | 4× 16 TB | $1,200 | 32 TB | $38/TB |
| RAID 6/Z2 (6× 16 TB) | 6× 16 TB | $1,800 | 64 TB | $28/TB |
Larger drives improve cost-per-TB but increase rebuild risk — another reason to prefer RAID 6/Z2 as drive sizes grow.
Unraid: A Different Approach
Unraid doesn’t use traditional RAID. It stores files intact on individual drives with a separate parity drive. Key differences:
| Feature | Traditional RAID | Unraid |
|---|---|---|
| Drive mixing | Smallest drive sets per-drive capacity (except SHR) | Any size, any time |
| Adding drives | Rebuild or reshape required | Just add and assign |
| File recovery | Need full array to read data | Individual drives readable |
| Write speed | Limited by parity calculation | Limited by single drive (~180 MB/s) |
| Read speed | Striped (fast) | Single drive speed (unless cached) |
| Cache | Hardware dependent | Built-in SSD cache pool |
Unraid wins for: Gradually expanding storage over time with whatever drives you find on sale. Buy a 4 TB today, an 8 TB next month, a 16 TB when they go on sale.
Unraid loses for: Write-heavy workloads (every write goes through parity calculation on a single drive) and raw performance (no striping without cache).
FAQ
Does RAID replace backups?
No. RAID protects against drive failure only. You need a 3-2-1 backup strategy for real data protection: 3 copies, 2 different media types, 1 offsite.
Should I use hardware or software RAID?
Software RAID, always. Hardware RAID controllers are expensive ($200–500), create vendor lock-in (if the controller dies, you need the exact same model to read your drives), and offer no performance advantage for home NAS workloads. ZFS (TrueNAS), mdraid (Linux/Synology), and Unraid’s parity system are all software RAID and work excellently.
Can I add a drive to an existing RAID array?
| Platform | Can Add Drives? | How |
|---|---|---|
| Synology SHR | Yes | Insert drive → DSM expands volume automatically |
| Unraid | Yes | Add any drive at any time, run parity sync |
| TrueNAS ZFS | Yes (recent versions) | RAID-Z expansion (OpenZFS 2.3 / TrueNAS SCALE 24.10+) attaches one drive at a time |
| Traditional RAID 5/6 | Sometimes | Lengthy reshape operation (hours to days) |
For years, ZFS's inability to expand vdevs was its biggest practical limitation. OpenZFS 2.3 (shipped with TrueNAS SCALE 24.10 and later) finally added RAID-Z expansion: you can now grow a 4× 8 TB RAID-Z2 vdev to 5 drives without rebuilding the pool. Caveats remain: expansion is one drive at a time, and data written before the expansion keeps its old parity ratio until rewritten, so the usable-space gain is smaller than the raw math suggests. Synology SHR and Unraid still offer the smoothest incremental expansion, which is why some home users prefer them.
What happens when a RAID drive fails?
- The array enters “degraded” mode — all data is still accessible
- Performance drops (parity reconstruction on every read)
- You have zero additional redundancy (RAID 5/Z1) or one drive of redundancy remaining (RAID 6/Z2)
- Replace the failed drive → array rebuilds automatically
- During rebuild: don’t power off, minimize heavy I/O, monitor for SMART warnings on remaining drives
Replace failed drives immediately. Every hour in degraded mode is an hour closer to potential data loss.
Is ECC RAM required for ZFS?
Recommended but not required. ECC RAM detects and corrects single-bit memory errors that could corrupt data in transit to disk. For a home server, the risk of a memory error corrupting your ZFS pool is very low — but if your data is truly irreplaceable, the $20–40 premium for ECC RAM (on platforms that support it, like the Synology DS923+ or DIY builds with Intel Xeon/AMD EPYC boards) is cheap insurance.
JBOD — when does it make sense?
JBOD (Just a Bunch of Disks) means each drive is independent — no striping, no parity. One drive fails, you lose that drive’s data only.
JBOD makes sense when: every drive’s contents are backed up elsewhere, you want maximum capacity with zero overhead, and you don’t need uptime (you can tolerate the downtime of restoring a failed drive from backup).
Unraid’s parity system is essentially “JBOD with parity protection” — the best of both worlds for home use.
Related
- Best Hard Drives for NAS
- Best NAS for Home Servers
- DIY NAS Build Guide
- Synology vs TrueNAS
- Synology vs Unraid
- TrueNAS vs Unraid
- HDD vs SSD for Home Servers
- ZFS Hardware Requirements
- Hardware RAID vs Software RAID
- Best SSD for Home Servers
- NAS vs Desktop Drives
- Backup Strategy: The 3-2-1 Rule
- Getting Started with Self-Hosting