RAID 5 Recovery

RAID 5 Data Recovery

No Fix - No Fee!

Our engineers have extensive experience recovering data from RAID servers. With 25 years' experience in the data recovery industry, we can help you recover your data securely.
RAID 5 Recovery

Software Fault From £495

2-4 Days

Mechanical Fault From £895

2-4 Days

Critical Service From £995

1-2 Days

Need help recovering your data?

Call us on 01752 479547 or use the form below to make an enquiry.
Chat with us
Monday-Friday: 9am-6pm

Plymouth Data Recovery — No.1 RAID-5 / RAID-6 / RAID-10 Recovery Specialists (25+ years)

From home users to global enterprises and government teams, Plymouth Data Recovery has decades of hands-on experience recovering RAID-5, RAID-6 and RAID-10 arrays across software RAID, hardware RAID, desktop and rackmount NAS, and rack servers. We operate forensically: stabilise each member → acquire read-only clones → reconstruct the array virtually → repair filesystems/LUNs on the clone only. Free diagnostics with clear options before any paid work begins.



Platforms & vendors we support

Controllers/HBAs & software RAID: Dell PERC, HPE Smart Array, Broadcom/LSI MegaRAID, Adaptec/Microchip, Areca, HighPoint, Promise, Intel RST/e; mdadm/LVM, Windows Dynamic Disks/Storage Spaces, ZFS and Btrfs software RAID sets.

Filesystems/LUNs: NTFS, ReFS, exFAT, APFS, HFS+, ext2/3/4, XFS, Btrfs, ZFS, VMFS (VMware), iSCSI/NFS LUNs, CSVFS.

Media: 3.5″/2.5″ SATA/SAS HDD, SATA SSD, NVMe (M.2/U.2/U.3/AIC), 512e/4Kn.


Top 15 NAS / external RAID brands in the UK & popular models

(Representative; we support all vendors.)

  1. Synology — DS923+, DS1522+, DS224+, RS1221+, RS2421+

  2. QNAP — TS-464/TS-453D, TVS-h674, TS-873A, TS-451D2

  3. Netgear ReadyNAS — RN424, RN524X, 2304, 528X

  4. Western Digital (WD) — My Cloud EX2 Ultra, PR4100, DL2100

  5. Buffalo — TeraStation TS3410DN/5410DN, LinkStation LS220D

  6. Asustor — AS6704T (Lockerstor 4), AS5304T (Nimbustor 4)

  7. TerraMaster — F4-423, F5-422, T9-423

  8. LaCie (Seagate) — 2big/6big/12big (DAS/NAS use), d2 Professional

  9. TrueNAS / iXsystems — Mini X/X+, R-Series

  10. Drobo (legacy) — 5N/5N2 (BeyondRAID)

  11. Lenovo/Iomega (legacy) — PX4-300D/PX6-300D

  12. Zyxel — NAS326, NAS542

  13. Promise — Pegasus R-series (DAS), VTrak (rack)

  14. Seagate — Business Storage, BlackArmor (legacy)

  15. OWC — ThunderBay 4/8 (SoftRAID 0/5/10 on host)

Top 15 rack-server RAID platforms (5/6/10) & common models

  1. Dell EMC PowerEdge — R650/R750/R760, R740xd

  2. HPE ProLiant — DL360/DL380 Gen10/Gen11

  3. Lenovo ThinkSystem — SR630/SR650

  4. Supermicro — 1029/2029/6049; SuperStorage 6029/6049

  5. Cisco UCS — C220/C240 M5/M6

  6. Fujitsu PRIMERGY — RX2530/RX2540

  7. QCT (Quanta) — D52B-1U/D52BQ-2U

  8. Inspur — NF5280 M6/M7

  9. Huawei — FusionServer Pro 2288/5288

  10. ASUS Server — RS520/RS700 series

  11. Gigabyte Server — R272-Z/R282-Z

  12. Tyan — Thunder/Transport 1U/2U

  13. Areca — ARC-1883/1886 controller builds

  14. Adaptec by Microchip — SmartRAID 31xx/32xx

  15. iXsystems/TrueNAS R-Series — R10/R20 JBOD behind HBAs


Our professional RAID workflow (safeguards first)

  1. Stabilise & image every member using hardware imagers (HDD: per-head zoning, tuned timeouts/ECC; SSD/NVMe: read-retry/voltage stepping, thermal control). Originals remain read-only.

  2. Solve geometry offline: member order, start offsets, chunk/stripe size, parity rotation (left/right, symmetric/asymmetric), RAID-6 P/Q (Reed–Solomon GF(2⁸)).

  3. Virtual array reconstruction from clones; stripe-signature/entropy scoring, majority vote per stripe; RS decoding for RAID-6.

  4. Filesystem/LUN repair on the virtual array (NTFS/ReFS/APFS/HFS+/ext/XFS/Btrfs/ZFS/VMFS, CSVFS, LVM/Storage Spaces).

  5. Verification: MD5/SHA-256 per file, representative sample opens, engineering report (hashing sketched below).
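
A minimal sketch of the per-file hashing behind step 5, assuming the recovered data has already been exported to an ordinary directory tree. The paths and the CSV manifest format are illustrative, not our production tooling.

    # Per-file SHA-256 verification sketch (illustrative only).
    # Walks an exported tree, hashes each file in chunks and writes a manifest
    # that can later be re-checked against the customer's copy.
    import csv
    import hashlib
    import os

    def sha256_of(path, chunk_size=1 << 20):
        """Stream a file through SHA-256 without loading it into memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk_size), b""):
                h.update(block)
        return h.hexdigest()

    def write_manifest(export_root, manifest_path):
        """Record path, size and SHA-256 for every recovered file."""
        with open(manifest_path, "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["path", "bytes", "sha256"])
            for root, _dirs, files in os.walk(export_root):
                for name in files:
                    full = os.path.join(root, name)
                    writer.writerow([os.path.relpath(full, export_root),
                                     os.path.getsize(full),
                                     sha256_of(full)])

    # Example (hypothetical paths):
    # write_manifest("/mnt/recovered_export", "manifest.csv")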

Limits: Truly overwritten blocks cannot be restored. Encrypted volumes (BitLocker/FileVault/LUKS/SED) require valid keys. We maximise outcomes with journals, snapshots, parity math and structure-aware carving.
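
To make the parity math concrete, here is a toy Python sketch of regenerating one missing RAID-5 member from the surviving clones by XOR. It assumes the geometry from step 2 (member order, offsets, chunk size, rotation) is already solved and works purely on image files; the file names in the example are hypothetical.

    # Toy RAID-5 sketch: regenerate one missing member by XOR of the survivors.
    # Because RAID-5 parity is plain XOR across a stripe row, the missing column
    # is the XOR of all remaining columns, whether it held data or parity there.
    # Always run against clones/images, never the original disks.

    def xor_blocks(blocks):
        """XOR a list of equal-length byte strings together."""
        out = bytearray(len(blocks[0]))
        for b in blocks:
            for i, byte in enumerate(b):
                out[i] ^= byte
        return bytes(out)

    def rebuild_missing_member(surviving_images, chunk_size=64 * 1024):
        """Yield the missing member chunk by chunk from the surviving images."""
        offset = 0
        while True:
            chunks = []
            for img in surviving_images:
                img.seek(offset)
                chunk = img.read(chunk_size)
                if not chunk:
                    return
                chunks.append(chunk.ljust(chunk_size, b"\x00"))
            yield xor_blocks(chunks)
            offset += chunk_size

    # Example (hypothetical file names):
    # with open("disk0.img", "rb") as d0, open("disk1.img", "rb") as d1, \
    #      open("disk2.img", "rb") as d2, open("disk3.img", "wb") as out:
    #     for chunk in rebuild_missing_member([d0, d1, d2]):
    #         out.write(chunk)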


Top 100 RAID-5 / RAID-6 / RAID-10 failures we recover from — with the technical recovery process for each

Format: Issue → Why it happens → Lab method (always on cloned members & a virtual array).

Disk/media faults (HDD/SSD/NVMe) — 1–20

  1. Single-disk failure (RAID-5) → Loss of one parity/data member → Image all; assemble degraded; repair FS.

  2. Dual-disk failure (RAID-6) → Within RS capability → Image both; RS decode missing stripes (P/Q).

  3. Dual-disk failure (RAID-5) → Beyond parity → Salvage by targeted re-reads; carve critical files; partial recovery.

  4. Two disks in same mirror (RAID-10) → Mirror-leg loss → Composite image from any readable regions; re-stripe with surviving legs.

  5. Head crash on one member → Physical surface damage → Donor HSA, per-head imaging; parity/mirror fill.

  6. Heads stuck (stiction) → Park failure → Free/swap heads; cold imaging; parity/mirror fill.

  7. Motor seizure → Bearings → Motor/chassis transplant; servo align; image.

  8. Preamp failure → Front-end dead → HSA swap; conservative profile.

  9. Media rings/scratches → Local loss → Heatmap by head/zone; skip-range multipass; parity reconstruct.

  10. Translator corruption (0-LBA) → SA module damage → Vendor fix; include recovered image in set.

  11. G-list “slow issue” → Firmware throttling → Disable BG ops; head-zoned imaging.

  12. SMR stalls → CMR/SMR mix; timeouts → Zone-aware imaging; parity aids holes.

  13. Helium leak instability → Thermal window → Short passes; aggregate best reads.

  14. NVMe controller resets → PCIe link flaps → Clamp link (Gen3 x1/x2), cool; burst imaging.

  15. NVMe 0-capacity → Namespace loss → Admin export; else chip-off (SED caveats).

  16. SSD controller dead → MCU/PMIC → Loader export or chip-off + FTL; integrate image.

  17. SSD NAND retention loss → Threshold drift → Cold imaging, voltage stepping, read-retry matrices.

  18. SSD RO-lock → FW safety → Image within window; logical repair later.

  19. Sector size mismatch (512e/4Kn) → Geometry skew → Normalise logical size in model.

  20. Bridge/backplane CRC storms → Cables/HBA → Replace; QD=1 small-block imaging (imaging loop sketched after this list).
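
Several of the items above (skip-range multipass, burst imaging, QD=1 small-block reads) come down to the same discipline: read in small blocks, retry only a few times, log what could not be read and return to it later or fill it from parity. A simplified sketch of that logic, assuming a Linux-style device or image path; our lab uses hardware imagers rather than code like this.

    # Simplified imaging loop: small sequential reads, limited retries,
    # unreadable ranges zero-filled and logged for later passes or parity fill.
    # Hardware imagers add head maps, timeout and power control; this only
    # shows the skip-and-return logic. Device/file names are illustrative.
    import os

    BLOCK = 64 * 1024          # small blocks keep a failing drive responsive
    RETRIES = 2

    def clone_with_skip_map(src_path, dst_path, log_path):
        skipped = []                               # (offset, length) of holes
        with open(src_path, "rb", buffering=0) as src, open(dst_path, "wb") as dst:
            size = src.seek(0, os.SEEK_END)
            pos = 0
            while pos < size:
                want = min(BLOCK, size - pos)
                data = None
                for _attempt in range(RETRIES + 1):
                    try:
                        src.seek(pos)
                        data = src.read(want)
                        break
                    except OSError:                # media error on this range
                        data = None
                if data is None:
                    data = b""
                if len(data) < want:               # short/failed read: pad and log
                    skipped.append((pos + len(data), want - len(data)))
                    data += b"\x00" * (want - len(data))
                dst.write(data)
                pos += want
        with open(log_path, "w") as log:
            for offset, length in skipped:
                log.write(f"{offset}\t{length}\n")
        return skipped

    # Example: clone_with_skip_map("/dev/sdX", "member3.img", "member3.skips")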

Controller/HBA/cache/metadata — 21–35

  21. Controller failure → No import → Clone members; software model; reconstruct.

  22. BBU/cache loss (write-back) → Torn stripes → Journal-first FS repair; stripe consensus.

  23. Foreign config mismatch → NVRAM vs on-disk → Prefer on-disk metadata; ignore stale NVRAM.

  24. Firmware bug alters LBA bands → Translator change → Correct mapping layer before assemble.

  25. Hot-spare added wrongly → Poisoned rebuild → Use pre-rebuild clones; exclude contaminated writes.

  26. Write-intent bitmap corruption → False “in-sync” → Full block compare; virtual resync.

  27. Patrol read rewrites sectors → Divergence → Choose epoch with journal coherence.

  28. Stripe cache policy change → Reordering → Block-level voting on parity consistency (scoring sketched after this list).

  29. Controller imports with different chunk → Layout shift → Detect chunk via FS patterns; rebuild.

  30. Meta cached, not flushed → Inconsistent superblocks → Epoch voting; pick coherent set.

  31. BBU removed, forced write-through → Performance drop/torn writes → Journal-first; accept some torn files.

  32. Foreign import to dissimilar controller → Parity rotation misread → Determine rotation (L/R, Sym/Asym) via parity analysis.

  33. Controller converts JBOD silently → Flags changed → Ignore controller; content-driven assembly.

  34. Parity scrub on failing disk → Accelerated decay → Stop scrub; image weakest first.

  35. OCE/ORLM reshape metadata drift → Mixed epochs → Prefer pre-reshape epoch; salvage tail separately.
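
When controller metadata cannot be trusted (items 26, 28 and 30 above), the member content itself can vote: in a healthy RAID-5 stripe row every column XORs to zero, so the fraction of sampled rows that do is a usable coherence score for a candidate member set or epoch. A minimal sketch, assuming equal-sized clone images and a known chunk size.

    # Parity-consistency scoring sketch: score a candidate set of RAID-5
    # member images by how many sampled stripe rows XOR to zero.

    def row_is_consistent(chunks):
        """True if the XOR of all chunks in this stripe row is all zeros."""
        acc = bytearray(len(chunks[0]))
        for c in chunks:
            for i, byte in enumerate(c):
                acc[i] ^= byte
        return not any(acc)

    def parity_consistency(images, chunk_size=64 * 1024, rows_to_sample=256):
        """Return the fraction of sampled rows that are parity-consistent."""
        consistent = 0
        for row in range(rows_to_sample):
            offset = row * chunk_size
            chunks = []
            for img in images:
                img.seek(offset)
                c = img.read(chunk_size)
                if len(c) < chunk_size:            # ran off the end of an image
                    return consistent / max(row, 1)
                chunks.append(c)
            if row_is_consistent(chunks):
                consistent += 1
        return consistent / rows_to_sample

    # Example: score two candidate epochs of clones and keep the higher one.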

Geometry/administration mistakes — 36–50

  36. Unknown member order → No labels → Stripe-signature solver; entropy scoring; FS coherence test.

  37. Unknown chunk/stripe size → Vendor default unknown → Brute-force plausible sizes using FS signatures (NTFS MFT spacing, VMFS headers); solver sketched after this list.

  38. Unknown start offsets → Hidden HPA/padding → Locate via GPT/FS headers; adjust per member.

  39. HPA/DCO on some members → Capacity differs → Remove on clones; align.

  40. Mix of 4Kn & 512e in set → Read anomalies → Logical normalisation; reassemble.

  41. Accidental “recreate” → Metadata overwritten → Rebuild from content; carve prior GPT/FS.

  42. Wrong disk replaced → Good disk removed → Re-include correct member from clone.

  43. Slots reordered → Human error → Map by WWN/serial; order solver validates.

  44. Expand failed mid-way → Array half-reshaped → Build pre-expand model; salvage extended area separately.

  45. Convert 5→6 or 10 misfired → New headers written → Ignore new config; raw images → old model.

  46. Degraded set used for weeks → Divergence → Prefer earliest coherent epoch; journal-guided selection.

  47. Quick format on top of array → Boot/bitmap overwritten → Rebuild GPT/boot; deep carve.

  48. Full format on top of array → Broad overwrite → Signature carving; partial outcomes.

  49. Disk signature collision → OS confusion → New signature on clone; RO mount.

  50. Dynamic Disk / Storage Spaces map loss → LDM/Slab DB missing → Rebuild DB/slab map from backups/on-disk copies.
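
Items 36 and 37 compress a scored search into one line. A minimal sketch of that search, assuming equal-sized clone images and an NTFS volume (so MFT "FILE" headers fall on 1 KiB boundaries); a real solver also varies start offsets and parity rotation and adds entropy scoring.

    # Order/chunk-size brute force sketch: assemble a small window of the array
    # for each candidate (member order, chunk size) and score it by counting
    # NTFS MFT record headers ("FILE") on 1 KiB boundaries. The interleave is
    # a naive RAID-0-style one that ignores parity columns; it is only meant
    # to rank candidates, and the search is pruned in practice.
    from itertools import permutations

    CANDIDATE_CHUNKS_KIB = [16, 32, 64, 128, 256, 512]   # common defaults

    def assemble_window(images, order, chunk_kib, stripes=128):
        chunk = chunk_kib * 1024
        out = bytearray()
        for stripe in range(stripes):
            for member in order:
                images[member].seek(stripe * chunk)
                out += images[member].read(chunk)
        return bytes(out)

    def mft_score(window):
        """Count 'FILE' record signatures at 1 KiB boundaries."""
        return sum(1 for off in range(0, len(window) - 4, 1024)
                   if window[off:off + 4] == b"FILE")

    def solve_geometry(images):
        best = (None, None, -1)
        for order in permutations(range(len(images))):
            for chunk_kib in CANDIDATE_CHUNKS_KIB:
                score = mft_score(assemble_window(images, order, chunk_kib))
                if score > best[2]:
                    best = (order, chunk_kib, score)
        return best   # (member order, chunk size in KiB, score)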

RAID-5 specifics — 51–63

  51. Parity rotation unknown (L/R, Sym/Asym) → Controller-dependent → Detect parity placement via XOR checks; verify with FS (rotation sketch after this list).

  52. Write-hole after power loss → Partial stripe writes → Journal-first; per-stripe majority vote.

  53. URE during rebuild → Read error mid-rebuild → Image first; reconstruct virtually from best reads; avoid in-place rebuild.

  54. Bad sector at parity location → Can’t verify stripe → Majority from remaining data stripes; cautious parity recompute.

  55. Controller filled parity with wrong seed → FW bug → Recompute parity in software; replace bad ranges in virtual model.

  56. Data/parity swap after firmware → Flag flips → Detect by entropy (parity looks random); correct mapping.

  57. Stale parity after cache loss → Old parity lines → Use freshest data stripes per timestamp; recompute parity for model.

  58. Silent sector remap drift on one disk → Different LBA mapping → Composite “best-of” LBA image before assembly.

  59. Hybrid SSD cache stripes missing → Fast tier loss → Merge hot-tier images first; then assemble RAID-5.

  60. Multi-path SAS presenting duplicate members → Ghost disks → Deduplicate by WWN/serial before build.

  61. RAID-5E/5EE with integrated hot-spare space → Different rotation → Use controller docs/patterns; model accordingly.

  62. RAID-50 (striped 5s) misunderstanding → Nested layout → Solve each RAID-5 group first; then higher-level stripe.

  63. Parallels/VMware datastore atop RAID-5 → Torn VMDKs → Restore from coherent epoch; CID chain repair.
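
Item 51's rotation detection, sketched: the four common RAID-5 layouts place parity and order data differently, so each candidate is assembled from the clones and scored against filesystem structure, and the best-scoring layout wins. Member order, start offsets and chunk size are assumed already solved, and the NTFS-based scorer is just one possible check.

    # Parity-rotation detection sketch for RAID-5: try the four common layouts,
    # assemble a data-only window for each, and keep the layout whose output
    # best matches filesystem patterns ("verify with FS").

    LAYOUTS = ("left-asymmetric", "left-symmetric",
               "right-asymmetric", "right-symmetric")

    def parity_column(layout, row, n):
        """Column holding parity in this stripe row for the given layout."""
        return (n - 1 - row % n) if layout.startswith("left") else row % n

    def data_columns(layout, row, n):
        """Columns holding data for this row, in logical data order."""
        p = parity_column(layout, row, n)
        if layout.endswith("symmetric"):
            # data starts just after the parity column and wraps around it
            return [(p + 1 + i) % n for i in range(n - 1)]
        # asymmetric: data fills ascending column order, skipping parity
        return [c for c in range(n) if c != p]

    def assemble_data(images, layout, chunk=64 * 1024, rows=256):
        n, out = len(images), bytearray()
        for row in range(rows):
            for col in data_columns(layout, row, n):
                images[col].seek(row * chunk)
                out += images[col].read(chunk)
        return bytes(out)

    def fs_score(window):
        """Crude score: NTFS MFT 'FILE' headers on 1 KiB boundaries."""
        return sum(1 for off in range(0, len(window) - 4, 1024)
                   if window[off:off + 4] == b"FILE")

    def detect_rotation(images):
        return max(LAYOUTS, key=lambda lay: fs_score(assemble_data(images, lay)))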

RAID-6 specifics — 64–78

  64. Dual disk failure within RS capability → P/Q can repair → RS decode per stripe in GF(2⁸); verify with FS (decode sketch after this list).

  65. Third disk failing (intermittent) → Soon beyond RS → Prioritised imaging on weakest; RS decode remaining gaps.

  66. Unknown parity order (P/Q placement) → Rotation pattern varies → Detect by solving trial stripes; choose lowest syndrome error.

  67. Wrong Galois field polynomial → Controller variant → Try common polynomials; validate against known plaintext ranges.

  68. Partial RS tables (firmware bug) → Incorrect parity written → Recompute P/Q in software; prefer data stripes.

  69. RAID-6 with 4Kn/512e mix → Misaligned math → Normalise sector size; re-do RS operations.

  70. RAID-60 (striped 6s) mis-identified → Nested → Solve each 6-set; then stripe.

  71. UREs on more than two disks → Beyond parity → Composite best-of; carve critical files; partials likely.

  72. SSD wear pattern desync across set → Many ECC corrected reads → Aggregate multi-pass best pages, then RS.

  73. Patrol scrub wrote bad P/Q → Poisoned parity → Choose epoch pre-scrub; ignore poisoned stripes.

  74. Cache-tier RS over NVMe fails → Link flaps cause holes → Burst imaging; reconstruct missing with RS.

  75. Controller swapped P and Q roles → Post-update bug → Detect via solvability; swap in model.

  76. Q computed over wrong data order → Layout bug → Reorder stripe columns until solvable; fix model.

  77. Asymmetric spare injection mid-array → Column drift → Recalculate column indexes before RS.

  78. RAID-6 failure during expand → Mixed width stripes → Build separate pre/post-expand models; stitch.
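
Items 64 and 66 to 68 rest on the Reed-Solomon algebra behind P and Q. A per-byte sketch of that algebra, assuming the common RAID-6 construction (field polynomial 0x11D, generator {02}); production code applies the same maths per chunk, with the solved column order and rotation.

    # RAID-6 sketch: P/Q generation and double-erasure recovery in GF(2^8).
    # P is the XOR of the data bytes; Q is sum(g^i * D_i) in the field.

    POLY = 0x11D
    EXP = [0] * 512
    LOG = [0] * 256
    _x = 1
    for _i in range(255):                 # build log/antilog tables for g = {02}
        EXP[_i] = _x
        LOG[_x] = _i
        _x <<= 1
        if _x & 0x100:
            _x ^= POLY
    for _i in range(255, 512):
        EXP[_i] = EXP[_i - 255]

    def gmul(a, b):
        if a == 0 or b == 0:
            return 0
        return EXP[LOG[a] + LOG[b]]

    def gdiv(a, b):
        if a == 0:
            return 0
        return EXP[(LOG[a] - LOG[b]) % 255]

    def pq(data):
        """P and Q syndromes for one byte position across the data disks."""
        p = q = 0
        for i, d in enumerate(data):
            p ^= d
            q ^= gmul(EXP[i], d)          # g^i * D_i
        return p, q

    def recover_two(data, x, y, p, q):
        """Recover data disks x and y (x < y) from the survivors plus P and Q."""
        px, qx = pq([0 if i in (x, y) else d for i, d in enumerate(data)])
        pd, qd = p ^ px, q ^ qx           # syndromes of the missing pair
        dy = gdiv(qd ^ gmul(EXP[x], pd), EXP[x] ^ EXP[y])
        dx = pd ^ dy
        return dx, dy

    # Quick self-check on one byte column:
    if __name__ == "__main__":
        data = [0x37, 0x80, 0x55, 0x00, 0xF2]
        p, q = pq(data)
        assert recover_two(data, 1, 3, p, q) == (data[1], data[3])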

RAID-10 specifics — 79–88

  79. Mirror-leg failure in multiple stripes → Common in dense sets → Build composite per LBA across legs; re-stripe (composite sketch after this list).

  80. Offset mirrors (controller quirk) → Non-zero offset → Detect & correct offset before stripe assembly.

  81. Mis-paired mirrors after hot-swap → Wrong leg pairing → Pair by write-history & header signatures; then stripe.

  82. Write-hole across two legs → Partial writes on both → Journal-first; select sane copy per block.

  83. RAID-10 over NVMe with thermal throttling → Divergent recency → Prefer blocks from device without resets; fill from other.

  84. 4-disk 10 with one leg cloned wrongly → Identical serials confuse mapping → Use WWN and unique ID areas to map.

  85. Nested 10→50 migration failed → Partial conversion → Rebuild original 10 from pre-migration epoch if possible.

  86. RAID-0 over mirrored LUNs (1+0 at SAN) → Two tiers → Solve lower mirrors first, then host stripe.

  87. Controller reports 1E (striped mirrors) → Rotation differs → Identify interleave pattern; model accordingly.

  88. Single-leg encryption only → Misconfigured → Decrypt encrypted leg (keys needed) or prefer clear leg.
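
Item 79's per-LBA composite, sketched: each mirror leg has been imaged with a skip map of unreadable ranges (as in the imaging sketch earlier), and the composite takes every block from whichever leg read it cleanly. The file names and skip-map format are illustrative.

    # Mirror "best-of" composite sketch: merge two imaged legs of a RAID-10
    # mirror into one composite image. Blocks unreadable on both legs stay
    # zero-filled and are reported for parity/carving work later.

    BLOCK = 64 * 1024

    def load_skips(path, block=BLOCK):
        """Skip map: one 'offset<TAB>length' line per unreadable range."""
        bad = set()
        with open(path) as f:
            for line in f:
                offset, length = (int(v) for v in line.split())
                for b in range(offset // block, (offset + length - 1) // block + 1):
                    bad.add(b)
        return bad

    def merge_legs(leg_a, leg_b, bad_a, bad_b, out):
        unreadable = []
        block_no = 0
        while True:
            leg_a.seek(block_no * BLOCK)
            data = leg_a.read(BLOCK)
            if not data:
                break
            if block_no in bad_a:
                if block_no not in bad_b:              # other leg read it cleanly
                    leg_b.seek(block_no * BLOCK)
                    data = leg_b.read(BLOCK)
                else:
                    unreadable.append(block_no)        # hole on both legs
            out.write(data)
            block_no += 1
        return unreadable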

Filesystem/LUN/virtualisation on top — 89–100

  89. VMFS datastore header/superblock lost → ESXi won’t mount → Rebuild VMFS metadata; restore VMDK chain (CID/parentCID).

  90. VMDK/vSAN object health issues → Missing components → Reconstruct from surviving objects; export VMDKs.

  91. Hyper-V VHDX differencing chain corrupt → Parent pointer bad → Fix headers; merge AVHDX chain; mount guest FS.

  92. CSVFS fencing after incident → Owner issues → Mount images RO; extract VHDX; repair guest FS.

  93. iSCSI LUN file on NAS damaged → File-based LUN → Carve LUN extents; mount guest NTFS/ext/APFS.

  94. Thin-provisioned LUN zeroed holes → Over-commit → Recover from snapshots/older extents; accept holes.

  95. ReFS integrity mismatches → CoW poisoning → Export checksum-valid objects only.

  96. APFS on array checkpoint mismatch → Checkpoint superblocks disagree → Choose coherent checkpoint; export RO.

  97. NTFS $MFT/$LogFile corruption → Torn metadata writes → Replay log; rebuild $MFT; relink orphans (record-scan sketch after this list).

  98. XFS journal corruption → Interrupted log writes → Manual log replay; directory rebuild.

  99. Btrfs chunk map errors → Damaged chunk/root trees → Superblock pair selection; tree-search roots; export subvols/snapshots.

  100. BitLocker/FileVault/LUKS atop array → Encrypted volume → Image first; decrypt clone with keys; export RO.
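
Item 97 starts with a record scavenge. A trimmed sketch, assuming an NTFS volume image extracted from the virtual array; it ignores update-sequence fixups, attribute lists and non-resident file names, so it shows the shape of the approach rather than a complete parser.

    # NTFS $MFT scavenging sketch: find FILE records by signature and index
    # record number, parent directory and name so orphans can be relinked.
    import struct

    RECORD = 1024

    def parse_file_record(rec):
        """Return (record_no, parent_ref, name) for one raw FILE record, or None."""
        if rec[:4] != b"FILE":
            return None
        record_no = struct.unpack_from("<I", rec, 0x2C)[0]
        attr_off = struct.unpack_from("<H", rec, 0x14)[0]
        while attr_off + 8 <= len(rec):
            attr_type, attr_len = struct.unpack_from("<II", rec, attr_off)
            if attr_type in (0xFFFFFFFF, 0):
                break
            if attr_type == 0x30:                      # $FILE_NAME (resident)
                content = attr_off + struct.unpack_from("<H", rec, attr_off + 0x14)[0]
                parent_ref = struct.unpack_from("<Q", rec, content)[0] & 0xFFFFFFFFFFFF
                name_len = rec[content + 0x40]
                name = rec[content + 0x42:content + 0x42 + name_len * 2].decode(
                    "utf-16-le", errors="replace")
                return record_no, parent_ref, name
            if attr_len == 0:
                break
            attr_off += attr_len
        return None

    def scan_image(path):
        """Walk a volume image in 512-byte steps and index every FILE record found."""
        index = {}
        with open(path, "rb") as img:
            pos = 0
            while True:
                img.seek(pos)
                rec = img.read(RECORD)
                if len(rec) < RECORD:
                    break
                try:
                    hit = parse_file_record(rec)
                except (struct.error, IndexError):     # corrupt record: skip it
                    hit = None
                if hit:
                    index[hit[0]] = (hit[1], hit[2])   # record -> (parent, name)
                pos += 512
        return index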


Why Plymouth Data Recovery

  • 25 years of RAID-5/6/10 recoveries across NAS, servers and DAS

  • Forensic-first workflow (clone originals, virtual reassembly, RO exports)

  • Advanced tooling & donor inventory (controllers, HBAs, heads, PCBs)

  • Free diagnostics with clear recovery options before work begins


Talk to a RAID engineer

Contact our Plymouth RAID engineers today for your free diagnostic. We’ll stabilise every member, reconstruct the array virtually, and recover your data with forensic-grade care.

Contact Us

Tell us about your issue and we'll get back to you.