Plymouth Data Recovery — No.1 RAID-5 / RAID-6 / RAID-10 Recovery Specialists (25+ years)
From home users to global enterprises and government teams, Plymouth Data Recovery has decades of hands-on experience recovering RAID-5, RAID-6 and RAID-10 across software RAID, hardware RAID, desktop and rackmount NAS, and rack servers. We operate forensically: stabilise each member → acquire read-only clones → reconstruct the array virtually → repair filesystems/LUNs on the clone only. Free diagnostics with clear options before any paid work begins.
Platforms & vendors we support
Controllers/HBAs: Dell PERC, HPE Smart Array, Broadcom/LSI MegaRAID, Adaptec/Microchip, Areca, HighPoint, Promise, Intel RST/e, mdadm/LVM, Windows Dynamic Disks/Storage Spaces, and ZFS/Btrfs native RAID sets.
Filesystems/LUNs: NTFS, ReFS, exFAT, APFS, HFS+, ext2/3/4, XFS, Btrfs, ZFS, VMFS (VMware), iSCSI/NFS LUNs, CSVFS.
Media: 3.5″/2.5″ SATA/SAS HDD, SATA SSD, NVMe (M.2/U.2/U.3/AIC), 512e/4Kn.
Top 15 NAS / external RAID brands in the UK & popular models
(Representative; we support all vendors.)
- Synology — DS923+, DS1522+, DS224+, RS1221+, RS2421+
- QNAP — TS-464/TS-453D, TVS-h674, TS-873A, TS-451D2
- Netgear ReadyNAS — RN424, RN524X, 2304, 528X
- Western Digital (WD) — My Cloud EX2 Ultra, PR4100, DL2100
- Buffalo — TeraStation TS3410DN/5410DN, LinkStation LS220D
- Asustor — AS6704T (Lockerstor 4), AS5304T (Nimbustor 4)
- TerraMaster — F4-423, F5-422, T9-423
- LaCie (Seagate) — 2big/6big/12big (DAS/NAS use), d2 Professional
- TrueNAS / iXsystems — Mini X/X+, R-Series
- Drobo (legacy) — 5N/5N2 (BeyondRAID)
- Lenovo/Iomega (legacy) — PX4-300D/PX6-300D
- Zyxel — NAS326, NAS542
- Promise — Pegasus R-series (DAS), VTrak (rack)
- Seagate — Business Storage, BlackArmor (legacy)
- OWC — ThunderBay 4/8 (SoftRAID 0/5/10 on host)
Top 15 rack-server RAID platforms (5/6/10) & common models
- Dell EMC PowerEdge — R650/R750/R760, R740xd
- HPE ProLiant — DL360/DL380 Gen10/Gen11
- Lenovo ThinkSystem — SR630/SR650
- Supermicro — 1029/2029/6049; SuperStorage 6029/6049
- Cisco UCS — C220/C240 M5/M6
- Fujitsu PRIMERGY — RX2530/RX2540
- QCT (Quanta) — D52B-1U/D52BQ-2U
- Inspur — NF5280 M6/M7
- Huawei — FusionServer Pro 2288/5288
- ASUS Server — RS520/RS700 series
- Gigabyte Server — R272-Z/R282-Z
- Tyan — Thunder/Transport 1U/2U
- Areca — ARC-1883/1886 controller builds
- Adaptec by Microchip — SmartRAID 31xx/32xx
- iXsystems/TrueNAS R-Series — R10/R20 JBOD behind HBAs
Our professional RAID workflow (safeguards first)
- Stabilise & image every member using hardware imagers (HDD: per-head zoning, tuned timeouts/ECC; SSD/NVMe: read-retry/voltage stepping, thermal control). Originals remain read-only.
- Solve geometry offline: member order, start offsets, chunk/stripe size, parity rotation (left/right, symmetric/asymmetric), RAID-6 P/Q (Reed–Solomon GF(2⁸)). See the stripe-map sketch below.
- Virtual array reconstruction from clones; stripe-signature/entropy scoring, majority vote per stripe; RS decoding for RAID-6.
- Filesystem/LUN repair on the virtual array (NTFS/ReFS/APFS/HFS+/ext/XFS/Btrfs/ZFS/VMFS, CSVFS, LVM/Storage Spaces).
- Verification — MD5/SHA-256 per file, representative sample opens, engineering report.
Limits: Truly overwritten blocks cannot be restored. Encrypted volumes (BitLocker/FileVault/LUKS/SED) require valid keys. We maximise outcomes with journals, snapshots, parity math and structure-aware carving.
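To show what the geometry step produces, here is a minimal sketch of a virtual-array reader for the common left-symmetric RAID-5 rotation. The member order, chunk size and rotation are assumptions that the solver described above would first recover; `locate` and `read_chunk` are illustrative names, not tooling we expose.

```python
# Minimal sketch: read one logical chunk from a virtual RAID-5 built
# over read-only clone images. Layout parameters are the OUTPUT of the
# geometry-solving step; the values below are illustrative assumptions.

CHUNK = 64 * 1024   # assumed chunk (stripe-unit) size in bytes
N = 4               # number of members, in the solved order

def locate(logical_chunk: int, n: int = N):
    """Map a logical chunk index to (member, stripe_row) for the common
    left-symmetric rotation: parity starts on the last member and walks
    left; data starts just after the parity column and wraps."""
    row = logical_chunk // (n - 1)            # stripe row number
    parity_col = (n - 1) - (row % n)          # parity column on this row
    col = (parity_col + 1 + logical_chunk % (n - 1)) % n
    return col, row

def read_chunk(images, logical_chunk: int, start_offset: int = 0) -> bytes:
    """Read one logical chunk from clone image files opened in 'rb' mode."""
    member, row = locate(logical_chunk)
    images[member].seek(start_offset + row * CHUNK)
    return images[member].read(CHUNK)
```

Once every logical chunk resolves to a (member, offset) pair over the clones, filesystem repair can proceed without a single write to an original disk.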
Top 100 RAID-5 / RAID-6 / RAID-10 errors we recover — with technical recovery process
Format: Issue → Why it happens → Lab method (always on cloned members & a virtual array).
Disk/media faults (HDD/SSD/NVMe) — 1–20
- Single-disk failure (RAID-5) → Loss of one parity/data member → Image all; assemble degraded; repair FS.
- Dual-disk failure (RAID-6) → Within RS capability → Image both; RS decode missing stripes (P/Q).
- Dual-disk failure (RAID-5) → Beyond parity → Salvage by targeted re-reads; carve critical files; partial recovery.
- Two disks in same mirror (RAID-10) → Mirror-leg loss → Composite image from any readable regions; re-stripe with surviving legs.
- Head crash on one member → Physical surface damage → Donor HSA, per-head imaging; parity/mirror fill.
- Heads stuck (stiction) → Park failure → Free/swap heads; cold imaging; parity/mirror fill.
- Motor seizure → Bearings → Motor/chassis transplant; servo align; image.
- Preamp failure → Front-end dead → HSA swap; conservative profile.
- Media rings/scratches → Local loss → Heatmap by head/zone; skip-range multipass; parity reconstruct.
- Translator corruption (0-LBA) → SA module damage → Vendor fix; include recovered image in set.
- G-list “slow issue” → Firmware throttling → Disable BG ops; head-zoned imaging.
- SMR stalls → CMR/SMR mix; timeouts → Zone-aware imaging; parity aids holes.
- Helium leak instability → Thermal window → Short passes; aggregate best reads.
- NVMe controller resets → PCIe link flaps → Clamp link (Gen3 x1/x2), cool; burst imaging.
- NVMe 0-capacity → Namespace loss → Admin export; else chip-off (SED caveats).
- SSD controller dead → MCU/PMIC → Loader export or chip-off + FTL; integrate image.
- SSD NAND retention loss → Threshold drift → Cold imaging, voltage stepping, read-retry matrices.
- SSD RO-lock → FW safety → Image within window; logical repair later.
- Sector size mismatch (512e/4Kn) → Geometry skew → Normalise logical size in model.
- Bridge/backplane CRC storms → Cables/HBA → Replace; QD=1 small-block imaging.
Controller/HBA/cache/metadata — 21–35
- Controller failure → No import → Clone members; software model; reconstruct.
- BBU/cache loss (write-back) → Torn stripes → Journal-first FS repair; stripe consensus.
- Foreign config mismatch → NVRAM vs on-disk → Prefer on-disk metadata; ignore stale NVRAM.
- Firmware bug alters LBA bands → Translator change → Correct mapping layer before assemble.
- Hot-spare added wrongly → Poisoned rebuild → Use pre-rebuild clones; exclude contaminated writes.
- Write-intent bitmap corruption → False “in-sync” → Full block compare; virtual resync (see the parity-consistency sketch after this list).
- Patrol read rewrites sectors → Divergence → Choose epoch with journal coherence.
- Stripe cache policy change → Reordering → Block-level voting on parity consistency.
- Controller imports with different chunk → Layout shift → Detect chunk via FS patterns; rebuild.
- Meta cached, not flushed → Inconsistent superblocks → Epoch voting; pick coherent set.
- BBU removed, forced write-through → Performance drop/torn writes → Journal-first; accept some torn files.
- Foreign import to dissimilar controller → Parity rotation misread → Determine rotation (L/R, Sym/Asym) via parity analysis.
- Controller converts JBOD silently → Flags changed → Ignore controller; content-driven assembly.
- Parity scrub on failing disk → Accelerated decay → Stop scrub; image weakest first.
- OCE/ORLM reshape metadata drift → Mixed epochs → Prefer pre-reshape epoch; salvage tail separately.
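Several of the methods above (stripe consensus, full block compare, epoch voting) reduce to one test: on a healthy RAID-5 stripe, the XOR of all members is zero. A hedged sketch of that scoring, assuming equal-size clone images and an already-known chunk size; the names are illustrative:

```python
import functools
import operator

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    return bytes(functools.reduce(operator.xor, col) for col in zip(*blocks))

def consistency_score(images, chunk=64 * 1024, sample_stripes=2048):
    """Fraction of sampled stripe rows whose members XOR to zero.
    'images' are the clone files of one candidate epoch/layout."""
    zero = b"\x00" * chunk
    consistent = 0
    for row in range(sample_stripes):
        blocks = []
        for img in images:
            img.seek(row * chunk)
            blocks.append(img.read(chunk))
        if xor_blocks(blocks) == zero:
            consistent += 1
    return consistent / sample_stripes
```

Comparing this score across candidate epochs or layouts picks out the coherent set: stale or torn stripes break parity and drag the score down.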
Geometry/administration mistakes — 36–50
- Unknown member order → No labels → Stripe-signature solver; entropy scoring; FS coherence test.
- Unknown chunk/stripe size → Vendor default unknown → Brute plausible sizes using FS signatures (NTFS MFT spacing, VMFS headers); see the scoring sketch after this list.
- Unknown start offsets → Hidden HPA/padding → Locate via GPT/FS headers; adjust per member.
- HPA/DCO on some members → Capacity differs → Remove on clones; align.
- Mix of 4Kn & 512e in set → Read anomalies → Logical normalisation; reassemble.
- Accidental “recreate” → Metadata overwritten → Rebuild from content; carve prior GPT/FS.
- Wrong disk replaced → Good member removed → Re-include correct member from clone.
- Slots reordered → Human error → Map by WWN/serial; order solver validates.
- Expand failed mid-way → Array half-reshaped → Build pre-expand model; salvage extended area separately.
- Convert 5→6 or 10 misfired → New headers written → Ignore new config; raw images → old model.
- Degraded set used for weeks → Divergence → Prefer earliest coherent epoch; journal-guided selection.
- Quick format on top of array → Boot/bitmap overwritten → Rebuild GPT/boot; deep carve.
- Full format on top of array → Broad overwrite → Signature carving; partial outcomes.
- Disk signature collision → OS confusion → New signature on clone; RO mount.
- Dynamic Disk / Storage Spaces map loss → LDM/slab DB missing → Rebuild DB/slab map from backups/on-disk copies.
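Entry 37's brute-force of chunk sizes can be made concrete with NTFS: MFT records begin with the ASCII signature FILE at a fixed 1 KiB spacing, so the candidate geometry that keeps those signatures on 1 KiB boundaries scores highest. A minimal sketch, assuming a `read_chunk(i)` logical reader per candidate layout (as in the earlier RAID-5 sketch); the scoring heuristic is deliberately simplified.

```python
MFT_RECORD = 1024   # NTFS MFT records sit 1 KiB apart, signature b"FILE"

def score_layout(read_chunk, n_chunks=4096):
    """Score one candidate geometry (order, chunk size, rotation) by
    counting MFT signatures that land exactly on 1 KiB boundaries when
    the array is assembled that way. read_chunk(i) -> bytes is the
    candidate's logical reader (hypothetical, as in the RAID-5 sketch)."""
    hits = 0
    for i in range(n_chunks):
        data = read_chunk(i)
        for off in range(0, len(data), MFT_RECORD):
            if data[off:off + 4] == b"FILE":
                hits += 1
    return hits

# Keep the candidate whose assembly aligns the most MFT records, e.g.:
# best = max(candidates, key=lambda c: score_layout(c.read_chunk))
```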
RAID-5 specifics — 51–63
- Parity rotation unknown (L/R, Sym/Asym) → Controller-dependent → Detect the parity position per row via XOR/entropy checks; verify with FS (see the sketch after this list).
- Write-hole after power loss → Partial stripe writes → Journal-first; per-stripe majority vote.
- URE during rebuild → Read error mid-rebuild → Image first; reconstruct virtually from best reads; avoid in-place rebuild.
- Bad sector at parity location → Can’t verify stripe → Majority from remaining data stripes; cautious parity recompute.
- Controller filled parity with wrong seed → FW bug → Recompute parity in software; replace bad ranges in virtual model.
- Data/parity swap after firmware → Flag flips → Detect by entropy (parity looks random); correct mapping.
- Stale parity after cache loss → Old parity lines → Use freshest data stripes per timestamp; recompute parity for model.
- Silent sector remap drift on one disk → Different LBA mapping → Composite “best-of” LBA image before assembly.
- Hybrid SSD cache stripes missing → Fast tier loss → Merge hot-tier images first; then assemble RAID-5.
- Multi-path SAS presenting duplicate members → Ghost disks → Deduplicate by WWN/serial before build.
- RAID-5E/5EE with integrated hot-spare space → Different rotation → Use controller docs/patterns; model accordingly.
- RAID-50 (striped 5s) misunderstanding → Nested layout → Solve each RAID-5 group first; then higher-level stripe.
- Parallels/VMware datastore atop RAID-5 → Torn VMDKs → Restore from coherent epoch; CID chain repair.
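Entry 51's detection works because parity is the XOR of structured data and therefore tends to look high-entropy, while real file data usually does not. A hedged sketch that estimates the parity column per stripe row; the per-row sequence then exposes left vs right rotation, and the result must still be verified against filesystem structures, as the entry says. Names and sampling depth are illustrative.

```python
import math
from collections import Counter

def entropy(block: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 means 'looks random')."""
    total = len(block)
    return -sum(c / total * math.log2(c / total)
                for c in Counter(block).values())

def parity_pattern(images, n, chunk=64 * 1024, rows=64):
    """Guess the parity column on each stripe row: the XOR of structured
    data tends to look high-entropy, so the most random-looking column
    is the parity candidate. The per-row sequence then shows the
    rotation direction; confirm against filesystem structures."""
    pattern = []
    for row in range(rows):
        blocks = []
        for img in images:
            img.seek(row * chunk)
            blocks.append(img.read(chunk))
        pattern.append(max(range(n), key=lambda i: entropy(blocks[i])))
    return pattern
```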
RAID-6 specifics — 64–78
- Dual-disk failure within RS capability → P/Q can repair → RS decode per stripe in GF(2⁸); verify with FS (see the P/Q sketch after this list).
- Third disk failing (intermittent) → Soon beyond RS → Prioritised imaging on weakest; RS decode remaining gaps.
- Unknown parity order (P/Q placement) → Rotation pattern varies → Detect by solving trial stripes; choose lowest syndrome error.
- Wrong Galois field polynomial → Controller variant → Try common polynomials; validate against known-plaintext ranges.
- Partial RS tables (firmware bug) → Incorrect parity written → Recompute P/Q in software; prefer data stripes.
- RAID-6 with 4Kn/512e mix → Misaligned math → Normalise sector size; redo RS operations.
- RAID-60 (striped 6s) mis-identified → Nested → Solve each 6-set; then stripe.
- UREs on more than two disks → Beyond parity → Composite best-of; carve critical files; partials likely.
- SSD wear pattern desync across set → Many ECC-corrected reads → Aggregate multi-pass best pages, then RS.
- Patrol scrub wrote bad P/Q → Poisoned parity → Choose epoch pre-scrub; ignore poisoned stripes.
- Cache-tier RS over NVMe fails → Link flaps cause holes → Burst imaging; reconstruct missing with RS.
- Controller swapped P and Q roles → Post-update bug → Detect via solvability; swap in model.
- Q computed over wrong data order → Layout bug → Reorder stripe columns until solvable; fix model.
- Asymmetric spare injection mid-array → Column drift → Recalculate column indexes before RS.
- RAID-6 failure during expand → Mixed-width stripes → Build separate pre/post-expand models; stitch.
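For reference, the P/Q maths behind these entries is standard Reed–Solomon over GF(2⁸): P is the plain XOR of the data columns, and Q sums each column multiplied by a distinct power of a generator g. A minimal sketch assuming g = 2 with the 0x11D reducing polynomial (the Linux md convention); controllers can differ, which is exactly what entry 67's wrong-polynomial failure looks like.

```python
# GF(2^8) with generator 2, reducing polynomial 0x11D (x^8+x^4+x^3+x^2+1).
# This is the Linux md convention; an assumption, as controllers vary.
EXP, LOG = [0] * 512, [0] * 256
_x = 1
for _i in range(255):
    EXP[_i] = _x
    LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:
        _x ^= 0x11D
for _i in range(255, 512):
    EXP[_i] = EXP[_i - 255]

def gf_mul(a: int, b: int) -> int:
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def pq_syndromes(columns):
    """P = XOR of the data columns; Q = sum of g^i * column_i, byte-wise."""
    width = len(columns[0])
    p, q = bytearray(width), bytearray(width)
    for i, col in enumerate(columns):
        coef = EXP[i]                         # g^i
        for j, byte in enumerate(col):
            p[j] ^= byte
            q[j] ^= gf_mul(coef, byte)
    return bytes(p), bytes(q)

def recover_one_with_q(columns, missing: int, q: bytes) -> bytes:
    """Recover one lost data column from Q alone (e.g. when the P member
    is also gone): subtract the known terms, then divide by g^missing."""
    inv = EXP[255 - missing]                  # inverse of g^missing
    out = bytearray(len(q))
    for j in range(len(q)):
        acc = q[j]
        for i, col in enumerate(columns):
            if i != missing:
                acc ^= gf_mul(EXP[i], col[j])
        out[j] = gf_mul(acc, inv)
    return bytes(out)
```

With both P and Q available, any two lost columns per stripe can be solved the same way, which is the "within RS capability" condition in entry 64.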
RAID-10 specifics — 79–88
- Mirror-leg failure in multiple stripes → Common in dense sets → Build composite per LBA across legs; re-stripe (see the composite sketch after this list).
- Offset mirrors (controller quirk) → Non-zero offset → Detect & correct offset before stripe assembly.
- Mis-paired mirrors after hot-swap → Wrong leg pairing → Pair by write history & header signatures; then stripe.
- Write-hole across two legs → Partial writes on both → Journal-first; select the sane copy per block.
- RAID-10 over NVMe with thermal throttling → Divergent recency → Prefer blocks from the device without resets; fill from the other.
- 4-disk 10 with one leg cloned wrongly → Identical serials confuse mapping → Use WWN and unique ID areas to map.
- Nested 10→50 migration failed → Partial conversion → Rebuild original 10 from the pre-migration epoch if possible.
- RAID-0 over mirrored LUNs (1+0 at SAN) → Two tiers → Solve lower mirrors first, then the host stripe.
- Controller reports 1E (striped mirrors) → Rotation differs → Identify interleave pattern; model accordingly.
- Single-leg encryption only → Misconfigured → Decrypt the encrypted leg (keys needed) or prefer the clear leg.
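Entry 79's composite is conceptually simple: mirrors give two independent reads of every block, so the composite takes whichever leg imaged cleanly and records the blocks where both failed. A hedged sketch; the `read_ok` callbacks stand in for a hardware imager's per-block defect map and are an assumption, not a real API.

```python
def composite_mirror(leg_a, leg_b, read_ok_a, read_ok_b,
                     n_blocks: int, block: int = 4096):
    """Best-of image from two mirror legs. read_ok_x(i) -> bool is a
    stand-in for the hardware imager's per-block defect map (an
    assumption, not a real API). Returns (data, unreadable_blocks)."""
    out, holes = bytearray(), []
    for i in range(n_blocks):
        if read_ok_a(i):
            leg_a.seek(i * block)
            out += leg_a.read(block)
        elif read_ok_b(i):
            leg_b.seek(i * block)
            out += leg_b.read(block)
        else:
            out += b"\x00" * block   # hole: unreadable on both legs
            holes.append(i)
    return bytes(out), holes
```

The holes list then feeds the filesystem-repair stage so partially recovered files can be reported honestly.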
Filesystem/LUN/virtualisation on top — 89–100
- VMFS datastore header/superblock lost → ESXi won’t mount → Rebuild VMFS metadata; restore VMDK chain (CID/parentCID).
- VMDK/vSAN object health issues → Missing components → Reconstruct from surviving objects; export VMDKs.
- Hyper-V VHDX differencing chain corrupt → Parent pointer bad → Fix headers; merge AVHDX chain; mount guest FS.
- CSVFS fencing after incident → Owner issues → Mount images RO; extract VHDX; repair guest FS.
- iSCSI LUN file on NAS damaged → File-based LUN → Carve LUN extents (see the carving sketch after this list); mount guest NTFS/ext/APFS.
- Thin-provisioned LUN zeroed holes → Over-commit → Recover from snapshots/older extents; accept holes.
- ReFS integrity mismatches → CoW poisoning → Export checksum-valid objects only.
- APFS checkpoint mismatch on the array → Torn checkpoint after power/cache loss → Choose coherent checkpoint; export RO.
- NTFS $MFT/$LogFile corruption → Interrupted metadata writes → Replay log; rebuild $MFT; relink orphans.
- XFS journal corruption → Unclean shutdown mid-commit → Manual log replay; directory rebuild.
- Btrfs chunk map errors → Metadata tree damage → Superblock pair selection; tree-search roots; export subvols/snapshots.
- BitLocker/FileVault/LUKS atop array → Encrypted volume → Image first; decrypt clone with keys; export RO.
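Many of these entries end in carving. When metadata is gone, a carver scans the reassembled image for known record signatures; NTFS MFT records (signature FILE) are the classic example. A minimal sketch; a real carver parses the full record, applies the update-sequence fix-ups and validates field lengths before trusting a hit.

```python
def carve_mft_records(image, chunk=16 * 1024 * 1024):
    """Scan a raw virtual-array image for NTFS MFT record candidates,
    which begin with the ASCII signature b'FILE'. Yields byte offsets."""
    offset, tail = 0, b""
    while True:
        data = image.read(chunk)
        if not data:
            break
        buf = tail + data
        pos = buf.find(b"FILE")
        while pos != -1:
            yield offset - len(tail) + pos
            pos = buf.find(b"FILE", pos + 1)
        tail = buf[-3:]          # a signature may straddle chunk edges
        offset += len(data)
```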
Why Plymouth Data Recovery
- 25+ years of RAID-5/6/10 recoveries across NAS, servers and DAS
- Forensic-first workflow (clone originals, virtual reassembly, RO exports)
- Advanced tooling & donor inventory (controllers, HBAs, heads, PCBs)
- Free diagnostics with clear recovery options before work begins
Talk to a RAID engineer
Contact our Plymouth RAID engineers today for your free diagnostic. We’ll stabilise every member, reconstruct the array virtually, and recover your data with forensic-grade care.