RAID Recovery

RAID Data Recovery

No Fix - No Fee!

Our experts have extensive experience recovering data from RAID servers. With 25 years' experience in the data recovery industry, we can help you securely recover your data.
RAID Recovery

Software Fault From £495

2-4 Days

Mechanical Fault From £895

2-4 Days

Critical Service From £995

1-2 Days

Need help recovering your data?

Call us on 01752 479547 or use the form below to make an enquiry.
Chat with us
Monday-Friday: 9am-6pm

Plymouth Data Recovery — No.1 NAS RAID 0/1/5/10 Recovery Specialists (25+ years)

Plymouth Data Recovery has completed RAID recoveries for home users, SMEs, global enterprises and public-sector teams. We recover software & hardware RAID, NAS, large NAS, rack servers and DAS across all mainstream vendors. Free diagnostics and clear options before any paid work begins.



Platforms we recover

Hardware RAID / HBAs: Dell PERC, HPE Smart Array, LSI/Broadcom/Avago MegaRAID, Adaptec/Microchip, Areca, Intel RST/e, HighPoint, Promise.
Software RAID: Linux mdadm/LVM, Windows Dynamic Disks/Storage Spaces, Apple/macOS, ZFS/Btrfs.
File systems: NTFS, ReFS, exFAT, FAT32, APFS, HFS+, ext2/3/4, XFS, Btrfs, ZFS, VMFS (VMware), iSCSI LUNs, NFS/SMB shares.
Media: 3.5″/2.5″ HDD (SATA/SAS), SATA SSD, NVMe (M.2/U.2/U.3/AIC), hybrid pools, 512e/4Kn.


Top 15 NAS / external RAID brands in the UK & popular models

(Representative brands & models we most commonly see; if yours isn’t listed, we still support it.)

  1. Synology — DS923+, DS1522+, DS224+, RS1221+, RS2421+

  2. QNAP — TS-464/TS-453D, TVS-h674, TS-873A, TS-451D2

  3. Netgear ReadyNAS — RN424, RN524X, 2304, 528X

  4. Western Digital (WD) — My Cloud EX2 Ultra, PR4100, DL2100

  5. Buffalo — TeraStation TS3410DN/5410DN, LinkStation LS220D

  6. Asustor — AS6704T (Lockerstor 4), AS5304T (Nimbustor 4)

  7. TerraMaster — F4-423, F5-422, T9-423

  8. LaCie (Seagate) — 2big/6big/12big (DAS/NAS use), d2 Professional

  9. TrueNAS / iXsystems — Mini X/X+, R-Series

  10. Drobo (legacy) — 5N/5N2 (BeyondRAID)

  11. Lenovo/Iomega (legacy) — PX4-300D/PX6-300D

  12. Zyxel — NAS326, NAS542

  13. Promise — Pegasus R-series (DAS), VTrak (rack NAS)

  14. Seagate — Business Storage, BlackArmor (legacy)

  15. OWC — ThunderBay 4/8 (host RAID 5/10 via SoftRAID)


Top 15 RAID rack-server platforms & common models

  1. Dell EMC PowerEdge — R650/R750/R760, R740xd

  2. HPE ProLiant — DL360/DL380 Gen10/Gen11, ML350

  3. Lenovo ThinkSystem — SR630/SR650, ST550

  4. Supermicro — 1029/2029/6049; SuperStorage 6029/6049

  5. Cisco UCS — C220/C240 M5/M6

  6. Fujitsu PRIMERGY — RX2530/RX2540

  7. QCT (Quanta) — D52B-1U/D52BQ-2U

  8. Inspur — NF5280 M6/M7

  9. Huawei — FusionServer Pro 2288/5288

  10. ASUS Server — RS520/RS700 series

  11. Gigabyte Server — R272-Z/R282-Z

  12. Tyan — Thunder/Transport 1U/2U

  13. Areca — ARC-1883/1886 controller builds

  14. Adaptec by Microchip — SmartRAID 31xx/32xx

  15. NetApp/Promise/LSI JBODs behind HBAs (host RAID 5/6/10)


Professional RAID recovery workflow (what we actually do)

  1. Stabilise & image every member — Hardware imagers with per-head zoning, configurable timeouts/ECC, and thermal control (HDD); read-retry/voltage stepping (SSD/NVMe). Originals are never modified.

  2. Virtual array reconstruction — Infer/confirm member order, start offsets, stripe size, parity rotation (RAID5), dual parity P/Q (RAID6 where present), mirror pairs + striping (RAID10).

  3. Parity & geometry solving — Stripe-signature heuristics, entropy scoring, and majority-vote on contested stripes; Reed–Solomon math for dual-parity sets.

  4. Filesystem/LUN repair on the virtual array — Repair NTFS/ReFS/APFS/HFS+/ext/XFS/Btrfs/ZFS/VMFS metadata; mount read-only and export.

  5. Verification — Per-file hashing (MD5/SHA-256), open-sample testing, recovery report.

Overwrites/crypto: Truly overwritten blocks cannot be “undeleted”. Encrypted arrays require valid keys (BitLocker/FileVault/LUKS/SED). We maximise results via journals, snapshots, parity and structure-aware carving.
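The virtual reconstruction and parity-solving steps above rest on RAID 5's XOR identity: every stripe position XORs to zero across all members, so any single lost member can be regenerated from the survivors. A minimal sketch with toy block sizes — real arrays add parity rotation, start offsets and stripe maps, which our tooling models separately:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

def rebuild_missing_member(members, missing_index):
    """Rebuild one lost RAID-5 member image by XORing the survivors.

    `members` is a list of cloned member images (byte strings), with
    None in the missing slot. Works because data XOR parity at each
    stripe position sums to zero across all members.
    """
    survivors = [m for i, m in enumerate(members) if i != missing_index]
    return xor_blocks(survivors)

# Toy 3-disk RAID-5 stripe position: two data blocks and their parity.
d0 = bytes([0x11, 0x22, 0x33, 0x44])
d1 = bytes([0xAA, 0xBB, 0xCC, 0xDD])
parity = xor_blocks([d0, d1])

# "Lose" d1 and rebuild it from d0 and the parity block.
rebuilt = rebuild_missing_member([d0, None, parity], missing_index=1)
```

The same identity lets a failed parity drive be regenerated from the data members; dual-parity (RAID 6) sets need Reed–Solomon arithmetic instead of plain XOR.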


Top 100 RAID errors we recover — with technical process

Format: Issue → Diagnosis → Lab recovery (on cloned members & virtual arrays).

Disk-level & media (1–15)

  1. Multiple member disk failures → SMART/logs, error heatmaps → Prioritise weakest for fast-first imaging; reconstruct stripes from parity/mirrors.

  2. Head crash on one member → SA access/CRC → Donor HSA swap; per-head imaging; fill gaps via parity.

  3. Peppered bad sectors across members → Heatmap overlay → Targeted re-reads; parity reconstruct or carve where parity insufficient.

  4. Translator corruption (one drive 0-LBA) → Vendor fix → Image rest; repair translator; integrate recovered image.

  5. Preamp failure → Bias anomalies → HSA swap; conservative imaging, then parity fill.

  6. Motor seizure → Stiction test → Motor/chassis transplant; image → parity fill.

  7. SMR stall under parity load → Long timeouts → Timeout budgeting; zone imaging; backfill via parity.

  8. Helium leak drive instability → Thermally windowed passes → Aggregate best-of reads; parity supplement.

  9. G-list explosion “slow issue” → Disable BG ops → Head-zoned imaging; parity fill.

  10. Write caching lost on member → Incoherent last writes → Journal-first FS repair; majority vote per stripe.

  11. Sector size mismatch (512e/4Kn) → Model headers → Normalise size in virtual geometry.

  12. NVMe member read-reset loops → Clamp link (Gen3 x2/x1), cool → Short imaging bursts; merge.

  13. SSD NAND wear/retention loss → Read-retry/voltage stepping → Majority vote page merges; parity fill.

  14. SSD controller failure → Vendor loader; else chip-off + FTL → Integrate image into array.

  15. Bridge power fault in NAS → Bypass to direct HBA, stable PSU → Image members, proceed.
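The "fast-first" imaging strategy used for the disk-level cases above can be sketched as a multi-pass loop: sweep every block once, skip on error, then retry only the recorded bad blocks on later passes. `read_block` is a hypothetical stand-in for a hardware imager's read primitive:

```python
def image_passes(read_block, total_blocks, passes=2):
    """Fast-first imaging: pass 1 sweeps everything, skipping blocks
    that error; later passes retry only the recorded bad blocks.
    `read_block(n)` returns bytes or raises IOError."""
    image = {}
    bad = set(range(total_blocks))
    for _ in range(passes):
        retry = set()
        for blk in sorted(bad):
            try:
                image[blk] = read_block(blk)
            except IOError:
                retry.add(blk)
        bad = retry
        if not bad:
            break
    return image, bad

# Simulated flaky device: block 3 fails on the first attempt only.
attempts = {}
def flaky(n):
    attempts[n] = attempts.get(n, 0) + 1
    if n == 3 and attempts[n] == 1:
        raise IOError("read error")
    return bytes([n])

image, bad = image_passes(flaky, 5)
```

Real imagers add per-head zoning, timeout budgets and thermal windows on top of this basic skip-then-retry structure.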

Controller/HBA & cache (16–25)

  16. RAID controller dead → No import → Clone members; emulate layout in software; reconstruct.

  17. BBU/cache loss (write-back) → Dirty stripes → Majority vote at parity; journal-first FS repair.

  18. Foreign config overwrite → NVRAM mismatch → Use on-disk metadata; assemble a consistent epoch set.

  19. Firmware bug changes LBA bands → Translator anomaly → Build corrective mapping before assembly.

  20. Controller resets mid-write → Stripe split-brain → Stripe-level consensus + FS journal reconciliation.

  21. HBA link CRC storms → Cabling/backplane → New HBA/cables; small-queue, small-block imaging.

  22. Controller adds hot-spare wrongly → Poisoned rebuild → Choose pre-event epoch; ignore contaminated writes.

  23. Cache policy mis-set (RA/WA) → FS inconsistencies → Reassemble with the most-consistent members; repair FS.

  24. Metadata cached but not flushed → Inconsistent superblocks → Epoch voting; pick the coherent set.

  25. Controller battery removed → Lost cache → Journal-first FS repair; parity reconcile.

Array management mistakes (26–40)

  26. Wrong disk order inserted → Stripe-signature solver → Brute-force order; choose the highest FS coherence score.

  27. Accidental reinitialise/recreate → Metadata wiped → Recover backup superblocks; infer geometry from content.

  28. Expansion/reshape failed mid-op → Mixed epochs → Rebuild the pre-reshape array; selectively add post-reshape data if consistent.

  29. Rebuild started on wrong member → Overwrote good data → Roll back to pre-rebuild clones; exclude poisoned ranges.

  30. Hot-swap during I/O storm → Stripe skew → Majority vote; journal reconcile.

  31. Foreign import to different controller → Geometry mismatch → Use on-disk superblocks; ignore controller metadata.

  32. Accidental disk removal/reorder → Slot assignments lost → Entropy/order solver; reassemble; verify with FS signatures.

  33. Auto-repair by NAS OS → md/LVM altered → Halt; assemble manually from images; repair FS.

  34. Wrong stripe size configured → Throughput oddities → Detect via FS runlists; rebuild with the detected chunk size.

  35. Sector alignment off → Offset introduced by bridge → Correct start offsets; reassemble.

  36. Mix of 512e and 4Kn drives → Partial reads fail → Normalise logical sector size in the model.

  37. RAID set imported degraded & used → Further divergence → Prefer the earliest consistent snapshot across members.

  38. Spare promoted from a failing disk → Cascading errors → Composite "best-of" image per LBA; reassemble.

  39. Parity scrub on a failing disk → Accelerated decay → Stop scrub; image the weakest first; parity fill later.

  40. Controller Online Capacity Expansion loop → OCE metadata split → Select pre-OCE epoch; salvage post-OCE data cautiously.
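The stripe-signature/order-solver approach for the wrong-disk-order and removal/reorder mistakes above can be illustrated: try candidate member orders, assemble each virtually, and keep the order whose output scores highest on filesystem coherence. A toy sketch — `FILE0` is a made-up marker standing in for real filesystem record signatures, and the layout is plain striping with no parity:

```python
from itertools import permutations

def assemble(order, member_images, chunk):
    """Interleave fixed-size chunks from members in the given slot order
    (RAID-0-style layout, ignoring parity for the sketch)."""
    out = bytearray()
    stripes = min(len(m) for m in member_images) // chunk
    for stripe in range(stripes):
        for slot in order:
            out += member_images[slot][stripe * chunk:(stripe + 1) * chunk]
    return bytes(out)

def coherence_score(data, marker=b"FILE0"):
    """Score a candidate assembly by counting intact record signatures:
    markers that span chunk boundaries only survive in the right order."""
    return data.count(marker)

def solve_order(member_images, chunk):
    """Brute-force slot order; return the permutation with the best score."""
    return max(permutations(range(len(member_images))),
               key=lambda o: coherence_score(assemble(o, member_images, chunk)))

# Toy example: two members striped at 4-byte chunks. Only the correct
# order rejoins the markers split across chunk boundaries.
m0 = b"xxFIyyFI"
m1 = b"LE0yLE0z"
best_order = solve_order([m0, m1], 4)
```

In practice the same scoring idea runs against real metadata (NTFS FILE records, ext inode tables) and also solves stripe size and parity rotation, not just slot order.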

RAID-level specifics (41–50)

  41. RAID-0 one disk lost → No redundancy → Image the survivors; carve; partial recovery only unless a mirror/backup exists.

  42. RAID-1 mirror divergence → Different versions → Choose the most recent consistent member; mount RO; export.

  43. RAID-5 dual failure → Second fault before/during rebuild → Clone all; parity-reconstruct missing stripes; validate via FS.

  44. RAID-10 mirror-leg dual failure → Both disks in one leg dead → Build a composite from readable regions across both; re-stripe with the healthy legs.

  45. Parity rotation unknown → Left/Right, Sym/Asym → Detect via parity signature; confirm by FS scoring.

  46. RAID-5 write hole → Unclean shutdown → Reconcile at journals first; per-stripe majority vote.

  47. RAID-10 offset mirrors → Controller quirk → Detect offset via headers/runs; correct the model; export.

  48. Nested RAID layers → 10 over 0+1 or vice versa → Disassemble layers bottom-up; test each layer's integrity.

  49. Parity drive marked failed but healthy → False positive → Validate by raw reads; include as a data source.

  50. Degraded performance caused timeouts → TLER/ERC mismatch → Image members separately; avoid controller rebuilds.
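For the RAID-10 case where both copies in one mirror leg are damaged in different places, a composite "best-of" image can be built block by block from whichever copy read cleanly. A minimal sketch with toy block sizes and pre-computed bad-block maps (real imaging produces these maps per member):

```python
def merge_mirror_leg(image_a, image_b, bad_a, bad_b, block=512):
    """Build one composite image from two damaged copies of the same
    mirror leg: for each block, take whichever copy imaged cleanly.
    `bad_a`/`bad_b` are sets of block numbers that failed to read."""
    out = bytearray()
    for blk in range(len(image_a) // block):
        lo, hi = blk * block, (blk + 1) * block
        if blk not in bad_a:
            out += image_a[lo:hi]
        elif blk not in bad_b:
            out += image_b[lo:hi]
        else:
            out += b"\x00" * block   # unreadable on both copies: pad and log
    return bytes(out)

# Toy demo at 4-byte blocks: copy A lost block 1, copy B lost block 2.
img_a = b"AAAA????CCCC"
img_b = b"AAAABBBB????"
merged = merge_mirror_leg(img_a, img_b, bad_a={1}, bad_b={2}, block=4)
```

Because the two copies are mirrors of the same data, every block readable on at least one drive is fully recoverable; only regions dead on both need parity or carving elsewhere.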

Filesystem/LUN on top of arrays (51–70)

  51. VMFS datastore header loss → ESXi can't mount → Rebuild VMFS metadata from copies; restore the VMDK chain; export.

  52. NTFS on RAID corrupted → $MFT/$LogFile damage → Replay the log; rebuild $MFT; relink orphans.

  53. ReFS integrity stream mismatches → CoW artefacts → Export checksum-valid objects; ignore poisoned blocks.

  54. APFS on RAID (Mac HBA/Thunderbolt DAS) → Checkpoint walk; rebuild the omap; mount Data volume RO.

  55. XFS journal corruption → Manual log replay; directory rebuild.

  56. ext4 superblock/inodes lost → Use backup superblocks; fsck-style repair on the image.

  57. Btrfs chunk/metadata errors → Choose the best superblock pair; tree-search the root; export snapshots.

  58. ZFS pool missing a vdev → Import with rewind/read-only; scrub; export datasets.

  59. LVM PV/VG/LV map broken → Rebuild from on-disk metadata; restore the LV; repair FS.

  60. Windows Dynamic Disk (spanned/striped) → LDM database loss → Recover from copies; infer from FS runs.

  61. Storage Spaces parity/mirror → Slab map corrupt → Rebuild columns; export the virtual disk RO.

  62. BitLocker on RAID → Recovery key needed → Image first; decrypt the clone; export.

  63. FileVault/LUKS layers → Keys required → Decrypt the clone; export RO.

  64. iSCSI LUN sparse file corrupted → NAS backing store → Carve LUN extents; rebuild the guest FS.

  65. CSVFS/Cluster Shared Volumes issues → Owner/fencing → Mount RO on the image; extract VHDX.

  66. Thin-provisioned LUN overrun → Holes mapped → Handle sparse extents correctly; rebuild the file map.

  67. Deduplication store damage → Hash index repair → Rehydrate from good chunks; verify hashes.

  68. NAS snapshot DB broken → Btrfs/ZFS → Mount earlier snapshot epochs; export.

  69. NFS export corrupt but LUN OK → Recover the LUN file; ignore the share layer; export the guest.

  70. SMB shadow copies hidden → Enumerate previous versions on the image; export.
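For the lost-ext4-superblock case above, backup superblocks sit at well-known block groups when the sparse_super feature is enabled (the ext4 default): groups 0 and 1, plus powers of 3, 5 and 7. A small helper to locate them:

```python
def backup_superblock_groups(group_count):
    """Block groups holding superblock copies under ext4's
    sparse_super policy: groups 0 and 1, plus powers of 3, 5 and 7."""
    groups = {0, 1}
    for base in (3, 5, 7):
        g = base
        while g < group_count:
            groups.add(g)
            g *= base
    return sorted(groups)

# A ~100-group filesystem keeps backups in these groups:
groups = backup_superblock_groups(100)
```

Each group number multiplies out to a block offset (group × blocks_per_group), which is the alternate-superblock location that tools like e2fsck -b expect — always run against the image, never the original members.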

Network/NAS firmware & protocol quirks (71–80)

  71. Synology mdadm event mismatch → Assemble with the highest common event count; ignore outliers.

  72. QNAP migration after reset → Geometry changed → Use pre-reset md/LVM metadata; reassemble.

  73. Drobo BeyondRAID metadata → Proprietary map → Reconstruct from blocks; emulate the logical disk; recover FS.

  74. Netgear ReadyNAS X-RAID → Auto-expand quirks → Choose pre-expand epoch; rebuild.

  75. Zyxel/TerraMaster firmware update failure → md arrays altered → Image and assemble manually; repair FS.

  76. NFS stale handles during incident → Incomplete writes → Use the journal; exclude partial files.

  77. SMB opportunistic locking → In-flight loss → Recover temp/lock files; replay application logs.

  78. AFP legacy shares (Mac) → Catalog sync issues → HFS+ repair; export.

  79. rsync with --inplace on NAS → Original overwritten → Carve from snapshots/previous versions.

  80. Hybrid RAID modes (SHR/SHR-2) → Non-uniform chunking → Use the Synology chunk map; export.

Environment & human factors (81–90)

  81. Overheating NAS → Thermal throttling → Staged imaging; priority passes.

  82. Power surge → Multiple member faults → Electronics repair + imaging; parity fill.

  83. Water ingress → Immediate neutralisation; short imaging window → Prioritise the weakest drives.

  84. Chassis backplane damage → Intermittent link → Direct-attach to an HBA; image.

  85. Chassis moved while live → Vibration head slap → HSA swap; image; parity fill.

  86. Forced filesystem checks on the array → Secondary damage → Roll back to pre-repair images; rebuild logically.

  87. User replaced the wrong disk → Removed a good member → Reconstruct with the correct set via the order solver.

  88. Mixed firmware revisions in pool → Behaviour drift → Image all; choose a consistent epoch.

  89. Non-ECC RAM bit-flips during scrub → Parity poisoned → Majority vote; journal-first recovery.

  90. Expired cache battery forced write-through → Latency errors → Image per member; reconstruct.

Edge/nested scenarios (91–100)

  91. RAID over encrypted volumes → Layered crypto → Decrypt each member first (keys required), then assemble.

  92. Encrypted over RAID (BitLocker on the array) → Assemble first, then decrypt the clone.

  93. vSAN object health issues → RDT/DOM metadata → Export components; reconstruct VMDKs; recover.

  94. Ceph RGW/OSD loss → PG repair; export object data; rebuild FS where possible.

  95. GlusterFS replica/stripe mismatch → Heal info → Prefer healthy bricks; reconstruct file trees.

  96. Hybrid HDD+SSD tiering (FAST Cache) → Hot blocks missing → Merge SSD/HDD images; resolve conflicts on the hot tier first.

  97. 4Kn drives mixed with 512e → Alignment errors → Normalise sector size in the model.

  98. Controller converts JBOD silently → Foreign import changed flags → Use raw images; ignore controller metadata.

  99. Parity verified wrong after firmware update → New parity calculation → Roll back with images; reconstruct.

  100. Controller "secure erase" on a hot-spare → Data gone on that member → Maximise from the other members; parity where possible.


Top 20 virtualisation / virtual-disk failure scenarios & recovery

  1. VMFS header loss / DS won’t mount → Rebuild VMFS metadata; restore VMDK chain (CID/parentCID); export guest files.

  2. VMDK snapshot chain broken → Missing delta links → Recreate descriptor; stitch deltas by parent CID/time; mount virtual disk.

  3. Hyper-V VHDX differencing chain corrupt → Parent pointer bad → Fix headers; merge AVHDX hierarchy; export NTFS from guest.

  4. CSVFS/Cluster Shared Volumes fencing → Ownership issues → Mount images RO; extract VHDX; repair guest FS.

  5. Thin provisioned overfilled → Zeroed holes → Recover pre-overfill data from snapshots/older extents.

  6. VMFS over RAID degraded → Dual fault during snapshot → Assemble pre-fault virtual array; mount older VM snapshots.

  7. iSCSI LUN backing VMFS damaged → Carve LUN from NAS FS; mount VMFS; export VMs.

  8. NFS datastore map loss → Export .lck & descriptor remnants → Rebuild inventory; attach VMs to recovered VMDKs.

  9. vSAN object/component missing → Reconstruct from surviving components; recover namespace; export VMDKs.

  10. Proxmox/ZFS zvol snapshot corruption → Roll ZFS snapshot back on image; zdb to locate blocks; export.

  11. KVM qcow2 overlay lost → Rebase overlays; convert qcow2→raw if needed; mount guest FS.

  12. Citrix/XenServer VHD chain issues → Fix footer/parent; coalesce chain; export NTFS/ext.

  13. SR metadata corruption (Xen) → Recreate SR from LUN; attach VDI; export.

  14. VM encryption (vSphere/BitLocker inside guest) → Need keys; decrypt either at host layer (if vSphere) or inside guest on clone.

  15. Changed block tracking (CBT) bugs → Inconsistent backups → Prefer full images; use earlier snapshots.

  16. Backup appliance dedup chain damage → Rehydrate by hash map; export full images; verify via checksum.

  17. Array reshape during VM storage vMotion → Mixed epochs → Use pre-vMotion copies; rebuild.

  18. Template golden image corrupted → Restore from dedup/previous versions; recompose pools.

  19. Guest FS dirty after host incident → Journal replay on guest (on image); export.

  20. EFI/boot issues in VM → Fix only on copy; mount disk; export data directly.
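Re-stitching a broken VMDK snapshot chain, as in the scenarios above, amounts to following parentCID → CID links across the text descriptor files; the base disk carries parentCID ffffffff. A sketch assuming plain-text descriptors and a single unbranched chain (branched snapshot trees need time stamps and extent checks on top):

```python
def parse_descriptor(text):
    """Pull CID and parentCID out of a VMDK descriptor's key=value lines."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if "=" in line and not line.startswith("#"):
            key, value = line.split("=", 1)
            fields[key.strip()] = value.strip().strip('"')
    return fields

def order_chain(descriptors):
    """Order snapshot descriptors from base disk to newest delta by
    following parentCID -> CID links (base has parentCID ffffffff)."""
    by_parent = {parse_descriptor(d)["parentCID"]: d for d in descriptors}
    chain, parent = [], "ffffffff"
    while parent in by_parent:
        descriptor = by_parent[parent]
        chain.append(descriptor)
        parent = parse_descriptor(descriptor)["CID"]
    return chain

# Toy descriptors (CID values are illustrative, not from a real disk).
base = "CID=aaaa0001\nparentCID=ffffffff"
delta1 = "CID=bbbb0002\nparentCID=aaaa0001"
delta2 = "CID=cccc0003\nparentCID=bbbb0002"
chain = order_chain([delta2, base, delta1])
```

Once the order is known, a missing descriptor can be regenerated to point at the correct parent, and the deltas merged newest-to-oldest on the cloned datastore.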


Why choose Plymouth Data Recovery

  • 25 years of RAID/NAS/server recoveries across vendors, controllers and filesystems

  • Forensic-first workflow (image originals, virtual reconstruction, RO exports)

  • Advanced tooling & donor inventory (controllers, HBAs, heads, PCBs)

  • Free diagnostics with clear recovery options before work begins


Talk to a RAID engineer

Plymouth Data Recovery — contact our RAID engineers today for a free diagnostic. We’ll stabilise the members, reconstruct the array virtually, and recover your data with forensic-grade care.

Contact Us

Tell us about your issue and we'll get back to you.