Plymouth Data Recovery — No.1 NAS RAID 0/1/5/10 Recovery Specialists (25+ years)
Plymouth Data Recovery has completed RAID recoveries for home users, SMEs, global enterprises and public-sector teams. We recover software & hardware RAID, desktop and rackmount NAS, rack servers and DAS across all mainstream vendors. Free diagnostics and clear options before any paid work begins.
Platforms we recover
Hardware RAID / HBAs: Dell PERC, HPE Smart Array, LSI/Broadcom/Avago MegaRAID, Adaptec/Microchip, Areca, Intel RST/e, HighPoint, Promise.
Software RAID: Linux mdadm/LVM, Windows Dynamic Disks/Storage Spaces, Apple/macOS, ZFS/Btrfs.
File systems: NTFS, ReFS, exFAT, FAT32, APFS, HFS+, ext2/3/4, XFS, Btrfs, ZFS, VMFS (VMware), plus iSCSI LUNs and NFS/SMB shares.
Media: 3.5″/2.5″ HDD (SATA/SAS), SATA SSD, NVMe (M.2/U.2/U.3/AIC), hybrid pools, 512e/4Kn.
Top 15 NAS / external RAID brands in the UK & popular models
(Representative brands & models we most commonly see; if yours isn’t listed, we still support it.)
- Synology — DS923+, DS1522+, DS224+, RS1221+, RS2421+
- QNAP — TS-464/TS-453D, TVS-h674, TS-873A, TS-451D2
- Netgear ReadyNAS — RN424, RN524X, 2304, 528X
- Western Digital (WD) — My Cloud EX2 Ultra, PR4100, DL2100
- Buffalo — TeraStation TS3410DN/5410DN, LinkStation LS220D
- Asustor — AS6704T (Lockerstor 4), AS5304T (Nimbustor 4)
- TerraMaster — F4-423, F5-422, T9-423
- LaCie (Seagate) — 2big/6big/12big (DAS/NAS use), d2 Professional
- TrueNAS / iXsystems — Mini X/X+, R-Series
- Drobo (legacy) — 5N/5N2 (BeyondRAID)
- Lenovo/Iomega (legacy) — PX4-300D/PX6-300D
- Zyxel — NAS326, NAS542
- Promise — Pegasus R-series (DAS), VTrak (rack NAS)
- Seagate — Business Storage, BlackArmor (legacy)
- OWC — ThunderBay 4/8 (host RAID 5/10 via SoftRAID)
Top 15 RAID rack-server platforms & common models
- Dell EMC PowerEdge — R650/R750/R760, R740xd
- HPE ProLiant — DL360/DL380 Gen10/Gen11, ML350
- Lenovo ThinkSystem — SR630/SR650, ST550
- Supermicro — 1029/2029/6049; SuperStorage 6029/6049
- Cisco UCS — C220/C240 M5/M6
- Fujitsu PRIMERGY — RX2530/RX2540
- QCT (Quanta) — D52B-1U/D52BQ-2U
- Inspur — NF5280 M6/M7
- Huawei — FusionServer Pro 2288/5288
- ASUS Server — RS520/RS700 series
- Gigabyte Server — R272-Z/R282-Z
- Tyan — Thunder/Transport 1U/2U
- Areca — ARC-1883/1886 controller builds
- Adaptec by Microchip — SmartRAID 31xx/32xx
- NetApp/Promise/LSI JBODs behind HBAs (host RAID 5/6/10)
Professional RAID recovery workflow (what we actually do)
- Stabilise & image every member — Hardware imagers with per-head zoning, configurable timeouts/ECC and thermal control (HDD); read-retry/voltage stepping (SSD/NVMe). Originals are never modified.
- Virtual array reconstruction — Infer and confirm member order, start offsets, stripe size, parity rotation (RAID 5), dual parity P/Q (RAID 6 where present), and mirror pairs plus striping (RAID 10).
- Parity & geometry solving — Stripe-signature heuristics, entropy scoring and majority voting on contested stripes; Reed–Solomon maths for dual-parity sets (see the parity sketch below).
- Filesystem/LUN repair on the virtual array — Repair NTFS/ReFS/APFS/HFS+/ext/XFS/Btrfs/ZFS/VMFS metadata; mount read-only and export.
- Verification — Per-file hashing (MD5/SHA-256), open-sample testing and a recovery report.
Overwrites/crypto: Truly overwritten blocks cannot be “undeleted”. Encrypted arrays require valid keys (BitLocker/FileVault/LUKS/SED). We maximise results via journals, snapshots, parity and structure-aware carving.
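As a concrete illustration of the parity step, here is a minimal Python sketch under assumptions that do not come from any specific job: every surviving member has already been imaged, the images are equal-sized, exactly one member of a RAID 5 set is missing, and file names such as member0.img are placeholders. Because each RAID 5 stripe row XORs to zero, the missing block (data or parity) is the XOR of the blocks that are present, whatever the rotation.

```python
"""Rebuild one missing RAID-5 member from cloned images (illustrative sketch).

Assumptions (placeholders, not from a specific case): every surviving
member is imaged in full, all images are the same size, and exactly one
member is absent. Each RAID-5 stripe row XORs to zero, so the missing
block -- data or parity -- is the XOR of the blocks that survive.
"""

CHUNK = 1024 * 1024  # work in 1 MiB slices to keep memory use flat


def rebuild_missing_member(present_images, output_path):
    """XOR the surviving member images together, slice by slice."""
    handles = [open(p, "rb") for p in present_images]
    try:
        with open(output_path, "wb") as out:
            while True:
                slices = [h.read(CHUNK) for h in handles]
                if not slices[0]:
                    break  # reached the end of the images
                acc = bytearray(slices[0])
                for s in slices[1:]:
                    for i, b in enumerate(s):
                        acc[i] ^= b  # byte-wise XOR across members
                out.write(acc)
    finally:
        for h in handles:
            h.close()


if __name__ == "__main__":
    # member2.img is the dead drive in this hypothetical four-disk set
    rebuild_missing_member(
        ["member0.img", "member1.img", "member3.img"],
        "member2.rebuilt.img",
    )
```

In practice the XOR runs against the virtual array model rather than producing a whole member file, and dual-parity (RAID 6) rows need Reed–Solomon arithmetic instead of plain XOR.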
Top 100 RAID errors we recover — with technical process
Format: Issue → Diagnosis → Lab recovery (on cloned members & virtual arrays).
Disk-level & media (1–15)
- Multiple member disk failures → SMART/logs, error heatmaps → Prioritise the weakest for fast-first imaging; reconstruct stripes from parity/mirrors.
- Head crash on one member → SA access/CRC checks → Donor HSA swap; per-head imaging; fill gaps via parity.
- Peppered bad sectors across members → Heatmap overlay → Targeted re-reads; parity reconstruct, or carve where parity is insufficient.
- Translator corruption (one drive reports 0 LBA) → Vendor-specific fix → Image the rest; repair the translator; integrate the recovered image.
- Preamp failure → Bias anomalies → HSA swap; conservative imaging, then parity fill.
- Motor seizure → Stiction test → Motor/chassis transplant; image → parity fill.
- SMR stall under parity load → Long timeouts → Timeout budgeting; zone imaging; backfill via parity.
- Helium drive instability after a leak → Thermally windowed passes → Aggregate best-of reads (see the merge sketch after this list); parity supplement.
- G-list explosion ("slow" drive symptom) → Disable background ops → Head-zoned imaging; parity fill.
- Write caching lost on a member → Incoherent last writes → Journal-first FS repair; majority vote per stripe.
- Sector size mismatch (512e/4Kn) → Model headers → Normalise sector size in the virtual geometry.
- NVMe member in read-reset loops → Clamp link (Gen3 x2/x1), cool → Short imaging bursts; merge.
- SSD NAND wear/retention loss → Read-retry/voltage stepping → Majority-vote page merges; parity fill.
- SSD controller failure → Vendor loader; else chip-off + FTL rebuild → Integrate the image into the array.
- Bridge power fault in NAS → Bypass to direct HBA, stable PSU → Image members, proceed.
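The "aggregate best-of reads" and "merge" steps above amount to combining several imaging passes per LBA. The Python sketch below shows the idea only; the good-map format (text ranges of cleanly read sectors) and file names are invented for illustration, since every hardware imager exports its own map format.

```python
"""Merge several imaging passes into one best-of composite (sketch).

Assumes each pass produced a raw image plus a "good map": a text file of
"start_lba,length" ranges (512-byte sectors) that the imager read
cleanly. Real imagers use their own map formats; this layout is invented
purely for illustration.
"""

SECTOR = 512


def load_good_map(path):
    """Return the set of LBAs covered by a pass's good ranges."""
    good = set()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            start, length = (int(x) for x in line.split(","))
            good.update(range(start, start + length))
    return good


def merge_passes(passes, total_sectors, output_path):
    """passes: list of (image_path, map_path), best pass listed first."""
    maps = [(img, load_good_map(mp)) for img, mp in passes]
    handles = {img: open(img, "rb") for img, _ in maps}
    try:
        with open(output_path, "wb") as out:
            for lba in range(total_sectors):
                block = b"\x00" * SECTOR  # sectors no pass read stay zero-filled
                for img, good in maps:
                    if lba in good:
                        h = handles[img]
                        h.seek(lba * SECTOR)
                        block = h.read(SECTOR)
                        break
                out.write(block)
    finally:
        for h in handles.values():
            h.close()
```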
Controller/HBA & cache (16–25)
- RAID controller dead → No import possible → Clone members; emulate the layout in software; reconstruct.
- BBU/cache loss (write-back) → Dirty stripes → Majority vote at parity; journal-first FS repair.
- Foreign config overwrite → NVRAM mismatch → Use on-disk metadata; assemble a consistent-epoch set.
- Firmware bug changes LBA bands → Translator anomaly → Build a corrective mapping before assembly.
- Controller resets mid-write → Stripe split-brain → Stripe-level consensus + FS journal reconciliation.
- HBA link CRC storms → Cabling/backplane faults → New HBA/cables; small-queue, small-block imaging.
- Controller adds a hot-spare wrongly → Poisoned rebuild → Choose a pre-event epoch; ignore contaminated writes.
- Cache policy mis-set (RA/WA) → FS inconsistencies → Reassemble with the most consistent members; repair FS.
- Metadata cached but not flushed → Inconsistent superblocks → Epoch voting; pick the coherent set.
- Controller battery removed → Lost cache → Journal-first FS repair; parity reconcile.
Array management mistakes (26–40)
- Wrong disk order inserted → Stripe-signature solver → Brute-force the order; choose the highest FS coherence score (see the order/stripe-size sketch after this list).
- Accidental reinitialise/recreate → Metadata wiped → Recover backup superblocks; infer geometry from content.
- Expansion/reshape failed mid-operation → Mixed epochs → Rebuild the pre-reshape array; selectively add post-reshape data if consistent.
- Rebuild started on the wrong member → Good data overwritten → Roll back to pre-rebuild clones; exclude poisoned ranges.
- Hot-swap during an I/O storm → Stripe skew → Majority vote; journal reconcile.
- Foreign import to a different controller → Geometry mismatch → Use on-disk superblocks; ignore controller metadata.
- Accidental disk removal/reorder → Slot assignments lost → Entropy/order solver; reassemble; verify with FS signatures.
- Auto-repair by the NAS OS → md/LVM altered → Halt; assemble manually from images; repair FS.
- Wrong stripe size configured → Throughput oddities → Detect via FS runlists; rebuild with the detected chunk size.
- Sector alignment off → Offset introduced by a bridge → Correct start offsets; reassemble.
- Mix of 512e and 4Kn drives → Partial reads fail → Normalise logical sector size in the model.
- RAID set imported degraded & used → Further divergence → Prefer the earliest consistent snapshot across members.
- Spare promoted from a failing disk → Cascading errors → Composite best-of image per LBA; reassemble.
- Parity scrub on a failing disk → Accelerated decay → Stop the scrub; image the weakest first; parity fill later.
- Controller Online Capacity Expansion loop → OCE metadata split → Select the pre-OCE epoch; salvage post-OCE data cautiously.
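For small member counts, the order and stripe-size solving mentioned above can be brute-forced and scored. The Python sketch below is illustrative only: it scores each candidate geometry by counting well-aligned NTFS MFT record signatures in a sampled window, a crude stand-in for the richer filesystem-coherence scoring used in the lab, and it models a plain stripe set (parity rotation adds a further dimension, covered in the next section's sketch).

```python
"""Brute-force member order and stripe size for a striped set (sketch).

Illustrative only: assembles a small window of each candidate geometry
and scores it by counting NTFS "FILE" MFT-record signatures that land on
1 KiB boundaries. Real solvers use runlists, entropy and directory
structure as well.
"""

from itertools import permutations

CANDIDATE_CHUNKS = [64 * 1024, 128 * 1024, 256 * 1024]  # common stripe sizes
WINDOW_CHUNKS = 512  # how many chunks of the virtual volume to sample


def assemble_window(images, order, chunk):
    """Concatenate chunks round-robin in the candidate member order."""
    out = bytearray()
    handles = [open(images[i], "rb") for i in order]
    try:
        for n in range(WINDOW_CHUNKS):
            h = handles[n % len(handles)]
            h.seek((n // len(handles)) * chunk)
            out += h.read(chunk)
    finally:
        for h in handles:
            h.close()
    return bytes(out)


def score(volume):
    """Count well-aligned NTFS MFT record signatures."""
    return sum(
        1 for off in range(0, len(volume) - 4, 1024)
        if volume[off:off + 4] == b"FILE"
    )


def solve(images):
    best = None
    for chunk in CANDIDATE_CHUNKS:
        for order in permutations(range(len(images))):
            s = score(assemble_window(images, order, chunk))
            if best is None or s > best[0]:
                best = (s, order, chunk)
    return best  # (score, member order, stripe size)
```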
RAID-level specifics (41–50)
- RAID-0 with one disk lost → No redundancy → Image the survivors; carve; recovery is partial unless a mirror/backup exists.
- RAID-1 mirror divergence → Members hold different versions → Choose the most recent consistent member; mount RO; export.
- RAID-5 dual failure → Second fault before/during rebuild → Clone all; parity-reconstruct missing stripes; validate via FS.
- RAID-10 mirror-leg dual failure → Both disks in one leg dead → Build a composite from readable regions across both; re-stripe with the healthy legs.
- Parity rotation unknown → Left/right, symmetric/asymmetric → Detect via parity signature (see the rotation sketch after this list); confirm by FS scoring.
- RAID-5 write hole → Unclean shutdown → Reconcile at the journals first; per-stripe majority vote.
- RAID-10 offset mirrors → Controller quirk → Detect the offset via headers/runs; correct the model; export.
- Nested RAID layouts → 1+0 vs 0+1 (or RAID 50) ambiguity → Disassemble layers bottom-up; test each layer's integrity.
- Parity drive marked failed but healthy → False positive → Validate by raw reads; include it as a data source.
- Degraded performance causing timeouts → TLER/ERC mismatch → Image members separately; avoid controller rebuilds.
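Where order and stripe size are already known, parity rotation can be sampled directly. The sketch below is again illustrative: it assumes the data area starts at offset 0 of each member image, finds which member holds parity in each sampled stripe row, and checks whether that position walks backwards (left rotation) or forwards (right rotation). Symmetric vs asymmetric data ordering is then settled by filesystem scoring, not shown here.

```python
"""Detect RAID-5 parity rotation from cloned members (sketch).

Assumes member order and chunk size are known and that member images
start at the array data area (no controller metadata offset).
"""

CHUNK = 64 * 1024
SAMPLE_ROWS = 64


def xor_blocks(blocks):
    acc = bytearray(blocks[0])
    for b in blocks[1:]:
        for i, byte in enumerate(b):
            acc[i] ^= byte
    return bytes(acc)


def parity_positions(images, chunk=CHUNK, rows=SAMPLE_ROWS):
    """For each sampled stripe row, return the member index holding parity."""
    handles = [open(p, "rb") for p in images]
    positions = []
    try:
        for row in range(rows):
            blocks = []
            for h in handles:
                h.seek(row * chunk)
                blocks.append(h.read(chunk))
            if all(not b.strip(b"\x00") for b in blocks):
                positions.append(None)  # all-zero row: ambiguous, skip
                continue
            hit = None
            for i in range(len(blocks)):
                others = blocks[:i] + blocks[i + 1:]
                if blocks[i] == xor_blocks(others):
                    hit = i
                    break
            positions.append(hit)  # None means the row was inconclusive
    finally:
        for h in handles:
            h.close()
    return positions


def guess_rotation(positions, n_members):
    left = sum(1 for r, p in enumerate(positions)
               if p == (n_members - 1 - r) % n_members)
    right = sum(1 for r, p in enumerate(positions)
                if p == r % n_members)
    return ("left" if left >= right else "right"), left, right
```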
Filesystem/LUN on top of arrays (51–70)
- VMFS datastore header loss → ESXi can't mount → Rebuild VMFS metadata from copies; restore the VMDK chain; export.
- NTFS on RAID corrupted → $MFT/$LogFile damage → Replay the log; rebuild $MFT; relink orphans.
- ReFS integrity-stream mismatches → CoW artefacts → Export checksum-valid objects; ignore poisoned blocks.
- APFS on RAID (Mac HBA/Thunderbolt DAS) → Walk checkpoints; rebuild the omap; mount the Data volume RO.
- XFS journal corruption → Manual log replay; directory rebuild.
- ext4 superblock/inodes lost → Use backup superblocks (see the scan sketch after this list); fsck-style repair on the image.
- Btrfs chunk/metadata errors → Choose the best superblock pair; tree-search for roots; export snapshots.
- ZFS pool missing a vdev → Import with rewind/readonly; scrub; export datasets.
- LVM PV/VG/LV map broken → Rebuild from on-disk metadata; restore the LV; repair FS.
- Windows Dynamic Disk (spanned/striped) → LDM database loss → Recover from copies; infer layout from FS runs.
- Storage Spaces parity/mirror → Slab map corrupt → Rebuild columns; export the virtual disk RO.
- BitLocker on RAID → Recovery key needed → Image first; decrypt the clone; export.
- FileVault/LUKS layers → Keys required → Decrypt the clone; export RO.
- iSCSI LUN sparse file corrupted → Damaged NAS backing store → Carve LUN extents; rebuild the guest FS.
- CSVFS/Cluster Shared Volumes issues → Owner/fencing problems → Mount RO on the image; extract VHDX.
- Thin-provisioned LUN overrun → Holes mapped → Handle sparse extents correctly; rebuild the file map.
- Deduplication store damage → Hash index repair → Rehydrate from good chunks; verify hashes.
- NAS snapshot DB broken (Btrfs/ZFS) → Mount earlier snapshot epochs; export.
- NFS export corrupt but LUN OK → Recover the LUN file; ignore the share layer; export the guest.
- SMB shadow copies hidden → Enumerate previous versions on the image; export.
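For the ext4 backup-superblock case above, a simple scan of the assembled read-only image can locate candidate superblock copies before any repair tooling is pointed at the clone. The Python sketch below checks the 0xEF53 magic and a plausible block-size field; the image name and step size are placeholders.

```python
"""Scan a virtual-array image for ext2/3/4 superblock copies (sketch).

Works on the read-only clone of the assembled array, never the
originals. Flags offsets whose bytes look like an ext superblock:
magic 0xEF53 at byte 56 and a plausible s_log_block_size.
"""

import struct

MAGIC_OFFSET = 56       # s_magic within the superblock
LOG_BLOCK_OFFSET = 24   # s_log_block_size within the superblock
STEP = 1024             # superblock copies are 1 KiB-aligned


def find_superblocks(image_path, limit=16):
    hits = []
    with open(image_path, "rb") as f:
        offset = 0
        while len(hits) < limit:
            f.seek(offset)
            sb = f.read(1024)
            if len(sb) < 1024:
                break  # end of image
            magic = struct.unpack_from("<H", sb, MAGIC_OFFSET)[0]
            log_bs = struct.unpack_from("<I", sb, LOG_BLOCK_OFFSET)[0]
            if magic == 0xEF53 and log_bs <= 6:
                hits.append((offset, 1024 << log_bs))  # (offset, block size)
            offset += STEP
    return hits


if __name__ == "__main__":
    # "virtual_array.img" is a placeholder path for the assembled clone
    for off, block_size in find_superblocks("virtual_array.img"):
        print(f"superblock candidate at {off} (block size {block_size})")
```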
Network/NAS firmware & protocol quirks (71–80)
- Synology mdadm event mismatch → Assemble with the highest common event count (see the sketch after this list); ignore outliers.
- QNAP migration after reset → Geometry changed → Use pre-reset md/LVM metadata; reassemble.
- Drobo BeyondRAID metadata → Proprietary map → Reconstruct from blocks; emulate the logical disk; recover the FS.
- Netgear ReadyNAS X-RAID → Auto-expand quirks → Choose the pre-expand epoch; rebuild.
- Zyxel/TerraMaster firmware update failure → md arrays altered → Image and assemble manually; fix the FS.
- NFS stale handles during the incident → Incomplete writes → Use the journal; exclude partials.
- SMB opportunistic locking → In-flight data loss → Recover temp/lock files; replay application logs.
- AFP legacy shares (Mac) → Catalog sync issues → HFS+ repair; export.
- rsync with --inplace on NAS → Originals overwritten → Carve from snapshots/previous versions.
- Hybrid RAID modes (SHR/SHR-2) → Non-uniform chunking → Use the Synology chunk map; export.
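The "highest common event" assembly for md-based NAS boxes (Synology, QNAP, Netgear and others) can be triaged by reading the Events counter from each member clone. The sketch below shells out to mdadm --examine against read-only loop devices backed by the clones; the device paths are placeholders, the regex targets v1.x superblock output, and nothing in it writes to the members.

```python
"""Group md members by their superblock event count (sketch).

Runs mdadm --examine on read-only loop devices backed by member clones
and pulls the "Events" counter, so assembly can start from the largest
set sharing one event count. Device paths are placeholders.
"""

import re
import subprocess
from collections import defaultdict

EVENTS_RE = re.compile(r"^\s*Events\s*:\s*(\d+)", re.MULTILINE)


def member_events(devices):
    events = {}
    for dev in devices:
        proc = subprocess.run(
            ["mdadm", "--examine", dev],
            capture_output=True, text=True,
        )
        if proc.returncode != 0:
            continue  # no readable md superblock on this member
        match = EVENTS_RE.search(proc.stdout)
        if match:
            events[dev] = int(match.group(1))
    return events


def grouped_by_epoch(events):
    groups = defaultdict(list)
    for dev, count in events.items():
        groups[count].append(dev)
    # largest consistent set first, then highest event count
    return sorted(groups.items(), key=lambda kv: (len(kv[1]), kv[0]), reverse=True)


if __name__ == "__main__":
    devs = ["/dev/loop0", "/dev/loop1", "/dev/loop2", "/dev/loop3"]
    for epoch, members in grouped_by_epoch(member_events(devs)):
        print(f"events {epoch}: {members}")
```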
Environment & human factors (81–90)
- Overheating NAS → Thermal throttling → Staged imaging; priority passes.
- Power surge → Multiple member faults → Electronics repair plus imaging; parity fill.
- Water ingress → Immediate neutralisation; short imaging window → Prioritise the weakest drives.
- Chassis backplane damage → Intermittent links → Direct-attach to an HBA; image.
- Chassis moved while live → Vibration head-slap → HSA swap; image; parity fill.
- Forced filesystem checks on the array → Secondary damage → Roll back to pre-repair images; rebuild logically.
- User replaced the wrong disk → Good member removed → Reconstruct with the correct set via the order solver.
- Mixed firmware revisions in the pool → Behaviour drift → Image all; choose a consistent epoch.
- Non-ECC RAM bit-flips during a scrub → Poisoned parity → Majority vote; journal-first recovery.
- Expired cache battery forcing write-through → Latency errors → Image per member; reconstruct.
Edge/nested scenarios (91–100)
- RAID over encrypted volumes → Layered crypto → Decrypt per member first (keys required), then assemble.
- Encryption over RAID (BitLocker on the array) → Assemble first, then decrypt the clone.
- vSAN object health issues → RDT/DOM metadata → Export components; reconstruct VMDKs; recover.
- Ceph RGW/OSD loss → PG repair; export object data; rebuild the FS where possible.
- GlusterFS replica/stripe mismatch → Heal info → Prefer healthy bricks; reconstruct file trees.
- Hybrid HDD+SSD tiering (FAST Cache) → Hot blocks missing → Merge SSD/HDD images; resolve the hot tier first.
- 4Kn drives mixed with 512e → Alignment errors → Normalise sector size in the model.
- Controller converts JBOD silently → Foreign import changed flags → Use raw images; ignore controller metadata.
- Parity verified wrong after firmware update → New parity calculation → Roll back with images; reconstruct.
- Controller "secure erase" on a hot-spare → Data gone on that member → Maximise from the other members; use parity where possible.
Top 20 virtualisation / virtual-disk failure scenarios & recovery
- VMFS header loss / datastore won't mount → Rebuild VMFS metadata; restore the VMDK chain (CID/parentCID); export guest files.
- VMDK snapshot chain broken → Missing delta links → Recreate the descriptor; stitch deltas by parent CID/time (see the chain sketch after this list); mount the virtual disk.
- Hyper-V VHDX differencing chain corrupt → Bad parent pointer → Fix headers; merge the AVHDX hierarchy; export NTFS from the guest.
- CSVFS/Cluster Shared Volumes fencing → Ownership issues → Mount images RO; extract VHDX; repair the guest FS.
- Thin provisioning overfilled → Zeroed holes → Recover pre-overfill data from snapshots/older extents.
- VMFS over degraded RAID → Dual fault during a snapshot → Assemble the pre-fault virtual array; mount older VM snapshots.
- iSCSI LUN backing VMFS damaged → Carve the LUN from the NAS FS; mount VMFS; export VMs.
- NFS datastore map loss → Export .lck and descriptor remnants → Rebuild the inventory; attach VMs to the recovered VMDKs.
- vSAN object/component missing → Reconstruct from surviving components; recover the namespace; export VMDKs.
- Proxmox/ZFS zvol snapshot corruption → Roll the ZFS snapshot back on the image; use zdb to locate blocks; export.
- KVM qcow2 overlay lost → Rebase overlays; convert qcow2 to raw if needed; mount the guest FS.
- Citrix/XenServer VHD chain issues → Fix footer/parent; coalesce the chain; export NTFS/ext.
- SR metadata corruption (Xen) → Recreate the SR from the LUN; attach the VDI; export.
- VM encryption (vSphere, or BitLocker inside the guest) → Keys required → Decrypt at the host layer (vSphere) or inside the guest, always on a clone.
- Changed Block Tracking (CBT) bugs → Inconsistent backups → Prefer full images; use earlier snapshots.
- Backup appliance dedup chain damage → Rehydrate by hash map; export full images; verify via checksum.
- Array reshape during VM Storage vMotion → Mixed epochs → Use pre-vMotion copies; rebuild.
- Template golden image corrupted → Restore from dedup/previous versions; recompose pools.
- Guest FS dirty after a host incident → Journal replay on the guest (on the image); export.
- EFI/boot issues in a VM → Fix only on a copy; mount the disk; export data directly.
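Stitching a VMDK snapshot chain starts from the plain-text descriptors. The Python sketch below orders a chain by CID/parentCID/parentFileNameHint as a triage aid on recovered copies; the paths are placeholders, it assumes a single unbranched chain whose parents are referenced by bare file name, and extent headers and CID consistency still have to be verified before anything is mounted.

```python
"""Order a VMDK snapshot chain from its descriptor files (sketch).

Reads the plain-text descriptors of recovered VMDKs, pulls CID,
parentCID and parentFileNameHint, and walks from the base disk
(parentCID ffffffff) down to the newest delta.
"""

import re
from pathlib import Path

FIELD_RE = re.compile(
    r'^(CID|parentCID|parentFileNameHint)\s*=\s*"?([^"\r\n]+)"?', re.MULTILINE
)


def read_descriptor(path):
    # Descriptors are small text files; embedded binary headers are ignored here.
    text = Path(path).read_text(errors="ignore")
    return dict(FIELD_RE.findall(text))


def build_chain(descriptor_paths):
    info = {Path(p).name: read_descriptor(p) for p in descriptor_paths}
    # children indexed by the parent file name they point at
    children = {
        d.get("parentFileNameHint"): name
        for name, d in info.items()
        if d.get("parentCID", "ffffffff").lower() != "ffffffff"
    }
    base = next(name for name, d in info.items()
                if d.get("parentCID", "").lower() == "ffffffff")
    chain = [base]
    while chain[-1] in children:
        chain.append(children[chain[-1]])
    return chain  # base first, newest delta last


if __name__ == "__main__":
    # Placeholder paths under a recovered datastore copy
    print(build_chain([
        "recovered/base.vmdk",
        "recovered/base-000001.vmdk",
        "recovered/base-000002.vmdk",
    ]))
```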
Why choose Plymouth Data Recovery
- 25+ years of RAID/NAS/server recoveries across vendors, controllers and filesystems
- Forensic-first workflow (image originals, virtual reconstruction, RO exports)
- Advanced tooling & donor inventory (controllers, HBAs, heads, PCBs)
- Free diagnostics with clear recovery options before work begins
Talk to a RAID engineer
Plymouth Data Recovery — contact our RAID engineers today for a free diagnostic. We’ll stabilise the members, reconstruct the array virtually, and recover your data with forensic-grade care.




