Replacing a disk may sometimes be challenging, especially with software RAID. If your software RAID1 went inactive, this article might be for you!
After booting from a LiveCD or a rescue PXE system, all RAID devices came up inactive despite the loaded personalities. We have a similar article on the subject – Recovering MD array and mdadm: Cannot get array info for /dev/md0
livecd ~ # cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid0] [raid1] [raid10] [linear] [multipath]
md125 : inactive sdb3[1](S)
      1047552 blocks super 1.2

md126 : inactive sdb1[1](S)
      52427776 blocks super 1.2

md127 : inactive sdb2[1](S)
      16515072 blocks super 1.2

unused devices: <none>
The personalities are loaded, which means the kernel modules were loaded successfully – “[raid6] [raid5] [raid4] [raid0] [raid1] [raid10] [linear] [multipath]”. Still, something went wrong: the devices’ personalities are unrecognized and the arrays are in inactive state.
A device in inactive state cannot recover, and disks cannot be added to it:
livecd ~ # mdadm --add /dev/md125 /dev/sda3
mdadm: Cannot get array info for /dev/md125
In general, to recover a RAID in inactive state:
- Check whether the kernel modules are loaded. If the RAID setup uses RAID1, the “Personalities” line in /proc/mdstat should include “[raid1]”.
- Try to start the device with “mdadm --run”.
- If the status of the RAID device changes to “active (auto-read-only)” or just “active”, add the missing disk to the RAID device with “mdadm --add”.
- Wait for the RAID device to recover.
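The steps above can be sketched as a short command sequence. This is a hedged example: the device names (/dev/md125, /dev/sda3) are taken from the console output in this article and will differ on your system, and all commands require root.

```shell
# Ensure the needed personality module is loaded (raid1 for the
# arrays in this article); check the "Personalities" line afterwards.
modprobe raid1
grep Personalities /proc/mdstat

# Try to start the inactive array.
mdadm --run /dev/md125

# If the array is now "active" or "active (auto-read-only)",
# add the missing disk back into it.
mdadm --add /dev/md125 /dev/sda3

# Watch the resync progress until recovery finishes.
watch cat /proc/mdstat
```

Note that until “mdadm --run” brings the array out of the inactive state, the “mdadm --add” step will keep failing with “Cannot get array info”, as shown earlier.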