What a case! Long story short: one of the disks in a software RAID1 setup went bad, and when we tried replacing it from a rescue Linux console we got a strange error from the MD device:
mdadm: Cannot get array info for /dev/md125
And according to /proc/mdstat the device was there, and mdadm -E reported the array was “clean”.
A similar issue is described here: Inactive array – mdadm: Cannot get array info for /dev/md126
root@631019 ~ # mdadm --add /dev/md125 /dev/sdb2
mdadm: Cannot get array info for /dev/md125
root@631019 ~ # cat /proc/mdstat :(
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md122 : inactive sda4[0](S)
      33520640 blocks super 1.2

md123 : inactive sda5[0](S)
      1914583040 blocks super 1.2

md124 : inactive sda3[0](S)
      4189184 blocks super 1.2

md125 : inactive sda2[0](S)
      1048512 blocks

unused devices: <none>
root@631019 ~ # mdadm -E /dev/sda2 :(
/dev/sda2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : aff708ee:16669ffb:1a120e13:7e9185ae
  Creation Time : Thu Mar 14 15:10:21 2019
     Raid Level : raid1
  Used Dev Size : 1048512 (1023.94 MiB 1073.68 MB)
     Array Size : 1048512 (1023.94 MiB 1073.68 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 126

    Update Time : Thu Jul 11 10:22:17 2019
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : c1ee0a10 - correct
         Events : 103


      Number   Major   Minor   RaidDevice State
this     0       8        2        0      active sync   /dev/sda2

   0     0       8        2        0      active sync   /dev/sda2
   1     1       8       18        1      active sync   /dev/sdb2
The important piece of information here is that the RAID1 is in an inactive state, which is really strange! It is perfectly normal for an array to be started with one disk missing (as you can see, the raid consists of 2 disks) and in read-only mode before mounting it. But here it is in an inactive state! The output of /proc/mdstat shows signs of an inappropriate assembly of all those arrays, probably during the boot of the rescue Linux system – missing information, an old version of the mdadm utility, or some other configuration loaded! In such a state – inactive, and as you see with no information about the type of the arrays – it is normal for mdadm to report that it could not get the current array info. The key word here is CURRENT, even though mdadm omits it in the error output:
root@631019 ~ # mdadm --add /dev/md125 /dev/sdb2
mdadm: Cannot get array info for /dev/md125
Because mdadm in fact tries to add the disk to the currently loaded configuration, not to the real one on your disks!
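You can see the mismatch for yourself by comparing the kernel’s current view of the array with the metadata actually stored on the disk. A minimal sketch, using this example’s device names:

mdadm -D /dev/md125   # the kernel's view of the (inactive) array -- may itself fail here, which is exactly the point
mdadm -E /dev/sda2    # the metadata actually written on the member partition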
The solution
- Remove ALL of the current configuration by issuing multiple stop commands with mdadm; no arrays, inactive or otherwise, should be left in “/proc/mdstat” (the whole sequence is sketched right after this list).
- Remove (or better, rename) the mdadm configuration file /etc/mdadm.conf (in some Linux distributions it is /etc/mdadm/mdadm.conf).
- Rescan for MD devices with mdadm. mdadm will load the configuration from your disks.
- Add the missing partitions to your software raid devices.
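Condensed into commands, the four steps look roughly like this – a sketch using this example’s device names and the “rename” variant of step 2; adjust the paths and md devices to your system:

mdadm --stop --scan                                  # STEP 1: stop every array listed in /proc/mdstat
cat /proc/mdstat                                     #         verify nothing is left
mv /etc/mdadm.conf /etc/mdadm.conf.bak               # STEP 2: move the (possibly wrong) configs aside
mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak   #         (whichever of the two paths exists)
mdadm --assemble --scan --verbose                    # STEP 3: re-assemble from the on-disk metadata only
mdadm --add /dev/md124 /dev/sdb2                     # STEP 4: re-add the new disk's partitions, one per array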
STEP 1) Remove all current MD configuration
Use mdadm with “--stop”. This will not delete any data (neither the raid metadata nor the real data). It just removes the currently loaded configuration (which might be wrong!) from your kernel. We want to rescan it cleanly, again.
root@631019 ~ # mdadm --stop /dev/md122
mdadm: stopped /dev/md122
root@631019 ~ # mdadm --stop /dev/md123
mdadm: stopped /dev/md123
root@631019 ~ # mdadm --stop /dev/md124
mdadm: stopped /dev/md124
root@631019 ~ # mdadm --stop /dev/md125
mdadm: stopped /dev/md125
root@631019 ~ # cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
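If there are many arrays, stopping them one by one gets tedious. A small loop over /proc/mdstat does the same – a sketch; review the device list it produces before running it:

# Stop every md device currently listed in /proc/mdstat
for md in $(awk '/^md/ {print "/dev/"$1}' /proc/mdstat); do
    mdadm --stop "$md"
done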
STEP 2) Remove all mdadm configuration files
Because they might be the problem. Imagine an old configuration file left behind – the rescue Linux (or a server installed by some unusual method) may try assembling your raids using it and misconfigure everything! Do not replace the file with a new mdadm scan at this point, because it will capture the same (or a similarly) wrong configuration. Just remove the files!
root@631019 ~ # rm /etc/mdadm.conf
root@631019 ~ # rm /etc/mdadm/mdadm.conf
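As mentioned in the solution list, renaming is the safer variant of this step, so you keep a backup of whatever was there:

[ -f /etc/mdadm.conf ] && mv /etc/mdadm.conf /etc/mdadm.conf.bak
[ -f /etc/mdadm/mdadm.conf ] && mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak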
STEP 3) Rescan for MD devices with mdadm
mdadm will rescan your disks for metadata and load a new configuration into the kernel reflecting what it has just found on the disks. The verbose option is not mandatory, but with it the user sees what mdadm is doing and errors can be spotted.
root@631019 ~ # mdadm --assemble --scan --verbose
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/loop0
mdadm: no recogniseable superblock on /dev/sdb5
mdadm: no recogniseable superblock on /dev/sdb4
mdadm: no recogniseable superblock on /dev/sdb3
mdadm: no recogniseable superblock on /dev/sdb2
mdadm: no recogniseable superblock on /dev/sdb1
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: No super block found on /dev/sda2 (Expected magic a92b4efc, got 00000041)
mdadm: no RAID superblock on /dev/sda2
mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got af461b01)
mdadm: no RAID superblock on /dev/sda1
mdadm: No super block found on /dev/sda (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sda5 is identified as a member of /dev/md/srv.local:5, slot 0.
mdadm: no uptodate device for slot 1 of /dev/md/srv.local:5
mdadm: added /dev/sda5 to /dev/md/srv.local:5 as 0
mdadm: /dev/md/srv.local:5 assembled from 1 drive - not enough to start the array.
mdadm: looking for devices for further assembly
mdadm: /dev/sda4 is identified as a member of /dev/md/2113894:root, slot 0.
mdadm: no uptodate device for slot 1 of /dev/md/2113894:root
mdadm: added /dev/sda4 to /dev/md/2113894:root as 0
mdadm: /dev/md/2113894:root has been started with 1 drive (out of 2).
mdadm: looking for devices for further assembly
mdadm: /dev/sda3 is identified as a member of /dev/md/2113894:swap, slot 0.
mdadm: no uptodate device for slot 1 of /dev/md/2113894:swap
mdadm: added /dev/sda3 to /dev/md/2113894:swap as 0
mdadm: /dev/md/2113894:swap has been started with 1 drive (out of 2).
mdadm: looking for devices for further assembly
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/md/2113894:swap
mdadm: no recogniseable superblock on /dev/md/2113894:root
mdadm: no recogniseable superblock on /dev/loop0
mdadm: no recogniseable superblock on /dev/sdb5
mdadm: no recogniseable superblock on /dev/sdb4
mdadm: no recogniseable superblock on /dev/sdb3
mdadm: no recogniseable superblock on /dev/sdb2
mdadm: no recogniseable superblock on /dev/sdb1
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: /dev/sda5 is busy - skipping
mdadm: /dev/sda4 is busy - skipping
mdadm: /dev/sda3 is busy - skipping
mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sda1
mdadm: No super block found on /dev/sda (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sda2 is identified as a member of /dev/md/126_0, slot 0.
mdadm: no uptodate device for slot 1 of /dev/md/126_0
mdadm: added /dev/sda2 to /dev/md/126_0 as 0
mdadm: /dev/md/126_0 has been started with 1 drive (out of 2).
mdadm: looking for devices for further assembly
mdadm: /dev/sda5 is busy - skipping
mdadm: /dev/sda4 is busy - skipping
mdadm: /dev/sda3 is busy - skipping
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/md/126_0
mdadm: no recogniseable superblock on /dev/md/2113894:swap
mdadm: no recogniseable superblock on /dev/md/2113894:root
mdadm: no recogniseable superblock on /dev/loop0
mdadm: no recogniseable superblock on /dev/sdb5
mdadm: no recogniseable superblock on /dev/sdb4
mdadm: no recogniseable superblock on /dev/sdb3
mdadm: no recogniseable superblock on /dev/sdb2
mdadm: no recogniseable superblock on /dev/sdb1
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: /dev/sda5 is busy - skipping
mdadm: /dev/sda4 is busy - skipping
mdadm: /dev/sda3 is busy - skipping
mdadm: /dev/sda2 is busy - skipping
mdadm: no recogniseable superblock on /dev/sda1
mdadm: Cannot assemble mbr metadata on /dev/sda
That is a lot of output, but if you inspect it closely you will see multiple arrays found and started with a single disk. And here is the new current configuration in the kernel, which looks a lot better than the previous one:
root@631019 ~ # cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md124 : active (auto-read-only) raid1 sda2[0]
      1048512 blocks [2/1] [U_]

md125 : active (auto-read-only) raid1 sda3[0]
      4189184 blocks super 1.2 [2/1] [U_]

md126 : active (auto-read-only) raid1 sda4[0]
      33520640 blocks super 1.2 [2/1] [U_]

md127 : inactive sda5[0](S)
      1914583040 blocks super 1.2

unused devices: <none>
In fact, the 3 RAID1 arrays are started successfully and we could use them. Only one RAID is left in an inactive state, but it is a RAID0 and that is normal – a RAID0 with a missing device cannot be recovered (unlike RAID1, which can run degraded).
STEP 4) Add the missing partitions to your software raid devices
First, you should partition your new disk accordingly (here with sgdisk). If you have not done it yet, use this – first copy the partition table, then randomize the GUIDs:
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb
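To be safe, the resulting layout can be compared against the source disk before touching the arrays (sgdisk -p prints a disk’s GPT partition table):

sgdisk -p /dev/sda   # layout of the surviving disk
sgdisk -p /dev/sdb   # layout of the new disk -- the partition sizes should match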
And we can add the partitions to recover the RAID devices successfully:
root@631019 ~ # mdadm --add /dev/md124 /dev/sdb2
mdadm: added /dev/sdb2
root@631019 ~ # cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md124 : active raid1 sdb2[2] sda2[0]
      1048512 blocks [2/1] [U_]
      [===============>.....]  recovery = 75.0% (786432/1048512) finish=0.0min speed=196608K/sec

md125 : active (auto-read-only) raid1 sda3[0]
      4189184 blocks super 1.2 [2/1] [U_]

md126 : active (auto-read-only) raid1 sda4[0]
      33520640 blocks super 1.2 [2/1] [U_]

md127 : inactive sda5[0](S)
      1914583040 blocks super 1.2

unused devices: <none>
root@631019 ~ # mdadm --add /dev/md125 /dev/sdb3
mdadm: added /dev/sdb3
root@631019 ~ # mdadm --add /dev/md126 /dev/sdb4
mdadm: added /dev/sdb4
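The rebuild progress can be followed live, for example:

watch -n1 cat /proc/mdstat   # refresh the RAID status every second (Ctrl+C to quit)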
After a while, all RAID1 arrays are rebuilt successfully!
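One note after recovery: since the old configuration files were removed, a fresh one can be generated from the now-correct live state. Review the output before saving it, and mind that the path may be /etc/mdadm/mdadm.conf on your distribution:

mdadm --detail --scan | tee /etc/mdadm.conf   # write ARRAY lines for the currently running arrays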
The whole story
A disk in a software RAID1 went faulty and we stopped the server to change it. The colocation staff replaced the disk with a new one and booted the server. Because it was CentOS 7, it did not boot normally: one of the raid devices included in grub.cfg could not be started – see CentOS 7 Server hangs up on boot after deleting a software raid (mdadm device). So we booted into the rescue console “4.19.0-1-grml-amd64 #1 SMP Debian 4.19.8-1+grml.1 (2018-12-11) x86_64 x86_64 x86_64 GNU/Linux”. We partitioned our new disk with the same layout as the old one and executed the first command to add the new raid partition to the raid device – and we got a nasty error:
root@631019 ~ # mdadm --add /dev/md123 /dev/sdb5
mdadm: Cannot get array info for /dev/md123
Checking /proc/mdstat showed the first sign of trouble. A strange raid0 and the ddf mdadm driver loaded?
root@631019 ~ # cat /proc/mdstat :(
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md122 : inactive sda4[0](S)
      33520640 blocks super 1.2

md123 : inactive sda5[0](S)
      1914583040 blocks super 1.2

md124 : inactive sda3[0](S)
      4189184 blocks super 1.2

md125 : inactive sda2[0](S)
      1048512 blocks

md126 : active raid0 sdb[0]
      1952972800 blocks super external:/md127/10 64k chunks

md127 : inactive sdb[0](S)
      541784 blocks super external:ddf
It appeared there were no raids on “/dev/sdb”, which was normal because the disk was new.
root@631019 ~ # mdadm -E /dev/sdb
/dev/sdb:
   MBR Magic : aa55
Partition[0] :   3907029167 sectors at            1 (type ee)
The kernel also had not tried to auto-assemble the RAIDs (they were version 1.2 metadata, so that was normal):
root@631019 ~ # dmesg|grep md
[    0.000000] Linux version 4.19.0-1-grml-amd64 (team@grml.org) (gcc version 8.2.0 (Debian 8.2.0-12)) #1 SMP Debian 4.19.8-1+grml.1 (2018-12-11)
[    2.542302] random: systemd-udevd: uninitialized urandom read (16 bytes read)
[    2.542381] random: systemd-udevd: uninitialized urandom read (16 bytes read)
[    2.542402] random: systemd-udevd: uninitialized urandom read (16 bytes read)
[    2.594464] usb usb1: Manufacturer: Linux 4.19.0-1-grml-amd64 ehci_hcd
[    2.614448] usb usb2: Manufacturer: Linux 4.19.0-1-grml-amd64 ehci_hcd
[    3.704709] md126: detected capacity change from 0 to 1999844147200
[   20.542235] systemd[1]: RTC configured in localtime, applying delta of 0 minutes to system time.
[   20.604080] systemd[1]: systemd 232 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[   20.622746] systemd[1]: Detected architecture x86-64.
[   20.623153] systemd[1]: Running with unpopulated /etc.
[   20.635723] systemd[1]: Set hostname to <lswrescue>.
[   20.636175] systemd[1]: Initializing machine ID from random generator.
[   20.662804] systemd[1]: Populated /etc with preset unit settings.
[   20.775061] systemd-sysv-generator[866]: Overwriting existing symlink /run/systemd/generator.late/grml-reboot.service with real service.
[   20.818898] systemd[1]: Listening on Journal Socket.
[   20.819321] systemd[1]: Reached target Swap.
[   20.819727] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[   20.820108] systemd[1]: Reached target Encrypted Volumes.
[   20.996378] systemd-journald[869]: Received request to flush runtime journal from PID 1
Then we first tried to stop an array, assemble it again, and add the new partition to rebuild it. It failed with the same error:
root@631019 ~ # mdadm -E /dev/sda2 :(
/dev/sda2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : aff708ee:16669ffb:1a120e13:7e9185ae
  Creation Time : Thu Mar 14 15:10:21 2019
     Raid Level : raid1
  Used Dev Size : 1048512 (1023.94 MiB 1073.68 MB)
     Array Size : 1048512 (1023.94 MiB 1073.68 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 126

    Update Time : Thu Jul 11 10:22:17 2019
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : c1ee0a10 - correct
         Events : 103


      Number   Major   Minor   RaidDevice State
this     0       8        2        0      active sync   /dev/sda2

   0     0       8        2        0      active sync   /dev/sda2
   1     1       8       18        1      active sync   /dev/sdb2
root@631019 ~ # mdadm --manage /dev/md122 --add /dev/sdb4
mdadm: Cannot get array info for /dev/md122
root@631019 ~ # mdadm --assemble /dev/md122 :(
mdadm: /dev/md122 not identified in config file.
root@631019 ~ # mdadm --assemble /dev/md122 --scan :(
mdadm: /dev/md122 not identified in config file.
root@631019 ~ # mdadm --assemble /dev/md122 --scan --force :(
mdadm: /dev/md122 not identified in config file.
root@631019 ~ # mdadm --stop /dev/md122 :(
mdadm: stopped /dev/md122
root@631019 ~ # mdadm --assemble /dev/md122 --scan --force
mdadm: /dev/md122 not identified in config file.
root@631019 ~ # cat /proc/mdstat :(
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md123 : inactive sda5[0](S)
      1914583040 blocks super 1.2

md124 : inactive sda3[0](S)
      4189184 blocks super 1.2

md125 : inactive sda2[0](S)
      1048512 blocks

md126 : active raid0 sdb[0]
      1952972800 blocks super external:/md127/10 64k chunks

md127 : inactive sdb[0](S)
      541784 blocks super external:ddf

unused devices: <none>
root@631019 ~ # mdadm --assemble /dev/md122 /dev/sda4
mdadm: /dev/md122 assembled from 1 drive - need all 2 to start it (use --run to insist).
1 root@631019 ~ # cat /proc/mdstat :(
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md122 : inactive sda4[0](S)
      33520640 blocks super 1.2

md123 : inactive sda5[0](S)
      1914583040 blocks super 1.2

md124 : inactive sda3[0](S)
      4189184 blocks super 1.2

md125 : inactive sda2[0](S)
      1048512 blocks

md126 : active raid0 sdb[0]
      1952972800 blocks super external:/md127/10 64k chunks

md127 : inactive sdb[0](S)
      541784 blocks super external:ddf

unused devices: <none>
root@631019 ~ # mdadm --add /dev/md122 /dev/sdb4
mdadm: Cannot get array info for /dev/md122
At first, the assemble command failed, but after specifying the exact device and partition, the inactive device appeared again in the current configuration in “/proc/mdstat”. We were in the same situation as before – inactive devices and no information about the RAID in the configuration loaded in the kernel. So mdadm had loaded the wrong configuration again, and indeed there was a really strange configuration in “/etc/mdadm/mdadm.conf”. Did you notice the line “mdadm: /dev/md122 not identified in config file”? What?
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md126 UUID=aff708ee:16669ffb:1a120e13:7e9185ae
ARRAY /dev/md/swap metadata=1.2 UUID=2960efb0:54bfb2af:babacd0b:c95e34c0 name=2113894:swap
ARRAY /dev/md/root metadata=1.2 UUID=e6367857:94aa2b7a:70a26151:63c60d0b name=2113894:root
ARRAY /dev/md/5 metadata=1.2 UUID=53a6a93b:769c450c:589a622e:00c66915 name=srv.local:5
ARRAY metadata=ddf UUID=9451717d:ca7996b6:e79e8c3c:e381784d
ARRAY container=9451717d:ca7996b6:e79e8c3c:e381784d member=0 UUID=01d23819:8d2cb691:843b82dd:1998dabe
ARRAY container=9451717d:ca7996b6:e79e8c3c:e381784d member=1 UUID=ed021fc6:45d423de:93fd03fd:f03785c3
ARRAY container=9451717d:ca7996b6:e79e8c3c:e381784d member=2 UUID=daf0c60e:eca16660:7d65c7e0:0fa3c2cb
ARRAY container=9451717d:ca7996b6:e79e8c3c:e381784d member=3 UUID=3ab9e3ab:0b90cfff:f5ecdf19:b523e1d5
ARRAY container=9451717d:ca7996b6:e79e8c3c:e381784d member=4 UUID=9898dc76:e9f662e2:0ec3bbba:10affe69
ARRAY container=9451717d:ca7996b6:e79e8c3c:e381784d member=5 UUID=77245b0c:b24d7261:4f7ee128:4d431b86
ARRAY container=9451717d:ca7996b6:e79e8c3c:e381784d member=6 UUID=3b06e0d2:8883165a:e3ae19a5:1b403352
ARRAY container=9451717d:ca7996b6:e79e8c3c:e381784d member=7 UUID=47440a60:a27caa49:e035dc9e:31d6771b
ARRAY container=9451717d:ca7996b6:e79e8c3c:e381784d member=8 UUID=7b256bc9:4a8f465c:e4ff4318:df8361d2
ARRAY container=9451717d:ca7996b6:e79e8c3c:e381784d member=9 UUID=d09e9945:9d22102d:b895e189:e498fe91
ARRAY container=9451717d:ca7996b6:e79e8c3c:e381784d member=10 UUID=90d64ef2:eeb2d004:823cbfaa:6f2081e1
ARRAY container=9451717d:ca7996b6:e79e8c3c:e381784d member=11 UUID=18a87c3b:1993f6f8:30fb116f:51db1cfc
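Incidentally, a stale config file like this can also be ruled out without deleting it: mdadm accepts the special config name “none”, which makes it act as though the config file were empty (check your mdadm version’s man page):

mdadm --assemble --scan --verbose --config=none   # ignore /etc/mdadm.conf entirely for this run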
We removed the file, stopped one of the inactive devices, and this time tried assembling the device with the verbose option – and here we saw the problem!
root@631019 ~ # mdadm --stop /dev/md122 :(
mdadm: stopped /dev/md122
root@631019 ~ # mdadm --assemble /dev/md122 --update=summaries --verbose
mdadm: looking for devices for /dev/md122
mdadm: No super block found on /dev/loop0 (Expected magic a92b4efc, got 637ab963)
mdadm: no RAID superblock on /dev/loop0
mdadm: No super block found on /dev/sdb5 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb5
mdadm: No super block found on /dev/sdb4 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb4
mdadm: No super block found on /dev/sdb3 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb3
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb2
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb1
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sda5 has wrong uuid.
mdadm: /dev/sda3 has wrong uuid.
mdadm: No super block found on /dev/sda2 (Expected magic a92b4efc, got 00000041)
mdadm: no RAID superblock on /dev/sda2
mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got af461b01)
mdadm: no RAID superblock on /dev/sda1
mdadm: No super block found on /dev/sda (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sda
mdadm: --update=summaries not understood for 1.x metadata
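Those “has wrong uuid” complaints can be cross-checked by hand – a sketch using this example’s devices: compare the UUID stored in a member’s superblock with the UUIDs the stale config file expects.

mdadm -E /dev/sda5 | grep UUID        # the UUID actually written on the disk
grep '^ARRAY' /etc/mdadm/mdadm.conf   # the UUIDs the old config file claims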
mdadm reported that some raid partitions had the wrong uuid, so apparently the currently loaded configuration was totally wrong and we had to delete all mdadm configuration files, stop all of the software MD devices, and rescan again with the verbose option to see what was going on. And this time it worked: the assemble command finished and the RAID1 devices appeared with one drive OK and one missing!
root@631019 ~ # mdadm --stop /dev/md122
mdadm: stopped /dev/md122
root@631019 ~ # mdadm --stop /dev/md123
mdadm: stopped /dev/md123
root@631019 ~ # mdadm --stop /dev/md124
mdadm: stopped /dev/md124
root@631019 ~ # mdadm --stop /dev/md125
mdadm: stopped /dev/md125
root@631019 ~ # cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
root@631019 ~ # rm /etc/mdadm.conf
root@631019 ~ # rm /etc/mdadm/mdadm.conf
root@631019 ~ # mdadm --assemble --scan --verbose
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/loop0
mdadm: no recogniseable superblock on /dev/sdb5
mdadm: no recogniseable superblock on /dev/sdb4
mdadm: no recogniseable superblock on /dev/sdb3
mdadm: no recogniseable superblock on /dev/sdb2
mdadm: no recogniseable superblock on /dev/sdb1
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: No super block found on /dev/sda2 (Expected magic a92b4efc, got 00000041)
mdadm: no RAID superblock on /dev/sda2
mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got af461b01)
mdadm: no RAID superblock on /dev/sda1
mdadm: No super block found on /dev/sda (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sda5 is identified as a member of /dev/md/srv.local:5, slot 0.
mdadm: no uptodate device for slot 1 of /dev/md/srv.local:5
mdadm: added /dev/sda5 to /dev/md/srv.local:5 as 0
mdadm: /dev/md/srv.local:5 assembled from 1 drive - not enough to start the array.
mdadm: looking for devices for further assembly
mdadm: /dev/sda4 is identified as a member of /dev/md/2113894:root, slot 0.
mdadm: no uptodate device for slot 1 of /dev/md/2113894:root
mdadm: added /dev/sda4 to /dev/md/2113894:root as 0
mdadm: /dev/md/2113894:root has been started with 1 drive (out of 2).
mdadm: looking for devices for further assembly
mdadm: /dev/sda3 is identified as a member of /dev/md/2113894:swap, slot 0.
mdadm: no uptodate device for slot 1 of /dev/md/2113894:swap
mdadm: added /dev/sda3 to /dev/md/2113894:swap as 0
mdadm: /dev/md/2113894:swap has been started with 1 drive (out of 2).
mdadm: looking for devices for further assembly
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/md/2113894:swap
mdadm: no recogniseable superblock on /dev/md/2113894:root
mdadm: no recogniseable superblock on /dev/loop0
mdadm: no recogniseable superblock on /dev/sdb5
mdadm: no recogniseable superblock on /dev/sdb4
mdadm: no recogniseable superblock on /dev/sdb3
mdadm: no recogniseable superblock on /dev/sdb2
mdadm: no recogniseable superblock on /dev/sdb1
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: /dev/sda5 is busy - skipping
mdadm: /dev/sda4 is busy - skipping
mdadm: /dev/sda3 is busy - skipping
mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sda1
mdadm: No super block found on /dev/sda (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sda2 is identified as a member of /dev/md/126_0, slot 0.
mdadm: no uptodate device for slot 1 of /dev/md/126_0
mdadm: added /dev/sda2 to /dev/md/126_0 as 0
mdadm: /dev/md/126_0 has been started with 1 drive (out of 2).
mdadm: looking for devices for further assembly
mdadm: /dev/sda5 is busy - skipping
mdadm: /dev/sda4 is busy - skipping
mdadm: /dev/sda3 is busy - skipping
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/md/126_0
mdadm: no recogniseable superblock on /dev/md/2113894:swap
mdadm: no recogniseable superblock on /dev/md/2113894:root
mdadm: no recogniseable superblock on /dev/loop0
mdadm: no recogniseable superblock on /dev/sdb5
mdadm: no recogniseable superblock on /dev/sdb4
mdadm: no recogniseable superblock on /dev/sdb3
mdadm: no recogniseable superblock on /dev/sdb2
mdadm: no recogniseable superblock on /dev/sdb1
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: /dev/sda5 is busy - skipping
mdadm: /dev/sda4 is busy - skipping
mdadm: /dev/sda3 is busy - skipping
mdadm: /dev/sda2 is busy - skipping
mdadm: no recogniseable superblock on /dev/sda1
mdadm: Cannot assemble mbr metadata on /dev/sda
:( 130 root@631019 ~ # cat /proc/mdstat :(
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md124 : active (auto-read-only) raid1 sda2[0]
      1048512 blocks [2/1] [U_]

md125 : active (auto-read-only) raid1 sda3[0]
      4189184 blocks super 1.2 [2/1] [U_]

md126 : active (auto-read-only) raid1 sda4[0]
      33520640 blocks super 1.2 [2/1] [U_]

md127 : inactive sda5[0](S)
      1914583040 blocks super 1.2

unused devices: <none>
And adding raid partitions was successful:
root@631019 ~ # mdadm --add /dev/md124 /dev/sdb2
mdadm: added /dev/sdb2
root@631019 ~ # cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md124 : active raid1 sdb2[2] sda2[0]
      1048512 blocks [2/1] [U_]
      [===============>.....]  recovery = 75.0% (786432/1048512) finish=0.0min speed=196608K/sec

md125 : active (auto-read-only) raid1 sda3[0]
      4189184 blocks super 1.2 [2/1] [U_]

md126 : active (auto-read-only) raid1 sda4[0]
      33520640 blocks super 1.2 [2/1] [U_]

md127 : inactive sda5[0](S)
      1914583040 blocks super 1.2

unused devices: <none>
root@631019 ~ # mdadm --add /dev/md125 /dev/sdb3
mdadm: added /dev/sdb3
root@631019 ~ # cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md124 : active raid1 sdb2[1] sda2[0]
      1048512 blocks [2/2] [UU]

md125 : active raid1 sdb3[2] sda3[0]
      4189184 blocks super 1.2 [2/1] [U_]
      [=>...................]  recovery =  6.2% (261888/4189184) finish=0.2min speed=261888K/sec

md126 : active (auto-read-only) raid1 sda4[0]
      33520640 blocks super 1.2 [2/1] [U_]

md127 : inactive sda5[0](S)
      1914583040 blocks super 1.2

unused devices: <none>
Comments

Thanks for this great post – you saved my day! #raid5 setup
Thank you SO MUCH for posting this. Just saved my home-made NAS from failure!
This was always a PoC (Raspberry Pi4 + 2x USB drives in RAID1) but I’d started to rely on it (bad move). Currently re-syncing from the working drive (and ordering a “proper” NAS) 😀
This also saved my bacon. Thank You!!
Thank you, the documentation is very confusing
Another thanks from a very glad reader!
Nice one – also sorted me out for mdadm: Cannot get array info for /dev/md127
Any idea on what causes this?
Well, I’m in the situation (after changing a defective HDD but without marking it as failed and removing it first – my mistake) where the “cat /proc/mdstat” command gives:
Personalities : [raid1]
md2 : active raid1 sda3[1]
2111699968 blocks super 1.2 [2/1] [_U]
bitmap: 8/16 pages [32KB], 65536KB chunk
md3 : active raid1 sda4[1]
1760973632 blocks super 1.2 [2/1] [_U]
bitmap: 5/14 pages [20KB], 65536KB chunk
md1 : active raid1 sda2[1]
523264 blocks super 1.2 [2/1] [_U]
md0 : inactive sda1[1](S)
33520640 blocks super 1.2
What would you recommend (the situation being a little bit different from the one you described above)?
Thank you so much if you could help me!
The md1, md2 and md3 seem OK – just “--add” the partitions from the new hard drive (the new drive should be partitioned first). For md0, try stopping it with “mdadm --stop /dev/md0” and then “mdadm --assemble --scan” – but if it is a RAID0 it won’t work and the data is gone.
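A minimal sketch of that sequence, assuming the replacement disk shows up as /dev/sdb (unverified device names from this reader’s setup – double-check with lsblk before running anything):

sgdisk /dev/sda -R /dev/sdb      # copy the partition layout from the surviving disk
sgdisk -G /dev/sdb               # give the copy new random GUIDs
mdadm --add /dev/md1 /dev/sdb2   # re-add the new partitions to the degraded mirrors
mdadm --add /dev/md2 /dev/sdb3
mdadm --add /dev/md3 /dev/sdb4
mdadm --stop /dev/md0            # md0 is inactive: stop it and let it re-assemble
mdadm --assemble --scan          # from the superblock on sda1, then add the new partition
mdadm --add /dev/md0 /dev/sdb1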