CentOS 7 dracut-initqueue timeout and could not boot – warning /dev/disk/by-id/md-uuid- does not exist

Let’s say you update your software RAID layout – create, delete or modify an array – and after a reboot the server does not start normally. After loading the remote video console (KVM) you see the boot process reporting a missing device, and you are dropped into the dracut console. Your system is in “Emergency mode”.

The warning:

dracut-initqueue[504]: Warning: dracut-initqueue timeout - starting timeout scripts
dracut-initqueue[504]: Warning: dracut-initqueue timeout - starting timeout scripts
dracut-initqueue[504]: Warning: dracut-initqueue timeout - starting timeout scripts
....
....
dracut-initqueue[504]: Warning: could not boot.
dracut-initqueue[504]: Warning: /dev/disk/by-id/md-uuid-2fdc509e:8dd05ed3:c2350cb4:ea5a620d does not exist
      Starting Dracut Emergency Shell...
Warning: /dev/disk/by-id/md-uuid-2fdc509e:8dd05ed3:c2350cb4:ea5a620d does not exist

Generating "/run/initramfs/rdsosreport.txt"


Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.


dracut:/#

SCREENSHOT 1) The boot process reports multiple dracut-initqueue timeout warnings, because a drive cannot be found.

Warning: dracut-initqueue timeout – starting timeout scripts


This article is similar to Centos 7 Server hangs up on boot after deleting a software raid (mdadm device).
Check if all of your software raid devices are included in:

  1. /etc/default/grub – the file used when the boot configuration is generated.
  2. /boot/grub2/grub.cfg – the active grub2 configuration file.
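To know which rd.md.uuid values your arrays actually have right now, you can derive them from `mdadm --detail --scan`. The sketch below works on a captured sample of that output (the UUIDs are the ones from this server); on a live system you would replace the sample with the real command output:

```shell
# Sample of `mdadm --detail --scan` output; on a real system capture it live:
#   scan_output=$(mdadm --detail --scan)
scan_output='ARRAY /dev/md0 metadata=1.2 name=srv:0 UUID=3b9feb09:75da7a5e:72932e0a:b847f393
ARRAY /dev/md5 metadata=1.2 name=srv:5 UUID=d950abd0:22d3443d:07148bae:344b362a'

# Convert each UUID into the rd.md.uuid= form expected in GRUB_CMDLINE_LINUX.
printf '%s\n' "$scan_output" | sed -n 's/.*UUID=\([0-9a-f:]*\).*/rd.md.uuid=\1/p'
```

Every array that must be assembled in the initramfs should appear with such an rd.md.uuid= parameter in both files above.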

What happened in our case

We included the configuration in /etc/default/grub, but never generated the new grub2 configuration before rebooting, so our server got into Emergency mode. You can exit it by simply typing “exit”, and the system continues loading as usual.

Here is a log of how we got into the problem and how we fixed it.

The problem

[root@srv ~]# grep rd.md.uuid /etc/default/grub 
GRUB_CMDLINE_LINUX="crashkernel=auto rd.md.uuid=3b9feb09:75da7a5e:72932e0a:b847f393 rd.md.uuid=b6e9ca56:66468d69:3c89646a:2154d33f rd.md.uuid=b1427aed:cdd0e6d0:81f80c97:ca76233d rd.md.uuid=d950abd0:22d3443d:07148bae:344b362a rhgb quiet"
[root@srv ~]# grep rd.md.uuid /boot/grub2/grub.cfg 
        linux16 /vmlinuz-3.10.0-957.21.2.el7.x86_64 root=UUID=362149c5-a2f1-4c49-b12f-00ce7e68d2b4 ro crashkernel=auto rd.md.uuid=3b9feb09:75da7a5e:72932e0a:b847f393 rd.md.uuid=b6e9ca56:66468d69:3c89646a:2154d33f rd.md.uuid=b1427aed:cdd0e6d0:81f80c97:ca76233d rd.md.uuid=2fdc509e:8dd05ed3:c2350cb4:ea5a620d rhgb quiet LANG=en_US.UTF-8
        linux16 /vmlinuz-3.10.0-957.10.1.el7.x86_64 root=UUID=362149c5-a2f1-4c49-b12f-00ce7e68d2b4 ro crashkernel=auto rd.md.uuid=3b9feb09:75da7a5e:72932e0a:b847f393 rd.md.uuid=b6e9ca56:66468d69:3c89646a:2154d33f rd.md.uuid=b1427aed:cdd0e6d0:81f80c97:ca76233d rd.md.uuid=2fdc509e:8dd05ed3:c2350cb4:ea5a620d rhgb quiet LANG=en_US.UTF-8
        linux16 /vmlinuz-3.10.0-957.5.1.el7.x86_64 root=UUID=362149c5-a2f1-4c49-b12f-00ce7e68d2b4 ro crashkernel=auto rd.md.uuid=3b9feb09:75da7a5e:72932e0a:b847f393 rd.md.uuid=b6e9ca56:66468d69:3c89646a:2154d33f rd.md.uuid=b1427aed:cdd0e6d0:81f80c97:ca76233d rd.md.uuid=2fdc509e:8dd05ed3:c2350cb4:ea5a620d rhgb quiet
        linux16 /vmlinuz-3.10.0-957.el7.x86_64 root=UUID=362149c5-a2f1-4c49-b12f-00ce7e68d2b4 ro crashkernel=auto rd.md.uuid=3b9feb09:75da7a5e:72932e0a:b847f393 rd.md.uuid=b6e9ca56:66468d69:3c89646a:2154d33f rd.md.uuid=b1427aed:cdd0e6d0:81f80c97:ca76233d rd.md.uuid=2fdc509e:8dd05ed3:c2350cb4:ea5a620d rhgb quiet
        linux16 /vmlinuz-0-rescue-bc0e9d9e9dcd4e48b3b6d0b7a8327917 root=UUID=362149c5-a2f1-4c49-b12f-00ce7e68d2b4 ro crashkernel=auto rd.md.uuid=3b9feb09:75da7a5e:72932e0a:b847f393 rd.md.uuid=b6e9ca56:66468d69:3c89646a:2154d33f rd.md.uuid=b1427aed:cdd0e6d0:81f80c97:ca76233d rd.md.uuid=2fdc509e:8dd05ed3:c2350cb4:ea5a620d rhgb quiet
[root@srv ~]# 

As you can see, in our default configuration the last software RAID device has the ID d950abd0:22d3443d:07148bae:344b362a, but the active grub2 configuration still contains the old ID 2fdc509e:8dd05ed3:c2350cb4:ea5a620d. This is why we got into Emergency mode – dracut cannot find this disk, because the array was removed.
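The mismatch can also be spotted automatically by extracting and comparing the rd.md.uuid lists from the two files. A minimal sketch, where extract_md_uuids is a hypothetical helper and the inline sample lines stand in for the real files:

```shell
# Hypothetical helper: print the rd.md.uuid= parameters found on stdin, sorted.
extract_md_uuids() {
  grep -o 'rd\.md\.uuid=[0-9a-f:]*' | sort -u
}

# Inline samples standing in for /etc/default/grub and /boot/grub2/grub.cfg.
default_cfg='GRUB_CMDLINE_LINUX="crashkernel=auto rd.md.uuid=d950abd0:22d3443d:07148bae:344b362a rhgb quiet"'
active_cfg='linux16 /vmlinuz-3.10.0-957.21.2.el7.x86_64 ro rd.md.uuid=2fdc509e:8dd05ed3:c2350cb4:ea5a620d rhgb quiet'

# On a real system you would compare the files directly:
#   diff <(extract_md_uuids </etc/default/grub) <(extract_md_uuids </boot/grub2/grub.cfg)
if diff <(printf '%s\n' "$default_cfg" | extract_md_uuids) \
        <(printf '%s\n' "$active_cfg"  | extract_md_uuids) >/dev/null; then
  echo "UUID lists match"
else
  echo "UUID lists differ - regenerate grub.cfg"
fi
```

If the lists differ, regenerating the grub2 configuration (as shown in the fix below) brings them back in sync.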

To fix the problem

Generate the grub2 configuration and verify that the default and the active configurations are the same.

[root@srv ~]# grub2-mkconfig -o /boot/grub2/grub.cfg 
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-957.21.2.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-957.21.2.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-957.10.1.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-957.10.1.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-957.5.1.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-957.5.1.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-957.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-957.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-bc0e9d9e9dcd4e48b3b6d0b7a8327917
Found initrd image: /boot/initramfs-0-rescue-bc0e9d9e9dcd4e48b3b6d0b7a8327917.img
done
[root@srv ~]# grep rd.md.uuid /etc/default/grub 
GRUB_CMDLINE_LINUX="crashkernel=auto rd.md.uuid=3b9feb09:75da7a5e:72932e0a:b847f393 rd.md.uuid=b6e9ca56:66468d69:3c89646a:2154d33f rd.md.uuid=b1427aed:cdd0e6d0:81f80c97:ca76233d rd.md.uuid=d950abd0:22d3443d:07148bae:344b362a rhgb quiet"
[root@srv ~]# grep rd.md.uuid /boot/grub2/grub.cfg 
        linux16 /vmlinuz-3.10.0-957.21.2.el7.x86_64 root=UUID=362149c5-a2f1-4c49-b12f-00ce7e68d2b4 ro crashkernel=auto rd.md.uuid=3b9feb09:75da7a5e:72932e0a:b847f393 rd.md.uuid=b6e9ca56:66468d69:3c89646a:2154d33f rd.md.uuid=b1427aed:cdd0e6d0:81f80c97:ca76233d rd.md.uuid=d950abd0:22d3443d:07148bae:344b362a rhgb quiet 
        linux16 /vmlinuz-3.10.0-957.10.1.el7.x86_64 root=UUID=362149c5-a2f1-4c49-b12f-00ce7e68d2b4 ro crashkernel=auto rd.md.uuid=3b9feb09:75da7a5e:72932e0a:b847f393 rd.md.uuid=b6e9ca56:66468d69:3c89646a:2154d33f rd.md.uuid=b1427aed:cdd0e6d0:81f80c97:ca76233d rd.md.uuid=d950abd0:22d3443d:07148bae:344b362a rhgb quiet 
        linux16 /vmlinuz-3.10.0-957.5.1.el7.x86_64 root=UUID=362149c5-a2f1-4c49-b12f-00ce7e68d2b4 ro crashkernel=auto rd.md.uuid=3b9feb09:75da7a5e:72932e0a:b847f393 rd.md.uuid=b6e9ca56:66468d69:3c89646a:2154d33f rd.md.uuid=b1427aed:cdd0e6d0:81f80c97:ca76233d rd.md.uuid=d950abd0:22d3443d:07148bae:344b362a rhgb quiet 
        linux16 /vmlinuz-3.10.0-957.el7.x86_64 root=UUID=362149c5-a2f1-4c49-b12f-00ce7e68d2b4 ro crashkernel=auto rd.md.uuid=3b9feb09:75da7a5e:72932e0a:b847f393 rd.md.uuid=b6e9ca56:66468d69:3c89646a:2154d33f rd.md.uuid=b1427aed:cdd0e6d0:81f80c97:ca76233d rd.md.uuid=d950abd0:22d3443d:07148bae:344b362a rhgb quiet 
        linux16 /vmlinuz-0-rescue-bc0e9d9e9dcd4e48b3b6d0b7a8327917 root=UUID=362149c5-a2f1-4c49-b12f-00ce7e68d2b4 ro crashkernel=auto rd.md.uuid=3b9feb09:75da7a5e:72932e0a:b847f393 rd.md.uuid=b6e9ca56:66468d69:3c89646a:2154d33f rd.md.uuid=b1427aed:cdd0e6d0:81f80c97:ca76233d rd.md.uuid=d950abd0:22d3443d:07148bae:344b362a rhgb quiet 
[root@srv ~]#

SCREENSHOT 2) When the boot fails, the boot process leaves you in the Emergency Shell.

It is simple to exit and continue booting – just type “exit” and hit Enter. You can view the systemd logs with journalctl – these logs are from the current boot process and reside in memory.

Starting Dracut Emergency Shell and exit

SCREENSHOT 3) Exit the Emergency Shell with the command “exit”, and the booting process continues if possible.

In our case, the unrecognized drive was our new storage, and it was not important for the boot process. If the misconfigured ID had been the one for the root partition, the boot process would not have been able to continue. The boot process is smart enough, and you can see the two lines after the “exit” command: “Not all disks have been found.” and “You might want to regenerate your initramfs.” – in our case it was not the initramfs, but the grub2 configuration!
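If the stale UUID really were baked into the initramfs (and not only into grub.cfg), the remedy would be to regenerate it with dracut for the running kernel. A hedged sketch that only builds the command string for the current kernel, so you can inspect it before running it as root:

```shell
# Build the dracut command for the currently running kernel.
# On the real server you would then execute it as root; shown as a string
# here so the target kernel version is explicit before you commit.
kver=$(uname -r)
cmd="dracut -f /boot/initramfs-${kver}.img ${kver}"
echo "$cmd"
```

In our situation this step was unnecessary – grub2-mkconfig alone fixed the boot.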

Exit Emergency Shell and continue normal boot

SCREENSHOT 4) The normal boot continues after the Emergency Shell if the unrecognized disk is not essential (unlike the root partition, for example).

Normal boot after Emergency Shell

Here you can check another issue with the same error – CentOS 8 dracut-initqueue timeout and could not boot – warning /dev/disk/by-id/md-uuid- does not exist – inactive raids
