Not all dedicated server providers offer a good web management panel or a remote management console (IPMI/KVM, iDRAC, iLO and so on)! If you have used multiple server providers, you have probably received a server installed with the Linux distribution of your choice, but with no option to configure the hard drive setup. And when there is no choice of hard drive and partition configuration, you usually end up with a dedicated server whose root filesystem occupies the whole first device, with no redundancy and no extra performance. So it is handy to be able to rework the server without a remote management module (because it is simply missing, for example).
Here are the steps to reinstall a CentOS 7 server with two (or more) hard drives, where the initial installation uses almost the whole first device for the root filesystem. At the end the server will have a different hard drive configuration: the root filesystem will live on a RAID1 device of a dedicated size, for redundancy (and read performance), and there will be a bigger separate device for storage purposes.
Here are the steps to reinstall a live CentOS 7 server using only ssh and change the disk configuration of the root filesystem to RAID1:
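For orientation, this is the partition layout we are aiming for on each of the first two disks (sizes match the parted commands used in the steps below):
- partition 1: ~3 MB, bios_grub (holds the GRUB boot code on a GPT disk booted via BIOS)
- partition 2: ~16 GB, swap – later mirrored as /dev/md0 (RAID1)
- partition 3: ~34 GB, root filesystem – mirrored as /dev/md1 (RAID1, ext4)
- partition 4: the rest, storage – later part of /dev/md2 (RAID5 together with /dev/sdc4 and /dev/sdd4)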
STEP 1) Show the current configuration
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       461G  1.5G  437G   1% /
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G  8.5M   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/sda1       485M  136M  324M  30% /boot
/dev/sda5       3.9G   17M  3.7G   1% /tmp
tmpfs           3.2G     0  3.2G   0% /run/user/0
[root@localhost ~]# ls -al /dev/sd?
brw-rw---- 1 root disk 8,  0 Apr  4 11:06 /dev/sda
brw-rw---- 1 root disk 8, 16 Apr  4 11:06 /dev/sdb
brw-rw---- 1 root disk 8, 32 Apr  4 11:06 /dev/sdc
brw-rw---- 1 root disk 8, 48 Apr  4 11:06 /dev/sdd
[root@localhost ~]#
The root filesystem uses /dev/sda2, /tmp uses /dev/sda5 and /boot is on /dev/sda1. So the whole installation uses only the /dev/sda device and the other three disks are spare. We are going to use the second hard drive, /dev/sdb, to create a software RAID1 for our root filesystem.
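Before changing anything, it may be worth confirming the layout of all the disks with a couple of read-only commands; something along these lines (the exact output will of course differ on your hardware):
[root@localhost ~]# lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
[root@localhost ~]# parted /dev/sda --script print
[root@localhost ~]# parted /dev/sdb --script print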
STEP 2) Prepare the second drive for first reboot
Because /dev/sda is the first device, the server will always boot from the grub installed on /dev/sda, so if we wanted to change its partition layout directly we would have to unmount all partitions of /dev/sda, which is not an easy job while the system runs from them. Since we have a second disk, we can change the partition layout of /dev/sdb instead, copy the root filesystem from the first disk to the second and instruct grub to boot the root filesystem from the second disk. On top of that, at first we will use the partition intended for swap on the second disk as a temporary root filesystem, and only afterwards create the RAID1 device. Here is the initial setup of the second disk /dev/sdb:
[root@localhost ~]# parted /dev/sdb --script print
Model: ATA SanDisk SD6SB2M5 (scsi)
Disk /dev/sdb: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start  End  Size  Type  File system  Flags

[root@localhost ~]# parted /dev/sdb --script mklabel gpt
[root@localhost ~]# parted /dev/sdb --script mkpart primary 0% 4M
[root@localhost ~]# parted /dev/sdb --script mkpart primary 4M 16G
[root@localhost ~]# parted /dev/sdb --script mkpart primary 16G 50G
[root@localhost ~]# parted /dev/sdb --script mkpart primary 50G 100%
[root@localhost ~]# parted /dev/sdb --script set 1 bios_grub on
[root@localhost ~]# parted /dev/sdb --script print
Model: ATA SanDisk SD6SB2M5 (scsi)
Disk /dev/sdb: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4194kB  3146kB               primary  bios_grub
 2      4194kB  16.0GB  16.0GB               primary
 3      16.0GB  50.0GB  34.0GB               primary
 4      50.0GB  512GB   462GB                primary

[root@localhost ~]# mkfs.ext4 /dev/sdb2
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
977280 inodes, 3905280 blocks
195264 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
120 block groups
32768 blocks per group, 32768 fragments per group
8144 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@localhost ~]#
STEP 3) Copy all the files from the current filesystem to the newly created one using rsync
[root@localhost ~]# mkdir /mnt/centos
[root@localhost ~]# mount /dev/sdb2 /mnt/centos/
[root@localhost ~]# rsync --delete --partial --verbose --progress --stats --recursive --times --perms --links --owner --group --hard-links --devices --exclude=/mnt --exclude=/proc --exclude=/sys / /mnt/centos/
...
...
Number of files: 81519
Number of files transferred: 60909
Total file size: 1483871491 bytes
Total transferred file size: 1460598076 bytes
Literal data: 1460598076 bytes
Matched data: 0 bytes
File list size: 1691669
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 1465598884
Total bytes received: 1734750

sent 1465598884 bytes  received 1734750 bytes  79315331.57 bytes/sec
total size is 1483871491  speedup is 1.01
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       461G  1.5G  437G   1% /
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G  8.5M   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/sda1       485M  136M  324M  30% /boot
/dev/sda5       3.9G   17M  3.7G   1% /tmp
tmpfs           3.2G     0  3.2G   0% /run/user/0
/dev/sdb2        15G  1.6G   13G  12% /mnt/centos
[root@localhost ~]#
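If anything on the live server keeps writing to the old root (logs, a database and so on), a second rsync pass right before the reboot in the next step can pick up the files changed since the first copy. It is simply the same command repeated; adding --dry-run first shows what would be transferred without touching anything:
[root@localhost ~]# rsync --dry-run --delete --partial --verbose --progress --stats --recursive --times --perms --links --owner --group --hard-links --devices --exclude=/mnt --exclude=/proc --exclude=/sys / /mnt/centos/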
STEP 4) Prepare the new root
- umount /boot and mount it under the new root, because we want to use it for the first reboot, but with a grub configuration regenerated for the new root filesystem
[root@localhost ~]# umount /boot
[root@localhost ~]# mount /dev/sda1 /mnt/centos/boot
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       461G  1.5G  437G   1% /
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G  8.6M   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/sda5       3.9G   17M  3.7G   1% /tmp
tmpfs           3.2G     0  3.2G   0% /run/user/0
/dev/sdb2        15G  1.6G   13G  12% /mnt/centos
/dev/sda1       485M  136M  324M  30% /mnt/centos/boot
- chroot into the new root and replace the old UUID of the root filesystem with the new one in /etc/fstab
[root@localhost ~]# mkdir /mnt/centos/proc
[root@localhost ~]# mkdir /mnt/centos/sys
[root@localhost ~]# mount -o bind /proc /mnt/centos/proc
[root@localhost ~]# mount -o bind /dev /mnt/centos/dev
[root@localhost ~]# mount -o bind /sys /mnt/centos/sys
[root@localhost ~]# chroot /mnt/centos/
[root@localhost /]# blkid |grep sdb2
/dev/sdb2: UUID="b829d6f1-ca0e-4939-8764-c329aee0a5b2" TYPE="ext4" PARTLABEL="primary" PARTUUID="b150c7cc-0557-4de9-bbc9-05ae54b9cec5"
[root@localhost /]# blkid |grep sda2
/dev/sda2: UUID="b43edab7-8b2f-4047-9ca2-0f3e3ea24e0e" TYPE="ext4"
[root@localhost /]# sed -i "s/b43edab7-8b2f-4047-9ca2-0f3e3ea24e0e/b829d6f1-ca0e-4939-8764-c329aee0a5b2/g" /etc/fstab
- comment out the /tmp entry in /etc/fstab (and any other mount that uses /dev/sda; remember, after the reboot we will change the partition layout of /dev/sda, so no partition from it must stay in use – see the quick check after the listing below)
[root@localhost /]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Apr  4 11:00:01 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=b829d6f1-ca0e-4939-8764-c329aee0a5b2 /       ext4    defaults            1 1
UUID=9b98bd49-34bd-43a3-89b9-32c36df722b2 /boot   ext2    defaults            1 2
UUID=7f44f0b8-cbbe-4e70-a763-112675cf9a2c /tmp    ext4    noexec,nosuid,nodev 1 2
UUID=20c3afea-87ae-4716-8a65-323bd9e6eae6 swap    swap    defaults            0 0
[root@localhost /]# sed -i "s/UUID=7f44f0b8-cbbe-4e70-a763-112675cf9a2c/#UUID=7f44f0b8-cbbe-4e70-a763-112675cf9a2c/g" /etc/fstab
[root@localhost /]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Apr  4 11:00:01 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=b829d6f1-ca0e-4939-8764-c329aee0a5b2 /       ext4    defaults            1 1
UUID=9b98bd49-34bd-43a3-89b9-32c36df722b2 /boot   ext2    defaults            1 2
#UUID=7f44f0b8-cbbe-4e70-a763-112675cf9a2c /tmp    ext4    noexec,nosuid,nodev 1 2
UUID=20c3afea-87ae-4716-8a65-323bd9e6eae6 swap    swap    defaults            0 0
[root@localhost /]#
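To double-check that nothing else in /etc/fstab still points to a partition of /dev/sda, you can compare the UUIDs in fstab against what blkid reports for that disk; a quick check along these lines (still inside the chroot):
[root@localhost /]# blkid /dev/sda*          # the UUIDs that belong to /dev/sda and its partitions
[root@localhost /]# grep -v '^#' /etc/fstab  # any uncommented line using one of those UUIDs must be commented out too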
- Generate the new grub2 configuration (because you are in the chrooted new root, where /etc/fstab was just changed, grub will use the new UUID for the root filesystem)
[root@localhost /]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-693.21.1.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-693.21.1.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-693.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-693.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-3003b47aedb040f6baaf6fce8c6b8386
Found initrd image: /boot/initramfs-0-rescue-3003b47aedb040f6baaf6fce8c6b8386.img
done
[root@localhost /]# exit
exit
[root@localhost ~]# umount /mnt/centos/boot
[root@localhost ~]# umount /mnt/centos/proc
[root@localhost ~]# umount /mnt/centos/sys
[root@localhost ~]# umount /mnt/centos/dev
[root@localhost ~]# umount /mnt/centos
[root@localhost ~]# reboot
Connection to srv closed by remote host.
Connection to srv closed.
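Before issuing that reboot (while still inside the chroot, before the exit above), it is a good idea to confirm that the freshly generated grub.cfg really points at the new root filesystem; a quick check could be a grep for the UUID of /dev/sdb2 reported by blkid earlier:
[root@localhost /]# grep b829d6f1-ca0e-4939-8764-c329aee0a5b2 /boot/grub2/grub.cfg
If the grep returns nothing, do not reboot – recheck /etc/fstab and rerun grub2-mkconfig.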
STEP 5) Prepare the two disks /dev/sda and /dev/sdb for RAID1 device
Unmount everything that still uses /dev/sda (swap, /boot). Then create a new partition layout on /dev/sda and tune the layout of /dev/sdb (the raid flag must be set to “on” for the partitions that will join an array).
[root@srv0 ~]# ssh srv
root@srv's password: 
Last login: Wed Apr  4 11:28:57 2018 from 192.168.0.110
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb2        15G  1.6G   13G  12% /
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G  8.5M   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/sda1       485M  136M  324M  30% /boot
tmpfs           3.2G     0  3.2G   0% /run/user/0
[root@localhost ~]# umount /boot
[root@localhost ~]# swapoff -a
[root@localhost ~]# ls -al /boot/
total 128020
dr-xr-xr-x  6 root root     4096 Apr  4 11:07 .
dr-xr-xr-x 17 root root     4096 Apr  4 12:25 ..
-rw-r--r--  1 root root   140971 Mar  7 19:16 config-3.10.0-693.21.1.el7.x86_64
-rw-r--r--  1 root root   140894 Aug 22  2017 config-3.10.0-693.el7.x86_64
drwxr-xr-x  3 root root     4096 Apr  4 11:00 efi
drwxr-xr-x  2 root root     4096 Apr  4 11:00 grub
drwx------  5 root root     4096 Apr  4 11:04 grub2
-rw-------  1 root root 53705597 Apr  4 11:02 initramfs-0-rescue-3003b47aedb040f6baaf6fce8c6b8386.img
-rw-------  1 root root 17881515 Apr  4 11:04 initramfs-3.10.0-693.21.1.el7.x86_64.img
-rw-------  1 root root 15956344 Apr  4 11:07 initramfs-3.10.0-693.21.1.el7.x86_64kdump.img
-rw-------  1 root root 17871068 Apr  4 11:04 initramfs-3.10.0-693.el7.x86_64.img
-rw-r--r--  1 root root   610968 Apr  4 11:01 initrd-plymouth.img
drwx------  2 root root     4096 Apr  4 11:00 lost+found
-rw-r--r--  1 root root   293361 Mar  7 19:18 symvers-3.10.0-693.21.1.el7.x86_64.gz
-rw-r--r--  1 root root   293027 Aug 22  2017 symvers-3.10.0-693.el7.x86_64.gz
-rw-------  1 root root  3237433 Mar  7 19:16 System.map-3.10.0-693.21.1.el7.x86_64
-rw-------  1 root root  3228420 Aug 22  2017 System.map-3.10.0-693.el7.x86_64
-rwxr-xr-x  1 root root  5877760 Apr  4 11:02 vmlinuz-0-rescue-3003b47aedb040f6baaf6fce8c6b8386
-rwxr-xr-x  1 root root  5917504 Mar  7 19:16 vmlinuz-3.10.0-693.21.1.el7.x86_64
-rw-r--r--  1 root root      171 Mar  7 19:16 .vmlinuz-3.10.0-693.21.1.el7.x86_64.hmac
-rwxr-xr-x  1 root root  5877760 Aug 22  2017 vmlinuz-3.10.0-693.el7.x86_64
-rw-r--r--  1 root root      166 Aug 22  2017 .vmlinuz-3.10.0-693.el7.x86_64.hmac
[root@localhost ~]# parted /dev/sda --script mklabel gpt
[root@localhost ~]# parted /dev/sda --script mkpart primary 0% 4M
[root@localhost ~]# parted /dev/sda --script mkpart primary 4M 16G
[root@localhost ~]# parted /dev/sda --script mkpart primary 16G 50G
[root@localhost ~]# parted /dev/sda --script mkpart primary 50G 100%
[root@localhost ~]# parted /dev/sda --script set 1 bios_grub on
[root@localhost ~]# parted /dev/sda --script set 2 raid on
[root@localhost ~]# parted /dev/sda --script set 3 raid on
[root@localhost ~]# parted /dev/sda --script set 4 raid on
[root@localhost ~]# parted /dev/sda --script print
Model: ATA SanDisk SD6SB2M5 (scsi)
Disk /dev/sda: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4194kB  3146kB  ext2         primary  bios_grub
 2      4194kB  16.0GB  16.0GB               primary  raid
 3      16.0GB  50.0GB  34.0GB               primary  raid
 4      50.0GB  512GB   462GB                primary  raid

[root@localhost ~]# parted /dev/sdb --script print
Model: ATA SanDisk SD6SB2M5 (scsi)
Disk /dev/sdb: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4194kB  3146kB               primary  bios_grub
 2      4194kB  16.0GB  16.0GB  ext4         primary
 3      16.0GB  50.0GB  34.0GB               primary
 4      50.0GB  512GB   462GB                primary

[root@localhost ~]# parted /dev/sdb --script set 3 raid on
[root@localhost ~]# parted /dev/sdb --script print
Model: ATA SanDisk SD6SB2M5 (scsi)
Disk /dev/sdb: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4194kB  3146kB               primary  bios_grub
 2      4194kB  16.0GB  16.0GB  ext4         primary
 3      16.0GB  50.0GB  34.0GB               primary  raid
 4      50.0GB  512GB   462GB                primary
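Note: if a /dev/sda partition was still busy when the new label was written, the kernel may keep the old partition table in memory. In that case partprobe (from the parted package) can ask it to re-read the disk; this is only needed if the new sda partitions do not show up:
[root@localhost ~]# partprobe /dev/sda
[root@localhost ~]# grep sda /proc/partitions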
STEP 6) Create the RAID1 device and format the filesystem, then copy all files from the root file system to the RAID1 device /dev/md1 (mounted again in /mnt/centos)
[root@localhost ~]# mdadm --create --verbose --metadata=1.2 /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm: size set to 33186816K
mdadm: array /dev/md1 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] 
md1 : active raid1 sdb3[1] sda3[0]
      33186816 blocks super 1.2 [2/2] [UU]
      [==>..................]  resync = 10.8% (3602048/33186816) finish=2.4min speed=200113K/sec
      
unused devices: <none>
[root@localhost ~]# mkfs.ext4 /dev/md1
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
977280 inodes, 3905280 blocks
195264 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
120 block groups
32768 blocks per group, 32768 fragments per group
8144 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@localhost ~]# mount /dev/md1 /mnt/centos/
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb2        15G  1.6G   13G  12% /
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G  8.5M   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
tmpfs           3.2G     0  3.2G   0% /run/user/0
/dev/md1         32G   49M   30G   1% /mnt/centos
[root@localhost ~]# rsync --delete --partial --verbose --progress --stats --recursive --times --perms --links --owner --group --hard-links --devices --exclude=/mnt --exclude=/proc --exclude=/sys / /mnt/centos/
sending incremental file list
./
.autorelabel
              0 100%    0.00kB/s    0:00:00 (xfer#1, to-check=1022/1024)
.readahead
         237849 100%   39.12MB/s    0:00:00 (xfer#2, to-check=1021/1024)
bin -> usr/bin
lib -> usr/lib
lib64 -> usr/lib64
sbin -> usr/sbin
....
....
Number of files: 81532
Number of files transferred: 60917
Total file size: 1484146321 bytes
Total transferred file size: 1460872926 bytes
Literal data: 1460872926 bytes
Matched data: 0 bytes
File list size: 1693018
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 1465875457
Total bytes received: 1734920

sent 1465875457 bytes  received 1734920 bytes  83863450.11 bytes/sec
total size is 1484146321  speedup is 1.01
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb2        15G  1.6G   13G  12% /
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G  8.5M   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
tmpfs           3.2G     0  3.2G   0% /run/user/0
/dev/md1         32G  1.6G   28G   6% /mnt/centos
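The RAID1 resync runs in the background and the array is already usable while it runs, but if you prefer to continue only after the mirror is fully in sync, mdadm can wait for it:
[root@localhost ~]# mdadm --wait /dev/md1    # blocks until the resync shown in /proc/mdstat has finished
[root@localhost ~]# cat /proc/mdstat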
STEP 7) Prepare the new root in the RAID1 device
chroot into the new place (/mnt/centos), change the UUID of the root filesystem in /etc/fstab and comment out the “/boot” entry – from now on /boot will live on the root filesystem itself (we can do this because we have a separate bios_grub partition as the first partition, so /boot may reside on a RAID device). Also add the RAID array UUID (rd.md.uuid) to the kernel command line in /etc/default/grub so that a proper grub2 configuration is generated for the next boot.
[root@localhost ~]# mkdir -p /mnt/centos/proc
[root@localhost ~]# mkdir -p /mnt/centos/sys
[root@localhost ~]# mount -o bind /proc /mnt/centos/proc
[root@localhost ~]# mount -o bind /dev /mnt/centos/dev
[root@localhost ~]# mount -o bind /sys /mnt/centos/sys
[root@localhost ~]# chroot /mnt/centos/
[root@localhost /]# blkid |grep sdb2
/dev/sdb2: UUID="b829d6f1-ca0e-4939-8764-c329aee0a5b2" TYPE="ext4" PARTLABEL="primary" PARTUUID="b150c7cc-0557-4de9-bbc9-05ae54b9cec5"
[root@localhost /]# blkid |grep md1
/dev/md1: UUID="38407879-7399-492c-bad6-d8a3ef0297d4" TYPE="ext4"
[root@localhost /]# sed -i "s/b829d6f1-ca0e-4939-8764-c329aee0a5b2/38407879-7399-492c-bad6-d8a3ef0297d4/g" /etc/fstab
[root@localhost /]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Apr  4 11:00:01 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=38407879-7399-492c-bad6-d8a3ef0297d4 /       ext4    defaults            1 1
UUID=9b98bd49-34bd-43a3-89b9-32c36df722b2 /boot   ext2    defaults            1 2
#UUID=7f44f0b8-cbbe-4e70-a763-112675cf9a2c /tmp    ext4    noexec,nosuid,nodev 1 2
UUID=20c3afea-87ae-4716-8a65-323bd9e6eae6 swap    swap    defaults            0 0
[root@localhost /]# sed -i "s/UUID=9b98bd49-34bd-43a3-89b9-32c36df722b2/#UUID=9b98bd49-34bd-43a3-89b9-32c36df722b2/g" /etc/fstab
[root@localhost /]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Apr  4 11:00:01 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=38407879-7399-492c-bad6-d8a3ef0297d4 /       ext4    defaults            1 1
#UUID=9b98bd49-34bd-43a3-89b9-32c36df722b2 /boot   ext2    defaults            1 2
#UUID=7f44f0b8-cbbe-4e70-a763-112675cf9a2c /tmp    ext4    noexec,nosuid,nodev 1 2
UUID=20c3afea-87ae-4716-8a65-323bd9e6eae6 swap    swap    defaults            0 0
[root@localhost /]# mdadm -E /dev/sda3
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : e59b6269:7af24168:193c51d0:65b33fd9
           Name : localhost.localdomain:1  (local to host localhost.localdomain)
  Creation Time : Wed Apr  4 12:38:58 2018
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 66373632 (31.65 GiB 33.98 GB)
     Array Size : 33186816 (31.65 GiB 33.98 GB)
    Data Offset : 32768 sectors
   Super Offset : 8 sectors
   Unused Space : before=32616 sectors, after=0 sectors
          State : active
    Device UUID : 8ebd8e2d:aa01d194:55a51280:e4192e08

    Update Time : Wed Apr  4 12:44:00 2018
  Bad Block Log : 512 entries available at offset 136 sectors
       Checksum : 3e7cfbb6 - correct
         Events : 18

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
[root@localhost /]# nano /etc/default/grub
[root@localhost /]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --speed=115200"
GRUB_CMDLINE_LINUX="rd.md.uuid=e59b6269:7af24168:193c51d0:65b33fd9 crashkernel=auto console=ttyS0,115200"
GRUB_DISABLE_RECOVERY="true"
[root@localhost /]# mdadm --detail --scan >> /etc/mdadm.conf
[root@localhost /]# cat /etc/mdadm.conf
ARRAY /dev/md1 metadata=1.2 name=localhost.localdomain:1 UUID=e59b6269:7af24168:193c51d0:65b33fd9
[root@localhost /]# dracut --regenerate-all --force
[root@localhost /]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-693.21.1.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-693.21.1.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-693.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-693.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-3003b47aedb040f6baaf6fce8c6b8386
Found initrd image: /boot/initramfs-0-rescue-3003b47aedb040f6baaf6fce8c6b8386.img
done
[root@localhost /]# grub2-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.
[root@localhost /]# grub2-install /dev/sdb
Installing for i386-pc platform.
Installation finished. No error reported.
[root@localhost /]# exit
exit
[root@localhost ~]# umount /mnt/centos/proc
[root@localhost ~]# umount /mnt/centos/sys
[root@localhost ~]# umount /mnt/centos/dev
[root@localhost ~]# umount /mnt/centos
[root@localhost ~]# reboot
PolicyKit daemon disconnected from the bus.
We are no longer a registered authentication agent.
Connection to srv closed by remote host.
Connection to srv closed.
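A sanity check before this reboot is to make sure the regenerated initramfs really contains the MD/RAID bits so that the kernel can assemble /dev/md1 at boot. lsinitrd from the dracut package can show that (run inside the chroot, before the exit above); the exact module list may differ per kernel:
[root@localhost /]# lsinitrd /boot/initramfs-3.10.0-693.21.1.el7.x86_64.img | grep -i -e mdraid -e raid1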
STEP 8) Create one more RAID1 device for the swap
Create a new RAID1 array for the swap partition and set the raid flag on the remaining /dev/sdb partitions with parted. Add the new RAID1 device to /etc/default/grub and regenerate the grub2 configuration file.
[root@srv0 ~]# ssh srv
root@srv's password: 
Last login: Wed Apr  4 11:38:19 2018 from 192.168.0.110
[root@localhost ~]# parted /dev/sda --script print
Model: ATA SanDisk SD6SB2M5 (scsi)
Disk /dev/sda: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4194kB  3146kB               primary  bios_grub
 2      4194kB  16.0GB  16.0GB               primary  raid
 3      16.0GB  50.0GB  34.0GB               primary  raid
 4      50.0GB  512GB   462GB                primary  raid

[root@localhost ~]# parted /dev/sdb --script print
Model: ATA SanDisk SD6SB2M5 (scsi)
Disk /dev/sdb: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4194kB  3146kB               primary  bios_grub
 2      4194kB  16.0GB  16.0GB  ext4         primary
 3      16.0GB  50.0GB  34.0GB               primary  raid
 4      50.0GB  512GB   462GB                primary

[root@localhost ~]# parted /dev/sdb --script set 2 raid on
[root@localhost ~]# parted /dev/sdb --script set 4 raid on
[root@localhost ~]# mdadm --create --verbose --metadata=1.2 /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm: /dev/sdb2 appears to contain an ext2fs file system
       size=15621120K  mtime=Wed Apr  4 12:24:37 2018
mdadm: size set to 15612928K
Continue creating array? yes
mdadm: array /dev/md0 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdb2[1] sda2[0]
      15612928 blocks super 1.2 [2/2] [UU]
      [====>................]  resync = 21.5% (3371072/15612928) finish=0.9min speed=210692K/sec
      
md1 : active raid1 sdb3[1] sda3[0]
      33186816 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdb2[1] sda2[0]
      15612928 blocks super 1.2 [2/2] [UU]
      
md1 : active raid1 sdb3[1] sda3[0]
      33186816 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
[root@localhost ~]# mkswap /dev/md0
Setting up swapspace version 1, size = 15612924 KiB
no label, UUID=0916f8c5-079d-4780-af38-89411fa7ec24
[root@localhost ~]# cat /etc/fstab |grep swap
UUID=20c3afea-87ae-4716-8a65-323bd9e6eae6 swap    swap    defaults            0 0
[root@localhost ~]# sed -i "s/20c3afea-87ae-4716-8a65-323bd9e6eae6/0916f8c5-079d-4780-af38-89411fa7ec24/g" /etc/fstab
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdb2[1] sda2[0]
      15612928 blocks super 1.2 [2/2] [UU]
      
md1 : active raid1 sdb3[1] sda3[0]
      33186816 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Apr  4 11:00:01 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=38407879-7399-492c-bad6-d8a3ef0297d4 /       ext4    defaults            1 1
#UUID=9b98bd49-34bd-43a3-89b9-32c36df722b2 /boot   ext2    defaults            1 2
#UUID=7f44f0b8-cbbe-4e70-a763-112675cf9a2c /tmp    ext4    noexec,nosuid,nodev 1 2
UUID=0916f8c5-079d-4780-af38-89411fa7ec24 swap    swap    defaults            0 0
[root@localhost ~]# mdadm -E /dev/sda2
/dev/sda2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 2e122130:2eefd9ec:5ad5b846:6bd10d6b
           Name : localhost.localdomain:0  (local to host localhost.localdomain)
  Creation Time : Wed Apr  4 13:11:03 2018
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 31225856 (14.89 GiB 15.99 GB)
     Array Size : 15612928 (14.89 GiB 15.99 GB)
    Data Offset : 16384 sectors
   Super Offset : 8 sectors
   Unused Space : before=16232 sectors, after=0 sectors
          State : clean
    Device UUID : 7ef8d502:96208fd4:bbed302a:37063c83

    Update Time : Wed Apr  4 13:12:42 2018
  Bad Block Log : 512 entries available at offset 136 sectors
       Checksum : c808e2cc - correct
         Events : 17

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
[root@localhost ~]# nano /etc/default/grub
[root@localhost ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --speed=115200"
GRUB_CMDLINE_LINUX="rd.md.uuid=e59b6269:7af24168:193c51d0:65b33fd9 rd.md.uuid=2e122130:2eefd9ec:5ad5b846:6bd10d6b crashkernel=auto console=ttyS0,115200"
GRUB_DISABLE_RECOVERY="true"
[root@localhost ~]# mdadm --detail --scan > /etc/mdadm.conf
[root@localhost ~]# cat /etc/mdadm.conf
ARRAY /dev/md1 metadata=1.2 name=localhost.localdomain:1 UUID=e59b6269:7af24168:193c51d0:65b33fd9
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=2e122130:2eefd9ec:5ad5b846:6bd10d6b
[root@localhost ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-693.21.1.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-693.21.1.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-693.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-693.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-3003b47aedb040f6baaf6fce8c6b8386
Found initrd image: /boot/initramfs-0-rescue-3003b47aedb040f6baaf6fce8c6b8386.img
done
[root@localhost ~]# reboot
Connection to srv closed by remote host.
Connection to srv closed.
[root@srv0 ~]#
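After this reboot (see the output below), two additional quick checks are whether the kernel command line really carries both rd.md.uuid values and whether the new swap array is in use:
[root@localhost ~]# cat /proc/cmdline   # should contain both rd.md.uuid=... values from /etc/default/grub
[root@localhost ~]# swapon -s           # should list /dev/md0 as the active swap device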
So we changed our root filesystem from a single plain partition to a RAID1 device, gaining redundancy and better read performance!
[root@srv0 ~]# ssh srv
root@srv's password: 
Last login: Wed Apr  4 13:35:55 2018 from 192.168.0.110
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md1         32G  1.6G   28G   6% /
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G  8.5M   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
tmpfs           3.2G     0  3.2G   0% /run/user/0
[root@localhost ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:            31G        273M         30G        8.5M        199M         30G
Swap:           14G          0B         14G
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sda2[0] sdb2[1]
      15612928 blocks super 1.2 [2/2] [UU]
      
md1 : active raid1 sda3[0] sdb3[1]
      33186816 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
[root@localhost ~]#
STEP 9) Create the storage device
The storage device is a RAID5 array over the fourth partition of all four hard disks available in this machine (with only two disks the procedure is the same, just with RAID1 instead of RAID5). The idea is to keep the storage separate from the root filesystem, which is why there are two separate RAID devices.
[root@localhost ~]# parted /dev/sdc --script mklabel gpt
[root@localhost ~]# parted /dev/sdc --script mkpart primary 0% 4M
[root@localhost ~]# parted /dev/sdc --script mkpart primary 4M 16G
[root@localhost ~]# parted /dev/sdc --script mkpart primary 16G 50G
[root@localhost ~]# parted /dev/sdc --script mkpart primary 50G 100%
[root@localhost ~]# parted /dev/sdc --script set 1 bios_grub on
[root@localhost ~]# parted /dev/sdc --script set 2 raid on
[root@localhost ~]# parted /dev/sdc --script set 3 raid on
[root@localhost ~]# parted /dev/sdc --script set 4 raid on
[root@localhost ~]# parted /dev/sdd --script mklabel gpt
[root@localhost ~]# parted /dev/sdd --script mkpart primary 0% 4M
[root@localhost ~]# parted /dev/sdd --script mkpart primary 4M 16G
[root@localhost ~]# parted /dev/sdd --script mkpart primary 16G 50G
[root@localhost ~]# parted /dev/sdd --script mkpart primary 50G 100%
[root@localhost ~]# parted /dev/sdd --script set 1 bios_grub on
[root@localhost ~]# parted /dev/sdd --script set 2 raid on
[root@localhost ~]# parted /dev/sdd --script set 3 raid on
[root@localhost ~]# parted /dev/sdd --script set 4 raid on
[root@localhost ~]# mdadm --create --verbose /dev/md2 --level=5 --raid-devices=4 --chunk=1024 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: size set to 451147776K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] 
md2 : active raid5 sdd4[4] sdc4[2] sdb4[1] sda4[0]
      1353443328 blocks super 1.2 level 5, 1024k chunk, algorithm 2 [4/3] [UUU_]
      [>....................]  recovery =  0.9% (4316984/451147776) finish=36.2min speed=205570K/sec
      bitmap: 0/4 pages [0KB], 65536KB chunk

md0 : active raid1 sda2[0] sdb2[1]
      15612928 blocks super 1.2 [2/2] [UU]
      
md1 : active raid1 sda3[0] sdb3[1]
      33186816 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
[root@localhost ~]# mkfs.ext4 /dev/md2
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=256 blocks, Stripe width=768 blocks
84590592 inodes, 338360832 blocks
16918041 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2487222272
10326 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
	102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@localhost ~]# blkid | grep md2
/dev/md2: UUID="0ba39ec9-a1fc-4593-a704-6171cb2a3403" TYPE="ext4"
[root@localhost ~]# nano /etc/fstab
[root@localhost ~]# mkdir -p /mnt/storage
[root@localhost ~]# mount /mnt/storage
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md1         32G  1.6G   28G   6% /
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G  8.6M   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
tmpfs           3.2G     0  3.2G   0% /run/user/0
/dev/md2        1.3T   77M  1.2T   1% /mnt/storage
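The fstab line added in the nano step above is not shown in the output; it would look roughly like this, using the UUID reported by blkid for /dev/md2 (which is what allows the plain "mount /mnt/storage" to work):
UUID=0ba39ec9-a1fc-4593-a704-6171cb2a3403 /mnt/storage            ext4    defaults        0 2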