Show deleted partitions in use with blockdev --report

The Linux blockdev command can show sector and size information for deleted partitions that are still in use (i.e. mounted).
When partitions in use are deleted, their block device nodes under /dev are preserved until the partitions are released and the kernel reloads the new partition table.
So before rebooting or releasing the deleted partitions, blockdev may be used to report useful information for future recovery:

Delete the partitions with parted by simply overwriting the partition table with an empty one, for example:

[root@srv ~]# parted /dev/sda
GNU Parted 3.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: ATA Crucial_CT500MX2 (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type      File system  Flags
 1      1049kB  34.3GB  34.3GB  primary                boot, raid
 2      34.4GB  34.9GB  537MB   primary                raid
 3      34.9GB  67.1GB  32.2GB  primary                raid
 4      67.1GB  500GB   433GB   extended               lba
 5      67.1GB  500GB   433GB   logical                raid
(parted) mklabel msdos
Warning: The existing disk label on /dev/sda will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
Error: Partition(s) 1, 2, 3, 4, 5 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because
it/they are in use.  As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.
Ignore/Cancel? Cancel
(parted) p
Model: ATA Crucial_CT500MX2 (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags: 

Number  Start  End  Size  Type  File system  Flags

(parted) q

First, print the partition table, then delete all partitions by writing a new, empty partition table!

Use blockdev to show information about the deleted partitions:

[root@srv ~]# blockdev --report /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4 /dev/sda5
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   512   512  4096       2048     34359738368   /dev/sda1
rw   512   512  4096   67110912       536870912   /dev/sda2
rw   512   512  4096   68159488     32212254720   /dev/sda3
blockdev: cannot open /dev/sda4: No such file or directory.
rw   256   512  4096  131076096    432862658560   /dev/sda5

The partitions that are not in use have been removed from the kernel structures, so no information is available with blockdev. Their block device nodes under /dev/ are removed, too.

Information such as the sizes of the partitions and their start sectors may be used to recover the partitions manually with fdisk, sfdisk, sgdisk, or parted, or even with testdisk (see the official testdisk site). In fact, testdisk is the recommended way.
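
For example, the first partition above could be recreated from the blockdev report. This is only a minimal sketch, assuming an msdos label and an old sfdisk that accepts the Id= dump syntax shown further below; blockdev prints the start in 512-byte sectors but the size in bytes, so convert the size to sectors first (34359738368 / 512 = 67108864):

# every partition to recover must be listed in the same input, one line each
echo '/dev/sda1 : start=2048, size=67108864, Id=fd' | sfdisk --force /dev/sda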

Copying partition table from one disk to another with older sfdisk under CentOS 7

Older versions of sfdisk may still be used for msdos partition tables.
To copy the partition table from one disk to another using sfdisk, a temporary file should be used to store the partition table data.

Here is an example of how to copy the msdos partition table from disk sda to disk sdb with two simple commands:

  1. Dump the source (sda) partition table to a temporary file.
  2. Redirect the standard input of the sfdisk utility to read from the above temporary file.

Copying a partition table is really useful when recovering from a drive failure in a Linux software RAID, because sometimes it is difficult to recreate the exact layout of the source mirror disk by hand!

mdadm --add /dev/md1 /dev/sdb2
mdadm: /dev/sdb2 not large enough to join array

Errors such as the one above are easily resolved with just two commands. Newer versions of the disk tools align partitions differently, which may leave a new partition slightly too small to join a software RAID array.
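
A quick way to confirm such a size mismatch is to compare the two partitions directly; a sketch using blockdev:

# print both sizes in bytes; the replacement partition must be at least as large
blockdev --getsize64 /dev/sda2 /dev/sdb2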

STEP 1) Dump the source partition table.

The source partition table is from /dev/sda:

[root@srv ~]# sfdisk -d /dev/sda > part_table_sda
sfdisk: Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.

There is a warning about an unaligned partition, which may cause problems when recreating the layout from scratch, so copying the partition table is the best option in such cases.

Here is what the temporary file part_table_sda, holding the partition table information, contains:

[root@srv ~]# cat part_table_sda 
# partition table of /dev/sda
unit: sectors

/dev/sda1 : start=     2048, size= 67045376, Id=fd, bootable
/dev/sda2 : start= 67110912, size=  1048576, Id=fd
/dev/sda3 : start= 68159488, size= 62883840, Id=fd
/dev/sda4 : start=131043328, size=845729792, Id= f
/dev/sda5 : start=131076096, size=845434880, Id=fd
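
The second command, the restore to /dev/sdb, follows in the full post. As a minimal sketch (the device names inside the dump are informational; sfdisk partitions the device given on the command line), it would look like:

[root@srv ~]# sfdisk /dev/sdb < part_table_sda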

Keep on reading!

Remove disk (all partitions) from software RAID1 with mdadm and change layout of the disk

The following article shows how to remove healthy partitions from software RAID1 devices in order to change the layout of the disk, and then add them back to the array.
mdadm is the tool for manipulating software RAID devices under Linux and it is part of all Linux distributions (some do not install it by default, so it may need to be installed).

Software RAID layout

[root@srv ~]# cat /proc/mdstat 
Personalities : [raid1] 
md125 : active raid1 sda4[1] sdb3[0]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 sdb2[0] sda3[1]
      32867328 blocks super 1.2 [2/2] [UU]
      
md127 : active raid1 sda2[1] sdb1[0]
      52427776 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

STEP 1) Make the partitions faulty.

The partitions cannot be removed from the arrays unless they are marked faulty first.

[root@srv ~]# mdadm --fail /dev/md125 /dev/sdb3
mdadm: set /dev/sdb3 faulty in /dev/md125
[root@srv ~]# mdadm --fail /dev/md126 /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md126
[root@srv ~]# mdadm --fail /dev/md127 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md127
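
Once marked faulty, the partitions can be removed from the arrays; a minimal sketch of the next step (the full walkthrough continues below):

[root@srv ~]# mdadm --remove /dev/md125 /dev/sdb3
[root@srv ~]# mdadm --remove /dev/md126 /dev/sdb2
[root@srv ~]# mdadm --remove /dev/md127 /dev/sdb1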

Keep on reading!

SSD cache device to a software RAID5 using LVM2

Continuing our series on LVM2 plus a cache device:

  1. Single hard disk with an SSD cache device – SSD cache device to a hard disk drive using LVM, which uses an SSD drive as a cache device for a single hard drive.
  2. Mirror LVM2 device with an SSD cache device – SSD cache device to a software raid using LVM2 – a software mirror across two devices with an additional SSD cache device over the mirror.

And now we show how to build a software RAID5 with an NVMe SSD cache device using LVM2.

The goal:
Caching a RAID5 consisting of three 8TB hard drives with a single 1TB NVMe SSD drive. Only reads are cached, i.e. write-through is enabled.
Our setup (a sketch of the cache creation follows the list):

  • 1 NVMe SSD disk, Samsung 1TB. It will be used as a writethrough cache device (you may use writeback, too, if you do not care about the data when the cache device fails)!
  • 3 hard disk drives, 8TB each, grouped in RAID5 for redundancy.
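
A minimal sketch of the cache creation, assuming the RAID5 array is /dev/md0 and the SSD is /dev/nvme0n1 (all device, volume group, and logical volume names here are illustrative, not the exact ones from the full article):

# put both the slow array and the fast SSD in one volume group
pvcreate /dev/md0 /dev/nvme0n1
vgcreate VG_storage /dev/md0 /dev/nvme0n1
# the big, slow logical volume lives entirely on the RAID5 array
lvcreate -n lv_slow -l 100%PVS VG_storage /dev/md0
# the cache data and metadata volumes live on the SSD
lvcreate -n lv_cache -L 900G VG_storage /dev/nvme0n1
lvcreate -n lv_cache_meta -L 1G VG_storage /dev/nvme0n1
# combine them into a write-through cache pool and attach it to the slow volume
lvconvert --type cache-pool --cachemode writethrough --poolmetadata VG_storage/lv_cache_meta VG_storage/lv_cache
lvconvert --type cache --cachepool VG_storage/lv_cache VG_storage/lv_slow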

Keep on reading!

SSD cache device to a software raid using LVM2

Inspired by our article – SSD cache device to a hard disk drive using LVM, which uses an SSD drive as a cache device for a single hard drive – we decided to make a new article, but this time using two hard drives in a RAID setup (in our case RAID1 for redundancy) and a single NVMe SSD drive.
The goal:
Caching a RAID1 consisting of two 8TB hard drives with a single 1TB NVMe SSD drive. Both reads and writes are cached, i.e. write-back is enabled.
Our setup (a sketch of the underlying RAID1 creation follows the list):

  • 1 NVMe SSD disk, Samsung 1TB. It will be used as a writeback cache device (you may use writethrough, too, to maintain the redundancy of the whole storage)!
  • 2 hard disk drives, 8TB each, grouped in RAID1 for redundancy.
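
Creating the underlying RAID1 array is a single mdadm command; a minimal sketch, assuming the two hard drives are /dev/sda and /dev/sdb (hypothetical device names):

# build the mirror that the LVM2 volume group will sit on top of
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb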

STEP 1) Install lvm2 and enable the lvm2 service

Only this step differs between Linux distributions. We include three of them:
Ubuntu 16+:

sudo apt update && sudo apt upgrade -y
sudo apt install lvm2 -y
sudo systemctl enable lvm2-lvmetad
sudo systemctl start lvm2-lvmetad

CentOS 7:

yum update
yum install -y lvm2
systemctl enable lvm2-lvmetad
systemctl start lvm2-lvmetad

Gentoo:

emerge --sync
emerge -v sys-fs/lvm2
/etc/init.d/lvm start
rc-update add default lvm
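
Whichever distribution you use, a quick sanity check that the tools are in place (a sketch; the systemctl line applies to the systemd-based systems above):

# print the installed LVM version and confirm the metadata daemon is running
lvm version
systemctl status lvm2-lvmetad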

Keep on reading!

Online resize of a root ext4 file system – increase the space

Here you can see how to resize your root ext4 file system online. The free space of the partition will be increased after the operation; the size of the root file system will grow, not shrink. Of course, this could be any other partition, not just the root one, but in most cases such operations on the root file system are the most complex and dangerous – SO ALWAYS do backups before such operations!

All services keep working properly; no shutdown of services, no reboot, and no umount is required during the resize operation.

Still, we rebooted the server once to force a check of the file system as a precaution, because it was possible and this server was not in production. A reboot of the server after this kind of resizing is not mandatory.
The following method is tested on CentOS 7, Ubuntu 16 LTS, and Gentoo with kernel 4.15, so you should have no problems if your system is newer than ours.

Summary

  1. Partition resize – use the resizepart command of parted. All Linux distributions ship this tool in a package with the same name as the needed command, “parted”.
  2. File system resize – use resize2fs from the e2fsprogs package. All Linux distributions include this package, mostly under the same name.

STEP 1) Expand the partition that holds the root file system.

Let’s assume you have replaced your disk and now there is more unallocated space to be used, or the size of the disk has been increased in some other way. Look below for a real-world example with one of our virtual servers.

root@srv1 ~ # parted /dev/sda
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Model: ATA Samsung SSD 850 (scsi)
Disk /dev/sda: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2097kB  1049kB                        bios_grub
 2      2097kB  4096MB  4094MB  linux-swap(v1)
 3      4096MB  24.0GB  19.9GB  ext4
(parted) resizepart 3 -1                                                  
Warning: Partition /dev/sda3 is being used. Are you sure you want to continue?
parted: invalid token: -1                                                 
Yes/No? Yes                                                               
End?  [24.0GB]? -1                                                        
(parted) p                                                                
Model: ATA Samsung SSD 850 (scsi)
Disk /dev/sda: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2097kB  1049kB                        bios_grub
 2      2097kB  4096MB  4094MB  linux-swap(v1)
 3      4096MB  215GB   211GB   ext4

(parted) q                                                                
Information: You may need to update /etc/fstab.

As you can see from the first print command, partition number 3 is 19.9GB, and after the resize command with “-1” (meaning up to the end of the disk) it is 211GB. There is a warning that the partition is in use, but this is normal and not critical.
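
The same resize can be done non-interactively from the command line; a minimal sketch (parted will still prompt for confirmation if the partition is in use):

root@srv1 ~ # parted /dev/sda resizepart 3 100%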

STEP 2) Resize the file system on the partition we just expanded.

You need to install e2fsprogs. All Linux distributions have this package; here are the package names in some of them:

  • CentOS 7 – e2fsprogs
  • Ubuntu – e2fsprogs
  • Gentoo – sys-fs/e2fsprogs

After installing the e2fsprogs package you will have the online ext4 resizing tool – resize2fs.

root@srv ~ # resize2fs /dev/sda3
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/sda3 is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 13
The filesystem on /dev/sda3 is now 51428620 (4k) blocks long.

Check that everything is OK with:

root@srv ~ # dmesg|grep EXT4
[  449.330140] EXT4-fs (sda3): resizing filesystem from 4859392 to 51428620 blocks
[  449.936044] EXT4-fs (sda3): resized filesystem to 51428620
root@srv ~ # df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           798M  3.5M  795M   1% /run
/dev/sda3       193G  3.4G  182G   2% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           798M     0  798M   0% /run/user/0

Bonus – you can force check the file system on the next reboot

It is probably a good idea to force a check of the file system integrity on the next boot. This step is not mandatory and you may skip it.
For Ubuntu you can do:

root@srv ~ # touch /forcefsck
root@srv ~ # reboot
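
An alternative that works across distributions is to lower the maximum mount count with tune2fs; a minimal sketch (restore the setting after the check):

root@srv ~ # tune2fs -c 1 /dev/sda3
root@srv ~ # reboot

After the check completes, disable the mount-count-based checking again:

root@srv ~ # tune2fs -c -1 /dev/sda3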

Bonus 2

Fixing the GPT. Newer versions of parted may display a warning that the GPT table does not use the whole disk space and offer to fix it. Just type Fix to add the new unallocated disk space:

root@srv ~ # parted /dev/sda
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 188743680 blocks) or continue with the current setting? 
Fix/Ignore? Fix                                                           
Model: Virtio Block Device (virtblk)
Disk /dev/sda: 118GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2097kB  1049kB                        bios_grub
 2      2097kB  17.2GB  17.2GB  ext4
 3      17.2GB  21.5GB  4293MB  linux-swap(v1)

(parted)