Upgrading Ubuntu 18 to Ubuntu 20 – software versions upgrade table – head to head

In the following article, a comparison between two LTS versions of Ubuntu is presented – Ubuntu 18.04 LTS (Bionic) versus Ubuntu 20.04 LTS (Focal). The latest packages available for Ubuntu 18.04 and Ubuntu 20.04 as of 17.06.2020 were used to generate the software versions below.

In the desktop world, upgrading to the latest version of a Linux distribution is almost mandatory, but in the server world, upgrading is more complicated. The first step in upgrading a server is to check what software versions come with the new distribution release, and then to check whether the running custom (application) software supports those versions. For example, upgrading to a new distribution release that comes with PHP 7.4 when the current application supports only PHP 7.2 is not very wise – especially since the currently installed release may still have years of support ahead.

Offering a head-to-head version comparison is the main target of this article – a fast check of what versions the user could expect from the new (aka latest) release of the Linux distribution.
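
Before planning the upgrade, such a fast check can be performed for any package with apt on the target release – for example, inside an ubuntu:20.04 docker container. A minimal sketch (the package name php is only an example):

docker run --rm ubuntu:20.04 bash -c "apt-get update -qq && apt-cache policy php"

apt-cache policy prints the candidate version the release would install from its configured repositories.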

Software         Ubuntu 20.04                        Ubuntu 18.04
Linux kernel     5.4.0, 5.6.0                        4.15.0, 4.18.0, 5.0.0, 5.3.0, 5.4.0
libc             2.31                                2.27
OpenSSL          1.1.1f                              1.0.2n, 1.1.1
GNU GCC          7.5.0, 8.4.0, 9.3.0, 10-20200411    4.8.5, 5.5.0, 6.5.0, 7.5.0, 8.4.0
PHP              7.4                                 7.2
Python           2.7.17, 3.8.2                       2.7.15, 3.6.7
Perl             5.30.0                              5.26.1
Ruby             2.7                                 2.5.1
OpenJDK          8u252-b09, 11.0.7, 13.0.3, 14.0.1   8u252-b09, 11.0.7
Go lang          1.13.8, 1.14.2                      1.8, 1.9, 1.10
Rust             1.41.0                              1.41.0
llvm             6.0.1, 7.0.1, 8.0.1, 9.0.1, 10.0.0  3.7.1, 3.9.1, 4.0.1, 5.0.1, 6.0, 7, 8, 9, 10.0.0
nodejs           10.19.0                             8.10.0
Subversion       1.13                                1.9.7
Git              2.25.2                              2.17.1
Apache           2.4.41                              2.4.29
Nginx            1.17.10                             1.14.0
MySQL server     8.0.20                              5.7.30
MariaDB          10.3.22                             10.1.44
PostgreSQL       12.2                                10.12
SQLite           3.31.1                              3.22.0
Xorg X server    1.20.8                              1.19.6
Gnome Shell      3.36.2                              3.28.4

Configure Bond (802.3ad LACP) device in CentOS 8 – configuration files

Upgrading to a bond device is a common step when the server exhausts its current network port bandwidth.
The hardware setup of the bond example here is:

  • two 10G network cards – ens1f0 and ens1f1
  • bond name – bond0
  • bond mode – 802.3ad – Link Aggregation Control Protocol (LACP)

The reconfiguration procedure using the (legacy) network service consists of:

  • Stop the network service
    systemctl stop network
    
  • Set several configuration files – network device files for the network interfaces and for the bonding interface (master and slave devices).
  • Start the network service
    systemctl start network
    

*Note: the 802.3ad bonding mode needs additional configuration on the switch to which the server is connected.

The example here uses CentOS 8 configuration files to make the bonding configuration permanent (i.e. persistent over reboots using the CentOS 8 network configuration files).
Check out the official bonding documentation for all modes and options – https://www.kernel.org/doc/Documentation/networking/bonding.txt.

CONF 1) Configure the network interfaces.

Each interface should be left in down state in its configuration file (ONBOOT=no).
Interface 1 – /etc/sysconfig/network-scripts/ifcfg-ens1f0:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens1f0
UUID=3b399a23-570e-45ed-9369-4ff5b87efb2c
DEVICE=ens1f0
ONBOOT=no

Interface 2 – /etc/sysconfig/network-scripts/ifcfg-ens1f1:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens1f1
UUID=ecdc5d5b-9739-4424-9d67-362411974281
DEVICE=ens1f1
ONBOOT=no

CONF 2) Configure bonding master device – create a bonding group bond0

This device should be started up at boot.
Bonding device 1 – with name bond0 – /etc/sysconfig/network-scripts/ifcfg-Bond_connection_1:

BONDING_OPTS="downdelay=200 miimon=100 mode=802.3ad updelay=200"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.10.10.10
PREFIX=24
GATEWAY=10.10.10.1
DNS1=10.10.10.2
DNS2=10.10.10.3
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_PRIVACY=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME="Bond connection 1"
UUID=f0a35f9a-20e4-484e-850c-689128642555
DEVICE=bond0
ONBOOT=yes

BONDING_OPTS holds the options specific to the bonding group bond0 – the bonding mode is set here, too.

CONF 3) Configure bonding slave devices – the two network cards

The two network cards are added to the bonding group bond0. These devices should be started at boot (ONBOOT=yes).
Interface 1 – /etc/sysconfig/network-scripts/ifcfg-bond0_slave_1:

HWADDR=90:E2:BA:8A:13:8C
TYPE=Ethernet
NAME="bond0 slave 1"
UUID=c49e0ced-6411-41fa-9a3b-a01a430664a7
DEVICE=ens1f0
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Interface 2 – /etc/sysconfig/network-scripts/ifcfg-bond0_slave_2:

HWADDR=90:E2:BA:8A:13:8D
TYPE=Ethernet
NAME="bond0 slave 2"
UUID=90de1cad-1d9f-48cb-8e5a-7d8bfdde91d2
DEVICE=ens1f1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
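
After starting the network service, the kernel exposes the negotiated state of the bond. A quick sanity check (device and bond names as configured above):

cat /proc/net/bonding/bond0

The output lists the bonding mode, the 802.3ad (LACP) partner information and the state of each slave interface.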

Set up (802.3ad LACP) bonding when installing CentOS 8

This article shows how the user could install CentOS 8 (the steps are the same with CentOS 7) with a more complex network setup such as a bonding device in 802.3ad mode (LACP – Link Aggregation Control Protocol).
The whole installation procedure is not included here, but there are a couple of other articles on the subject “Install CentOS 7 or CentOS 8”:

Similar configuration files will be generated, as in “Configure Bond (802.3ad LACP) device in CentOS 8 – configuration files”.

SCREENSHOT 1) Click on “Network and Host Name” to configure the machine networking.

Installation Summary – Network and Host Name

Keep on reading!

Chromium browser in Ubuntu 20.04 LTS without snap to use in docker container

The Ubuntu team has its own vision for the snap service (https://snapcraft.io/) and that's why it has moved the really big and difficult-to-maintain Chromium browser package into a snap package. Unfortunately, snap has many issues with docker containers and, in short, it is rather difficult to run snap inside a docker container. The user may simply not want to mess with snap packages (despite this being the future, according to the Ubuntu team) or, like most developers, may need a browser for tests executed in a container.
Whether you are a developer or an ordinary user, this article is for you if you want the Chromium browser installed under Ubuntu 20.04 LTS without the snap service!
There are multiple options that could end up with a Chromium browser installed on the system without the snap service:

  1. Using a Debian package and the Debian repository. The problem here is that using Ubuntu and Debian repositories simultaneously on one machine is not a good idea, despite the hack of pinning the Debian packages with a low priority – https://askubuntu.com/questions/1204571/chromium-without-snap/1206153#1206153
  2. Using Google Chrome – https://www.google.com/chrome/?platform=linux. It is just a single Debian package, which provides a Chromium-like browser, and it fulfills all dependencies that require the Chromium browser package.
  3. Using the Chromium team dev or beta PPA (https://launchpad.net/~chromium-team) with the nearest available version while Ubuntu packages for Focal (Ubuntu 20.04 LTS) are still missing.
  4. more options available?

This article will show how to use the Ubuntu 18 (Bionic) Chromium browser package from the Chromium team beta PPA under Ubuntu 20.04 LTS (Focal). The Bionic package from the dev PPA of the very same Ubuntu Chromium team may be used, too.

All dependencies will be downloaded from Ubuntu 20.04 and just several chromium-* packages will be downloaded from the Bionic repository of the Chromium team PPA. The chances of breaking something are really small compared to option 1 above, which uses the Debian packages and repositories. Hopefully, we will soon have Focal (Ubuntu 20.04 LTS) packages in the Ubuntu Chromium team PPA!

Dockerfile

An example of a Dockerfile installing Chromium (and python3-selenium for automating web browser interactions):

RUN apt-key adv --fetch-keys "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111" \
&& echo 'deb http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic main ' >> /etc/apt/sources.list.d/chromium-team-beta.list \
&& apt update
RUN export DEBIAN_FRONTEND=noninteractive \
&& export DEBCONF_NONINTERACTIVE_SEEN=true \
&& apt-get -y install chromium-browser
RUN apt-get -y install python3-selenium

The first command adds the repository key and the repository itself to the Ubuntu source lists. Note that “bionic main” is added, not “focal main”.
Of all the dependencies of the Bionic chromium-browser, only three packages are pulled from the Bionic repository – all others come from Ubuntu 20.04 (Focal):

.....
Get:1 http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic/main amd64 chromium-codecs-ffmpeg-extra amd64 84.0.4147.38-0ubuntu0.18.04.1 [1174 kB]
.....
Get:5 http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic/main amd64 chromium-browser amd64 84.0.4147.38-0ubuntu0.18.04.1 [67.8 MB]
.....
Get:187 http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic/main amd64 chromium-browser-l10n all 84.0.4147.38-0ubuntu0.18.04.1 [3429 kB]
.....

Here is the whole Dockerfile sample file:

#
#   Docker file for the image "chromium browser without snap"
#
FROM ubuntu:20.04
MAINTAINER myuser@example.com

#chromium browser
#original PPA repository, use if our local fails
RUN echo "tzdata tzdata/Areas select Etc" | debconf-set-selections && echo "tzdata tzdata/Zones/Etc select UTC" | debconf-set-selections
RUN export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NONINTERACTIVE_SEEN=true
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install gnupg2 apt-utils wget
#RUN wget -O /root/chromium-team-beta.pub "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111" && apt-key add /root/chromium-team-beta.pub
RUN apt-key adv --fetch-keys "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111" && echo 'deb http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic main ' >> /etc/apt/sources.list.d/chromium-team-beta.list && apt update
RUN export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NONINTERACTIVE_SEEN=true && apt-get -y install chromium-browser
RUN apt-get -y install python3-selenium

Desktop install

The desktop installation is almost the same as the Dockerfile above – just execute the following lines:
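
A minimal sketch assembled from the very same Dockerfile steps above (run as root or prefix each line with sudo):

apt-key adv --fetch-keys "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111"
echo 'deb http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic main' >> /etc/apt/sources.list.d/chromium-team-beta.list
apt update
apt install -y chromium-browser
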
Keep on reading!

Copy files with read errors successfully – skipping only errors (i.e. bad sectors)

Sometimes disks have read errors, or an SSD disk has a bad NAND cell. Saving the whole hard disk data may not be needed – often only a specific file or two are important, and they cannot be copied by cp or rsync because of an “Unrecovered read error”.
Furthermore, an SSD reallocates bad cells only when there are writes to those cells, which may not occur for years, while reads may happen every day. Reading from a sector with bad NAND cells results in slow IO (multiple read commands are executed before giving up). Copying the file to a new place missing only 512 bytes may not harm the data, but it is difficult to do with the generic copy tools.
This article shows how to save single files from a mounted ext4 file system with bad sectors using the ddrescue tool – https://www.gnu.org/software/ddrescue/. In fact, ddrescue can save single files or whole devices.

STEP 1) Install ddrescue.

Installing ddrescue is pretty easy. The tool is included in almost all Linux distributions and it doesn't have many dependencies. Note there is another tool named dd_rescue, which is different from this one – follow the link above for the tool used here.
CentOS 7/8 or Fedora:

yum install -y ddrescue

Ubuntu (any version from the last 10 years):

apt install -y gddrescue

Gentoo:

emerge -v ddrescue

STEP 2) Rescuing a single file with read errors because of bad sectors in a mounted file system.

[root@srv Snapshots]# ddrescue -v \{9f02ae0a-6dae-4729-b6a6-ec3f0550f294\}.vdi test2.vdi
GNU ddrescue 1.25
About to copy 15724 MBytes from '{9f02ae0a-6dae-4729-b6a6-ec3f0550f294}.vdi' to 'test2.vdi'
    Starting positions: infile = 0 B,  outfile = 0 B
    Copy block size: 128 sectors       Initial skip size: 384 sectors
Sector size: 512 Bytes

Press Ctrl-C to interrupt
     ipos:   13495 MB, non-trimmed:        0 B,  current rate:       0 B/s
     opos:   13495 MB, non-scraped:        0 B,  average rate:    162 MB/s
non-tried:        0 B,  bad-sector:     8192 B,    error rate:    4608 B/s
  rescued:   15724 MB,   bad areas:        2,        run time:      1m 36s
pct rescued:   99.99%, read errors:       18,  remaining time:          0s
                              time since last successful read:          0s
Finished                                      
[root@srv Snapshots]# ls -al
total 52602944
drwx------. 2 root root        4096 Jun  2 02:22 .
drwxr-xr-x. 4 root root        4096 Jun  1 14:16 ..
-rw-------. 1 root root   459981735 Nov  8  2018 2018-11-08T15-19-17-776317000Z.sav
-rw-------. 1 root root   566704069 Jun  1 14:16 2020-06-01T11-16-05-735318000Z.sav
-rw-------. 1 root root  8329887744 Jun  1 12:53 {3d30ebea-2e2f-4e33-8088-d3d66f315e2c}.vdi
-rw-------. 1 root root 15724445696 Nov  8  2018 {9f02ae0a-6dae-4729-b6a6-ec3f0550f294}.vdi
-rw-------. 1 root root  4012900352 Jun  1 14:16 {f7e72510-7dce-48fd-b62c-630664ad984f}.vdi
-rw-r--r--. 1 root root 15724445696 Jun  2 02:24 test2.vdi
-rw-------. 1 root root  9051041792 Jun  2 02:19 test.vdi
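
When the first pass leaves bad areas, a map file (the optional third argument) lets ddrescue resume and retry only the failed sectors instead of re-reading everything. A hedged sketch – the file names are illustrative:

ddrescue -v source.vdi rescued.vdi rescue.map
ddrescue -v -r3 source.vdi rescued.vdi rescue.map

The first run records its progress in rescue.map; the second run uses the map to retry the remaining bad sectors up to 3 more times (-r3).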

Here is an animated gif of the ddrescue procedure:

ddrescue – copy files with bad sectors

Keep on reading!

Software and technical overview of Ubuntu 20.04 LTS server edition

Ubuntu 20.04 LTS server edition offers the following software and versions:

Software

  • linux kernel – 5.4.0 – 5.4.0-29-generic
  • System
    • linux-firmware – 1.187
    • libc – 2.31 – 2.31-0ubuntu9
    • GNU GCC – multiple versions available – 7.5.0, 8.4.0, 9.3.0 and 10-20200411. The exact versions – 7.5.0-6ubuntu2, 8.4.0-3ubuntu2, 9.3.0-10ubuntu2 and 10-20200411-0ubuntu1
    • OpenSSL – 1.1.1f – 1.1.1f-1ubuntu2
    • coreutils – 8.30 – 8.30-3ubuntu2
    • apt – 2.0.2ubuntu0.1
    • rsyslog – 8.2001.0 – 8.2001.0-1ubuntu1
  • Servers
    • Apache – 2.4.41 – 2.4.41-4ubuntu3
    • Nginx – 1.17.10 – 1.17.10-0ubuntu1
    • MySQL server – 8.0.20 – 8.0.20-0ubuntu0.20.04.1
    • MariaDB server – 10.3.22 – 10.3.22-1ubuntu1
    • PostgreSQL – 12.2-4
  • Programming – in Ubuntu 20.04 LTS the user may install:
    • PHP – 7.4 – 7.4.3-4ubuntu1.1
    • python – 3.8.2 (3.8.2-0ubuntu2) and also includes 2.7.17 (2.7.17-2ubuntu4)
    • perl – 5.30.0 and also includes perl 6 – 6.d-2
    • ruby – 2.7 – 2.7+1
    • OpenJDK – includes multiple versions – 8, 11, 13 and 14. The exact versions are 8u252-b09-1ubuntu1, 11.0.7+10-3ubuntu1, 13.0.3+3-1ubuntu2 and 14.0.1+7-1ubuntu1
    • Go lang – multiple versions – 1.13.8 and 1.14.2. The exact versions – 1.13.8-1ubuntu1 and 1.14.2-1ubuntu1
    • Rust – 1.41.0 – 1.41.0+dfsg1+llvm-0ubuntu2
    • Subversion – 1.13.0 – 1.13.0-3
    • Git – 2.25.1 – 2.25.1-1ubuntu3
    • llvm – multiple versions – 6, 7, 8, 9 and 10. The exact versions – 6.0.1-14, 7.0.1-12, 8.0.1-9, 9.0.1-12, 10.0-50~exp1
  • Graphical User Interface
    • Xorg X server – 1.20.8 – 1.20.8-2ubuntu2
    • GNOME (the GUI) – 3.36.x – Gnome Shell – 3.36.1

Note: Not all of the above software comes installed by default. The versions above are valid for the initial release, so in fact these are the minimal versions you get with Ubuntu 20.04 LTS – installing it and updating after the initial release date may bring newer versions of some of the above packages. The default installation consists of 582 packages occupying 11G of disk space.
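
Those two numbers can be reproduced on the installed system with standard tools – a minimal sketch:

dpkg -l | grep -c '^ii'
df -h /

The first command counts the packages in installed state (“ii”), the second shows the occupied space on the root file system.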

During the installation wizard, you may choose to install several snap software environments. Of course, this software is available for installation after the setup, too.

The test server is equipped with an AMD “Threadripper 1950X”, which is a 16-core CPU.
Check out Minimal installation of Ubuntu server 20.04 LTS, too.

Keep on reading!

Minimal installation of Ubuntu server 20.04 LTS

This tutorial will show you the simple steps of installing a modern Linux distribution – Ubuntu server 20.04 LTS edition. Most of the default options during the setup are kept for simplicity.

Here are some basic data from the default installation setup settings:

  1. Installed packages – ~582, occupying 11G of space.
  2. 3 partitions when using the automatic partition layout – boot efi, swap and root (a quick check of the resulting layout is shown after this list).
  3. ext4 is used for the root partition.
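
A minimal sketch to inspect the resulting layout after the first boot (device names depend on the hardware):

lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT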

We used the following ISO for the installation process – Ubuntu 20.04 LTS (Focal Fossa):

http://releases.ubuntu.com/focal/ubuntu-20.04-live-server-amd64.iso

It is a LIVE image, so you can try it before installing. The easiest way is just to download the image, burn it to a DVD (or write it to a USB flash drive) and then follow the installation below:

SCREENSHOT 1) Boot from the disk or USB – whatever you made after downloading the ISO file from Ubuntu official source.

On the image here the DVD is used to boot in UEFI mode installation.

boot uefi dvd

Keep on reading!

collectd nginx plugin: curl_easy_perform failed because of selinux

Enabling the Nginx plugin for collectd under CentOS (or any other system using SELinux) might be confusing for a newbie. Most sources on the Internet would just install collectd-nginx:

yum install -y collectd-nginx

and configure it in nginx.conf and collectd.conf. Still, the statistics might not work as expected – collectd may not be able to gather statistics from Nginx.

SELinux may prevent the collectd daemon (plugin) from connecting to Nginx and gathering statistics from the Nginx stats page.
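
The usual remedy – assuming the stock CentOS policy ships the collectd_tcp_network_connect SELinux boolean – is a sketch like:

ausearch -m avc -ts recent | grep collectd
setsebool -P collectd_tcp_network_connect 1

The first command shows the recent SELinux denials for collectd and the second permanently allows the collectd daemon to open TCP connections.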

Checking the collectd log reveals the problem:
Keep on reading!

Cron missing path – executing docker/podman – adding network: failed to locate iptables

If you have ever executed complex scripts using the cron system, you have inevitably discovered that the Linux environment is different from the login or ssh shell one. The different environment tends to lead to a missing or different PATH variable! Here is what happens with podman starting a container from a cron script:

time="2020-04-19T20:45:20Z" level=error msg="Error adding network: failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
time="2020-04-19T20:45:20Z" level=error msg="Error while adding pod to CNI network \"podman\": failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
Error: unable to start container "onedrive-cli": error configuring network namespace for container d297cf80db20441d4258a1acc7d810444795d1ca8730ab242d9fe8a13eaa697d: failed to locate iptables: exec: "iptables": executable file not found in $PATH

The iptables executable is missing because the PATH variable is different from the one in a login or ssh shell. Executing the commands or the script under ssh or a login shell results in no error and a proper podman (docker) execution!

A similar problem could happen with any other software trying to execute iptables, or another tool not found in cron's PATH, because cron's environment is very limited and includes only a minimal set of variables.

To ensure the PATH matches the user's (root) environment, just source the “profile” or “.bashrc” file of the current user before executing the script, or in its first lines.
This will do the trick.

. /etc/profile

Or user’s custom

. ~/.bashrc

Or the default OS bashrc

. /etc/bashrc

The dot may be replaced by “source”:

source /etc/bashrc

All (environment) variables will be available after the source command.
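
For example, a crontab entry may source the profile right before the command – the script path here is purely illustrative:

5 4 * * * . /etc/profile; /root/scripts/start-container.sh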

Here is the difference:
The environment without sourcing a profile/bashrc file:

LANG=en_US.UTF-8
XDG_SESSION_ID=19118
USER=root
PWD=/root
HOME=/root
SHELL=/bin/sh
SHLVL=1
LOGNAME=root
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
XDG_RUNTIME_DIR=/run/user/0
PATH=/usr/bin:/bin
_=/usr/bin/env

Sourcing the “/etc/profile” file:

LANG=en_US.UTF-8
HISTCONTROL=ignoredups
HOSTNAME=srv.example.com
XDG_SESSION_ID=19165
USER=root
PWD=/root
HOME=/root
MAIL=/var/spool/mail/root
SHELL=/bin/bash
SHLVL=1
LOGNAME=root
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
XDG_RUNTIME_DIR=/run/user/0
PATH=/usr/local/sbin:/usr/sbin:/usr/bin:/bin
HISTSIZE=1000
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/usr/bin/env

There are multiple additional environment variables, which could be important for the user's scripts executed by cron.

And in CentOS 8 the iptables executable happens to be in “/usr/sbin/iptables” – and /usr/sbin is not included in the default cron PATH variable!
Of course, the PATH variable may be set directly in the crontab with the cron scheduler, but that only works until the next tool is missing from it while present in the user's PATH! It is just better to ensure the two environments are the same every time by sourcing an environment configuration file such as /etc/profile, the user's .bashrc, or the default one in /etc/bashrc.

lvm2: remove unknown physical volume device in a logical volume with a caching using vgreduce

Following the article SSD cache device to a hard disk drive using LVM the initial setup is:

  1. Single slow 2T harddisk. So no redundancy. If it fails all data is gone.
  2. SSD cache device for the slow harddisk above.

And here is how to handle a slow device failure! In this case the data is all gone, because there is no redundancy when a single device holds the data. Here the data is not valuable, because it is a cache server. This article will show what to expect when the slow device fails and how to replace it.
The original slow device is missing, replaced by a new one, and the partitions are as follows:

  1. /dev/sda4 – the slow device. New device.
  2. /dev/sdb5 – the SSD device. Still in the LVM2 Volume Group and the Logical Volume.

The Physical Volume is missing, marked as “[unknown]”. pvdisplay shows metadata information for the missing device. vgdisplay and lvdisplay show information for the group and the logical volume, but the logical volume is in “NOT available” status. So the logical volume cannot be used, which is kind of normal when only the cache device is present.

Slow drive failure and the LVM2 status

[root@srv ~]# ssm list
------------------------------------------------------
Device        Free        Used      Total  Pool       
------------------------------------------------------
/dev/sda                          2.00 TB             
/dev/sda1  0.00 KB    50.00 GB   50.03 GB  md         
/dev/sda2  0.00 KB    15.75 GB   15.76 GB  md         
/dev/sda3  0.00 KB  1023.00 MB    1.00 GB  md         
/dev/sda4                         1.93 TB             
/dev/sdb                        894.25 GB             
/dev/sdb1  0.00 KB    50.00 GB   50.03 GB  md         
/dev/sdb2  0.00 KB    15.75 GB   15.76 GB  md         
/dev/sdb3  0.00 KB  1023.00 MB    1.00 GB  md         
/dev/sdb4                         1.00 KB             
/dev/sdb5  0.00 KB   675.00 GB  675.00 GB  VG_storage1
/dev/sdb6                       152.46 GB             
[unknown]  0.00 KB     1.93 TB    1.93 TB  VG_storage1
------------------------------------------------------
-----------------------------------------------------
Pool         Type  Devices     Free     Used    Total  
-----------------------------------------------------
VG_storage1  lvm   2        0.00 KB  2.59 TB  2.59 TB  
-----------------------------------------------------
------------------------------------------------------------------------------
Volume      Pool  Volume size  FS       FS size       Free  Type   Mount point
------------------------------------------------------------------------------
/dev/md125  md       50.00 GB  ext4    50.00 GB   44.41 GB  raid1  /          
/dev/md126  md     1023.00 MB  ext4  1023.00 MB  788.84 MB  raid1  /boot      
/dev/md127  md       15.75 GB                               raid1             
/dev/sdb6           152.46 GB  ext4   152.46 GB  145.77 GB                    
------------------------------------------------------------------------------
----------------------------------------------------------------------------------
Snapshot                      Origin               Pool         Volume size  Type   
----------------------------------------------------------------------------------
/dev/VG_storage1/lv_storage1  [lv_storage1_corig]  VG_storage1      1.93 TB  cache  
----------------------------------------------------------------------------------
[root@srv ~]# pvdisplay 
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  --- Physical volume ---
  PV Name               /dev/sdb5
  VG Name               VG_storage1
  PV Size               675.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              172799
  Free PE               0
  Allocated PE          172799
  PV UUID               oLn3hh-ROFU-WSW8-0m8P-YLWY-Akoz-nCxh96
   
  --- Physical volume ---
  PV Name               [unknown]
  VG Name               VG_storage1
  PV Size               1.93 TiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              507188
  Free PE               0
  Allocated PE          507188
  PV UUID               IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd
   
[root@srv ~]# pvscan 
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  PV /dev/sdb5   VG VG_storage1     lvm2 [<675.00 GiB / 0    free]
  PV [unknown]   VG VG_storage1     lvm2 [1.93 TiB / 0    free]
  Total: 2 [2.59 TiB] / in use: 2 [2.59 TiB] / in no VG: 0 [0   ]
[root@srv ~]# vgdisplay 
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  --- Volume group ---
  VG Name               VG_storage1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  10
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                1
  VG Size               2.59 TiB
  PE Size               4.00 MiB
  Total PE              679987
  Alloc PE / Size       679987 / 2.59 TiB
  Free  PE / Size       0 / 0   
  VG UUID               eZ2ZIb-jcDl-kPLj-oFwJ-LLuN-VxLD-rVJclP
   
[root@srv ~]# vgscan 
  Reading volume groups from cache.
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  Found volume group "VG_storage1" using metadata type lvm2
[root@srv ~]# lvdisplay 
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  --- Logical volume ---
  LV Path                /dev/VG_storage1/lv_storage1
  LV Name                lv_storage1
  VG Name                VG_storage1
  LV UUID                NFWlWF-VmSO-HVr4-72RW-YY82-1ax2-92cI6P
  LV Write Access        read/write
  LV Creation host, time srv.example.com, 2019-10-25 17:18:41 +0000
  LV Cache pool name     lv_cache
  LV Cache origin name   lv_storage1_corig
  LV Status              NOT available
  LV Size                1.93 TiB
  Current LE             507188
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
[root@srv ~]# lvscan 
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  inactive          '/dev/VG_storage1/lv_storage1' [1.93 TiB] inherit
[root@srv ~]# lvs -a
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  LV                  VG          Attr       LSize   Pool       Origin              Data%  Meta%  Move Log Cpy%Sync Convert
  [lv_cache]          VG_storage1 Cwi---C--- 674.90g                                                                       
  [lv_cache_cdata]    VG_storage1 Cwi------- 674.90g                                                                       
  [lv_cache_cmeta]    VG_storage1 ewi-------  48.00m                                                                       
  lv_storage1         VG_storage1 Cwi---C-p-   1.93t [lv_cache] [lv_storage1_corig]                                        
  [lv_storage1_corig] VG_storage1 owi---C-p-   1.93t                                                                       
  [lvol0_pmspare]     VG_storage1 ewi-------  48.00m
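
The actual repair (behind the link below) is done with vgreduce. A hedged sketch of the core steps, assuming the data on the failed PV is expendable and /dev/sda4 is the new slow device:

vgreduce --removemissing --force VG_storage1
pvcreate /dev/sda4
vgextend VG_storage1 /dev/sda4

vgreduce with --removemissing --force removes the unknown physical volume (and the logical volumes depending on it) from the volume group; pvcreate and vgextend then put the new device into the group, after which the logical volume and the cache can be recreated.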

Keep on reading!