Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS

This is an updated version of the previous article on this topic – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, but this time for Ubuntu 22.04 LTS. The article describes how to build a modern analytics and monitoring solution for system and application performance metrics – a solution that may host all the servers’ metrics and a sophisticated application that allows easy analysis of the data and powerful graphs to visualize it.
A brief introduction to the three main pieces of software used to build the proposed solution:

  1. Grafana – an analytics and web visualization tool. It supports dashboards, charts, graphs, alerts, and more.
  2. influxdb – a time series database. Blazingly fast reads and writes, optimized for time series data.
  3. collectd – a data collection daemon, which obtains metrics from the host it is started on and sends them to the database (i.e. influxdb). It has around 170 plugins to collect metrics.

What is the task of each tool:

  1. collectd – gathers metrics and statistics using its plugins every 10 seconds on the host it runs on and then sends the data over UDP to influxdb using the collectd network protocol (see the configuration sketch after this list).
  2. influxdb – listens on an open UDP port for data coming from multiple collectd instances installed on many different devices. In this case, a Linux server running Ubuntu 22.04 LTS.
  3. Grafana – an analytics and a web visualization tool. A web application, which connects to the InfluxDB and visualizes the time series metrics in graphs organized in dashboards. Graphs for CPU, memory, network, storage usage, and many more.
  4. nginx – enables SSL and acts as a reverse proxy in front of Grafana.
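
As referenced in item 1, a hedged sketch of the collectd network plugin configuration, which ships the metrics over UDP, might look like this (the InfluxDB server IP 192.168.0.10 is an assumption; 25826 is the default port of the InfluxDB collectd listener):

# /etc/collectd/collectd.conf – send metrics over UDP to the InfluxDB collectd listener
LoadPlugin network
<Plugin network>
        # assumption: InfluxDB host 192.168.0.10, default collectd port 25826
        Server "192.168.0.10" "25826"
</Plugin>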

The whole solution uses the Ubuntu 22.04 LTS server edition distro. Installing Ubuntu 22.04 LTS is a mandatory step to proceed further with this article – Installation of base Ubuntu server 22.04 LTS
The InfluxDB UDP port should be opened on a per-IP basis, and the web port of the web server (nginx) depends on the purpose of the solution – it can be behind a VPN or openly accessible from the Internet.
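
For example, with Ubuntu’s ufw firewall, opening the InfluxDB collectd UDP port only for a trusted network might look like this (a sketch; the network range and the default port 25826 are assumptions):

sudo ufw allow from 192.168.0.0/24 to any port 25826 proto udp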

STEP 1) Install additional repositories for Grafana, InfluxDB and collectd.

collectd is part of the official Ubuntu repositories. Grafana and InfluxDB maintain their own official repositories. Here is how to install them.
Add the InfluxDB repository by first importing the key of the InfluxDB repository and then adding the URL of the repository under /etc/apt/sources.list.d/.

myuser@srv:~$ sudo curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
myuser@srv:~$ echo 'deb https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list
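
Since apt-key is deprecated (as the warning above shows), an alternative sketch using a dedicated keyring file is possible (the keyring path is an assumption following the common convention):

sudo mkdir -p /etc/apt/keyrings
curl -sL https://repos.influxdata.com/influxdb.key | gpg --dearmor | sudo tee /etc/apt/keyrings/influxdata.gpg > /dev/null
echo 'deb [signed-by=/etc/apt/keyrings/influxdata.gpg] https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list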

Then, repeat the same procedure with the Grafana repository:

myuser@srv:~$ sudo curl -sL https://packages.grafana.com/gpg.key | sudo apt-key add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
myuser@srv:~$ echo 'deb https://packages.grafana.com/oss/deb stable main' | sudo tee /etc/apt/sources.list.d/grafana.list

Execute apt update to refresh the package lists from all repositories, including the newly added ones:

sudo apt update
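
After the update, the three packages can be installed in one go (a sketch; influxdb, collectd and grafana are the package names provided by the repositories above):

sudo apt install -y influxdb collectd grafana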

Keep on reading!

Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9

This article describes how to build a modern analytics and monitoring solution for system and application performance metrics – a solution that may host all the servers’ metrics and a sophisticated application that allows easy analysis of the data and powerful graphs to visualize it.
A brief introduction to the three main pieces of software used to build the proposed solution:

  1. Grafana – an analytics and web visualization tool. It supports dashboards, charts, graphs, alerts, and more.
  2. influxdb – a time series database. Blazingly fast reads and writes, optimized for time series data.
  3. collectd – a data collection daemon, which obtains metrics from the host it is started on and sends them to the database (i.e. influxdb). It has around 170 plugins to collect metrics.

What is the task of each tool:

  1. collectd – gathers metrics and statistics using its plugins every 10 seconds on the host it runs on and then sends the data over UDP to influxdb using the collectd network protocol.
  2. influxdb – listens on an open UDP port for data coming from multiple collectd instances installed on many different devices. In this case, a Linux server running CentOS Stream 9.
  3. Grafana – an analytics and a web visualization tool. A web application, which connects to the InfluxDB and visualizes the time series metrics in graphs organized in dashboards. Graphs for CPU, memory, network, storage usage, and many more.
  4. nginx – enables SSL and acts as a reverse proxy in front of Grafana.

The whole solution uses the CentOS Stream 9 Linux distro. Installing CentOS Stream 9 is a mandatory step to proceed further with this article – Network installation of CentOS Stream 9 (20220606.0) – minimal server installation
The InfluxDB UDP port should be opened on a per-IP basis, and the web port of the web server (nginx) depends on the purpose of the solution – it can be behind a VPN or openly accessible from the Internet.
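
On CentOS Stream 9, opening the InfluxDB collectd UDP port on a per-IP basis can be sketched with firewalld (the zone name, the source IP and the default port 25826 are assumptions):

firewall-cmd --permanent --new-zone=collectd
firewall-cmd --permanent --zone=collectd --add-source=192.168.0.20
firewall-cmd --permanent --zone=collectd --add-port=25826/udp
firewall-cmd --reload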

STEP 1) Install additional repositories for Grafana, influxdb and collectd.

Install the official CentOS EPEL and OpsTools repositories. EPEL provides additional packages on top of the base CentOS packages, and OpsTools provides collectd and more collectd plugins than the ones included in the built-in repositories.

dnf install -y epel-release centos-release-opstools

Add the InfluxDB repository by creating a file in /etc/yum.repos.d/influxdb.repo

[influxdb]
name = InfluxDB Repository - RHEL $releasever
baseurl = https://repos.influxdata.com/centos/$releasever/$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key

Finally, add the Grafana repository in file /etc/yum.repos.d/grafana.repo

[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
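
With all three repositories in place, the packages can then be installed (a sketch; the package names are the ones provided by the repositories above):

dnf install -y influxdb collectd grafana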

Keep on reading!

How To Install Linux, Apache, MySQL (MariaDB), PHP-FPM (LAMP) Stack on CentOS Stream 9

(Screenshot: PHP Version 8.0.20)

This article describes how to install a Web server with a PHP application back-end and a MySQL database back-end using MariaDB. It continues the same topic as the previous article, but with different software – How To Install Linux, Nginx, MySQL (MariaDB), PHP-FPM (LEMP) Stack on CentOS Stream 9, where the Web server is Nginx with the application back-end PHP-FPM, which is a sort of CGI (FastCGI). In this article, the Web server is Apache and the application back-end is again PHP-FPM, because since CentOS 8 the Apache mod_php has been deprecated.
All the software installed throughout this article is from the CentOS Stream 9 official repositories including the EPEL repository. The machine is installed with a minimal installation of CentOS Stream 9 and there is a how-to here – Network installation of CentOS Stream 9 (20220606.0) – minimal server installation.
Here are the steps to perform:

  1. Install, configure and start the database MariaDB.
  2. Install, configure and start the PHP-FPM and PHP cli.
  3. Install, configure and start the Web server Apache 2.x.
  4. Configure the system – firewall and SELinux.
  5. Test the installation with a phpMyAdmin installation.
  6. Bonus – Apache HTTPS with SSL certificate – self-signed and letsencrypt.

STEP 1) Install, configure and start the database MariaDB.

First, install the MariaDB server by:

dnf install -y mariadb-server

To configure the MariaDB server, the main file is /etc/my.cnf, which just includes all files under the folder /etc/my.cnf.d/

[root@srv ~]# cat /etc/my.cnf
#
# This group is read both both by the client and the server
# use it for options that affect everything
#
[client-server]

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

[root@srv ~]# ls -altr /etc/my.cnf.d/
total 32
-rw-r--r--.  1 root root  295 Mar 25  2022 client.cnf
-rw-r--r--.  1 root root  120 May 18 07:55 spider.cnf
-rw-r--r--.  1 root root  232 May 18 07:55 mysql-clients.cnf
-rw-r--r--.  1 root root  763 May 18 07:55 enable_encryption.preset
-rw-r--r--.  1 root root 1458 Jun 13 13:24 mariadb-server.cnf
-rw-r--r--.  1 root root   42 Jun 13 13:29 auth_gssapi.cnf
drwxr-xr-x.  2 root root 4096 Oct  6 06:34 .
drwxr-xr-x. 81 root root 4096 Oct  6 06:34 ..

The most important file for the MariaDB server is /etc/my.cnf.d/mariadb-server.cnf, where all the server options are included. Under the section “[mysqld]” add options to tune the MariaDB server. Supported options can be found here: https://mariadb.com/kb/en/mysqld-options/
Add the following options under “[mysqld]” in /etc/my.cnf.d/mariadb-server.cnf.
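The actual values are in the full article; purely for illustration, a hedged sketch of commonly tuned [mysqld] options might look like this (all values here are assumptions, not the article’s):

[mysqld]
# assumptions for illustration – size these to the machine's RAM and workload
innodb_buffer_pool_size = 256M
innodb_log_file_size = 64M
max_connections = 500
character-set-server = utf8mb4
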
Keep on reading!

Run LXC Ubuntu 22.04 LTS container with bridged network under CentOS Stream 9

In continuation of the previous article Run LXC CentOS Stream 9 container with bridged network under CentOS Stream 9, this time the LXC container will be Ubuntu 22.04 LTS Jammy Jellyfish.
For a better understanding of why to use LXC, or for more detailed information on some of the steps in this article, it is better to visit the previously mentioned article and the original Run LXC CentOS 8 container with bridged network under CentOS 8.

STEP 1) Install the needed software – the EPEL repository and LXC with its dependencies.

To install the LXC software, the EPEL CentOS Stream 9 repository must be installed. At present, the LXC version included in the CentOS Stream 9 EPEL repository is 4.0.

dnf install -y epel-release
dnf install -y lxc lxc-templates container-selinux
dnf install -y wget tar

lxc-templates uses template “download” to download different Linux distribution images from http://images.linuxcontainers.org/, which now redirects to http://uk.lxd.images.canonical.com/ (an Ubuntu lxd images mirror).
The container-selinux package should be installed only if the host, i.e. the CentOS Stream 9 installation, has SELinux enabled. The package offers additional SELinux rules for LXC and LXC tools like lxc-attach and more.

STEP 2) Create an Ubuntu 22.04 LTS container with the help of the LXC templates

[root@srv ~]# lxc-create --template download -n mycontainer -- --dist ubuntu --release jammy --arch amd64

In addition, there is a “--variant” option along with “--dist” and “--release” to specify which variant to install – default, cloud, desktop or other. There is a variant column in the table on the images’ page mentioned above.
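
After the creation, the typical next steps (not shown in this excerpt) are to start the container and attach to it, sketched here with the standard LXC tools:

lxc-start -n mycontainer
lxc-attach -n mycontainer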
Keep on reading!

Run LXC CentOS Stream 9 container with bridged network under CentOS Stream 9

In continuation of the previous article with CentOS 8 – Run LXC CentOS 8 container with bridged network under CentOS 8, here is an updated version with CentOS Stream 9 running an LXC container. In this case, the LXC container is CentOS Stream 9, too.
Under CentOS 8, the LXC software is from the 3.x branch, but in CentOS Stream 9 the LXC is 4.x and there are some differences in the LXC configuration file.
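As an illustration of the 4.x configuration syntax, a minimal bridged-network snippet from the container configuration might look like this (the bridge name br0 is an assumption):

# /var/lib/lxc/mycontainer/config – bridged network keys (sketch)
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
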
It’s worth mentioning the differences between docker/podman containers and LXC from the previous article:

  • Multiprocesses.
  • Easy configuration modification. Even hot-plugin supported.
  • Unprivileged Linux containers.
  • Complex network setups. Multiple network interfaces connected to different networks, for example.
  • Live systemd, i.e. systemd or SysV init is booted as usual. Much of the software relies on systemd/udev features and in many cases, it is really hard to run software without a systemd or init process.

Here are the steps to boot a CentOS Stream 9 container under CentOS Stream 9 host server:

STEP 1) Install EPEL repository.

EPEL CentOS Stream 9 repository now includes LXC 4.0 software.

dnf install -y epel-release

STEP 2) Install LXC software and start LXC service.

At present, the LXC software version is 4.0.12. The package lxc-templates includes template scripts to create a Linux distribution environment like CentOS, Ubuntu, Debian, Gentoo, ArchLinux, Oracle, Alpine, and many others and it also includes the configuration templates to start these Linux distributions. In fact, lxc-templates now includes a download script to download images from the Internet.

dnf install -y lxc lxc-templates container-selinux
dnf install -y wget tar

The wget and tar packages are required if an LXC template installation is going to be performed.
There is an additional package for container SELinux support, which should be installed before starting the LXC service, because otherwise some of the SELinux rules may not be applied to the system. If SELinux is disabled, the installation of the container-selinux package may be skipped.

STEP 3) Create a CentOS Stream 9 container with the help of LXC templates and run it.

Use the lxc-templates to prepare a CentOS Stream 9 container environment. The currently available containers are listed here http://images.linuxcontainers.org/, which now redirects to http://uk.lxd.images.canonical.com/ (an Ubuntu lxd images mirror). Check out the URL and choose the right container. Here the CentOS Stream 9 amd64, i.e. release 9-Stream, is used.

[root@srv ~]# lxc-create --template download -n mycontainer -- --dist centos --release 9-Stream --arch amd64

In addition, there is a “--variant” option along with “--dist” and “--release” to specify which variant to install – default, cloud, desktop or other. There is a variant column in the table on the images’ page mentioned above.
Keep on reading!

MPEG-DASH and ClearKey, CENC drm encryption with Nginx, bento4 and dashjs under CentOS 8

The purpose of this article is to demonstrate a simple and plain example of ClearKey DRM encryption using a DASH stream.
Usually, ClearKey is used only for testing the encryption key and the DRM setup, because the decryption key is transferred in plain text to the browser. In simple DRM words, the key is transferred in plain text, and the handling of the decryption is not in some proprietary module such as a CDM – Content Decryption Module. The CDM is a proprietary module in the browsers or the players, which works like a black box when handling the decryption key. The most popular DRMs are Google’s Widevine, Apple’s FairPlay, and Microsoft PlayReady, which work through a proprietary module – a CDM (Content Decryption Module) in the browser (or the OS and player).
All three DRMs work in basically the same way:

  • There is an (encryption) key and an (encryption) keyID, whose purpose is to identify the (encryption) key.
  • The video file is encrypted with the key and it includes the keyID.
  • The client needs to have the appropriate CDM (Content Decryption Module) to decrypt the video.
  • The clients receive a license from a license server, which is encrypted data instructing the CDM on how to decrypt the video identified by the keyID. In fact, the client sends the keyID and receives the proper license (i.e. license binary data) for this keyID. That’s why the keyID is included in the encrypted video. Bear in mind, the CDM is a proprietary Content Decryption Module offered by the creator of the DRM – Google, Apple, Microsoft or another – and it lives in the browser (OS or player). All popular browsers support at least one of the proprietary DRMs.

ClearKey is like the proprietary DRM schemes, but without the CDM (Content Decryption Module).

The “org.w3.clearkey” Key System uses plain-text clear (unencrypted) key(s) to decrypt the source. No additional client-side content protection is required.

So, in general, there is no need for a license server when using ClearKey DRM.
Of course, an additional attempt to hide the plain-text key could be made using an extension to the client’s player such as JavaScript modules, etc. In general, this approach is perceived to be less secure, because it is much easier to debug the JavaScript code on the client side. More on ClearKey: https://www.w3.org/TR/encrypted-media/#clear-key

Here are all the steps from the server till the client to use ClearKey.

STEP 1) Download and install bento4 software.

bento4 is an open source toolkit for manipulating some of the most common video formats – MP4 and DASH/HLS/CMAF media. The download page is https://www.bento4.com/downloads/ and the Linux binary for the latest stable version: https://www.bok.net/Bento4/binaries/Bento4-SDK-1-6-0-639.x86_64-unknown-linux.zip. There is also a source code snapshot link.
Download the famous blender video for the demonstration: https://download.blender.org/demo/movies/BBB/bbb_sunflower_1080p_30fps_normal.mp4
Download and unpack the binary Bento4-SDK-1-6-0-639.x86_64-unknown-linux.zip.
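
The full encryption steps follow in the article; purely as a hedged preview, fragmenting and ClearKey (CENC) encrypting the video with the Bento4 tools typically looks like this (the key and keyID hex values are placeholders, not the article’s):

mp4fragment bbb_sunflower_1080p_30fps_normal.mp4 fragmented.mp4
mp4encrypt --method MPEG-CENC --key 1:000102030405060708090a0b0c0d0e0f:random --property 1:KID:101112131415161718191a1b1c1d1e1f fragmented.mp4 encrypted.mp4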
Keep on reading!

Create bridge and add TUN/TAP device using NetworkManager nmcli under CentOS 8

This article shows how to create a network bridge device and a TUN/TAP device, which is then added to the bridge. CentOS Stream 8 is used along with the console NetworkManager program nmcli.
TUN/TAP devices are often used in the virtualization world as a link device between the host machine and the virtual machine.

This article is for the case when the bridge does not include the main network interface of the server (the Internet-facing network interface and so on) but is an additional device, whose MAC and the virtual machines’ MACs would not be exposed through the server’s main network interface.

If the server’s main network interface should be included in the bridge device, i.e. replace the main network interface with the bridge there is another article on the subject – Replace current interface configuration with a bridge device using nmcli (NetworkManager)

Device names are as follows:

  • br0 is the name of the network bridge.
  • 10.10.10.1 with mask /24 is the IP of the bridge device with name br0. Because the idea is to use the bridge only locally, a private local address is used. The IP is set manually.
  • tap0 is the name of the TUN/TAP device.
  • enp0s3 is the server’s main network connection. Not used in this howto.

Here are all the commands to create a bridge, create a TUN/TAP device and add it to the bridge, and then activate the bridge‘s link.

nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "10.10.10.1/24"
nmcli con up br0
nmcli connection add type tun ifname tap0 con-name tap0 mode tap owner 0 ip4 0.0.0.0/24
nmcli con add type bridge-slave ifname tap0 master br0
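
Once the connections are up, the bridge membership can be verified with a quick check (not part of the original sequence):

ip link show master br0
bridge link show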

Here are the steps with much more details and information including all the command output.
The networking before any reconfiguration:

[root@srv ~]# nmcli
enp0s3: connected to enp0s3
        "Intel 82540EM"
        ethernet (e1000), 08:00:27:03:C9:2E, hw, mtu 1500
        ip4 default
        inet4 192.168.0.20/24
        route4 192.168.0.0/24 metric 100
        route4 0.0.0.0/0 via 192.168.0.1 metric 100
        inet6 fe80::a00:27ff:fe03:c92e/64
        route6 fe80::/64 metric 100

lo: unmanaged
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
        servers: 8.8.8.8 1.1.1.1
        interface: enp0s3

Use "nmcli device show" to get complete information about known devices and
"nmcli connection show" to get an overview on active connection profiles.

Consult nmcli(1) and nmcli-examples(7) manual pages for complete usage details.
[root@srv ~]# nmcli con
NAME    UUID                                  TYPE      DEVICE 
enp0s3  09497bbf-da59-42b7-a72c-d69369760b36  ethernet  enp0s3 

Keep on reading!

Run LXC CentOS 8 container with bridged network under CentOS 8

The LXC container software comes to CentOS 8 with the EPEL 8 repository. LXC is a multi-process container, which offers to boot a Linux distribution under container isolation. It is very similar to systemd-nspawn and a bit different from docker containers. LXC containers are used when multiple processes are needed under one container only. In most cases, the LXC container is a fully-featured Linux distribution (systemd or SysV, i.e. init) booted under a Linux container.
There are several major differences between docker/podman containers and LXC:

  • Multiprocesses.
  • Easy configuration modification. Even hot-plugin supported.
  • Unprivileged Linux containers.
  • Complex network setups. Multiple network interfaces connected to different networks, for example.
  • Live systemd, i.e. systemd or SysV init is booted as usual. Much of the software relies on systemd/udev features and in many cases, it is really hard to run software without a systemd or init process.

Here are the steps to boot a CentOS 8 container under CentOS 8 host server:

STEP 1) Install EPEL repository.

EPEL CentOS 8 repository now includes LXC 3.0 software.

dnf install -y epel-release

STEP 2) Install LXC software and start LXC service.

At present, the LXC software version is 3.0.4. The package lxc-templates includes template scripts to create a Linux distribution environment like CentOS, Ubuntu, Debian, Gentoo, ArchLinux, Oracle, Alpine, and many others and it also includes the configuration templates to start these Linux distributions.

dnf install -y lxc lxc-templates
dnf install -y wget tar

The wget and tar packages are required if an LXC template installation is going to be performed.

STEP 3) Create a CentOS 8 container with the help of LXC templates and run it.

Use the lxc-templates to prepare a CentOS 8 container environment. The currently available containers are listed here http://images.linuxcontainers.org/. Check out the URL and choose the right container. Here the CentOS 8 amd64 is used.

lxc-create --template download -n mycontainer -- --dist centos --release 8 --arch amd64 --keyserver hkp://keyserver.ubuntu.com

Keep on reading!

Create and export a GlusterFS volume with NFS-Ganesha in CentOS 8

The GlusterFS built-in NFS server supports only NFS version 3. GlusterFS offers NFS exports using NFS-Ganesha, which supports the NFS version 3 and 4 protocols.
NFS-Ganesha is a user-mode file sharing server, which offers a GlusterFS plugin to export GlusterFS volumes. In the following article, NFS-Ganesha and GlusterFS are installed, and a simple GlusterFS volume is created and then exported through the NFS version 3 and 4 protocols.
The versions of the software used in this article:

  • CentOS Stream release 8 (25.04.2021)
  • GlusterFS 8.4
  • NFS-Ganesha 3.5

STEP 1) Install GlusterFS.

dnf install -y centos-release-gluster
dnf install -y glusterfs-server

The first line installs a new repository under the SIG management – https://wiki.centos.org/SpecialInterestGroup/Storage. The second line installs the GlusterFS server.

STEP 2) Install NFS-Ganesha.

dnf install -y centos-release-nfs-ganesha30
dnf install -y nfs-ganesha nfs-ganesha-gluster

The first line again installs a new repository under the SIG management and the second line installs the NFS-Ganesha server with the Gluster plugin.

STEP 3) Create a GlusterFS volume.

Start the GlusterFS server and create a simple 3-replica volume. Start GlusterFS on all three nodes and enable the GlusterFS communication between the nodes using the firewall-cmd utility. Execute the following commands:

systemctl start glusterd
firewall-cmd --permanent --new-zone=glusternodes
firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.200
firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.201
firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.202
firewall-cmd --permanent --zone=glusternodes --add-service=glusterfs
firewall-cmd --reload

On the first node, create the GlusterFS volume. First, add glnode2 and glnode3 to the cluster.

gluster peer probe glnode2
gluster peer probe glnode3
gluster volume create VOL1 replica 3 transport tcp glnode1:/mnt/storage/gluster/brick glnode2:/mnt/storage/gluster/brick glnode3:/mnt/storage/gluster/brick
gluster volume start VOL1
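
The export itself comes later in the article; purely as a hedged sketch, a minimal /etc/ganesha/ganesha.conf export block for this volume might look like the classic GlusterFS FSAL example (all values are assumptions):

# /etc/ganesha/ganesha.conf – sketch of a GlusterFS export
EXPORT {
    Export_Id = 1;
    Path = "/VOL1";
    Pseudo = "/VOL1";
    Access_Type = RW;
    Squash = No_root_squash;
    Protocols = "3","4";
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";
        Volume = "VOL1";
    }
}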

Keep on reading!

glusterfs with localhost (127.0.0.1) nodes on different servers – glusterfs volume with 3 replicas

Binding the GlusterFS nodes to a physical interface may lead to local availability problems, even for replication nodes. Bringing down the physical interface will bring down the nodes, even the local replica for the local mounts and applications.

  • 100% local uptime. The local replica will be 100% available. The loopback interface is always up! Network interfaces are not always up, because of network reloads or cable unplugs. A cable unplug (or a port going down during a switch configuration reload) could make the local node unavailable for a short time, even for the local system!
  • Node resolution relies on /etc/hosts records, not on the network and a remote DNS system.
  • No speed limit. Reads from the local system through the loopback interface could easily go above 1G or even 10G. Building nodes with replicas over a 1G network is probably much more affected than over a network with 10G connectivity. Reading from a node through the loopback interface could pass 10G, even though the server is connected to a 1G network!

An additional note – another variant of the solution proposed here is to use a virtual interface to bind the IP of the GlusterFS brick. The most common type of virtual interface for this purpose is a bridge interface holding the IP.

The example here is to bring up a GlusterFS volume in replication mode with 3 servers, i.e. 3 replicas. Each server may mount the GlusterFS volume with name VOL1 locally, and it would not become unavailable if the main interface goes down.

  • server’s hostname: node1, hostname for the replica brick: glnode1. IP: 192.168.0.20, but glnode1 locally resolves to 127.0.0.1 through /etc/hosts.
  • server’s hostname: node2, hostname for the replica brick: glnode2. IP: 192.168.0.30, but glnode2 locally resolves to 127.0.0.1 through /etc/hosts.
  • server’s hostname: node3, hostname for the replica brick: glnode3. IP: 192.168.0.31, but glnode3 locally resolves to 127.0.0.1 through /etc/hosts.

Of course, the server’s hostname could be used, but it is better to have a separate domain for the GlusterFS bricks. Sometimes server hostnames must resolve to a real IP, because some software may rely on that, too.

And here are all the commands to bring up the GlusterFS volume on 3 servers:

STEP 1) Install GlusterFS software and initial configuration.

There are GlusterFS packages in the official CentOS 8 repositories, but a newer version is available in the Storage SIG repository. The GlusterFS version installed in this article is 8.4. Install the software on node1, node2, and node3.

yum install -y centos-release-gluster
yum install -y glusterfs-server

Add the following lines at the end of node1:/etc/hosts file.

127.0.0.1 glnode1
192.168.0.30 glnode2
192.168.0.31 glnode3

Add the following lines at the end of node2:/etc/hosts file.

192.168.0.20 glnode1
127.0.0.1 glnode2
192.168.0.31 glnode3

Add the following lines at the end of node3:/etc/hosts file.

192.168.0.20 glnode1   
192.168.0.30 glnode2
127.0.0.1 glnode3

Start the GlusterFS service on all three nodes:

systemctl start glusterd

Mount the storage device if any and make the directory where the GlusterFS brick will reside:

mount /mnt/storage/
mkdir -p /mnt/storage/gluster/brick

STEP 2) Configure the firewall.

CentOS 8 uses firewalld. Here, a new zone for GlusterFS is created and the GlusterFS service is added to the whitelist of the new zone. The three IPs of the nodes are also added to the new zone:

firewall-cmd --permanent --new-zone=glusternodes
firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.20
firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.30
firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.31
firewall-cmd --permanent --zone=glusternodes --add-service=glusterfs
firewall-cmd --reload

STEP 3) Add peers to the GlusterFS cluster and create a 3 node replica volume.

gluster peer probe glnode2
gluster peer probe glnode3
gluster volume create VOL1 replica 3 transport tcp glnode1:/mnt/storage/gluster/brick glnode2:/mnt/storage/gluster/brick glnode3:/mnt/storage/gluster/brick
gluster volume start VOL1

The GlusterFS volume can be mounted with:

mount -t glusterfs glnode1:/VOL1 /mnt/VOL1/

And /etc/fstab sample line:

glnode1:/VOL1 /mnt/VOL1 glusterfs defaults,noatime,direct-io-mode=disable 0 0

Always use the local hostname for the current node server. If you would like to mount the volume VOL1 on node1, use glnode1:/VOL1 and so on.
Bringing down the physical interface of the server, which is connected to the Internet (i.e. the network interface with the real IP), would not make the GlusterFS brick unavailable for the local mounts and applications.

In a cluster with only replicas, the local application will just continue using the mounted GlusterFS volume (or native GlusterFS clients) relying only on the local Gluster brick till the main Internet connection comes back.

Create 3 node replica volume – the whole output

[root@node1 ~]# yum install -y centos-release-gluster
CentOS Stream 8 - AppStream                                                                                 5.8 MB/s | 6.7 MB     00:01    
CentOS Stream 8 - BaseOS                                                                                    2.0 MB/s | 2.3 MB     00:01    
CentOS Stream 8 - Extras                                                                                     22 kB/s | 9.1 kB     00:00    
Dependencies resolved.
============================================================================================================================================
 Package                                          Architecture              Version                         Repository                 Size
============================================================================================================================================
Installing:
 centos-release-gluster8                          noarch                    1.0-1.el8                       extras                    9.3 k
Installing dependencies:
 centos-release-storage-common                    noarch                    2-2.el8                         extras                    9.4 k

Transaction Summary
============================================================================================================================================
Install  2 Packages

Total download size: 19 k
Installed size: 2.4 k
Downloading Packages:
(1/2): centos-release-gluster8-1.0-1.el8.noarch.rpm                                                         136 kB/s | 9.3 kB     00:00    
(2/2): centos-release-storage-common-2-2.el8.noarch.rpm                                                     145 kB/s | 9.4 kB     00:00    
--------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                        27 kB/s |  19 kB     00:00     
warning: /var/cache/dnf/extras-9705a089504ff150/packages/centos-release-gluster8-1.0-1.el8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 8483c65d: NOKEY
CentOS Stream 8 - Extras                                                                                    725 kB/s | 1.6 kB     00:00    
Importing GPG key 0x8483C65D:
 Userid     : "CentOS (CentOS Official Signing Key) <security@centos.org>"
 Fingerprint: 99DB 70FA E1D7 CE22 7FB6 4882 05B5 55B3 8483 C65D
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                    1/1 
  Installing       : centos-release-storage-common-2-2.el8.noarch                                                                       1/2 
  Installing       : centos-release-gluster8-1.0-1.el8.noarch                                                                           2/2 
  Running scriptlet: centos-release-gluster8-1.0-1.el8.noarch                                                                           2/2 
  Verifying        : centos-release-gluster8-1.0-1.el8.noarch                                                                           1/2 
  Verifying        : centos-release-storage-common-2-2.el8.noarch                                                                       2/2 

Installed:
  centos-release-gluster8-1.0-1.el8.noarch                           centos-release-storage-common-2-2.el8.noarch                          

Complete!
[root@node1 ~]# yum install -y glusterfs-server
Last metadata expiration check: 0:00:14 ago on Wed Apr 14 13:01:50 2021.
Dependencies resolved.
============================================================================================================================================
 Package                                      Architecture          Version                            Repository                      Size
============================================================================================================================================
Installing:
 glusterfs-server                             x86_64                8.4-1.el8                          centos-gluster8                1.4 M
Installing dependencies:
 attr                                         x86_64                2.4.48-3.el8                       baseos                          68 k
 device-mapper-event                          x86_64                8:1.02.175-5.el8                   baseos                         269 k
 device-mapper-event-libs                     x86_64                8:1.02.175-5.el8                   baseos                         269 k
 device-mapper-persistent-data                x86_64                0.8.5-4.el8                        baseos                         468 k
 glusterfs                                    x86_64                8.4-1.el8                          centos-gluster8                689 k
 glusterfs-cli                                x86_64                8.4-1.el8                          centos-gluster8                214 k
 glusterfs-client-xlators                     x86_64                8.4-1.el8                          centos-gluster8                899 k
 glusterfs-fuse                               x86_64                8.4-1.el8                          centos-gluster8                171 k
 libaio                                       x86_64                0.3.112-1.el8                      baseos                          33 k
 libgfapi0                                    x86_64                8.4-1.el8                          centos-gluster8                125 k
 libgfchangelog0                              x86_64                8.4-1.el8                          centos-gluster8                 67 k
 libgfrpc0                                    x86_64                8.4-1.el8                          centos-gluster8                 89 k
 libgfxdr0                                    x86_64                8.4-1.el8                          centos-gluster8                 61 k
 libglusterd0                                 x86_64                8.4-1.el8                          centos-gluster8                 45 k
 libglusterfs0                                x86_64                8.4-1.el8                          centos-gluster8                350 k
 lvm2                                         x86_64                8:2.03.11-5.el8                    baseos                         1.6 M
 lvm2-libs                                    x86_64                8:2.03.11-5.el8                    baseos                         1.1 M
 psmisc                                       x86_64                23.1-5.el8                         baseos                         151 k
 python3-pyxattr                              x86_64                0.5.3-18.el8                       centos-gluster8                 35 k
 rpcbind                                      x86_64                1.2.5-8.el8                        baseos                          70 k
 userspace-rcu                                x86_64                0.10.1-4.el8                       baseos                         101 k

Transaction Summary
============================================================================================================================================
Install  22 Packages

Total download size: 8.2 M
Installed size: 24 M
Downloading Packages:
(1/22): glusterfs-cli-8.4-1.el8.x86_64.rpm                                                                  1.1 MB/s | 214 kB     00:00    
(2/22): glusterfs-8.4-1.el8.x86_64.rpm                                                                      2.6 MB/s | 689 kB     00:00    
(3/22): glusterfs-fuse-8.4-1.el8.x86_64.rpm                                                                 1.1 MB/s | 171 kB     00:00    
(4/22): glusterfs-client-xlators-8.4-1.el8.x86_64.rpm                                                       2.3 MB/s | 899 kB     00:00    
(5/22): libgfapi0-8.4-1.el8.x86_64.rpm                                                                      1.8 MB/s | 125 kB     00:00    
(6/22): libgfchangelog0-8.4-1.el8.x86_64.rpm                                                                755 kB/s |  67 kB     00:00    
(7/22): libgfrpc0-8.4-1.el8.x86_64.rpm                                                                      756 kB/s |  89 kB     00:00    
(8/22): libgfxdr0-8.4-1.el8.x86_64.rpm                                                                      579 kB/s |  61 kB     00:00    
(9/22): libglusterd0-8.4-1.el8.x86_64.rpm                                                                   641 kB/s |  45 kB     00:00    
(10/22): glusterfs-server-8.4-1.el8.x86_64.rpm                                                              3.4 MB/s | 1.4 MB     00:00    
(11/22): libglusterfs0-8.4-1.el8.x86_64.rpm                                                                 1.0 MB/s | 350 kB     00:00    
(12/22): python3-pyxattr-0.5.3-18.el8.x86_64.rpm                                                             97 kB/s |  35 kB     00:00    
(13/22): attr-2.4.48-3.el8.x86_64.rpm                                                                        85 kB/s |  68 kB     00:00    
(14/22): device-mapper-event-1.02.175-5.el8.x86_64.rpm                                                      342 kB/s | 269 kB     00:00    
(15/22): device-mapper-event-libs-1.02.175-5.el8.x86_64.rpm                                                 338 kB/s | 269 kB     00:00    
(16/22): libaio-0.3.112-1.el8.x86_64.rpm                                                                    679 kB/s |  33 kB     00:00    
(17/22): device-mapper-persistent-data-0.8.5-4.el8.x86_64.rpm                                               1.5 MB/s | 468 kB     00:00    
(18/22): psmisc-23.1-5.el8.x86_64.rpm                                                                       1.5 MB/s | 151 kB     00:00    
(19/22): rpcbind-1.2.5-8.el8.x86_64.rpm                                                                     1.2 MB/s |  70 kB     00:00    
(20/22): lvm2-libs-2.03.11-5.el8.x86_64.rpm                                                                 3.1 MB/s | 1.1 MB     00:00    
(21/22): userspace-rcu-0.10.1-4.el8.x86_64.rpm                                                              474 kB/s | 101 kB     00:00    
(22/22): lvm2-2.03.11-5.el8.x86_64.rpm                                                                      3.3 MB/s | 1.6 MB     00:00    
--------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                       2.8 MB/s | 8.2 MB     00:02     
warning: /var/cache/dnf/centos-gluster8-ae72c2c38de8ee20/packages/glusterfs-8.4-1.el8.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID e451e5b5: NOKEY
CentOS-8 - Gluster 8                                                                                        1.0 MB/s | 1.0 kB     00:00    
Importing GPG key 0xE451E5B5:
 Userid     : "CentOS Storage SIG (http://wiki.centos.org/SpecialInterestGroup/Storage) <security@centos.org>"
 Fingerprint: 7412 9C0B 173B 071A 3775 951A D4A2 E50B E451 E5B5
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                    1/1 
  Installing       : libgfxdr0-8.4-1.el8.x86_64                                                                                        1/22 
  Running scriptlet: libgfxdr0-8.4-1.el8.x86_64                                                                                        1/22 
  Installing       : libglusterfs0-8.4-1.el8.x86_64                                                                                    2/22 
  Running scriptlet: libglusterfs0-8.4-1.el8.x86_64                                                                                    2/22 
  Installing       : libgfrpc0-8.4-1.el8.x86_64                                                                                        3/22 
  Running scriptlet: libgfrpc0-8.4-1.el8.x86_64                                                                                        3/22 
  Installing       : libaio-0.3.112-1.el8.x86_64                                                                                       4/22 
  Installing       : glusterfs-client-xlators-8.4-1.el8.x86_64                                                                         5/22 
  Installing       : device-mapper-event-libs-8:1.02.175-5.el8.x86_64                                                                  6/22 
  Running scriptlet: glusterfs-8.4-1.el8.x86_64                                                                                        7/22 
  Installing       : glusterfs-8.4-1.el8.x86_64                                                                                        7/22 
  Running scriptlet: glusterfs-8.4-1.el8.x86_64                                                                                        7/22 
  Installing       : libglusterd0-8.4-1.el8.x86_64                                                                                     8/22 
  Running scriptlet: libglusterd0-8.4-1.el8.x86_64                                                                                     8/22 
  Installing       : glusterfs-cli-8.4-1.el8.x86_64                                                                                    9/22 
  Installing       : device-mapper-event-8:1.02.175-5.el8.x86_64                                                                      10/22 
  Running scriptlet: device-mapper-event-8:1.02.175-5.el8.x86_64                                                                      10/22 
  Installing       : lvm2-libs-8:2.03.11-5.el8.x86_64                                                                                 11/22 
  Installing       : libgfapi0-8.4-1.el8.x86_64                                                                                       12/22 
  Running scriptlet: libgfapi0-8.4-1.el8.x86_64                                                                                       12/22 
  Installing       : device-mapper-persistent-data-0.8.5-4.el8.x86_64                                                                 13/22 
  Installing       : lvm2-8:2.03.11-5.el8.x86_64                                                                                      14/22 
  Running scriptlet: lvm2-8:2.03.11-5.el8.x86_64                                                                                      14/22 
  Installing       : libgfchangelog0-8.4-1.el8.x86_64                                                                                 15/22 
  Running scriptlet: libgfchangelog0-8.4-1.el8.x86_64                                                                                 15/22 
  Installing       : userspace-rcu-0.10.1-4.el8.x86_64                                                                                16/22 
  Running scriptlet: userspace-rcu-0.10.1-4.el8.x86_64                                                                                16/22 
  Running scriptlet: rpcbind-1.2.5-8.el8.x86_64                                                                                       17/22 
  Installing       : rpcbind-1.2.5-8.el8.x86_64                                                                                       17/22 
  Running scriptlet: rpcbind-1.2.5-8.el8.x86_64                                                                                       17/22 
  Installing       : psmisc-23.1-5.el8.x86_64                                                                                         18/22 
  Installing       : attr-2.4.48-3.el8.x86_64                                                                                         19/22 
  Installing       : glusterfs-fuse-8.4-1.el8.x86_64                                                                                  20/22 
  Installing       : python3-pyxattr-0.5.3-18.el8.x86_64                                                                              21/22 
  Installing       : glusterfs-server-8.4-1.el8.x86_64                                                                                22/22 
  Running scriptlet: glusterfs-server-8.4-1.el8.x86_64                                                                                22/22 
  Verifying        : glusterfs-8.4-1.el8.x86_64                                                                                        1/22 
  Verifying        : glusterfs-cli-8.4-1.el8.x86_64                                                                                    2/22 
  Verifying        : glusterfs-client-xlators-8.4-1.el8.x86_64                                                                         3/22 
  Verifying        : glusterfs-fuse-8.4-1.el8.x86_64                                                                                   4/22 
  Verifying        : glusterfs-server-8.4-1.el8.x86_64                                                                                 5/22 
  Verifying        : libgfapi0-8.4-1.el8.x86_64                                                                                        6/22 
  Verifying        : libgfchangelog0-8.4-1.el8.x86_64                                                                                  7/22 
  Verifying        : libgfrpc0-8.4-1.el8.x86_64                                                                                        8/22 
  Verifying        : libgfxdr0-8.4-1.el8.x86_64                                                                                        9/22 
  Verifying        : libglusterd0-8.4-1.el8.x86_64                                                                                    10/22 
  Verifying        : libglusterfs0-8.4-1.el8.x86_64                                                                                   11/22 
  Verifying        : python3-pyxattr-0.5.3-18.el8.x86_64                                                                              12/22 
  Verifying        : attr-2.4.48-3.el8.x86_64                                                                                         13/22 
  Verifying        : device-mapper-event-8:1.02.175-5.el8.x86_64                                                                      14/22 
  Verifying        : device-mapper-event-libs-8:1.02.175-5.el8.x86_64                                                                 15/22 
  Verifying        : device-mapper-persistent-data-0.8.5-4.el8.x86_64                                                                 16/22 
  Verifying        : libaio-0.3.112-1.el8.x86_64                                                                                      17/22 
  Verifying        : lvm2-8:2.03.11-5.el8.x86_64                                                                                      18/22 
  Verifying        : lvm2-libs-8:2.03.11-5.el8.x86_64                                                                                 19/22 
  Verifying        : psmisc-23.1-5.el8.x86_64                                                                                         20/22 
  Verifying        : rpcbind-1.2.5-8.el8.x86_64                                                                                       21/22 
  Verifying        : userspace-rcu-0.10.1-4.el8.x86_64                                                                                22/22 

Installed:
  attr-2.4.48-3.el8.x86_64                                             device-mapper-event-8:1.02.175-5.el8.x86_64                         
  device-mapper-event-libs-8:1.02.175-5.el8.x86_64                     device-mapper-persistent-data-0.8.5-4.el8.x86_64                    
  glusterfs-8.4-1.el8.x86_64                                           glusterfs-cli-8.4-1.el8.x86_64                                      
  glusterfs-client-xlators-8.4-1.el8.x86_64                            glusterfs-fuse-8.4-1.el8.x86_64                                     
  glusterfs-server-8.4-1.el8.x86_64                                    libaio-0.3.112-1.el8.x86_64                                         
  libgfapi0-8.4-1.el8.x86_64                                           libgfchangelog0-8.4-1.el8.x86_64                                    
  libgfrpc0-8.4-1.el8.x86_64                                           libgfxdr0-8.4-1.el8.x86_64                                          
  libglusterd0-8.4-1.el8.x86_64                                        libglusterfs0-8.4-1.el8.x86_64                                      
  lvm2-8:2.03.11-5.el8.x86_64                                          lvm2-libs-8:2.03.11-5.el8.x86_64                                    
  psmisc-23.1-5.el8.x86_64                                             python3-pyxattr-0.5.3-18.el8.x86_64                                 
  rpcbind-1.2.5-8.el8.x86_64                                           userspace-rcu-0.10.1-4.el8.x86_64                                   

Complete!
[root@node1 ~]# systemctl start glusterd
[root@node1 ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2021-04-14 13:06:39 UTC; 3s ago
     Docs: man:glusterd(8)
  Process: 10151 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCC>
 Main PID: 10152 (glusterd)
    Tasks: 9 (limit: 11409)
   Memory: 5.6M
   CGroup: /system.slice/glusterd.service
           └─10152 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Apr 14 13:06:39 node1 systemd[1]: Starting GlusterFS, a clustered file-system server...
Apr 14 13:06:39 node1 systemd[1]: Started GlusterFS, a clustered file-system server.
[root@node1 ~]# tail -n 3 /etc/hosts 
127.0.0.1 glnode1
192.168.0.30 glnode2
192.168.0.31 glnode3
[root@node1 ~]# ping glnode1
PING glnode1 (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.050 ms
^C
--- glnode1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1016ms
rtt min/avg/max/mdev = 0.050/0.054/0.058/0.004 ms
[root@node1 ~]# mkdir -p /mnt/storage/gluster/brick
[root@node1 ~]# firewall-cmd --permanent --new-zone=glusternodes
success
[root@node1 ~]# firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.20
success
[root@node1 ~]# firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.30
success
[root@node1 ~]# firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.31
success
[root@node1 ~]# firewall-cmd --permanent --zone=glusternodes --add-service=glusterfs
success
[root@node1 ~]# firewall-cmd --reload
success
[root@node1 ~]# firewall-cmd --zone=glusternodes --list-all
glusternodes (active)
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 192.168.0.20 192.168.0.30 192.168.0.31
  services: glusterfs
  ports: 
  protocols: 
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
[root@node1 ~]# gluster peer probe glnode2
peer probe: success
[root@node1 ~]# gluster peer probe glnode3
peer probe: success
[root@node1 ~]# gluster peer status
Number of Peers: 2

Hostname: glnode2
Uuid: ab63f0a0-0a72-4fcd-9f34-b88040d1a8e3
State: Peer in Cluster (Connected)

Hostname: glnode3
Uuid: 439ccd19-a95e-427c-ab72-6b65effcbe06
State: Peer in Cluster (Connected)
[root@node1 ~]# gluster volume create VOL1 replica 3 transport tcp glnode1:/mnt/storage/gluster/brick glnode2:/mnt/storage/gluster/brick glnode3:/mnt/storage/gluster/brick
volume create: VOL1: success: please start the volume to access data
[root@node1 ~]# gluster volume start VOL1
volume start: VOL1: success
[root@node1 ~]# gluster volume info VOL1
 
Volume Name: VOL1
Type: Replicate
Volume ID: 7da7bb05-2c9b-464b-b3f9-8940eeb5b0bb
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: glnode1:/mnt/storage/gluster/brick
Brick2: glnode2:/mnt/storage/gluster/brick
Brick3: glnode3:/mnt/storage/gluster/brick
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@node1 storage]# mkdir -p /mnt/VOL1
[root@node1 storage]# tail -n 1 /etc/fstab 
glnode1:/VOL1 /mnt/VOL1 glusterfs defaults,noatime,direct-io-mode=disable 0 0
[root@node1 storage]# mount /mnt/VOL1/
[root@node1 storage]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        892M     0  892M   0% /dev
tmpfs           909M     0  909M   0% /dev/shm
tmpfs           909M  8.5M  901M   1% /run
tmpfs           909M     0  909M   0% /sys/fs/cgroup
/dev/sda1        95G  1.5G   89G   2% /
/dev/sda3       976M  148M  762M  17% /boot
tmpfs           182M     0  182M   0% /run/user/0
glnode1:/VOL1    95G  2.4G   89G   3% /mnt/VOL1
[root@node1 storage]# ls -altr /mnt/VOL1/
total 8
drwxr-xr-x. 3 root root 4096 Apr 14 13:31 .
drwxr-xr-x. 4 root root 4096 Apr 14 13:34 ..