Delete the ZooKeeper logs and snapshots with PurgeTxnLog from the command line

Newer versions of ZooKeeper can auto-purge the logs and snapshots (if autopurge.snapRetainCount and autopurge.purgeInterval are enabled in the configuration), but if an older version is used, or the administrator would like to force freeing space, the ZooKeeper logs and snapshots can also be removed from the command line.
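
For reference, the built-in auto-purge is enabled with two lines in zoo.cfg; a minimal sketch, where the values are only examples:

autopurge.snapRetainCount=3
autopurge.purgeInterval=24

snapRetainCount is the number of most recent snapshots (and their corresponding transaction logs) to keep, and purgeInterval is the purge interval in hours (0 disables the purge task).
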
The manual shows how to do it (https://archive.cloudera.com/cdh4/cdh/4/zookeeper/zookeeperAdmin.html#sc_advancedConfiguration):

 java -cp zookeeper.jar:lib/slf4j-api-1.6.1.jar:lib/slf4j-log4j12-1.6.1.jar:lib/log4j-1.2.15.jar:conf \
     org.apache.zookeeper.server.PurgeTxnLog <dataDir> <snapDir> -n <count>
  • Load all the needed jars, with the currently installed versions, from the ZooKeeper install directory appended with /lib
  • use the class org.apache.zookeeper.server.PurgeTxnLog
  • Append three parameters: “<dataDir> <snapDir> -n <count>”. <dataDir> is in fact the /datalog directory and <snapDir> is the directory where the snapshots are kept.

A clearer and more detailed syntax:

java -cp [zookeeper & slf4j-api & slf4j-log4j12 & log4j & ... ].jar:conf org.apache.zookeeper.server.PurgeTxnLog \
     <base_datalog_dir> <base_snapshot_dir> -n <count>

Here is an example with zookeeper 3.7.0:

root@zoo1:~# cd /apache-zookeeper-3.7.0-bin/lib
root@zoo1:/apache-zookeeper-3.7.0-bin/lib# java -cp zookeeper-3.7.0.jar:slf4j-api-1.7.30.jar:slf4j-log4j12-1.7.30.jar:log4j-1.2.17.jar:zookeeper-jute-3.7.0.jar:snappy-java-1.1.7.7.jar:conf org.apache.zookeeper.server.PurgeTxnLog /datalog/ /data/ -n 3
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.server.persistence.FileTxnSnapLog).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Removing file: Sep 9, 2021, 6:07:13 AM  /datalog/version-2/log.4d6800000001
Removing file: Sep 9, 2021, 5:06:57 AM  /data/version-2/snapshot.4d6700c1986c
Removing file: Sep 9, 2021, 6:07:13 AM  /data/version-2/snapshot.4d6800010b26
Removing file: Sep 9, 2021, 4:45:10 AM  /data/version-2/snapshot.4d6700bef73a
Removing file: Sep 9, 2021, 5:44:23 AM  /data/version-2/snapshot.4d6700c62032
Removing file: Sep 9, 2021, 5:31:04 AM  /data/version-2/snapshot.4d6700c48e0f
Removing file: Sep 9, 2021, 5:37:37 AM  /data/version-2/snapshot.4d6700c55610
Removing file: Sep 9, 2021, 5:14:36 AM  /data/version-2/snapshot.4d6700c29039
Removing file: Sep 9, 2021, 5:22:33 AM  /data/version-2/snapshot.4d6700c387b3
Removing file: Sep 9, 2021, 4:57:02 AM  /data/version-2/snapshot.4d6700c0571e
Removing file: Sep 9, 2021, 5:56:59 AM  /data/version-2/snapshot.4d6700c63856

A “Removing file” message should be printed for each deleted file; if not, the directories’ paths are probably wrong!

This is how ZooKeeper is organized on this system:

  1. The ZooKeeper installation directory is /apache-zookeeper-3.7.0-bin, meaning the lib directory with the jars is under /apache-zookeeper-3.7.0-bin/lib:
    root@zoo1:/apache-zookeeper-3.7.0-bin/lib# ls /apache-zookeeper-3.7.0-bin/lib/
    audience-annotations-0.12.0.jar               jline-2.14.6.LICENSE.txt                               netty-transport-native-unix-common-4.1.59.Final.LICENSE.txt
    commons-cli-1.4.jar                           jline-2.14.6.jar                                       netty-transport-native-unix-common-4.1.59.Final.jar
    jackson-annotations-2.10.5.jar                log4j-1.2.17.LICENSE.txt                               simpleclient-0.9.0.LICENSE.txt
    jackson-core-2.10.5.jar                       log4j-1.2.17.jar                                       simpleclient-0.9.0.jar
    jackson-databind-2.10.5.1.jar                 metrics-core-4.1.12.1.jar                              simpleclient_common-0.9.0.jar
    javax.servlet-api-3.1.0.jar                   metrics-core-4.1.12.1.jar_LICENSE.txt                  simpleclient_common-0.9.0_LICENSE.txt
    jetty-http-9.4.38.v20210224.LICENSE.txt       netty-buffer-4.1.59.Final.LICENSE.txt                  simpleclient_hotspot-0.9.0.jar
    jetty-http-9.4.38.v20210224.jar               netty-buffer-4.1.59.Final.jar                          simpleclient_hotspot-0.9.0_LICENSE.txt
    jetty-io-9.4.38.v20210224.LICENSE.txt         netty-codec-4.1.59.Final.LICENSE.txt                   simpleclient_servlet-0.9.0.jar
    jetty-io-9.4.38.v20210224.jar                 netty-codec-4.1.59.Final.jar                           simpleclient_servlet-0.9.0_LICENSE.txt
    jetty-security-9.4.38.v20210224.LICENSE.txt   netty-common-4.1.59.Final.LICENSE.txt                  slf4j-1.7.30.LICENSE.txt
    jetty-security-9.4.38.v20210224.jar           netty-common-4.1.59.Final.jar                          slf4j-api-1.7.30.jar
    jetty-server-9.4.38.v20210224.LICENSE.txt     netty-handler-4.1.59.Final.LICENSE.txt                 slf4j-log4j12-1.7.30.jar
    jetty-server-9.4.38.v20210224.jar             netty-handler-4.1.59.Final.jar                         snappy-java-1.1.7.7.jar
    jetty-servlet-9.4.38.v20210224.LICENSE.txt    netty-resolver-4.1.59.Final.LICENSE.txt                snappy-java-1.1.7.7.jar_LICENSE.txt
    jetty-servlet-9.4.38.v20210224.jar            netty-resolver-4.1.59.Final.jar                        zookeeper-3.7.0.jar
    jetty-util-9.4.38.v20210224.LICENSE.txt       netty-transport-4.1.59.Final.LICENSE.txt               zookeeper-jute-3.7.0.jar
    jetty-util-9.4.38.v20210224.jar               netty-transport-4.1.59.Final.jar                       zookeeper-prometheus-metrics-3.7.0.jar
    jetty-util-ajax-9.4.38.v20210224.LICENSE.txt  netty-transport-native-epoll-4.1.59.Final.LICENSE.txt
    jetty-util-ajax-9.4.38.v20210224.jar          netty-transport-native-epoll-4.1.59.Final.jar
    
  2. The datalog directory is under /datalog:
    root@zoo1:/apache-zookeeper-3.7.0-bin/lib# ls -altr /datalog/
    total 12
    drwxr-xr-x 1 root      root      4096 Sep  9 05:56 ..
    drwxr-xr-x 3 zookeeper root      4096 Sep  9 05:56 .
    drwxr-xr-x 2 zookeeper zookeeper 4096 Sep  9 07:04 version-2
    root@zoo1:/apache-zookeeper-3.7.0-bin/lib# ls -altr /datalog/version-2/
    total 131852
    drwxr-xr-x 3 zookeeper root          4096 Sep  9 05:56 ..
    -rw-r--r-- 1 zookeeper zookeeper 67108880 Sep  9 06:18 log.4d6800010b28
    -rw-r--r-- 1 zookeeper zookeeper 67108880 Sep  9 06:29 log.4d6800025751
    -rw-r--r-- 1 zookeeper zookeeper 67108880 Sep  9 06:39 log.4d680003b40f
    -rw-r--r-- 1 zookeeper zookeeper 67108880 Sep  9 06:52 log.4d680004d2d8
    drwxr-xr-x 3 zookeeper zookeeper     4096 Sep  9 06:52 .
    -rw-r--r-- 1 zookeeper zookeeper 67108880 Sep  9 07:03 log.4d6800064eda
    
  3. The snapshot directory is under /data:
    root@zoo1:/apache-zookeeper-3.7.0-bin/lib# ls -altr /data/
    total 968
    -rw-r--r-- 1 zookeeper root           2 Aug 31 07:47 myid
    drwxr-xr-x 3 zookeeper root        4096 Aug 31 07:55 .
    drwxr-xr-x 1 root      root        4096 Sep  9 05:56 ..
    drwxr-xr-x 3 zookeeper zookeeper 974848 Sep  9 07:04 version-2
    root@zoo1:/apache-zookeeper-3.7.0-bin# ls -altr /data/version-2/
    total 79004
    drwxr-xr-x 3 zookeeper root         4096 Aug 31 07:55 ..
    -rw-r--r-- 1 zookeeper zookeeper 6402486 Sep  9 04:45 snapshot.4d6700bef73a
    -rw-r--r-- 1 zookeeper zookeeper 6068093 Sep  9 04:57 snapshot.4d6700c0571e
    -rw-r--r-- 1 zookeeper zookeeper 6905356 Sep  9 05:06 snapshot.4d6700c1986c
    -rw-r--r-- 1 zookeeper zookeeper 6484358 Sep  9 05:14 snapshot.4d6700c29039
    -rw-r--r-- 1 zookeeper zookeeper 6319179 Sep  9 05:22 snapshot.4d6700c387b3
    -rw-r--r-- 1 zookeeper zookeeper 6362239 Sep  9 05:31 snapshot.4d6700c48e0f
    -rw-r--r-- 1 zookeeper zookeeper 6280967 Sep  9 05:37 snapshot.4d6700c55610
    -rw-r--r-- 1 zookeeper zookeeper 6251946 Sep  9 05:44 snapshot.4d6700c62032
    -rw-r--r-- 1 zookeeper zookeeper       5 Sep  9 05:56 acceptedEpoch
    -rw-r--r-- 1 zookeeper zookeeper 6208681 Sep  9 05:56 snapshot.4d6700c63856
    -rw-r--r-- 1 zookeeper zookeeper       5 Sep  9 05:56 currentEpoch
    -rw-r--r-- 1 zookeeper zookeeper 7442360 Sep  9 06:07 snapshot.4d6800010b26
    -rw-r--r-- 1 zookeeper zookeeper 7666290 Sep  9 06:18 snapshot.4d680002574f
    -rw-r--r-- 1 zookeeper zookeeper 7467034 Sep  9 06:29 snapshot.4d680003b40d
    drwxr-xr-x 2 root      root         4096 Sep  9 06:32 version-2
    drwxr-xr-x 3 zookeeper zookeeper  974848 Sep  9 06:32 .
    

Do not use /datalog/version-2 or /data/version-2 for the directories; PurgeTxnLog appends the version-2 sub-directory itself, so that is wrong and no files will be removed!

Troubleshooting

If executing the above line outputs a missing Java class error like the one below, the easiest way is to search for the class name in [zookeeper-install-directory]/lib/ with a tool like grep or any other text search tool.

root@zoo1:/apache-zookeeper-3.7.0-bin/lib# java -cp zookeeper-3.7.0.jar:slf4j-api-1.7.30.jar:slf4j-log4j12-1.7.30.jar:log4j-1.2.17.jar:zookeeper-jute-3.7.0.jar:conf org.apache.zookeeper.server.PurgeTxnLog /datalog/ /data/ -n 3
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.server.persistence.FileTxnSnapLog).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.NoClassDefFoundError: org/xerial/snappy/SnappyInputStream
        at org.apache.zookeeper.server.persistence.FileSnap.findNValidSnapshots(FileSnap.java:171)
        at org.apache.zookeeper.server.persistence.FileTxnSnapLog.findNValidSnapshots(FileTxnSnapLog.java:568)
        at org.apache.zookeeper.server.PurgeTxnLog.purge(PurgeTxnLog.java:82)
        at org.apache.zookeeper.server.PurgeTxnLog.main(PurgeTxnLog.java:192)
Caused by: java.lang.ClassNotFoundException: org.xerial.snappy.SnappyInputStream
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(Unknown Source)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(Unknown Source)
        at java.base/java.lang.ClassLoader.loadClass(Unknown Source)
        ... 4 more
root@zoo1:/apache-zookeeper-3.7.0-bin/lib# grep -ir SnappyInputStream
grep: snappy-java-1.1.7.7.jar: binary file matches

There is a match for the name SnappyInputStream in the jar file snappy-java-1.1.7.7.jar, so just include this jar in the java -cp command.
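
Alternatively, instead of chasing missing classes one by one, the whole lib directory can be put on the classpath with a Java wildcard; a hedged sketch, assuming the installation layout shown above:

java -cp '/apache-zookeeper-3.7.0-bin/lib/*:/apache-zookeeper-3.7.0-bin/conf' org.apache.zookeeper.server.PurgeTxnLog /datalog/ /data/ -n 3

The * wildcard expands to all jar files in the directory, so every dependency jar is available to the JVM.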

Easy install of the latest docker-compose with pip3 under Ubuntu

At present, the latest docker-compose versions that can be installed from the Ubuntu 18, 20, and 21 repositories are 1.25 and 1.27, while the latest upstream releases may include significant changes.

For example, depends_on.service.condition: service_healthy was added in version 1.28. Using this feature, it is fairly easy to make one docker container (service) wait for another one before starting.
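
As an illustration only, here is a minimal hypothetical docker-compose.yml sketch of this feature; the service names and images are assumptions, not taken from a real setup:

services:
  db:
    image: mariadb
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
  web:
    image: nginx
    depends_on:
      db:
        condition: service_healthy   # web starts only after db's healthcheck passes (needs docker-compose >= 1.28)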

Here is how easy it is to install and have the latest stable docker-compose version, which is 1.29.2 at the time of writing this article:

STEP 1) Update and upgrade.

Always do this step before installing new software.

sudo apt update
sudo apt upgrade -y

STEP 2) Install pip3 and docker.

pip3 is the package installer for Python 3. When using docker-compose, the Docker software itself is supposed to be installed, too.

apt install python3-pip docker
systemctl start docker

Note that in the Ubuntu repositories the Docker engine is packaged as docker.io; the docker package above actually pulls in wmdocker (visible in the console output below), so installing docker.io instead may be preferable.

STEP 3) Install docker-compose using pip3.

pip3 install docker-compose
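
If a specific release is needed, pip3 can pin the version; a hedged example, assuming 1.29.2 is the desired release:

pip3 install docker-compose==1.29.2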

And here is what a version command prints:

root@srv:~# docker-compose version
docker-compose version 1.29.2, build unknown
docker-py version: 5.0.2
CPython version: 3.8.10
OpenSSL version: OpenSSL 1.1.1f  31 Mar 2020

Just to note, installing packages with programs other than apt may lead to future conflicts!
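
One hedged way to limit such conflicts is to keep docker-compose in its own Python virtual environment instead of the system site-packages; the paths below are assumptions:

apt install -y python3-venv
python3 -m venv /opt/docker-compose-venv
/opt/docker-compose-venv/bin/pip install docker-compose
ln -s /opt/docker-compose-venv/bin/docker-compose /usr/local/bin/docker-compose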

The whole console output of installing docker-compose with pip3

root@srv:~# apt update
Get:1 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
Get:2 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]              
Get:3 http://archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Get:4 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:5 http://archive.ubuntu.com/ubuntu focal/multiverse amd64 Packages [177 kB]
Get:6 http://archive.ubuntu.com/ubuntu focal/restricted amd64 Packages [33.4 kB]
Get:7 http://archive.ubuntu.com/ubuntu focal/universe amd64 Packages [11.3 MB]
Get:8 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages [1275 kB]                  
Get:9 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [1514 kB]        
Get:10 http://archive.ubuntu.com/ubuntu focal-updates/multiverse amd64 Packages [33.3 kB]
Get:11 http://archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [1069 kB]
Get:12 http://archive.ubuntu.com/ubuntu focal-updates/restricted amd64 Packages [575 kB]
Get:13 http://archive.ubuntu.com/ubuntu focal-backports/universe amd64 Packages [6324 B]
Get:14 http://archive.ubuntu.com/ubuntu focal-backports/main amd64 Packages [2668 B]
Get:15 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [1070 kB]
Get:16 http://security.ubuntu.com/ubuntu focal-security/universe amd64 Packages [790 kB]
Get:17 http://security.ubuntu.com/ubuntu focal-security/multiverse amd64 Packages [30.1 kB]
Get:18 http://security.ubuntu.com/ubuntu focal-security/restricted amd64 Packages [525 kB]
Fetched 19.0 MB in 1s (16.7 MB/s)                          
Reading package lists... Done
Building dependency tree       
Reading state information... Done
All packages are up to date.
root@srv:~# apt upgrade -y
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@srv:~# apt install -y python3-pip docker
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  binutils binutils-common binutils-x86-64-linux-gnu build-essential ca-certificates cpp cpp-9 dirmngr dpkg-dev fakeroot file g++ g++-9 gcc gcc-9 gcc-9-base gnupg gnupg-l10n gnupg-utils
  gpg gpg-agent gpg-wks-client gpg-wks-server gpgconf gpgsm libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl libasan5 libasn1-8-heimdal libassuan0 libatomic1
  libbinutils libbsd0 libc-dev-bin libc6-dev libcc1-0 libcrypt-dev libctf-nobfd0 libctf0 libdpkg-perl libexpat1 libexpat1-dev libfakeroot libfile-fcntllock-perl libgcc-9-dev
  libgdbm-compat4 libgdbm6 libglib2.0-0 libglib2.0-data libgomp1 libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu66 libisl22
  libitm1 libkrb5-26-heimdal libksba8 libldap-2.4-2 libldap-common liblocale-gettext-perl liblsan0 libmagic-mgc libmagic1 libmpc3 libmpdec2 libmpfr6 libnpth0 libperl5.30 libpython3-dev
  libpython3-stdlib libpython3.8 libpython3.8-dev libpython3.8-minimal libpython3.8-stdlib libquadmath0 libreadline8 libroken18-heimdal libsasl2-2 libsasl2-modules libsasl2-modules-db
  libsqlite3-0 libssl1.1 libstdc++-9-dev libtsan0 libubsan1 libwind0-heimdal libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxml2 linux-libc-dev make manpages manpages-dev mime-support
  netbase openssl patch perl perl-modules-5.30 pinentry-curses python-pip-whl python3 python3-dev python3-distutils python3-lib2to3 python3-minimal python3-pkg-resources
  python3-setuptools python3-wheel python3.8 python3.8-dev python3.8-minimal readline-common shared-mime-info tzdata wmdocker xdg-user-dirs xz-utils zlib1g-dev
Suggested packages:
  binutils-doc cpp-doc gcc-9-locales dbus-user-session libpam-systemd pinentry-gnome3 tor debian-keyring g++-multilib g++-9-multilib gcc-9-doc gcc-multilib autoconf automake libtool flex
  bison gdb gcc-doc gcc-9-multilib parcimonie xloadimage scdaemon glibc-doc git bzr gdbm-l10n libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal libsasl2-modules-ldap
  libsasl2-modules-otp libsasl2-modules-sql libstdc++-9-doc make-doc man-browser ed diffutils-doc perl-doc libterm-readline-gnu-perl | libterm-readline-perl-perl libb-debug-perl
  liblocale-codes-perl pinentry-doc python3-doc python3-tk python3-venv python-setuptools-doc python3.8-venv python3.8-doc binfmt-support readline-doc
The following NEW packages will be installed:
  binutils binutils-common binutils-x86-64-linux-gnu build-essential ca-certificates cpp cpp-9 dirmngr docker dpkg-dev fakeroot file g++ g++-9 gcc gcc-9 gcc-9-base gnupg gnupg-l10n
  gnupg-utils gpg gpg-agent gpg-wks-client gpg-wks-server gpgconf gpgsm libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl libasan5 libasn1-8-heimdal libassuan0
  libatomic1 libbinutils libbsd0 libc-dev-bin libc6-dev libcc1-0 libcrypt-dev libctf-nobfd0 libctf0 libdpkg-perl libexpat1 libexpat1-dev libfakeroot libfile-fcntllock-perl libgcc-9-dev
  libgdbm-compat4 libgdbm6 libglib2.0-0 libglib2.0-data libgomp1 libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu66 libisl22
  libitm1 libkrb5-26-heimdal libksba8 libldap-2.4-2 libldap-common liblocale-gettext-perl liblsan0 libmagic-mgc libmagic1 libmpc3 libmpdec2 libmpfr6 libnpth0 libperl5.30 libpython3-dev
  libpython3-stdlib libpython3.8 libpython3.8-dev libpython3.8-minimal libpython3.8-stdlib libquadmath0 libreadline8 libroken18-heimdal libsasl2-2 libsasl2-modules libsasl2-modules-db
  libsqlite3-0 libssl1.1 libstdc++-9-dev libtsan0 libubsan1 libwind0-heimdal libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxml2 linux-libc-dev make manpages manpages-dev mime-support
  netbase openssl patch perl perl-modules-5.30 pinentry-curses python-pip-whl python3 python3-dev python3-distutils python3-lib2to3 python3-minimal python3-pip python3-pkg-resources
  python3-setuptools python3-wheel python3.8 python3.8-dev python3.8-minimal readline-common shared-mime-info tzdata wmdocker xdg-user-dirs xz-utils zlib1g-dev
.....
.....
0 upgraded, 128 newly installed, 0 to remove and 0 not upgraded.
Need to get 84.6 MB of archives.
After this operation, 370 MB of additional disk space will be used.
Processing triggers for ca-certificates (20210119~20.04.1) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
root@srv:~# pip3 install docker-compose
Collecting docker-compose
  Downloading docker_compose-1.29.2-py2.py3-none-any.whl (114 kB)
     |████████████████████████████████| 114 kB 12.4 MB/s 
Collecting requests<3,>=2.20.0
  Downloading requests-2.26.0-py2.py3-none-any.whl (62 kB)
     |████████████████████████████████| 62 kB 355 kB/s 
Collecting jsonschema<4,>=2.5.1
  Downloading jsonschema-3.2.0-py2.py3-none-any.whl (56 kB)
     |████████████████████████████████| 56 kB 3.4 MB/s 
Collecting websocket-client<1,>=0.32.0
  Downloading websocket_client-0.59.0-py2.py3-none-any.whl (67 kB)
     |████████████████████████████████| 67 kB 3.3 MB/s 
Collecting texttable<2,>=0.9.0
  Downloading texttable-1.6.4-py2.py3-none-any.whl (10 kB)
Collecting PyYAML<6,>=3.10
  Downloading PyYAML-5.4.1-cp38-cp38-manylinux1_x86_64.whl (662 kB)
     |████████████████████████████████| 662 kB 76.9 MB/s 
Collecting dockerpty<1,>=0.4.1
  Downloading dockerpty-0.4.1.tar.gz (13 kB)
Collecting docker[ssh]>=5
  Downloading docker-5.0.2-py2.py3-none-any.whl (145 kB)
     |████████████████████████████████| 145 kB 119.5 MB/s 
Collecting distro<2,>=1.5.0
  Downloading distro-1.6.0-py2.py3-none-any.whl (19 kB)
Collecting docopt<1,>=0.6.1
  Downloading docopt-0.6.2.tar.gz (25 kB)
Collecting python-dotenv<1,>=0.13.0
  Downloading python_dotenv-0.19.0-py2.py3-none-any.whl (17 kB)
Collecting urllib3<1.27,>=1.21.1
  Downloading urllib3-1.26.6-py2.py3-none-any.whl (138 kB)
     |████████████████████████████████| 138 kB 141.1 MB/s 
Collecting charset-normalizer~=2.0.0; python_version >= "3"
  Downloading charset_normalizer-2.0.4-py3-none-any.whl (36 kB)
Collecting certifi>=2017.4.17
  Downloading certifi-2021.5.30-py2.py3-none-any.whl (145 kB)
     |████████████████████████████████| 145 kB 133.3 MB/s 
Collecting idna<4,>=2.5; python_version >= "3"
  Downloading idna-3.2-py3-none-any.whl (59 kB)
     |████████████████████████████████| 59 kB 1.6 MB/s 
Collecting pyrsistent>=0.14.0
  Downloading pyrsistent-0.18.0-cp38-cp38-manylinux1_x86_64.whl (118 kB)
     |████████████████████████████████| 118 kB 131.3 MB/s 
Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from jsonschema<4,>=2.5.1->docker-compose) (45.2.0)
Collecting six>=1.11.0
  Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting attrs>=17.4.0
  Downloading attrs-21.2.0-py2.py3-none-any.whl (53 kB)
     |████████████████████████████████| 53 kB 899 kB/s 
Collecting paramiko>=2.4.2; extra == "ssh"
  Downloading paramiko-2.7.2-py2.py3-none-any.whl (206 kB)
     |████████████████████████████████| 206 kB 147.7 MB/s 
Collecting cryptography>=2.5
  Downloading cryptography-3.4.8-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.2 MB)
     |████████████████████████████████| 3.2 MB 147.4 MB/s 
Collecting bcrypt>=3.1.3
  Downloading bcrypt-3.2.0-cp36-abi3-manylinux2010_x86_64.whl (63 kB)
     |████████████████████████████████| 63 kB 1.4 MB/s 
Collecting pynacl>=1.0.1
  Downloading PyNaCl-1.4.0-cp35-abi3-manylinux1_x86_64.whl (961 kB)
     |████████████████████████████████| 961 kB 139.4 MB/s 
Collecting cffi>=1.12
  Downloading cffi-1.14.6-cp38-cp38-manylinux1_x86_64.whl (411 kB)
     |████████████████████████████████| 411 kB 84.0 MB/s 
Collecting pycparser
  Downloading pycparser-2.20-py2.py3-none-any.whl (112 kB)
     |████████████████████████████████| 112 kB 140.9 MB/s 
Building wheels for collected packages: dockerpty, docopt
  Building wheel for dockerpty (setup.py) ... done
  Created wheel for dockerpty: filename=dockerpty-0.4.1-py3-none-any.whl size=16604 sha256=d6f2d3d74bad523b1a308a952176a1db84cb604611235c1a5ae1c936cefe7889
  Stored in directory: /root/.cache/pip/wheels/1a/58/0d/9916bf3c72e224e038beb88f669f68b61d2f274df498ff87c6
  Building wheel for docopt (setup.py) ... done
  Created wheel for docopt: filename=docopt-0.6.2-py2.py3-none-any.whl size=13704 sha256=f8c389703e63ff7ec3734b240ba8d62c8f8bd99f3b05ccdcb0de1397aa523655
  Stored in directory: /root/.cache/pip/wheels/56/ea/58/ead137b087d9e326852a851351d1debf4ada529b6ac0ec4e8c
Successfully built dockerpty docopt
Installing collected packages: urllib3, charset-normalizer, certifi, idna, requests, pyrsistent, six, attrs, jsonschema, websocket-client, texttable, PyYAML, dockerpty, pycparser, cffi, cryptography, bcrypt, pynacl, paramiko, docker, distro, docopt, python-dotenv, docker-compose
Successfully installed PyYAML-5.4.1 attrs-21.2.0 bcrypt-3.2.0 certifi-2021.5.30 cffi-1.14.6 charset-normalizer-2.0.4 cryptography-3.4.8 distro-1.6.0 docker-5.0.2 docker-compose-1.29.2 dockerpty-0.4.1 docopt-0.6.2 idna-3.2 jsonschema-3.2.0 paramiko-2.7.2 pycparser-2.20 pynacl-1.4.0 pyrsistent-0.18.0 python-dotenv-0.19.0 requests-2.26.0 six-1.16.0 texttable-1.6.4 urllib3-1.26.6 websocket-client-0.59.0

Unable to continue upgrading an old cacti 0.8.8 to the latest version 1.2.18

Upgrading an old instance of the Cacti monitoring software may become a challenge because of the multiple new recommendations and requirements of the latest version 1.2.x.
There are a couple of PHP recommendations, such as the memory limit and the maximum execution time, and multiple plugin requirements, which, if not fulfilled, prevent the setup from continuing. Second, there are the MySQL recommendations, among them the option innodb_file_format, which in general is recommended to be Barracuda, but older versions of MySQL use Antelope by default!

The upgrade from CACTI version 0.8.8 to CACTI version 1.2.3 is successful, but then the upgrade process just keeps restarting and fails to upgrade to the final target, CACTI version 1.2.18, because of the old MySQL InnoDB file format, Antelope.

Although Barracuda is only recommended and the upgrade process continues through the steps of the setup wizard, it suddenly stops and returns to the welcome install screen.
Setting the option innodb_file_format resolves the problem, and the upgrade setup successfully finishes the upgrade from CACTI version 0.8.8 (with an intermediate upgrade to CACTI version 1.2.3) to CACTI version 1.2.18.

innodb_file_format=barracuda

Probably, this option will become a mandatory MySQL option for upgrading to CACTI versions newer than 1.2.3.
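
A hedged sketch of where to set the option, assuming MySQL 5.6/5.7 (the innodb_file_format variable was removed in MySQL 8.0, where Barracuda is the only format):

# /etc/my.cnf (or a file under /etc/my.cnf.d/), in the [mysqld] section
innodb_file_format=barracuda

Restart the MySQL server and verify the value with:

mysql -e "SHOW VARIABLES LIKE 'innodb_file_format';"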

Several screenshots of the recommendations and requirements for upgrading to CACTI 1.2.18

Keep on reading!

Transfer only a list of files with rsync

Transferring a list of files from one server to another may not be as easy as it looks if the list consists of files with symlinks in their paths. Consider the following example:

/mnt/storage1/dir1/subdir1/file1
/mnt/storage1/dir1/subdir2/file2
/mnt/storage1/dir1/subdir3/file3

But what if the subdir3 is a symlink to a sub-directory:

/mnt/storage1/dir1/subdir3 -> /mnt/storage2/dir1/subdir3

STEP 1) Generate a list of only files following the symlinks if they exist.

The best way is to use the Linux command find:

find -L /mnt/storage1/ -type f > find.files.log

Note that the output is redirected with > rather than &>, so error messages (for example, permission denied) do not end up in the file list.

The option “-L” instructs find to follow symbolic links, so the file test (-type f) matches against the type of the file that a symbolic link points to and not the symlink itself.

STEP 2) rsync the list of the files

There are several options (--copy-links, --copy-dirlinks), which must be used with the rsync command to be able to transfer the list of files whose paths may include symlinked directories:

rsync --copy-links --copy-dirlinks --partial --files-from=/root/find.files.log --verbose --progress --stats --times --perms --owner --group 10.10.10.10::root /

The command above uses the rsync daemon started on the source server, with the root filesystem shared under the module name “root” in the rsync configuration. Here is a sample rsync daemon configuration (/etc/rsyncd.conf):

pid file = /run/rsyncd.pid
use chroot = yes
read only = yes
hosts allow = 10.10.10.10/24
hosts deny = *

[root]
        uid=0
        gid=0
        path = /
        comment = root partition
        exclude = /proc /sys

Of course, a relative path could be used in the rsync daemon configuration if the generated file list also contains relative paths. For simplicity, the whole real path is used here.

In addition, rsync could be used without an rsync daemon, in conjunction with the ssh daemon:

rsync --rsh='ssh -p 22 -l root' --copy-links --copy-dirlinks --partial --files-from=/root/find.files.log --verbose --progress --stats --times --perms --owner --group 10.10.10.10:/ /

Again, because the files in find.files.log have absolute paths, the root / must be used as the destination in the rsync command. Relative paths may be used, too.
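
For completeness, a hedged sketch of the relative-path variant mentioned above, using the ssh form (all paths are assumptions):

cd /mnt/storage1
find -L . -type f > /root/find.files.relative.log
rsync --rsh='ssh -p 22 -l root' --copy-links --copy-dirlinks --partial --files-from=/root/find.files.relative.log 10.10.10.10:/mnt/storage1/ /mnt/storage1/

Here the list contains paths relative to /mnt/storage1, so the source and the destination are both /mnt/storage1/ instead of the root /.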

alter table error 1215 (HY000): Cannot add foreign key constraint

Adding a foreign key may result in the following error:

mysql> ALTER TABLE `table1` ADD CONSTRAINT FK_table2_id FOREIGN KEY (`table2_id`) REFERENCES table2(`ID`);
ERROR 1215 (HY000): Cannot add foreign key constraint

This error can occur in a couple of cases, and it is not very informative; basically, it says there is some kind of compatibility issue between table1.table2_id and table2.ID. Provided the tables and columns exist, the three most common problems are:

  1. One of the two tables is not InnoDB. Check with:
    SHOW CREATE TABLE `table1`;
    
  2. There is a value (or values) in table1.table2_id, which does not exist in table2.ID. Check with a left join (see the sketch after this list):
    SELECT `table1`.`table2_id`, `table2`.`ID` FROM `table1` LEFT JOIN `table2` ON `table1`.`table2_id`=`table2`.`ID`;
    

    Execute the left join to check whether there are NULL values on the table2.ID side. If NULL values exist, records with the missing IDs should be inserted into table2, or this kind of check may be disabled temporarily with:

    SET FOREIGN_KEY_CHECKS=0;
    

    Change it back to 1 after the ALTER clause.

  3. The type or the attributes of the columns are different. Attributes such as UNSIGNED may cause a problem, too!
    ALTER TABLE `table1` CHANGE `table2_id` `table2_id` INT(11) UNSIGNED NOT NULL; 
    

    In this case, the column types are the same, but the attributes are different and the MySQL server throws the error! Change the signed int to unsigned with the above command. After table1.table2_id and table2.ID are of the same type and have the same attributes, the ALTER command will execute successfully.
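
A hedged sketch of the orphan-value check from point 2, filtering only the problematic rows (the database name db1 is an assumption):

mysql db1 -e "SELECT t1.table2_id FROM table1 t1 LEFT JOIN table2 t2 ON t1.table2_id = t2.ID WHERE t2.ID IS NULL;"

Any table2_id values returned here are missing from table2.ID and must be inserted there (or corrected in table1) before the foreign key can be added.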

Here are the initial structures of the above tables:

CREATE TABLE `table1` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `table2_id` int(11) NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=220 DEFAULT CHARSET=utf8
CREATE TABLE `table2` (
  `ID` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `col1` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=642 DEFAULT CHARSET=utf8 

And after a successful ALTER statement, table1 will look like this:

ALTER TABLE `table1` ADD CONSTRAINT `FK_table2_id` FOREIGN KEY (`table2_id`) REFERENCES `table2`(`ID`);
Query OK, 216 rows affected (0.92 sec)
Records: 216  Duplicates: 0  Warnings: 0

CREATE TABLE `table1` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `table2_id` int(11) NOT NULL,
  PRIMARY KEY (`ID`),
  KEY `FK_table2_id` (`table2_id`),
  CONSTRAINT `FK_table2_id` FOREIGN KEY (`table2_id`) REFERENCES `table2` (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=220 DEFAULT CHARSET=utf8

Caching NFS files with cachefilesd

A great tool for caching network filesystems like NFS mounts is cachefilesd! It is easy to use, and a good deal of stats can be retrieved from the tool. More on how it works here: https://www.kernel.org/doc/Documentation/filesystems/caching/fscache.txt

Here are the quick steps to cache an NFS mount (it works with NFS-Ganesha servers, too):

  1. Install the daemon tool cachefilesd.
  2. Check the configuration file /etc/cachefilesd.conf. In most cases, there is no need to edit the file! Just check whether the disk limits are suitable.
  3. Start the cachefilesd daemon.
  4. Mount the network directories with the “fsc” option. Umount and mount them all if they have already been mounted. The fsc option is mandatory to enable file caching of a network mount.
  5. Check the stats to see whether the file caching is working properly.

The example below is under CentOS 8, but it is almost the same in most Linux distributions.

STEP 1) Install the daemon tool cachefilesd

This is straightforward; just install it with the package manager:

[root@srv ~]# dnf install cachefilesd
Last metadata expiration check: 2:33:44 ago on Tue 08 Dec 2020 07:18:01 AM UTC.
Dependencies resolved.
=============================================================================================================================================================================================
 Package                                        Architecture                              Version                                            Repository                                 Size
=============================================================================================================================================================================================
Installing:
 cachefilesd                                    x86_64                                    0.10.10-4.el8                                      BaseOS                                     43 k

Transaction Summary
=============================================================================================================================================================================================
Install  1 Package

Total download size: 43 k
Installed size: 71 k
Is this ok [y/N]: y
Downloading Packages:
cachefilesd-0.10.10-4.el8.x86_64.rpm                                                                                                                         3.1 MB/s |  43 kB     00:00    
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                        2.8 MB/s |  43 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                     1/1 
  Installing       : cachefilesd-0.10.10-4.el8.x86_64                                                                                                                                    1/1 
  Running scriptlet: cachefilesd-0.10.10-4.el8.x86_64                                                                                                                                    1/1 
  Verifying        : cachefilesd-0.10.10-4.el8.x86_64                                                                                                                                    1/1 

Installed:
  cachefilesd-0.10.10-4.el8.x86_64                                                                                                                                                           

Complete!

STEP 2) Check the configuration file and tune it for your system.

In most cases, the defaults in /etc/cachefilesd.conf are good to start with:

dir /var/cache/fscache
tag mycache
brun 10%
bcull 7%
bstop 3%
frun 10%
fcull 7%
fstop 3%

# Assuming you're using SELinux with the default security policy included in
# this package
secctx system_u:system_r:cachefiles_kernel_t:s0

The configuration sets the directory where the cache will reside, and the lines with percentages are disk space limits. “brun 10%” means the cache can run freely until the free disk space drops below 10%. “bcull 7%” starts culling the cache when the free space drops below 7%; there is more in the man page (or https://linux.die.net/man/5/cachefilesd.conf).
So if the system normally runs with less than 10% free disk space, the configuration file should be edited.

STEP 3) Start the cachefilesd daemon.

And enable on boot to start automatically.

[root@srv ~]# systemctl start cachefilesd
[root@srv ~]# systemctl enable cachefilesd
Created symlink /etc/systemd/system/multi-user.target.wants/cachefilesd.service → /usr/lib/systemd/system/cachefilesd.service.
[root@srv ~]# systemctl status cachefilesd
● cachefilesd.service - Local network file caching management daemon
   Loaded: loaded (/usr/lib/systemd/system/cachefilesd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-12-08 10:01:24 UTC; 11s ago
 Main PID: 29786 (cachefilesd)
    Tasks: 1 (limit: 408616)
   Memory: 2.5M
   CGroup: /system.slice/cachefilesd.service
           └─29786 /usr/sbin/cachefilesd -n -f /etc/cachefilesd.conf

Dec 08 10:01:24 srv systemd[1]: Starting Local network file caching management daemon...
Dec 08 10:01:24 srv systemd[1]: Started Local network file caching management daemon.
Dec 08 10:01:24 srv cachefilesd[29786]: About to bind cache
Dec 08 10:01:24 srv cachefilesd[29786]: Bound cache
Dec 08 10:01:24 srv cachefilesd[29786]: Daemon Started

The status command shows the daemon cachefilesd is running. But does it cache?

STEP 4) Mount the network filesystems with option fsc

To make cachefilesd cache a network mount, the option fsc must be included in the mount options. Remount may not work correctly, so to be sure, a full umount/mount should be executed. Here is an example /etc/fstab line:

192.168.0.1:/mnt/storage /mnt/storage  nfs defaults,hard,intr,noexec,nosuid,_netdev,fsc,vers=4 0 0

And then mount with a simple command:

mount /mnt/storage
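
If editing /etc/fstab is not desired, the same can be tested with a one-off mount; a hedged example reusing the server from the fstab line above:

mount -t nfs -o defaults,hard,intr,noexec,nosuid,fsc,vers=4 192.168.0.1:/mnt/storage /mnt/storage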

Check whether the mounts use the FS cache; the FSC column must be “yes”:

[root@srv ~]# cat /proc/fs/nfsfs/volumes
NV SERVER   PORT DEV          FSID                              FSC
v4 c0a80001  801 0:41         d4098a2af096148:ec7560388cbe5b83  yes

There is a proc file for cache statistics:

[root@srv ~]# cat /proc/fs/fscache/stats
FS-Cache statistics
Cookies: idx=49 dat=4385599 spc=0
Objects: alc=43666 nal=0 avl=43666 ded=36002
ChkAux : non=0 ok=12289 upd=0 obs=761
Pages  : mrk=24915179 unc=24492585
Acquire: n=4385648 nul=0 noc=0 ok=4385648 nbf=0 oom=0
Lookups: n=43666 neg=31372 pos=12294 crt=31372 tmo=0
Invals : n=1 run=1
Updates: n=0 nul=0 run=1
Relinqs: n=4377930 nul=0 wcr=0 rtr=0
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=751549 ok=716860 wt=21436 nod=34689 nbf=0 int=0 oom=0
Retrvls: ops=751549 owt=9158 abt=0
Stores : n=550412 ok=550412 agn=0 nbf=0 oom=0
Stores : ops=33238 run=583650 pgs=550412 rxd=550412 olm=0
VmScan : nos=23963352 gon=0 bsy=0 can=0 wt=0
Ops    : pend=9160 run=784788 enq=26874960 can=0 rej=0
Ops    : ini=1301962 dfr=265 rel=1301962 gc=265
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=761 stl=0 rtr=0 cul=0

And here is the cache directory filled with files. If there are no files, the FS cache is not used; probably the mounts are not mounted with fsc! Umount and mount them again.

[root@srv ~]# find /var/cache/fscache|head -n 20
/var/cache/fscache
/var/cache/fscache/graveyard
/var/cache/fscache/cache
/var/cache/fscache/cache/@4a
/var/cache/fscache/cache/@4a/I03nfs
/var/cache/fscache/cache/@4a/I03nfs/@84
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400/@68
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400/@68/J1100000000000Q0goaWH946iIn7oUMELrd80000000040000g00Kb000wFe000jt000oG3000000040000g000000000
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400/@68/J1100000000000Q0goaWH946iIn7oUMELrd80000000040000g00Kb000wFe000jt000oG3000000040000g000000000/@8b
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400/@68/J1100000000000Q0goaWH946iIn7oUMELrd80000000040000g00Kb000wFe000jt000oG3000000040000g000000000/@8b/EA0g00sg0200n8000000ixBMHyy9gdcUm_O8ewl7XMW1n8cXgyl60
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400/@68/J1100000000000Q0goaWH946iIn7oUMELrd80000000040000g00Kb000wFe000jt000oG3000000040000g000000000/@8b/EA0g00sg0200n8000000ixBMHyy9gdcUm_O8ewl7XZG2n84qfxl60
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400/@68/J1100000000000Q0goaWH946iIn7oUMELrd80000000040000g00Kb000wFe000jt000oG3000000040000g000000000/@8b/EA0g00sg0200n8000000ixBMHyy9gdcUm_O8ewl7XQM2n8IQ4El60
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400/@68/J1100000000000Q0goaWH946iIn7oUMELrd80000000040000g00Kb000wFe000jt000oG3000000040000g000000000/@8b/EA0g00sg0200n8000000ixBMHyy9gdcUm_O8ewl7XZO2n8Ms7kl60
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400/@68/J1100000000000Q0goaWH946iIn7oUMELrd80000000040000g00Kb000wFe000jt000oG3000000040000g000000000/@8b/EA0g00sg0200n8000000ixBMHyy9gdcUm_O8ewl7XvQ2n88dKgl60
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400/@68/J1100000000000Q0goaWH946iIn7oUMELrd80000000040000g00Kb000wFe000jt000oG3000000040000g000000000/@8b/EA0g00sg0200n8000000ixBMHyy9gdcUm_O8ewl7XV40o8cREC6b0
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400/@68/J1100000000000Q0goaWH946iIn7oUMELrd80000000040000g00Kb000wFe000jt000oG3000000040000g000000000/@8b/EA0g00sg0200n8000000ixBMHyy9gdcUm_O8ewl7X3Y1n8MHO_l60
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400/@68/J1100000000000Q0goaWH946iIn7oUMELrd80000000040000g00Kb000wFe000jt000oG3000000040000g000000000/@8b/EA0g00sg0200n8000000ixBMHyy9gdcUm_O8ewl7Xhkwn80f23zb0
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400/@68/J1100000000000Q0goaWH946iIn7oUMELrd80000000040000g00Kb000wFe000jt000oG3000000040000g000000000/@8b/EA0g00sg0200n8000000ixBMHyy9gdcUm_O8ewl7XEQ2n8IuAll60
/var/cache/fscache/cache/@4a/I03nfs/@84/Jc0010800840000cG0400/@68/J1100000000000Q0goaWH946iIn7oUMELrd80000000040000g00Kb000wFe000jt000oG3000000040000g000000000/@8b/EA0g00sg0200n8000000ixBMHyy9gdcUm_O8ewl7XoO2n8A7Bll60
[root@srv ~]# du -d 1 -h /var/cache/fscache
4.0K    /var/cache/fscache/graveyard
3.8G    /var/cache/fscache/cache
3.8G    /var/cache/fscache

There are 3.8G in the cache.

Create and export a GlusterFS volume with NFS-Ganesha in CentOS 8

The GlusterFS built-in NFS server supports only NFS version 3. For newer protocols, GlusterFS offers NFS exports using NFS-Ganesha, which supports the NFS version 3 and 4 protocols.
NFS-Ganesha is a user-mode file sharing server, which offers a GlusterFS plugin to export GlusterFS volumes. In the following article, NFS-Ganesha and GlusterFS are installed, and a simple GlusterFS volume is created and then exported through the NFS 3 and 4 version protocols.
The versions of the software in this article:

  • CentOS Stream release 8 (25.04.2021)
  • GlusterFS 8.4
  • NFS-Ganesha 3.5

STEP 1) Install GlusterFS.

dnf install -y centos-release-gluster
dnf install -y glusterfs-server

The first line installs a new repository under the SIG management (https://wiki.centos.org/SpecialInterestGroup/Storage). The second line installs the GlusterFS server.

STEP 2) Install NFS-Ganesha.

dnf install -y centos-release-nfs-ganesha30
dnf install -y nfs-ganesha nfs-ganesha-gluster

The first line again installs a new repository under the SIG management, and the second line installs the NFS-Ganesha server with the GlusterFS plugin.

STEP 3) Create GlusterFS volume

Start the GlusterFS service on all three nodes and enable the GlusterFS communication between the three nodes using the firewall-cmd utility; then a simple 3-replica volume will be created. Execute the following commands:

systemctl start glusterd
firewall-cmd --permanent --new-zone=glusternodes
firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.200
firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.201
firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.202
firewall-cmd --permanent --zone=glusternodes --add-service=glusterfs
firewall-cmd --reload

On the first node, create the GlusterFS volume. First, add glnode2 and glnode3 to the cluster:

gluster peer probe glnode2
gluster peer probe glnode3
gluster volume create VOL1 replica 3 transport tcp glnode1:/mnt/storage/gluster/brick glnode2:/mnt/storage/gluster/brick glnode3:/mnt/storage/gluster/brick
gluster volume start VOL1
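
The export itself is configured in /etc/ganesha/ganesha.conf; a minimal hedged sketch of an EXPORT block for this volume follows, where all values are examples to be adapted:

# /etc/ganesha/ganesha.conf
EXPORT {
    Export_Id = 1;                # unique export ID
    Path = "/VOL1";               # exported path
    Pseudo = "/VOL1";             # NFSv4 pseudo-filesystem path
    Access_Type = RW;
    Squash = No_root_squash;
    Protocols = "3", "4";         # serve both NFS 3 and NFS 4 clients
    FSAL {
        Name = "GLUSTER";         # use the GlusterFS plugin
        Hostname = "localhost";
        Volume = "VOL1";          # the GlusterFS volume created above
    }
}

After saving the file, start the server with systemctl start nfs-ganesha.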

Keep on reading!

Gentoo emerge virtualbox – Mesa / GLU: Mesa not found, Mesa headers not found

When emerging the package app-emulation/virtualbox, the following error occurs:

Checking for Mesa / GLU: 
  Mesa not found at -L/usr/X11R6/lib -L/usr/X11R6/lib64 -L/usr/local/lib -lXext -lX11 -lGL -I/usr/local/include or Mesa headers not found
  Check the file /var/tmp/portage/app-emulation/virtualbox-6.1.18/work/VirtualBox-6.1.18/configure.log for detailed error information.
Check /var/tmp/portage/app-emulation/virtualbox-6.1.18/work/VirtualBox-6.1.18/configure.log for details
 * ERROR: app-emulation/virtualbox-6.1.18::gentoo failed (configure phase):
 *   (no error message)
 * 
 * Call stack:
 *     ebuild.sh, line  125:  Called src_configure
 *   environment, line 5504:  Called doecho './configure' '--with-gcc=x86_64-pc-linux-gnu-gcc' '--with-g++=x86_64-pc-linux-gnu-g++' '--disable-dbus' '--disable-kmods' '--disable-alsa' '--disable-docs' '--disable-devmapper' '--disable-pulse' '--disable-python' '--enable-webservice' '--enable-vnc'
 *   environment, line 1538:  Called die
 * The specific snippet of code:
 *       "$@" || die

The configure script reports that Mesa is missing, but the package media-libs/mesa is installed, and reinstalling it does not fix the problem.
Further investigation in the logs, by checking configure.log, reveals the real problem:

srv ~ # tail -n 16 /var/tmp/portage/app-emulation/virtualbox-6.1.18/work/VirtualBox-6.1.18/configure.log
***** Checking Mesa / GLU *****
compiling the following source file:
#include <cstdio>
#include <X11/Xlib.h>
#include <GL/glx.h>
#include <GL/glu.h>
extern "C" int main(void)
{
  return 0;
}
using the following command line:
x86_64-pc-linux-gnu-g++  -fPIC -g -O -Wall -o /var/tmp/portage/app-emulation/virtualbox-6.1.18/work/VirtualBox-6.1.18/.tmp_out /var/tmp/portage/app-emulation/virtualbox-6.1.18/work/VirtualBox-6.1.18/.tmp_src.cc "-L/usr/X11R6/lib -L/usr/X11R6/lib64 -L/usr/local/lib -lXext -lX11 -lGL -I/usr/local/include"
/var/tmp/portage/app-emulation/virtualbox-6.1.18/work/VirtualBox-6.1.18/.tmp_src.cc:4:10: fatal error: GL/glu.h: No such file or directory
    4 | #include <GL/glu.h>
      |          ^~~~~~~~~~
compilation terminated.

The GLU part of Mesa is missing. In Gentoo, glu (https://gitlab.freedesktop.org/mesa/glu) is not included in media-libs/mesa; it is a separate package, media-libs/glu.

The solution is to emerge media-libs/glu and then emerge app-emulation/virtualbox.

emerge -v media-libs/glu

Other Linux distributions may include glu in the main mesa package.

The conclusion here is to always check configure.log, because it reports the exact error; the generic output of the configure script is not to be trusted.

glusterfs with localhost (127.0.0.1) nodes on different servers – glusterfs volume with 3 replicas

Binding the GlusterFS nodes to a physical interface may lead to local availability problems, even for replication nodes: bringing down the physical interface brings down the nodes, including the local replica used by local mounts and applications. Binding the bricks to the loopback interface (127.0.0.1) instead offers:

  • 100% local uptime. The local replica will be 100% available, because the loopback interface is always up! Network interfaces are not always up, because of network reloads or cable unplugging. A cable unplug (or a port going down during a switch configuration reload) could make the local node unavailable for a short time, even for the local system!
  • Node resolution relies on /etc/hosts records, not on the network and a remote DNS system.
  • No speed limit. Reads from the local system through the loopback interface could easily exceed 1G or even 10G. Building nodes with replicas over a 1G network is probably much more affected than over a network with 10G connectivity. Reading from a node relying on the loopback interface could pass 10G, even though the server is connected to a 1G network!

An additional note: another variant of the solution proposed here is to use a virtual interface for binding the IP of the GlusterFS brick; the most common type of virtual interface is a bridge interface for the IP.

The example here brings up a GlusterFS volume in replication mode with 3 servers, i.e. 3 replicas. Each server may mount the GlusterFS volume with the name VOL1 locally, and the volume would not become unavailable if the main network interface goes down.

  • server’s hostname: node1, hostname for the replica brick: glnode1, IP: 192.168.0.20, but glnode1 locally resolves to 127.0.0.1 through /etc/hosts.
  • server’s hostname: node2, hostname for the replica brick: glnode2, IP: 192.168.0.30, but glnode2 locally resolves to 127.0.0.1 through /etc/hosts.
  • server’s hostname: node3, hostname for the replica brick: glnode3, IP: 192.168.0.31, but glnode3 locally resolves to 127.0.0.1 through /etc/hosts.

Of course, the server’s hostname could be used, but it is better to have a separate domain for the GlusterFS bricks. Sometimes the server's hostname should resolve to the real IP, because some software may rely on it.

And here are all the commands to bring up the GlusterFS volume on 3 servers:

STEP 1) Install GlusterFS software and initial configuration.

There are GlusterFS packages in the official CentOS 8 repositories, but a newer version is available from the Storage SIG. The GlusterFS version installed in this article is 8.4. Install the software on node1, node2, and node3.

yum install -y centos-release-gluster
yum install -y glusterfs-server

Add the following lines at the end of node1:/etc/hosts file.

127.0.0.1 glnode1
192.168.0.30 glnode2
192.168.0.31 glnode3

Add the following lines at the end of node2:/etc/hosts file.

192.168.0.20 glnode1
127.0.0.1 glnode2
192.168.0.31 glnode3

Add the following lines at the end of node3:/etc/hosts file.

192.168.0.20 glnode1   
192.168.0.30 glnode2
127.0.0.1 glnode3

Start the GlusterFS service on the three nodes

systemctl start glusterd

Mount the storage device, if any, and make the directory where the GlusterFS brick will reside:

mount /mnt/storage/
mkdir -p /mnt/storage/gluster/brick

STEP 2) Configure the firewall.

CentOS 8 uses firewalld, and here a new zone for GlusterFS is created, the GlusterFS service is added to the whitelist of the new zone, and the three IPs of the nodes are also added to the new zone:

firewall-cmd --permanent --new-zone=glusternodes
firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.20
firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.30
firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.31
firewall-cmd --permanent --zone=glusternodes --add-service=glusterfs
firewall-cmd --reload

STEP 3) Add peers to the GlusterFS cluster and create a 3 node replica volume.

gluster peer probe glnode2
gluster peer probe glnode3
gluster volume create VOL1 replica 3 transport tcp glnode1:/mnt/storage/gluster/brick glnode2:/mnt/storage/gluster/brick glnode3:/mnt/storage/gluster/brick
gluster volume start VOL1
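
Before mounting, a quick hedged verification that the cluster and the volume are healthy:

gluster peer status
gluster volume info VOL1
gluster volume status VOL1

All peers should be in the “Peer in Cluster (Connected)” state and all three bricks should be shown online.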

The GlusterFS volume can be mounted with:

mount -t glusterfs glnode1:/VOL1 /mnt/VOL1/

And /etc/fstab sample line:

glnode1:/VOL1 /mnt/VOL1 glusterfs defaults,noatime,direct-io-mode=disable 0 0

Always use the local hostname for the current node server. To mount the volume VOL1 on node1, use glnode1:/VOL1, and so on.
Bringing down the physical interface of the server that is connected to the Internet (i.e. the network interface with the real IP) would not make the GlusterFS brick unavailable for the local mounts and applications.

In a cluster with only replicas, the local applications will just continue using the mounted GlusterFS volume (or the native GlusterFS clients), relying only on the local Gluster brick until the main Internet connection comes back.

Create 3 node replica volume – the whole output

[root@node1 ~]# yum install -y centos-release-gluster
CentOS Stream 8 - AppStream                                                                                 5.8 MB/s | 6.7 MB     00:01    
CentOS Stream 8 - BaseOS                                                                                    2.0 MB/s | 2.3 MB     00:01    
CentOS Stream 8 - Extras                                                                                     22 kB/s | 9.1 kB     00:00    
Dependencies resolved.
============================================================================================================================================
 Package                                          Architecture              Version                         Repository                 Size
============================================================================================================================================
Installing:
 centos-release-gluster8                          noarch                    1.0-1.el8                       extras                    9.3 k
Installing dependencies:
 centos-release-storage-common                    noarch                    2-2.el8                         extras                    9.4 k

Transaction Summary
============================================================================================================================================
Install  2 Packages

Total download size: 19 k
Installed size: 2.4 k
Downloading Packages:
(1/2): centos-release-gluster8-1.0-1.el8.noarch.rpm                                                         136 kB/s | 9.3 kB     00:00    
(2/2): centos-release-storage-common-2-2.el8.noarch.rpm                                                     145 kB/s | 9.4 kB     00:00    
--------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                        27 kB/s |  19 kB     00:00     
warning: /var/cache/dnf/extras-9705a089504ff150/packages/centos-release-gluster8-1.0-1.el8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 8483c65d: NOKEY
CentOS Stream 8 - Extras                                                                                    725 kB/s | 1.6 kB     00:00    
Importing GPG key 0x8483C65D:
 Userid     : "CentOS (CentOS Official Signing Key) <security@centos.org>"
 Fingerprint: 99DB 70FA E1D7 CE22 7FB6 4882 05B5 55B3 8483 C65D
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                    1/1 
  Installing       : centos-release-storage-common-2-2.el8.noarch                                                                       1/2 
  Installing       : centos-release-gluster8-1.0-1.el8.noarch                                                                           2/2 
  Running scriptlet: centos-release-gluster8-1.0-1.el8.noarch                                                                           2/2 
  Verifying        : centos-release-gluster8-1.0-1.el8.noarch                                                                           1/2 
  Verifying        : centos-release-storage-common-2-2.el8.noarch                                                                       2/2 

Installed:
  centos-release-gluster8-1.0-1.el8.noarch                           centos-release-storage-common-2-2.el8.noarch                          

Complete!
[root@node1 ~]# yum install -y glusterfs-server
Last metadata expiration check: 0:00:14 ago on Wed Apr 14 13:01:50 2021.
Dependencies resolved.
============================================================================================================================================
 Package                                      Architecture          Version                            Repository                      Size
============================================================================================================================================
Installing:
 glusterfs-server                             x86_64                8.4-1.el8                          centos-gluster8                1.4 M
Installing dependencies:
 attr                                         x86_64                2.4.48-3.el8                       baseos                          68 k
 device-mapper-event                          x86_64                8:1.02.175-5.el8                   baseos                         269 k
 device-mapper-event-libs                     x86_64                8:1.02.175-5.el8                   baseos                         269 k
 device-mapper-persistent-data                x86_64                0.8.5-4.el8                        baseos                         468 k
 glusterfs                                    x86_64                8.4-1.el8                          centos-gluster8                689 k
 glusterfs-cli                                x86_64                8.4-1.el8                          centos-gluster8                214 k
 glusterfs-client-xlators                     x86_64                8.4-1.el8                          centos-gluster8                899 k
 glusterfs-fuse                               x86_64                8.4-1.el8                          centos-gluster8                171 k
 libaio                                       x86_64                0.3.112-1.el8                      baseos                          33 k
 libgfapi0                                    x86_64                8.4-1.el8                          centos-gluster8                125 k
 libgfchangelog0                              x86_64                8.4-1.el8                          centos-gluster8                 67 k
 libgfrpc0                                    x86_64                8.4-1.el8                          centos-gluster8                 89 k
 libgfxdr0                                    x86_64                8.4-1.el8                          centos-gluster8                 61 k
 libglusterd0                                 x86_64                8.4-1.el8                          centos-gluster8                 45 k
 libglusterfs0                                x86_64                8.4-1.el8                          centos-gluster8                350 k
 lvm2                                         x86_64                8:2.03.11-5.el8                    baseos                         1.6 M
 lvm2-libs                                    x86_64                8:2.03.11-5.el8                    baseos                         1.1 M
 psmisc                                       x86_64                23.1-5.el8                         baseos                         151 k
 python3-pyxattr                              x86_64                0.5.3-18.el8                       centos-gluster8                 35 k
 rpcbind                                      x86_64                1.2.5-8.el8                        baseos                          70 k
 userspace-rcu                                x86_64                0.10.1-4.el8                       baseos                         101 k

Transaction Summary
============================================================================================================================================
Install  22 Packages

Total download size: 8.2 M
Installed size: 24 M
Downloading Packages:
(1/22): glusterfs-cli-8.4-1.el8.x86_64.rpm                                                                  1.1 MB/s | 214 kB     00:00    
(2/22): glusterfs-8.4-1.el8.x86_64.rpm                                                                      2.6 MB/s | 689 kB     00:00    
(3/22): glusterfs-fuse-8.4-1.el8.x86_64.rpm                                                                 1.1 MB/s | 171 kB     00:00    
(4/22): glusterfs-client-xlators-8.4-1.el8.x86_64.rpm                                                       2.3 MB/s | 899 kB     00:00    
(5/22): libgfapi0-8.4-1.el8.x86_64.rpm                                                                      1.8 MB/s | 125 kB     00:00    
(6/22): libgfchangelog0-8.4-1.el8.x86_64.rpm                                                                755 kB/s |  67 kB     00:00    
(7/22): libgfrpc0-8.4-1.el8.x86_64.rpm                                                                      756 kB/s |  89 kB     00:00    
(8/22): libgfxdr0-8.4-1.el8.x86_64.rpm                                                                      579 kB/s |  61 kB     00:00    
(9/22): libglusterd0-8.4-1.el8.x86_64.rpm                                                                   641 kB/s |  45 kB     00:00    
(10/22): glusterfs-server-8.4-1.el8.x86_64.rpm                                                              3.4 MB/s | 1.4 MB     00:00    
(11/22): libglusterfs0-8.4-1.el8.x86_64.rpm                                                                 1.0 MB/s | 350 kB     00:00    
(12/22): python3-pyxattr-0.5.3-18.el8.x86_64.rpm                                                             97 kB/s |  35 kB     00:00    
(13/22): attr-2.4.48-3.el8.x86_64.rpm                                                                        85 kB/s |  68 kB     00:00    
(14/22): device-mapper-event-1.02.175-5.el8.x86_64.rpm                                                      342 kB/s | 269 kB     00:00    
(15/22): device-mapper-event-libs-1.02.175-5.el8.x86_64.rpm                                                 338 kB/s | 269 kB     00:00    
(16/22): libaio-0.3.112-1.el8.x86_64.rpm                                                                    679 kB/s |  33 kB     00:00    
(17/22): device-mapper-persistent-data-0.8.5-4.el8.x86_64.rpm                                               1.5 MB/s | 468 kB     00:00    
(18/22): psmisc-23.1-5.el8.x86_64.rpm                                                                       1.5 MB/s | 151 kB     00:00    
(19/22): rpcbind-1.2.5-8.el8.x86_64.rpm                                                                     1.2 MB/s |  70 kB     00:00    
(20/22): lvm2-libs-2.03.11-5.el8.x86_64.rpm                                                                 3.1 MB/s | 1.1 MB     00:00    
(21/22): userspace-rcu-0.10.1-4.el8.x86_64.rpm                                                              474 kB/s | 101 kB     00:00    
(22/22): lvm2-2.03.11-5.el8.x86_64.rpm                                                                      3.3 MB/s | 1.6 MB     00:00    
--------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                       2.8 MB/s | 8.2 MB     00:02     
warning: /var/cache/dnf/centos-gluster8-ae72c2c38de8ee20/packages/glusterfs-8.4-1.el8.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID e451e5b5: NOKEY
CentOS-8 - Gluster 8                                                                                        1.0 MB/s | 1.0 kB     00:00    
Importing GPG key 0xE451E5B5:
 Userid     : "CentOS Storage SIG (http://wiki.centos.org/SpecialInterestGroup/Storage) <security@centos.org>"
 Fingerprint: 7412 9C0B 173B 071A 3775 951A D4A2 E50B E451 E5B5
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                    1/1 
  Installing       : libgfxdr0-8.4-1.el8.x86_64                                                                                        1/22 
  Running scriptlet: libgfxdr0-8.4-1.el8.x86_64                                                                                        1/22 
  Installing       : libglusterfs0-8.4-1.el8.x86_64                                                                                    2/22 
  Running scriptlet: libglusterfs0-8.4-1.el8.x86_64                                                                                    2/22 
  Installing       : libgfrpc0-8.4-1.el8.x86_64                                                                                        3/22 
  Running scriptlet: libgfrpc0-8.4-1.el8.x86_64                                                                                        3/22 
  Installing       : libaio-0.3.112-1.el8.x86_64                                                                                       4/22 
  Installing       : glusterfs-client-xlators-8.4-1.el8.x86_64                                                                         5/22 
  Installing       : device-mapper-event-libs-8:1.02.175-5.el8.x86_64                                                                  6/22 
  Running scriptlet: glusterfs-8.4-1.el8.x86_64                                                                                        7/22 
  Installing       : glusterfs-8.4-1.el8.x86_64                                                                                        7/22 
  Running scriptlet: glusterfs-8.4-1.el8.x86_64                                                                                        7/22 
  Installing       : libglusterd0-8.4-1.el8.x86_64                                                                                     8/22 
  Running scriptlet: libglusterd0-8.4-1.el8.x86_64                                                                                     8/22 
  Installing       : glusterfs-cli-8.4-1.el8.x86_64                                                                                    9/22 
  Installing       : device-mapper-event-8:1.02.175-5.el8.x86_64                                                                      10/22 
  Running scriptlet: device-mapper-event-8:1.02.175-5.el8.x86_64                                                                      10/22 
  Installing       : lvm2-libs-8:2.03.11-5.el8.x86_64                                                                                 11/22 
  Installing       : libgfapi0-8.4-1.el8.x86_64                                                                                       12/22 
  Running scriptlet: libgfapi0-8.4-1.el8.x86_64                                                                                       12/22 
  Installing       : device-mapper-persistent-data-0.8.5-4.el8.x86_64                                                                 13/22 
  Installing       : lvm2-8:2.03.11-5.el8.x86_64                                                                                      14/22 
  Running scriptlet: lvm2-8:2.03.11-5.el8.x86_64                                                                                      14/22 
  Installing       : libgfchangelog0-8.4-1.el8.x86_64                                                                                 15/22 
  Running scriptlet: libgfchangelog0-8.4-1.el8.x86_64                                                                                 15/22 
  Installing       : userspace-rcu-0.10.1-4.el8.x86_64                                                                                16/22 
  Running scriptlet: userspace-rcu-0.10.1-4.el8.x86_64                                                                                16/22 
  Running scriptlet: rpcbind-1.2.5-8.el8.x86_64                                                                                       17/22 
  Installing       : rpcbind-1.2.5-8.el8.x86_64                                                                                       17/22 
  Running scriptlet: rpcbind-1.2.5-8.el8.x86_64                                                                                       17/22 
  Installing       : psmisc-23.1-5.el8.x86_64                                                                                         18/22 
  Installing       : attr-2.4.48-3.el8.x86_64                                                                                         19/22 
  Installing       : glusterfs-fuse-8.4-1.el8.x86_64                                                                                  20/22 
  Installing       : python3-pyxattr-0.5.3-18.el8.x86_64                                                                              21/22 
  Installing       : glusterfs-server-8.4-1.el8.x86_64                                                                                22/22 
  Running scriptlet: glusterfs-server-8.4-1.el8.x86_64                                                                                22/22 
  Verifying        : glusterfs-8.4-1.el8.x86_64                                                                                        1/22 
  Verifying        : glusterfs-cli-8.4-1.el8.x86_64                                                                                    2/22 
  Verifying        : glusterfs-client-xlators-8.4-1.el8.x86_64                                                                         3/22 
  Verifying        : glusterfs-fuse-8.4-1.el8.x86_64                                                                                   4/22 
  Verifying        : glusterfs-server-8.4-1.el8.x86_64                                                                                 5/22 
  Verifying        : libgfapi0-8.4-1.el8.x86_64                                                                                        6/22 
  Verifying        : libgfchangelog0-8.4-1.el8.x86_64                                                                                  7/22 
  Verifying        : libgfrpc0-8.4-1.el8.x86_64                                                                                        8/22 
  Verifying        : libgfxdr0-8.4-1.el8.x86_64                                                                                        9/22 
  Verifying        : libglusterd0-8.4-1.el8.x86_64                                                                                    10/22 
  Verifying        : libglusterfs0-8.4-1.el8.x86_64                                                                                   11/22 
  Verifying        : python3-pyxattr-0.5.3-18.el8.x86_64                                                                              12/22 
  Verifying        : attr-2.4.48-3.el8.x86_64                                                                                         13/22 
  Verifying        : device-mapper-event-8:1.02.175-5.el8.x86_64                                                                      14/22 
  Verifying        : device-mapper-event-libs-8:1.02.175-5.el8.x86_64                                                                 15/22 
  Verifying        : device-mapper-persistent-data-0.8.5-4.el8.x86_64                                                                 16/22 
  Verifying        : libaio-0.3.112-1.el8.x86_64                                                                                      17/22 
  Verifying        : lvm2-8:2.03.11-5.el8.x86_64                                                                                      18/22 
  Verifying        : lvm2-libs-8:2.03.11-5.el8.x86_64                                                                                 19/22 
  Verifying        : psmisc-23.1-5.el8.x86_64                                                                                         20/22 
  Verifying        : rpcbind-1.2.5-8.el8.x86_64                                                                                       21/22 
  Verifying        : userspace-rcu-0.10.1-4.el8.x86_64                                                                                22/22 

Installed:
  attr-2.4.48-3.el8.x86_64                                             device-mapper-event-8:1.02.175-5.el8.x86_64                         
  device-mapper-event-libs-8:1.02.175-5.el8.x86_64                     device-mapper-persistent-data-0.8.5-4.el8.x86_64                    
  glusterfs-8.4-1.el8.x86_64                                           glusterfs-cli-8.4-1.el8.x86_64                                      
  glusterfs-client-xlators-8.4-1.el8.x86_64                            glusterfs-fuse-8.4-1.el8.x86_64                                     
  glusterfs-server-8.4-1.el8.x86_64                                    libaio-0.3.112-1.el8.x86_64                                         
  libgfapi0-8.4-1.el8.x86_64                                           libgfchangelog0-8.4-1.el8.x86_64                                    
  libgfrpc0-8.4-1.el8.x86_64                                           libgfxdr0-8.4-1.el8.x86_64                                          
  libglusterd0-8.4-1.el8.x86_64                                        libglusterfs0-8.4-1.el8.x86_64                                      
  lvm2-8:2.03.11-5.el8.x86_64                                          lvm2-libs-8:2.03.11-5.el8.x86_64                                    
  psmisc-23.1-5.el8.x86_64                                             python3-pyxattr-0.5.3-18.el8.x86_64                                 
  rpcbind-1.2.5-8.el8.x86_64                                           userspace-rcu-0.10.1-4.el8.x86_64                                   

Complete!
[root@node1 ~]# systemctl start glusterd
[root@node1 ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2021-04-14 13:06:39 UTC; 3s ago
     Docs: man:glusterd(8)
  Process: 10151 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCC>
 Main PID: 10152 (glusterd)
    Tasks: 9 (limit: 11409)
   Memory: 5.6M
   CGroup: /system.slice/glusterd.service
           └─10152 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Apr 14 13:06:39 node1 systemd[1]: Starting GlusterFS, a clustered file-system server...
Apr 14 13:06:39 node1 systemd[1]: Started GlusterFS, a clustered file-system server.
[root@node1 ~]# tail -n 3 /etc/hosts 
127.0.0.1 glnode1
192.168.0.30 glnode2
192.168.0.31 glnode3
[root@node1 ~]# ping glnode1
PING glnode1 (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.050 ms
^C
--- glnode1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1016ms
rtt min/avg/max/mdev = 0.050/0.054/0.058/0.004 ms
[root@node1 ~]# mkdir -p /mnt/storage/gluster/brick
[root@node1 ~]# firewall-cmd --permanent --new-zone=glusternodes
success
[root@node1 ~]# firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.20
success
[root@node1 ~]# firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.30
success
[root@node1 ~]# firewall-cmd --permanent --zone=glusternodes --add-source=192.168.0.31
success
[root@node1 ~]# firewall-cmd --permanent --zone=glusternodes --add-service=glusterfs
success
[root@node1 ~]# firewall-cmd --reload
success
[root@node1 ~]# firewall-cmd --zone=glusternodes --list-all
glusternodes (active)
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 192.168.0.20 192.168.0.30 192.168.0.31
  services: glusterfs
  ports: 
  protocols: 
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
[root@node1 ~]# gluster peer probe glnode2
peer probe: success
[root@node1 ~]# gluster peer probe glnode3
peer probe: success
[root@node1 ~]# gluster peer status
Number of Peers: 2

Hostname: glnode2
Uuid: ab63f0a0-0a72-4fcd-9f34-b88040d1a8e3
State: Peer in Cluster (Connected)

Hostname: glnode3
Uuid: 439ccd19-a95e-427c-ab72-6b65effcbe06
State: Peer in Cluster (Connected)
[root@node1 ~]# gluster volume create VOL1 replica 3 transport tcp glnode1:/mnt/storage/gluster/brick glnode2:/mnt/storage/gluster/brick glnode3:/mnt/storage/gluster/brick
volume create: VOL1: success: please start the volume to access data
[root@node1 ~]# gluster volume start VOL1
volume start: VOL1: success
[root@node1 ~]# gluster volume info VOL1
 
Volume Name: VOL1
Type: Replicate
Volume ID: 7da7bb05-2c9b-464b-b3f9-8940eeb5b0bb
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: glnode1:/mnt/storage/gluster/brick
Brick2: glnode2:/mnt/storage/gluster/brick
Brick3: glnode3:/mnt/storage/gluster/brick
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@node1 storage]# mkdir -p /mnt/VOL1
[root@node1 storage]# tail -n 1 /etc/fstab 
glnode1:/VOL1 /mnt/VOL1 glusterfs defaults,noatime,direct-io-mode=disable 0 0
[root@node1 storage]# mount /mnt/VOL1/
[root@node1 storage]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        892M     0  892M   0% /dev
tmpfs           909M     0  909M   0% /dev/shm
tmpfs           909M  8.5M  901M   1% /run
tmpfs           909M     0  909M   0% /sys/fs/cgroup
/dev/sda1        95G  1.5G   89G   2% /
/dev/sda3       976M  148M  762M  17% /boot
tmpfs           182M     0  182M   0% /run/user/0
glnode1:/VOL1    95G  2.4G   89G   3% /mnt/VOL1
[root@node1 storage]# ls -altr /mnt/VOL1/
total 8
drwxr-xr-x. 3 root root 4096 Apr 14 13:31 .
drwxr-xr-x. 4 root root 4096 Apr 14 13:34 ..
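
A quick way to verify that the replica 3 volume really replicates is to create a file through the mounted volume on one node and then check the brick directories on the other nodes and the heal status. A minimal sketch (the file name test.txt is just an example):

echo "replication test" > /mnt/VOL1/test.txt
ls -al /mnt/storage/gluster/brick/     # on each of the three nodes the file should appear in the brick
gluster volume heal VOL1 info          # should report "Number of entries: 0" for every brick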

Simple export of a ext4 directory with NFS Ganesha 3.5 server in CentOS 8 with SELinux enforcing

In fact, this article is a continuation of the previous NFS Ganesha article – Simple export of an ext4 directory with NFS Ganesha 3.5 server in CentOS 8 without SELinux. It has the same purpose, to export a directory residing on an ext4 file system under CentOS 8 Stream, but this time SELinux is enabled and in enforcing mode! This additional article is needed because SELinux is disabled in many user configurations (despite that being wrong!), and the SELinux setup would add complexity to the first article, which could lead to misleading conclusions. The previous article is a little more detailed, so the reader may check it, too.
It’s worth mentioning the key points of NFS-Ganesha:

  • a user-space file sharing server
  • supports NFS v3, v4.x and 9P
  • uses plugins (FSALs) for different file systems
  • the CentOS Storage Special Interest Group (SIG) offers a repository with the NFS-Ganesha server
  • supports file systems like ext4, xfs, btrfs, zfs and more. There are sample configurations: https://github.com/phdeniel/nfs-ganesha/tree/master/src/config_samples
  • supports clustered and/or distributed file systems like GlusterFS, Ceph, GPFS, HPSS and Lustre
  • the current version is 3.5 and it is included in the official CentOS Storage SIG repository.

This article assumes the reader has a clean CentOS 8 Stream installation with SELinux in enforcing mode.

STEP 1) Install the repository and NFS-Ganesha software

The NFS-Ganesha 3 packages come from the CentOS Storage SIG repository, which is an official CentOS repository and may be trusted.

dnf install -y centos-release-nfs-ganesha30
dnf install -y nfs-ganesha nfs-ganesha-vfs nfs-ganesha-selinux
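
After the installation, it is worth confirming that the packages are in place and that SELinux is really in enforcing mode (the whole point of this article). A minimal check:

getenforce                                               # should print: Enforcing
rpm -q nfs-ganesha nfs-ganesha-vfs nfs-ganesha-selinux   # all three packages should be reported as installed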

STEP 2) Configuration for exporting a directory.

There are two files under /etc/ganesha/:

ganesha.conf
vfs.conf

ganesha.conf includes the global configuration and the NFS share configuration. Each export begins with the keyword EXPORT followed by a block enclosed in braces {}.
vfs.conf includes a simple example for the VFS plugin, but this configuration file is not used by the NFS Ganesha server. It is just a sample file.
Here is a simple configuration, which exports /mnt/storage1 with Read/Write permissions to a single IP. Just append it at the end of the file /etc/ganesha/ganesha.conf:

 
EXPORT
{
        Export_Id = 2;
        Path = /mnt/storage1;
        Pseudo = /mnt/storage1;
        Protocols = 3,4;
        Access_Type = RW;
        Squash = None;
        FSAL
        {
                Name = VFS;
        }
        CLIENT
        {
                Clients = 192.168.0.12;
        }
}
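
The CLIENT block is not limited to a single address; several IPs or whole networks may be listed, and options like Access_Type may be overridden per client. A hedged sketch of an additional CLIENT block (the 192.168.0.0/24 network is just an example):

        # additional CLIENT block inside the EXPORT block - read-only access for the whole LAN
        CLIENT
        {
                Clients = 192.168.0.0/24;
                Access_Type = RO;
        }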

STEP 3) Start the server and mount the exported directory. Configure the firewall.

Start the server, enable the service to start on boot and then configure the firewall to let the NFS requests through:

systemctl start nfs-ganesha
systemctl enable nfs-ganesha
firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --reload
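
Then, from the client machine (192.168.0.12 in the CLIENT block above), the export can be mounted with the standard NFS client. A minimal sketch, assuming the server's IP is 192.168.0.10 (replace it with the real server address); note that an NFSv4 mount uses the Pseudo path:

mkdir -p /mnt/nfs
mount -t nfs -o vers=4 192.168.0.10:/mnt/storage1 /mnt/nfs
df -h /mnt/nfs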

Keep on reading!