Elasticsearch failed to set password apm_system error in initial setup

A relatively typical error when installing a single-node Elasticsearch instance appears at the moment the initial passwords are set:

[root@loganalyzer elasticsearch]# ./bin/elasticsearch-setup-passwords -v auto

Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y



Connection failure to: http://192.168.0.4:9200/_security/user/apm_system/_password?pretty failed: Read timed out


ERROR: Failed to set password for user [apm_system].

Such an error may prevent the initial setting of several important passwords and undermine the Elasticsearch security model. Even including

discovery.type: single-node

in /etc/elasticsearch/elasticsearch.yml does not prevent this error. The missing option in /etc/elasticsearch/elasticsearch.yml is:

discovery.seed_hosts: ["node-1"]

By default, this option is commented out, and it should be set on initial installation, even though it is not required just to start the Elasticsearch node (with the security model disabled)!
This is an array with the hostnames of all servers in the cluster setup. In single-node mode, this option (discovery.seed_hosts) should contain only the hostname of the single node, "node-1" in this example. This is the hostname of the server, so the user must put the current server's hostname there, not the example name "node-1"!

Setting the right hostname for discovery.seed_hosts in /etc/elasticsearch/elasticsearch.yml lets the user set all passwords with the Elasticsearch tool elasticsearch-setup-passwords.
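
Putting it together, the relevant excerpt of /etc/elasticsearch/elasticsearch.yml for this single-node case would look like the sketch below (replace node-1 with the server's real hostname):

# /etc/elasticsearch/elasticsearch.yml (relevant excerpt)
discovery.type: single-node
# must contain this server's own hostname:
discovery.seed_hosts: ["node-1"]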

The error may occur in a cluster setup with multiple servers, too, if the hosts are not filled in this option – discovery.seed_hosts.
Here is what to expect when executing elasticsearch-setup-passwords (even with some RED indexes):

[root@loganalyzer ~]# cd /usr/share/elasticsearch/
[root@loganalyzer elasticsearch]# ./bin/elasticsearch-setup-passwords -v auto

Your cluster health is currently RED.
This means that some cluster data is unavailable and your cluster is not fully functional.

It is recommended that you resolve the issues with your cluster before running elasticsearch-setup-passwords.
It is very likely that the password changes will fail when run against an unhealthy cluster.

Do you want to continue with the password setup process [y/N]y

Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y


Changed password for user apm_system
PASSWORD apm_system = judakai2Wai9Saiph8ah

Changed password for user kibana_system
PASSWORD kibana_system = eisiadit3CieG4Requie

Changed password for user kibana
PASSWORD kibana = bi3NohquohLoonaizei1

Changed password for user logstash_system
PASSWORD logstash_system = AhC2kue5eeR4eK1LeeZa

Changed password for user beats_system
PASSWORD beats_system = reeyu8ooj8Eebee5ni2c

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = aeshahx9Ohkoph3rai6a

Changed password for user elastic
PASSWORD elastic = beiPhei4xu5iXailocei

No errors, and the passwords are set successfully.
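
As a quick sanity check, the newly set elastic password can be tested against the cluster health API (a sketch using the example host and the generated password from above):

curl -u elastic:beiPhei4xu5iXailocei "http://192.168.0.4:9200/_cluster/health?pretty"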

logrotate stopped rotating a log file

logrotate under CentOS 7 had worked perfectly until one day a web server log stopped being rotated! All the other logs were still rotated successfully. There was no apparent reason for this, and the logrotate status file even claimed that this very web server log had been rotated successfully during the night, but in reality it had not!

The problem appeared to be the presence of an uncompressed rotated file next to a compressed file with the same name.

Apparently, the logrotate or the compression process had been interrupted and the uncompressed file was left in place. After that, every rotation run just stopped at this step for this log file. So the solution is to remove the compressed file, and then the normal rotation continues as it should.
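
To spot such leftovers proactively, one can list rotated logs that still exist both uncompressed and compressed (a minimal sketch, assuming the dateext naming shown below and the nginx log directory from this case):

for f in /var/log/nginx/*-20[0-9][0-9][0-9][0-9][0-9][0-9]; do
    # a plain dateext file plus its .gz twin means an interrupted rotation
    [ -f "$f.gz" ] && echo "leftover pair: $f and $f.gz"
done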

The logrotate status file is /var/lib/logrotate/logrotate.status and it is just a text file with the time of the last rotation for each log file:

"/var/log/nginx/error.log" 2021-11-2-9:33:26
"/var/log/nginx/media.static.log" 2021-11-2-9:33:26
"/var/log/php-fpm/www.access.log" 2021-11-2-9:33:26
"/var/log/yum.log" 2021-11-2-9:31:49
"/var/log/httpd/*log" 2021-11-2-9:0:0
"/var/log/wtmp" 2021-11-2-9:33:26
"/var/log/chrony/*.log" 2021-9-25-3:0:0
....

Meanwhile, the log had been growing as one big file for the last 30 days:

[root@srv nginx]# ls -altr |tail -n 15
-rw-r-----.  1 nginx adm         1199 Oct  2 02:42 media.static.error.log-20211002.gz
-rw-r-----.  1 nginx adm         1919 Oct  2 02:54 error.log-20211002.gz
-rw-r-----.  1 nginx adm     39174314 Oct  2 03:16 media.static.log-20211002.gz
-rw-r-----.  1 nginx adm       307922 Oct  2 03:16 access.log-20211002.gz
-rw-r-----.  1 nginx adm         1461 Oct  3 02:16 media.static.error.log-20211003.gz
-rw-r-----.  1 nginx adm         1892 Oct  3 02:42 error.log-20211003.gz
-rw-r-----.  1 nginx adm       518079 Oct  3 03:38 access.log-20211003.gz
-rw-r-----.  1 nginx adm   1666744662 Oct  3 03:38 media.static.log-20211003
drwxr-xr-x.  2 root  root       20480 Oct  4 03:31 .
-rw-r-----.  1 nginx adm     23642112 Oct  4 03:31 media.static.log-20211003.gz
-rw-r-----.  1 nginx adm       633643 Nov  2 06:29 media.static.error.log
-rw-r-----.  1 nginx adm       680493 Nov  2 09:30 error.log
drwxr-xr-x. 16 root  root        4096 Nov  2 09:31 ..
-rw-r-----.  1 nginx adm    279561983 Nov  2 09:43 access.log
-rw-r-----.  1 nginx adm  29415439068 Nov  2 09:43 media.static.log

Debug the problem

Executing logrotate manually just reports that the log files do not need rotation, based on the time in the logrotate status file /var/lib/logrotate/logrotate.status, which claims the file was rotated on "2021-11-2-9:33:26"!
But executing with the force option reveals the real problem:
there is an uncompressed file and a compressed one with the same name, differing only by the suffix .gz.
So the uncompressed log file was wrongly left in place after compression (probably the compression or rotation process was interrupted). After that event (probably an interruption of the rotating process), logrotate printed an error every day when rotating this very same log file, yet the logrotate status file recorded the rotation as successful with the latest date and time.

[root@srv nginx]# logrotate -vf /etc/logrotate.conf 
....
rotating log /var/log/nginx/media.static.log, log->rotateCount is 1000
dateext suffix '-20211102'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
compressing log with: /bin/gzip
set default create context to system_u:object_r:httpd_log_t:s0
error: error creating output file /var/log/nginx/media.static.log-20211003.gz: File exists

rotating pattern: /var/log/php-fpm/*log  forced from command line (4 rotations)
empty log files are not rotated, old logs are removed
....

The solution

First, remove the compressed file, then force logrotate to rotate the logs.

[root@srv nginx]# rm media.static.log-20211003.gz
[root@srv nginx]# logrotate -vf /etc/logrotate.conf
reading config file /etc/logrotate.conf
including /etc/logrotate.d
reading config file chrony
reading config file nginx
reading config file php-fpm
....
rotating log /var/log/nginx/media.static.log, log->rotateCount is 1000
dateext suffix '-20211003'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
compressing log with: /bin/gzip
set default create context to system_u:object_r:httpd_log_t:s0
destination /var/log/nginx/media.static.log-20211003 already exists, skipping rotation
....

No error – the rotation (the renaming of the file) is just skipped and the leftover file is compressed.

Install and make GNU GCC 10 default in Ubuntu 20.04 Focal

The best way to install and use GNU GCC 10 is to install the build-essential package first, which pulls in GNU GCC 9, and then install GNU GCC 10. In fact, it is possible to install only the GNU GCC 10 packages, but build-essential brings with it many additional packages that are mandatory for the configure and compile stages.
GNU GCC 10 is included in the official Ubuntu Focal repository! It just needs to be installed and made the default compiler.

Installing build-essential ensures that all the generic packages involved in building a software product are installed.

There are three steps to install and use GNU GCC 10 under Ubuntu 20.04:

  1. Install build-essential package.
  2. Install gcc-10 packages (g++, too).
  3. Make GNU GCC 10 the default compiler – use update-alternatives to point gcc at GNU GCC 10.

Five simple lines, of which the last (update-alternatives) is a little bit more complex:

apt update -y
apt upgrade -y
apt install -y build-essential
apt install -y gcc-10 g++-10 cpp-10
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 100 --slave /usr/bin/g++ g++ /usr/bin/g++-10 --slave /usr/bin/gcov gcov /usr/bin/gcov-10

Always update and upgrade before installing new software. Use sudo before each command, if installation is performed by a user other than root:

sudo apt update -y
sudo apt upgrade -y
sudo apt install -y build-essential
sudo apt install -y gcc-10 g++-10 cpp-10
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 100 --slave /usr/bin/g++ g++ /usr/bin/g++-10 --slave /usr/bin/gcov gcov /usr/bin/gcov-10
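
A quick check confirms that gcc now points to GCC 10 (the exact version string will differ per system):

gcc --version | head -n 1
g++ --version | head -n 1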

It is a good idea to install the cmake package in addition to the above, because cmake is not included in build-essential:

apt install -y cmake

The apt output of installing GNU GCC 10

Many additional packages are installed, such as multiple Perl modules, libraries, and make.
Keep on reading!

Changing the queueing discipline – Error: Cannot delete qdisc with handle of zero.

When trying to delete the root qdisc of a network device in order to clear the rules and queuing algorithm, probably with the purpose of adding a new one, the replace command is the right option!
The plain delete fails, because the automatically attached default qdisc has a handle of zero and cannot be deleted:

[root@srv ~]# tc qdisc del dev ens1f0 root
Error: Cannot delete qdisc with handle of zero.

Changing the qdisc is done by replacing it with another one. The example below replaces MQ (multiqueue) with FQ (Fair Queue policy):

[root@srv ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether b4:96:54:9f:10:71 brd ff:ff:ff:ff:ff:ff
    inet 84.16.231.24/26 brd 84.16.231.63 scope global noprefixroute ens1f0
       valid_lft forever preferred_lft forever
    inet6 fe80::b696:91ff:fe8e:860/64 scope link 
       valid_lft forever preferred_lft forever
[root@srv ~]# tc qdisc replace dev ens1f0 root fq
[root@srv ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 1000
    link/ether b4:96:54:9f:10:71 brd ff:ff:ff:ff:ff:ff
    inet 84.16.231.24/26 brd 84.16.231.63 scope global noprefixroute ens1f0
       valid_lft forever preferred_lft forever
    inet6 fe80::b696:91ff:fe8e:860/64 scope link 
       valid_lft forever preferred_lft forever

The replace command is used, but the add command has the same effect in this case. del cannot remove the queueing discipline from a physical network device. noqueue may appear only on special devices like the loopback (i.e. localhost), bonding network devices and so on.
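
The current queueing discipline can also be checked directly with tc itself (using the example device name from above):

tc qdisc show dev ens1f0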

In CentOS, install the kernel-modules-extra package if some qdiscs are missing; it provides the additional kernel modules.

OpenVPN stops working after network restart

Encountering the following problem – OpenVPN works perfectly when started or restarted, but when the computer's network connection restarts, or, for example, the Wi-Fi device resets (or loses the wireless network and reconnects when it becomes available again), the VPN never recovers. All networks routed via the VPN never work again and become dead ends for the computer, despite Internet connectivity being OK.
The logs show only attempts to connect again to the servers, but apparently with no success:

....
Oct 13 02:22:55 www openvpn[7744]: Attempting to establish TCP connection with [AF_INET]111.111.111.111:12345 [nonblock]
....
Oct 13 02:24:55 www openvpn[7744]: TCP: connect to [AF_INET]111.111.111.111:12345 failed: Connection timed out
....
Oct 13 02:22:55 www openvpn[7744]: Attempting to establish TCP connection with [AF_INET]111.111.111.111:12345 [nonblock]
....
Oct 13 02:24:55 www openvpn[7744]: TCP: connect to [AF_INET]111.111.111.111:12345 failed: Connection timed out

The server 111.111.111.111 is unreachable, and it stays unreachable even after the network connectivity has recovered and the computer's Internet access is OK.
In this case, the OpenVPN server's IP is part of a network that the server pushes to the client to be routed via the VPN. When the client's network connection resets, the additional host route for the server's IP via the local gateway disappears, but the pushed route does not. The OpenVPN server's IP is now routed via the VPN itself, which is dead because the VPN channel is dead. A restart of all OpenVPN routes is required (remove the special tun device with its routes and then add the device and routes again after a successful reconnection to the OpenVPN server), but when the OpenVPN client option persist-tun is in the configuration, this restart won't happen. Only a restart of the service will remove the tun device and its routes and then add them again after a successful reconnection to the OpenVPN server.

Removing persist-tun from the client's configuration triggers a full restart of the VPN channel – the special tun device and all OpenVPN routes are removed, then the client reconnects, initializes the tun device again and re-adds the pushed routes.

Here is the example:

  • The OpenVPN server’s IP is 111.111.111.111.
  • The OpenVPN server pushes several networks to the client. Networks, which must be routed via the VPN. One of them happened to be 111.111.111.0/24, which includes the OpenVPN server’s IP.
  • The OpenVPN client adds a host route for the server's IP via the client's default gateway, which is 192.168.0.1. This is how the VPN channel keeps working despite the whole network 111.111.111.0/24 being pushed via the VPN.

The problem appears when the network resets and the client's OpenVPN process removes the host route for the OpenVPN server's IP via the gateway, because the gateway is not valid anymore! With persist-tun, the pushed OpenVPN routes won't be removed; only reconnect attempts will be made. But the server's IP now routes via the pushed route through the dead VPN channel.
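
For illustration, here is a minimal sketch of the relevant client configuration for this scenario (the remote address and port are the example values from the logs above; the remaining options are assumptions for a typical TCP client setup):

client
dev tun
proto tcp-client
remote 111.111.111.111 12345
persist-key
#persist-tun    # removed, so a network reset tears down the tun device and its routes
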
Keep on reading!

Run LXC CentOS 8 container with bridged network under CentOS 8

The LXC container software comes to CentOS 8 with the EPEL 8 repository. LXC is a multi-process container, which can boot a Linux distribution under container isolation. It is very similar to systemd-nspawn and a bit different from docker containers. LXC containers are used when multiple processes are needed in a single container. In most cases, the LXC container is a fully-featured Linux distribution (systemd or SysV init) booted under a Linux container.
There are several major differences between docker/podman containers and LXC:

  • Multiple processes.
  • Easy configuration modification. Even hot-plug is supported.
  • Unprivileged Linux containers.
  • Complex network setups. Multiple network interfaces connected to different networks, for example.
  • Live systemd, i.e. systemd or SysV init boots as usual. Much software relies on systemd/udev features, and in many cases it is really hard to run software without a systemd or init process.

Here are the steps to boot a CentOS 8 container under CentOS 8 host server:

STEP 1) Install EPEL repository.

EPEL CentOS 8 repository now includes LXC 3.0 software.

dnf install -y epel-release

STEP 2) Install LXC software and start LXC service.

At present, the LXC software version is 3.0.4. The package lxc-templates includes template scripts to create a Linux distribution environment like CentOS, Ubuntu, Debian, Gentoo, ArchLinux, Oracle, Alpine, and many others and it also includes the configuration templates to start these Linux distributions.

dnf install -y lxc lxc-templates
dnf install -y wget tar

wget and tar are required when containers are created from the LXC templates.

STEP 3) Create a CentOS 8 container with the help of LXC templates and run it.

Use the lxc-templates to prepare a CentOS 8 container environment. The currently available containers are listed here http://images.linuxcontainers.org/. Check out the URL and choose the right container. Here the CentOS 8 amd64 is used.

lxc-create --template download -n mycontainer -- --dist centos --release 8 --arch amd64 --keyserver hkp://keyserver.ubuntu.com
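
Once created, the container can be started in the background and entered with the standard LXC tools (using the example container name):

lxc-start -n mycontainer -d
lxc-attach -n mycontainer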

Keep on reading!

Replace current interface configuration with a bridge device using nmcli (NetworkManager)

This article shows how the primary network interface can be replaced by a bridge device, with the network interface becoming part of the bridge as a slave device, without a reboot or restart of the server, using nmcli under CentOS 8 (and probably any other Linux distribution, like Ubuntu, which uses NetworkManager to configure network devices).
The main steps are:

  1. Create a connection profile of a bridge device.
  2. Set the same network configuration as the primary network to the bridge device.
  3. Create a connection profile for the primary interface device as a slave network device to the newly created bridge.
  4. Delete the current primary connection, which is using the primary network device and configuration.
  5. Reload the bridge connection profile to take effect. The bridge device will actually begin to work.

The main goal is not to reboot the server or lose the connection to it. The primary network interface is the only connection on the server, and losing it would make the server unreachable. So the last two steps should be performed in the background, in a script, or in a detached terminal (like screen).
Here are all the commands in one place:

nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "192.168.0.20/24" ipv4.gateway "192.168.0.1" ipv4.dns "8.8.8.8 1.1.1.1"
nmcli con add type bridge-slave ifname enp0s3 master br0
nmcli con del "enp0s3"; nmcli con reload "br0" &
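
Afterwards, the state of the new bridge can be verified (using the example device names):

nmcli device status
ip addr show br0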

Here is the detailed information for the three nmcli commands above:
Keep on reading!

Gentoo ERROR: Python module pytevent of version 0.10.2 not found, and bundling disabled

Emerging the sys-libs/ldb-2.3.0-r1 package may fail with an error about a missing Python module, despite sys-libs/tevent being present in the system with the python USE flag:

Checking for system tevent (>=0.10.2)                                                           : yes 
ERROR: Python module pytevent of version 0.10.2 not found, and bundling disabled
 * ERROR: sys-libs/ldb-2.3.0-r1::gentoo failed (configure phase):
 *   configure failed
 * 
 * Call stack:

Indeed, tevent (>=0.10.2) is found, but not its Python module! So the checking phase of the configure stage fails.
First, check whether the USE of sys-libs/tevent includes python and the right version PYTHON_SINGLE_TARGET="python3_8" is used (the Python version may vary here):

root@srv # emerge -pv =tevent-0.10.2::gentoo

[ebuild   R    ] sys-libs/tevent-0.10.2::gentoo  USE="python" ABI_X86="32 (64) (-x32)" PYTHON_SINGLE_TARGET="python3_8 -python3_9" 0 KiB

Total: 1 package (1 reinstall), Size of downloads: 0 KiB

This system uses Python 3.8 and the library sys-libs/tevent was built with this USE flag.

The problem here is that tevent is installed in its own directory: /usr/lib64/tevent. Using the ldconfig utility, the problem is quickly resolved. Just add a file /etc/ld.so.conf.d/tevent_ldb.conf with the path to the library and then regenerate the ld cache:

root@srv # cat /etc/ld.so.conf.d/tevent_ldb.conf
/usr/lib64/tevent
root@srv # ldconfig
root@srv # 

Do not forget to run ldconfig, because otherwise the tevent library won't be added to the dynamic linker cache. Emerging sys-libs/ldb and then samba was successful after this quick workaround! There is a Gentoo bug reported, but the problem still exists – https://bugs.gentoo.org/590026
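
To confirm the library is now present in the linker cache, query the cache with ldconfig (the exact library file names may differ):

root@srv # ldconfig -p | grep tevent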

Zookeeper cluster delete all in a subtree – Failed to delete some node(s) in the subtree!

Deleting a node with a subtree in ZooKeeper is pretty easy, and it can be done even with the ZooKeeper cli from the command line.

Deleting a node which has a subtree can be done with the command "deleteall", but the command should be executed on the leader of the ZooKeeper cluster!

If the user receives the following error:

root@zk01:/apache-zookeeper-3.7.0-bin# ./bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 25] deleteall /clickhouse/statdata/collectd_metrics/replicas/replica_2
Failed to delete some node(s) in the subtree!

First, check whether the ZooKeeper node is the leader, and second, execute the deleteall command on the path that should be deleted. Use zkServer.sh in the ZooKeeper bin directory to check whether the node is the leader:

root@zk03:/apache-zookeeper-3.7.0-bin# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
root@zk03:/apache-zookeeper-3.7.0-bin# ./bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 0] deleteall /clickhouse/statdata/collectd_metrics/replicas/replica_2
[zk: localhost:2181(CONNECTED) 1] 

No error when the command deleteall is executed on the leader. No additional output either.
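
Instead of logging in to the leader machine, the cli can also be pointed directly at the leader from any host (a sketch, assuming zk03 is the leader and its client port 2181 is reachable):

./bin/zkCli.sh -server zk03:2181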

Deleting with the "delete" command results in an error, too, because delete works only on nodes without children:

[zk: localhost:2181(CONNECTED) 0] delete /clickhouse/statdata/collectd_metrics/replicas/replica_2
Node not empty: /clickhouse/statdata/collectd_metrics/replicas/replica_2

Zookeeper cli output

The follower's output for the delete, deleteall and list (ls) commands:

root@zk01:/apache-zookeeper-3.7.0-bin# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
root@zk01:/apache-zookeeper-3.7.0-bin# ./bin/zkCli.sh
Connecting to localhost:2181
2021-09-21 04:24:34,369 [myid:] - INFO  [main:Environment@98] - Client environment:zookeeper.version=3.7.0-e3704b390a6697bfdf4b0bef79e3da7a4f6bac4b, built on 2021-03-17 09:46 UTC
2021-09-21 04:24:34,373 [myid:] - INFO  [main:Environment@98] - Client environment:host.name=zk01
2021-09-21 04:24:34,373 [myid:] - INFO  [main:Environment@98] - Client environment:java.version=11.0.12
2021-09-21 04:24:34,375 [myid:] - INFO  [main:Environment@98] - Client environment:java.vendor=Oracle Corporation
2021-09-21 04:24:34,376 [myid:] - INFO  [main:Environment@98] - Client environment:java.home=/usr/local/openjdk-11
2021-09-21 04:24:34,376 [myid:] - INFO  [main:Environment@98] - Client environment:java.class.path=/apache-zookeeper-3.7.0-bin/bin/../zookeeper-server/target/classes:/apache-zookeeper-3.7.0-bin/bin/../build/classes:/apache-zookeeper-3.7.0-bin/bin/../zookeeper-server/target/lib/*.jar:/apache-zookeeper-3.7.0-bin/bin/../build/lib/*.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/zookeeper-prometheus-metrics-3.7.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/zookeeper-jute-3.7.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/zookeeper-3.7.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/snappy-java-1.1.7.7.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/slf4j-log4j12-1.7.30.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/slf4j-api-1.7.30.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/simpleclient_servlet-0.9.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/simpleclient_hotspot-0.9.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/simpleclient_common-0.9.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/simpleclient-0.9.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-transport-native-unix-common-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-transport-native-epoll-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-transport-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-resolver-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-handler-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-common-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-codec-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-buffer-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/metrics-core-4.1.12.1.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/log4j-1.2.17.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jline-2.14.6.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-util-ajax-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-util-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-servlet-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-server-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-security-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-io-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-http-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/javax.servlet-api-3.1.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jackson-databind-2.10.5.1.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jackson-core-2.10.5.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jackson-annotations-2.10.5.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/commons-cli-1.4.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/audience-annotations-0.12.0.jar:/apache-zookeeper-3.7.0-bin/bin/../zookeeper-*.jar:/apache-zookeeper-3.7.0-bin/bin/../zookeeper-server/src/main/resources/lib/*.jar:/conf:
2021-09-21 04:24:34,376 [myid:] - INFO  [main:Environment@98] - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
2021-09-21 04:24:34,376 [myid:] - INFO  [main:Environment@98] - Client environment:java.io.tmpdir=/tmp
2021-09-21 04:24:34,376 [myid:] - INFO  [main:Environment@98] - Client environment:java.compiler=<NA>
2021-09-21 04:24:34,377 [myid:] - INFO  [main:Environment@98] - Client environment:os.name=Linux
2021-09-21 04:24:34,377 [myid:] - INFO  [main:Environment@98] - Client environment:os.arch=amd64
2021-09-21 04:24:34,377 [myid:] - INFO  [main:Environment@98] - Client environment:os.version=5.4.0-42-generic
2021-09-21 04:24:34,377 [myid:] - INFO  [main:Environment@98] - Client environment:user.name=root
2021-09-21 04:24:34,377 [myid:] - INFO  [main:Environment@98] - Client environment:user.home=/root
2021-09-21 04:24:34,377 [myid:] - INFO  [main:Environment@98] - Client environment:user.dir=/apache-zookeeper-3.7.0-bin
2021-09-21 04:24:34,378 [myid:] - INFO  [main:Environment@98] - Client environment:os.memory.free=55MB
2021-09-21 04:24:34,380 [myid:] - INFO  [main:Environment@98] - Client environment:os.memory.max=256MB
2021-09-21 04:24:34,380 [myid:] - INFO  [main:Environment@98] - Client environment:os.memory.total=64MB
2021-09-21 04:24:34,385 [myid:] - INFO  [main:ZooKeeper@637] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@7946e1f4
2021-09-21 04:24:34,390 [myid:] - INFO  [main:X509Util@77] - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2021-09-21 04:24:34,397 [myid:] - INFO  [main:ClientCnxnSocket@239] - jute.maxbuffer value is 1048575 Bytes
2021-09-21 04:24:34,409 [myid:] - INFO  [main:ClientCnxn@1726] - zookeeper.request.timeout value is 0. feature enabled=false
Welcome to ZooKeeper!
2021-09-21 04:24:34,439 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1171] - Opening socket connection to server localhost/127.0.0.1:2181.
2021-09-21 04:24:34,441 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1173] - SASL config status: Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2021-09-21 04:24:34,454 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1005] - Socket connection established, initiating session, client: /127.0.0.1:40846, server: localhost/127.0.0.1:2181
2021-09-21 04:24:34,467 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1438] - Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x1006ad505c30007, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] delete /clickhouse/statdata/collectd_metrics/replicas/replica_2
Node not empty: /clickhouse/statdata/collectd_metrics/replicas/replica_2
[zk: localhost:2181(CONNECTED) 1] deleteall /clickhouse/statdata/collectd_metrics/replicas/replica_2
Failed to delete some node(s) in the subtree!
[zk: localhost:2181(CONNECTED) 2] ls /clickhouse/statdata/collectd_metrics/replicas/replica_2/parts
[2021_5155978_5164111_2367, 2021_5155978_5164112_2368, 2021_5155978_5164113_2369, 2021_5155978_5164114_2370, 2021_5155978_5164115_2371, 2021_5155978_5164116_2372, 2021_5155978_5164117_2373]

Deleting the node subtree on the ZooKeeper leader:

root@zk03:/apache-zookeeper-3.7.0-bin# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
root@zk03:/apache-zookeeper-3.7.0-bin# ./bin/zkCli.sh 
Connecting to localhost:2181
2021-09-21 04:41:55,397 [myid:] - INFO  [main:Environment@98] - Client environment:zookeeper.version=3.7.0-e3704b390a6697bfdf4b0bef79e3da7a4f6bac4b, built on 2021-03-17 09:46 UTC
2021-09-21 04:41:55,401 [myid:] - INFO  [main:Environment@98] - Client environment:host.name=zk03
2021-09-21 04:41:55,401 [myid:] - INFO  [main:Environment@98] - Client environment:java.version=11.0.12
2021-09-21 04:41:55,404 [myid:] - INFO  [main:Environment@98] - Client environment:java.vendor=Oracle Corporation
2021-09-21 04:41:55,404 [myid:] - INFO  [main:Environment@98] - Client environment:java.home=/usr/local/openjdk-11
2021-09-21 04:41:55,404 [myid:] - INFO  [main:Environment@98] - Client environment:java.class.path=/apache-zookeeper-3.7.0-bin/bin/../zookeeper-server/target/classes:/apache-zookeeper-3.7.0-bin/bin/../build/classes:/apache-zookeeper-3.7.0-bin/bin/../zookeeper-server/target/lib/*.jar:/apache-zookeeper-3.7.0-bin/bin/../build/lib/*.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/zookeeper-prometheus-metrics-3.7.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/zookeeper-jute-3.7.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/zookeeper-3.7.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/snappy-java-1.1.7.7.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/slf4j-log4j12-1.7.30.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/slf4j-api-1.7.30.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/simpleclient_servlet-0.9.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/simpleclient_hotspot-0.9.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/simpleclient_common-0.9.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/simpleclient-0.9.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-transport-native-unix-common-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-transport-native-epoll-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-transport-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-resolver-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-handler-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-common-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-codec-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/netty-buffer-4.1.59.Final.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/metrics-core-4.1.12.1.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/log4j-1.2.17.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jline-2.14.6.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-util-ajax-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-util-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-servlet-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-server-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-security-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-io-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jetty-http-9.4.38.v20210224.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/javax.servlet-api-3.1.0.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jackson-databind-2.10.5.1.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jackson-core-2.10.5.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/jackson-annotations-2.10.5.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/commons-cli-1.4.jar:/apache-zookeeper-3.7.0-bin/bin/../lib/audience-annotations-0.12.0.jar:/apache-zookeeper-3.7.0-bin/bin/../zookeeper-*.jar:/apache-zookeeper-3.7.0-bin/bin/../zookeeper-server/src/main/resources/lib/*.jar:/conf:
2021-09-21 04:41:55,405 [myid:] - INFO  [main:Environment@98] - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
2021-09-21 04:41:55,405 [myid:] - INFO  [main:Environment@98] - Client environment:java.io.tmpdir=/tmp
2021-09-21 04:41:55,405 [myid:] - INFO  [main:Environment@98] - Client environment:java.compiler=<NA>
2021-09-21 04:41:55,405 [myid:] - INFO  [main:Environment@98] - Client environment:os.name=Linux
2021-09-21 04:41:55,405 [myid:] - INFO  [main:Environment@98] - Client environment:os.arch=amd64
2021-09-21 04:41:55,405 [myid:] - INFO  [main:Environment@98] - Client environment:os.version=5.4.0-42-generic
2021-09-21 04:41:55,406 [myid:] - INFO  [main:Environment@98] - Client environment:user.name=root
2021-09-21 04:41:55,406 [myid:] - INFO  [main:Environment@98] - Client environment:user.home=/root
2021-09-21 04:41:55,406 [myid:] - INFO  [main:Environment@98] - Client environment:user.dir=/apache-zookeeper-3.7.0-bin
2021-09-21 04:41:55,406 [myid:] - INFO  [main:Environment@98] - Client environment:os.memory.free=56MB
2021-09-21 04:41:55,408 [myid:] - INFO  [main:Environment@98] - Client environment:os.memory.max=256MB
2021-09-21 04:41:55,408 [myid:] - INFO  [main:Environment@98] - Client environment:os.memory.total=64MB
2021-09-21 04:41:55,414 [myid:] - INFO  [main:ZooKeeper@637] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@7946e1f4
2021-09-21 04:41:55,419 [myid:] - INFO  [main:X509Util@77] - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2021-09-21 04:41:55,427 [myid:] - INFO  [main:ClientCnxnSocket@239] - jute.maxbuffer value is 1048575 Bytes
2021-09-21 04:41:55,439 [myid:] - INFO  [main:ClientCnxn@1726] - zookeeper.request.timeout value is 0. feature enabled=false
Welcome to ZooKeeper!
2021-09-21 04:41:55,473 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1171] - Opening socket connection to server localhost/127.0.0.1:2181.
2021-09-21 04:41:55,477 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1173] - SASL config status: Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2021-09-21 04:41:55,496 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1005] - Socket connection established, initiating session, client: /127.0.0.1:41234, server: localhost/127.0.0.1:2181
2021-09-21 04:41:55,513 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1438] - Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x3006ad4fdbc0002, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] deleteall /clickhouse/statdata/collectd_metrics/replicas/replica_2
[zk: localhost:2181(CONNECTED) 1] 2021-09-21 04:42:53,061 [myid:] - ERROR [main:ServiceUtils@42] - Exiting JVM with code 0
root@zk03:/apache-zookeeper-3.7.0-bin#

Delete the zookeeper logs and snapshot with FileTxnSnapLog from the command-line

Newer versions of ZooKeeper have the ability to auto-purge the logs and snapshots (if autopurge.snapRetainCount and autopurge.purgeInterval are enabled in the configuration), but if an older version is used, or the administrator would like to force freeing space, there is a way to remove the ZooKeeper logs and snapshots using the command line.
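
For reference, the auto-purge mentioned above is enabled with two settings in zoo.cfg (the values below are illustrative):

# zoo.cfg – keep the 3 most recent snapshots and purge every 24 hours
autopurge.snapRetainCount=3
autopurge.purgeInterval=24
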
The manual shows how to do it (https://archive.cloudera.com/cdh4/cdh/4/zookeeper/zookeeperAdmin.html#sc_advancedConfiguration):

 java -cp zookeeper.jar:lib/slf4j-api-1.6.1.jar:lib/slf4j-log4j12-1.6.1.jar:lib/log4j-1.2.15.jar:conf \
     org.apache.zookeeper.server.PurgeTxnLog <dataDir> <snapDir> -n <count>
  • Load all needed jars of the currently installed versions from the zookeeper install directory appended with /lib.
  • Use the class org.apache.zookeeper.server.PurgeTxnLog.
  • Append three parameters: <dataDir> <snapDir> -n <count>. <dataDir> is in fact the /datalog directory, and <snapDir> is the directory where the snapshots are kept.

A clearer and more detailed syntax:

java -cp [zookeeper & slf4j-api & slf4j-log4j12 & log4j & ... ].jar:conf org.apache.zookeeper.server.PurgeTxnLog \
     <base_datalog_dir> <base_snapshot_dir> -n <count>

Here is an example with zookeeper 3.7.0:

root@zoo1:~# cd /apache-zookeeper-3.7.0-bin/lib
root@zoo1:/apache-zookeeper-3.7.0-bin/lib# java -cp zookeeper-3.7.0.jar:slf4j-api-1.7.30.jar:slf4j-log4j12-1.7.30.jar:log4j-1.2.17.jar:zookeeper-jute-3.7.0.jar:snappy-java-1.1.7.7.jar:conf org.apache.zookeeper.server.PurgeTxnLog /datalog/ /data/ -n 3
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.server.persistence.FileTxnSnapLog).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Removing file: Sep 9, 2021, 6:07:13 AM  /datalog/version-2/log.4d6800000001
Removing file: Sep 9, 2021, 5:06:57 AM  /data/version-2/snapshot.4d6700c1986c
Removing file: Sep 9, 2021, 6:07:13 AM  /data/version-2/snapshot.4d6800010b26
Removing file: Sep 9, 2021, 4:45:10 AM  /data/version-2/snapshot.4d6700bef73a
Removing file: Sep 9, 2021, 5:44:23 AM  /data/version-2/snapshot.4d6700c62032
Removing file: Sep 9, 2021, 5:31:04 AM  /data/version-2/snapshot.4d6700c48e0f
Removing file: Sep 9, 2021, 5:37:37 AM  /data/version-2/snapshot.4d6700c55610
Removing file: Sep 9, 2021, 5:14:36 AM  /data/version-2/snapshot.4d6700c29039
Removing file: Sep 9, 2021, 5:22:33 AM  /data/version-2/snapshot.4d6700c387b3
Removing file: Sep 9, 2021, 4:57:02 AM  /data/version-2/snapshot.4d6700c0571e
Removing file: Sep 9, 2021, 5:56:59 AM  /data/version-2/snapshot.4d6700c63856

A message for each removed file should be printed; if not, the directories' paths are probably wrong!

This is how the zookeeper is organized in the system:

  1. The Zookeeper installation directory is /apache-zookeeper-3.7.0-bin, meaning the lib directory is /apache-zookeeper-3.7.0-bin/lib, where the jars are placed:
    root@zoo1:/apache-zookeeper-3.7.0-bin/lib# ls /apache-zookeeper-3.7.0-bin/lib/
    audience-annotations-0.12.0.jar               jline-2.14.6.LICENSE.txt                               netty-transport-native-unix-common-4.1.59.Final.LICENSE.txt
    commons-cli-1.4.jar                           jline-2.14.6.jar                                       netty-transport-native-unix-common-4.1.59.Final.jar
    jackson-annotations-2.10.5.jar                log4j-1.2.17.LICENSE.txt                               simpleclient-0.9.0.LICENSE.txt
    jackson-core-2.10.5.jar                       log4j-1.2.17.jar                                       simpleclient-0.9.0.jar
    jackson-databind-2.10.5.1.jar                 metrics-core-4.1.12.1.jar                              simpleclient_common-0.9.0.jar
    javax.servlet-api-3.1.0.jar                   metrics-core-4.1.12.1.jar_LICENSE.txt                  simpleclient_common-0.9.0_LICENSE.txt
    jetty-http-9.4.38.v20210224.LICENSE.txt       netty-buffer-4.1.59.Final.LICENSE.txt                  simpleclient_hotspot-0.9.0.jar
    jetty-http-9.4.38.v20210224.jar               netty-buffer-4.1.59.Final.jar                          simpleclient_hotspot-0.9.0_LICENSE.txt
    jetty-io-9.4.38.v20210224.LICENSE.txt         netty-codec-4.1.59.Final.LICENSE.txt                   simpleclient_servlet-0.9.0.jar
    jetty-io-9.4.38.v20210224.jar                 netty-codec-4.1.59.Final.jar                           simpleclient_servlet-0.9.0_LICENSE.txt
    jetty-security-9.4.38.v20210224.LICENSE.txt   netty-common-4.1.59.Final.LICENSE.txt                  slf4j-1.7.30.LICENSE.txt
    jetty-security-9.4.38.v20210224.jar           netty-common-4.1.59.Final.jar                          slf4j-api-1.7.30.jar
    jetty-server-9.4.38.v20210224.LICENSE.txt     netty-handler-4.1.59.Final.LICENSE.txt                 slf4j-log4j12-1.7.30.jar
    jetty-server-9.4.38.v20210224.jar             netty-handler-4.1.59.Final.jar                         snappy-java-1.1.7.7.jar
    jetty-servlet-9.4.38.v20210224.LICENSE.txt    netty-resolver-4.1.59.Final.LICENSE.txt                snappy-java-1.1.7.7.jar_LICENSE.txt
    jetty-servlet-9.4.38.v20210224.jar            netty-resolver-4.1.59.Final.jar                        zookeeper-3.7.0.jar
    jetty-util-9.4.38.v20210224.LICENSE.txt       netty-transport-4.1.59.Final.LICENSE.txt               zookeeper-jute-3.7.0.jar
    jetty-util-9.4.38.v20210224.jar               netty-transport-4.1.59.Final.jar                       zookeeper-prometheus-metrics-3.7.0.jar
    jetty-util-ajax-9.4.38.v20210224.LICENSE.txt  netty-transport-native-epoll-4.1.59.Final.LICENSE.txt
    jetty-util-ajax-9.4.38.v20210224.jar          netty-transport-native-epoll-4.1.59.Final.jar
    
  2. The datalog directory is under /datalog:
    root@zoo1:/apache-zookeeper-3.7.0-bin/lib# ls -altr /datalog/
    total 12
    drwxr-xr-x 1 root      root      4096 Sep  9 05:56 ..
    drwxr-xr-x 3 zookeeper root      4096 Sep  9 05:56 .
    drwxr-xr-x 2 zookeeper zookeeper 4096 Sep  9 07:04 version-2
    root@zoo1:/apache-zookeeper-3.7.0-bin/lib# ls -altr /datalog/version-2/
    total 131852
    drwxr-xr-x 3 zookeeper root          4096 Sep  9 05:56 ..
    -rw-r--r-- 1 zookeeper zookeeper 67108880 Sep  9 06:18 log.4d6800010b28
    -rw-r--r-- 1 zookeeper zookeeper 67108880 Sep  9 06:29 log.4d6800025751
    -rw-r--r-- 1 zookeeper zookeeper 67108880 Sep  9 06:39 log.4d680003b40f
    -rw-r--r-- 1 zookeeper zookeeper 67108880 Sep  9 06:52 log.4d680004d2d8
    drwxr-xr-x 3 zookeeper zookeeper     4096 Sep  9 06:52 .
    -rw-r--r-- 1 zookeeper zookeeper 67108880 Sep  9 07:03 log.4d6800064eda
    
  3. The snapshot directory is under /data:
    root@zoo1:/apache-zookeeper-3.7.0-bin/lib# ls -altr /data/
    total 968
    -rw-r--r-- 1 zookeeper root           2 Aug 31 07:47 myid
    drwxr-xr-x 3 zookeeper root        4096 Aug 31 07:55 .
    drwxr-xr-x 1 root      root        4096 Sep  9 05:56 ..
    drwxr-xr-x 3 zookeeper zookeeper 974848 Sep  9 07:04 version-2
    root@zoo1:/apache-zookeeper-3.7.0-bin# ls -altr /data/version-2/
    total 79004
    drwxr-xr-x 3 zookeeper root         4096 Aug 31 07:55 ..
    -rw-r--r-- 1 zookeeper zookeeper 6402486 Sep  9 04:45 snapshot.4d6700bef73a
    -rw-r--r-- 1 zookeeper zookeeper 6068093 Sep  9 04:57 snapshot.4d6700c0571e
    -rw-r--r-- 1 zookeeper zookeeper 6905356 Sep  9 05:06 snapshot.4d6700c1986c
    -rw-r--r-- 1 zookeeper zookeeper 6484358 Sep  9 05:14 snapshot.4d6700c29039
    -rw-r--r-- 1 zookeeper zookeeper 6319179 Sep  9 05:22 snapshot.4d6700c387b3
    -rw-r--r-- 1 zookeeper zookeeper 6362239 Sep  9 05:31 snapshot.4d6700c48e0f
    -rw-r--r-- 1 zookeeper zookeeper 6280967 Sep  9 05:37 snapshot.4d6700c55610
    -rw-r--r-- 1 zookeeper zookeeper 6251946 Sep  9 05:44 snapshot.4d6700c62032
    -rw-r--r-- 1 zookeeper zookeeper       5 Sep  9 05:56 acceptedEpoch
    -rw-r--r-- 1 zookeeper zookeeper 6208681 Sep  9 05:56 snapshot.4d6700c63856
    -rw-r--r-- 1 zookeeper zookeeper       5 Sep  9 05:56 currentEpoch
    -rw-r--r-- 1 zookeeper zookeeper 7442360 Sep  9 06:07 snapshot.4d6800010b26
    -rw-r--r-- 1 zookeeper zookeeper 7666290 Sep  9 06:18 snapshot.4d680002574f
    -rw-r--r-- 1 zookeeper zookeeper 7467034 Sep  9 06:29 snapshot.4d680003b40d
    drwxr-xr-x 2 root      root         4096 Sep  9 06:32 version-2
    drwxr-xr-x 3 zookeeper zookeeper  974848 Sep  9 06:32 .
    

Do not pass /datalog/version-2 or /data/version-2 as the directories – that is wrong and no files will be removed!

Troubleshooting

If executing the above line outputs a missing Java class like below, the easiest way is to search for the class name in [zookeeper-install-directory]/lib/ with a tool like grep or any other text search tool.

root@zoo1:/apache-zookeeper-3.7.0-bin/lib# java -cp zookeeper-3.7.0.jar:slf4j-api-1.7.30.jar:slf4j-log4j12-1.7.30.jar:log4j-1.2.17.jar:zookeeper-jute-3.7.0.jar:conf org.apache.zookeeper.server.PurgeTxnLog /datalog/ /data/ -n 3
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.server.persistence.FileTxnSnapLog).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.NoClassDefFoundError: org/xerial/snappy/SnappyInputStream
        at org.apache.zookeeper.server.persistence.FileSnap.findNValidSnapshots(FileSnap.java:171)
        at org.apache.zookeeper.server.persistence.FileTxnSnapLog.findNValidSnapshots(FileTxnSnapLog.java:568)
        at org.apache.zookeeper.server.PurgeTxnLog.purge(PurgeTxnLog.java:82)
        at org.apache.zookeeper.server.PurgeTxnLog.main(PurgeTxnLog.java:192)
Caused by: java.lang.ClassNotFoundException: org.xerial.snappy.SnappyInputStream
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(Unknown Source)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(Unknown Source)
        at java.base/java.lang.ClassLoader.loadClass(Unknown Source)
        ... 4 more
root@zoo1:/apache-zookeeper-3.7.0-bin/lib# grep -ir SnappyInputStream
grep: snappy-java-1.1.7.7.jar: binary file matches

So there is a match for the name SnappyInputStream in the jar file snappy-java-1.1.7.7.jar – just include it in the java -cp command.