Install and use collectd-ping under CentOS 8 to monitor latency

Tracking the network latency of a server's network is not an easy job. Most monitoring software is capable of monitoring the state of the server, but how to monitor the state of the connectivity, the network latency, and even the Internet connectivity against well-known addresses like 1.1.1.1 or 8.8.8.8? It is easy enough to do with ICMP and the ping command, but using the collectd daemon and its ping plugin collectd-ping (https://collectd.org/wiki/index.php/Plugin:Ping) saves all the history in a time series back-end, and Grafana (https://grafana.com/) or other graphing/histogram software can then draw graphs from it.
Using the collectd-ping plugin in conjunction with Grafana can achieve an effect similar to the good old smokeping.
CentOS 7 included the collectd-ping plugin in its official repository, but in CentOS 8 the plugin is missing! Under CentOS 8, the CentOS SIG OpsTools (https://wiki.centos.org/SpecialInterestGroup/OpsTools) includes the collectd-ping plugin in its repository. More on the SIG and OpsTools may be found on that page. In general, it is safe to use this repository; it will not break the user's system.
Here is how to install and configure it. Real Grafana examples are also included at the end.

The example here assumes there is a Grafana server installed with an InfluxDB backend.

STEP 1) Add OpsTools repository and install the collectd and collectd-ping.

The OpsTools repository is installed with centos-release-opstools package.
Here is what is going to be installed:

dnf install -y centos-release-opstools
dnf install -y collectd collectd-ping
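
After the packages are installed, the ping plugin must be enabled in the collectd configuration. Here is a minimal sketch of the plugin section; the hosts and timing values are only an example and the file path assumes the default /etc/collectd.d/ include directory, so adjust everything to the actual setup:

# /etc/collectd.d/ping.conf - example ping plugin configuration
LoadPlugin ping
<Plugin ping>
  Host "1.1.1.1"
  Host "8.8.8.8"
  Interval 1.0
  Timeout 0.9
</Plugin>

Restart the collectd service (systemctl restart collectd) for the plugin to take effect.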

Keep on reading!

Install newer version of python 3.10 under CentOS 8

At present, the default version of Python under CentOS 8 is Python 3.6.8, which is around 6 years old. More and more Python software needs newer versions, so it is vital for a stable Linux distro to have an easy way to install newer programming language releases like Python!
Using Conda, it is really easy to manage separate environments for different Python versions!

Conda is an open source package management system and environment management system that runs on Windows, macOS and Linux.

More on Conda: Installing conda command line in various systems with miniconda and create a simple python environment, and all Conda tags – https://ahelpme.com/category/software/anaconda/. This article is not intended to introduce the reader to Conda, but to show how easy it is to install a newer version of Python 3.10 under CentOS 8 – and it is easy thanks to the Conda package management system!

To summarize, the purpose is to have a user with Python 3.10. The user can be an ordinary or an administrative one, or even root.
Using this method, older or newer versions of Python may be installed on the same machine at the same time.

STEP 1) Install the latest Miniconda3

The installation is easy and for more details check out the first link above.
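
As a quick sketch, assuming the official Miniconda3 installer for Linux x86_64 and a non-interactive installation into the user's home directory, the whole procedure boils down to a few commands:

# download and install Miniconda3 into the user's home directory (example path)
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3
# activate conda in the current shell and create a Python 3.10 environment
source ~/miniconda3/bin/activate
conda create -y -n py310 python=3.10
conda activate py310
python --version

The environment name py310 is just an example; any name may be used and multiple environments with different Python versions can coexist.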
Keep on reading!

How to run QEMU full virtualization with MacVTap networking using NetworkManager under CentOS 8

In addition to the previously presented article on the subject Howto do QEMU full virtualization with MacVTap networking, this one shows how to run a QEMU virtual machine with a MacVTap device in bridge mode on the host server, configured only by using the NetworkManager cli – nmcli.

It is worth mentioning that MacVTap is a virtual bridge, which makes the host and the guest device show up directly on the switch the host is connected to. So when using QEMU, the guest virtualized system behaves as if it were connected to the host's switch, with one limitation – the host and the guest cannot communicate with each other. The IPs of the host won't be reachable from the guest, so NAT (masquerade) between the host and the guest is not possible with this setup. Still, if the NAT server is on another machine or a real IP is planned for the guest, MacVTap is the right functionality to use with the QEMU guest system.

Summary

  1. Add MacVTap device in bridge mode with name macvtap0.
  2. Install QEMU.
  3. Create QEMU local disk.
  4. Run a QEMU virtual server.

STEP 1) Add MacVTap device in bridge mode with name macvtap0

[root@srv ~]# nmcli connection add type macvlan dev enp0s3 mode bridge tap yes ifname macvtap0 con-name macvtap0 ip4 0.0.0.0/24
Connection 'macvtap0' (7a5ef04c-ea98-4642-ac5d-4239f715f631) successfully added.
[root@srv ~]# nmcli con
NAME      UUID                                  TYPE      DEVICE   
enp0s3    09497bbf-da59-42b7-a72c-d69369760b36  ethernet  enp0s3   
macvtap0  7a5ef04c-ea98-4642-ac5d-4239f715f631  macvlan   macvtap0 

First, create a MacVTap device with the name macvtap0 in bridge mode with the network interface enp0s3, and a connection with the name macvtap0. The IP is set to manual mode.
More detailed information on how to create and add a MacVTap device with NetworkManager is available here – Create MacVTap device using NetworkManager nmcli under CentOS 8

STEP 2) Install QEMU.

Install the QEMU virtualization tools under CentOS 8 Stream. At present, the QEMU version is 6.2, which is pretty new.
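A likely installation command, assuming the stock CentOS 8 Stream AppStream packages are used (the exact package list is shown further in the article):

dnf install -y qemu-kvm qemu-img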
Keep on reading!

Create MacVTap device using NetworkManager nmcli under CentOS 8

In continuation of NetworkManager management with nmcli, here is a quick Linux console tip for users of CentOS 8 (or of any distribution that uses NetworkManager for managing the networking): how to create a virtualized bridge device – a MacVTap device – with the NetworkManager nmcli command utility, so that all the configuration is preserved over reboots.

nmcli connection add type macvlan dev enp0s3 mode bridge tap yes ifname macvtap0 con-name macvtap0 ip4 0.0.0.0/24

The line above creates a virtualized bridged interface and a connection with the name macvtap0. The MacVTap device with the name macvtap0 is in bridge mode with the physical network interface enp0s3 and uses a manual IP setting. If the IP is not included, DHCP will be used by default.

There is one big limitation – there is no link between enp0s3 and macvtap0. When used, macvtap0 can receive packets from the network through enp0s3, but there is no direct link between the two network devices. In simple words, when macvtap0 is used in a virtualized environment, the virtual machine may have access to the network shared with enp0s3, but the virtual machine cannot communicate with the IPs of enp0s3!

Typically, this is used to make both the guest and the host show up directly on the switch that the host is connected to.

Linux Virtualization, https://virt.kernelnewbies.org/MacVTap

Initial state, only one connection in NetworkManager.

The main server connection with name enp0s3 using the same name network interface enp0s3:

[root@srv ~]# nmcli con
NAME    UUID                                  TYPE      DEVICE 
enp0s3  09497bbf-da59-42b7-a72c-d69369760b36  ethernet  enp0s3
[root@srv ~]# nmcli 
enp0s3: connected to enp0s3
        "Intel 82540EM"
        ethernet (e1000), 08:00:27:03:C9:2E, hw, mtu 1500
        ip4 default
        inet4 192.168.0.20/24
        route4 192.168.0.0/24 metric 100
        route4 0.0.0.0/0 via 192.168.0.1 metric 100
        inet6 fe80::a00:27ff:fe03:c92e/64
        route6 fe80::/64 metric 100

lo: unmanaged
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
        servers: 8.8.8.8 1.1.1.1
        interface: enp0s3

Use "nmcli device show" to get complete information about known devices and
"nmcli connection show" to get an overview on active connection profiles.

Consult nmcli(1) and nmcli-examples(7) manual pages for complete usage details.

Add the MacVTap device with the name macvtap0

[root@srv ~]# nmcli connection add type macvlan dev enp0s3 mode bridge tap yes ifname macvtap0 con-name macvtap0 ip4 0.0.0.0/24
Connection 'macvtap0' (7a5ef04c-ea98-4642-ac5d-4239f715f631) successfully added.

A MacVTap device, a network connection, and a link are established. The name of both the MacVTap device and the network connection is macvtap0.
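
A quick way to verify the new device and connection (the exact output depends on the system):

nmcli con
ip link show macvtap0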

Keep on reading!

How to run QEMU full virtualization with bridged networking using NetworkManager under CentOS 8

In addition to the previously presented article on the subject Howto do QEMU full virtualization with bridged networking, this one shows how to run a QEMU virtual machine with bridged networking on the host server, configured only by using the NetworkManager cli – nmcli.

It is worth mentioning that the bridge interface presented in this article is a local bridge device for the server – no Internet addresses or real (main, Internet-connected) network cards are bound to it. So no MAC addresses of the enslaved bridge devices will leave the server.
If a network bridge that includes the server's main (Internet-connected) network device is needed, for example, to set real IPs in a virtual machine, there is another article on the bridge networking subject – Replace current interface configuration with a bridge device using nmcli (NetworkManager)

Summary

  1. Add bridge and TUN/TAP device.
  2. Install QEMU.
  3. Create QEMU local disk.
  4. Run a QEMU virtual server.

STEP 1) Add bridge and TUN/TAP device.

[root@srv ~]# nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "192.168.0.1/24"
Connection 'br0' (ad6878c8-1e06-4af8-a81f-1eb39e761df8) successfully added.
[root@srv ~]# nmcli connection up br0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@srv ~]# nmcli connection add type tun ifname tap0 con-name tap0 mode tap owner 0 ip4 0.0.0.0/24
Connection 'tap0' (dacee2be-a14b-4cf5-83d4-96d072a96725) successfully added.
[root@srv ~]# nmcli con add type bridge-slave ifname tap0 master br0
Connection 'bridge-slave-tap0' (66490382-b239-4eb2-ae1d-ee811e39596c) successfully added.
[root@srv ~]# nmcli con
NAME               UUID                                  TYPE      DEVICE 
System eno1        abf4c85b-57cc-4484-4fa9-b4a71689c359  ethernet  eno1   
br0                ad6878c8-1e06-4af8-a81f-1eb39e761df8  bridge    br0    
tap0               dacee2be-a14b-4cf5-83d4-96d072a96725  tun       tap0   
bridge-slave-tap0  66490382-b239-4eb2-ae1d-ee811e39596c  ethernet  -- 

First, a bridge device is added with a manual IP. If the IP is skipped, the bridge interface br0 would have DHCP enabled by default, which may not be desired.
More detailed information on how to create a bridge and add a TUN/TAP device with NetworkManager is available here – Create bridge and add TUN/TAP device using NetworkManager nmcli under CentOS 8

STEP 2) Install QEMU.

Install the QEMU virtualization tools under CentOS 8 Stream. At present, the QEMU version is 6.2, which is pretty new.
Keep on reading!

Create bridge and add TUN/TAP device using NetworkManager nmcli under CentOS 8

This article shows how to create a network bridge device and a TUN/TAP device, which then is added to the bridge. The CentOS 8 Stream is used along with the console NetworkManager program nmcli.
TUN/TAP devices are often used in the virtualization world as a link device between the host machine and the virtual machine.

This article is for the case when the bridge does not include the main (Internet-facing) network interface of the server but is an additional device, whose MAC address and the virtual machines' MAC addresses will not be exposed through the server's main network interface.

If the server’s main network interface should be included in the bridge device, i.e. replace the main network interface with the bridge there is another article on the subject – Replace current interface configuration with a bridge device using nmcli (NetworkManager)

Device names are as follows:

  • br0 is the name of the network bridge.
  • 10.10.10.1 with mask /24 is the IP of the bridge device with name br0. Because the idea is to use the bridge only locally, a local (private) address is used. The IP is set manually.
  • tap0 is the name of TUN/TAP device.
  • enp0s3 is the server’s main network connection. It is not used in this howto.

Here are all the commands to create a bridge, create a TUN/TAP device and add it to the bridge, and then activate the bridge‘s link.

nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "10.10.10.1/24"
nmcli con up br0
nmcli connection add type tun ifname tap0 con-name tap0 mode tap owner 0 ip4 0.0.0.0/24
nmcli con add type bridge-slave ifname tap0 master br0
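
Once the four commands complete, a quick check that the bridge and the TUN/TAP device exist and are linked (the output differs per system):

nmcli con
ip addr show br0
ip link show tap0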

Here are the steps with much more details and information including all the command output.
The networking before any reconfiguration:

[root@srv ~]# nmcli
enp0s3: connected to enp0s3
        "Intel 82540EM"
        ethernet (e1000), 08:00:27:03:C9:2E, hw, mtu 1500
        ip4 default
        inet4 192.168.0.20/24
        route4 192.168.0.0/24 metric 100
        route4 0.0.0.0/0 via 192.168.0.1 metric 100
        inet6 fe80::a00:27ff:fe03:c92e/64
        route6 fe80::/64 metric 100

lo: unmanaged
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
        servers: 8.8.8.8 1.1.1.1
        interface: enp0s3

Use "nmcli device show" to get complete information about known devices and
"nmcli connection show" to get an overview on active connection profiles.

Consult nmcli(1) and nmcli-examples(7) manual pages for complete usage details.
[root@srv ~]# nmcli con
NAME    UUID                                  TYPE      DEVICE 
enp0s3  09497bbf-da59-42b7-a72c-d69369760b36  ethernet  enp0s3 

Keep on reading!

Install and deploy MySQL 8 InnoDB Cluster with 3 nodes under CentOS 8 and MySQL Router for HA

This article is going to show how to install a MySQL server and deploy a MySQL 8 InnoDB Cluster with three nodes behind a MySQL Router to achieve high availability with a MySQL database back-end.

In really simple words, MySQL 8.0 InnoDB Cluster is just MySQL replication on steroids – i.e. a little additional work between the servers in the group before committing the transactions. It uses the MySQL Group Replication plugin, which allows the group to operate in two different modes:

  1. a single-primary mode with automatic primary election. Only one server gets the updates.
  2. a multi-master mode – all servers accept the updates. For advanced setups.

Group Replication is bi-directional: the servers communicate with each other and use row-based replication to replicate the data. The main limitation is that only the MySQL InnoDB engine is supported, because of its transaction support. So the performance (and most features and caveats) of MySQL InnoDB is not impacted by the cluster setup and its overhead, compared to MySQL in replication mode (or single-server setups) from previous MySQL versions. Still, all read-write transactions commit only after they have been approved by the group – a verification process providing consensus between the servers. In fact, most of the features, like GTIDs and row-based replication (i.e. the different replication modes), were developed for and are available in older versions. The new part is handled by the Group Communication System (GCS) protocols, which provide a failure detection mechanism, a group membership service, and safe and completely ordered message delivery (more on the subject here: https://dev.mysql.com/doc/refman/8.0/en/group-replication-background.html).
In addition to the group replication, MySQL Router 8.0 provides the HA (high availability). The program that redirects, fails over, and balances connections to the right server in the group is MySQL Router. Clients may connect directly to the servers in the group, but only clients connecting through MySQL Router will have HA, because Group Replication does not have a built-in method for it. It is worth noting there could be many MySQL Routers on different servers; they do not need to communicate or synchronize anything with each other. So the router could be installed where the application is installed, on a separate dedicated server, or on every MySQL server in the group.

Key points in this article of MySQL InnoDB Cluster deployment:

  • CentOS 8 Stream is used for the operating system
  • SELinux tuning to allow the MySQL process to connect to the network.
  • CentOS 8 firewall tuning to unblock the traffic between the nodes.
  • Disable mysql package system module to use the official MySQL repository.
  • Three MySQL 8.0.28 server nodes will be installed
  • To create and manage the cluster, MySQL Shell 8.0 and its dba object are used (a minimal sketch follows this list).
  • A MySQL Router will be installed on each of the three MySQL nodes.
  • Each server will have the hostnames of all three servers in its /etc/hosts file – db-cluster-1, db-cluster-2, db-cluster-3.
  • The cluster is in group replication with one primary (i.e. master) and two secondary nodes (i.e. slaves)
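
As a preview of what the cluster creation looks like, here is a minimal sketch with the MySQL Shell dba AdminAPI. The user, hostnames, and cluster name are only examples matching the /etc/hosts entries above, not the exact commands from the full article:

# connect to the first node with MySQL Shell
mysqlsh root@db-cluster-1:3306

// inside MySQL Shell: prepare the instance for InnoDB Cluster usage
dba.configureInstance('root@db-cluster-1:3306')
// create the cluster on the primary node
var cluster = dba.createCluster('mycluster')
// add the other two nodes as secondaries
cluster.addInstance('root@db-cluster-2:3306')
cluster.addInstance('root@db-cluster-3:3306')
// verify the cluster state
cluster.status()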

STEP 1) Install CentOS 8 Stream.

There is an article on installing CentOS 8 – How to do a network installation of CentOS 8 (8.0.1950) – minimal server installation – and the installation is essentially the same for CentOS 8 Stream.

STEP 2) Prepare the CentOS 8 Stream to install MySQL 8 server.

At present, the latest MySQL Community edition is 8.0.28. The preferred way to install the MySQL server is to download the RPM repository file from the MySQL web site – https://dev.mysql.com/downloads/repo/yum/
Keep on reading!

Installing single node Elasticsearch 7.16 and Kibana 7.16 behind nginx web server under CentOS 8

This article will show how to install two big pieces of software – Elasticsearch to store information and Kibana to visualize the information – under CentOS 8. Elasticsearch is ideal for storing big data such as logs from user activities or server logs – one central repository for data, which is structured properly and can be easily accessed and manipulated with various software.
Kibana is used mainly for visualizing the data stored in the Elasticsearch server and for managing the Elasticsearch service over the web.

Here is a simple example: send the web servers' logs to Elasticsearch and visualize statistical data with Kibana.

Using the rpm repository for the two pieces of software is the best option for installation and future upgrades.

STEP 1) Install the CentOS 8.

How to install CentOS 8 could be found here – How to do a network installation of CentOS 8 (8.0.1950) – minimal server installation.
Or, if a container approach is needed, there is a howto with an LXC container – Run LXC CentOS 8 container with bridged network under CentOS 8.

STEP 2) Install the Elasticsearch.

This installation and configuration is for a single-node server setup.
First, create an rpm repository file /etc/yum.repos.d/elasticsearch.repo and fill it with the Elasticsearch repository information:

[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Then import the Elasticsearch GPG key and install the Elasticsearch software:

[root@loganalyzer ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@loganalyzer ~]# dnf install elasticsearch
Last metadata expiration check: 0:00:19 ago on 11.12.2021 (Sat) 12:43:24 UTC.
Dependencies resolved.
==========================================================================================================================================
 Package            Architecture             Version                     Repository                                Size
==========================================================================================================================================
Installing:
 elasticsearch      x86_64                   7.16.0-1                    elasticsearch                             327 M

Transaction Summary
=========================================================================================================================================
Install  1 Package

Total download size: 327 M
Installed size: 526 M
Is this ok [y/N]: y
Downloading Packages:
elasticsearch-7.16.0-x86_64.rpm                                                                                 43 MB/s | 327 MB     00:07    
------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                           43 MB/s | 327 MB     00:07     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                     1/1 
  Running scriptlet: elasticsearch-7.16.0-1.x86_64                                                                                                                                       1/1 
Creating elasticsearch group... OK
Creating elasticsearch user... OK

  Installing       : elasticsearch-7.16.0-1.x86_64                                                                                                                                       1/1 
  Running scriptlet: elasticsearch-7.16.0-1.x86_64                                                                                                                                       1/1 
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service

Created elasticsearch keystore in /etc/elasticsearch/elasticsearch.keystore

[/usr/lib/tmpfiles.d/elasticsearch.conf:1] Line references path below legacy directory /var/run/, updating /var/run/elasticsearch → /run/elasticsearch; please update the tmpfiles.d/ drop-in file accordingly.

  Verifying        : elasticsearch-7.16.0-1.x86_64                                                                                                                                       1/1 

Installed:
  elasticsearch-7.16.0-1.x86_64                                                                                                                                                              

Complete!

The configuration files are placed in /etc/elasticsearch/:
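The main one is /etc/elasticsearch/elasticsearch.yml. A minimal single-node sketch (example values only; the full article goes through the real configuration) could look like this:

# /etc/elasticsearch/elasticsearch.yml - single node example
cluster.name: loganalyzer
node.name: node-1
network.host: 127.0.0.1
http.port: 9200
discovery.type: single-node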
Keep on reading!

Run LXC CentOS 8 container with bridged network under CentOS 8

The LXC container software comes to CentOS 8 with the EPEL 8 repository. LXC is a multi-process container, which offers to boot a Linux distribution under container isolation. It is very similar to systemd-nspawn and a bit different from docker containers. LXC containers are used when multiple processes are needed under one container. In most cases, the LXC container is a fully-featured Linux distribution (systemd or SysV, i.e. init) booted under a Linux container.
There are several major differences between docker/podman containers and LXC:

  • Multiple processes.
  • Easy configuration modification. Even hot-plugging is supported.
  • Unprivileged Linux containers.
  • Complex network setups. Multiple network interfaces connected to different networks, for example.
  • Live systemd, i.e. systemd or SysV init is booted as usual. Much software relies on systemd/udev features and, in many cases, it is really hard to run software without a systemd or init process.

Here are the steps to boot a CentOS 8 container under a CentOS 8 host server:

STEP 1) Install EPEL repository.

EPEL CentOS 8 repository now includes LXC 3.0 software.

dnf install -y epel-release

STEP 2) Install LXC software and start LXC service.

At present, the LXC software version is 3.0.4. The package lxc-templates includes template scripts to create a Linux distribution environment like CentOS, Ubuntu, Debian, Gentoo, ArchLinux, Oracle, Alpine, and many others and it also includes the configuration templates to start these Linux distributions.

dnf install -y lxc lxc-templates
dnf install -y wget tar

The wget and tar packages are required if an LXC template installation is going to be performed.

STEP 3) Create a CentOS 8 container with the help of LXC templates and run it.

Use the lxc-templates to prepare a CentOS 8 container environment. The currently available containers are listed here http://images.linuxcontainers.org/. Check out the URL and choose the right container. Here the CentOS 8 amd64 is used.

lxc-create --template download -n mycontainer -- --dist centos --release 8 --arch amd64 --keyserver hkp://keyserver.ubuntu.com
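
Once the container is created, it can be started and a shell attached to it with the standard LXC tools (a quick sketch; the full boot output is shown further in the article):

# start the container and attach a shell to it
lxc-start -n mycontainer
lxc-attach -n mycontainer
# list the containers and their current state
lxc-ls -f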

Keep on reading!

Replace current interface configuration with a bridge device using nmcli (NetworkManager)

This article shows how the primary network interface can be replaced by a bridge device, with the network interface becoming a part of the bridge as a slave device, without a reboot or restart of the server, using nmcli under CentOS 8 (and probably any other Linux distribution, like Ubuntu, that uses NetworkManager to configure network devices).
The main steps are:

  1. Create a connection profile of a bridge device.
  2. Set the same network configuration as the primary network to the bridge device.
  3. Create a connection profile for the primary interface device as a slave network device to the newly created bridge.
  4. Delete the current primary connection, which is using the primary network device and configuration.
  5. Reload the bridge connection profile for the changes to take effect. The bridge device will actually begin to work.

The main goal is not to reboot the server or lose the connection to it. The primary network interface is the only connection on the server, and losing it would make the server unreachable. So the last two steps should be performed in the background, in a script, or in a detached terminal (like screen).
Here are all the commands in one place:

nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "192.168.0.20/24" ipv4.gateway "192.168.0.1" ipv4.dns "8.8.8.8 1.1.1.1"
nmcli con add type bridge-slave ifname enp0s3 master br0
nmcli con del "enp0s3"; nmcli con reload "br0" &

Here is the detailed information for the above commands:
Keep on reading!