Change the LXC container root folder under CentOS with SELinux

The default LXC container folder in CentOS (all versions – 7, 8, Stream 8 and Stream 9) is /var/lib/lxc, which may reside in the root partition. When changing the lxc.rootfs (or the main directory /var/lib/lxc) to another place, the containers may still appear to work without any additional SELinux permissions, but some tools like lxc-attach will definitely stop working with permission errors – lxc_attach_run_shell: 1333 Permission denied – failed to exec shell. This article shows how to use lxc-create and SELinux commands to properly change the LXC container’s rootfs.
For detailed information on how to create an LXC container, check out – Run LXC CentOS Stream 9 container with bridged network under CentOS Stream 9 or Run LXC Ubuntu 22.04 LTS container with bridged network under CentOS Stream 9.

Create an LXC container with a non-default path

  • Change the rootfs only. To change only the LXC container root filesystem location, use the “--dir=” option of lxc-create:
    lxc-create --template download -n mycontainer2 --dir=/mnt/storage/servers/mycontainer2 -- --dist centos --release 9-Stream --arch amd64
    

    It will place the files under /mnt/storage/servers/mycontainer2, but the configuration will still be located in /var/lib/lxc/mycontainer2/.

    [root@srv ~]# ls -altr /var/lib/lxc/mycontainer2/
    total 16
    drwxr-xr-x. 3 root root 4096 Oct 14 13:42 ..
    drwxr-xr-x. 2 root root 4096 Oct 14 13:42 rootfs
    -rw-r-----. 1 root root  775 Oct 14 13:42 config
    drwxrwx---. 3 root root 4096 Oct 14 13:42 .
    [root@srv ~]# ls -altr /var/lib/lxc/mycontainer2/rootfs/
    total 8
    drwxr-xr-x. 2 root root 4096 Oct 14 13:42 .
    drwxrwx---. 3 root root 4096 Oct 14 13:42 ..
    [root@srv ~]# ls -altr /mnt/storage/servers/mycontainer2/
    total 76
    drwxrwxrwt.  2 root root 4096 Aug  9  2021 tmp
    drwxr-xr-x.  2 root root 4096 Aug  9  2021 srv
    lrwxrwxrwx.  1 root root    8 Aug  9  2021 sbin -> usr/sbin
    drwxr-xr-x.  2 root root 4096 Aug  9  2021 opt
    drwxr-xr-x.  2 root root 4096 Aug  9  2021 mnt
    drwxr-xr-x.  2 root root 4096 Aug  9  2021 media
    lrwxrwxrwx.  1 root root    9 Aug  9  2021 lib64 -> usr/lib64
    lrwxrwxrwx.  1 root root    7 Aug  9  2021 lib -> usr/lib
    drwxr-xr-x.  2 root root 4096 Aug  9  2021 home
    dr-xr-xr-x.  2 root root 4096 Aug  9  2021 boot
    lrwxrwxrwx.  1 root root    7 Aug  9  2021 bin -> usr/bin
    dr-xr-xr-x.  2 root root 4096 Aug  9  2021 afs
    dr-xr-xr-x.  2 root root 4096 Oct 14 07:11 sys
    dr-xr-xr-x.  2 root root 4096 Oct 14 07:11 proc
    drwxr-xr-x. 12 root root 4096 Oct 14 07:11 usr
    drwxr-xr-x.  8 root root 4096 Oct 14 07:11 run
    drwxr-xr-x. 18 root root 4096 Oct 14 07:11 var
    dr-xr-x---.  2 root root 4096 Oct 14 07:12 root
    drwxr-xr-x.  2 root root 4096 Oct 14 07:12 selinux
    drwxr-xr-x. 19 root root 4096 Oct 14 07:15 .
    drwxr-xr-x.  4 root root 4096 Oct 14 13:41 ..
    drwxr-xr-x.  3 root root 4096 Oct 14 13:42 dev
    drwxr-xr-x. 63 root root 4096 Oct 14 13:42 etc
    
  • Change the LXC container path – to change the folder containing both the configuration and the container’s root filesystem, use “-P”:
    lxc-create -P /mnt/storage/servers/ --template download -n mycontainer -- --dist centos --release 9-Stream --arch amd64
    

    All the LXC container configuration and the root filesystem will be placed under /mnt/storage/servers/[container_name], which in the example above is /mnt/storage/servers/mycontainer.

    [root@srv ~]# ls -al /mnt/storage/servers/mycontainer
    total 16
    drwxrwx---.  3 root root 4096 Oct 14 13:38 .
    drwxr-xr-x.  4 root root 4096 Oct 14 13:41 ..
    -rw-r-----.  1 root root  780 Oct 14 13:38 config
    drwxr-xr-x. 19 root root 4096 Oct 14 07:15 rootfs
    

It is better to use “-P” and change the whole LXC container location rather than only the root filesystem path. In this case, a good practice is to make a symbolic link in /var/lib/lxc/[container-name] pointing to the new location:

ln -s /mnt/storage/servers/mycontainer /var/lib/lxc/mycontainer

So all LXC tools will continue to work without explicitly adding an option for the new path of this container.
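
With the symlink in place, a quick check should show the container as if it lived in the default location (a sketch; lxc-ls is part of the lxc package):

# List all containers with state details; mycontainer should be visible
lxc-ls -f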

Change the SELinux file context of the LXC root filesystem to container_var_lib_t

Add the file context container_var_lib_t to the container’s root filesystem path and change the SELinux labels.
First, verify all the needed tools are installed:

dnf install -y policycoreutils-python-utils container-selinux

Then, add a new file context for the path /mnt/storage/servers/mycontainer and run restorecon to change the SELinux labels to container_var_lib_t:

semanage fcontext -a -t container_var_lib_t '/mnt/storage/servers/mycontainer(/.*)?'
restorecon -Rv /mnt/storage/servers/mycontainer

The file context may be shown with:

[root@srv ~]# ls -alZ /mnt/storage/servers/mycontainer
total 16
drwxrwx---.  3 root root unconfined_u:object_r:container_var_lib_t:s0 4096 Oct 14 13:38 .
drwxr-xr-x.  4 root root unconfined_u:object_r:mnt_t:s0               4096 Oct 14 13:41 ..
-rw-r-----.  1 root root unconfined_u:object_r:container_var_lib_t:s0  780 Oct 14 13:38 config
drwxr-xr-x. 19 root root unconfined_u:object_r:container_var_lib_t:s0 4096 Oct 14 07:15 rootfs

Failing to set the proper SELinux labels may result in errors such as lxc_attach_run_shell: 1333 Permission denied – failed to exec shell.
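
If such errors appear, they are most likely SELinux denials. Assuming auditd is running (the CentOS default), the denials may be inspected with:

# Show recent AVC (SELinux) denials from the audit log
ausearch -m avc -ts recent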

Install and use collectd-ping under CentOS 8 to monitor latency

Tracking the network latency of the servers’ network is not an easy job. Most monitoring software is capable of monitoring the state of the server, but how to monitor the state of the connectivity, the network latency, and even the Internet connectivity against well-known addresses like 1.1.1.1 or 8.8.8.8? It should be easy with ICMP and the ping command, and indeed the collectd daemon with its collectd-ping plugin – https://collectd.org/wiki/index.php/Plugin:Ping – can save all the history in a time-series back-end, while Grafana – https://grafana.com/ – (or other graphing software) can draw the graphs.
Using the collectd-ping plugin in conjunction with Grafana may achieve an effect similar to the good old smokeping.
CentOS 7 included the collectd-ping plugin in its official repository, but in CentOS 8 the plugin is missing! Under CentOS 8, the CentOS SIG OpsTools – https://wiki.centos.org/SpecialInterestGroup/OpsTools – includes the collectd-ping plugin in its repository. More on the SIG and OpsTools may be found on that page. In general, it is safe to use this repository; it will not break the user’s system.
Here is how to install and configure it. Real grafana examples are also included at the end.

The example here assumes there is a Grafana server installed with an InfluxDB back-end.

STEP 1) Add OpsTools repository and install the collectd and collectd-ping.

The OpsTools repository is installed with the centos-release-opstools package.
Here is what is going to be installed:

dnf install -y centos-release-opstools
dnf install -y collectd collectd-ping

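The full configuration follows later in the article; for orientation, a minimal sketch of the plugin configuration might look like this (the file path is the standard collectd include directory; the hosts and interval are assumptions):

# Write a minimal ping plugin configuration and restart collectd
cat > /etc/collectd.d/ping.conf <<'EOF'
LoadPlugin ping
<Plugin ping>
        Host "1.1.1.1"
        Host "8.8.8.8"
        Interval 1.0
</Plugin>
EOF
systemctl restart collectd
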
Keep on reading!

MPEG-DASH and ClearKey, CENC drm encryption with Nginx, bento4 and dashjs under CentOS 8

The purpose of this article is to demonstrate a simple and plain example of ClearKey DRM encryption using a DASH stream.
Usually, ClearKey is used only for testing the encryption key and the DRM setup, because the decryption key is transferred to the browser in plain text. In simple DRM words, the key is transferred in plain text, and the decryption is not handled by some proprietary module such as a CDM – Content Decryption Module. The CDM is a proprietary module in the browsers or the players, which works like a black box when handling the decryption key. The most popular DRMs are Google’s Widevine, Apple’s FairPlay, and Microsoft’s PlayReady, which work through a proprietary module – a CDM (Content Decryption Module) – in the browser (or the OS and player).
All three DRMs work in basically the same way:

  • There is an (encryption) key and an (encryption) keyID, whose purpose is to identify the (encryption) key.
  • The video file is encrypted with the key and it includes the keyID.
  • The client needs to have the appropriate CDM (Content Decryption Module) to decrypt the video.
  • The clients receive a license from a license server, which is encrypted data for the CDM on how to decrypt the video identified by the keyID. In fact, the client sends the keyID and receives the proper license (i.e. license binary data) for this keyID. That’s why the keyID is included in the encrypted video. Bear in mind, the CDM is a proprietary Content Decryption Module offered by the creator of the DRM – Google, Apple, Microsoft or another – and it lives in the browser (OS or player). All popular browsers support at least one of the proprietary DRMs.

ClearKey works like the proprietary DRM schemes, but without the CDM (Content Decryption Module).

The “org.w3.clearkey” Key System uses plain-text clear (unencrypted) key(s) to decrypt the source. No additional client-side content protection is required.

So, in general, there is no need for a license server when using ClearKey DRM.
Of course, an additional attempt to hide the plain-text key could be made using an extension to the client’s player, such as JavaScript modules. In general, this approach is perceived as less secure, because it is much easier to debug JavaScript code on the client side. More on ClearKey: https://www.w3.org/TR/encrypted-media/#clear-key

Here are all the steps from the server till the client to use ClearKey.

STEP 1) Download and install bento4 software.

bento4 is an open-source toolkit for manipulating some of the most common video formats – MP4 and DASH/HLS/CMAF media. The download page is https://www.bento4.com/downloads/ and the Linux binary for the latest stable version is https://www.bok.net/Bento4/binaries/Bento4-SDK-1-6-0-639.x86_64-unknown-linux.zip. There is also a source code snapshot link.
Download the famous Blender video for the demonstration: https://download.blender.org/demo/movies/BBB/bbb_sunflower_1080p_30fps_normal.mp4
Download and unpack the binary Bento4-SDK-1-6-0-639.x86_64-unknown-linux.zip.
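
The exact packaging steps follow later in the article; for orientation, a rough sketch of the encryption step with the unpacked SDK might look like this (the KID:KEY pair below is a made-up example – generate real 16-byte hex values):

# mp4dash requires a fragmented MP4, so fragment the source first
./Bento4-SDK-1-6-0-639.x86_64-unknown-linux/bin/mp4fragment bbb_sunflower_1080p_30fps_normal.mp4 bbb_fragmented.mp4
# Package as MPEG-DASH with CENC encryption (format: --encryption-key=KID:KEY)
./Bento4-SDK-1-6-0-639.x86_64-unknown-linux/bin/mp4dash --encryption-key=121a0fca0f1b475b8910297fa8e0a07e:43215678901234567890123456789012 -o dash-output bbb_fragmented.mp4
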
Keep on reading!

Install newer version of python 3.10 under CentOS 8

At present, the default version of Python under CentOS 8 is Python 3.6.8, which is 6 years old. More and more Python software needs newer versions, so it is vital for a pretty stable Linux distro to have an easy way to install newer versions of programming languages like Python!
Using Conda, it is really easy to manage different environments for different Python versions!

Conda is an open source package management system and environment management system that runs on Windows, macOS and Linux.

More on Conda – Installing conda command line in various systems with miniconda and create a simple python environment – and all the Conda tags: https://ahelpme.com/category/software/anaconda/. This article is not intended to introduce the reader to Conda, but to show how easy it is to install a newer version of Python, 3.10, under CentOS 8 – and it is easy because of the Conda package management system!

To summarize, the purpose is to have a user with Python 3.10. The user can be an ordinary or an administrative one, or even root.
Using this method, older or newer versions of Python may be installed on the same machine (at the same time).

STEP 1) Install the latest Miniconda3

The installation is easy and for more details check out the first link above.
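
For reference, a minimal sketch of the whole procedure (the installer URL is the official Miniconda3 one; the prefix path and environment name are assumptions):

# Download and install Miniconda3 silently into the user's home directory
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p "$HOME/miniconda3"
# Activate the base environment, then create and use a Python 3.10 environment
source "$HOME/miniconda3/bin/activate"
conda create -y -n py310 python=3.10
conda activate py310
python --version
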
Keep on reading!

How to run QEMU full virtualization with MacVTap networking using NetworkManager under CentOS 8

In addition to the previously presented article on the subject – Howto do QEMU full virtualization with MacVTap networking – this one shows how to run a QEMU virtual machine with a MacVTap device in bridge mode on the host server, configured only by using the NetworkManager CLI – nmcli.

It is worth mentioning that MacVTap is a virtual bridge, which makes the host and the guest device show up directly on the host switch. So when using QEMU, the guest virtualized system behaves as if it is connected to the host switch, with one limitation – the host and the guest cannot communicate with each other. The IPs of the host won’t be reachable from the guest, so NAT (masquerade) between the host and the guest is not possible with this setup. Still, if the NAT server is on another server, or a real IP is planned for the guest, MacVTap is the right functionality to use with the QEMU guest system.

Summary

  1. Add MacVTap device in bridge mode with name macvtap0.
  2. Install QEMU.
  3. Create QEMU local disk.
  4. Run a QEMU virtual server.

STEP 1) Add MacVTap device in bridge mode with name macvtap0

[root@srv ~]# nmcli connection add type macvlan dev enp0s3 mode bridge tap yes ifname macvtap0 con-name macvtap0 ip4 0.0.0.0/24
Connection 'macvtap0' (7a5ef04c-ea98-4642-ac5d-4239f715f631) successfully added.
[root@srv ~]# nmcli con
NAME      UUID                                  TYPE      DEVICE   
enp0s3    09497bbf-da59-42b7-a72c-d69369760b36  ethernet  enp0s3   
macvtap0  7a5ef04c-ea98-4642-ac5d-4239f715f631  macvlan   macvtap0 

First, create a MacVTap device with the name macvtap0 in bridge mode with the network interface enp0s3, and a connection with the name macvtap0. The IP is set to manual mode.
More detailed information on how to create and add a MacVTap device with the NetworkManager is available here – Create MacVTap device using NetworkManager nmcli under CentOS 8

STEP 2) Install QEMU.

Install the QEMU virtual tools under CentOS 8 Stream. At present, the QEMU version is 6.2, which is pretty new.
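
For orientation, here is a minimal sketch of the later boot step with the macvtap0 device created in STEP 1 (the disk image name and memory/CPU sizes are assumptions; on CentOS 8 the binary may be /usr/libexec/qemu-kvm instead of qemu-system-x86_64):

# Read the MAC address and the character device number of macvtap0
MAC=$(cat /sys/class/net/macvtap0/address)
TAPNUM=$(cat /sys/class/net/macvtap0/ifindex)
# Boot the guest, passing the macvtap character device as file descriptor 3;
# the guest NIC must reuse the MAC address of the macvtap device
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
    -drive file=./centos-guest.qcow2,format=qcow2 \
    -device virtio-net-pci,netdev=net0,mac="$MAC" \
    -netdev tap,id=net0,fd=3 3<>/dev/tap"$TAPNUM"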
Keep on reading!

Create MacVTap device using NetworkManager nmcli under CentOS 8

In continuation of the NetworkManager management with nmcli, here is a quick Linux console tip for CentOS 8 (or any distribution which uses NetworkManager for managing the networking): how to create a virtualized bridge device – a MacVTap device – with the NetworkManager nmcli command utility, which will preserve all the configuration over reboots.

nmcli connection add type macvlan dev enp0s3 mode bridge tap yes ifname macvtap0 con-name macvtap0 ip4 0.0.0.0/24

The line above creates a virtualized bridged interface and a connection with the name macvtap0. The MacVTap device with the name macvtap0 is in bridge mode with the physical network interface enp0s3 and a manual IP setting. If the IP is not included, DHCP will be used by default.

There is one big limitation – there is no link between enp0s3 and macvtap0. macvtap0 can receive packets from the network through enp0s3, but there is no direct link between the two network devices. In simple words, when used in a virtualized environment, the virtual machine may have access to the network shared with enp0s3, but the virtual machine cannot communicate with the IPs of enp0s3!

Typically, this is used to make both the guest and the host show up directly on the switch that the host is connected to.

Linux Virtualization, https://virt.kernelnewbies.org/MacVTap

Initial state, only one connection in NetworkManager.

The main server connection, named enp0s3, uses the network interface with the same name, enp0s3:

[root@srv ~]# nmcli con
NAME    UUID                                  TYPE      DEVICE 
enp0s3  09497bbf-da59-42b7-a72c-d69369760b36  ethernet  enp0s3
[root@srv ~]# nmcli 
enp0s3: connected to enp0s3
        "Intel 82540EM"
        ethernet (e1000), 08:00:27:03:C9:2E, hw, mtu 1500
        ip4 default
        inet4 192.168.0.20/24
        route4 192.168.0.0/24 metric 100
        route4 0.0.0.0/0 via 192.168.0.1 metric 100
        inet6 fe80::a00:27ff:fe03:c92e/64
        route6 fe80::/64 metric 100

lo: unmanaged
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
        servers: 8.8.8.8 1.1.1.1
        interface: enp0s3

Use "nmcli device show" to get complete information about known devices and
"nmcli connection show" to get an overview on active connection profiles.

Consult nmcli(1) and nmcli-examples(7) manual pages for complete usage details.

Add the MacVTap device with the name macvtap0

[root@srv ~]# nmcli connection add type macvlan dev enp0s3 mode bridge tap yes ifname macvtap0 con-name macvtap0 ip4 0.0.0.0/24
Connection 'macvtap0' (7a5ef04c-ea98-4642-ac5d-4239f715f631) successfully added.

A MacVTap device, a network connection, and a link are established. The name of both the MacVTap device and the network connection is macvtap0.

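A quick verification of the result (a sketch; the UUIDs and addresses will differ per system):

# Show the active connection profile and the kernel's view of the device
nmcli con show macvtap0
ip -d link show macvtap0
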
Keep on reading!

How to run QEMU full virtualization with bridged networking using NetworkManager under CentOS 8

In addition to the previously presented article on the subject – Howto do QEMU full virtualization with bridged networking – this one shows how to run a QEMU virtual machine with bridged networking on the host server, configured only by using the NetworkManager CLI – nmcli.

It is worth mentioning that the bridge interface presented in this article is a local bridge device for the server, and no Internet addresses or real (or main, Internet-connected) network cards are bound to it. So no MAC addresses of the slaved bridged devices will leave the server.
If a network bridge which includes the Internet (main) server network device is needed – for example, to set real IPs in a virtual machine – there is another article on the bridge networking subject – Replace current interface configuration with a bridge device using nmcli (NetworkManager)

Summary

  1. Add bridge and TUN/TAP device.
  2. Install QEMU.
  3. Create QEMU local disk.
  4. Run a QEMU virtual server.

STEP 1) Add bridge and TUN/TAP device.

[root@srv ~]# nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "192.168.0.1/24"
Connection 'br0' (ad6878c8-1e06-4af8-a81f-1eb39e761df8) successfully added.
[root@srv ~]# nmcli connection up br0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@srv ~]# nmcli connection add type tun ifname tap0 con-name tap0 mode tap owner 0 ip4 0.0.0.0/24
Connection 'tap0' (dacee2be-a14b-4cf5-83d4-96d072a96725) successfully added.
[root@srv ~]# nmcli con add type bridge-slave ifname tap0 master br0
Connection 'bridge-slave-tap0' (66490382-b239-4eb2-ae1d-ee811e39596c) successfully added.
[root@srv ~]# nmcli con
NAME               UUID                                  TYPE      DEVICE 
System eno1        abf4c85b-57cc-4484-4fa9-b4a71689c359  ethernet  eno1   
br0                ad6878c8-1e06-4af8-a81f-1eb39e761df8  bridge    br0    
tap0               dacee2be-a14b-4cf5-83d4-96d072a96725  tun       tap0   
bridge-slave-tap0  66490382-b239-4eb2-ae1d-ee811e39596c  ethernet  -- 

First, a bridge device is added with a manual IP. If the IP is skipped, the bridge interface br0 would have DHCP enabled by default, which may not be desired.
More detailed information on how to create and add a TUN/TAP device with the NetworkManager is available here – Create bridge and add TUN/TAP device using NetworkManager nmcli under CentOS 8

STEP 2) Install QEMU.

Install the QEMU virtual tools under CentOS 8 Stream. At present, the QEMU version is 6.2, which is pretty new.
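
As a preview of the last step, a rough sketch of attaching the guest to the already created tap0 device might look like this (the disk image name and memory/CPU sizes are assumptions; on CentOS 8 the binary may be /usr/libexec/qemu-kvm instead of qemu-system-x86_64):

# Boot the guest attached to the existing tap0; script=no/downscript=no
# because NetworkManager, not QEMU, manages the tap device
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
    -drive file=./centos-guest.qcow2,format=qcow2 \
    -device virtio-net-pci,netdev=net0 \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no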
Keep on reading!

rsync server under CentOS 8 with SELinux enabled

Here is a quick and useful tip on how to run an rsync daemon under CentOS 8 with SELinux in Enforcing mode.
There are three basic steps:

  1. rsync daemon installation and configuration.
  2. firewall configuration.
  3. SELinux configuration.

STEP 1) rsync daemon installation and configuration.

Under CentOS 8, the rsync daemon files are in a separate rpm package, rsync-daemon (more on the subject – rsync daemon in CentOS 8):

[root@srv ~]# dnf install -y rsync-daemon
Last metadata expiration check: 2:45:48 ago on Thu Apr  7 07:40:42 2022.
Dependencies resolved.
==============================================================================================================
 Package                     Architecture          Version                        Repository             Size
==============================================================================================================
Installing:
 rsync-daemon                noarch                3.1.3-14.el8                   baseos                 43 k

Transaction Summary
==============================================================================================================
Install  1 Package

Total download size: 43 k
Installed size: 17 k
Downloading Packages:
rsync-daemon-3.1.3-14.el8.noarch.rpm                                           98 kB/s |  43 kB     00:00    
--------------------------------------------------------------------------------------------------------------
Total                                                                          81 kB/s |  43 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                      1/1 
  Installing       : rsync-daemon-3.1.3-14.el8.noarch                                                     1/1 
  Running scriptlet: rsync-daemon-3.1.3-14.el8.noarch                                                     1/1 
  Verifying        : rsync-daemon-3.1.3-14.el8.noarch                                                     1/1 

Installed:
  rsync-daemon-3.1.3-14.el8.noarch                                                                            

Complete!

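The remaining two steps are detailed further in the article; for orientation, here is a rough sketch of a minimal setup (the module name and path are examples only):

# A minimal /etc/rsyncd.conf with one module, then enable the daemon
cat > /etc/rsyncd.conf <<'EOF'
[backups]
path = /mnt/storage/backups
read only = no
EOF
systemctl enable --now rsyncd
# Firewall: allow the rsyncd service (TCP port 873)
firewall-cmd --permanent --add-service=rsyncd
firewall-cmd --reload
# SELinux: allow the rsync daemon full access to the shared files
setsebool -P rsync_full_access on
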
Keep on reading!

Run LXC CentOS 8 container with bridged network under CentOS 8

The LXC container software comes to CentOS 8 with the EPEL 8 repository. LXC is a multi-process container, which offers booting a whole Linux distribution under container isolation. It is very similar to systemd-nspawn and a bit different from docker containers. LXC containers are used when multiple processes are needed under one container only. In most cases, the LXC container is a fully-featured Linux distribution (systemd or SysV, i.e. init) booted under a Linux container.
There are several major differences between docker/podman containers and LXC:

  • Multiple processes.
  • Easy configuration modification. Even hot-plugin supported.
  • Unprivileged Linux containers.
  • Complex network setups. Multiple network interfaces connected to different networks, for example.
  • Live systemd, i.e. systemd or SysV init is booted as usual. Much software relies on systemd/udev features, and in many cases it is really hard to run software without a systemd or init process.

Here are the steps to boot a CentOS 8 container under a CentOS 8 host server:

STEP 1) Install EPEL repository.

EPEL CentOS 8 repository now includes LXC 3.0 software.

dnf install -y epel-release

STEP 2) Install LXC software and start LXC service.

At present, the LXC software version is 3.0.4. The package lxc-templates includes template scripts to create a Linux distribution environment like CentOS, Ubuntu, Debian, Gentoo, ArchLinux, Oracle, Alpine, and many others and it also includes the configuration templates to start these Linux distributions.

dnf install -y lxc lxc-templates
dnf install -y wget tar

wget and tar are required if an installation via the LXC templates is going to be performed.

STEP 3) Create a CentOS 8 container with the help of LXC templates and run it.

Use the lxc-templates to prepare a CentOS 8 container environment. The currently available containers are listed at http://images.linuxcontainers.org/. Check out the URL and choose the right container. Here, CentOS 8 amd64 is used.

lxc-create --template download -n mycontainer -- --dist centos --release 8 --arch amd64 --keyserver hkp://keyserver.ubuntu.com

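Once created, the container can be started and entered (lxc-start, lxc-attach and lxc-info are part of the lxc package):

# Start the container in the background, then attach a shell to it
lxc-start -n mycontainer
lxc-attach -n mycontainer
# Show the container state and IPs
lxc-info -n mycontainer
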
Keep on reading!

Replace current interface configuration with a bridge device using nmcli (NetworkManager)

This article shows how the primary network interface can be replaced by a bridge device, with the network interface becoming part of the bridge as a slave device, without a reboot or restart of the server – using nmcli under CentOS 8 (and probably any other Linux distribution which uses NetworkManager to configure network devices, like Ubuntu).
The main steps are:

  1. Create a connection profile of a bridge device.
  2. Set the same network configuration as the primary network to the bridge device.
  3. Create a connection profile for the primary interface device as a slave network device to the newly created bridge.
  4. Delete the current primary connection, which is using the primary network device and configuration.
  5. Reload the bridge connection profile for it to take effect. The bridge device will actually begin to work.

The main goal is not to reboot the server or lose the connection to the server. The primary network interface is the only connection on the server, and losing it would make the server unreachable. So the last two steps should be performed in the background, in a script, or in a detached terminal (like screen).
Here are all the commands in one place:

nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "192.168.0.20/24" ipv4.gateway "192.168.0.1" ipv4.dns "8.8.8.8 1.1.1.1"
nmcli con add type bridge-slave ifname enp0s3 master br0
nmcli con del "enp0s3"; nmcli con reload "br0" &
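
To lower the risk of a dropped SSH session interrupting the switch-over, the last line can also be run inside a detached screen session – a sketch (tmux or nohup would work, too):

# Run the deletion and reload detached from the current terminal
screen -dmS bridge-switch bash -c 'nmcli con del "enp0s3"; nmcli con reload "br0"'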

Here is the detailed information for the above commands:
Keep on reading!