The LXC container software comes to CentOS 8 with the EPEL 8 repository. LXC is a multi-process container, which can boot a whole Linux distribution under container isolation. It is very similar to systemd-nspawn and somewhat different from docker containers. LXC containers are used when multiple processes are needed inside a single container. In most cases, an LXC container is a fully-featured Linux distribution (systemd or SysV, i.e. init) booted inside a Linux container.
There are several major differences between docker/podman containers and LXC:
- Easy configuration modification. Even hot-plugging of devices is supported.
- Unprivileged Linux containers.
- Complex network setups. Multiple network interfaces connected to different networks, for example (see the configuration sketch after this list).
- Live systemd, i.e. systemd or SysV init is booted as usual. Much software relies on systemd/udev features and in many cases it is really hard to run it without a systemd or init process.
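To illustrate the configuration flexibility, here is a minimal sketch of container configuration lines (the bridge names br0 and br1 and the id map range are example values, not taken from this setup) that attach two network interfaces to different bridges and map the container to an unprivileged user range:
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up
lxc.net.1.type = veth
lxc.net.1.link = br1
lxc.net.1.flags = up
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536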
Here are the steps to boot a CentOS 8 container under a CentOS 8 host server:
STEP 1) Install EPEL repository.
The EPEL repository for CentOS 8 now includes the LXC 3.0 software.
dnf install -y epel-release
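A quick check that the EPEL repository is now enabled:
dnf repolist | grep -i epel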
STEP 2) Install LXC software and start LXC service.
At present, the LXC software version is 3.0.4. The package lxc-templates includes template scripts to create a Linux distribution environment like CentOS, Ubuntu, Debian, Gentoo, ArchLinux, Oracle, Alpine, and many others, and it also includes the configuration templates to start these Linux distributions.
dnf install -y lxc lxc-templates
dnf install -y wget tar
The wget and tar tools are required if a container is going to be created with the LXC template scripts.
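The LXC service can now be enabled and started so that containers marked for autostart come up on boot. The unit name lxc.service is the upstream default and is assumed to be what the EPEL package ships; the template scripts from lxc-templates land in the usual /usr/share/lxc/templates/ directory:
systemctl enable --now lxc.service
ls /usr/share/lxc/templates/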
STEP 3) Create a CentOS 8 container with the help of LXC templates and run it.
Use the lxc-templates to prepare a CentOS 8 container environment. The currently available container images are listed at http://images.linuxcontainers.org/. Check out the URL and choose the right image. Here the CentOS 8 amd64 image is used.
lxc-create --template download -n mycontainer -- --dist centos --release 8 --arch amd64 --keyserver hkp://keyserver.ubuntu.com
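Once the image is downloaded, the container can be started and entered with the standard LXC tools (a short sketch; mycontainer is the name used above, and lxc-start detaches by default in LXC 3.x):
lxc-start -n mycontainer
lxc-ls -f
lxc-attach -n mycontainer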
Keep on reading!
This article shows how the primary network interface can be replaced by a bridge device, with the network interface becoming part of the bridge as a slave device, without a reboot or a restart of the server, using nmcli under CentOS 8 (and probably any other Linux distribution, such as Ubuntu, that uses NetworkManager to configure network devices).
The main steps are:
- Create a connection profile for a bridge device.
- Set the same network configuration as the primary interface on the bridge device.
- Create a connection profile for the primary interface device as a slave network device of the newly created bridge.
- Delete the current primary connection, which is using the primary network device and configuration.
- Reload the bridge connection profile for the changes to take effect. The bridge device will actually begin to work.
The main goal is not to reboot the server or lose the connection to it. The primary network interface is the only connection to the server and losing it would make the server unreachable. So the last two steps should be performed in the background, in a script, or in a detached terminal (like screen).
Here are all the commands in one place:
nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "192.168.0.20/24" ipv4.gateway "192.168.0.1" ipv4.dns "8.8.8.8 8.8.4.4"
nmcli con add type bridge-slave ifname enp0s3 master br0
nmcli con del "enp0s3"; nmcli con reload "br0" &
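Optionally, once the bridge is up, the result can be verified with a few standard commands, checking that br0 carries the IP configuration and that the physical interface is enslaved:
nmcli con show --active
ip addr show br0
bridge link show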
Here is the detailed information for the above commands:
Keep on reading!
This article shows what files to add if you want to add a bonding interface under CentOS 8 without using the NetworkManager command-line utility (nmcli).
Our goal is to use one bonding group with the name bond0 in LACP (aka 802.3ad) mode (but it could be any of the other types) with two 10Gbps network interfaces. The setup presented here uses NetworkManager, which handles the loading of the bonding module properly.
In fact, the network-scripts are now deprecated and missing from the system (they still exist in the additional package – “network-scripts” – but who knows until when? Do not rely on them!).
The configuration files use the same syntax as under CentOS 7, but this time NetworkManager parses them. The ifup and ifdown commands still exist and they just call NetworkManager when executed (unless the “network-scripts” package is installed). If you need to enable bonding without any configuration files (for emergency situations) you may still use – How to enable Linux bonding without ifenslave
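For reference, a very condensed sketch of that emergency, no-configuration-file approach via the kernel's sysfs bonding interface (the slave names eth0 and eth1 are examples; the mode must be set before any slave is added and the slave interfaces must be down when they are enslaved):
modprobe bonding
echo 802.3ad > /sys/class/net/bond0/bonding/mode
ip link set eth0 down
echo +eth0 > /sys/class/net/bond0/bonding/slaves
ip link set eth1 down
echo +eth1 > /sys/class/net/bond0/bonding/slaves
ip link set bond0 up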
What do you need?
STEP 1) Configure the bonding device
The bonding interface’s name will be bond0 and the configuration will be located in /etc/sysconfig/network-scripts/ifcfg-bond0
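A minimal sketch of what ifcfg-bond0 might contain for the LACP mode described above (the IP addressing is just an example and the BONDING_OPTS values are typical ones, not necessarily the exact ones from this setup):
DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.0.20
PREFIX=24
GATEWAY=192.168.0.1
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"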
Keep on reading!