Run LXC Ubuntu 22.04 LTS container with bridged network under CentOS Stream 9

In continuation of the previous article Run LXC CentOS Stream 9 container with bridged network under CentOS Stream 9, this time the LXC container will be Ubuntu 22.04 LTS Jammy Jellyfish.
For a better understanding of why to use LXC, or for more detailed information on some of the steps in this article, it is better to visit the previously mentioned article and the original Run LXC CentOS 8 container with bridged network under CentOS 8.

STEP 1) Install the needed software: the EPEL repository, LXC and its dependencies

To install the LXC software, the EPEL CentOS Stream 9 repository must be installed. At present, the LXC version included in the CentOS Stream 9 EPEL repository is 4.0.

dnf install -y epel-release
dnf install -y lxc lxc-templates container-selinux
dnf install -y wget tar

lxc-templates uses the “download” template to download different Linux distribution images from http://images.linuxcontainers.org/, which now redirects to http://uk.lxd.images.canonical.com/ (an Ubuntu lxd images mirror).
The container-selinux package should be installed only if the host, i.e. the CentOS Stream 9 install, has SELinux enabled. The package offers additional SELinux rules for LXC and LXC tools like lxc-attach and more.
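
A quick way to check the host's SELinux state before deciding on container-selinux (getenforce comes with the libselinux-utils package) is, for example:

getenforce

If it prints Enforcing or Permissive, install container-selinux; if it prints Disabled, the package may be skipped.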

STEP 2) Create an Ubuntu 22.04 LTS container with the help of LXC templates

[root@srv ~]# lxc-create --template download -n mycontainer -- --dist ubuntu --release jammy --arch amd64

In addition, there is a “--variant” option along with “--dist” and “--release” to specify which variant to install – default, cloud, desktop or other. There is a variant column in the table on the images’ page mentioned above.
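
For example, here is a minimal sketch of creating the Jammy container with an explicit variant and then starting and entering it; the variant should be one actually listed for ubuntu/jammy on the images page:

lxc-create --template download -n mycontainer -- --dist ubuntu --release jammy --arch amd64 --variant default
lxc-start -n mycontainer
lxc-attach -n mycontainer
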
Keep on reading!

Run LXC CentOS Stream 9 container with bridged network under CentOS Stream 9

In continuation of the previous article with CentOS 8 – Run LXC CentOS 8 container with bridged network under CentOS 8, here is an updated version with CentOS Stream 9 running an LXC container. In this case, the LXC container is CentOS Stream 9, too.
Under CentOS 8, the LXC software is from branch 3.x, but in CentOS Stream 9 the LXC is 4.x and there are some differences in the LXC configuration file.
It’s worth mentioning the differences between docker/podman containers and LXC from the previous article:

  • Multiprocesses.
  • Easy configuration modification. Even hot-plugging is supported.
  • Unprivileged Linux containers.
  • Complex network setups. Multiple network interfaces connected to different networks, for example.
  • Live systemd, i.e. systemd or SysV init is booted as usual. Much of the software relies on systemd/udev features and in many cases, it is really hard to run software without a systemd or init process.

Here are the steps to boot a CentOS Stream 9 container under CentOS Stream 9 host server:

STEP 1) Install EPEL repository.

EPEL CentOS Stream 9 repository now includes LXC 4.0 software.

dnf install -y epel-release

STEP 2) Install LXC software and start LXC service.

At present, the LXC software version is 4.0.12. The package lxc-templates includes template scripts to create a Linux distribution environment like CentOS, Ubuntu, Debian, Gentoo, ArchLinux, Oracle, Alpine, and many others and it also includes the configuration templates to start these Linux distributions. In fact, lxc-templates now includes a download script to download images from the Internet.

dnf install -y lxc lxc-templates container-selinux
dnf install -y wget tar

The wget and tar packages are required if the LXC template installation (downloading an image) is going to be performed.
There is an additional package for containers' SELinux support – container-selinux – which should be installed before starting the LXC service; otherwise, some of the SELinux rules may not be applied in the system. If SELinux is disabled, the installation of the container-selinux package may be skipped.
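
After the packages are installed, the LXC service can be enabled and started; this is a sketch assuming the EPEL lxc package ships the usual lxc.service systemd unit:

systemctl enable --now lxc.service
systemctl status lxc.service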

STEP 3) Create a CentOS Stream 9 container with the help of LXC templates and run it.

Use the lxc-templates to prepare a CentOS Stream 9 container environment. The currently available containers are listed here http://images.linuxcontainers.org/, which now redirects to http://uk.lxd.images.canonical.com/ (an Ubuntu lxd images mirror). Check out the URL and choose the right container. Here the CentOS Stream 9 amd64, i.e. release 9-Stream, is used.
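
To check the available distributions, releases and variants from the command line instead of the web page, running the download template without the --dist/--release/--arch arguments enters an interactive mode, which prints the same image table and prompts for the values:

lxc-create --template download -n mycontainer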

[root@srv ~]# lxc-create --template download -n mycontainer -- --dist centos --release 9-Stream --arch amd64

In addition, there is a “--variant” option along with “--dist” and “--release” to specify which variant to install – default, cloud, desktop or other. There is a variant column in the table on the images’ page mentioned above.
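
As a hedged sketch only, requesting the cloud variant would look like this (use it only if a cloud image is actually listed for centos 9-Stream on the images page; the name mycloud is just an example):

lxc-create --template download -n mycloud -- --dist centos --release 9-Stream --arch amd64 --variant cloud
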
Keep on reading!

How to run QEMU full virtualization with bridged networking using NetworkManager under CentOS 8

In addition to the previously presented article on the subject, Howto do QEMU full virtualization with bridged networking, this one shows how to run a QEMU virtual machine with bridged networking on the host server, configured only by using the NetworkManager CLI – nmcli.

It is worth mentioning that the bridge interface presented in this article is a local bridge device for the server and no Internet addresses or real (i.e. main, Internet-connected) network cards are bound to it. So no MAC addresses of the enslaved devices will leave the server.
If a network bridge that includes the Internet (main) server network device is needed, for example, to set real IPs in a virtual machine, there is another article on the bridge networking subject – Replace current interface configuration with a bridge device using nmcli (NetworkManager)

Summary

  1. Add bridge and TUN/TAP device.
  2. Install QEMU.
  3. Create QEMU local disk.
  4. Run a QEMU virtual server.

STEP 1) Add bridge and TUN/TAP device.

[root@srv ~]# nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "192.168.0.1/24"
Connection 'br0' (ad6878c8-1e06-4af8-a81f-1eb39e761df8) successfully added.
[root@srv ~]# nmcli connection up br0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@srv ~]# nmcli connection add type tun ifname tap0 con-name tap0 mode tap owner 0 ip4 0.0.0.0/24
Connection 'tap0' (dacee2be-a14b-4cf5-83d4-96d072a96725) successfully added.
[root@srv ~]# nmcli con add type bridge-slave ifname tap0 master br0
Connection 'bridge-slave-tap0' (66490382-b239-4eb2-ae1d-ee811e39596c) successfully added.
[root@srv ~]# nmcli con
NAME               UUID                                  TYPE      DEVICE 
System eno1        abf4c85b-57cc-4484-4fa9-b4a71689c359  ethernet  eno1   
br0                ad6878c8-1e06-4af8-a81f-1eb39e761df8  bridge    br0    
tap0               dacee2be-a14b-4cf5-83d4-96d072a96725  tun       tap0   
bridge-slave-tap0  66490382-b239-4eb2-ae1d-ee811e39596c  ethernet  -- 

First, a bridge device is added with a manual IP. If the IP is skipped, the bridge interface br0 would have DHCP enabled by default, which may not be desired.
More detailed information on how to create and add a TUN/TAP device with NetworkManager is available here – Create bridge and add TUN/TAP device using NetworkManager nmcli under CentOS 8
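
Before starting a virtual machine, it may be useful to inspect the bridge and which devices are enslaved to it, for example with the standard iproute2 and NetworkManager tools:

ip -d link show br0
bridge link show
nmcli -f NAME,TYPE,DEVICE connection show --active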

STEP 2) Install QEMU.

Install the QEMU virtualization tools under CentOS Stream 8. At present, the QEMU version is 6.2, which is pretty new.
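
A minimal sketch of the installation itself, assuming the stock CentOS Stream 8 AppStream packages (package names may differ on other setups):

dnf install -y qemu-kvm qemu-img
/usr/libexec/qemu-kvm --version
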
Keep on reading!

Run LXC CentOS 8 container with bridged network under CentOS 8

The LXC container software comes to CentOS 8 with the EPEL 8 repository. LXC is a multi-process container, which offers to boot a Linux distribution under container isolation. It is very similar to systemd-nspawn and a bit different from docker containers. LXC containers are used when multiple processes are needed in a single container. In most cases, the LXC container is a fully-featured Linux distribution (systemd or SysV, i.e. init) booted under a Linux container.
There are several major differences between docker/podman containers and LXC:

  • Multiprocesses.
  • Easy configuration modification. Even hot-plugging is supported.
  • Unprivileged Linux containers.
  • Complex network setups. Multiple network interfaces connected to different networks, for example.
  • Live systemd, i.e. systemd or SysV init is booted as usual. Much of the software relies on systemd/udev features and in many cases, it is really hard to run software without a systemd or init process.

Here are the steps to boot a CentOS 8 container under CentOS 8 host server:

STEP 1) Install EPEL repository.

EPEL CentOS 8 repository now includes LXC 3.0 software.

dnf install -y epel-release

STEP 2) Install LXC software and start LXC service.

At present, the LXC software version is 3.0.4. The package lxc-templates includes template scripts to create a Linux distribution environment like CentOS, Ubuntu, Debian, Gentoo, ArchLinux, Oracle, Alpine, and many others and it also includes the configuration templates to start these Linux distributions.

dnf install -y lxc lxc-templates
dnf install -y wget tar

The wget and tar packages are required if the LXC template installation (downloading an image) is going to be performed.
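
After the installation, the template scripts from lxc-templates usually land under /usr/share/lxc/templates/ (the path is assumed from the default packaging) and can be listed with:

ls /usr/share/lxc/templates/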

STEP 3) Create a CentOS 8 container with the help of LXC templates and run it.

Use the lxc-templates to prepare a CentOS 8 container environment. The currently available containers are listed here http://images.linuxcontainers.org/. Check out the URL and choose the right container. Here the CentOS 8 amd64 is used.

lxc-create --template download -n mycontainer -- --dist centos --release 8 --arch amd64 --keyserver hkp://keyserver.ubuntu.com
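
Once the image is downloaded, a quick sketch of starting the container and checking its state (the name mycontainer matches the command above):

lxc-start -n mycontainer
lxc-ls -f
lxc-attach -n mycontainer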

Keep on reading!

Replace current interface configuration with a bridge device using nmcli (NetworkManager)

This article shows how the primary network interface can be replaced by a bridge device, with the network interface becoming a part of the bridge as a slave device, without a reboot or restart of the server, using nmcli under CentOS 8 (and probably any other Linux distribution that uses NetworkManager to configure network devices, like Ubuntu).
The main steps are:

  1. Create a connection profile of a bridge device.
  2. Set the same network configuration as the primary network to the bridge device.
  3. Create a connection profile for the primary interface device as a slave network device to the newly created bridge.
  4. Delete the current primary connection, which is using the primary network device and configuration.
  5. Reload the bridge connection profile to take effect. The bridge device will actually begin to work.

The main goal is not to reboot the server or lose the connection to the server. The primary network interface is the only connection on the server and losing it would make the server unreachable. So the last two steps should be performed in the background, in a script, or in a detached terminal (like screen); see the sketch after the commands below.
Here are all the commands in one place:

nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "192.168.0.20/24" ipv4.gateway "192.168.0.1" ipv4.dns "8.8.8.8 1.1.1.1"
nmcli con add type bridge-slave ifname enp0s3 master br0
nmcli con del "enp0s3"; nmcli con reload "br0" &
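
Because the last line drops the only active connection for a moment, it could be run detached, for example in a screen session, and the result verified afterwards (a sketch; the connection names match the commands above):

screen -dm bash -c 'nmcli con del "enp0s3"; nmcli con reload "br0"'
nmcli connection show --active
ip addr show br0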

Here is the detailed information for the above commands:
Keep on reading!