Create MacVTap device using NetworkManager nmcli under CentOS 8

Continuing the NetworkManager management with nmcli series, here is a quick Linux console tip for CentOS 8 users (or any distribution that uses NetworkManager to manage the networking): how to create a virtualized MacVTap bridge device with the NetworkManager nmcli command-line utility, so that the configuration is preserved over reboots.

nmcli connection add type macvlan dev enp0s3 mode bridge tap yes ifname macvtap0 con-name macvtap0 ip4 0.0.0.0/24

The line above creates a virtualized bridged interface and a connection with the name macvtap0. The MacVTap device named macvtap0 is in bridge mode with the physical network interface enp0s3 and uses a manually set IP. If the ip4 part is omitted, DHCP is used by default.
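For example, the same connection relying on DHCP instead of a manual address would simply omit the ip4 part (a sketch based on the note above; the interface and connection names are kept the same):

nmcli connection add type macvlan dev enp0s3 mode bridge tap yes ifname macvtap0 con-name macvtap0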

There is one big limitation – there is no link between enp0s3 and macvtap0. macvtap0 can receive packets from the network through enp0s3, but there is no direct link between the two network devices. In simple words, when used in a virtualized environment, a virtual machine attached to macvtap0 may have access to the network shared with enp0s3, but the virtual machine cannot communicate with the IPs of enp0s3 itself!

Typically, this is used to make both the guest and the host show up directly on the switch that the host is connected to.

Linux Virtualization, https://virt.kernelnewbies.org/MacVTap

Initial state, only one connection in NetworkManager.

The main server connection named enp0s3 uses the network interface with the same name, enp0s3:

[root@srv ~]# nmcli con
NAME    UUID                                  TYPE      DEVICE 
enp0s3  09497bbf-da59-42b7-a72c-d69369760b36  ethernet  enp0s3
[root@srv ~]# nmcli 
enp0s3: connected to enp0s3
        "Intel 82540EM"
        ethernet (e1000), 08:00:27:03:C9:2E, hw, mtu 1500
        ip4 default
        inet4 192.168.0.20/24
        route4 192.168.0.0/24 metric 100
        route4 0.0.0.0/0 via 192.168.0.1 metric 100
        inet6 fe80::a00:27ff:fe03:c92e/64
        route6 fe80::/64 metric 100

lo: unmanaged
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
        servers: 8.8.8.8 1.1.1.1
        interface: enp0s3

Use "nmcli device show" to get complete information about known devices and
"nmcli connection show" to get an overview on active connection profiles.

Consult nmcli(1) and nmcli-examples(7) manual pages for complete usage details.

Add the MacVTap device with the name macvtap0

[root@srv ~]# nmcli connection add type macvlan dev enp0s3 mode bridge tap yes ifname macvtap0 con-name macvtap0 ip4 0.0.0.0/24
Connection 'macvtap0' (7a5ef04c-ea98-4642-ac5d-4239f715f631) successfully added.

A MacVTap device, a network connection, and a link are established. The name of the MacVTap device and the network connection is macvtap0.
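A quick way to check the new connection profile and the underlying device (output not shown here) is:

[root@srv ~]# nmcli con
[root@srv ~]# ip -d link show macvtap0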

Keep on reading!

Create bridge and add TUN/TAP device using NetworkManager nmcli under CentOS 8

This article shows how to create a network bridge device and a TUN/TAP device, which is then added to the bridge. CentOS 8 Stream is used along with the console NetworkManager program nmcli.
TUN/TAP devices are often used in the virtualization world as a link device between the host machine and the virtual machine.

This article covers the case when the bridge does not include the server's main network interface (the Internet-facing interface and so on) but is an additional device, so its MAC address and the virtual machines' MAC addresses are not exposed through the server's main network interface.

If the server's main network interface should be included in the bridge device, i.e. the bridge replaces the main network interface, there is another article on the subject – Replace current interface configuration with a bridge device using nmcli (NetworkManager).

The device names are as follows:

  • br0 is the name of the network bridge.
  • 10.10.10.1 with mask /24 is the IP of the bridge device named br0. Because the idea is to use the bridge only locally, a private IP is used and it is set manually.
  • tap0 is the name of the TUN/TAP device.
  • enp0s3 is the server’s main network connection. It is not used in this howto.

Here are all the commands to create a bridge, create a TUN/TAP device and add it to the bridge, and then activate the bridge's link.

nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "10.10.10.1/24"
nmcli con up br0
nmcli connection add type tun ifname tap0 con-name tap0 mode tap owner 0 ip4 0.0.0.0/24
nmcli con add type bridge-slave ifname tap0 master br0
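After the commands above, a quick way to check the state of the new connection profiles and which interfaces are attached to br0 (the exact output is not shown here) is:

nmcli con
ip link show master br0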

Here are the steps with much more details and information including all the command output.
The networking before any reconfiguration:

[root@srv ~]# nmcli
enp0s3: connected to enp0s3
        "Intel 82540EM"
        ethernet (e1000), 08:00:27:03:C9:2E, hw, mtu 1500
        ip4 default
        inet4 192.168.0.20/24
        route4 192.168.0.0/24 metric 100
        route4 0.0.0.0/0 via 192.168.0.1 metric 100
        inet6 fe80::a00:27ff:fe03:c92e/64
        route6 fe80::/64 metric 100

lo: unmanaged
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
        servers: 8.8.8.8 1.1.1.1
        interface: enp0s3

Use "nmcli device show" to get complete information about known devices and
"nmcli connection show" to get an overview on active connection profiles.

Consult nmcli(1) and nmcli-examples(7) manual pages for complete usage details.
[root@srv ~]# nmcli con
NAME    UUID                                  TYPE      DEVICE 
enp0s3  09497bbf-da59-42b7-a72c-d69369760b36  ethernet  enp0s3 

Keep on reading!

Run LXC CentOS 8 container with bridged network under CentOS 8

The LXC container software comes to CentOS 8 with the EPEL 8 repository. LXC is a multi-process container technology, which can boot a whole Linux distribution under container isolation. It is very similar to systemd-nspawn and a bit different from docker containers. LXC containers are used when multiple processes are needed in a single container. In most cases, the LXC container is a fully-featured Linux distribution (systemd or SysV init) booted under a Linux container.
There are several major differences between docker/podman containers and LXC:

  • Multiple processes.
  • Easy configuration modification. Even hot-plugging is supported.
  • Unprivileged Linux containers.
  • Complex network setups. Multiple network interfaces connected to different networks, for example.
  • Live systemd, i.e. systemd or SysV init is booted as usual. Much software relies on systemd/udev features and in many cases it is really hard to run software without a systemd or init process.

Here are the steps to boot a CentOS 8 container under CentOS 8 host server:

STEP 1) Install EPEL repository.

EPEL CentOS 8 repository now includes LXC 3.0 software.

dnf install -y epel-release

STEP 2) Install LXC software and start LXC service.

At present, the LXC software version is 3.0.4. The package lxc-templates includes template scripts to create a Linux distribution environment like CentOS, Ubuntu, Debian, Gentoo, ArchLinux, Oracle, Alpine, and many others and it also includes the configuration templates to start these Linux distributions.

dnf install -y lxc lxc-templates
dnf install -y wget tar

The wget and tar packages are required when a container is created from the LXC templates.

STEP 3) Create a CentOS 8 container with the help of LXC templates and run it.

Use the lxc-templates to prepare a CentOS 8 container environment. The currently available containers are listed here http://images.linuxcontainers.org/. Check out the URL and choose the right container. Here the CentOS 8 amd64 is used.

lxc-create --template download -n mycontainer -- --dist centos --release 8 --arch amd64 --keyserver hkp://keyserver.ubuntu.com
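Once created, the container can typically be started and inspected with the standard LXC tools (a brief sketch only; the detailed run steps follow in the full article):

lxc-start -n mycontainer
lxc-ls -f
lxc-attach -n mycontainer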

Keep on reading!

Replace current interface configuration with a bridge device using nmcli (NetworkManager)

This article shows how the primary network interface can be replaced by a bridge device, with the network interface becoming part of the bridge as a slave device, without a reboot or restart of the server, using nmcli under CentOS 8 (and probably any other Linux distribution, like Ubuntu, that uses NetworkManager to configure network devices).
The main steps are:

  1. Create a connection profile of a bridge device.
  2. Set the same network configuration as the primary network to the bridge device.
  3. Create a connection profile for the primary interface device as a slave network device to the newly created bridge.
  4. Delete the current primary connection, which is using the primary network device and configuration.
  5. Reload the bridge connection profile for it to take effect. The bridge device will actually begin to work.

The main goal is not to reboot the server or lose the connection to it. The primary network interface is the only connection on the server, and losing it would make the server unreachable. So the last two steps should be performed in the background, in a script, or in a detached terminal (like screen).
Here are all the commands in one place:

nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "192.168.0.20/24" ipv4.gateway "192.168.0.1" ipv4.dns "8.8.8.8 1.1.1.1"
nmcli con add type bridge-slave ifname enp0s3 master br0
nmcli con del "enp0s3"; nmcli con reload "br0" &
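Because deleting the enp0s3 profile momentarily drops the only connection, the last line can also be run inside a detached screen session, as suggested above (a minimal sketch; the session name br0-switch is just an example):

screen -dmS br0-switch bash -c 'nmcli con del "enp0s3"; nmcli con reload "br0"'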

Here is the detailed information for the above commands:
Keep on reading!

Howto do QEMU full virtualization with bridged networking

This howto continues the previous one, “Howto do QEMU full virtualization with MacVTap networking”, except that it shows how to use the classic networking setup – a bridge device. Because this setup requires a specific configuration for every Linux distribution (unless the bridge is just added manually), it is separated into this howto. For clarity and completeness, the first two steps are repeated here so this howto stays independent of the original one mentioned above.
To use full virtualization under Linux you can use QEMU on its own, without any other library or manager like virt-manager. QEMU is simple enough, and with a couple of parameters you can start KVM virtual machines with near-native performance. To use KVM you must enable it in the BIOS of your server (or desktop machine).

Here are the main steps:

STEP 1) Enable KVM in the BIOS

  • For an Intel machine you must find the option Intel Virtualization Technology (or Intel VT-x), probably in a BIOS menu like Chipset, Advanced CPU Configuration or similar.
  • For an AMD machine the virtualization typically cannot be disabled, so it is enabled by default, but you can check for additional virtualization features to enable like Virtualization Extensions, Vanderpool and others.
  • Also enable the additional features – Intel VT-d or AMD IOMMU – if they are available.

Reboot your machine and check if the KVM is supported:

srv@local ~$ cat /proc/cpuinfo |grep -E "vmx|svm"
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap xsaveopt dtherm ida arat pln pts
...
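Another quick sanity check (assuming the kvm kernel modules are already loaded and the /dev/kvm device exists) is:

srv@local ~$ lsmod | grep kvm
srv@local ~$ ls -l /dev/kvm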

STEP 2) Install QEMU

Under CentOS 7 you can just install a couple of packages – that’s all you need:

yum install -y qemu qemu-common qemu-img qemu-kvm-common qemu-system-x86 qemu-user bridge-utils

Or under Ubuntu

apt-get install qemu-kvm bridge-utils

STEP 3) Prepare the network 1 – the bridge device

Under CentOS 7 add the following configuration file

/etc/sysconfig/network-scripts/ifcfg-br0

with the content of

DEVICE=br0
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
IPADDR0=192.168.0.1
PREFIX0=24
#GATEWAY0=192.168.0.1
NETMASK=255.255.255.0
IPV6_FAILURE_FATAL=no
NM_CONTROLLED=no
ZONE=public

If you want to set a real IP for your virtual machine, you should set a real IP here and uncomment GATEWAY0 with the real gateway IP. If a real IP is used, you should also include the main Internet network interface in the bridge by adding the following at the end of the configuration file /etc/sysconfig/network-scripts/ifcfg-eth0 (if eth0 is your network interface):

...
BRIDGE=br0

And restart the network

srv@local ~$ systemctl restart network

Under Ubuntu add to the file

/etc/network/interfaces

the following:

# Bridge
auto br0
iface br0 inet static
  address 192.168.0.10
  netmask 255.255.255.0
#  gateway 192.168.0.1
  bridge_ports none
  bridge_stp off
  bridge_fd 0
  bridge_maxwait 0

If you want to set a real IP for your virtual machine, you should set a real IP here, uncomment the gateway line with the real gateway IP and replace the “none” in the option “bridge_ports” with the name of your main Internet network interface. For example:

  ...
  bridge_ports eth0
  ...

And restart the network

srv@local ~$ /etc/init.d/networking restart

Or we can add the bridge device manually:

srv@local ~$ brctl addbr br0
srv@local ~$ ip link set dev br0 up
srv@local ~$ ip addr add 192.168.0.1/24 dev br0

If we use a real IP we have to add the main Internet network interface to the bridge, so when the network in the virtual machine is set up with a real IP it will work without any additional configuration. But if we use local IPs, like in our setup here, and we want to have Internet in the virtual machine, we must enable masquerading and Linux routing. You can do it with:

srv@local ~$ echo 1 > /proc/sys/net/ipv4/ip_forward
#NAT with firewalld
srv@local ~$ firewall-cmd --add-masquerade --permanent
#NAT with iptables (eth0 is the main Internet network interface here)
srv@local ~$ iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Use either the firewalld or the iptables setup, depending on your system configuration; just check whether firewalld is running with:

srv@local ~$ firewall-cmd --list-all

If you receive an error saying the command is not found or firewalld is not running, you should use the “NAT with iptables” variant.
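Note that the echo into /proc above does not survive a reboot; one way to make the IP forwarding permanent (the file name used here is only an example) is:

srv@local ~$ echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-ip-forward.conf
srv@local ~$ sysctl -p /etc/sysctl.d/99-ip-forward.conf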
So the network is ready!

STEP 4) Prepare the network 2 – the tun/tap for the virtual machine

After the bridge device has been added, the tun/tap device, which will be used by the QEMU virtual machine, must be added:

srv@local ~$ ip tuntap add tap0 mode tap
srv@local ~$ brctl addif br0 tap0
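To verify that the tap device exists and has been added to the bridge:

srv@local ~$ ip tuntap show
srv@local ~$ brctl show br0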

STEP 5) Create a QEMU hard drive

Create a 100G file

srv@local ~$ cd /mnt/storage1/disks/
srv@local ~$ qemu-img create -f qcow2 vm_harddisk.qcow2 100G
Formatting 'vm_harddisk.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 lazy_refcounts=off 

or you can enable encryption (but then on every start of the virtual machine you must enter the key through the QEMU console):

srv@local ~$ qemu-img create -f qcow2 vm_harddisk_e.bin -o encryption 100G
Formatting 'vm_harddisk_e.bin', fmt=qcow2 size=107374182400 encryption=on cluster_size=65536 lazy_refcounts=off
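Either image can later be inspected (virtual size, format, encryption flag) with qemu-img info, for example:

srv@local ~$ qemu-img info vm_harddisk.qcow2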

STEP 6) Boot up the QEMU KVM virtual server

srv@local ~$ qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -runas qemu -daemonize -vnc 127.0.0.1:1 \
-drive file=/mnt/storage1/disks/vm_harddisk.qcow2,index=0,cache=none,aio=threads,if=virtio \
-boot c -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0 \
-balloon virtio -m 2048 -monitor telnet:127.0.0.1:5801,server,nowait

The command above will:

  • “-enable-kvm” – enable the KVM – full virtualization with near native performance
  • “-cpu host” – will expose all supported host CPU features (only supported in KVM mode)
  • “-smp 4” – sets 4 processors to the virtual machine
  • “-daemonize” – start the command in daemon mode
  • “-runas qemu” – run under this user; you can run the whole virtual machine as a user created especially for it, there is no need to run it as root, and it is even recommended to run it as an unprivileged user
  • “-vnc 127.0.0.1:1” – start a VNC server on this IP:PORT = 127.0.0.1:5901; the IP must be present on the server (here the local one is used), or you can use 0.0.0.0:1 for 0.0.0.0:5901, but in every case limit the access with a firewall
  • “-drive file=/mnt/storage1/disks/vm_harddisk.qcow2,index=0,cache=none,aio=threads,if=virtio” – set the main hard drive of the system
  • “-boot c” – boot from the first hard disk
  • “-net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0” – set the network interface using the tap device created by STEP 3) and STEP 4)
  • “-balloon virtio” – use the balloon driver to be able to hot add or hot remove RAM (in newer versions this option is deprecated and can be skipped)
  • “-m 2048” – set the virtual RAM size to 2048 MB
  • “-monitor telnet:127.0.0.1:5801,server,nowait” – set the management console for this virtual server, you can connect with:
    srv@local ~$ telnet 127.0.0.1 5801
    Trying 127.0.0.1...
    Connected to 127.0.0.1.
    Escape character is '^]'.
    QEMU 2.0.0 monitor - type 'help' for more information
    (qemu) 
    <Press "CTRL+]">
    telnet> Connection closed.
    

    When quitting the management console you must NOT exit the console with quit/exit or CTRL+d, because that will terminate the virtual server; you must disconnect from the console with “CTRL+]” and then quit the telnet shell. With the console you can hot add/remove CPU, RAM, network cards, PCI devices, hard drives, start/stop/shutdown/reset the virtual machine and a lot more.
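    For example, a few of the monitor commands (just a small illustration – consult the QEMU monitor documentation for the full list):

    (qemu) info status
    (qemu) info balloon
    (qemu) balloon 1024
    (qemu) system_powerdown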

Boot the virtual machine from the hard drive given by “-drive”, with the network given by “-net” (a couple of options); the RAM uses balloon memory and can be adjusted on the fly; the VNC server listens for connections on IP:port = 127.0.0.1:5901 (probably you’ll want to change this to the real IP of your server, but be careful to set up a firewall rule for 5901 – the VNC port) and a management console listens on IP:port 127.0.0.1:5801.

* Boot the virtual server from a virtual CD/DVD

The first time you boot you will probably need to boot from an installation disk; this can be done with the following command:

srv@local ~$ qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -runas qemu -daemonize -vnc 127.0.0.1:1 -cdrom /mnt/storage1/disks/isos/CentOS-7-x86_64-NetInstall-1708.iso -boot d -drive file=/mnt/storage1/disks/vm_harddisk.qcow2,index=0,cache=none,aio=threads,if=virtio -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0 -balloon virtio -m 2048 -monitor telnet:127.0.0.1:5805,server,nowait

The changes:

  1. “-boot d” – The first boot device is now the CD/DVD drive (“c” is the first hard disk, “d” is the first CD-ROM)
  2. “-cdrom /mnt/storage1/disks/isos/CentOS-7-x86_64-NetInstall-1708.iso” – added the installation disk to the virtual machine

A newer QEMU version may need adding “script=no,downscript=no” to the tap0 interface!

srv@local ~$ qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -runas qemu -daemonize -vnc 127.0.0.1:1 -cdrom /mnt/storage1/disks/isos/CentOS-7-x86_64-NetInstall-1708.iso -boot d -drive file=/mnt/storage1/disks/vm_harddisk.qcow2,index=0,cache=none,aio=threads,if=virtio -net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0,script=no,downscript=no -m 2048 -monitor telnet:127.0.0.1:5805,server,nowait