Debug options for LXC and lxc-start when an LXC container could not start

Setting up and running an LXC container is really easy, but sometimes it is unclear why the LXC container could not start. Most of the time, there is a generic error, which says nothing about the real reason:

root@srv ~ # lxc-start -n test-lxc
lxc-start: test-lxc: lxccontainer.c: wait_on_daemonized_start: 867 Received container state "ABORTING" instead of "RUNNING"
lxc-start: test-lxc: tools/lxc_start.c: main: 306 The container failed to start
lxc-start: test-lxc: tools/lxc_start.c: main: 309 To get more details, run the container in foreground mode
lxc-start: test-lxc: tools/lxc_start.c: main: 311 Additional information can be obtained by setting the --logfile and --logpriority options

There is no specific reason given why the LXC container test-lxc cannot be started and the lxc-start command failed. There is just a suggestion to use the logging options, and here is how the administrator of the box may do it, by including the following lxc-start options:

-l DEBUG --logfile=test-lxc.log --logpriority=9
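
For example, run the container in foreground mode with full debug logging (test-lxc and the log file name are just the values from above):

lxc-start -n test-lxc -F -l DEBUG --logfile=test-lxc.log --logpriority=9

After the failed start, the file test-lxc.log contains the detailed reason; the last lines are usually the most relevant:

tail -n 50 test-lxc.log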

Here is a real-world example of an old kernel trying to run LXC 4.0.

Run LXC CentOS 8 container with bridged network under CentOS 8

The LXC container software comes to CentOS 8 with the EPEL 8 repository. LXC is a multi-process container runtime, which can boot a Linux distribution under container isolation. It is very similar to systemd-nspawn and somewhat different from Docker containers. LXC containers are used when multiple processes are needed inside a single container. In most cases, an LXC container is a fully-featured Linux distribution (systemd or SysV, i.e. init) booted under a Linux container.
There are several major differences between docker/podman containers and LXC:

  • Multi-process workloads.
  • Easy configuration modification. Even hot-plugging is supported.
  • Unprivileged Linux containers.
  • Complex network setups. Multiple network interfaces connected to different networks, for example.
  • Live init system, i.e. systemd or SysV init is booted as usual. Much software relies on systemd/udev features, and in many cases it is really hard to run such software without a systemd or init process.

Here are the steps to boot a CentOS 8 container under a CentOS 8 host server:

STEP 1) Install EPEL repository.

The EPEL repository for CentOS 8 now includes the LXC 3.0 software.

dnf install -y epel-release
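
To verify the repository is enabled, a quick optional check:

dnf repolist epel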

STEP 2) Install LXC software and start LXC service.

At present, the LXC software version is 3.0.4. The package lxc-templates includes template scripts to create a Linux distribution environment such as CentOS, Ubuntu, Debian, Gentoo, ArchLinux, Oracle, Alpine, and many others, and it also includes the configuration templates to start these Linux distributions.

dnf install -y lxc lxc-templates
dnf install -y wget tar

wget and tar are required if a container is going to be created with the LXC template scripts.
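
The step title mentions starting the LXC service; with systemd this is typically done as follows (assuming the service name lxc, as shipped by the EPEL package):

systemctl enable --now lxc
systemctl status lxc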

STEP 3) Create a CentOS 8 container with the help of LXC templates and run it.

Use the LXC templates to prepare a CentOS 8 container environment. The currently available images are listed at http://images.linuxcontainers.org/. Check out the URL and choose the right image. Here, the CentOS 8 amd64 image is used.

lxc-create --template download -n mycontainer -- --dist centos --release 8 --arch amd64 --keyserver hkp://keyserver.ubuntu.com
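
Once the image is downloaded, the container can be started and inspected with the standard LXC tools (mycontainer is the name chosen above):

lxc-start -n mycontainer
lxc-ls -f
lxc-attach -n mycontainer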


tmpfs mount on /dev/shm in LXC container or chroot environment

When an LXC container boots, its /dev directory is not populated the way a normal boot process would populate it. Similarly, when you create a chroot jail, /dev is not mounted, or only a few devices are created.
For LXC containers, there is an option to populate /dev with a minimal set of required devices:

lxc.autodev = 1

which will create a tmpfs mount under /dev, create some basic devices, and ensure that /dev/shm is mounted with tmpfs!
If you omit this option, the /dev directory won’t be populated and will keep only the devices you made or copied when you created the LXC container (or the chroot jail), and /dev/shm will not be mounted using tmpfs, which could cause numerous bad issues.
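A quick way to check whether /dev/shm is really mounted as tmpfs inside the container or the chroot:

df -h /dev/shm
mount | grep /dev/shm
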
If you get errors like

 * configure has detected that the sem_open function is broken.
 * Please ensure that /dev/shm is mounted as a tmpfs with mode 1777.

You could mount the /dev/shm of the LXC container or the chroot jail (the size is usually tuned to half of the server’s RAM) with:

mkdir -p /dev/shm
mount -t tmpfs -o nodev,nosuid,noexec,mode=1777,size=6144m tmpfs /dev/shm
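
For a chroot jail, the mount can be made persistent with an /etc/fstab entry on the host, reusing the options and the example size from above (replace /dev/shm with the full path to the jail’s dev/shm when mounting from the host):

tmpfs /dev/shm tmpfs nodev,nosuid,noexec,mode=1777,size=6144m 0 0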

Or reboot your LXC container with a new configuration (probably in “/var/lib/lxc/[lxc_name]/config”) adding the following line:

lxc.mount.entry = none dev/shm tmpfs nodev,nosuid,noexec,mode=1777,create=dir 0 0
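
After adding the line, restart the container to apply the new mount entry:

lxc-stop -n [lxc_name]
lxc-start -n [lxc_name]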

Thus you ensure that /dev/shm is mounted on tmpfs and all semaphore functions work properly.

* Real output of a failed Gentoo compilation of the python package:
 * configure has detected that the sem_open function is broken.
 * Please ensure that /dev/shm is mounted as a tmpfs with mode 1777.
 * ERROR: dev-lang/python-3.3.4-r1::gentoo failed (configure phase):
 *   Broken sem_open function (bug 496328)
 * 
 * Call stack:
 *     ebuild.sh, line 124:  Called src_configure
 *   environment, line 3542:  Called die
 * The specific snippet of code:
 *           die "Broken sem_open function (bug 496328)";
 * 
 * If you need support, post the output of `emerge --info '=dev-lang/python-3.3.4-r1::gentoo'`,
 * the complete build log and the output of `emerge -pqv '=dev-lang/python-3.3.4-r1::gentoo'`.
 * The complete build log is located at '/var/tmp/portage/dev-lang/python-3.3.4-r1/temp/build.log'.
 * The ebuild environment file is located at '/var/tmp/portage/dev-lang/python-3.3.4-r1/temp/environment'.
 * Working directory: '/var/tmp/portage/dev-lang/python-3.3.4-r1/work/x86_64-pc-linux-gnu'
 * S: '/var/tmp/portage/dev-lang/python-3.3.4-r1/work/Python-3.3.4'

>>> Failed to emerge dev-lang/python-3.3.4-r1, Log file: