Passing the NVIDIA card through to an LXC container is simple enough; there are three rules to watch for:
- bind mount the NVIDIA device nodes from the host's /dev into the LXC container's /dev
- allow cgroup access for the bound /dev devices (the device major numbers can be verified as shown after this list)
- install the same version of the NVIDIA driver/software on the host and in the LXC container, or there will be repeated "version mismatch" errors
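The cgroup allow rules must use the major numbers the device nodes actually have on the host: 195 covers the /dev/nvidia* and /dev/nvidiactl nodes, while nvidia-uvm and nvidia-caps get dynamically assigned majors (234 and 237 in the configuration below), so they may differ on another machine. A quick way to check the majors and the driver versions on both sides, assuming the container is named gpu1u as in the configuration below:

# On the host: list the NVIDIA device nodes with their major numbers
# (the number before the comma, e.g. "195, 0")
ls -l /dev/nvidia* /dev/nvidia-caps

# On the host: the kernel driver version
cat /proc/driver/nvidia/version

# Inside the container: the user-space driver version must match the host's
lxc-attach -n gpu1u -- nvidia-smi --query-gpu=driver_version --format=csv,noheader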
When using this kind of LXC pass-through, i.e. bind mounts, the video card can be used simultaneously on the host and in all the LXC containers where it is bind mounted: multiple LXC containers share the video device(s).
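A quick way to see the sharing in action is to run nvidia-smi on the host and inside a container at the same time; both list the same physical GPUs. A sketch, assuming the container name gpu1u from the configuration below:

# On the host
nvidia-smi

# Simultaneously, inside the container: the same GPUs are visible
lxc-attach -n gpu1u -- nvidia-smi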
This is a working LXC 4.0.12 configuration:
# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = x86_64

# Container specific configuration
lxc.rootfs.path = dir:/mnt/storage1/servers/gpu1u/rootfs
lxc.uts.name = gpu1u

# Network configuration
lxc.net.0.type = macvlan
lxc.net.0.link = enp1s0f1
lxc.net.0.macvlan.mode = bridge
lxc.net.0.flags = up
lxc.net.0.name = eth0
lxc.net.0.hwaddr = fe:77:3f:27:15:60

# Allow cgroup access
lxc.cgroup2.devices.allow = c 195:* rwm
lxc.cgroup2.devices.allow = c 234:* rwm
lxc.cgroup2.devices.allow = c 237:* rwm

# Pass through device files
lxc.mount.entry = /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry = /dev/nvidia1 dev/nvidia1 none bind,optional,create=file
lxc.mount.entry = /dev/nvidia2 dev/nvidia2 none bind,optional,create=file
lxc.mount.entry = /dev/nvidia3 dev/nvidia3 none bind,optional,create=file
lxc.mount.entry = /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-caps dev/nvidia-caps none bind,optional,create=dir

# Autostart
lxc.group = onboot
lxc.start.auto = 1
lxc.start.delay = 10
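A note on the third rule: inside the container only the user-space part of the driver is needed, because the kernel module is already loaded on the host. One way to get matching versions, assuming NVIDIA's .run installer is used (the file name below is a placeholder for the exact same installer version used on the host):

# Inside the container: install user-space components only;
# the kernel module comes from the host kernel
sh NVIDIA-Linux-x86_64-<version>.run --no-kernel-module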