This article is a follow-up to Run podman/docker InfluxDB 1.8 container to collect statistics from collectd, where the time series database InfluxDB stores the data. Running Grafana in another container is an easy and lightweight way to visualize the collected data.
Containerizing the Grafana service is simple enough with docker/podman, but there are several tips and steps worth considering first. They will significantly ease the maintainer's life, making upgrades, moving to another server, or backing up important data really easy: just stop the container and start another one with the same options, changing only the name and the image version.
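A minimal sketch of that workflow; the image tag, container name, and data path below are assumptions to adjust to your setup:

```shell
# Sketch only: tag, name and data path are assumptions, not from the article.
IMAGE="docker.io/grafana/grafana:9.3.1"
DATA_DIR="/mnt/storage/podman/grafana"

if command -v podman >/dev/null 2>&1; then
  # All state lives in the mapped volume, so upgrading is just
  # stop + rm + run again with a newer tag and the same options.
  podman run -d --name grafana \
    -p 3000:3000 \
    -v "$DATA_DIR:/var/lib/grafana" \
    "$IMAGE" || true
fi
```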
Here are the important points to mind when running Grafana 9 in a docker/podman container:
Keep on reading!
Run podman/docker InfluxDB 1.8 container to collect statistics from collectd
Yet another article on the topic of the InfluxDB 1.8 and collectd gathering statistics, in continuation of the articles Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9. This time, the InfluxDB runs in a container created with podman or docker software.
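A minimal sketch of such a container, assuming the official influxdb:1.8 image and its INFLUXDB_* environment variables; the data path is an assumption, and the collectd listener may additionally need a types.db file configured:

```shell
# Sketch only: paths are assumptions; 25826/udp is collectd's usual port.
IMAGE="docker.io/library/influxdb:1.8"
DATA_DIR="/mnt/storage/podman/influxdb"

if command -v podman >/dev/null 2>&1; then
  podman run -d --name influxdb \
    -p 8086:8086 \
    -p 25826:25826/udp \
    -e INFLUXDB_COLLECTD_ENABLED=true \
    -e INFLUXDB_COLLECTD_DATABASE=collectd \
    -v "$DATA_DIR:/var/lib/influxdb" \
    "$IMAGE" || true
fi
```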
Here are the important points to mind when running InfluxDB 1.8 in a docker/podman container:
Keep on reading!
gitlab in podman cannot create unix sockets in glusterfs because of SELinux
Installing gitlab-ee (and gitlab-ce) under CentOS 7 with enabled SELinux (i.e. enforcing mode) looped endlessly the container in restarting the installation process! There were multiple errors for missing sockets in the podman logs of the gitlab container. Here are some of the errors:
Missing postgresql unix socket in “/var/opt/gitlab/postgresql”:
Recipe: gitlab::database_migrations
  * bash[migrate gitlab-rails database] action run
    [execute] rake aborted!
    PG::ConnectionBad: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
    /opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:53:in `block (3 levels) in <top (required)>'
    /opt/gitlab/embedded/bin/bundle:23:in `load'
    /opt/gitlab/embedded/bin/bundle:23:in `<main>'
    Tasks: TOP => gitlab:db:configure
    (See full trace by running task with --trace)
    Error executing action `run` on resource 'bash[migrate gitlab-rails database]'
.....
.....
Running handlers:
There was an error running gitlab-ctl reconfigure:
bash[migrate gitlab-rails database] (gitlab::database_migrations line 55) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of "bash" "/tmp/chef-script20200915-35-lemic5" ----
STDOUT: rake aborted!
PG::ConnectionBad: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:53:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Tasks: TOP => gitlab:db:configure
(See full trace by running task with --trace)
STDERR:
---- End output of "bash" "/tmp/chef-script20200915-35-lemic5" ----
Ran "bash" "/tmp/chef-script20200915-35-lemic5" returned 1
Missing redis unix socket in “/var/opt/gitlab/redis”:
Running handlers:
There was an error running gitlab-ctl reconfigure:
redis_service[redis] (redis::enable line 19) had an error: RuntimeError: ruby_block[warn pending redis restart] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/redis/resources/service.rb line 65) had an error: RuntimeError: Execution of the command `/opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket INFO` failed with a non-zero exit code (1)
stdout:
stderr: Could not connect to Redis at /var/opt/gitlab/redis/redis.socket: No such file or directory
It should be noted that the /var/opt/gitlab directory has been mapped to /mnt/storage/podman/gitlab/data. GlusterFS is used for /mnt/storage, so the gitlab files reside on a GlusterFS volume.
ERROR 1) Cannot create unix socket.
Checking /var/log/audit/audit.log revealed the problem immediately:
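The actual resolution follows below, but in general, one common way to turn such audit.log denials into a local SELinux policy module is audit2allow; a sketch, where the module name is arbitrary:

```shell
# Sketch: build and load a local policy module from the logged denials.
# Requires the policycoreutils-python(-utils) package and root.
MODULE="gitlab-sockets"

if [ "$(id -u)" -eq 0 ] && command -v audit2allow >/dev/null 2>&1; then
  grep denied /var/log/audit/audit.log | audit2allow -M "$MODULE"
  semodule -i "$MODULE.pp"
fi
```

Review the generated .te file before loading it; allowing everything the log contains may be broader than needed.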
Keep on reading!
edit mysql options in docker (or docker-compose) mysql
Modifying the default options of the docker (podman) MySQL server is essential. The default MySQL options are too conservative, and even for simple (automation?) tests they could be too restrictive.
For example, changing only one or two of the default InnoDB configuration options may make the SQL queries, and the automation tests that rely on them, run multiple times faster.
Here are three simple ways to modify the (default or current) MySQL my.cnf configuration options:
- Command-line arguments. Any MySQL configuration option can be overridden by passing it on the command line of the mysqld binary. The format is:
--variable-name=value
and the variable names could be obtained by
mysqld --verbose --help
and for the live configuration options:
mysqladmin variables
- Options in an additional configuration file, which is included by the main configuration. The options in /etc/mysql/conf.d/config-file.cnf take precedence.
- Replacing the default my.cnf configuration file – /etc/mysql/my.cnf.
Check out also the official page – https://hub.docker.com/_/mysql.
Under CentOS 8, docker is replaced by podman; just substitute podman for docker in all of the commands below.
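As the official image page documents, the full option list and built-in defaults can be dumped without actually starting a server; a quick sketch:

```shell
# Run mysqld in help mode only; nothing is initialized or left behind.
# Note: the help output lists variables with dashes, not underscores.
PATTERN="innodb-buffer-pool-size"

if command -v docker >/dev/null 2>&1; then
  docker run --rm mysql:8 --verbose --help | grep -- "$PATTERN" || true
fi
```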
OPTION 1) Command-line arguments.
This is the simplest way of modifying the default my.cnf (the one that comes with the docker image). It is fast and easy to use and change, at the cost of a little more typing on the command line. As mentioned above, any MySQL option can be changed with a command-line argument to the mysqld binary. For example:
mysqld --innodb_buffer_pool_size=1024M
It starts the MySQL server with the variable innodb_buffer_pool_size set to 1G. Translated to containers (for multiple options, just append them at the end of the command):
- docker run

root@srv ~ # docker run --name my-mysql -v /var/lib/mysql:/var/lib/mysql \
 -e MYSQL_ROOT_PASSWORD=111111 \
 -d mysql:8 \
 --innodb_buffer_pool_size=1024M \
 --innodb_read_io_threads=4 \
 --innodb_flush_log_at_trx_commit=2 \
 --innodb_flush_method=O_DIRECT
1bb7f415ab03b8bfd76d1cf268454e3c519c52dc383b1eb85024e506f1d04dea
root@srv ~ # docker exec -it my-mysql mysqladmin -p111111 variables | grep innodb_buffer_pool_size
| innodb_buffer_pool_size | 1073741824
- docker-compose:

# Docker MySQL arguments example
version: '3.1'

services:
  db:
    image: mysql:8
    command: --default-authentication-plugin=mysql_native_password --innodb_buffer_pool_size=1024M --innodb_read_io_threads=4 --innodb_flush_log_at_trx_commit=2 --innodb_flush_method=O_DIRECT
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 111111
    volumes:
      - /var/lib/mysql_data:/var/lib/mysql
    ports:
      - "3306:3306"
Here is how to run it (the above text file should be named docker-compose.yml and the file should be in the current directory when executing the command below):
root@srv ~ # docker-compose up
Creating network "docker-compose-mysql_default" with the default driver
Creating my-mysql ... done
Attaching to my-mysql
my-mysql | 2020-06-16 09:45:35+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.20-1debian10 started.
my-mysql | 2020-06-16 09:45:35+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
my-mysql | 2020-06-16 09:45:35+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.20-1debian10 started.
my-mysql | 2020-06-16T09:45:36.293747Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
my-mysql | 2020-06-16T09:45:36.293906Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.20) starting as process 1
my-mysql | 2020-06-16T09:45:36.307654Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
my-mysql | 2020-06-16T09:45:36.942424Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
my-mysql | 2020-06-16T09:45:37.136537Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/var/run/mysqld/mysqlx.sock' bind-address: '::' port: 33060
my-mysql | 2020-06-16T09:45:37.279733Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
my-mysql | 2020-06-16T09:45:37.306693Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
my-mysql | 2020-06-16T09:45:37.353358Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.20' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
And check the option:
root@srv ~ # docker exec -it my-mysql mysqladmin -p111111 variables | grep innodb_buffer_pool_size
| innodb_buffer_pool_size | 1073741824
OPTION 2) Options in an additional configuration file.
Create a MySQL option file with name config-file.cnf:
[mysqld]
innodb_buffer_pool_size=1024M
innodb_read_io_threads=4
innodb_flush_log_at_trx_commit=2
innodb_flush_method=O_DIRECT
- docker run

The source path must be an absolute path:

docker run --name my-mysql \
 -v /var/lib/mysql_data:/var/lib/mysql \
 -v /etc/mysql/docker-instances/config-file.cnf:/etc/mysql/conf.d/config-file.cnf \
 -e MYSQL_ROOT_PASSWORD=111111 \
 -d mysql:8

- docker-compose

The source path may be relative to the current directory:

# Docker MySQL arguments example
version: '3.1'

services:
  db:
    container_name: my-mysql
    image: mysql:8
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 111111
    volumes:
      - /var/lib/mysql_data:/var/lib/mysql
      - ./config-file.cnf:/etc/mysql/conf.d/config-file.cnf
    ports:
      - "3306:3306"
OPTION 3) Replacing the default my.cnf configuration file.
Add the modified options to a my.cnf template file and map it into the container at /etc/mysql/my.cnf. When overwriting the main MySQL option file my.cnf, you may map the whole /etc/mysql directory instead (just replace /etc/mysql/my.cnf with /etc/mysql below). The source file (or directory) may be any file (or directory) on the host; it does not need to be named /etc/mysql/my.cnf (or /etc/mysql).
- docker run:
The source path must be an absolute path.

docker run --name my-mysql \
 -v /var/lib/mysql_data:/var/lib/mysql \
 -v /etc/mysql/my.cnf:/etc/mysql/my.cnf \
 -e MYSQL_ROOT_PASSWORD=111111 \
 --publish 3306:3306 \
 -d mysql:8
Note: here a new option “--publish 3306:3306” is included to show how to map the ports out of the container, like all the docker-compose examples here.
- docker-compose:
The source path may be relative to the current directory:

# Use root/example as user/password credentials
version: '3.1'

services:
  db:
    container_name: my-mysql
    image: mysql:8
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 111111
    volumes:
      - /var/lib/mysql_data:/var/lib/mysql
      - ./mysql/my.cnf:/etc/mysql/my.cnf
    ports:
      - "3306:3306"
Change the location of container storage in podman (with SELinux enabled)
There are two main options to change the location of all the containers' storage:
- “mount bind” the new location onto the default storage directory (see Note 1)
- Change the path of the location in the configuration file /etc/containers/storage.conf
Stopping all your containers is not strictly mandatory, but you should stop them (if any) and copy the directory, because once the storage path is reconfigured, podman will not see the containers and images left in the old path!
STEP 1) Change the storage path in the podman configuration file.
If SELinux has been disabled (which should not be done), it is just a matter of changing a path option in the configuration file /etc/containers/storage.conf:
# Primary Read/Write location of container storage
graphroot = "/var/lib/containers/storage"
Change it to whatever path you like. Mostly, it should point to the big storage device. In our case, the big storage is mounted under “/mnt/mystorage/virtual/storage”. Change the options to:
# Primary Read/Write location of container storage
graphroot = "/mnt/mystorage/virtual/storage"
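With SELinux in enforcing mode, the copied files must also carry the right label. A sketch of the whole move under that assumption; the semanage equivalency rule is one way to keep the new path labeled like the default one:

```shell
OLD=/var/lib/containers/storage
NEW=/mnt/mystorage/virtual/storage

if [ "$(id -u)" -eq 0 ] && command -v semanage >/dev/null 2>&1 \
   && command -v podman >/dev/null 2>&1; then
  podman stop -a || true            # stop all containers first
  rsync -aX "$OLD/" "$NEW/"         # copy everything, preserving xattrs
  # Label the new path as equivalent to the default storage path,
  # then apply the labels so podman works under enforcing mode.
  semanage fcontext -a -e "$OLD" "$NEW"
  restorecon -R "$NEW"
fi
```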
Check the running configuration with:
[root@lsrv1 mystorage]# podman info
host:
  BuildahVersion: 1.12.0-dev
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.8-1.el7.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.8, commit: f85c8b1ce77b73bcd48b2d802396321217008762'
  Distribution:
    distribution: '"centos"'
    version: "7"
  MemFree: 191963136
  MemTotal: 16563531776
  OCIRuntime:
    name: runc
    package: runc-1.0.0-67.rc10.el7_8.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 7857680384
  SwapTotal: 8581541888
  arch: amd64
  cpus: 8
  eventlogger: journald
  hostname: lsrv1
  kernel: 3.10.0-1062.9.1.el7.x86_64
  os: linux
  rootless: false
  uptime: 607h 10m 53.36s (Approximately 25.29 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - registry.fedoraproject.org
  - registry.centos.org
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions: {}
  GraphRoot: /mnt/mystorage/virtual/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 0
  RunRoot: /var/run/containers/storage
  VolumePath: /mnt/mystorage/virtual/storage/volumes
podman – Error adding network: failed to allocate for range 0: 10.88.0.46 has been allocated after server reboot
We’ve just stumbled on the following error with one of our podman CentOS 8 servers after restart:
[root@srv ~]# podman start mysql-slave
ERRO[0000] Error adding network: failed to allocate for range 0: 10.88.0.46 has been allocated to c97823be46832ddebbce29f3f51e3091620188710cb7ace246e173a7a981baed, duplicate allocation is not allowed
ERRO[0000] Error while adding pod to CNI network "podman": failed to allocate for range 0: 10.88.0.46 has been allocated to c97823be46832ddebbce29f3f51e3091620188710cb7ace246e173a7a981baed, duplicate allocation is not allowed
Error: unable to start container "mysql-slave": error configuring network namespace for container c97823be46832ddebbce29f3f51e3091620188710cb7ace246e173a7a981baed: failed to allocate for range 0: 10.88.0.46 has been allocated to c97823be46832ddebbce29f3f51e3091620188710cb7ace246e173a7a981baed, duplicate allocation is not allowed
Apparently, something went wrong, because the two containers were fine before the restart; they had been stopped, started, and restarted multiple times.
The solution is to remove IP-named files in /var/lib/cni/networks/podman and start the podman containers again.
It resembles the bug https://github.com/containers/libpod/issues/3759, which should have already been fixed in the newer minor CentOS 8 releases.
The interesting part is that the container we are trying to start, mysql-slave, has ID c97823be46832ddebbce29f3f51e3091620188710cb7ace246e173a7a981baed, but podman reports it cannot allocate the IP because it has already been allocated to a container with the same ID. That's the problem:
The IP-named files in /var/lib/cni/networks/podman were not removed when the podman container had stopped.
Typically, when a podman container is stopped, the stop process should remove its files in /var/lib/cni/networks/podman. Until this is fixed, you may want to stop the podman containers manually before restarting the CentOS 8 server.
[root@srv ~]# cd /var/lib/cni/networks/podman
[root@srv podman]# ls -altr
total 24
-rwxr-x---. 1 root root    0  3 Dec 0,43 lock
drwxr-xr-x. 3 root root 4096  3 Dec 0,43 ..
-rw-r--r--. 1 root root   64  9 Dec 18,34 10.88.0.46
-rw-r--r--. 1 root root   64 16 Dec 12,01 10.88.0.47
-rw-r--r--. 1 root root   10  1 Mar 9,28 last_reserved_ip.0
-rw-r--r--. 1 root root   70  1 Mar 9,28 10.88.0.49
drwxr-xr-x. 2 root root 4096  1 Mar 9,28 .
[root@srv podman]# rm 10.88.0.46
rm: remove regular file '10.88.0.46'? y
[root@srv podman]# rm 10.88.0.47
rm: remove regular file '10.88.0.47'? y
[root@srv podman]# podman start mysql-slave
mysql-slave
[root@srv podman]# podman ps
CONTAINER ID  IMAGE                            COMMAND               CREATED       STATUS            PORTS  NAMES
c97823be4683  localhost/centos-mysql-5.6:0.9   /entrypoint.sh my...  2 months ago  Up 2 minutes ago         mysql-slave
e96134b31894  docker.io/example/client:latest  start-boinc.sh        2 months ago  Up 6 minutes ago         example-client
[root@srv podman]# ls -altr
total 20
-rwxr-x---. 1 root root    0  3 Dec 0,43 lock
drwxr-xr-x. 3 root root 4096  3 Dec 0,43 ..
-rw-r--r--. 1 root root   70  1 Mar 9,28 10.88.0.49
-rw-r--r--. 1 root root   10  1 Mar 9,32 last_reserved_ip.0
-rw-r--r--. 1 root root   70  1 Mar 9,32 10.88.0.50
drwxr-xr-x. 2 root root 4096  1 Mar 9,32 .
[root@srv podman]#
We’ve deleted the old IPs (old by date!) 10.88.0.46 and 10.88.0.47 and the mysql-slave container started successfully.
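The manual cleanup above can also be scripted so that only stale files are removed; the host-local IPAM plugin records the owning container ID inside each IP-named file, so it can be checked against the existing containers:

```shell
NETDIR=/var/lib/cni/networks/podman

if command -v podman >/dev/null 2>&1 && [ -d "$NETDIR" ]; then
  for f in "$NETDIR"/*; do
    case "${f##*/}" in
      lock|last_reserved_ip.*) continue ;;  # keep the bookkeeping files
    esac
    id=$(head -n1 "$f")                     # first line: owning container ID
    # Delete the reservation only if no such container exists any more.
    podman container exists "$id" || rm -f "$f"
  done
fi
```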
firewalld and podman (or docker) – no internet in the container and could not resolve host
If you happen to use CentOS 8, you have already discovered that Red Hat (i.e. CentOS) switched to podman, a docker-compatible container engine. The following fix might also help someone who does not use CentOS 8 or podman: for everyday use, the podman and docker commands are practically identical.
Creating and starting a container is easy, in most cases one command only, but you may stumble on an error where your container cannot resolve host names or cannot connect to a service on an IP even though it can ping that IP!
The service in the container may live a happy life without Internet access, serving only the mapped ports from the outside world. Still, it may need Internet access at some point, say, when an update should be performed.
Here is how to fix podman (docker) missing the Internet access in the container:
- No ping to the outside world. Chances are you are missing IP forwarding:
sysctl -w net.ipv4.ip_forward=1
And do not forget to make it permanent by adding the “net.ipv4.ip_forward=1” to /etc/sysctl.conf (or a file “.conf” in /etc/sysctl.d/).
- Ping from the container to outside IPs works, but no connection to any service is possible! Probably NAT is not enabled in your podman/docker configuration. With firewalld, at least, you must enable the masquerade option of the public zone:
firewall-cmd --zone=public --add-masquerade
firewall-cmd --permanent --zone=public --add-masquerade
The second command, with “--permanent”, makes the option persist across reboots.
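The two fixes above can be sketched together, persisting IP forwarding and verifying the masquerade state; the sysctl file name is arbitrary:

```shell
ZONE=public
CONF=/etc/sysctl.d/90-ip-forward.conf

# Persist IP forwarding across reboots (needs root).
if [ "$(id -u)" -eq 0 ] && [ -w /proc/sys/net/ipv4/ip_forward ]; then
  sysctl -w net.ipv4.ip_forward=1
  echo 'net.ipv4.ip_forward=1' > "$CONF"
fi

# Verify masquerading in the public zone (needs firewalld).
if command -v firewall-cmd >/dev/null 2>&1; then
  firewall-cmd --zone="$ZONE" --query-masquerade || echo "masquerade is OFF in zone $ZONE"
fi
```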
The error – Could not resolve host (Name or service not known) despite having servers in /etc/resolv.conf and ping to them!
One may think that having nameserver IPs in /etc/resolv.conf, and ping to them from the container, should give the container access to the Internet. But the following error occurs:
[root@srv /]# yum install telnet
Loaded plugins: fastestmirror, ovl
Determining fastest mirrors
 * base: artfiles.org
 * extras: centos.mirror.net-d-sign.de
 * updates: centos.bio.lmu.de
http://mirror.fra10.de.leaseweb.net/centos/7.7.1908/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: mirror.fra10.de.leaseweb.net; Unknown error"
Trying other mirror.
http://artfiles.org/centos.org/7.7.1908/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: artfiles.org; Unknown error"
Trying other mirror.
^C
Exiting on user cancel
[root@srv /]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=5.05 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=5.06 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 5.050/5.055/5.061/0.071 ms
[root@srv ~]# cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
[root@srv /]# ping google.com
ping: google.com: Name or service not known
The error 2 – Can’t connect to a service despite having ping to its IP!
[root@srv /]# ping 2.2.2.2
PING 2.2.2.2 (2.2.2.2) 56(84) bytes of data.
64 bytes from 2.2.2.2: icmp_seq=1 ttl=56 time=9.15 ms
64 bytes from 2.2.2.2: icmp_seq=2 ttl=56 time=9.16 ms
^C
[root@srv2 /]# mysql -h2.2.2.2 -uroot -p
Enter password:
ERROR 2003 (HY000): Can't connect to MySQL server on '2.2.2.2' (113)
[root@srv2 /]#
Despite having ping to 2.2.2.2, and despite the firewall on 2.2.2.2 allowing outside connections, the container could not connect to the MySQL server there. Testing other services like HTTP, HTTPS, FTP and so on resulted in “unable to connect”, too, simply because NAT (aka masquerade) is not enabled in the firewall.