edit mysql options in docker (or docker-compose) mysql

Modifying the default options of the docker (podman) MySQL server is essential. The default MySQL options are too conservative, and even for simple automation tests they can be limiting.
For example, changing only one or two of the default InnoDB configuration options may make the SQL queries, and the automation tests that depend on them, several times faster.

Here are three simple ways to modify the (default or current) MySQL my.cnf configuration options:

  • Command-line arguments. All MySQL configuration options can be overridden by passing them on the command line of the mysqld binary. The format is:
    --variable-name=value
    

    and the variable names could be obtained by

    mysqld --verbose --help
    

    and for the live configuration options:

    mysqladmin variables
    
  • Options in an additional configuration file, which will be included by the main configuration. The options in /etc/mysql/conf.d/config-file.cnf take precedence.
  • Replacing the default my.cnf configuration file /etc/mysql/my.cnf.

Check out also the official page – https://hub.docker.com/_/mysql.
Under CentOS 8, docker is replaced by podman – just replace docker with podman in all of the commands below.

OPTION 1) Command-line arguments.

This is the simplest way of modifying the default my.cnf (the one that comes with the docker image, i.e. the one currently in the container). It is fast and easy to use and change, at the cost of a somewhat longer command line. As mentioned above, all MySQL options can be changed with a command-line argument to the mysqld binary. For example:

mysqld --innodb_buffer_pool_size=1024M

It will start the MySQL server with the variable innodb_buffer_pool_size set to 1G. Translated to docker (for multiple options, just append them at the end of the command):

  • docker run

    root@srv ~ # docker run --name my-mysql -v /var/lib/mysql:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=111111 \
    -d mysql:8 \
    --innodb_buffer_pool_size=1024M \
    --innodb_read_io_threads=4 \
    --innodb_flush_log_at_trx_commit=2 \
    --innodb_flush_method=O_DIRECT
    1bb7f415ab03b8bfd76d1cf268454e3c519c52dc383b1eb85024e506f1d04dea
    root@srv ~ # docker exec -it my-mysql mysqladmin -p111111 variables|grep innodb_buffer_pool_size
    | innodb_buffer_pool_size                                  | 1073741824
    
  • docker-compose:

    # Docker MySQL arguments example
    version: '3.1'
    
    services:
    
      db:
        image: mysql:8
        command: --default-authentication-plugin=mysql_native_password --innodb_buffer_pool_size=1024M --innodb_read_io_threads=4 --innodb_flush_log_at_trx_commit=2 --innodb_flush_method=O_DIRECT
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: 111111
        volumes:
         - /var/lib/mysql_data:/var/lib/mysql
        ports:
          - "3306:3306"
    

    Here is how to run it (the above text file should be named docker-compose.yml and the file should be in the current directory when executing the command below):

    root@srv ~ # docker-compose up
    Creating network "docker-compose-mysql_default" with the default driver
    Creating my-mysql ... done
    Attaching to my-mysql
    my-mysql | 2020-06-16 09:45:35+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.20-1debian10 started.
    my-mysql | 2020-06-16 09:45:35+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
    my-mysql | 2020-06-16 09:45:35+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.20-1debian10 started.
    my-mysql | 2020-06-16T09:45:36.293747Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
    my-mysql | 2020-06-16T09:45:36.293906Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.20) starting as process 1
    my-mysql | 2020-06-16T09:45:36.307654Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
    my-mysql | 2020-06-16T09:45:36.942424Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
    my-mysql | 2020-06-16T09:45:37.136537Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/var/run/mysqld/mysqlx.sock' bind-address: '::' port: 33060
    my-mysql | 2020-06-16T09:45:37.279733Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
    my-mysql | 2020-06-16T09:45:37.306693Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
    my-mysql | 2020-06-16T09:45:37.353358Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.20'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server - GPL.
    

    And check the option:

    root@srv ~ # docker exec -it my-mysql mysqladmin -p111111 variables|grep innodb_buffer_pool_size
    | innodb_buffer_pool_size                                  | 1073741824
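
To verify all four changed options at once, a check along these lines may be used (assuming the container name my-mysql and the root password from the examples above):

docker exec -it my-mysql mysqladmin -p111111 variables | grep -E 'innodb_(buffer_pool_size|read_io_threads|flush_log_at_trx_commit|flush_method)'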
    

OPTION 2) Options in an additional configuration file.

Create a MySQL option file with name config-file.cnf:

[mysqld]
innodb_buffer_pool_size=1024M
innodb_read_io_threads=4
innodb_flush_log_at_trx_commit=2
innodb_flush_method=O_DIRECT
  • docker run:
    The source path must be an absolute path!

    docker run --name my-mysql \
    -v /var/lib/mysql_data:/var/lib/mysql \
    -v /etc/mysql/docker-instances/config-file.cnf:/etc/mysql/conf.d/config-file.cnf \
    -e MYSQL_ROOT_PASSWORD=111111 \
    -d mysql:8
    
  • docker-compose:
    The source path does not have to be absolute – it may be relative to the current directory.

    # Docker MySQL arguments example
    version: '3.1'
    
    services:
    
      db:
        container_name: my-mysql
        image: mysql:8
        command: --default-authentication-plugin=mysql_native_password
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: 111111
        volumes:
         - /var/lib/mysql_data:/var/lib/mysql
         - ./config-file.cnf:/etc/mysql/conf.d/config-file.cnf
        ports:
          - "3306:3306"
    

OPTION 3) Replacing the default my.cnf configuration file.

Add the modified options to a my.cnf template file and map it into the container as /etc/mysql/my.cnf. When overwriting the main MySQL option file my.cnf, you may also map the whole /etc/mysql directory (just replace /etc/mysql/my.cnf with /etc/mysql below). The source file (or directory) on the host may be any file (or directory); it does not have to be /etc/mysql/my.cnf (or /etc/mysql).

  • docker run:
    The source path must be an absolute path.

    docker run --name my-mysql \
    -v /var/lib/mysql_data:/var/lib/mysql \
    -v /etc/mysql/my.cnf:/etc/mysql/my.cnf \
    -e MYSQL_ROOT_PASSWORD=111111 \
    --publish 3306:3306 \
    -d mysql:8
    

    Note: here a new option “--publish 3306:3306” is included to show how to map the ports out of the container, like all the docker-compose examples here.

  • docker-compose:
    The source path does not have to be absolute – it may be relative to the current directory.

    # Use root/example as user/password credentials
    version: '3.1'
    
    services:
    
      db:
        container_name: my-mysql
        image: mysql:8
        command: --default-authentication-plugin=mysql_native_password
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: 111111
        volumes:
         - /var/lib/mysql_data:/var/lib/mysql
         - ./mysql/my.cnf:/etc/mysql/my.cnf
        ports:
          - "3306:3306"
    

Chromium browser in Ubuntu 20.04 LTS without snap to use in docker container

The Ubuntu team has its own vision for the snap (https://snapcraft.io/) service, and that is why they have moved the really big and difficult-to-maintain Chromium browser package into a snap package. Unfortunately, snap has many issues with docker containers and, in short, it is really difficult to run snap inside a docker container. A user may simply not want to mess with snap packages (even though, according to the Ubuntu team, this is the future), or, like most developers, may just need a browser for tests executed in a container.
Whether you are a developer or an ordinary user, this article is for you if you want the Chromium browser installed without the snap service under Ubuntu 20.04 LTS!
There are multiple options that could end up with a Chromium browser installed on the system without the snap service:

  1. Using a Debian package and the Debian repository. The problem here is that using the Ubuntu and Debian repositories simultaneously on one machine is not a good idea, despite the hack of pinning the Debian packages with a low priority – https://askubuntu.com/questions/1204571/chromium-without-snap/1206153#1206153
  2. Using Google Chrome – https://www.google.com/chrome/?platform=linux. It is just a single Debian package, which provides a Chromium-like browser, and all dependencies requiring the Chromium browser package are fulfilled.
  3. Using the Chromium team dev or beta PPA (https://launchpad.net/~chromium-team) with packages for the nearest Ubuntu version, while Ubuntu packages for Focal (Ubuntu 20.04 LTS) are still missing.
  4. more options available?

This article will show how to use the Ubuntu 18 (Bionic) Chromium browser package from the Chromium team beta PPA under Ubuntu 20.04 LTS (Focal). A Bionic package from the very same repository of the Ubuntu Chromium team may be used, too.

All dependencies will be downloaded from Ubuntu 20.04, and just several chromium-* packages will be downloaded from the Bionic repository of the Chromium team PPA. The chances of breaking something are really small compared to option 1 above, which uses the Debian packages and repositories. Hopefully, soon we are going to have Focal (Ubuntu 20.04 LTS) packages in the Ubuntu Chromium team PPA!

Dockerfile

An example of a Dockerfile installing Chromium (and python3 selenium for automating web browser interactions)

RUN apt-key adv --fetch-keys "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111" \
&& echo 'deb http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic main ' >> /etc/apt/sources.list.d/chromium-team-beta.list \
&& apt update
RUN export DEBIAN_FRONTEND=noninteractive \
&& export DEBCONF_NONINTERACTIVE_SEEN=true \
&& apt-get -y install chromium-browser
RUN apt-get -y install python3-selenium

The first command adds the repository key and the repository to the Ubuntu source lists. Note we are adding “bionic main”, not “focal main”.
Of all the dependencies of the Bionic chromium-browser, only three packages are pulled from the Bionic repository and all the others come from Ubuntu 20.04 (Focal):

.....
Get:1 http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic/main amd64 chromium-codecs-ffmpeg-extra amd64 84.0.4147.38-0ubuntu0.18.04.1 [1174 kB]
.....
Get:5 http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic/main amd64 chromium-browser amd64 84.0.4147.38-0ubuntu0.18.04.1 [67.8 MB]
.....
Get:187 http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic/main amd64 chromium-browser-l10n all 84.0.4147.38-0ubuntu0.18.04.1 [3429 kB]
.....

Here is the whole Dockerfile sample file:

#
#   Docker file for the image "chromium browser without snap"
#
FROM ubuntu:20.04
MAINTAINER myuser@example.com

#chromium browser
#original PPA repository, use if our local fails
RUN echo "tzdata tzdata/Areas select Etc" | debconf-set-selections && echo "tzdata tzdata/Zones/Etc select UTC" | debconf-set-selections
RUN export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NONINTERACTIVE_SEEN=true
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install gnupg2 apt-utils wget
#RUN wget -O /root/chromium-team-beta.pub "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111" && apt-key add /root/chromium-team-beta.pub
RUN apt-key adv --fetch-keys "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111" && echo 'deb http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic main ' >> /etc/apt/sources.list.d/chromium-team-beta.list && apt update
RUN export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NONINTERACTIVE_SEEN=true && apt-get -y install chromium-browser
RUN apt-get -y install python3-selenium
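
Assuming the Dockerfile above is saved as Dockerfile in the current directory, a quick build-and-verify could look like this (the image name chromium-nosnap is arbitrary):

docker build -t chromium-nosnap .
docker run --rm chromium-nosnap chromium-browser --version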

Desktop install

The desktop installation is almost the same as the Dockerfile above. Just execute the following lines:
Keep on reading!

firewalld and podman (or docker) – no internet in the container and could not resolve host

If you happen to use CentOS 8, you have already discovered that Red Hat (i.e. CentOS) switched to podman, a drop-in replacement for docker. Still, the following fix may help even someone who does not use CentOS 8 or podman – for now, podman and docker are 99.99% the same.
Creating and starting a container is easy, in most cases just one command, but you may stumble on an error where your container cannot resolve host names or cannot connect to an IP even though there is a ping to that IP!
The service in the container may live a happy life without Internet access, using just the ports mapped from the outside world. Still, it may happen to need Internet access – let’s say an update should be performed.
Here is how to fix podman (docker) missing the Internet access in the container:

  • No ping to the outside world. Chances are you are missing:
    sysctl -w net.ipv4.ip_forward=1
    

    And do not forget to make it permanent by adding “net.ipv4.ip_forward=1” to /etc/sysctl.conf (or to a “.conf” file in /etc/sysctl.d/) – a combined sketch follows this list.

  • Ping from the container to an outside IP is available, but no connection to any service works! Probably NAT is not enabled in your podman (docker) setup. With firewalld, at the very least, you must enable the masquerade option of the public zone:
    firewall-cmd --zone=public --add-masquerade
    firewall-cmd --permanent --zone=public --add-masquerade
    

    The second command, with “--permanent”, makes the option persist across reboots.
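
Here is a combined sketch of both fixes (the sysctl.d file name is just an example):

# make IP forwarding permanent and apply it
echo 'net.ipv4.ip_forward=1' > /etc/sysctl.d/99-ip-forward.conf
sysctl --system
# enable NAT (masquerade) in firewalld for the current run and permanently, then verify
firewall-cmd --zone=public --add-masquerade
firewall-cmd --permanent --zone=public --add-masquerade
firewall-cmd --zone=public --query-masquerade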

The error – Could not resolve host (Name or service not known) despite having servers in /etc/resolv.conf and ping to them!

One may think that having name server IPs in /etc/resolv.conf, and ping to them from inside the container, should give the container working name resolution. But the following error occurs:

[root@srv /]# yum install telnet
Loaded plugins: fastestmirror, ovl
Determining fastest mirrors
 * base: artfiles.org
 * extras: centos.mirror.net-d-sign.de
 * updates: centos.bio.lmu.de
http://mirror.fra10.de.leaseweb.net/centos/7.7.1908/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: mirror.fra10.de.leaseweb.net; Unknown error"
Trying other mirror.
http://artfiles.org/centos.org/7.7.1908/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: artfiles.org; Unknown error"
Trying other mirror.
^C

Exiting on user cancel
[root@srv /]# ^C
[root@srv /]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=5.05 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=5.06 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 5.050/5.055/5.061/0.071 ms
[root@srv ~]# cat /etc/resolv.conf 
nameserver 8.8.8.8
nameserver 8.8.4.4
[root@srv /]# ping google.com
ping: google.com: Name or service not known

Error 2 – Can’t connect to a service despite having ping to the IP!

[root@srv /]# ping 2.2.2.2
PING 2.2.2.2 (2.2.2.2) 56(84) bytes of data.
64 bytes from 2.2.2.2: icmp_seq=1 ttl=56 time=9.15 ms
64 bytes from 2.2.2.2: icmp_seq=2 ttl=56 time=9.16 ms
^C
[root@srv2 /]# mysql -h2.2.2.2 -uroot -p
Enter password: 
ERROR 2003 (HY000): Can't connect to MySQL server on '2.2.2.2' (113)
[root@srv2 /]#

Despite having ping to the MySQL server on 2.2.2.2, and despite the firewall on 2.2.2.2 allowing outside connections, the container could not connect to it. Testing other services like HTTP, HTTPS, FTP and so on resulted in “unable to connect”, too – simply because NAT (aka masquerade) is not enabled in the firewall.

docker mysql – Fatal error: Please read “Security” section of the manual to find out how to run mysqld as root!

You pull the official MySQL image from the docker registry https://hub.docker.com/r/mysql/mysql-server to start a MySQL instance with your own configuration file (and existing MySQL binary data files), you add the “--volume” options for the configuration directory (or file) and the MySQL data files, and you stumble on the error:

2019-12-03 01:13:38 0 [Note] mysqld (mysqld 5.6.46-log) starting as process 67 ...
2019-12-03 01:13:38 67 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!

2019-12-03 01:13:38 67 [ERROR] Aborting

2019-12-03 01:13:38 67 [Note] Binlog end
2019-12-03 01:13:38 67 [Note] mysqld: Shutdown complete

Apparently, the server is not configured to run properly as the root user, and you do not want to run it as root anyway – but why does it keep insisting on running as root?

Because the entrypoint script executes just “mysqld” as the command, which expects to find a “user” option in the “[mysqld]” section of your my.cnf configuration file!

Do not miss the user option in my.cnf! This is how the MySQL server will run under the “mysql” user and not root:

user=mysql

A typical error, because it is not so common to include in the my.cnf configuration file the username the mysqld process should run as. If you let the official docker MySQL image create your configuration file, you will not encounter the above error, but if you use an existing my.cnf (probably an old one from a non-virtualized environment), make sure it includes the username the mysqld process should run as.
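
A quick check from the host that the mapped my.cnf really contains the option might look like this (a sketch; the path is the one used in the docker run command below):

grep '^user=' /mnt/storage/docker/mysql-slave/etc/my.cnf || echo 'user option is missing!'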

Here is our command to execute the container:

docker run --privileged -d -v /mnt/storage/docker/mysql-slave/files:/var/lib/mysql -v /mnt/storage/docker/mysql-slave/etc/my.cnf:/etc/my.cnf mysql/mysql-server:5.6

Build docker image with custom Dockerfile name – docker build requires exactly 1 argument

Docker uses the Dockerfile to build docker images, but what if you want to change the name and (or) the path of this file?
By default, the “docker build” command uses a file named Dockerfile in the directory where you execute “docker build”. There is an option to change the path and name of this special file:

  -f, --file string             Name of the Dockerfile (Default is 'PATH/Dockerfile')

The “-f” value may include a path and a file name, but it is still mandatory to specify the build path (the context, in docker terminology) at the end of “docker build” – usually the current directory, by adding “.” (the dot at the end of the command).

So if you want to build with a docker file mydockerfile in the current directory you must execute:

docker build -f mydockerfile .

If your file is in a sub-directory execute:

docker build -f subdirectory/mydockerfile .

The command will create a docker image in your local repository. Here is the output of the first command:

root@srv:~/docker# docker build -f mydockerfile .
Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM ubuntu:bionic-20191029
bionic-20191029: Pulling from library/ubuntu
7ddbc47eeb70: Pull complete 
c1bbdc448b72: Pull complete 
8c3b70e39044: Pull complete 
45d437916d57: Pull complete 
Digest: sha256:6e9f67fa63b0323e9a1e587fd71c561ba48a034504fb804fd26fd8800039835d
Status: Downloaded newer image for ubuntu:bionic-20191029
 ---> 775349758637
Step 2/3 : MAINTAINER test@example.com
 ---> Running in 5fa42bca749c
Removing intermediate container 5fa42bca749c
 ---> 0a1ffa1728f4
Step 3/3 : RUN apt-get update && apt-get upgrade -y && apt-get install -y git wget
 ---> Running in 2e35040f247c
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
.....
.....
Processing triggers for ca-certificates (20180409) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Removing intermediate container 2e35040f247c
 ---> 2382809739a4
Successfully built 2382809739a4

Here is the image:

root@srv:~# docker images
REPOSITORY                            TAG                 IMAGE ID            CREATED              SIZE
<none>                                <none>              2382809739a4        About a minute ago   186MB

Build command with custom name and registry URL and TAG

root@srv:~# docker build -t gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base:v0.1 -f mydockerfile .
Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM ubuntu:bionic-20191029
 ---> 775349758637
Step 2/3 : MAINTAINER test@example.com
 ---> Using cache
 ---> 0a1ffa1728f4
Step 3/3 : RUN apt-get update && apt-get upgrade -y && apt-get install -y git wget
 ---> Using cache
 ---> 2382809739a4
Successfully built 2382809739a4
Successfully tagged gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base:v0.1
root@srv:~# docker push gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base:v0.1
The push refers to repository [gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base]
7cebba4bf6c3: Pushed 
e0b3afb09dc3: Pushed 
6c01b5a53aac: Pushed 
2c6ac8e5063e: Pushed 
cc967c529ced: Pushed 
v0.1: digest: sha256:acf42078bf46e320c402f09c6417a3dae8992ab4f4f685265486063daf30cb13 size: 1364

The registry URL is “gitlab.ahelpme.com:4567”, the project path is “/root/ubuntu-project/”, and the name of the image is “ubuntu18-manual-base” with tag “v0.1”. The build command uses the cache from our first build example above (because the docker file is the same).

Typical errors with “-f”

Two errors you may encounter when trying “-f” to change the default Dockerfile name:

$ docker build -t gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base:v0.1 -f mydockerfile subdirectory/
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /builds/dev/docker-containers/mydockerfile: no such file or directory

$ docker build -t gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base:v0.1 -f subdirectory/mydockerfile
"docker build" requires exactly 1 argument.
See 'docker build --help'.

Usage:  docker build [OPTIONS] PATH | URL | -

At first, you might think that giving “-f” the path and file name should be enough, but the errors above appear!
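
The working form keeps the full path in “-f” and always gives the context directory at the end, for example:

docker build -t gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base:v0.1 -f subdirectory/mydockerfile .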

Our example Dockerfile

This is our simple example docker file:

FROM ubuntu:bionic-20191029
MAINTAINER test@example.com

RUN apt-get update && apt-get upgrade -y && apt-get install -y git wget

We are using the official docker image from Ubuntu. Always use official docker images!

Docker change the port mapping of an existing container

Unfortunately, it is not possible to change the port mapping (the ports forwarded from the host to the container) of an existing RUNNING container!

Not only that, but you cannot change the mapped (forwarded) ports even when the container is stopped, so think twice when you run or start a container from the image you’ve chosen. Of course, you can always use docker’s commit command, which creates a new image from your container (including the changes made since the original image), and then you can run the new image with new mapped ports!

Still, there is a solution that does not involve creating new docker images and containers – just manually edit a configuration file while the Docker service is stopped.

So if you have several docker containers running, stop all of them, then stop the Docker service itself and edit the “hostconfig.json” file! Here is the whole procedure:

  1. Stop the container.
  2. Stop the Docker container service.
  3. Edit the container’s file – hostconfig.json (usually in /var/lib/docker/containers/[ID]/hostconfig.json) and add or replace ports.
  4. Start the Docker container service.
  5. Start the docker container.

Real World Example

myuser@srv:~# sudo docker ps
CONTAINER ID        IMAGE                         COMMAND                  CREATED             STATUS                  PORTS                                                                                    NAMES
a9e21e92e2dd        gitlab/gitlab-runner:latest   "/usr/bin/dumb-init …"   2 days ago          Up 33 hours                                                                                                      gitlab-runner
5d025e7f93a4        gitlab/gitlab-ce:latest       "/assets/wrapper"        3 days ago          Up 34 hours (healthy)   0.0.0.0:80->80/tcp, 0.0.0.0:4567->4567/tcp, 0.0.0.0:1022->22/tcp   gitlab
myuser@srv:~# sudo docker ps
CONTAINER ID        IMAGE                         COMMAND                  CREATED             STATUS                  PORTS                                                                                    NAMES
a9e21e92e2dd        gitlab/gitlab-runner:latest   "/usr/bin/dumb-init …"   2 days ago          Up 33 hours                                                                                                      gitlab-runner
5d025e7f93a4        gitlab/gitlab-ce:latest       "/assets/wrapper"        3 days ago          Up 34 hours (healthy)   0.0.0.0:80->80/tcp, 0.0.0.0:4567->4567/tcp, 0.0.0.0:1022->22/tcp   gitlab
myuser@srv:~# sudo docker stop gitlab-runner
gitlab-runner
myuser@srv:~# sudo docker stop gitlab
gitlab
myuser@srv:~# sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
myuser@srv:~# systemctl stop docker
myuser@srv:~# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Thu 2019-11-14 21:54:57 UTC; 5s ago
     Docs: https://docs.docker.com
  Process: 2340 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=0/SUCCESS)
 Main PID: 2340 (code=exited, status=0/SUCCESS)

Nov 14 21:54:33 srv dockerd[2340]: time="2019-11-14T21:54:33.308531424Z" level=warning msg="a9e21e92e2dd297a68f68441353fc3bda39d0bb5564b60d402ae651fa80f5c72 cleanu
Nov 14 21:54:46 srv dockerd[2340]: time="2019-11-14T21:54:46.394643530Z" level=info msg="Container 5d025e7f93a45a50dbbaa87c55d7cdbbf6515bbe1d45ff599074f1cdcf320a0c
Nov 14 21:54:46 srv dockerd[2340]: time="2019-11-14T21:54:46.757171067Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete ty
Nov 14 21:54:47 srv dockerd[2340]: time="2019-11-14T21:54:47.031709355Z" level=warning msg="5d025e7f93a45a50dbbaa87c55d7cdbbf6515bbe1d45ff599074f1cdcf320a0c cleanu
Nov 14 21:54:57 srv systemd[1]: Stopping Docker Application Container Engine...
Nov 14 21:54:57 srv dockerd[2340]: time="2019-11-14T21:54:57.439296168Z" level=info msg="Processing signal 'terminated'"
Nov 14 21:54:57 srv dockerd[2340]: time="2019-11-14T21:54:57.447803201Z" level=info msg="Daemon shutdown complete"
Nov 14 21:54:57 srv dockerd[2340]: time="2019-11-14T21:54:57.449422219Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled
Nov 14 21:54:57 srv dockerd[2340]: time="2019-11-14T21:54:57.449576789Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled
Nov 14 21:54:57 srv systemd[1]: Stopped Docker Application Container Engine.
myuser@srv:~# cat /var/lib/docker/containers/5d025e7f93a45a50dbbaa87c55d7cdbbf6515bbe1d45ff599074f1cdcf320a0c/hostconfig.json 
{"Binds":["/srv/gitlab/config:/etc/gitlab","/srv/gitlab/logs:/var/log/gitlab","/srv/gitlab/data:/var/opt/gitlab"],"ContainerIDFile":"","LogConfig":{"Type":"json-file","Config":{}},"NetworkMode":"default","PortBindings":{"22/tcp":[{"HostIp":"","HostPort":"1022"}],"4567/tcp":[{"HostIp":"","HostPort":"4567"}],"80/tcp":[{"HostIp":"","HostPort":"80"}]},"RestartPolicy":{"Name":"always","MaximumRetryCount":0},"AutoRemove":false,"VolumeDriver":"","VolumesFrom":null,"CapAdd":null,"CapDrop":null,"Capabilities":null,"Dns":[],"DnsOptions":[],"DnsSearch":[],"ExtraHosts":null,"GroupAdd":null,"IpcMode":"private","Cgroup":"","Links":null,"OomScoreAdj":0,"PidMode":"","Privileged":false,"PublishAllPorts":false,"ReadonlyRootfs":false,"SecurityOpt":null,"UTSMode":"","UsernsMode":"","ShmSize":67108864,"Runtime":"runc","ConsoleSize":[0,0],"Isolation":"","CpuShares":0,"Memory":0,"NanoCpus":0,"CgroupParent":"","BlkioWeight":0,"BlkioWeightDevice":[],"BlkioDeviceReadBps":null,"BlkioDeviceWriteBps":null,"BlkioDeviceReadIOps":null,"BlkioDeviceWriteIOps":null,"CpuPeriod":0,"CpuQuota":0,"CpuRealtimePeriod":0,"CpuRealtimeRuntime":0,"CpusetCpus":"","CpusetMems":"","Devices":[],"DeviceCgroupRules":null,"DeviceRequests":null,"KernelMemory":0,"KernelMemoryTCP":0,"MemoryReservation":0,"MemorySwap":0,"MemorySwappiness":null,"OomKillDisable":false,"PidsLimit":null,"Ulimits":null,"CpuCount":0,"CpuPercent":0,"IOMaximumIOps":0,"IOMaximumBandwidth":0,"MaskedPaths":["/proc/asound","/proc/acpi","/proc/kcore","/proc/keys","/proc/latency_stats","/proc/timer_list","/proc/timer_stats","/proc/sched_debug","/proc/scsi","/sys/firmware"],"ReadonlyPaths":["/proc/bus","/proc/fs","/proc/irq","/proc/sys","/proc/sysrq-trigger"]
myuser@srv:~# nano /var/lib/docker/containers/5d025e7f93a45a50dbbaa87c55d7cdbbf6515bbe1d45ff599074f1cdcf320a0c/hostconfig.json 
myuser@srv:~# systemctl start docker
myuser@srv:~# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-11-14 22:12:06 UTC; 2s ago
     Docs: https://docs.docker.com
 Main PID: 4693 (dockerd)
    Tasks: 54
   CGroup: /system.slice/docker.service
           ├─4693 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
           ├─4867 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 4567 -container-ip 172.17.0.3 -container-port 4567
           ├─4881 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 443 -container-ip 172.17.0.3 -container-port 443
           ├─4895 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1022 -container-ip 172.17.0.3 -container-port 22
           └─4907 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.17.0.3 -container-port 80

Nov 14 22:12:04 srv dockerd[4693]: time="2019-11-14T22:12:04.034007956Z" level=warning msg="Your kernel does not support swap memory limit"
Nov 14 22:12:04 srv dockerd[4693]: time="2019-11-14T22:12:04.034062799Z" level=warning msg="Your kernel does not support cgroup rt period"
Nov 14 22:12:04 srv dockerd[4693]: time="2019-11-14T22:12:04.034074070Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Nov 14 22:12:04 srv dockerd[4693]: time="2019-11-14T22:12:04.034361581Z" level=info msg="Loading containers: start."
Nov 14 22:12:04 srv dockerd[4693]: time="2019-11-14T22:12:04.344354207Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Dae
Nov 14 22:12:05 srv dockerd[4693]: time="2019-11-14T22:12:05.916782317Z" level=info msg="Loading containers: done."
Nov 14 22:12:05 srv dockerd[4693]: time="2019-11-14T22:12:05.988204406Z" level=info msg="Docker daemon" commit=9013bf583a graphdriver(s)=overlay2 version=19.03.4
Nov 14 22:12:05 srv dockerd[4693]: time="2019-11-14T22:12:05.988317448Z" level=info msg="Daemon has completed initialization"
Nov 14 22:12:06 srv dockerd[4693]: time="2019-11-14T22:12:06.010801856Z" level=info msg="API listen on /var/run/docker.sock"
Nov 14 22:12:06 srv systemd[1]: Started Docker Application Container Engine.
myuser@srv:~# sudo docker start gitlab-runner
gitlab-runner
myuser@srv:~# sudo docker start gitlab
gitlab
myuser@srv:~# sudo docker ps
CONTAINER ID        IMAGE                         COMMAND                  CREATED             STATUS                             PORTS                                                                                    NAMES
a9e21e92e2dd        gitlab/gitlab-runner:latest   "/usr/bin/dumb-init …"   2 days ago          Up 19 seconds                                                                                                               gitlab-runner
5d025e7f93a4        gitlab/gitlab-ce:latest       "/assets/wrapper"        3 days ago          Up 19 seconds (health: starting)   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:4567->4567/tcp, 0.0.0.0:1022->22/tcp   gitlab
myuser@srv:~# wget --no-check-certificate https://192.168.0.238/
--2019-11-14 22:13:30--  https://192.168.0.238/
Connecting to 192.168.0.238:443... connected.
    WARNING: certificate common name ‘gitlab.ahelpme.com’ doesn't match requested host name ‘192.168.0.238’.
HTTP request sent, awaiting response... 302 Found
Location: https://192.168.0.238/users/sign_in [following]
--2019-11-14 22:13:30--  https://192.168.0.238/users/sign_in
Reusing existing connection to 192.168.0.238:443.
HTTP request sent, awaiting response... 200 OK
Length: unspecified 
Saving to: ‘index.html’

index.html                                     [ <=>                                                                                     ]  12.41K  --.-KB/s    in 0s      

2019-11-14 22:13:31 (134 MB/s) - ‘index.html’ saved [12708]


Change the ports or add more ports in “PortBindings”. The syntax is pretty straightforward – just mind the commas, [] and {}.

"PortBindings":{"22/tcp":[{"HostIp":"","HostPort":"1022"}],"4567/tcp":[{"HostIp":"","HostPort":"4567"}],"80/tcp":[{"HostIp":"","HostPort":"80"}]}

Here we change the mapping from “host port 1022 to container port 22” to “host port 2222 to container port 22” by just replacing “1022” with “2222”:

"PortBindings":{"22/tcp":[{"HostIp":"","HostPort":"2222"}],"4567/tcp":[{"HostIp":"","HostPort":"4567"}],"80/tcp":[{"HostIp":"","HostPort":"80"}]}

In the second example, in addition to the 2222 change, we want to add another mapping – host port 443 to container port 443 (to open HTTPS) – so just add a new group using the above syntax:

"PortBindings":{"22/tcp":[{"HostIp":"","HostPort":"2222"}],"4567/tcp":[{"HostIp":"","HostPort":"4567"}],"80/tcp":[{"HostIp":"","HostPort":"80"}],"443/tcp":[{"HostIp":"","HostPort":"443"}]}

A note!

There is probably a deliberate idea behind not making it easy to add mapped ports, given that one of the main Docker goals is to isolate one service per container. It sounds strange to have a docker container for one service exposing a number of ports (or a single port) and later to need yet another port exposed – for another service in the same container? For that you should use a separate container, not the same one!
But more and more Docker containers are also used to deliver a fine-tuned environment for a whole platform, which provides multiple services in a single docker container. Let’s take an example – GitLab, which offers installation in a Docker container hosting more than 10 services in a single container!

docker and dind service (.gitlab-ci.yml) with self-signed certificate and x509: certificate signed by unknown authority

When using GitLab CI/CD for building docker images, you may stumble on the following error with the “docker:dind” (dind stands for docker-in-docker) image:

$ docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $REGISTRY_URL
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://gitlab.ahelpme.com:4567/v2/: x509: certificate signed by unknown authority
ERROR: Job failed: exit code 1

In our case, the “docker build” command needs a docker service to be running and the GitLab runner has to provide this docker service, so docker:dind is our best option! A self-signed certificate can be really difficult to use in such a big platform as GitLab, but whatever the reason to run the docker service inside a docker container, you may need to use a custom registry with a self-signed certificate!

There are two options to use self-signed certificates with docker:

  1. Add the self-signed certificate as “/etc/docker/certs.d/[custom_registry]/ca.crt” – custom_registry must include the port, for example “/etc/docker/certs.d/gitlab.example.com:4567/ca.crt” – and restart the docker service! This could be difficult when you use GitLab CI/CD and .gitlab-ci.yml (a sketch of this option follows the list).
  2. Add “--insecure-registry” to the docker configuration and restart. Apparently, it is easier than the first option when using GitLab CI/CD and .gitlab-ci.yml.
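
For reference, option 1 on a plain docker host would look roughly like this (ca.crt is your registry’s self-signed certificate; the registry domain is the one from the error above):

mkdir -p /etc/docker/certs.d/gitlab.ahelpme.com:4567
cp ca.crt /etc/docker/certs.d/gitlab.ahelpme.com:4567/ca.crt
systemctl restart docker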

The solution

In the GitLab CI/CD file .gitlab-ci.yml, add two options (entrypoint, command) to the service that provides the “dind” (docker-in-docker). The start of your .gitlab-ci.yml should look something like:

image: docker:18.09.7
services:
  - name: docker:18.09.7-dind
    entrypoint: ["dockerd-entrypoint.sh"]
    command: ["--insecure-registry", "gitlab.ahelpme.com:4567"]

Of course, replace the “gitlab.ahelpme.com:4567” with your custom docker registry domain.

Real world example – failed job in gitlab-runner

Keep on reading!

Install gitlab-ce (community edition) in docker container with HTTPS and docker registry

This article is a howto for installing the official gitlab-ce (GitLab Community Edition) docker image. GitLab maintains a docker image in the Docker registry, and this is the best way to install GitLab.
In this article you are going to learn how:

  • to install the GitLab CE in docker
  • to enable HTTPS (SSL) web support to your GitLab
  • to enable the docker registry functionality of GitLab

To install the GitLab docker image in your Linux distribution, all you need is a working docker environment and a started docker daemon. As you know, installing software with docker allows you to keep your main system clean and to use a fine-tuned installation from the official developer (creator). As mentioned already, GitLab maintains an official GitLab image in the Docker Registry, so you may expect everything to work smoothly – even better than an installation on a clean Linux distribution like CentOS, Ubuntu and so on. In this article, we include the most important docker commands to control and configure the GitLab docker container; even if you are not familiar with Docker, they are simple enough to use and may make you prefer this method over a normal GitLab installation.

GitLab has integrated a Docker container registry – the GitLab Container Registry – and now with GitLab you can have a local Docker registry containing all of your projects’ docker images!

Just to note, the Docker Registry is the place where Docker (container) images are stored.
Using the GitLab Container Registry with CI/CD (continuous integration and continuous delivery), you can automatically create test, staging, development and production docker images.
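
As a quick preview, a typical run command (the volumes and ports match the GitLab container shown in the port-mapping example earlier on this page) looks roughly like:

docker run -d --name gitlab --restart always \
  -p 80:80 -p 443:443 -p 1022:22 -p 4567:4567 \
  -v /srv/gitlab/config:/etc/gitlab \
  -v /srv/gitlab/logs:/var/log/gitlab \
  -v /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest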
Keep on reading!

cobbler import – stderr file: No such file or directory

What a miss here in our docker Cobbler instance! When trying to import a new distro in Cobbler, the import finished with a failed task and multiple errors like:

running: /usr/bin/file /var/www/cobbler/ks_mirror/ubuntu-18.04.3-server-amd64-x86_64/dists/bionic/Release.gpg
received on stdout: 
received on stderr: /bin/sh: /usr/bin/file: No such file or directory

running: /usr/bin/file /var/www/cobbler/ks_mirror/ubuntu-18.04.3-server-amd64-x86_64/dists/bionic/main/binary-i386/Release
received on stdout: 
received on stderr: /bin/sh: /usr/bin/file: No such file or directory

The “file” command? Apparently, the Linux file command is missing – quite possible, because it is a docker container with a minimal installation of packages and dependencies.

Without the “file” command, the import will fail with “No signature matched” even if you have synced the latest signatures just before executing the import command.

Install the package “file” on your server.
CentOS 7/Fedora:

yum install -y file

Ubuntu

apt install -y file

Keep on reading!