gitlab in podman cannot create unix sockets in glusterfs because of SELinux

Installing gitlab-ee (and gitlab-ce) under CentOS 7 with SELinux enabled (i.e. enforcing mode) left the container looping endlessly, restarting the installation process! The podman logs of the gitlab container showed multiple errors about missing sockets. Here are some of the errors:
Missing postgresql unix socket in “/var/opt/gitlab/postgresql”:

Recipe: gitlab::database_migrations
  * bash[migrate gitlab-rails database] action run
    [execute] rake aborted!
              PG::ConnectionBad: could not connect to server: No such file or directory
                Is the server running locally and accepting
                connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
              /opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:53:in `block (3 levels) in <top (required)>'
              /opt/gitlab/embedded/bin/bundle:23:in `load'
              /opt/gitlab/embedded/bin/bundle:23:in `<main>'
              Tasks: TOP => gitlab:db:configure
              (See full trace by running task with --trace)
    
    
    Error executing action `run` on resource 'bash[migrate gitlab-rails database]'
.....
.....
Running handlers:
There was an error running gitlab-ctl reconfigure:

bash[migrate gitlab-rails database] (gitlab::database_migrations line 55) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of "bash"  "/tmp/chef-script20200915-35-lemic5" ----
STDOUT: rake aborted!
PG::ConnectionBad: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:53:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Tasks: TOP => gitlab:db:configure
(See full trace by running task with --trace)
STDERR: 
---- End output of "bash"  "/tmp/chef-script20200915-35-lemic5" ----
Ran "bash"  "/tmp/chef-script20200915-35-lemic5" returned 1

Missing redis unix socket in “/var/opt/gitlab/redis”:

Running handlers:
There was an error running gitlab-ctl reconfigure:

redis_service[redis] (redis::enable line 19) had an error: RuntimeError: ruby_block[warn pending redis restart] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/redis/resources/service.rb line 65) had an error: RuntimeError: Execution of the command `/opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket INFO` failed with a non-zero exit code (1)
stdout: 
stderr: Could not connect to Redis at /var/opt/gitlab/redis/redis.socket: No such file or directory

It should be noted that the /var/opt/gitlab directory has been mapped to /mnt/storage/podman/gitlab/data. GlusterFS is used for /mnt/storage, so the gitlab files reside on a GlusterFS volume.
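For reference, a minimal sketch of a run command with such a mapping (the image tag and published ports here are illustrative assumptions; only the volume mapping reflects the setup above):

[root@srv1 ~]# podman run --detach --name gitlab \
  --publish 80:80 --publish 443:443 --publish 2222:22 \
  --volume /mnt/storage/podman/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ee:latest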

ERROR 1) Cannot create unix socket.

Checking the /var/log/audit/audit.log revealed the problem immediately: SELinux denies the creation of the unix sockets in the GlusterFS-backed directory.
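A quick way to confirm the denials and work around them is sketched below. The boolean virt_sandbox_use_fusefs is the usual one on CentOS 7 for letting containers use FUSE filesystems (GlusterFS mounts are FUSE), but treat it as an assumption and check the audit log and your policy first:

[root@srv1 ~]# ausearch -m avc -ts recent | grep -i denied
[root@srv1 ~]# getsebool virt_sandbox_use_fusefs
virt_sandbox_use_fusefs --> off
[root@srv1 ~]# setsebool -P virt_sandbox_use_fusefs on

setsebool with -P makes the change persistent across reboots; after that, restarting the gitlab container should let the reconfigure process create its sockets.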

make the Gluster daemon resolve the proper hostnames of your peers

This is a useful tip for GlusterFS nodes. When adding a peer to a Gluster cluster you may use its hostname (or IP), but the Gluster daemon on the server being added tries to resolve the hostname from the IP that contacts it (and if the cluster has multiple peers, multiple reverse lookups happen).
Here is a simple example. The cluster will have two peers (srv1.example.com and srv2.example.com).
Add the peer srv2.example.com to your cluster from srv1.example.com (at this point the cluster consists only of the local Gluster daemon):

[root@srv1 ~]# gluster peer probe srv2.example.com
peer probe: success.
[root@srv1 ~]# gluster peer status
Number of Peers: 1

Hostname: srv2.example.com
Uuid: 8322b61c-a94d-491b-afc9-9f10eb8e8b92
State: Peer in Cluster (Connected)

But when you check the status of the cluster on the second server srv2.example.com, the second server uses the PTR record of the first server's IP:

[root@srv2 ~]# gluster peer status
Number of Peers: 1

Hostname: static.123.123.123.123.clients.your-server.de
Uuid: 3d273834-eca6-4997-871f-1a282ca90fb0
State: Peer in Cluster (Connected)

You can see the hostname is the generic name static.123.123.123.123.clients.your-server.de, the PTR record of srv1.example.com. Leaving it like that may cause problems in the future, and it is a really uninformative name to keep in your cluster's configuration. Changing a peer's hostname in a running cluster is difficult and dangerous, so one option is to change the PTR records of the servers' IPs; but if you cannot do that, or it would take too long, you can just use the “/etc/hosts” file!
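You can check what name the daemon will pick up with a simple reverse lookup of the first server's IP (dig comes from the bind-utils package on CentOS 7):

[root@srv2 ~]# dig -x 123.123.123.123 +short
static.123.123.123.123.clients.your-server.de.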

Use “/etc/hosts” to make the Gluster daemon resolve the proper hostnames of your peers!

Edit “/etc/hosts” on the second (peer) server, and optionally on the first one; add the line below and do not remove the existing ones. Replace the IP and hostname with your first server's IP and hostname.

123.123.123.123 srv1.example.com

And then add the peer to the cluster on the first server and check again on the second server:
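If the peer was already probed before the “/etc/hosts” fix (as in the example above), a detach and a new probe may be needed for the stored name to change. A hypothetical sequence on a fresh cluster only, since peer detach removes the peer and must not be run on a cluster with volumes and data:

[root@srv1 ~]# gluster peer detach srv2.example.com
peer detach: success
[root@srv1 ~]# gluster peer probe srv2.example.com
peer probe: success.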

[root@srv2 ~]# gluster peer status
Number of Peers: 1

Hostname: srv1.example.com
Uuid: 3d273834-eca6-4997-871f-1a282ca90fb0
State: Peer in Cluster (Connected)

And on the first server:

[root@srv1 ~]# gluster peer status
Number of Peers: 1

Hostname: srv2.example.com
Uuid: 8322b61c-a94d-491b-afc9-9f10eb8e8b92
State: Peer in Cluster (Connected)

Now the two servers have the right hostnames for their peers, and these hostnames will be used in the Gluster configuration saved on the servers.

In fact, it is a good idea to add all your cluster peers to “/etc/hosts” on all servers:

123.123.123.123 srv1.example.com
124.124.124.124 srv2.example.com
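After editing the file, you can verify that the names resolve from “/etc/hosts” (getent follows the system resolver order, which on CentOS 7 checks the hosts file before DNS by default):

[root@srv1 ~]# getent hosts srv2.example.com
124.124.124.124 srv2.example.com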