gpg list keys and display key details from a file (without importing the key)

Files with GPG keys may contain public or private keys. Here is how to get more information about them without importing the keys.
The GPG CLI can give enough information about a key explored in a file:

  • public or private key
  • encrypted or unencrypted key
  • user id description (including email)
  • key id and issuer fingerprint (fpr v4)
  • when the key was generated and when it will expire
  • the algorithm of the key
  • and more

The key may be in binary or ASCII-armored format; it makes no difference.
Here is the GnuPG CLI command:

gpg --list-packets < ./filewith.key

All examples below are made with gpg (GnuPG) 2.2.19.
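
For illustration, here is roughly what the output of a hypothetical 2048-bit RSA public key looks like (the timestamps and key IDs below are made up):

:public key packet:
        version 4, algo 1, created 1571234567, expires 0
        pkey[0]: [2048 bits]
        pkey[1]: [17 bits]
        keyid: 0123456789ABCDEF
:user ID packet: "Example User <user@example.com>"
:signature packet: algo 1, keyid 0123456789ABCDEF
        version 4, created 1571234567, md5len 0, sigclass 0x13
        digest algo 8, begin of digest 12 34

A password-protected private key would instead start with a ":secret key packet:" and mention the protection parameters (e.g. "iter+salt S2K, algo: 7").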
Keep on reading!

Cron missing path – executing docker/podman – adding network: failed to locate iptables

If you have ever happened to execute some complex scripts using the cron system, you have inevitably discovered that the Linux environment is different from the one in a login or SSH shell. The different environment tends to lead to a missing or different PATH variable! Here is what happens with podman starting a container from a cron script:

time="2020-04-19T20:45:20Z" level=error msg="Error adding network: failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
time="2020-04-19T20:45:20Z" level=error msg="Error while adding pod to CNI network \"podman\": failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
Error: unable to start container "onedrive-cli": error configuring network namespace for container d297cf80db20441d4258a1acc7d810444795d1ca8730ab242d9fe8a13eaa697d: failed to locate iptables: exec: "iptables": executable file not found in $PATH

The iptables executable is missing because the PATH variable is different from that of a login or SSH shell. Executing the same commands or script under an SSH or login shell results in no error and a proper podman (docker) execution!

A similar problem could happen with other software trying to execute iptables or another tool, which is not found in cron’s PATH, because cron’s environment is very limited.

To ensure PATH looks like the user’s (root) environment, just source the “profile” or “.bashrc” file of the current user before the execution of the script, or in its first lines.
This will do the trick.

. /etc/profile

Or user’s custom

. ~/.bashrc

Or the default OS bashrc

. /etc/bashrc

The dot may be replaced by “source”:

source /etc/bashrc

All (environment) variables will be available after the source command.

Here is the difference:
The environment without the sourcing profile/bashrc file:

LANG=en_US.UTF-8
XDG_SESSION_ID=19118
USER=root
PWD=/root
HOME=/root
SHELL=/bin/sh
SHLVL=1
LOGNAME=root
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
XDG_RUNTIME_DIR=/run/user/0
PATH=/usr/bin:/bin
_=/usr/bin/env

Sourcing the “/etc/profile” file:

LANG=en_US.UTF-8
HISTCONTROL=ignoredups
HOSTNAME=srv.example.com
XDG_SESSION_ID=19165
USER=root
PWD=/root
HOME=/root
MAIL=/var/spool/mail/root
SHELL=/bin/bash
SHLVL=1
LOGNAME=root
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
XDG_RUNTIME_DIR=/run/user/0
PATH=/usr/local/sbin:/usr/sbin:/usr/bin:/bin
HISTSIZE=1000
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/usr/bin/env

There are multiple additional environment variables, which could be important for the user’s scripts executed by cron.

And in CentOS 8, iptables happens to be in “/usr/sbin/iptables” – and the path /usr/sbin is not included in the default cron PATH variable!
Of course, the PATH variable may be set in the cron scheduler with crontab (by just setting PATH to a proper value), but that only works until the next path is missing from it while included in the user’s path! It is just better to ensure the two environments are the same every time by sourcing an environment configuration file such as /etc/profile or the user’s .bashrc (or the default one in /etc/bashrc).
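
A crontab entry for the podman script from above might then look like this (the script path is hypothetical):

# source the profile first so PATH matches a login shell
15 3 * * * . /etc/profile; /usr/local/bin/start-onedrive-cli.sh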

Overwrite Return-Path with postfix because of “550-Sender verification is required but failed”

Sending emails from web applications like PHP may result in the emails being rejected by some servers. Fighting spam leads to overly strict filters and rules, which reject the mails even before they reach the anti-spam service of the accepting server. Here is an error:

Apr  1 04:10:18 srv-mail postfix/pickup[26902]: AB13578FAB3: uid=1015 from=<www-data>
Apr  1 04:10:18 srv-mail postfix/cleanup[21182]: AB13578FAB3: message-id=<20200401041018.AB13578FAB3@www.mydomain.com>
Apr  1 04:10:18 srv-mail postfix/qmgr[6485]: AB13578FAB3: from=<www-data@www.mydomain.com>, size=7923, nrcpt=1 (queue active)
Apr  1 04:10:19 srv-mail postfix/smtp[45689]: AB13578FAB3: to=<mailbox@example.com>, relay=mx.example.com[1.1.1.1]:25, delay=11, delays=0.02/0.01/0.65/10, dsn=5.0.0, status=bounced (host mx.example.com[1.1.1.1] said: 550-Sender verification is required but failed. (ID:550:0:5 550 (smtp1.mx.example.com)): www-data@mydomain.com (in reply to MAIL FROM command))

The receiving server has too strict rules!

It just expects the “From” and the “Return-Path” headers to contain the same string – the sender’s email box.

As you can see from the example above, the application sends all emails (from, let’s say, web forms) as www-data@mydomain.com, and www-data is probably the username of the OS user under which the application executes.
Or you may want to overwrite the Return-Path because it uses the username of the application that sent the email, like “web”, “apache”, “www-data” and so on.
Here is how to overwrite the Return-Path with the postfix mail system.

STEP 1) Edit postfix configuration

Add a line in /etc/postfix/main.cf (it is perfectly fine for it to be the last line):

smtp_generic_maps = hash:/etc/postfix/generic

And create the file /etc/postfix/generic with mapping “old@mailbox.com new@mail.com”:

www-data@mydomain.com no-reply@domain.com

The domains of the emails may be different or the same; it doesn’t matter. If you do not know what your “www-data@mydomain.com” is, the mail logs in /var/log/messages or /var/log/mail might help you find the email box, or just send yourself an email and look for the Return-Path.
And a real-world example for /etc/postfix/generic:

www-data@www.mydomain.com no-reply@ahelpme.com

STEP 2) Generate the hash file, which postfix will use. Reload the postfix.

Postfix will use the hash file added in the configuration above. Just execute:

postmap /etc/postfix/generic

The above command will create a binary file /etc/postfix/generic.db, which will be used by the postfix mail system. Do not edit this file directly. To add an entry, just use a text editor on /etc/postfix/generic (without the “.db” suffix) and then reload/restart postfix to enable the new configuration.
And reload (or restart) postfix with

systemctl reload postfix

or for init systems:

/etc/init.d/postfix restart
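
You can verify the lookup works by querying the map directly; the command should print the rewritten address (the addresses are the example ones from above):

postmap -q www-data@www.mydomain.com hash:/etc/postfix/generic
no-reply@ahelpme.com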

nginx proxy cache and expires directive – pass-through the origin cache control

Proxying static content sometimes requires modifying the expires directive on the proxy server, but sometimes it may just need to pass the origin’s expires header through. What if “expires” is defined in the server section and one needs the proxy to pass through the value from the origin?

Simply switch the expires off with “expires off”. It will disable an earlier definition for the block it is used in, and the proxy’s answer to the client will include the origin’s cache control headers.

It won’t disable cache control in the sense of adding no-cache to the proxy’s answer. So if “expires” is used in an enclosing block and a pass-through from the origin is required, just make a location block with “expires off”:

        server {
                listen          10.10.10.10:443 ssl http2;
                server_name     srv1.example.com;

                ssl_certificate  /etc/ssl/nginx/srv1.example.com.chain.crt;
                ssl_certificate_key /etc/ssl/nginx/srv1.example.com.key;

                resolver 8.8.8.8;

                client_max_body_size 12m;

                expires         -1;
                root            /mnt/storage/web/root;
                access_log      /mnt/storage/web/logs/srv1.example.com;
                error_log       /mnt/storage/web/logs/srv1.example.com warn;

                location / {
                        #proxy
                        proxy_buffer_size   128k;
                        proxy_buffers   4 256k;
                        proxy_busy_buffers_size   256k;
                        proxy_buffering off;
                        proxy_read_timeout 600;
                        proxy_send_timeout 600;
                        proxy_store off;
                        proxy_cache off;
                        proxy_redirect off;
                        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
                        proxy_no_cache $http_pragma $http_authorization;
                        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
                        proxy_cache_bypass $http_pragma $http_authorization;

                        proxy_pass https://https_backend;
                        proxy_http_version 1.1;
                        proxy_set_header Connection "";
                        proxy_set_header Host $host;
                        proxy_set_header        X-Real-IP-EXAMPLE       $remote_addr;
                        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                }

                location ~* \.jpeg {
                        expires off;
                        #proxy
                        proxy_buffer_size   128k;
                        proxy_buffers   4 256k;
                        proxy_busy_buffers_size   256k;
                        proxy_buffering off;
                        proxy_read_timeout 600;
                        proxy_send_timeout 600;
                        proxy_store off;
                        proxy_cache off;
                        proxy_redirect off;
                        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
                        proxy_no_cache $http_pragma $http_authorization;
                        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
                        proxy_cache_bypass $http_pragma $http_authorization;
                        proxy_ignore_headers "Expires" "Cache-Control";


                        proxy_pass https://https_backend;
                        proxy_http_version 1.1;
                        proxy_set_header Connection "";
                        proxy_set_header Host $host;
                        proxy_set_header        X-Real-IP-EXAMPLE       $remote_addr;
                        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                }

                 location ~* \.(jpg|gif|png|css|mcss|js|mjs|woff|woff2)$ {
                        expires 30d;
                        #proxy
                        proxy_buffer_size   128k;
                        proxy_buffers   4 256k;
                        proxy_busy_buffers_size   256k;
                        proxy_buffering off;
                        proxy_read_timeout 600;
                        proxy_send_timeout 600;
                        proxy_store off;
                        proxy_cache off;
                        proxy_redirect off;
                        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
                        proxy_no_cache $http_pragma $http_authorization;
                        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
                        proxy_cache_bypass $http_pragma $http_authorization;

                        proxy_pass http://http_backend;
                        proxy_http_version 1.1;
                        proxy_set_header Connection "";
                        proxy_set_header Host $host;
                        proxy_set_header        X-Real-IP-EXAMPLE       $remote_addr;
                        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                }
        }

The above sample configuration defines 3 location blocks for 3 different “expires” cases:

  1. Uses the server block “expires -1” – a proxy for dynamic code such as the proxied site’s application code. No caching.
  2. Uses the location block .jpeg “expires off” – the proxy will pass through the value from the origin server. Any value from the origin.
  3. Uses the location block .(jpg|gif|png|css|mcss|js|mjs|woff|woff2) “expires 30d” – it will set a +30 days expires header no matter what the origin value is. The origin server’s value is replaced by a cache header dated 30 days in the future.
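
To check which of the three cases a given URL hits, inspect the response headers with curl (the URL below is illustrative):

curl -sI https://srv1.example.com/images/photo.jpeg | grep -iE 'expires|cache-control'

With “expires off” the Expires and Cache-Control values shown are exactly what the origin server sent; with “expires 30d” the date will always be 30 days in the future.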

Dracut boot failed with missing device – exit and continue normal booting!

This issue hardly deserves a whole article; in fact, it comes down to one straightforward tip:

You may be able to continue a normal boot just by typing “exit” and hitting Enter in the Dracut console.

Most of the time, ending up in this Dracut console is caused by the system administrator of the server/machine having added, replaced or deleted a RAID or similar device and forgotten to update the configuration (grub2, probably). And in most of these cases, the RAID is not critical for the machine’s normal boot from the root partition, but it may be critical for the services later on. Booting in normal mode, even without some devices, is the main goal, because under normal mode it is easier to repair the system.
Check out the two articles on the topic (especially the first one):

SCREENSHOT 1) Just type “exit” and hit enter.

It’s worth noting that if you executed some commands in the console and/or mounted devices to test whether their file systems are healthy (or for whatever other reason), the boot process may not continue after typing exit, and a reboot will probably be required. The server will enter this mode once more, and then just typing exit will work.


Keep on reading!

Dual 10Gbit network using PCI 2.0 (5GT/s) x4 – what is the maximum bandwidth?

Have you ever wondered what the maximum bandwidth of a dual-port 10Gbit LAN card is when it is placed in a PCI Express 2.0 (Speed 5GT/s) Width x4 slot?
Here is the graph:
SCREENSHOT 1) The bandwidth never exceeds 13.90Gbps (measured with synthetic-only tests and with mixed synthetic plus real HTTP traffic).

Max graph bandwidth – below 14Gbps

As you can see, the total of the two network ports is a little bit under 14Gbps. We are using an Intel dual-port controller:

Intel Corporation Ethernet Server Adapter X520-2

Even dmesg reports the card is not in the right slot:

[ 2.541813] ixgbe 0000:82:00.0: (Speed:5.0GT/s, Width: x4, Encoding Loss:20%)
[ 2.541832] ixgbe 0000:82:00.0: This is not sufficient for optimal performance of this card.
[ 2.541854] ixgbe 0000:82:00.0: For optimal performance, at least 20GT/s of bandwidth is required.
[ 2.541876] ixgbe 0000:82:00.0: A slot with more lanes and/or higher speed is suggested.
[ 2.541978] ixgbe 0000:82:00.0: MAC: 2, PHY: 19, SFP+: 5, PBA No: FFFFFF-0FF
[ 2.541996] ixgbe 0000:82:00.0: 00:16:31:fd:03:b8
[ 2.543027] ixgbe 0000:82:00.0: Intel(R) 10 Gigabit Network Connection
[ 2.694839] ixgbe 0000:82:00.1: Multiqueue Enabled: Rx Queue count = 48, Tx Queue count = 48 XDP Queue count = 0
[ 2.695531] ixgbe 0000:82:00.1: PCI Express bandwidth of 16GT/s available
[ 2.696087] ixgbe 0000:82:00.1: (Speed:5.0GT/s, Width: x4, Encoding Loss:20%)
[ 2.696631] ixgbe 0000:82:00.1: This is not sufficient for optimal performance of this card.
[ 2.697181] ixgbe 0000:82:00.1: For optimal performance, at least 20GT/s of bandwidth is required.
[ 2.697723] ixgbe 0000:82:00.1: A slot with more lanes and/or higher speed is suggested.
[ 2.698352] ixgbe 0000:82:00.1: MAC: 2, PHY: 19, SFP+: 6, PBA No: FFFFFF-0FF
[ 2.698890] ixgbe 0000:82:00.1: 00:16:31:fd:03:b9
[ 2.700436] ixgbe 0000:82:00.1: Intel(R) 10 Gigabit Network Connection

The controller is in a PCI Express slot of PCI 2.0 (Speed 5.0GT/s) Width x4, while the capability of the card is Speed 5GT/s, Width x8. This can be seen with “lspci -vvv”; the meanings, in simple words:

  • LnkCap – the device’s capability. In fact, this is the highest possible speed of the device put in the slot.
  • LnkSta – the actual speed of the PCI Express link.

If the device capability (LnkCap) is higher than the actual link status (LnkSta), you could put the device in another slot with a higher capacity to take full advantage of the device.
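
Here is how such a check might look for the card above (the PCI address is taken from the dmesg output; the exact flags printed vary between lspci versions and systems):

lspci -vvv -s 82:00.0 | grep -E 'LnkCap|LnkSta'
                LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s <1us
                LnkSta: Speed 5GT/s (ok), Width x4 (downgraded)

The “(downgraded)” note is the hint that the card negotiated fewer lanes than it is capable of.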

In our case, the maximum bandwidth of the two ports of the dual 10G port Intel card was just below 14Gbps (13.85Gbps ~ 13.95Gbps). This is expected: PCI 2.0 uses 8b/10b encoding (the 20% “Encoding Loss” in the dmesg output above), so a 5GT/s x4 link carries at most 5 x 4 x 0.8 = 16Gbps of data, and the PCI Express protocol overhead takes its share of that, too. After we moved the very same card to another slot with a capability of Speed 5GT/s Width x8, the card’s maximum bandwidth increased to 19.20Gbps ~ 19.40Gbps.

SCREENSHOT 2) After moving the network card to a slot supporting PCI 2.0 (5GT/s) Width x8, the bandwidth tops out at around 19.40Gbps in synthetic tests (performed with iperf3).

Max graph bandwidth – almost 20Gbps

Keep on reading!

send access logs in json to Elasticsearch using rsyslog

Here is a simple example of how to send well-formatted JSON access logs directly to the Elasticsearch server.

It is as simple as this: Nginx (it could be any webserver) sends the access logs using UDP to the rsyslog server, which then sends well-formatted JSON data to the Elasticsearch server.

No other server program like logstash is used. The data is transformed in rsyslog and passed through a couple of modules to ensure the JSON is valid, so Elasticsearch will not complain (and lose log entries!).
Objectives:

  1. Nginx to send access logs using UDP to the rsyslog server.
  2. rsyslog server to accept UDP messages.
  3. rsyslog server transforms the web-server access logs from the Nginx server to JSON.
  4. rsyslog server sends the validated JSON to the Elasticsearch server.

The configuration and the commands are tested on CentOS 7, CentOS 8 and Ubuntu 18 LTS (just replace yum with apt).
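
For a one-glance preview, here is a very rough sketch of the rsyslog side (the Elasticsearch address, port and index name are assumptions; the full, validated configuration, including the JSON-fixing part, follows in the article):

# /etc/rsyslog.d/30-nginx-elasticsearch.conf - minimal sketch
module(load="imudp")                     # accept syslog messages over UDP
input(type="imudp" port="514")
module(load="mmjsonparse")               # parse the "@cee:" JSON payload
module(load="omelasticsearch")           # output module for Elasticsearch

if $programname == 'nginx' then {
    action(type="mmjsonparse")
    if $parsesuccess == "OK" then {
        action(type="omelasticsearch"
               server="10.10.10.3"       # assumed Elasticsearch host
               serverport="9200"
               searchIndex="nginx-access"
               bulkmode="on")
    }
    stop
}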

STEP 1) Nginx to send access logs using UDP to the rsyslog server.

It is simple enough to send Nginx’s access logs to a UDP server (local or remote); there are two articles here: nginx remote logging to UDP rsyslog server (CentOS 7) and syslog – UDP local to rsyslog and send remote with TCP and compression. For simplicity, Nginx will send to the remote rsyslog server using UDP.
Instruct Nginx to send access logs using UDP to the remote rsyslog server.
Define a new access log format in the http section:

        log_format mainJSON escape=json '@cee: {'
                '"vhost":"$server_name",'
                '"remote_addr":"$remote_addr",'
                '"time_iso8601":"$time_iso8601",'
                '"request_uri":"$request_uri",'
                '"request_length":"$request_length",'
                '"request_method":"$request_method",'
                '"request_time":"$request_time",'
                '"server_port":"$server_port",'
                '"server_protocol":"$server_protocol",'
                '"ssl_protocol":"$ssl_protocol",'
                '"status":"$status",'
                '"bytes_sent":"$bytes_sent",'
                '"http_referer":"$http_referer",'
                '"http_user_agent":"$http_user_agent",'
                '"upstream_response_time":"$upstream_response_time",'
                '"upstream_addr":"$upstream_addr",'
                '"upstream_connect_time":"$upstream_connect_time",'
                '"upstream_cache_status":"$upstream_cache_status",'
                '"tcpinfo_rtt":"$tcpinfo_rtt",'
                '"tcpinfo_rttvar":"$tcpinfo_rttvar"'
                '}';

It is a valid JSON object, but sometimes the user agent or the referer contains non-standard, invalid characters, which break the JSON format and may lead to problems in Elasticsearch (read ahead).

In a server section of the Nginx configuration file /etc/nginx/nginx.conf:

server {
     .....
     access_log      /var/log/nginx/example.com_access.log main;
     access_log      syslog:server=10.10.10.2:514,facility=local7,tag=nginx,severity=info mainJSON;
     .....
}
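
Before involving Elasticsearch, it may be worth checking that the rsyslog server actually receives the UDP messages. The util-linux logger tool can emit a fake record mimicking the Nginx tag and payload above (the values are made up):

logger --udp -n 10.10.10.2 -P 514 -t nginx '@cee: {"vhost":"test.example.com","status":"200"}'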

Keep on reading!

Patch and resume compilation of a failed package in Gentoo – ebuild, local repository or ctrl+Z

A dependency package failed to compile, throwing an error and exiting the emerge of a queue with a hundred or more packages. Or worse, you installed a new version of a package, multiple rebuilds are pulled in, but one of the dependencies fails and you may end up with a broken system. What can you do? There is no new version of the failed package and, yes, there is a bug report in Gentoo’s Bugzilla – https://bugs.gentoo.org/. And there is a solution with a patch, which has not made its way into production and into the Gentoo portage tree yet.
The package in portage is broken, no new fixed package has been released, but there is a patch that fixes your issue. Here is what you can do:

  • Make your own package with the fixed version of the original package and put it in your local repository (not the official one, because on every emerge --sync it would be deleted). You should make a local repository and put the ebuild and all necessary files there.
  • Or just download the patch, patch the source in the directory that still holds the source of the failed package, and resume the compilation manually. Then install it, using this tutorial – Resume installation after a package build error, when emerging firefox under Gentoo.
  • Or, just after the unpack phase of the emerge, press CTRL+Z to put the operation in the background, then download and apply the patch. Then bring the emerge back from the background with the “fg” command (see the sketch below).

The second and third options are not permanent solutions, but they are fast enough to be used in some situations.
Here are the steps for the first and second options:
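
For the third option, the whole sequence might look like this (the package name, version and patch URL are hypothetical):

emerge --oneshot app-misc/failing-package
# wait for ">>> Unpacking source..." to finish, then press CTRL+Z
cd /var/tmp/portage/app-misc/failing-package-1.2.3/work/failing-package-1.2.3
wget -O /tmp/fix.patch https://bugs.gentoo.org/attachment.cgi?id=NNNNNN
patch -p1 < /tmp/fix.patch
fg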

OPTION 1) Make your own package.

Create a local repository (for details, see Simple steps to create Gentoo custom repository and add a package):

root@srv ~ # mkdir -p /var/db/repos/my-local-portage/{metadata,profiles}
root@srv ~ # cat  << 'EOF' > /var/db/repos/my-local-portage/metadata/layout.conf
masters = gentoo
auto-sync = false
EOF
root@srv ~ # cat  << 'EOF' > /etc/portage/repos.conf/my-local-portage.conf
[my-local-portage]
location = /var/db/repos/my-local-portage
EOF
root@srv ~ # cat  << 'EOF' > /var/db/repos/my-local-portage/profiles/repo_name
my-local-portage
EOF

Copy the ebuild file of the package you want to modify into the custom repository directory created above (it’s a good idea to copy all the sub-directories, too):
Keep on reading!

Simple steps to create Gentoo custom repository and add a package

Creating a custom repository gives you a chance to quickly edit (ebuild) files of existing packages and drop better versions into the custom repository, which will then be used to install in the system. Here is the simplest way to create a Gentoo custom repository without installing any additional software. You may check the two Gentoo articles on the subject – https://wiki.gentoo.org/wiki/Custom_repository, which uses repoman (and additional software to install), and https://wiki.gentoo.org/wiki/Handbook:AMD64/Portage/CustomTree#Defining_a_custom_repository, which is part of a bigger article and has no clear example with a package as we are going to show.
Our custom repository name is “my-local-portage”.

STEP 1) Create the directories and basic configuration files for the new custom repository

Just two mandatory directories.

mkdir -p /var/db/repos/my-local-portage/{metadata,profiles}

The minimal configuration in two files:

cat  << 'EOF' > /var/db/repos/my-local-portage/metadata/layout.conf
masters = gentoo
auto-sync = false
EOF
cat  << 'EOF' > /var/db/repos/my-local-portage/profiles/repo_name
my-local-portage
EOF

Fix the permissions:

chown -R portage:portage /var/db/repos/my-local-portage

The custom repository is set up. Now only emerge needs to get the configuration to check for it – in the next step.

STEP 2) Portage global configuration.

Add a file pointing to your custom repository in the Portage global configuration directory “/etc/portage”:

cat  << 'EOF' > /etc/portage/repos.conf/my-local-portage.conf
[my-local-portage]
location = /var/db/repos/my-local-portage
EOF

STEP 3) Add a package in the new custom repository.

The package version may be the same as in the official Gentoo repository, but the package from the custom repository will be used if no repository is specified in the “emerge” command.
For simplicity, we are not going to modify the ebuild file of the copied official package, but the idea is to copy an existing ebuild file and then change it for your needs; the steps are the same as follows.
Just copy the file (and edit it). The package “app-text/calibre” was randomly selected for this example.

mkdir /var/db/repos/my-local-portage/app-text/calibre
cp /usr/portage/app-text/calibre/calibre-4.9.1-r1.ebuild /var/db/repos/my-local-portage/app-text/calibre/

Create the manifest files and you are ready:

cd /var/db/repos/my-local-portage/app-text/calibre/
ebuild calibre-4.9.1-r1.ebuild manifest
>>> Downloading 'ftp://ftp.free.fr/mirrors/ftp.gentoo.org/distfiles/2e/calibre-4.9.1.tar.xz'
--2020-02-06 18:04:59--  ftp://ftp.free.fr/mirrors/ftp.gentoo.org/distfiles/2e/calibre-4.9.1.tar.xz
           => '/usr/portage/distfiles/calibre-4.9.1.tar.xz.__download__'
Resolving ftp.free.fr... 212.27.60.27, 2a01:e0c:1:1598::1
Connecting to ftp.free.fr|212.27.60.27|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done.    ==> PWD ... done.
==> TYPE I ... done.  ==> CWD (1) /mirrors/ftp.gentoo.org/distfiles/2e ... done.
==> SIZE calibre-4.9.1.tar.xz ... 37529656
==> PASV ... done.    ==> RETR calibre-4.9.1.tar.xz ... done.
Length: 37529656 (36M) (unauthoritative)

calibre-4.9.1.tar.xz                       100%[========================================================================================>]  35.79M  5.53MB/s    in 8.0s    

2020-02-06 18:05:08 (4.48 MB/s) - '/usr/portage/distfiles/calibre-4.9.1.tar.xz.__download__' saved [37529656]

>>> Creating Manifest for /var/db/repos/my-local-portage/app-text/calibre

Fix the permissions with

chown -R portage:portage /var/db/repos/my-local-portage

The manifest file contains the hashes of the ebuild file and of all the additional files, if any (for this package there are no additional files). All files needed for the operation will be downloaded, so they must be accessible over the network at the time of executing the command (except when they already exist in the distfiles directory and/or in the subdirectories of /var/db/repos/my-local-portage/app-text/calibre).
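
To confirm which repository will now provide the package, or to request the custom one explicitly, something like this should work (equery comes from app-portage/gentoolkit):

equery which app-text/calibre
/var/db/repos/my-local-portage/app-text/calibre/calibre-4.9.1-r1.ebuild
emerge -av app-text/calibre::my-local-portage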
Keep on reading!

more than the default 4 parallel processes using distributed compiling with distcc

Distributed compilation could greatly speed up the build process of Gentoo packages (and not only Gentoo, of course). If you tend to use Gentoo on a laptop or a relatively old CPU, you may want to build packages distributed across multiple hosts.
Different (Linux) distributions use different configuration and environment schemes, and sometimes it is difficult to sift out the configuration that applies to your setup. This is not a tutorial on how to enable distributed compilation in Gentoo; it is just our client-side setup.

By default, there is a limit of 4 parallel processes per host, which is utterly insufficient, because nowadays most servers have more than 8 cores/logical compute units (not to mention that many have 16 and more).

The environment variable DISTCC_HOSTS controls which hosts will receive files for compilation, what they support and what the limit of parallel processes is.

In Gentoo, we set this variable in /etc/portage/make.conf. Here is what you may include in make.conf to have 16 parallel remote processes and up to a maximum of 4 local ones (if the remote fails):

MAKEOPTS="-j16 -l4"
FEATURES="distcc"
DISTCC_HOSTS="192.168.0.101/16"

We use the DISTCC_HOSTS variable (here in Gentoo put in make.conf, but in another Linux distribution an environment variable with this name should be set) because it is easy to set up and control globally for the Gentoo emerge system.
According to the documentation:

In order, distcc looks in the $DISTCC_HOSTS environment variable, the user’s $DISTCC_DIR/hosts file, and the system-wide host file.

So when using emerge to build packages, emerge will rely on $DISTCC_HOSTS in make.conf (/etc/portage/make.conf, or /etc/make.conf if you still use the old path), then on “/var/tmp/portage/.distcc/” (the build process uses the “portage” user and group, not root!) and finally on “/etc/distcc/hosts”. The first option found in the order above sets the hosts and the limits for the distributed processing. So if you use $DISTCC_HOSTS in make.conf (or the environment), you don’t need to set the “hosts” file.
Separate the different hosts with white space if you have more than one, and always use the “/LIMIT” notation for each host. The default value is only 4 parallel processes (i.e. “/4” is implicitly appended to each host in the configuration!).
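
For example, with two remote build hosts and a small local fallback (the IP addresses and the limits are illustrative; -jN in MAKEOPTS should roughly match the sum of all the limits), make.conf might contain:

MAKEOPTS="-j28 -l4"
FEATURES="distcc"
DISTCC_HOSTS="192.168.0.101/16 192.168.0.102/8 localhost/4"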
Keep on reading!