Fix your broken or old portage in Gentoo, fix the emerge command

When using Gentoo for servers and chroots, you may have to deal with an old, not regularly updated Gentoo environment, which in some cases cannot be updated at all with the regular emerge-based process. And no emerge means no installs from the Gentoo packaging system. Or you may have a broken portage after a failed update, with the emerge command missing or the portage modules missing or broken.

In general, this article is for those who cannot update their “sys-apps/portage”, or whose emerge command exits with errors and is of no use.

In cases such as:

  • Missing portage files (broken portage)
  • Broken emerge command (broken portage)
  • No update to a newer portage is possible – a very old portage, which does not support the EAPI of the current “sys-apps/portage” (now EAPI=5), and you do not have the old portage ebuild files
  • An easy way to update portage to the newest version on an old Gentoo system.

Real-world errors are included at the bottom for each case above.

You just need a working Python 2.7.x and git to clone “https://anongit.gentoo.org/git/proj/portage.git”, or at least some tool to download the portage package.

Unpack it into a globally accessible directory, DO NOT USE “/root“, because emerge drops privileges to a non-privileged user and needs execute permission for others on every directory in the path leading to emerge. Look below for more details.
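A minimal sketch of preparing such a directory (this article uses /opt; the chmod is only needed if your umask produced stricter permissions):

mkdir -p /opt
# every directory on the path must be readable and executable by others
chmod o+rx /opt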

Our example is to repair a completely broken portage with the error:

srv ~ # emerge -va portage
Traceback (most recent call last):
  File "/usr/lib/python-exec/python2.7/emerge", line 42, in <module>
    import portage
ImportError: No module named portage

STEP 1) Clone the portage package from the git or download the portage tarball.

If you do not have a git client, you can always check the URL in the sys-apps/portage ebuild and download it from there. Still, if you cannot find the tarball’s URL, you can simply open a mirror such as https://mirror.leaseweb.com/gentoo/distfiles, wait for the page to load (it is around 12 MBytes – huge, all Gentoo tarball file names are there) and grep it for the name “portage”. Or you can check here: https://dev.gentoo.org/~zmedico/portage/archives/. Download the latest tarball and unpack it in the /opt directory.
Using git is the easiest method, but downloading and unpacking the tarball into “/opt” works just the same.
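If you go the tarball route instead, a hedged sketch (the file name is a placeholder – pick the latest version you actually see in the archives or on the mirror):

cd /opt
# <version> is a placeholder, e.g. the newest release listed in the archives above
wget https://dev.gentoo.org/~zmedico/portage/archives/portage-<version>.tar.bz2
tar xf portage-<version>.tar.bz2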

srv opt # git clone https://anongit.gentoo.org/git/proj/portage.git
Cloning into 'portage'...
remote: Enumerating objects: 151654, done.
remote: Counting objects: 100% (151654/151654), done.
remote: Compressing objects: 100% (39915/39915), done.
remote: Total 151654 (delta 112542), reused 149864 (delta 111349)
Receiving objects: 100% (151654/151654), 22.38 MiB | 550.00 KiB/s, done.
Resolving deltas: 100% (112542/112542), done.
srv opt # cd portage/
srv portage # ls -al
total 232
drwxr-xr-x. 11 root root  4096 Oct  6 04:52 .
drwx------.  4 root root  4096 Oct  6 04:51 ..
drwxr-xr-x.  6 root root  4096 Oct  6 04:52 bin
drwxr-xr-x.  5 root root  4096 Oct  6 04:52 cnf
-rw-r--r--.  1 root root  1840 Oct  6 04:52 COPYING
-rw-r--r--.  1 root root  6646 Oct  6 04:52 DEVELOPING
drwxr-xr-x.  5 root root  4096 Oct  6 04:52 doc
-rw-r--r--.  1 root root   200 Oct  6 04:52 .editorconfig
drwxr-xr-x.  7 root root  4096 Oct  6 04:52 .git
-rw-r--r--.  1 root root    80 Oct  6 04:52 .gitignore
drwxr-xr-x.  4 root root  4096 Oct  6 04:52 lib
-rw-r--r--.  1 root root 18092 Oct  6 04:52 LICENSE
-rwxr-xr-x.  1 root root  1272 Oct  6 04:52 make.conf.example-repatch.sh
drwxr-xr-x.  3 root root  4096 Oct  6 04:52 man
-rw-r--r--.  1 root root   290 Oct  6 04:52 MANIFEST.in
drwxr-xr-x.  2 root root  4096 Oct  6 04:52 misc
-rw-r--r--.  1 root root 17845 Oct  6 04:52 NEWS
-rw-r--r--.  1 root root     0 Oct  6 04:52 .portage_not_installed
-rw-r--r--.  1 root root  2246 Oct  6 04:52 README
-rw-r--r--.  1 root root 68122 Oct  6 04:52 RELEASE-NOTES
drwxr-xr-x.  6 root root  4096 Oct  6 04:52 repoman
-rwxr-xr-x.  1 root root  4889 Oct  6 04:52 runtests
-rwxr-xr-x.  1 root root 19743 Oct  6 04:52 setup.py
drwxr-xr-x.  2 root root  4096 Oct  6 04:52 src
-rwxr-xr-x.  1 root root    81 Oct  6 04:52 tabcheck.py
-rw-r--r--.  1 root root  1864 Oct  6 04:52 TEST-NOTES
-rw-r--r--.  1 root root   571 Oct  6 04:52 testpath
-rw-r--r--.  1 root root   348 Oct  6 04:52 tox.ini
-rw-r--r--.  1 root root   584 Oct  6 04:52 .travis.yml

This is the latest portage, with a working emerge command in the bin sub-directory.
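If the system Python 2.7 is still working, the emerge script bundled in the checkout can be run directly from there – a hedged sketch, assuming the clone lives in /opt/portage as above (the exact reinstall invocation may differ on your system):

/opt/portage/bin/emerge --version
# use the bundled emerge to reinstall sys-apps/portage itself
/opt/portage/bin/emerge --oneshot sys-apps/portage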
Keep on reading!

SSD cache device to a software raid using LVM2

Inspired by our article – SSD cache device to a hard disk drive using LVM, which uses an SSD drive as a cache device for a single hard drive, we decided to write a new article, but this time using two hard drives in a RAID setup (in our case RAID1 for redundancy) and a single NVMe SSD drive.
The goal:
Cache a RAID1 array consisting of two 8T hard drives with a single 1T NVMe SSD drive. Cache both reads and writes, i.e. write-back is enabled.
Our setup:

  • 1 NVMe SSD disk, Samsung 1T. It will be used as a write-back cache device (you may use write-through, too, to maintain the redundancy of the whole storage)!
  • 2 hard disk drives, 8T each, grouped in RAID1 for redundancy.
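To give an idea of how this goal maps to LVM commands, here is a minimal sketch (the device names /dev/md0 for the RAID1 array and /dev/nvme0n1 for the SSD, the volume group and LV names, and the cache size are assumptions – adjust them to your system; the detailed walkthrough continues below):

# both the RAID1 array and the NVMe SSD become physical volumes in one volume group
pvcreate /dev/md0 /dev/nvme0n1
vgcreate vg_storage /dev/md0 /dev/nvme0n1
# the slow (origin) logical volume is placed on the RAID1 array only
lvcreate -n lv_slow -l 100%PVS vg_storage /dev/md0
# the cache pool is placed on the NVMe SSD; the size leaves room for the pool metadata
lvcreate --type cache-pool -n lv_cache -L 900G vg_storage /dev/nvme0n1
# attach the cache pool to the origin volume in write-back mode
lvconvert --type cache --cachemode writeback --cachepool vg_storage/lv_cache vg_storage/lv_slow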

STEP 1) Install lvm2 and enable the lvm2 service

Only this step differs between Linux distributions. We have included three of them:
Ubuntu 16+:

sudo apt update && sudo apt upgrade -y
sudo apt install lvm2 -y
sudo systemctl enable lvm2-lvmetad
sudo systemctl start lvm2-lvmetad

CentOS 7:

yum update
yum install -y lvm2
systemctl enable lvm2-lvmetad
systemctl start lvm2-lvmetad

Gentoo:

emerge --sync
emerge -v sys-fs/lvm2
/etc/init.d/lvm start
rc-update add lvm default

Keep on reading!

genkernel – ERROR: Unable to generate splash, ‘splash_geninitramfs’ was not found!

If you have recently tried to build your kernel with genkernel (4.0.0_beta17, but you will have the same problem with 3.x), you may end up with the following error:

* ERROR: Unable to generate splash, 'splash_geninitramfs' was not found!

The splash_geninitramfs tool was part of the removed Gentoo ebuild package “media-gfx/splashutils“. The software in the splashutils package had not been maintained for a really long time and eventually it was removed. At present, there is no package offering splash_geninitramfs. Still, the same functionality could be achieved with “sys-boot/plymouth“, but genkernel does not have an option to use it.
The genkernel command option that triggers the error is “--splash” – you must remove it. Also check whether the following option in “/etc/genkernel.conf” is set to “no” (at least in genkernel version 3.x the default is “yes”):

SPLASH="no"

When you instruct genkernel not to add a boot splash using splashutils, the error will be gone and the kernel build will be successful.
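With “--splash” removed, the same build command from the error example below simply becomes (keep whatever other options you normally use):

genkernel --install --clean --no-mrproper --menuconfig all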

The output error

We include the error output when using “--splash” with genkernel and the last part of genkernel.log.

livecd ~ # genkernel --splash --install --clean --no-mrproper --menuconfig all
* Gentoo Linux Genkernel; Version 4.0.0_beta17
* Using genkernel configuration from '/etc/genkernel.conf' ...
* Running with options: --splash --install --clean --no-mrproper --menuconfig all

* Working with Linux kernel 5.3.1-gentoo for x86_64
* Using kernel config file '/usr/share/genkernel/arch/x86_64/generated-config' ...
* 
* Note: The version above is subject to change (depends on config and status of kernel sources).

* kernel: >> Initializing ...
*         >> Running 'make clean' ...
*         >> --no-mrproper is set; Skipping 'make mrproper' ...
*         >> Running 'make oldconfig' ...
*         >> Invoking menuconfig ...
*         >> Re-running 'make oldconfig' due to changed kernel options ...
*         >> Kernel version has changed (probably due to config change) since genkernel start:
*            We are now building Linux kernel 5.3.1-gentoo-x86_64 for x86_64 ...
*         >> Compiling 5.3.1-gentoo-x86_64 bzImage ...
*         >> Compiling 5.3.1-gentoo-x86_64 modules ...
*         >> Installing 5.3.1-gentoo-x86_64 modules (and stripping) ...
*         >> Generating module dependency data ...
*         >> Saving config of successful build to '/etc/kernels/kernel-config-5.3.1-gentoo-x86_64' ...

* initramfs: >> Initializing ...
*         >> Appending devices cpio data ...
*         >> Appending base_layout cpio data ...
*         >> Appending auxilary cpio data ...
*         >> Appending blkid cpio data ...
*         >> Appending busybox cpio data ...
*         >> Appending mdadm cpio data ...
*         >> Appending modprobed cpio data ...
*         >> Appending splash cpio data ...
* ERROR: Unable to generate splash, 'splash_geninitramfs' was not found!
* Please consult '/var/log/genkernel.log' for more information and any
* errors that were reported above.
* 
* Report any genkernel bugs to bugs.gentoo.org and
* assign your bug to genkernel@gentoo.org. Please include
* as much information as you can in your bug report; attaching
* '/var/log/genkernel.log' so that your issue can be dealt with effectively.
* 
* Please do *not* report kernel compilation failures as genkernel bugs!

livecd ~ # cat /var/log/genkernel.log
.....
.....
*         >> Appending splash cpio data ...

* ERROR: Unable to generate splash, 'splash_geninitramfs' was not found!
* Please consult '/var/log/genkernel.log' for more information and any
* errors that were reported above.
* 
* Report any genkernel bugs to bugs.gentoo.org and
* assign your bug to genkernel@gentoo.org. Please include
* as much information as you can in your bug report; attaching
* '/var/log/genkernel.log' so that your issue can be dealt with effectively.
* 
* Please do *not* report kernel compilation failures as genkernel bugs!
* 

* mount: >> '/boot' is not a mountpoint; Nothing to restore ...
>>> Ended on: 2019-09-30 18:11:20 (after 0 days 0 hours 08 minutes 06 seconds)

Save iptables rules over reboots on Ubuntu 16 and Ubuntu 18 – persistent iptables rules

With the move towards firewalld and especially systemd, some good old init scripts went missing! One of those good scripts is the init script for the iptables firewall, which saves the iptables rules and loads them again during boot. With the iptables init script we have persistence of the iptables rules, and we can always call the script with the “save” argument to update the currently saved rules. Many Linux distributions have this init script – “/etc/init.d/iptables” – but in the systemd world it has been removed and replaced with nothing (probably because you are encouraged to use firewalld, which is not a bad thing!).

There are two packages, “iptables-persistent” and “netfilter-persistent”, which work together to provide iptables persistence over reboots. The rules are saved and restored automatically during system startup.

First, install “iptables-persistent” and “netfilter-persistent” with

sudo apt install netfilter-persistent iptables-persistent

During the iptables-persistent installation, the setup asks the user whether to save the current iptables rules. Hit “Yes” if you want to save the current iptables rules, which will then be automatically loaded the next time the system starts up.

Configuring iptables-persistent setup

So it is safe to install it on a live system – the current iptables rules won’t be deleted.
Second, ensure the boot service that restores the iptables rules is enabled:

sudo systemctl enable netfilter-persistent

Additional information

Saving the current state of the iptables rules:

myuser@myubuntupc:~$ sudo /usr/sbin/netfilter-persistent save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save

Restore the original state of the iptables rules:

sudo systemctl restart netfilter-persistent

All the commands you can run are – start, stop, restart, reload, flush, save. You can use the script directly (it is not mandatory to use systemctl to restart, i.e. restore the rules, etc.):

myuser@myubuntupc:~$ sudo /usr/sbin/netfilter-persistent
Usage: /usr/sbin/netfilter-persistent (start|stop|restart|reload|flush|save)

The script netfilter-persistent executes 2 other scripts as plugins:

/usr/share/netfilter-persistent/plugins.d/15-ip4tables
/usr/share/netfilter-persistent/plugins.d/25-ip6tables

The iptables rules are saved, respectively, in the files:

/etc/iptables/rules.v4
/etc/iptables/rules.v6

You can always edit them manually, or save/restore them with iptables-save and iptables-restore, redirecting the output to/from the above files.
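For example, a quick sketch (the redirections are wrapped in sh -c so they run with root privileges):

# dump the live rules into the files netfilter-persistent loads at boot
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
sudo sh -c 'ip6tables-save > /etc/iptables/rules.v6'
# load the rules back from the files
sudo sh -c 'iptables-restore < /etc/iptables/rules.v4'
sudo sh -c 'ip6tables-restore < /etc/iptables/rules.v6'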

The “active (exited)” state is normal. The service is “enabled”, as you can see (by default the setup automatically enables the service on Ubuntu, but always check it to be sure – it’s the firewall!).

myuser@myubuntupc:~$ sudo systemctl status netfilter-persistent
● netfilter-persistent.service - netfilter persistent configuration
   Loaded: loaded (/lib/systemd/system/netfilter-persistent.service; enabled; vendor preset: enabled)
   Active: active (exited) since Thu 2019-01-17 20:44:08 EST; 14min ago
 Main PID: 666 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/netfilter-persistent.service

Jan 17 20:44:08 myubuntupc systemd[1]: Starting netfilter persistent configuration...
Jan 17 20:44:08 myubuntupc netfilter-persistent[666]: run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables start
Jan 17 20:44:08 myubuntupc netfilter-persistent[666]: run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables start
Jan 17 20:44:08 myubuntupc systemd[1]: Started netfilter persistent configuration.

CentOS 7 – Your kernel headers for kernel cannot be found at – missing kernel-devel

Getting the following error may be deceiving:

Error! echo
Your kernel headers for kernel 3.10.0-1062.1.1.el7.x86_64 cannot be found at
/lib/modules/3.10.0-1062.1.1.el7.x86_64/build or /lib/modules/3.10.0-1062.1.1.el7.x86_64/source.

You may have already installed the kernel-headers package for the current kernel and still get the same error. So what is missing?

In fact, the kernel headers needed for compiling a kernel module are in the kernel-devel package.

[root@localhost ~]# yum install kernel-devel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.wwfx.net
 * extras: mirror.wwfx.net
 * updates: mirror.wwfx.net
Resolving Dependencies
--> Running transaction check
---> Package kernel-devel.x86_64 0:3.10.0-1062.1.1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================================================================
 Package                                   Arch                                Version                                           Repository                            Size
============================================================================================================================================================================
Installing:
 kernel-devel                              x86_64                              3.10.0-1062.1.1.el7                               updates                               18 M

Transaction Summary
============================================================================================================================================================================
Install  1 Package

Total download size: 18 M
Installed size: 38 M
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
kernel-devel-3.10.0-1062.1.1.el7.x86_64.rpm                                                                                                          |  18 MB  00:00:02     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : kernel-devel-3.10.0-1062.1.1.el7.x86_64                                                                                                                  1/1 
  Verifying  : kernel-devel-3.10.0-1062.1.1.el7.x86_64                                                                                                                  1/1 

Installed:
  kernel-devel.x86_64 0:3.10.0-1062.1.1.el7                                                                                                                                 

Complete!
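After the installation, the path from the error message should now exist (assuming the running kernel matches the installed kernel-devel version – otherwise install the matching version or reboot into the new kernel first):

ls -ld /lib/modules/$(uname -r)/build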

If you have used another Linux distribution, the “kernel headers”/“linux headers” package just means what its name says. In the CentOS 7 world there are two packages:

[root@localhost ~]# yum info kernel-devel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.wwfx.net
 * extras: mirror.wwfx.net
 * updates: mirror.wwfx.net
Installed Packages
Name        : kernel-devel
Arch        : x86_64
Version     : 3.10.0
Release     : 1062.1.1.el7
Size        : 38 M
Repo        : installed
From repo   : updates
Summary     : Development package for building kernel modules to match the kernel
URL         : http://www.kernel.org/
License     : GPLv2
Description : This package provides kernel headers and makefiles sufficient to build modules
            : against the kernel package.

[root@localhost ~]# yum info kernel-headers
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.wwfx.net
 * epel: mirrors.neterra.net
 * extras: mirror.wwfx.net
 * updates: mirror.wwfx.net
Installed Packages
Name        : kernel-headers
Arch        : x86_64
Version     : 3.10.0
Release     : 1062.1.1.el7
Size        : 3.7 M
Repo        : installed
From repo   : updates
Summary     : Header files for the Linux kernel for use by glibc
URL         : http://www.kernel.org/
License     : GPLv2
Description : Kernel-headers includes the C header files that specify the interface
            : between the Linux kernel and userspace libraries and programs.  The
            : header files define structures and constants that are needed for
            : building most standard programs and are also needed for rebuilding the
            : glibc package.

CentOS 7 – Dependency Resolution – Error – Requires: dkms – missing epel repository

A quick note for those not familiar with the CentOS 7 peculiarities, especially the repository peculiarities.
Receiving the following error:

--> Finished Dependency Resolution
Error: Package: 3:kmod-nvidia-latest-dkms-418.87.00-2.el7.x86_64 (cuda)
           Requires: dkms
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

It means you need a package (or a meta-package, which might pull in multiple packages and dependencies offering a big framework, for example) that could not be found in the existing repositories. In this very case, we need DKMS (Dynamic Kernel Module Support) – https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support

DKMS is offered in the epel repository and cannot be found in the official CentOS 7 repositories. Just add the epel repository:

[root@localhost ~]# yum install -y epel-release
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.daticum.com
 * extras: mirrors.daticum.com
 * updates: mirrors.daticum.com
Resolving Dependencies
--> Running transaction check
---> Package epel-release.noarch 0:7-11 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================================================================
 Package                                       Arch                                    Version                                Repository                               Size
============================================================================================================================================================================
Installing:
 epel-release                                  noarch                                  7-11                                   extras                                   15 k

Transaction Summary
============================================================================================================================================================================
Install  1 Package

Total download size: 15 k
Installed size: 24 k
Downloading packages:
epel-release-7-11.noarch.rpm                                                                                                                         |  15 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : epel-release-7-11.noarch                                                                                                                                 1/1 
  Verifying  : epel-release-7-11.noarch                                                                                                                                 1/1 

Installed:
  epel-release.noarch 0:7-11                                                                                                                                                

Complete!

Now rerun your first yum install line. You won’t receive the DKMS error anymore.
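You can also do a quick sanity check that dkms is now resolvable from the newly added repository:

yum info dkms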

Install CUDA and NVIDIA video driver under CentOS 7

Nvidia CUDA Toolkit supports CentOS 7 and it is relatively simple to install. Nvidia provides three types of installation – a big setup file, a big rpm file and an official Nvidia repository, which we are going to use in this article. The Nvidia repository contains the Nvidia video driver for Nvidia video cards such as the GeForce GTX and RTX series and so on. You may need the CUDA Toolkit if you are a game developer or if you want to build some of the mining software yourself, like XMR-STAK.
In this article, we are going to use the official NVIDIA repository for CUDA and the video driver module. There are other ways to install CUDA, which are not the purpose of this article. Using an official repository is the best practice for installing software on your system.

STEP 1) Update and install the NVIDIA official repository.

yum update -y
yum install -y yum-utils
yum-config-manager --add-repo http://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo
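With the repository in place, the video driver and the CUDA toolkit are typically pulled from it like this (a sketch – these package names come from the NVIDIA repository and may change between releases; the DKMS driver also needs the dkms package from the epel repository, see the note above):

yum install -y nvidia-driver-latest-dkms
yum install -y cuda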

Keep on reading!

nginx remote logging to UDP rsyslog server (CentOS 7)

This article will present all the configuration needed to remotely save the access logs of an Nginx web server. All the configuration on both the client and server sides is included. The client and the server use the CentOS 7 Linux distribution, but the configuration could be used under a different Linux distribution. Probably only the SELinux rules are specific to CentOS 7, and the firewalld rules are specific to those who use it as a firewall replacing iptables. Here is a summary of what to expect:

  • Client-side – nginx configuration
  • Server-side – rsyslog configuration to accept UDP connections
  • Server-side – selinux and firewall configuration

The JSON formatted logs may be sent to an Elasticsearch server, for example. Here is how to do it – send access logs in json to Elasticsearch using rsyslog

STEP 1) Client-side – the Nginx configuration.

The Nginx configuration is pretty simple – just a single line with the log template and the IP (and the port, if it is not the default 514) of the rsyslog server. For the record, this is the official documentation: https://nginx.org/en/docs/syslog.html. In addition, it is worth mentioning that there can be multiple access_log directives in a single section to log simultaneously to different targets (and the templates may be the same or different). So you can send the access log output of a section both locally and remotely.
Nginx configuration (probably /etc/nginx/nginx.conf, or wherever your Nginx configuration files are organized):

server {
     .....
     access_log      /var/log/nginx/example.com_access.log main;
     access_log      syslog:server=10.10.10.2:514,facility=local7,tag=nginx,severity=info main3;
     .....
}

“main” and “main3” are just the names of logging templates defined earlier (you may check rsyslog remote logging – prevent local messages to appear for an interesting Nginx logging template).
The error log can also be logged remotely:

error_log syslog:server=10.10.10.3 debug;

STEP 2) Server-side – rsyslog configuration to accept UDP connections.

Of course, if you have not installed rsyslog yet, it’s high time you installed it (for CentOS 7):

yum install -y rsyslog

To enable rsyslog to listen for UDP connections your rsyslog configuration file (/etc/rsyslog.conf) must include the following:

$ModLoad imudp
$UDPServerRun 514

Most Linux distributions have these two lines commented out, so you just need to uncomment them by removing the “#” from the beginning of the lines. If the lines are missing, just add them under the “MODULES” section (it should be near the first lines of the rsyslog configuration file).
Change 514 to whatever UDP listening port you like.
To write the clients’ incoming log lines to a different location and prevent them from merging with the local log messages (see rsyslog remote logging – prevent local messages to appear), include the following as the first rule under the rules section (the one starting with “RULES”) of the rsyslog configuration file (/etc/rsyslog.conf):

# Remote logging
$template HostIPtemp,"/mnt/logging/%FROMHOST-IP%.log"
if ($fromhost-ip != "127.0.0.1" ) then ?HostIPtemp
& stop

Only the logs of remote hosts are going to be saved under /mnt/logging/, in files named after the remote IP (<IP>.log).
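To actually receive anything on the server side, restart rsyslog and open UDP port 514 – a minimal sketch for CentOS 7 with firewalld (the SELinux specifics promised in the summary are covered further in the article; the default 514/udp port should already be allowed by the stock SELinux syslog policy):

systemctl restart rsyslog
firewall-cmd --permanent --add-port=514/udp
firewall-cmd --reload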
Keep on reading!

nginx proxy cache – log the upstream response server, time, cache status, connect time and more in nginx access logs

The Nginx upstream module exposes embedded variables, which we can use to log them in the Nginx access log files.
Some of the variables are really interesting and could be of great use to system administrators and, in general, for tuning your systems (a content delivery network?). For example, you can log:

  • $upstream_cache_status – the cache status of the object the server served. For each URI you will have in the logs if the item is from the cache (HIT) or the Nginx used an upstream server to get the item (MISS)
  • $upstream_response_time – the time Nginx proxy needed to get the resource from the upstream server
  • $upstream_addr – the Nginx upstream server used for the requested URI in the logs.
  • $upstream_connect_time – the connect time to the specific upstream server

And there are many more – check the documentation under the heading “Embedded Variables” at the bottom of http://nginx.org/en/docs/http/ngx_http_upstream_module.html

For example, in peak hours, you can see how the time to get the resource from the upstream servers changes.

And you can subtract that time from the total time your server needed to serve the URI to the client.

Of course, you can use this with any upstream case, not only with a proxy cache! These variables may be used with application backend servers such as PHP (FastCGI) application servers and more. A single line in the access log file can then carry information not only about the URI but also about the time the application server spent generating the response.
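For example, a quick way to see how much time the proxy itself adds on top of the upstream (a sketch using the jq tool and the JSON log format defined below; cache hits have an empty upstream_response_time and are skipped):

tail -f /var/log/nginx/example.com-json.log | \
  jq 'select(.upstream_response_time != "") |
      {uri: .request_uri, proxy_overhead: ((.request_time|tonumber) - (.upstream_response_time|tonumber))}'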

Example

Logging in JSON format (JSON is just for the example, you can use the default string):

        log_format main3 escape=json '{'
                '"remote_addr":"$remote_addr",'
                '"time_iso8601":"$time_iso8601",'
                '"request_uri":"$request_uri",'
                '"request_length":"$request_length",'
                '"request_method":"$request_method",'
                '"request_time":"$request_time",'
                '"server_port":"$server_port",'
                '"server_protocol":"$server_protocol",'
                '"ssl_protocol":"$ssl_protocol",'
                '"status":"$status",'
                '"bytes_sent":"$bytes_sent",'
                '"http_referer":"$http_referer",'
                '"http_user_agent":"$http_user_agent",'
                '"upstream_response_time":"$upstream_response_time",'
                '"upstream_addr":"$upstream_addr",'
                '"upstream_connect_time":"$upstream_connect_time",'
                '"upstream_cache_status":"$upstream_cache_status",'
                '"tcpinfo_rtt":"$tcpinfo_rtt",'
                '"tcpinfo_rttvar":"$tcpinfo_rttvar"'
                '}';

We included the variables we needed, but there are a lot more – check out the Nginx documentation.
Just add the above snippet to your Nginx configuration and activate it with the access_log directive:

access_log      /var/log/nginx/example.com-json.log main3;

“main3” is the name of the format and it could be anything you like.

And the logs look like:

{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T13:36:33+00:00","request_uri":"/i/example/bc/bcda7f798ea1c75f18838bc3f0ffbd1c_200.jpg","request_length":"412","request_method":"GET","request_time":"0.325","server_port":"8801","server_protocol":"HTTP/1.1","ssl_protocol":"","status":"404","bytes_sent":"332","http_referer":"https://example.com/test_1","http_user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12","upstream_response_time":"0.324","upstream_addr":"10.10.10.2","upstream_connect_time":"0.077","upstream_cache_status":"MISS","tcpinfo_rtt":"45614","tcpinfo_rttvar":"22807"}
{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T13:36:33+00:00","request_uri":"/i/example/2d/2df5f3dfe1754b3b4ba8ac66159c0384_200.jpg","request_length":"412","request_method":"GET","request_time":"0.242","server_port":"8801","server_protocol":"HTTP/1.1","ssl_protocol":"","status":"404","bytes_sent":"332","http_referer":"https://example.com/test_1","http_user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12","upstream_response_time":"0.242","upstream_addr":"10.10.10.2","upstream_connect_time":"0.000","upstream_cache_status":"MISS","tcpinfo_rtt":"46187","tcpinfo_rttvar":"23093"}
{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T13:36:41+00:00","request_uri":"/flv/example/test_1.ts?st=E05FMg-DSIAgRfVhbadUWQ&e=1568381799&sopt=pdlfwefdfsr","request_length":"357","request_method":"GET","request_time":"0.960","server_port":"8801","server_protocol":"HTTP/1.0","ssl_protocol":"","status":"200","bytes_sent":"3988358","http_referer":"","http_user_agent":"Lavf53.32.100","upstream_response_time":"0.959","upstream_addr":"10.10.10.2","upstream_connect_time":"0.000","upstream_cache_status":"MISS","tcpinfo_rtt":"46320","tcpinfo_rttvar":"91"}
{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T14:09:34+00:00","request_uri":"/flv/example/aee001dce114c88874b306bc73c2d482_1.ts?range=564-1804987","request_length":"562","request_method":"GET","request_time":"0.613","server_port":"8801","server_protocol":"HTTP/1.0","ssl_protocol":"","status":"200","bytes_sent":"5318082","http_referer":"","http_user_agent":"AppleCoreMedia/1.0.0.16E227 (iPad; U; CPU OS 12_2 like Mac OS X; en_us)","upstream_response_time":"","upstream_addr":"","upstream_connect_time":"","upstream_cache_status":"HIT","tcpinfo_rtt":"45322","tcpinfo_rttvar":"295"}

It’s easy to pretty-print them in the console with the “jq” tool:

[root@srv logging]# tail -f 10.10.10.10.log|awk 'BEGIN {FS="{"} {print "{"$2}'|jq "."
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T13:36:33+00:00",
  "request_uri": "/i/example/bc/bcda7f798ea1c75f18838bc3f0ffbd1c_200.jpg",
  "request_length": "412",
  "request_method": "GET",
  "request_time": "0.325",
  "server_port": "8801",
  "server_protocol": "HTTP/1.1",
  "ssl_protocol": "",
  "status": "404",
  "bytes_sent": "332",
  "http_referer": "https://example.com/test_1",
  "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12",
  "upstream_response_time": "0.324",
  "upstream_addr": "10.10.10.2",
  "upstream_connect_time": "0.077",
  "upstream_cache_status": "MISS",
  "tcpinfo_rtt": "45614",
  "tcpinfo_rttvar": "22807"
}
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T13:36:33+00:00",
  "request_uri": "/i/example/2d/2df5f3dfe1754b3b4ba8ac66159c0384_200.jpg",
  "request_length": "412",
  "request_method": "GET",
  "request_time": "0.242",
  "server_port": "8801",
  "server_protocol": "HTTP/1.1",
  "ssl_protocol": "",
  "status": "404",
  "bytes_sent": "332",
  "http_referer": "https://example.com/test_1",
  "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12",
  "upstream_response_time": "0.242",
  "upstream_addr": "10.10.10.2",
  "upstream_connect_time": "0.000",
  "upstream_cache_status": "MISS",
  "tcpinfo_rtt": "46187",
  "tcpinfo_rttvar": "23093"
}
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T13:36:41+00:00",
  "request_uri": "/flv/example/test_1.ts?st=E05FMg-DSIAgRfVhbadUWQ&e=1568381799&sopt=pdlfwefdfsr",
  "request_length": "357",
  "request_method": "GET",
  "request_time": "0.960",
  "server_port": "8801",
  "server_protocol": "HTTP/1.0",
  "ssl_protocol": "",
  "status": "200",
  "bytes_sent": "3988358",
  "http_referer": "",
  "http_user_agent": "Lavf53.32.100",
  "upstream_response_time": "0.959",
  "upstream_addr": "10.10.10.2",
  "upstream_connect_time": "0.000",
  "upstream_cache_status": "MISS",
  "tcpinfo_rtt": "46320",
  "tcpinfo_rttvar": "91"
}
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T14:09:34+00:00",
  "request_uri": "/flv/example/aee001dce114c88874b306bc73c2d482_1.ts?range=564-1804987",
  "request_length": "562",
  "request_method": "GET",
  "request_time": "0.613",
  "server_port": "8801",
  "server_protocol": "HTTP/1.0",
  "ssl_protocol": "",
  "status": "200",
  "bytes_sent": "5318082",
  "http_referer": "",
  "http_user_agent": "AppleCoreMedia/1.0.0.16E227 (iPad; U; CPU OS 12_2 like Mac OS X; en_us)",
  "upstream_response_time": "",
  "upstream_addr": "",
  "upstream_connect_time": "",
  "upstream_cache_status": "HIT",
  "tcpinfo_rtt": "45322",
  "tcpinfo_rttvar": "295"
}

3 misses and 1 hit – for the hit, 3 of the upstream variables we used are blank, because the server took the item from the cache.

rsyslog remote logging – prevent local messages to appear

A tip for those who keep a remote server for their log files. When you set up a remote log server, you probably don’t want local messages to appear in the logging directory (or directories), and here is how you can achieve it:
Above all the rules in the configuration file “/etc/rsyslog.conf” (or wherever it is on your system), include an “if” statement for the local server like this:

# Remote logging
$template HostIPtemp,"/mnt/logging/%FROMHOST-IP%.log"
if ($fromhost-ip != "127.0.0.1" ) then ?HostIPtemp
& stop

The name of the template is “HostIPtemp” and the starting part of the path “/mnt/logging/” may be anything you like.
All the remote messages will be redirected to the template, and none of the rules after it will be applied to them, because we use the “stop” instruction.

That’s why this rule must be above all other rules in the whole rule configuration – the rules section usually starts with a commented “#### RULES ####” line.

The above configuration will have the following directory structure:

[root@srv ~]# ls -altr /mnt/logging/
total 2792
drwxr-xr-x. 7 root root    4096 12 Sep 10,05 ..
drwxr-xr-x. 2 root root    4096 12 Sep 13,01 .
-rw-------. 1 root root 2844525 12 Sep 13,01 10.10.10.10.log
-rw-------. 1 root root 1245633 12 Sep 13,01 10.10.10.11.log
-rw-------. 1 root root 9722578 12 Sep 13,01 10.10.10.12.log
-rw-------. 1 root root 1127231 12 Sep 13,01 10.10.10.13.log