nginx with SELinux and ngx_http_lua_module – PANIC: unprotected error in call to Lua API

CentOS 8 (and CentOS Stream 9, too) might prevent the successful execution of Lua code in the NGINX web server with modules like ngx_http_lua_module because of insufficient SELinux rules. By default, the web server is not allowed to execute programs that require memory addresses that are both executable and writable. Lua code may even crash with a core dump and a restart of the NGINX worker process. The error log contains information about the incident:

PANIC: unprotected error in call to Lua API (runtime code generation failed, restricted kernel?)

In fact, the error message hints at what is going on – the kernel prevents the execution of code, which might be an additional limit enforced on this system.
Searching through the SELinux audit log points to the solution: enabling the httpd_execmem SELinux boolean.

[root@srv ]# ausearch -c 'nginx' --raw | audit2allow


#============= httpd_t ==============

#!!!! This avc can be allowed using the boolean 'httpd_execmem'
allow httpd_t self:process execmem;

Apparently, the Lua code needs the privilege to execute code in writable memory addresses (execmem). So when using the OpenResty ngx_http_lua_module, which is not distributed with the NGINX server, the server administrator should set the SELinux boolean httpd_execmem to true with:

setsebool -P httpd_execmem=1

The “-P” option makes the value persistent across reboots.
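To verify the boolean is now active, query it with getsebool (standard SELinux tooling):

[root@srv ]# getsebool httpd_execmem
httpd_execmem --> on

After that, the Lua code in NGINX should execute without triggering the PANIC error.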

Note the official manual entry for the SELinux boolean below. By enabling httpd_execmem, several SELinux rules are applied, which may make the web server less secure, because the web server process is then allowed to execute code in memory.

– httpd_execmem

When enabled, this Boolean allows httpd to execute programs that require memory addresses that are both executable and writable. Enabling this Boolean is not recommended from a security standpoint as it reduces protection against buffer overflows, however certain modules and applications (such as Java and Mono applications) require this privilege.

The AVC (Access Vector Cache) message looks like:

type=AVC msg=audit(1699774004.724:59165857): avc:  denied  { execmem } for  pid=3872876 comm="nginx" scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:system_r:httpd_t:s0 tclass=process permissive=0
type=SYSCALL msg=audit(1699774004.724:59165857): arch=c000003e syscall=10 success=no exit=-13 a0=7f399cd69000 a1=10000 a2=5 a3=38 items=0 ppid=884311 pid=3872876 auid=4294967295 uid=990 gid=987 euid=990 suid=990 fsuid=990 egid=987 sgid=987 fsgid=987 tty=(none) ses=4294967295 comm="nginx" exe="/usr/sbin/nginx" subj=system_u:system_r:httpd_t:s0 key=(null)ARCH=x86_64 SYSCALL=mprotect AUID="unset" UID="nginx" GID="nginx" EUID="nginx" SUID="nginx" FSUID="nginx" EGID="nginx" SGID="nginx" FSGID="nginx"
type=PROCTITLE msg=audit(1699774004.724:59165857): proctitle=6E67696E783A20776F726B65722070726F63657373

More on NGINX – https://ahelpme.com/tag/nginx/ – or SELinux – https://ahelpme.com/tag/selinux/.

How To Install Linux, Nginx, MySQL (MariaDB), PHP-FPM (LEMP) Stack on CentOS Stream 9

This article presents how to install a web server with a PHP application back-end and a MySQL database back-end using MariaDB. All the software installed throughout this article comes from the CentOS Stream 9 official repositories, including the EPEL repository. The machine is a minimal installation of CentOS Stream 9 and there is a how-to here – Network installation of CentOS Stream 9 (20220606.0) – minimal server installation.
Here are the steps to perform:

  1. Install, configure and start the database MariaDB.
  2. Install, configure and start the PHP-FPM and PHP cli.
  3. Install, configure and start the Web server Nginx.
  4. Configure the system – firewall and SELinux.
  5. Test the installation with a phpMyAdmin installation.
  6. Bonus – Nginx HTTPS with SSL certificate – self-signed and letsencrypt.

STEP 1) Install, configure and start the database MariaDB.

First, install the MariaDB server by:

dnf install -y mariadb-server
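After the installation, a typical way to enable and start the service (a standard systemd sequence; mariadb-secure-installation is the stock hardening script shipped with the package):

systemctl enable --now mariadb
mariadb-secure-installation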

To configure the MariaDB server, the main file is /etc/my.cnf, which just includes all files under the folder /etc/my.cnf.d/

[root@srv ~]# cat /etc/my.cnf
#
# This group is read both both by the client and the server
# use it for options that affect everything
#
[client-server]

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

[root@srv ~]# ls -altr /etc/my.cnf.d/
total 32
-rw-r--r--.  1 root root  295 Mar 25  2022 client.cnf
-rw-r--r--.  1 root root  120 May 18 07:55 spider.cnf
-rw-r--r--.  1 root root  232 May 18 07:55 mysql-clients.cnf
-rw-r--r--.  1 root root  763 May 18 07:55 enable_encryption.preset
-rw-r--r--.  1 root root 1458 Jun 13 13:24 mariadb-server.cnf
-rw-r--r--.  1 root root   42 Jun 13 13:29 auth_gssapi.cnf
drwxr-xr-x.  2 root root 4096 Oct  6 06:34 .
drwxr-xr-x. 81 root root 4096 Oct  6 06:34 ..

The most important file for the MariaDB server is /etc/my.cnf.d/mariadb-server.cnf, where all the server options are included. Under the “[mysqld]” section, add options to tune the MariaDB server. Supported options can be found here: https://mariadb.com/kb/en/mysqld-options/
Add the following options under “[mysqld]” in /etc/my.cnf.d/mariadb-server.cnf:
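For orientation, a minimal set of commonly tuned [mysqld] options might look like the sketch below (the option names are standard MariaDB options; the values are assumptions and must be sized to the machine):

[mysqld]
# listen only on localhost when the web server runs on the same machine
bind-address = 127.0.0.1
# the main InnoDB memory buffer; often 50-70% of RAM on a dedicated database server
innodb_buffer_pool_size = 1G
# maximum number of simultaneous client connections
max_connections = 200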
Keep on reading!

MPEG-DASH and ClearKey, CENC drm encryption with Nginx, bento4 and dashjs under CentOS 8

The purpose of this article is to demonstrate a simple and plain example of ClearKey DRM encryption using a DASH stream.
Usually, ClearKey is used only for testing the encryption key and the DRM setup, because the decryption key is transferred in plain text to the browser. In simple DRM words, the key is transferred in plain text, and the decryption is not handled by some proprietary module such as a CDM – Content Decryption Module. The CDM is a proprietary module in browsers or players, which works like a black box when handling the decryption key. The most popular DRMs are Google’s Widevine, Apple’s FairPlay, and Microsoft PlayReady, which work through a proprietary CDM (Content Decryption Module) in the browser (or the OS and player).
All three DRMs work in basically the same way:

  • There is an (encryption) key and an (encryption) keyID, whose purpose is to identify the (encryption) key.
  • The video file is encrypted with the key and it includes the keyID.
  • The client needs to have the appropriate CDM (Content Decryption Module) to decrypt the video.
  • The client receives a license from a license server, which is encrypted data telling the CDM how to decrypt the video identified by the keyID. In fact, the client sends the keyID and receives the proper license (i.e. license binary data) for this keyID. That’s why the keyID is included in the encrypted video. Bear in mind, the CDM is a proprietary Content Decryption Module offered by the creator of the DRM – Google, Apple, Microsoft or another – and it lives in the browser (OS or player). All popular browsers support at least one of the proprietary DRMs.

ClearKey is like the proprietary DRM schemes, but without the CDM (Content Decryption Module).

The “org.w3.clearkey” Key System uses plain-text clear (unencrypted) key(s) to decrypt the source. No additional client-side content protection is required.

So, in general, there is no need for a license server when using ClearKey DRM.
Of course, an additional attempt to hide the plain-text key could be made using an extension to the client’s player, such as JavaScript modules and the like. In general, this approach is perceived to be less secure, because it is much easier to debug JavaScript code on the client side. More on ClearKey – https://www.w3.org/TR/encrypted-media/#clear-key

Here are all the steps, from the server to the client, to use ClearKey.

STEP 1) Download and install bento4 software.

bento4 is an open-source toolkit for manipulating some of the most common video formats – MP4 and DASH/HLS/CMAF media. The download page is https://www.bento4.com/downloads/ and the Linux binary for the latest stable version is https://www.bok.net/Bento4/binaries/Bento4-SDK-1-6-0-639.x86_64-unknown-linux.zip. There is also a source code snapshot link.
Download the famous Blender video for the demonstration: https://download.blender.org/demo/movies/BBB/bbb_sunflower_1080p_30fps_normal.mp4
Download and unpack the binary Bento4-SDK-1-6-0-639.x86_64-unknown-linux.zip.
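For reference, the download and unpack steps can be scripted as below (plain wget and unzip; the URLs are the ones given above and the unpacked directory follows the name of the zip):

wget https://www.bok.net/Bento4/binaries/Bento4-SDK-1-6-0-639.x86_64-unknown-linux.zip
unzip Bento4-SDK-1-6-0-639.x86_64-unknown-linux.zip
wget https://download.blender.org/demo/movies/BBB/bbb_sunflower_1080p_30fps_normal.mp4
# the tools (mp4info, mp4encrypt, mp4dash and so on) live in the SDK's bin directory
ls Bento4-SDK-1-6-0-639.x86_64-unknown-linux/bin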
Keep on reading!

collectd nginx plugin: curl_easy_perform failed because of selinux

Enabling the Nginx plugin for collectd under CentOS (or any other system using SELinux) might be confusing for a newbie. Most sources on the Internet just say to install collectd-nginx:

yum install -y collectd-nginx

and configure it in the nginx.conf and collectd.conf. Still, the statistics might not work as expected – collectd may not be able to gather statistics from Nginx.
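For context, the plugin scrapes the Nginx stub_status page, so the two configurations usually look like the sketch below (the location name and URL are assumptions; adjust them to your setup):

# nginx.conf - expose the status page to localhost only
location /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}

# collectd.conf - point the nginx plugin at the status page
LoadPlugin nginx
<Plugin nginx>
    URL "http://127.0.0.1/nginx_status"
</Plugin>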

SELinux may prevent the collectd daemon (the plugin) from connecting to Nginx and gathering statistics from the Nginx stats page.

Checking the collectd log reveals the problem:
Keep on reading!

nginx proxy cache and expires directive – pass-through the origin cache control

Proxying static content sometimes requires modifying the expires directive on the proxy server, but sometimes it may just need to pass on the origin's expires header. What if “expires” is defined in the server section and one needs to pass through the value from the origin via the proxy server?

Simply switch off expires with “expires off”. It disables an earlier definition for the block it is used in, and the proxy's answer to the client will include the origin's cache control headers.

It won’t disable cache control in the sense of adding no-cache to the proxy's answer to the client. So if “expires” is used in a block and a pass-through from the origin is required, just make a location block with “expires off”:

        server {
                listen          10.10.10.10:443 ssl http2;
                server_name     srv1.example.com;

                ssl_certificate  /etc/ssl/nginx/srv1.example.com.chain.crt;
                ssl_certificate_key /etc/ssl/nginx/srv1.example.com.key;

                resolver 8.8.8.8;

                client_max_body_size 12m;

                expires         -1;
                root            /mnt/storage/web/root;
                access_log      /mnt/storage/web/logs/srv1.example.com;
                error_log       /mnt/storage/web/logs/srv1.example.com warn;

                location / {
                        #proxy
                        proxy_buffer_size   128k;
                        proxy_buffers   4 256k;
                        proxy_busy_buffers_size   256k;
                        proxy_buffering off;
                        proxy_read_timeout 600;
                        proxy_send_timeout 600;
                        proxy_store off;
                        proxy_cache off;
                        proxy_redirect off;
                        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
                        proxy_no_cache $http_pragma $http_authorization;
                        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
                        proxy_cache_bypass $http_pragma $http_authorization;

                        proxy_pass https://https_backend;
                        proxy_http_version 1.1;
                        proxy_set_header Connection "";
                        proxy_set_header Host $host;
                        proxy_set_header        X-Real-IP-EXAMPLE       $remote_addr;
                        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                }

                location ~* \.jpeg {
                        expires off;
                        #proxy
                        proxy_buffer_size   128k;
                        proxy_buffers   4 256k;
                        proxy_busy_buffers_size   256k;
                        proxy_buffering off;
                        proxy_read_timeout 600;
                        proxy_send_timeout 600;
                        proxy_store off;
                        proxy_cache off;
                        proxy_redirect off;
                        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
                        proxy_no_cache $http_pragma $http_authorization;
                        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
                        proxy_cache_bypass $http_pragma $http_authorization;
                        proxy_ignore_headers "Expires" "Cache-Control";


                        proxy_pass https://https_backend;
                        proxy_http_version 1.1;
                        proxy_set_header Connection "";
                        proxy_set_header Host $host;
                        proxy_set_header        X-Real-IP-EXAMPLE       $remote_addr;
                        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                }

                 location ~* \.(jpg|gif|png|css|mcss|js|mjs|woff|woff2)$ {
                        expires 30d;
                        #proxy
                        proxy_buffer_size   128k;
                        proxy_buffers   4 256k;
                        proxy_busy_buffers_size   256k;
                        proxy_buffering off;
                        proxy_read_timeout 600;
                        proxy_send_timeout 600;
                        proxy_store off;
                        proxy_cache off;
                        proxy_redirect off;
                        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
                        proxy_no_cache $http_pragma $http_authorization;
                        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
                        proxy_cache_bypass $http_pragma $http_authorization;

                        proxy_pass http://http_backend;
                        proxy_http_version 1.1;
                        proxy_set_header Connection "";
                        proxy_set_header Host $host;
                        proxy_set_header        X-Real-IP-EXAMPLE       $remote_addr;
                        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                }
        }

The above sample configuration defines 3 location blocks for 3 different “expires” cases:

  1. Uses the server block “expires -1” – a proxy for dynamic content such as the proxied site's application code. No caching.
  2. Uses the location block .jpeg with “expires off” – the proxy will pass through the value from the origin server, whatever that value is.
  3. Uses the location block .(jpg|gif|png|css|mcss|js|mjs|woff|woff2) with “expires 30d” – it sets a +30 days expires header regardless of the origin value. The origin server's value is replaced by today +30 days in the cache control headers (see the verification example after this list).
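Each case can be verified by requesting a matching URI and inspecting the response headers; curl is enough for this (the URI below is only an example):

curl -sI https://srv1.example.com/some/image.jpeg | grep -iE 'expires|cache-control'

For case 2 the Expires/Cache-Control values should match what the origin sends; for case 3 they should always be about 30 days in the future.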

nginx and proxy_cache not growing cache despite max_size is bigger – shared memory zone to blame

One of our big Nginx cache servers has recently been upgraded to 70T of storage, which is pretty good storage for a proxy. And in a hurry to configure the big storage, we changed only the “max_size” option of the proxy_cache_path directive! After a week in production, the proxy reached 23T and just stopped growing for no apparent reason! Space and Nginx max_size were OK – 75T total space and 70T for the proxy cache – but Nginx had not added more space for two days after reaching 23T of occupied space, which was impossible because all files were kept for 5 years and 200G per day were generated. There were no errors in the logs and we even use the “virtual host traffic status module” – Live status information like used space and more for nginx proxy cache – but still no clue why it did not grow above this threshold of 23T! And it began to remove cached objects!

proxy_cache_path /mnt/cache levels=1:2 keys_zone=STATIC:900m inactive=42600h max_size=70000g;

It appeared we had exhausted the shared memory zone limit for the zone! And the Nginx cache just stopped growing.

According to the Nginx manual, “one megabyte zone can store about 8 thousand keys”. Apparently, after 23T of files we had passed 900 × 8 000 = 7 200 000 keys and exhausted the limit we configured in the proxy_cache_path line!

The solution is really simple: just increase the limit of the shared memory zone for the zone.

proxy_cache_path /mnt/cache levels=1:2 keys_zone=STATIC:4000m inactive=42600h max_size=70000g;

In the past, with a smaller cache (15T), 900 Mbytes were enough for the cache’s shared memory. Now we set it to 4000 Mbytes to be able to store approximately 32 000 000 keys. We had 23T with 900M of shared memory for keys (for our setup – your setup may differ a lot!!!), so setting it to 4000M, which is more than 4 times bigger than before, will probably be enough for the rest of the free storage to be used to its full extent.
Be careful: this operation will trigger the “Nginx cache loader” to load the cache index and may produce IO during the operation!
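To apply the new keys_zone size, test the configuration and reload Nginx (standard administration commands, nothing specific to this setup):

nginx -t && systemctl reload nginx

The reload keeps the workers serving traffic while the resized shared memory zone is created and the cache loader walks the cache index.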

Nginx shared memory zone size

Nginx workers use shared mappings – mmap – which is different from SysV and POSIX shared memory (so you cannot use the ipcs tool to check for the shared memory). You should check how much memory the process is currently using. Here is how to get the size of the shared memory zone occupied by the Nginx processes; as you can see, each Nginx worker is around 900M in the “RSS” (Resident Set Size) column:

[root@srv ~]# ps -o rss,pid,comm,user,cmd -C nginx
  RSS   PID COMMAND         USER     CMD
904888 3979 nginx           nginx    nginx: worker process
905116 3980 nginx           nginx    nginx: worker process
904828 3981 nginx           nginx    nginx: worker process
905176 3982 nginx           nginx    nginx: worker process
905196 3983 nginx           nginx    nginx: worker process
905008 3984 nginx           nginx    nginx: worker process
904908 3985 nginx           nginx    nginx: worker process
905372 3986 nginx           nginx    nginx: worker process
905088 3987 nginx           nginx    nginx: worker process
902688 3988 nginx           nginx    nginx: worker process
904932 3989 nginx           nginx    nginx: worker process
905032 3990 nginx           nginx    nginx: worker process
26452  3991 nginx           nginx    nginx: cache manager process
33928  8148 nginx           root     nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf

For a single Nginx process:

[root@srv ~]# cat /proc/3981/status |grep RssShmem
RssShmem:         894240 kB

You can check the occupied inodes of your file system with df to get approximately how many files you have:

[root@srv ~]# df -i
Filesystem         Inodes   IUsed      IFree IUse% Mounted on
devtmpfs         16452656     715   16451941   1% /dev
tmpfs            16455999       1   16455998   1% /dev/shm
tmpfs            16455999    1153   16454846   1% /run
tmpfs            16455999      17   16455982   1% /sys/fs/cgroup
/dev/md1          2076704   39687    2037017   2% /
/dev/md3       1214685184 6897020 1207788164   1% /mnt/cache
tmpfs            16455999       5   16455994   1% /run/user/0

The inodes are around 6 897 020 and not growing for days. This number is very close to the maximum number of keys that a 900M key shared memory zone may store!

Two days after changing the key shared memory zone limit to 4000Mbytes:

proxy_cache_path /mnt/cache levels=1:2 keys_zone=STATIC:4000m inactive=42600h max_size=70000g;

The Nginx workers passed 900 Mbytes of RSS (Resident Set Size) and reached 1 Gbyte. The occupied cache size grew by 1T and continued to grow.

[root@srv ~]# ps -o rss,pid,comm,user,cmd -C nginx
  RSS   PID COMMAND         USER     CMD
52256  8148 nginx           root     nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
1005624 16899 nginx         nginx    nginx: worker process
1005948 16900 nginx         nginx    nginx: worker process
1005936 16901 nginx         nginx    nginx: worker process
1005912 16902 nginx         nginx    nginx: worker process
1005832 16903 nginx         nginx    nginx: worker process
1005836 16904 nginx         nginx    nginx: worker process
1005868 16905 nginx         nginx    nginx: worker process
1005932 16906 nginx         nginx    nginx: worker process
1005796 16907 nginx         nginx    nginx: worker process
1005980 16908 nginx         nginx    nginx: worker process
1005848 16909 nginx         nginx    nginx: worker process
1005888 16910 nginx         nginx    nginx: worker process
26328 16911 nginx           nginx    nginx: cache manager process

The occupied inodes also increased to 7 484 582, which means the cache added around 590 000 new files.

[root@srv ~]# df -i
Filesystem         Inodes   IUsed      IFree IUse% Mounted on
devtmpfs         16452656     715   16451941    1% /dev
tmpfs            16455999       1   16455998    1% /dev/shm
tmpfs            16455999    1153   16454846    1% /run
tmpfs            16455999      17   16455982    1% /sys/fs/cgroup
/dev/md1          2076704   39690    2037014    2% /
/dev/md3       1214685184 7484582 1207200602    1% /mnt/cache
tmpfs            16455999       5   16455994    1% /run/user/0

nginx remote logging to UDP rsyslog server (CentOS 7)

This article presents all the configuration needed to remotely save the access logs of an Nginx web server. The configuration for both the client and the server sides is included. The client and the server use the CentOS 7 Linux distribution, but the configuration could be used under different Linux distributions. Probably only the SELinux rules are specific to CentOS 7, and the firewalld rules are specific to those who use it as a firewall replacing iptables. Here is a summary of what to expect:

  • Client-side – nginx configuration
  • Server-side – rsyslog configuration to accept UDP connections
  • Server-side – SELinux and firewall configuration

The JSON formatted logs may be sent to an Elasticsearch server, for example. Here is how to do it – send access logs in json to Elasticsearch using rsyslog

STEP 1) Client-side – the Nginx configuration.

The Nginx configuration is pretty simple – just a single line with the log template and the IP (and port, if not the default 514) of the rsyslog server. For the record, this is the official documentation: https://nginx.org/en/docs/syslog.html. In addition, it is worth mentioning that there can be multiple access_log directives in a single section to log simultaneously to different targets (and the templates may be different or the same). So you can send the access log output of a section both locally and remotely.
Nginx configuration (probably /etc/nginx/nginx.conf, or wherever your Nginx configuration files are organized):

server {
     .....
     access_log      /var/log/nginx/example.com_access.log main;
     access_log      syslog:server=10.10.10.2:514,facility=local7,tag=nginx,severity=info main3;
     .....
}

The “main” and “main3” are just names of the logging templates defined earlier (you may check rsyslog remote logging – prevent local messages to appear to see an interesting Nginx logging template).
The error log can also be logged remotely:

error_log syslog:server=10.10.10.3 debug;

STEP 2) Server-side – rsyslog configuration to accept UDP connections.

Of course, if you have not installed rsyslog, it’s high time you installed it with (for CentOS 7):

yum install -y rsyslog

To enable rsyslog to listen for UDP connections your rsyslog configuration file (/etc/rsyslog.conf) must include the following:

$ModLoad imudp
$UDPServerRun 514

Most Linux distributions ship these two lines commented out, so you just need to uncomment them by removing the “#” from the beginning of the lines. If the lines are missing, add them under the “MODULES” section (it should be near the first lines of the rsyslog configuration file).
Replace 514 with the UDP port you prefer for listening.
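For reference, newer rsyslog versions also accept the equivalent RainerScript form of the same two settings (either syntax works):

module(load="imudp")
input(type="imudp" port="514")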
Write the clients’ incoming log lines to a different location and prevent them from merging with the local log messages – see rsyslog remote logging – prevent local messages to appear. Include the following as the first rule under the rules section (starting with “RULES”) of the rsyslog configuration file (/etc/rsyslog.conf):

# Remote logging
$template HostIPtemp,"/mnt/logging/%FROMHOST-IP%.log"
if ($fromhost-ip != "127.0.0.1" ) then ?HostIPtemp
& stop

Only the logs of remote hosts are going to be saved, under /mnt/logging/%FROMHOST-IP%.log – one file per remote IP.
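Because the server now listens on UDP port 514, the firewall must allow it. With firewalld (mentioned in the summary above), a typical pair of commands is:

firewall-cmd --permanent --add-port=514/udp
firewall-cmd --reload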
Keep on reading!

nginx proxy cache – log the upstream response server, time, cache status, connect time and more in nginx access logs

The Nginx upstream module exposes embedded variables, which we can use to log them in the Nginx access log files.
Some of the variables are really interesting and could be of great use to system administrators and, in general, for tuning your systems (a content delivery network, for instance). For example, you can log:

  • $upstream_cache_status – the cache status of the object the server served. For each URI, the logs will show whether the item came from the cache (HIT) or Nginx used an upstream server to get it (MISS)
  • $upstream_response_time – the time Nginx proxy needed to get the resource from the upstream server
  • $upstream_addr – the Nginx upstream server used for the requested URI in the logs.
  • $upstream_connect_time – the time spent establishing a connection with the specific upstream server

And there are many more – check the documentation under the heading “Embedded Variables” at the bottom of http://nginx.org/en/docs/http/ngx_http_upstream_module.html

For example, in peak hours, you can see how the time to get the resource from the upstream servers changes.

And you can subtract that time from the total time your server needed to serve the URI to the client.

Of course, you can use this with any upstream case, not only with the proxy cache! These variables may be used with application back-end servers like PHP (FastCGI) application servers and more. A single line in the access log file can then carry information not only about the URI but also about the time spent generating the response in the application server.
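As an illustration, with the JSON log format shown in the example below, the time spent inside Nginx itself can be computed per request by subtracting the upstream time from the total request time (the jq filter is an assumption, not part of the original setup):

jq -r 'select(.upstream_response_time != "") | (.request_time|tonumber) - (.upstream_response_time|tonumber)' /var/log/nginx/example.com-json.log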

Example

Logging in JSON format (JSON is just for the example, you can use the default string):

        log_format main3 escape=json '{'
                '"remote_addr":"$remote_addr",'
                '"time_iso8601":"$time_iso8601",'
                '"request_uri":"$request_uri",'
                '"request_length":"$request_length",'
                '"request_method":"$request_method",'
                '"request_time":"$request_time",'
                '"server_port":"$server_port",'
                '"server_protocol":"$server_protocol",'
                '"ssl_protocol":"$ssl_protocol",'
                '"status":"$status",'
                '"bytes_sent":"$bytes_sent",'
                '"http_referer":"$http_referer",'
                '"http_user_agent":"$http_user_agent",'
                '"upstream_response_time":"$upstream_response_time",'
                '"upstream_addr":"$upstream_addr",'
                '"upstream_connect_time":"$upstream_connect_time",'
                '"upstream_cache_status":"$upstream_cache_status",'
                '"tcpinfo_rtt":"$tcpinfo_rtt",'
                '"tcpinfo_rttvar":"$tcpinfo_rttvar"'
                '}';

We included the variables we needed, but there are a lot more, check out the Nginx documentation for more.
Just add the above snippet to your Nginx configuration and activate it with the access_log directive:

access_log      /var/log/nginx/example.com-json.log main3;

“main3” is the name of the format and it could be anything you like.

And the logs look like:

{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T13:36:33+00:00","request_uri":"/i/example/bc/bcda7f798ea1c75f18838bc3f0ffbd1c_200.jpg","request_length":"412","request_method":"GET","request_time":"0.325","server_port":"8801","server_protocol":"HTTP/1.1","ssl_protocol":"","status":"404","bytes_sent":"332","http_referer":"https://example.com/test_1","http_user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12","upstream_response_time":"0.324","upstream_addr":"10.10.10.2","upstream_connect_time":"0.077","upstream_cache_status":"MISS","tcpinfo_rtt":"45614","tcpinfo_rttvar":"22807"}
{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T13:36:33+00:00","request_uri":"/i/example/2d/2df5f3dfe1754b3b4ba8ac66159c0384_200.jpg","request_length":"412","request_method":"GET","request_time":"0.242","server_port":"8801","server_protocol":"HTTP/1.1","ssl_protocol":"","status":"404","bytes_sent":"332","http_referer":"https://example.com/test_1","http_user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12","upstream_response_time":"0.242","upstream_addr":"10.10.10.2","upstream_connect_time":"0.000","upstream_cache_status":"MISS","tcpinfo_rtt":"46187","tcpinfo_rttvar":"23093"}
{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T13:36:41+00:00","request_uri":"/flv/example/test_1.ts?st=E05FMg-DSIAgRfVhbadUWQ&e=1568381799&sopt=pdlfwefdfsr","request_length":"357","request_method":"GET","request_time":"0.960","server_port":"8801","server_protocol":"HTTP/1.0","ssl_protocol":"","status":"200","bytes_sent":"3988358","http_referer":"","http_user_agent":"Lavf53.32.100","upstream_response_time":"0.959","upstream_addr":"10.10.10.2","upstream_connect_time":"0.000","upstream_cache_status":"MISS","tcpinfo_rtt":"46320","tcpinfo_rttvar":"91"}
{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T14:09:34+00:00","request_uri":"/flv/example/aee001dce114c88874b306bc73c2d482_1.ts?range=564-1804987","request_length":"562","request_method":"GET","request_time":"0.613","server_port":"8801","server_protocol":"HTTP/1.0","ssl_protocol":"","status":"200","bytes_sent":"5318082","http_referer":"","http_user_agent":"AppleCoreMedia/1.0.0.16E227 (iPad; U; CPU OS 12_2 like Mac OS X; en_us)","upstream_response_time":"","upstream_addr":"","upstream_connect_time":"","upstream_cache_status":"HIT","tcpinfo_rtt":"45322","tcpinfo_rttvar":"295"}

It’s easy to pretty-print them in the console with the “jq” tool:

[root@srv logging]# tail -f 10.10.10.10.log|awk 'BEGIN {FS="{"} {print "{"$2}'|jq "."
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T13:36:33+00:00",
  "request_uri": "/i/example/bc/bcda7f798ea1c75f18838bc3f0ffbd1c_200.jpg",
  "request_length": "412",
  "request_method": "GET",
  "request_time": "0.325",
  "server_port": "8801",
  "server_protocol": "HTTP/1.1",
  "ssl_protocol": "",
  "status": "404",
  "bytes_sent": "332",
  "http_referer": "https://example.com/test_1",
  "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12",
  "upstream_response_time": "0.324",
  "upstream_addr": "10.10.10.2",
  "upstream_connect_time": "0.077",
  "upstream_cache_status": "MISS",
  "tcpinfo_rtt": "45614",
  "tcpinfo_rttvar": "22807"
}
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T13:36:33+00:00",
  "request_uri": "/i/example/2d/2df5f3dfe1754b3b4ba8ac66159c0384_200.jpg",
  "request_length": "412",
  "request_method": "GET",
  "request_time": "0.242",
  "server_port": "8801",
  "server_protocol": "HTTP/1.1",
  "ssl_protocol": "",
  "status": "404",
  "bytes_sent": "332",
  "http_referer": "https://example.com/test_1",
  "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12",
  "upstream_response_time": "0.242",
  "upstream_addr": "10.10.10.2",
  "upstream_connect_time": "0.000",
  "upstream_cache_status": "MISS",
  "tcpinfo_rtt": "46187",
  "tcpinfo_rttvar": "23093"
}
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T13:36:41+00:00",
  "request_uri": "/flv/example/test_1.ts?st=E05FMg-DSIAgRfVhbadUWQ&e=1568381799&sopt=pdlfwefdfsr",
  "request_length": "357",
  "request_method": "GET",
  "request_time": "0.960",
  "server_port": "8801",
  "server_protocol": "HTTP/1.0",
  "ssl_protocol": "",
  "status": "200",
  "bytes_sent": "3988358",
  "http_referer": "",
  "http_user_agent": "Lavf53.32.100",
  "upstream_response_time": "0.959",
  "upstream_addr": "10.10.10.2",
  "upstream_connect_time": "0.000",
  "upstream_cache_status": "MISS",
  "tcpinfo_rtt": "46320",
  "tcpinfo_rttvar": "91"
}
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T14:09:34+00:00",
  "request_uri": "/flv/example/aee001dce114c88874b306bc73c2d482_1.ts?range=564-1804987",
  "request_length": "562",
  "request_method": "GET",
  "request_time": "0.613",
  "server_port": "8801",
  "server_protocol": "HTTP/1.0",
  "ssl_protocol": "",
  "status": "200",
  "bytes_sent": "5318082",
  "http_referer": "",
  "http_user_agent": "AppleCoreMedia/1.0.0.16E227 (iPad; U; CPU OS 12_2 like Mac OS X; en_us)",
  "upstream_response_time": "",
  "upstream_addr": "",
  "upstream_connect_time": "",
  "upstream_cache_status": "HIT",
  "tcpinfo_rtt": "45322",
  "tcpinfo_rttvar": "295"
}

Three misses and one hit – for the hit, three of the upstream variables we used are blank, because the server took the item from the cache.

Live status information like used space and more for nginx proxy cache

Using the Nginx virtual host traffic status module, you can have extended live information for your proxy cache module and the proxy cache upstream servers. We have covered how to install the module here – Install Nginx virtual host traffic status module – traffic information in Nginx and more per server block and upstreams – and this article just shows what information you could get when using the module with the proxy cache (and the upstream servers) module.
In general, there is no live information about the Nginx proxy cache. Of course, by the space occupied on the disk you can guess how much space is taken by your Nginx cache (or, when you restart or upgrade Nginx, it reinitializes the cache and writes the numbers in the error log when finished). With the “Nginx virtual host traffic status module” – https://github.com/vozlt/nginx-module-vts – you get an additional status information page containing information for the proxy cache module (we include only the proxy cache part here; for more, look at the other article mentioned above), too:

Per upstream server

  • State – up, down, backup server and so on.
  • Response Time – the time the server responded last time. You can use this to see how far away your server is and to detect problems with your upstream connectivity.
  • Weight – the weight of the server in the group. It’s from the configuration file.
  • Max Fails – the max fail attempts before it blacklists the upstream for “Fail Timeout” time. It’s from the configuration file.
  • Fail Timeout – the period for which the server will be blacklisted, and also the time window within which all the failures (from Max Fails) must occur. It’s from the configuration file.
  • Requests – Total (from the start of the server), Req/s (requests per second at the moment of loading the extended status page), Time.
  • Responses – Total and split by status code classes – 1xx, 2xx, 3xx, 4xx, 5xx.
  • Traffic – Sent (total sent from the start of the server), Rcvd (total received from the start of the server), Sent/s (sent per second at the moment), Rcvd/s (received per second at the moment).

Per cache – i.e. key zone name

  • Size – Capacity (from the configuration file) and Used (the used space, updated live!). With this you have access to the used space of your zones just by loading a page – the extended status page of this module!
  • Traffic – Sent (total sent from the start of the server), Rcvd (total received from the start of the server), Sent/s (sent per second at the moment), Rcvd/s (received per second at the moment).
  • Cache – Miss, Bypass, Expired, Stale, Updating, Revalidated, Hit, Scarce, Total – all self-explanatory, and all counters are from the start of the server.

You can compute the effectiveness of your cache for a period of time. For example, you can build different graphs based on this data for long periods and for different short periods, like peaks and off-peaks. We might have an article on the subject.

SCREENSHOT 1) Cache with three cache zones and two upstream servers – main and backup

As you can see, our biggest zone has 2.92T occupied, which is 100% of the available space, so the cache manager is probably deleting at the moment. The hits are 24551772 and the total is 28023927, so the ratio is 87.6%! 87.6% of the requests for this zone are served by the server without touching the upstream servers. In the first cache zone, we have more aggressive time expiring, so 21% of the requests were updated.
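The ratio is plain arithmetic – hits multiplied by 100 and divided by the total requests; for example, with bc:

echo "scale=1; 24551772*100/28023927" | bc
87.6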

Keep on reading!

Install Nginx virtual host traffic status module – traffic information in nginx and more per server block and upstreams

This article is going to show how to compile and install the Nginx module – ngx_http_vhost_traffic_status.

The module gathers traffic information per server block and upstream server, and shows information for the Nginx proxy cache, like used space.

In addition, the module shows the type of the response – 1xx, 2xx, 3xx, 4xx, 5xx and total – so if problems occur in a server block or an upstream server, they are easy to spot.
This module, nginx-module-vts, offers really extended status information for your Nginx.
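A minimal sketch of the typical build and activation follows; the nginx version and paths are assumptions, while the vhost_traffic_status_zone and vhost_traffic_status_display directives come from the module's documentation:

# fetch the module and an nginx source tree matching the installed version
git clone https://github.com/vozlt/nginx-module-vts.git
wget http://nginx.org/download/nginx-1.24.0.tar.gz
tar xf nginx-1.24.0.tar.gz && cd nginx-1.24.0
# compile nginx with the module statically added
./configure --add-module=../nginx-module-vts
make

# nginx.conf - enable the shared zone and expose the status page
http {
    vhost_traffic_status_zone;
    server {
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}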
Here is the status page of one of our web servers with 18 virtual hosts:

The status page shows all virtual hosts in section “Server zones” and all upstream servers for the FastCGI PHP backend servers.

Traffic, requests, and status codes are available. All data is available in JSON, too.

Server zones information

  • Requests – Total, Requests/s, Time
  • Responses – 1xx, 2xx, 3xx, 4xx, 5xx, Total
  • Traffic – Sent, Received, Sent/s, Received/s
  • Cache – Miss, Bypass, Expired, Stale, Updating, Revalidated, Hit, Scarce, Total

In addition to the information above, there are State, Response Time, Weight, Max Fails and Fail Timeout for all the upstream servers. And for the Nginx proxy cache there are Size and Capacity (live information!) plus all the information above per zone – there is an additional article here: Live status information like used space and more for nginx proxy cache.
Keep on reading!