collectd nginx plugin: curl_easy_perform failed because of selinux

Enabling the Nginx plugin for collectd under CentOS (or any other system using SELinux) might be confusing for a newbie. Most sources on the Internet would just install collectd-nginx:

yum install -y collectd-nginx

and configure it in nginx.conf and collectd.conf. Still, the statistics might not work as expected – collectd may not be able to gather statistics from Nginx.

SELinux may prevent the collectd daemon (plugin) from connecting to Nginx and gathering statistics from the Nginx stats page.
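A quick way to confirm it is SELinux and to allow the plugin's outgoing connections is sketched below. The audit search is standard; the collectd_tcp_network_connect boolean is assumed to exist in CentOS's targeted policy, so check for it with getsebool before relying on it:

# check for recent SELinux denials caused by collectd
ausearch -m avc -c collectd --start recent
# list the collectd-related booleans (the one used below is assumed to be present)
getsebool -a | grep collectd
# allow collectd to make outgoing TCP connections, persistently
setsebool -P collectd_tcp_network_connect 1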

Checking the collectd log reveals the problem:
Keep on reading!

aptly remove a package from a repository using the cli

Here is a quick tip – how to remove a package from a local aptly repository:

  1. Remove the package from the local repository.
  2. Create a new snapshot from the local repository.
  3. Publish the snapshot by switching to the newly created snapshot from the above step.

The commands below operate on a repository named xenial-apps and remove the package example-app with version 10.5.1.22-ubuntu20. The snapshot name xenial-apps1588149526 is just a temporary name (the suffix is the Unix timestamp at the time of creation).

aptly repo remove  xenial-apps 'example-app (= 10.5.1.22-ubuntu20)'
aptly snapshot create xenial-apps1588149526 from repo xenial-apps
aptly publish switch xenial-apps ubuntu xenial-apps1588149526
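To double-check the result, the local repository can be searched for the package and the publish points listed – both are standard aptly subcommands, used here with the names from the example above:

aptly repo search xenial-apps 'example-app'
aptly publish list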

Real world example.

This is the log from our system with just the names changed:
Keep on reading!

Kibana server is not ready yet – and Waiting for that migration to complete in the logs

Now, in the era of the cloud and big data, there comes a time when the admin needs to save all the logs in a central place! Elasticsearch and Kibana look good for the job! And after months of hassle-free operation, Elasticsearch just stopped working, and after a restart and an upgrade (of both Elasticsearch and Kibana), Kibana showed an error message:

Kibana server is not ready yet

And if you have tried stopping and starting Kibana and Elasticsearch and Kibana still shows the above message, here is what you should do:

  1. Check whether the two services, Kibana and Elasticsearch, are running! If either of them is not running, start it.
  2. Check the logs, especially the Elasticsearch logs. The first place to look is the systemd journal with the journalctl program (systemctl status will also point out the problem by showing the last lines of the logs).
  3. Look at the last lines, and if they include

    Another Kibana instance appears to be migrating the index

    This article is probably the right place to solve the issue and get your setup running again.

STEP 1) Running services and analyzing the logs.

If Kibana and Elasticsearch are operated by systemd, it is easy to access the logs with systemctl and journalctl.
Check whether Kibana and Elasticsearch are running with:

[root@loganalyzer ~]# ps ax|grep elasticsearch|grep -v grep
  258 ?        Ssl  836:31 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.locale.providers=COMPAT -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.io.tmpdir=/tmp/elasticsearch-13303119363353782625 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m -XX:MaxDirectMemorySize=536870912 -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -Des.bundled_jdk=true -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
  360 ?        Sl     0:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
[root@loganalyzer ~]# ps ax|grep kibana|grep -v grep
 1284 ?        Ssl    4:32 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

If one of the two services is missing, you must start it! And second, each service should have only one instance (i.e. one process)!
Then check the Kibana logs with journalctl:

[root@loganalyzer ~]# journalctl -u kibana.service
.....
.....
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","plugins","bfetch"],"pid":1219,"message":"Setting up plugin"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","savedobjects-service"],"pid":1219,"message":"Waiting until all Elasticsearch nodes
 are compatible with Kibana before starting saved objects migrations..."}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","savedobjects-service"],"pid":1219,"message":"Starting saved objects migrations"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","savedobjects-service"],"pid":1219,"message":"Creating index .kibana_task_manager_2
."}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["warning","savedobjects-service"],"pid":1219,"message":"Unable to connect to Elasticsearch
. Error: [resource_already_exists_exception] index [.kibana_task_manager_2/O070AunfSyG6hwd6_pqqRA] already exists, with { index_uuid=\"O070AunfSyG6hwd6_pqqRA\" & index=\".kibana_task_manager
_2\" }"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["warning","savedobjects-service"],"pid":1219,"message":"Another Kibana instance appears to
 be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_2 
and restarting Kibana."}

The systemctl status output may be used, too – the error and the index name are shown in its last lines.

The problem here is that a migration of the index .kibana_task_manager_2 was started but abandoned for an unknown reason, and now the index should be deleted so the Kibana service can be used again. The index may have a different name in your case, but it is the same problem.

STEP 2) Delete the Kibana index

Delete the Kibana index in the Elasticsearch backend using curl and an HTTP/HTTPS request such as:

[root@loganalyzer kibana]# curl -XDELETE http://192.168.0.2:9200/.kibana_task_manager_2
{"acknowledged":true}

Keep on reading!

Cron missing path – executing docker/podman – adding network: failed to locate iptables

If you have ever executed complex scripts from the cron system, you have inevitably discovered that the Linux environment there is different from the one in a login or SSH shell. The different environment tends to lead to a missing or different PATH variable! Here is what happens when podman starts a container from a cron script:

time="2020-04-19T20:45:20Z" level=error msg="Error adding network: failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
time="2020-04-19T20:45:20Z" level=error msg="Error while adding pod to CNI network \"podman\": failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
Error: unable to start container "onedrive-cli": error configuring network namespace for container d297cf80db20441d4258a1acc7d810444795d1ca8730ab242d9fe8a13eaa697d: failed to locate iptables: exec: "iptables": executable file not found in $PATH

The iptables executable is not found because the PATH variable is different from the one in the login or SSH shell. Executing the commands or the script under SSH or a login shell results in no error and a proper podman (docker) execution!

A similar problem could happen with any other software that tries to execute iptables or another tool not found in cron's PATH, because cron's environment is very limited.

To ensure the PATH is like the user's (root) environment, just source the “profile” or “.bashrc” file of the current user before executing the script, or in the first lines of the script itself.
This will do the trick.

. /etc/profile

Or the user's custom one:

. ~/.bashrc

Or the default OS bashrc

. /etc/bashrc

The dot may be replaced by “source”:

source /etc/bashrc

All (environment) variables will be available after the source command.
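For example, a cron entry could source the profile right before the actual command – a minimal sketch, where the script path is purely hypothetical:

# crontab -e
# source the login profile, then run the script in the same shell invocation
30 2 * * * . /etc/profile; /usr/local/bin/start-onedrive-container.sh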

Here is the difference:
The environment without sourcing the profile/bashrc file:

 
LANG=en_US.UTF-8
XDG_SESSION_ID=19118
USER=root
PWD=/root
HOME=/root
SHELL=/bin/sh
SHLVL=1
LOGNAME=root
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
XDG_RUNTIME_DIR=/run/user/0
PATH=/usr/bin:/bin
_=/usr/bin/env

Sourcing the “/etc/profile” file:

LANG=en_US.UTF-8
HISTCONTROL=ignoredups
HOSTNAME=srv.example.com
XDG_SESSION_ID=19165
USER=root
PWD=/root
HOME=/root
MAIL=/var/spool/mail/root
SHELL=/bin/bash
SHLVL=1
LOGNAME=root
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
XDG_RUNTIME_DIR=/run/user/0
PATH=/usr/local/sbin:/usr/sbin:/usr/bin:/bin
HISTSIZE=1000
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/usr/bin/env

There are multiple additional environment variables, which could be important for the user's scripts executed by cron.

And in CentOS 8, iptables happens to be in “/usr/sbin/iptables” – and /usr/sbin is not included in the default cron PATH variable!
Of course, the PATH variable may also be set directly in the crontab, but that only works until the next tool lives in a path that is missing from it while present in the user's shell PATH. It is just better to ensure the two environments are the same every time by sourcing an environment configuration file such as /etc/profile or the user's .bashrc (or the default one in /etc/bashrc).
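If the crontab-only approach is preferred, it is just a variable assignment at the top of the crontab – again a sketch, with an example PATH value and a hypothetical script path:

# crontab -e
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
30 2 * * * /usr/local/bin/start-onedrive-container.sh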

Mirror a PPA repository using aptly – PHP (ppa:ondrej/php)

This is a simple example of how to mirror a PPA repository to a local server. The Ubuntu PPA to mirror is ppa:ondrej/php, which offers the user different PHP versions generally not available in the stock Ubuntu installation. Of course, the user should be very careful about adding PPA repositories, because they are exactly what the abbreviation stands for – Personal Package Archives.

If you want to know how to install aptly and get a brief description of what it is, you may want to read our previous article – Install aptly under Ubuntu 18 LTS with Nginx serving the packages and the first steps.

What we are going to do – these are the steps needed to have a mirror of an external application repository:

  1. Install aptly in Ubuntu 18 LTS
  2. Create a mirror in aptly
  3. Create a snapshot of the mirror created before
  4. Publish the snapshot to be used in other servers.

and the last step includes an example of how to use the mirror on your local machines.

STEP 1) Install aptly in Ubuntu 18.04 LTS.

As mentioned already, you may follow our article on the subject – Install aptly under Ubuntu 18 LTS with Nginx serving the packages and the first steps. The following steps are based on this installation!
The aptly home directory is “/srv/aptly”, and the installation is manipulated under the “aptly” user.
Change to the aptly user, because the mirroring will happen under it:

root@srv ~ # su - aptly
aptly@srv:~$

STEP 2) Create a mirror in aptly.

Prepare the keys (aptly needs the repository's signing key in its trustedkeys keyring):

aptly@srv:~$ gpg --no-default-keyring --keyring trustedkeys.gpg --keyserver pool.sks-keyservers.net --recv-keys 4F4EA0AAE5267A6C
gpg: requesting key E5267A6C from hkp server pool.sks-keyservers.net
gpg: key E5267A6C: public key "Launchpad PPA for Ondřej Surý" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
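With the key in aptly's trustedkeys keyring, the mirror itself can be created. A minimal sketch of the command – assuming the PPA's standard Launchpad URL and Ubuntu 18's “bionic” distribution with the “main” component – would be:

aptly mirror create ondrej-php http://ppa.launchpad.net/ondrej/php/ubuntu bionic main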

For the key, we've used a GPG key server here, but the key can also be downloaded directly from the original repository, as suggested in the error message below.
If you are not sure where to download the key, you could always just try to create the mirror (in fact, this is in STEP 3) and get the error about the missing key, which explains how to obtain it:
Keep on reading!

nginx proxy cache and expires directive – pass-through the origin cache control

Proxying static content sometimes requires modifying the expires directive on the proxy server, but sometimes it is enough to just pass on the origin's value. What if “expires” is defined in the server section and one needs to pass through the value from the origin via the proxy server?

Simply switch off expires with “expires off”. It disables an earlier definition for the block it is used in, and the proxy answer to the client will include the origin's cache control headers.

It won't disable the cache control in the sense of adding no-cache to the proxy answer to the client. So if “expires” is used in a block and a pass-through from the origin is required, just add a location block with “expires off”:

        server {
                listen          10.10.10.10:443 ssl http2;
                server_name     srv1.example.com;

                ssl_certificate  /etc/ssl/nginx/srv1.example.com.chain.crt;
                ssl_certificate_key /etc/ssl/nginx/srv1.example.com.key;

                resolver 8.8.8.8;

                client_max_body_size 12m;

                expires         -1;
                root            /mnt/storage/web/root;
                access_log      /mnt/storage/web/logs/srv1.example.com;
                error_log       /mnt/storage/web/logs/srv1.example.com warn;

                location / {
                        #proxy
                        proxy_buffer_size   128k;
                        proxy_buffers   4 256k;
                        proxy_busy_buffers_size   256k;
                        proxy_buffering off;
                        proxy_read_timeout 600;
                        proxy_send_timeout 600;
                        proxy_store off;
                        proxy_cache off;
                        proxy_redirect off;
                        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
                        proxy_no_cache $http_pragma $http_authorization;
                        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
                        proxy_cache_bypass $http_pragma $http_authorization;

                        proxy_pass https://https_backend;
                        proxy_http_version 1.1;
                        proxy_set_header Connection "";
                        proxy_set_header Host $host;
                        proxy_set_header        X-Real-IP-EXAMPLE       $remote_addr;
                        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                }

                location ~* \.jpeg {
                        expires off;
                        #proxy
                        proxy_buffer_size   128k;
                        proxy_buffers   4 256k;
                        proxy_busy_buffers_size   256k;
                        proxy_buffering off;
                        proxy_read_timeout 600;
                        proxy_send_timeout 600;
                        proxy_store off;
                        proxy_cache off;
                        proxy_redirect off;
                        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
                        proxy_no_cache $http_pragma $http_authorization;
                        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
                        proxy_cache_bypass $http_pragma $http_authorization;
                        proxy_ignore_headers "Expires" "Cache-Control";


                        proxy_pass https://https_backend;
                        proxy_http_version 1.1;
                        proxy_set_header Connection "";
                        proxy_set_header Host $host;
                        proxy_set_header        X-Real-IP-EXAMPLE       $remote_addr;
                        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                }

                 location ~* \.(jpg|gif|png|css|mcss|js|mjs|woff|woff2)$ {
                        expires 30d;
                        #proxy
                        proxy_buffer_size   128k;
                        proxy_buffers   4 256k;
                        proxy_busy_buffers_size   256k;
                        proxy_buffering off;
                        proxy_read_timeout 600;
                        proxy_send_timeout 600;
                        proxy_store off;
                        proxy_cache off;
                        proxy_redirect off;
                        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
                        proxy_no_cache $http_pragma $http_authorization;
                        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
                        proxy_cache_bypass $http_pragma $http_authorization;

                        proxy_pass http://http_backend;
                        proxy_http_version 1.1;
                        proxy_set_header Connection "";
                        proxy_set_header Host $host;
                        proxy_set_header        X-Real-IP-EXAMPLE       $remote_addr;
                        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                }
        }

The above sample configuration defines 3 location blocks for 3 different “expires” cases:

  1. The location / block uses the server-level “expires -1” – a proxy for dynamic content such as the site application code. No caching.
  2. The .jpeg location block uses “expires off” – the proxy will pass through whatever value comes from the origin server.
  3. The .(jpg|gif|png|css|mcss|js|mjs|woff|woff2) location block uses “expires 30d” – it sets a +30 days expires header regardless of the origin value. The origin server's value is replaced by a date 30 days in the future in the cache headers.
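A quick way to verify which case applies is to check the response headers through the proxy – the host is the one from the sample configuration above and the path is purely hypothetical:

curl -sI https://srv1.example.com/images/sample.jpeg | grep -iE '^(expires|cache-control)'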

podman – Error adding network: failed to allocate for range 0: 10.88.0.46 has been allocated after server reboot

We've just stumbled on the following error on one of our podman CentOS 8 servers after a restart:

[root@srv ~]# podman start mysql-slave
ERRO[0000] Error adding network: failed to allocate for range 0: 10.88.0.46 has been allocated to c97823be46832ddebbce29f3f51e3091620188710cb7ace246e173a7a981baed, duplicate allocation is not allowed 
ERRO[0000] Error while adding pod to CNI network "podman": failed to allocate for range 0: 10.88.0.46 has been allocated to c97823be46832ddebbce29f3f51e3091620188710cb7ace246e173a7a981baed, duplicate allocation is not allowed 
Error: unable to start container "mysql-slave": error configuring network namespace for container c97823be46832ddebbce29f3f51e3091620188710cb7ace246e173a7a981baed: failed to allocate for range 0: 10.88.0.46 has been allocated to c97823be46832ddebbce29f3f51e3091620188710cb7ace246e173a7a981baed, duplicate allocation is not allowed

Apparently, something went wrong, because the two containers were fine before the restart and they had been stopped, started and restarted multiple times.

The solution is to remove the IP-named files in /var/lib/cni/networks/podman and start the podman containers again.

It resembles a bug – https://github.com/containers/libpod/issues/3759 – which should have already been closed by the newer minor CentOS 8 releases.

The interesting part is that the container we are trying to start, mysql-slave, has the ID c97823be46832ddebbce29f3f51e3091620188710cb7ace246e173a7a981baed, yet podman reports it cannot allocate the IP because it has already been allocated to a container with the same ID. That's the problem:

The IP-named files in /var/lib/cni/networks/podman were not removed when the podman container had stopped.

Typically, when a podman container is stopped, the stop process should remove its files in /var/lib/cni/networks/podman. Before restarting the CentOS 8 server, you may want to stop the podman containers yourself.
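For example, to stop all running containers before a reboot (podman stop accepts the --all flag):

podman stop --all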

[root@srv ~]# cd /var/lib/cni/networks/podman
[root@srv podman]# ls -altr
total 24
-rwxr-x---. 1 root root    0  3 Dec  0,43 lock
drwxr-xr-x. 3 root root 4096  3 Dec  0,43 ..
-rw-r--r--. 1 root root   64  9 Dec 18,34 10.88.0.46
-rw-r--r--. 1 root root   64 16 Dec 12,01 10.88.0.47
-rw-r--r--. 1 root root   10  1 Mar  9,28 last_reserved_ip.0
-rw-r--r--. 1 root root   70  1 Mar  9,28 10.88.0.49
drwxr-xr-x. 2 root root 4096  1 Mar  9,28 .
[root@srv podman]# rm 10.88.0.46
rm: remove regular file '10.88.0.46'? y
[root@srv podman]# rm 10.88.0.47
rm: remove regular file '10.88.0.47'? y
[root@srv podman]# podman start mysql-slave
mysql-slave
[root@srv podman]# podman ps
CONTAINER ID  IMAGE                           COMMAND               CREATED       STATUS            PORTS  NAMES
c97823be4683  localhost/centos-mysql-5.6:0.9  /entrypoint.sh my...  2 months ago  Up 2 minutes ago         mysql-slave
e96134b31894  docker.io/example/client:latest   start-boinc.sh        2 months ago  Up 6 minutes ago         example-client
[root@srv podman]# ls -altr
total 20
-rwxr-x---. 1 root root    0  3 Dec  0,43 lock
drwxr-xr-x. 3 root root 4096  3 Dec  0,43 ..
-rw-r--r--. 1 root root   70  1 Mar  9,28 10.88.0.49
-rw-r--r--. 1 root root   10  1 Mar  9,32 last_reserved_ip.0
-rw-r--r--. 1 root root   70  1 Mar  9,32 10.88.0.50
drwxr-xr-x. 2 root root 4096  1 Mar  9,32 .
[root@srv podman]#

We’ve deleted the old IPs (old by date!) 10.88.0.46 and 10.88.0.47 and the mysql-slave container started successfully.

firewalld and podman (or docker) – no internet in the container and could not resolve host

If you happen to use CentOS 8, you have already discovered that Red Hat (i.e. CentOS) switched to podman, which is a fork of docker. So the following fix might also help someone who does not use CentOS 8 or podman – for now, podman and docker are 99.99% the same.
Creating and starting a container is easy, in most cases one command only, but you may stumble on the error that your container could not resolve a name or could not connect to an IP even though there is a ping to the IP!
The service in the container may live a happy life without Internet access, with just the mapped ports to the outside world. Still, it may need Internet access at some point, let's say when an update should be performed.
Here is how to fix podman (docker) missing Internet access in the container:

  • No ping to the outside world. Chances are you are missing
    sysctl -w net.ipv4.ip_forward=1
    

    And do not forget to make it permanent by adding “net.ipv4.ip_forward=1” to /etc/sysctl.conf (or a “.conf” file in /etc/sysctl.d/) – see the sketch right after this list.

  • A ping to an outside IP works from the container, but no connection to any service is possible! Probably NAT is not enabled in your podman (docker) configuration. In the case of firewalld, at least, you must enable the masquerade option of the public zone:
    firewall-cmd --zone=public --add-masquerade
    firewall-cmd --permanent --zone=public --add-masquerade
    

    The second command, with “--permanent”, makes the option permanent across reboots.
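As referenced in the first item above, a persistent way to set the forwarding sysctl – the file name under /etc/sysctl.d/ is arbitrary:

# persist the setting in a dedicated drop-in file
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf
# reload all sysctl configuration files
sysctl --system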

The error – Could not resolve host (Name or service not known) despite having servers in /etc/resolv.conf and ping to them!

One may think that having IPs in /etc/resolv.conf and ping to them from the container should give the container access to the Internet. But the following error occurs:

[root@srv /]# yum install telnet
Loaded plugins: fastestmirror, ovl
Determining fastest mirrors
 * base: artfiles.org
 * extras: centos.mirror.net-d-sign.de
 * updates: centos.bio.lmu.de
http://mirror.fra10.de.leaseweb.net/centos/7.7.1908/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: mirror.fra10.de.leaseweb.net; Unknown error"
Trying other mirror.
http://artfiles.org/centos.org/7.7.1908/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: artfiles.org; Unknown error"
Trying other mirror.
^C

Exiting on user cancel
[root@srv /]# ^C
[root@srv /]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=5.05 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=5.06 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 5.050/5.055/5.061/0.071 ms
[root@srv ~]# cat /etc/resolv.conf 
nameserver 8.8.8.8
nameserver 8.8.4.4
[root@srv /]# ping google.com
ping: google.com: Name or service not known

The error 2 – Can't connect despite having ping to the IP!

[root@srv /]# ping 2.2.2.2
PING 2.2.2.2 (2.2.2.2) 56(84) bytes of data.
64 bytes from 2.2.2.2: icmp_seq=1 ttl=56 time=9.15 ms
64 bytes from 2.2.2.2: icmp_seq=2 ttl=56 time=9.16 ms
^C
[root@srv2 /]# mysql -h2.2.2.2 -uroot -p
Enter password: 
ERROR 2003 (HY000): Can't connect to MySQL server on '2.2.2.2' (113)
[root@srv2 /]#

Despite having ping to the MySQL server on 2.2.2.2, and despite the firewall on 2.2.2.2 allowing outside connections, the container could not connect to it. Testing other services like HTTP, HTTPS, FTP and so on resulted in “unable to connect”, too. Simply because NAT (aka masquerade) is not enabled in the firewall.

send access logs in json to Elasticsearch using rsyslog

Here is a simple example of how to send well-formatted JSON access logs directly to the Elasticsearch server.

It is as simple as this: Nginx (it could be any web server) sends the access logs over UDP to the rsyslog server, which then sends well-formatted JSON data to the Elasticsearch server.

No other server program like Logstash is used. The data is transformed in rsyslog and passed through a couple of modules to ensure the JSON is valid so Elasticsearch will not complain (and miss log entries!).
Objectives:

  1. Nginx to send access logs using UDP to the rsyslog server.
  2. rsyslog server to accept UDP messages.
  3. rsyslog server transforms the web-server access logs from the Nginx server to JSON.
  4. rsyslog server sends the validated JSON to the Elasticsearch server.

The configuration and the commands are tested on CentOS 7, CentOS 8 and Ubuntu 18 LTS (just replace yum with apt).

STEP 1) Nginx to send access logs using UDP to the rsyslog server.

It is simple enough to send Nginx's access logs to a UDP server (local or remote) – there are two articles here: nginx remote logging to UDP rsyslog server (CentOS 7) and syslog – UDP local to rsyslog and send remote with TCP and compression. For simplicity, Nginx will send to the remote rsyslog server using UDP.
Instruct Nginx to send access logs over UDP to the remote rsyslog server.
Define a new access log format in the http section:

        log_format mainJSON escape=json '@cee: {'
                '"vhost":"$server_name",'
                '"remote_addr":"$remote_addr",'
                '"time_iso8601":"$time_iso8601",'
                '"request_uri":"$request_uri",'
                '"request_length":"$request_length",'
                '"request_method":"$request_method",'
                '"request_time":"$request_time",'
                '"server_port":"$server_port",'
                '"server_protocol":"$server_protocol",'
                '"ssl_protocol":"$ssl_protocol",'
                '"status":"$status",'
                '"bytes_sent":"$bytes_sent",'
                '"http_referer":"$http_referer",'
                '"http_user_agent":"$http_user_agent",'
                '"upstream_response_time":"$upstream_response_time",'
                '"upstream_addr":"$upstream_addr",'
                '"upstream_connect_time":"$upstream_connect_time",'
                '"upstream_cache_status":"$upstream_cache_status",'
                '"tcpinfo_rtt":"$tcpinfo_rtt",'
                '"tcpinfo_rttvar":"$tcpinfo_rttvar"'
                '}';

It is a valid JSON object, but sometimes the user agent or the referer contains non-standard, invalid characters that break the JSON format, which may lead to problems in Elasticsearch (read ahead).

In a server section of the Nginx configuration file /etc/nginx/nginx.conf:

server {
     .....
     access_log      /var/log/nginx/example.com_access.log main;
     access_log      syslog:server=10.10.10.2:514,facility=local7,tag=nginx,severity=info mainJSON;
     .....
}
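For orientation, the rsyslog side (objectives 2–4) could look roughly like the following sketch. The modules imudp, mmjsonparse and omelasticsearch are standard rsyslog modules, but the file name, the Elasticsearch address, the index name and the tag filter used here are assumptions, not the exact configuration of this setup:

# /etc/rsyslog.d/30-nginx-elastic.conf - a rough sketch only
module(load="imudp")            # accept syslog messages over UDP
module(load="mmjsonparse")      # parse the "@cee:" JSON payload into the $! tree
module(load="omelasticsearch")  # output module for Elasticsearch

input(type="imudp" port="514")

# render the parsed JSON tree as the document sent to Elasticsearch
template(name="nginx-json" type="subtree" subtree="$!")

if ($syslogtag contains "nginx") then {
    action(type="mmjsonparse")
    # the server IP and index name below are assumptions
    action(type="omelasticsearch" server="10.10.10.3"
           searchIndex="nginx-access" template="nginx-json")
    stop
}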

Keep on reading!