Send access logs in JSON to Elasticsearch using rsyslog

Here is a simple example of how to send well-formatted JSON access logs directly to the Elasticsearch server.

It is as simple as this: Nginx (it could be any web server) sends the access logs over UDP to the rsyslog server, which then forwards well-formatted JSON data to the Elasticsearch server.

No other server program such as Logstash is used. The data is transformed in rsyslog and passed through a couple of modules to ensure the JSON is valid, so that Elasticsearch does not complain (and no log entries go missing!).
Objectives:

  1. Nginx to send access logs using UDP to the rsyslog server.
  2. rsyslog server to accept UDP messages.
  3. rsyslog server transforms the web-server access logs from the Nginx server to JSON.
  4. rsyslog server sends the validated JSON to the Elasticsearch server.
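
For orientation, here is a minimal sketch of what the rsyslog side (objectives 2, 3 and 4) could look like. It assumes rsyslog 8.x with the imudp, mmjsonparse and omelasticsearch modules available; the file name, the Elasticsearch host 10.10.10.3 and the index name nginx-access are placeholders, and the actual configuration is built step by step later in the article:

# /etc/rsyslog.d/30-nginx-elasticsearch.conf (hypothetical file name)
module(load="imudp")                 # objective 2: accept syslog messages over UDP
module(load="mmjsonparse")           # objective 3: parse the "@cee:" JSON payload
module(load="omelasticsearch")       # objective 4: send documents to Elasticsearch

input(type="imudp" port="514")

# Template that outputs only the parsed JSON properties of the message
template(name="nginx-json" type="list") {
    property(name="$!all-json")
}

if $syslogtag contains "nginx" then {
    action(type="mmjsonparse")
    if $parsesuccess == "OK" then {
        action(type="omelasticsearch"
               server="10.10.10.3"          # placeholder Elasticsearch server
               serverport="9200"
               searchIndex="nginx-access"   # placeholder index name
               template="nginx-json"
               bulkmode="on")
    }
    stop
}

In this sketch, messages that fail the JSON parsing are simply not indexed; handling them properly is exactly what the validation in the later steps is about.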

The configuration and the commands are tested on CentOS 7, CentOS 8 and Ubuntu 18 LTS (just replace yum with apt).
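
The extra rsyslog modules come as separate packages. A possible install command (on CentOS the rsyslog-elasticsearch package typically comes from the EPEL repository; package names may differ slightly between distributions):

yum install -y rsyslog rsyslog-mmjsonparse rsyslog-elasticsearch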

STEP 1) Nginx to send access logs using UDP to the rsyslog server.

It is simple enough to send Nginx’s access logs to a UDP server (local or remote); there are two articles here: nginx remote logging to UDP rsyslog server (CentOS 7) and syslog – UDP local to rsyslog and send remote with TCP and compression. For simplicity, Nginx will send to the remote rsyslog server using UDP.
Instruct Nginx to send access logs using UDP to the remote rsyslog server.
Define a new access log format in the http section:

        log_format mainJSON escape=json '@cee: {'
                '"vhost":"$server_name",'
                '"remote_addr":"$remote_addr",'
                '"time_iso8601":"$time_iso8601",'
                '"request_uri":"$request_uri",'
                '"request_length":"$request_length",'
                '"request_method":"$request_method",'
                '"request_time":"$request_time",'
                '"server_port":"$server_port",'
                '"server_protocol":"$server_protocol",'
                '"ssl_protocol":"$ssl_protocol",'
                '"status":"$status",'
                '"bytes_sent":"$bytes_sent",'
                '"http_referer":"$http_referer",'
                '"http_user_agent":"$http_user_agent",'
                '"upstream_response_time":"$upstream_response_time",'
                '"upstream_addr":"$upstream_addr",'
                '"upstream_connect_time":"$upstream_connect_time",'
                '"upstream_cache_status":"$upstream_cache_status",'
                '"tcpinfo_rtt":"$tcpinfo_rtt",'
                '"tcpinfo_rttvar":"$tcpinfo_rttvar"'
                '}';

It is a valid JSON object, but sometimes the user agent or the referer contains non-standard or invalid characters that break the JSON format, which may lead to problems in Elasticsearch and missing log entries (read ahead). The escape=json parameter escapes quotes and control characters, but it cannot catch everything, so the JSON is checked again on the rsyslog side.
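
As an illustration (a made-up User-Agent value, not a real log line), a raw header like

    Mozilla/5.0 (compatible; "SomeBot"/1.0)

is logged with escape=json as valid JSON:

    "http_user_agent":"Mozilla/5.0 (compatible; \"SomeBot\"/1.0)"

while without escape=json the same header would produce a broken object:

    "http_user_agent":"Mozilla/5.0 (compatible; "SomeBot"/1.0)"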

In a server section of the Nginx configuration file /etc/nginx/nginx.conf:

server {
     .....
     # keep the local access log in the default "main" format
     access_log      /var/log/nginx/example.com_access.log main;
     # and also send the JSON-formatted log to the remote rsyslog server over UDP
     access_log      syslog:server=10.10.10.2:514,facility=local7,tag=nginx,severity=info mainJSON;
     .....
}
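
After adding the second access_log line, the usual check-and-reload cycle applies (assuming a systemd-based setup, as on the distributions above):

nginx -t && systemctl reload nginx

If nothing shows up on the rsyslog side, a quick tcpdump -n udp port 514 on the rsyslog server helps confirm that the UDP packets actually arrive.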

Keep on reading!