How to install GNU GCC 8 on CentOS 7

It has been a long time since the release of GNU GCC 8.x, but at last there is a trusted repository, which offers packages for GNU GCC 8.x that won’t break your system! Many of us prefer CentOS 7 because it offers a free enterprise-class operating system, and as mentioned in our article before – How to install new gcc and development tools under CentOS 7 – there are a couple of approved external repositories for CentOS, which you can trust: https://wiki.centos.org/AdditionalResources/Repositories. In one of them, Software Collections – https://www.softwarecollections.org/en/scls/ – the GNU GCC 8.x packages were added several months ago!
At present, the GNU GCC version is gcc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3).
Here are the steps to install GNU GCC 8 and how to use it:

STEP 1) Update your system and install the repository in your system

The commands:

yum update -y
yum -y install centos-release-scl
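
As a sketch of what typically follows (the package names devtoolset-8 and devtoolset-8-gcc-c++ are the SCL names at the time of writing – verify them with “yum search devtoolset”), install the toolset and enable it in your current shell:

yum -y install devtoolset-8 devtoolset-8-gcc-c++
scl enable devtoolset-8 bash
gcc --version

Note that “scl enable” starts a new shell with the GNU GCC 8 binaries first in the PATH; it does not replace the system GCC.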


Live status information like used space and more for nginx proxy cache

Using the Nginx virtual host traffic status module, you can get extended live information for your proxy cache module and the proxy cache upstream servers. We have covered how to install the module here – Install Nginx virtual host traffic status module – traffic information in Nginx and more per server block and upstreams – and this article just shows what information you can get using the module with the proxy cache module (and its upstream servers).
In general, there is no live information about the Nginx proxy cache. Of course, by the space it occupies on the disk you can guess how much space is taken by your Nginx cache (or when you restart or upgrade Nginx, it reinitializes the cache and, when finished, writes the numbers in the error log). With the “Nginx virtual host traffic status module” – https://github.com/vozlt/nginx-module-vts – you get an additional status information page containing information for the proxy cache module (we include only the proxy cache part here; for more, look at the other article mentioned above), too:

Per upstream server

  • State – up, down, backup server and so on.
  • Response Time – the time the server last responded. You can use this to see how far away your server is and to detect problems with your upstream connectivity.
  • Weight – the weight of the server in the group. It’s from the configuration file.
  • Max Fails – the maximum failed attempts before the upstream is blacklisted for the “Fail Timeout” time. It’s from the configuration file.
  • Fail Timeout – the time for which the server will be blacklisted and also the window in which all the failures (from Max Fails) must occur. It’s from the configuration file.
  • Requests – Total – all requests from the start of the server, Req/s – requests per second at the moment of loading the extended status page, Time.
  • Responses – Total and split by status codes – 1xx, 2xx, 3xx, 4xx, 5xx.
  • Traffic – Sent – total sent from the start of the server, Rcvd – total received from the start of the server, Sent/s – sent per second at the moment, Rcvd/s – received per second at the moment.

Per cache zone – i.e. the keys zone name

  • Size – Capacity – the capacity from the configuration file, Used – used space, updated live! With this, you can see the used space of your zones just by loading a page – the extended status page of this module!
  • Traffic – Sent – total sent from the start of the server, Rcvd – total received from the start of the server, Sent/s – sent per second at the moment, Rcvd/s – received per second at the moment.
  • Cache – Miss, Bypass, Expired, Stale, Updating, Revalidated, Hit, Scarce, Total – they are all self-explanatory, and all counters count from the start of the server.

You can compute the effectiveness of your cache for a period of time. For example, you can build different graphs based on this data for long periods and for different short periods, like peaks and off-peaks. We might have an article on the subject.
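
As a quick illustration, here is a minimal sketch of computing the hit ratio of a cache zone from the module’s JSON output with jq (assuming the status page is configured at /status and a cache zone named “czone” – adjust both to your setup):

curl -s http://127.0.0.1/status/format/json | \
  jq '.cacheZones.czone.responses | .hit * 100 / ([.miss, .bypass, .expired, .stale, .updating, .revalidated, .hit, .scarce] | add)'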

SCREENSHOT 1) Cache with three cache zones and two upstream servers – main and backup

As you can see, our biggest zone has 2.92T occupied, which is 100% of the available space, so the cache manager is probably deleting entries at the moment. The hits are 24551772 and the total is 28023927, so the ratio is 87.6%! 87.6% of the requests to this zone are served by the server without the need to touch the upstream servers. In the first cache zone, we have more aggressive expire times, so around 21% of the requests resulted in updates.



root cannot delete, move or change a file – Operation not permitted or Permission denied – immutable attribute

If you are the root user and some file (files or directories) cannot be deleted, removed, renamed or changed, you are probably dealing with the immutable attribute being set (by a colleague of yours – installation setups tend not to set such attributes).

Here is what it looks like to have such a file

root@srv.remote /etc/apache2/vhosts.d # mv example.com.conf /root/old/apache/
mv: cannot move `example.com.conf' to `/root/old/apache/example.com.conf': Operation not permitted
root@srv.remote /etc/apache2/vhosts.d # lsattr example.com.conf
----i--------e- example.com.conf
root@srv.remote /etc/apache2/vhosts.d # rm example.com.conf
rm: cannot remove `example.com.conf': Operation not permitted
root@srv.remote /etc/apache2/vhosts.d # echo "teeest" >> example.com.conf
-bash: example.com.conf: Permission denied

Here is how you can unset the attribute.

You need first to unset the file’s immutable attribute and then to do whatever you intended to do in the first place – delete, rename, change and so on:

chattr -i filename.txt

In continuation of our example above:

root@srv.remote /etc/apache2/vhosts.d # chattr -i example.com.conf
root@srv.remote /etc/apache2/vhosts.d # lsattr example.com.conf
-------------e- example.com.conf
root@srv.remote /etc/apache2/vhosts.d # mv example.com.conf /root/old/apache/
root@srv.remote /etc/apache2/vhosts.d #

As you can see – no immutable attribute, no problem moving the file!

And just to note: you need to install a package with the name e2fsprogs (not always in the default installation) in your Linux distribution to have lsattr, chattr and more!
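
For completeness, here is how to set the attribute back on and how to check a whole directory tree recursively:

chattr +i example.com.conf
lsattr -R /etc/apache2/vhosts.d/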

emerge – cannot sync with gpg: keyserver refresh failed: General error because of wrong date

Trying to sync one of our virtual servers, we got a sync error – it was unable to refresh the OpenPGP keys. The virtual server had just been resumed, and it was OK before the pause. In addition, there were no errors in dmesg or any kind of kernel panic. Everything seemed to be working – the server was even a node in the distributed compiling setup, and no problems there. But still, syncing the portage tree with emerge wasn’t possible!
And the problem was the date of our virtual server, which was 4 months behind the real date!

Check the time and date of the server – if it is behind or ahead of the real time by a big interval, this is the root of the problems with the inability to refresh the GPG keys.

Just synchronize the clock of the server, and be careful when you resume paused virtual servers! When you resume them, you should synchronize the clock, because in many environments the clock might be wrong!
We have multiple articles on the time synchronization topic – openntpd – immediately sync the clock on startup, simple time synchronization of a server (laptop, desktop) using the built-in systemd-timesyncd service, and more.

compile-local ~ # emerge --sync
>>> Syncing repository 'gentoo' into '/usr/portage'...
 * Using keys from /usr/share/openpgp-keys/gentoo-release.asc
 * Refreshing keys from keyserver ...OpenPGP keyring refresh failed:
gpg: refreshing 4 keys from hkps://hkps.pool.sks-keyservers.net
gpg: keyserver refresh failed: General error

OpenPGP keyring refresh failed:
gpg: refreshing 4 keys from hkps://hkps.pool.sks-keyservers.net
gpg: keyserver refresh failed: General error

OpenPGP keyring refresh failed:
gpg: refreshing 4 keys from hkps://hkps.pool.sks-keyservers.net
gpg: keyserver refresh failed: General error

OpenPGP keyring refresh failed:
gpg: refreshing 4 keys from hkps://hkps.pool.sks-keyservers.net
gpg: keyserver refresh failed: General error

OpenPGP keyring refresh failed:
gpg: refreshing 4 keys from hkps://hkps.pool.sks-keyservers.net
gpg: keyserver refresh failed: General error

OpenPGP keyring refresh failed:
gpg: refreshing 4 keys from hkps://hkps.pool.sks-keyservers.net
gpg: keyserver refresh failed: General error

OpenPGP keyring refresh failed:
gpg: refreshing 4 keys from hkps://hkps.pool.sks-keyservers.net
gpg: keyserver refresh failed: General error

^C

Exiting on signal Signals.SIGINT
compile-local ~ # date
Sat 08 Apr 2019 15:11:39 PM -00
compile-local ~ # /etc/init.d/ntpd restart
 * Starting OpenNTPD ...                                                                                                                                              [ ok ]
compile-local ~ # date
Sun 01 Sep 2019 08:02:34 AM -00
compile-local ~ #
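
If there is no init script to restart, a quick manual fix (a sketch, assuming openntpd and the OpenRC service name from the output above) is to run the daemon once in the foreground with “-s” so it sets the clock immediately:

/etc/init.d/ntpd stop
ntpd -s -d
# press Ctrl+C after the clock jumps to the correct time, then
/etc/init.d/ntpd start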


openntpd – immediately sync the clock on startup

Here is our simple tip for your healthy server’s date and time:

Immediately synchronize the clock of your computer when using the openntpd (a lightweight version of ntpd with client-only mode).

Use the “-s” option (a lowercase “s”) to instruct the ntpd daemon to synchronize the clock immediately after it discovers a healthy time server!

-s    Try to set the time immediately at startup, as opposed to slowly adjusting the clock. ntpd will stay in the foreground for up to 15 seconds waiting for one of the configured NTP servers to reply.

Find the start-up configuration file in your “/etc” (the name depends on your Linux distribution – it is probably ntpd; for Gentoo it is “/etc/conf.d/ntpd”; the point is to find the start-up configuration script, not ntpd.conf, which is the configuration file of the daemon itself) and include “-s” in NTPD_OPTS:

cat /etc/conf.d/ntpd 
# /etc/conf.d/ntpd: config file for openntpd's ntpd

NTPD_OPTS="-s"

Restart the service.
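
For example (the service name varies per distribution – on Gentoo/OpenRC it is ntpd, while on systemd-based distributions the unit is probably named openntpd):

/etc/init.d/ntpd restart
systemctl restart openntpd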

If you use it in a virtualized environment like containers (docker, lxc, lxd and so on) or hypervisors (qemu, virtualbox, vmware and so on) and you often suspend the machine, you must manually restart the openntpd service when you resume it to synchronize the clock!!! Otherwise, you are going to wait for the usual slow adjustment of the time.

Status information

There is a utility to check what’s going on with openntpd – ntpctl. It has only a handful of read-only commands:

usage: ntpctl -s all | peers | Sensors | status
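
For example (run as root on the machine with the openntpd daemon):

ntpctl -s status   # overall synchronization status and the number of valid peers
ntpctl -s peers    # the configured peers with their offset and delay
ntpctl -s all      # all of the above in one output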


Install Nginx virtual host traffic status module – traffic information in nginx and more per server block and upstreams

This article is going to show how to compile and install the Nginx module – ngx_http_vhost_traffic_status.

The module gathers traffic information per server block and upstream server and shows information for the Nginx proxy cache, like used space.

In addition, the module shows the type of the responses – 1xx, 2xx, 3xx, 4xx, 5xx and total – so if problems occur in a server block or an upstream server, you can spot them at a glance.
This module, nginx-module-vts, offers really extended status information for your Nginx.
Here is the status page of one of our web servers with 18 virtual hosts:

The status page shows all virtual hosts in section “Server zones” and all upstream servers for the FastCGI PHP backend servers.

Traffic, requests, and status codes are available. All data is available in JSON, too.


Server zones information

  • Requests – Total, Requests/s, Time
  • Responses – 1xx, 2xx, 3xx, 4xx, 5xx, Total
  • Traffic – Sent, Received, Sent/s, Received/s
  • Cache – Miss, Bypass, Expired, Stale, Updating, Revalidated, Hit, Scarce, Total

In addition to the information above, there are State, Response Time, Weight, Max Fails and Fail Timeout for all the upstream servers. And for the Nginx proxy cache, there are Size – Capacity and Used (live information!) and all the information above per zone – there is an additional article here: Live status information like used space and more for nginx proxy cache.
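
As a minimal sketch of the configuration (assuming Nginx has been compiled with the module via --add-module pointing to the nginx-module-vts sources), the module needs a shared memory zone in the http context and a location to display the status page:

http {
    vhost_traffic_status_zone;

    server {
        ...
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}

The same page is also available in JSON at /status/format/json.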

grep – find files, which have no match in their entire content

Here is a quick tip for a very useful option, which is not widely known!
If you want to search for a matching string in a file, you can use “grep” to look for lines with the matching string, but what if you would like to search for files, which DO NOT contain the search string?
The Unix-world command grep has the option “-L”, which will output the name of the file not containing the search string:

-L, --files-without-match – Suppress normal output; instead print the name of each input file from which no output would normally have been printed. The scanning will stop on the first match.

The quote is from grep’s man page!
Some simple examples:

myuser@srv:~/tmp$ grep -L DISTRIB_DESCRIPTION *
10.10.10.10
10.10.10.11
10.10.10.12
10.10.10.13
10.10.10.50
myuser@srv:~/tmp$ grep DISTRIB_DESCRIPTION *
10.10.10.14:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.15:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.16:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.17:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.18:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.19:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.20:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.51:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.52:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.53:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"

As you can see, including the “-L” option outputs only the file names, because no match is found in those files. And if you omit the option, the output shows all the files and the lines in them matching the search string.
In addition, if you use “-l” (note: this time the letter “l” is lowercase), the search in each file stops on the first match, so when searching in multiple files the output includes each matching file’s name only once. In contrast, without “-l” the output repeats a file’s name once for every matching line:

myuser@srv:~/tmp$ grep VERSION *
10.10.10.14:VERSION="16.04.5 LTS (Xenial Xerus)"
10.10.10.14:VERSION_ID="16.04"
10.10.10.14:VERSION_CODENAME=xenial
10.10.10.15:VERSION="16.04.5 LTS (Xenial Xerus)"
10.10.10.15:VERSION_ID="16.04"
10.10.10.15:VERSION_CODENAME=xenial
10.10.10.16:VERSION="16.04.5 LTS (Xenial Xerus)"
10.10.10.16:VERSION_ID="16.04"
10.10.10.16:VERSION_CODENAME=xenial
myuser@srv:~/tmp$ grep -l VERSION *
10.10.10.14
10.10.10.15
10.10.10.16
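
A practical combination is to feed the files without a match to another command. Here is a sketch (the ./no-match/ directory is just a hypothetical destination):

# count the files, which do not contain the string
grep -L DISTRIB_DESCRIPTION * | wc -l
# move each of them aside for inspection
grep -L DISTRIB_DESCRIPTION * | while read -r f; do mv "$f" ./no-match/; done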

nginx – remove “no live upstreams” for your backup connections

In this article, we discuss the use of Nginx upstream module for HTTP and CGI (FastCGI) requests.

Using the Nginx upstream module is essential for scaling an application backend, but there might be a few catches. One of them is related to what happens when a server fails.
There is a proxy_next_upstream directive (or, for FastCGI, fastcgi_next_upstream), which instructs the upstream module what counts as a failure – is it only a connection error or timeout, is an HTTP 500 returned by the upstream server a failure, or is an ordinary HTTP 404 returned by the upstream server also a failure? When a failure is identified, the Nginx upstream module looks for the next upstream server to handle the request.
The default values are rather conservative (and it is probably better that way):

proxy_next_upstream error timeout;

And available options are:

proxy_next_upstream error | timeout | invalid_header | http_500 | http_502 | http_503 | http_504 | http_403 | http_404 | http_429 | non_idempotent | off ...;

They are the same for fastcgi_next_upstream, too.

Imagine you want to protect yourself from HTTP 500? Or even HTTP 403 and 404? It seems normal to include them in your configuration. But here is the catch:
What if an image is really missing (HTTP 404) on all your upstream backend servers? Or there is a syntax error in an application file (like a simple PHP file)? All your upstream servers will return 404 (or 500, in case of an application error), and all of them will be blacklisted for at least 20 seconds. Remember, a 404 or 500 is a failure, so the next upstream server is tried, and if all of them return a failure for this particular request, Nginx returns an error to the client and marks the servers as down (unavailable for a period of time).

So because of a single file (a problem in a single file), all the following requests will be answered with “502 Bad Gateway” or “500 Internal Server Error”, even though your servers are healthy!

Just a tiny miss like a missing image could be misinterpreted as your upstream servers having problems, so they must be blocked! Even if you put a “backup” directive on an upstream server line!

The solution is to include one or more of your upstream servers with failure counting disabled (fail_timeout=0s) as a backup server. This server will always be available when all the normal servers get blacklisted! And you are not going to receive “no live upstreams” and return an error to your clients anymore.
Here is a working configuration (it is the same for HTTP and FastCGI setups):

upstream backend {
     server   127.0.0.1:8000;
     server   10.10.10.10:9000 fail_timeout=20s;
     server   127.0.0.1:8000 backup fail_timeout=0s max_fails=1000;
}
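
Note the trick in the last line: the backup server is the same 127.0.0.1:8000 as the first one, but with fail_timeout=0s and a huge max_fails its failures are effectively never counted long enough to blacklist it, so there is always at least one live upstream left.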

And be careful what you add to proxy_next_upstream (or fastcgi_next_upstream). In general, HTTP 403 and 404 are not for this directive!

A real-world example (FastCGI)

One of our projects, using Nginx with the upstream module to scale the PHP application backend, began to serve only HTTP 502 to all clients! In the PHP logs, there was a syntax error in a single file (not in the main part of the site), but Nginx was answering all requests, no matter the URI, with 502. What had happened? The two backend application servers had returned 500 (because of this error) and were blacklisted for 20 seconds! And all the following requests were not served by the upstream backends, because there were “no live upstreams”:

upstream backend-php {
        server   127.0.0.1:8000;
        server   10.10.10.10:9000 fail_timeout=20s;
        server   127.0.0.1:8000 backup;
}

with

fastcgi_next_upstream error timeout invalid_header http_500 http_503 http_429 non_idempotent;

Even though we had a backup, it was also blacklisted. In fact, a scanner was scanning all of our PHP files, one URL returned a syntax error, then all of the upstream servers got blacklisted, and we experienced an effective DoS because of a misconfiguration. After we changed our configuration to:

upstream backend-php {
        server   127.0.0.1:8000;
        server   10.10.10.10:9000 fail_timeout=20s;
        server   127.0.0.1:8000 backup fail_timeout=0s max_fails=1000;
}

Everything returned to normal. The syntax error was still there, but it did not stop all the other valid URLs from being served. Of course, we fixed the broken file and stopped the scanner from scanning our site.

A real-world example 2 (HTTP)

In our static proxy cache servers in remote locations, we periodically experienced “no live upstreams”, and our clients received “502 Bad Gateway” during peak hours! The problem was that we had too aggressive proxy connect, read and send timeouts, but because we were serving live TV, we needed them. During peak hours, if a single connection just hung for 5-10 seconds, our upstream servers got blacklisted for 20 seconds! Using proxy_cache_lock could worsen the situation! So we changed our configuration to have a backup upstream server, which effectively would never be blacklisted, and lowered the proxy_cache_lock timeouts to make sure that if a single connection failed for some reason, another one could still bring the file into the cache!

proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_403 http_404;
proxy_connect_timeout 2s;
proxy_read_timeout 5s;
proxy_send_timeout 5s;
proxy_cache_lock on;
proxy_cache_lock_timeout 20s;
proxy_cache_lock_age 10s;

with upstream configuration:

upstream backend_http {
        server 10.10.10.10 fail_timeout=20s;
        server 10.10.10.11 backup fail_timeout=0s max_fails=1000;
        keepalive 16;
}

List Openstack container’s options with the swift command-line client – capabilities command

First, you need to install the swift command-line utility – the command-line tool to manage your account: see Install OpenStack swift client only.
With the capabilities command, you may discover important policies and limits of your account, like:

  • Listing limits – how many files (objects) will be in the output when using the list command.
  • The maximum file size, which is supported by the server.
  • Maximum files (objects) for deletion per single request – how many files you can delete with a single request. It is very convenient to put thousands of files in one request rather than initiating an http(s) connection for each file (object), which could mean thousands, or even millions, of connections!
  • Additional plugins (in terms of Openstack – middleware), which are supported.
  • Maximum container name length.

and many more.

In general, you will need:

  1. username (--os-username) – Username
  2. password (--os-password) – Password
  3. authentication url (--os-auth-url) – The URL address, which authorizes your requests; it generates a security token for your operations. Always use https!
  4. tenant name (--os-tenant-name) – Tenant is like a project.

All of the above information should be available from your OpenStack administrator.
Here is an example output of the capabilities command:

myuser@myserver:~$ swift --os-username myusr --os-tenant-name myusr --os-password mypass --os-auth-url https://auth01.example.com:5000/v2.0 capabilities
Core: swift
 Options:
  account_autocreate: True
  account_listing_limit: 20000
  allow_account_management: False
  container_listing_limit: 20000
  extra_header_count: 0
  max_account_name_length: 256
  max_container_name_length: 256
  max_file_size: 5368709122
  max_header_size: 8192
  max_meta_count: 90
  max_meta_name_length: 128
  max_meta_overall_size: 4096
  max_meta_value_length: 256
  max_object_name_length: 1024
  policies: [{'name': 'Policy-0', 'default': True}]
  strict_cors_mode: True
Additional middleware: bulk_delete
 Options:
  max_deletes_per_request: 20000
Additional middleware: bulk_upload
 Options:
  max_containers_per_extraction: 20000
  max_failed_extractions: 1000
Additional middleware: container_sync
 Options:
  realms: {}
Additional middleware: crossdomain
Additional middleware: formpost
Additional middleware: keystoneauth
Additional middleware: slo
 Options:
  max_manifest_segments: 1000
  max_manifest_size: 2097152
  min_segment_size: 1048576
Additional middleware: staticweb

You can see that various middleware is activated with specific options – bulk_upload, to upload multiple files with one request (a list of files), bulk_delete, to delete multiple files with one request, and so on.
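
For example, here is a sketch of deleting a big container’s objects while staying under the max_deletes_per_request limit from the output above (mycontainer is a placeholder; the authentication options are the same as in the capabilities call and are omitted for brevity):

swift list mycontainer | xargs -n 10000 swift delete mycontainer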

ansible – using ansible vault with copy module to decrypt on-the-fly files

Here is an interesting tip for all who want to protect sensitive information with ansible. Our example is simple enough – we want to protect our private key and to decrypt it when installing it on the server. The copy ansible module has a decrypt feature, and it can decrypt the file on-the-fly when the task is executed.
Here is how to use ansible vault to encrypt the file with the private key and the ansible playbook file to copy the file.

If you are a newbie in ansible, you can check this article – First ansible use – install and execute a single command or multiple tasks in a playbook. There you can see how to create your inventory file (and how to configure sudo if you remotely log in with an unprivileged user), which is used here in the example.

STEP 1) Encrypt the file with ansible vault

myuser@srv ~ $ ansible-vault encrypt server.key
New Vault password: 
Confirm New Vault password: 
Encryption successful

You can see the file now is changed and starts with:

myuser@srv ~ $ cat server.key 
$ANSIBLE_VAULT;1.1;AES256
62363263663865646361643461663531373637386631646262366333663831643435633263363336
3735326665326363356566303566626638316662376432640a326362326230353966353431383164
35353531653331306430656562616165353632643330393662313535326438363964303436306639
....
....
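
If you need to check or change the encrypted file later, ansible-vault can do it for you in place:

ansible-vault view server.key      # print the decrypted content
ansible-vault edit server.key      # edit and re-encrypt on save
ansible-vault decrypt server.key   # permanently decrypt the file back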

STEP 2) Ansible playbook file to use copy and decrypt option

---
- hosts: all
  tasks:
    - name: Copy server private key
      copy:
        src: server.key
        dest: /etc/env/server.key
        decrypt: yes
        owner: root 
        group: root 
        mode: '0400'
        backup: no

STEP 3) Execute the ansible playbook

myuser@srv ~ $ ansible-playbook --ask-vault-pass -l srv3 -i ./inventory.ini ./playbook-example.yml -b
Vault password: 

PLAY [all] *****************************************************************************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************************************************************
ok: [srv3]

TASK [Copy server private key] *********************************************************************************************************************************************
changed: [srv3]

PLAY RECAP *****************************************************************************************************************************************************************
srv3                       : ok=2    changed=1    unreachable=0    failed=0   

And the file on the remote server (srv3 in the example) is decrypted in /etc/env/server.key!
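
And a last tip: if you run the playbook non-interactively (cron, CI), you can replace --ask-vault-pass with --vault-password-file pointing to a file with the vault password (~/.vault_pass here is just a placeholder – keep it with restrictive permissions):

myuser@srv ~ $ ansible-playbook --vault-password-file ~/.vault_pass -l srv3 -i ./inventory.ini ./playbook-example.yml -b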