Make the Gluster daemon resolve the proper hostnames of your peers

This is a useful tip for GlusterFS nodes. When adding a peer to a Gluster cluster you may use its hostname (or IP), and the Gluster daemon on the added server tries to resolve a hostname from the IP that contacts it (or, if the cluster has multiple peers, multiple reverse lookups happen).
Here is a simple example. The cluster will have two peers (srv1.example.com and srv2.example.com):
Add the peer srv2.example.com to your cluster from srv1.example.com (in fact, at this point the cluster consists only of the local Gluster daemon):

[root@srv1 ~]# gluster peer probe srv2.example.com
peer probe: success.
[root@srv1 ~]# gluster peer status
Number of Peers: 1

Hostname: srv2.example.com
Uuid: 8322b61c-a94d-491b-afc9-9f10eb8e8b92
State: Peer in Cluster (Connected)

And when you check the status of the cluster on the second server srv2.example.com, you can see that the second server uses the PTR record of the first server:

[root@srv2 ~]# gluster peer status
Number of Peers: 1

Hostname: static.123.123.123.123.clients.your-server.de
Uuid: 3d273834-eca6-4997-871f-1a282ca90fb0
State: Peer in Cluster (Connected)

You see the hostname is a temporary name static.123.123.123.123.clients.your-server.de – the PTR record of srv1.example.com. You may have problems in the future if you leave it like that, and it is also a really uninformative domain name to have in your cluster’s configuration. Changing a peer’s hostname in a cluster is really difficult and dangerous, so one option is to change the PTR records of the servers’ IPs; but if you cannot do that, or it would take too long, you can just use the “/etc/hosts” file!
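To see which PTR record an IP currently resolves to, you can query it directly with dig (a quick check; replace the IP with your server’s real address):

[root@srv2 ~]# dig -x 123.123.123.123 +short
static.123.123.123.123.clients.your-server.de.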

Use “/etc/hosts” to make the Gluster daemon resolve the proper hostnames of your peers!

Edit “/etc/hosts” on the second (peer) server (and on the first one, too). Add the line below and do not remove the other lines if they exist. Replace the IP with your first server’s IP and hostname.

123.123.123.123 srv1.example.com

And then probe again from the first server and check once more on the second server:

[root@srv2 ~]# gluster peer status
Number of Peers: 1

Hostname: srv1.example.com
Uuid: 3d273834-eca6-4997-871f-1a282ca90fb0
State: Peer in Cluster (Connected)

And on the first server:

[root@srv1 ~]# gluster peer status
Number of Peers: 1

Hostname: srv2.example.com
Uuid: 8322b61c-a94d-491b-afc9-9f10eb8e8b92
State: Peer in Cluster (Connected)

Now the two servers have the right hostnames for their peers, and these hostnames will be used in the Gluster configuration saved on the servers.

In fact, it is a good idea to add all your cluster peers in the “/etc/hosts” on all servers:

123.123.123.123 srv1.example.com
124.124.124.124 srv2.example.com
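Since the Gluster daemon uses the system resolver, you can verify what each peer name resolves to with getent, which honours “/etc/hosts” (a quick check; the output below is what the entries above should produce):

[root@srv1 ~]# getent hosts srv2.example.com
124.124.124.124 srv2.example.com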

Minimal quagga bgpd configuration to run and remote configure it

There are three steps to configure your Quagga bgpd daemon so that it can run and be configured remotely. The idea of this article is to show you how to run the Quagga bgpd with a minimal configuration, so that you can, for example, hand the credentials to a network administrator.
Summary – 3 files to change:

  1. /etc/quagga/daemons – enable BGPD daemon
  2. /etc/quagga/debian.conf – which IP to listen to
  3. /etc/quagga/bgpd.conf – BGP daemon configuration (a minimal sketch follows this list)
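For a taste of step 3, here is a minimal bgpd.conf sketch (the password, AS number and router-id are hypothetical placeholders, not values from this article; adjust them to your network):

! /etc/quagga/bgpd.conf - minimal sketch, hypothetical values
hostname bgpd
password mystrongpassword
!
router bgp 65000
 bgp router-id 10.10.10.1
!
line vty

With “password” set, the bgpd VTY is reachable over telnet on port 2605 (the default), which is what allows the remote configuration.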

Here are the steps:
Keep on reading!

Install Fedora Workstation 30 (Gnome GUI)

This tutorial will show you the simple steps of installing a modern Linux distribution like Fedora 30 Workstation with Gnome as the graphical user interface. First, we present the basic steps for installing the operating system alongside your present operating systems (here we also have Windows 10), and then you can see some screenshots of the installed system and its look and feel. We have other tutorials showing more screenshots of the installed and working Fedora 29 (Gnome and KDE Plasma), so you can decide which of them to try first – coming soon.

The Fedora 30 Workstation comes with

  • Xorg X server – 1.20.4 (XWayland is used by default)
  • GNOME (the GUI) – 3.32.1
  • Linux kernel – 5.0.9

Check out our article about what software is included – coming soon.

The installation process is very similar to those of the old Fedora Workstation 27, Fedora Workstation 28 and Fedora Workstation 29; in fact, the main difference is the creation of a user, for which the setup is no longer responsible – the user is created on the first boot after installation. Our system was pretty good – an Asus X399 board with an AMD Ryzen Threadripper 1950X and an NVIDIA 1080 Ti – and the setup loaded successfully and there were no problems till the end.

We used the following ISO for the installation process:

https://download.fedoraproject.org/pub/fedora/linux/releases/30/Workstation/x86_64/iso/Fedora-Workstation-Live-x86_64-30-1.2.iso

It is a LIVE image, so you can try it before installing. The easiest way is just to download the image, burn it to a DVD disk (or write it to a USB stick) and then follow the installation below:
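If you prefer a USB stick to a DVD, you can write the ISO with dd (a sketch; /dev/sdX is a placeholder for your USB device – double-check the device name, dd overwrites the target):

# write the LIVE image to the USB stick (destructive for /dev/sdX!)
dd if=Fedora-Workstation-Live-x86_64-30-1.2.iso of=/dev/sdX bs=8M status=progress oflag=direct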

SCREENSHOT 1) Here is our “UEFI BIOS->Boot->Boot Override” – on most modern motherboards you can choose to override the default boot devices.

Choose the “UEFI: HL-DT-STDVDRAM…” entry to boot and install Fedora Workstation 30 with UEFI support. You should do this because much of the new hardware, such as video cards, would not work properly without booting in UEFI mode.

Boot from DVD/USB Installation – main menu

Keep on reading!

Using portage eix for the first time – cannot open database file

After installing “app-portage/eix” in Gentoo to manage your Portage updates, you might encounter this error when trying to use “eix” for the first time:

Writing database file /var/cache/eix/portage.eix...
cannot open database file /var/cache/eix/portage.eix for writing (mode = 'wb')

Chances are the directory “/var/cache/eix/” is missing, or the user:group of “/var/cache/eix/” is root:root, which is NOT right.

The user:group must be “portage:portage”.

So the solution is really simple:

mkdir -p /var/cache/eix
chown portage:portage /var/cache/eix
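You can verify the result afterwards (a quick check; the date and size will differ on your system, but the owner and group should be portage:portage):

root@srv ~ # ls -ld /var/cache/eix
drwxr-xr-x 2 portage portage 4096 May  8 10:00 /var/cache/eix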

Output – the errors you might get

Using eix-sync failed with:

root@srv1 ~ # eix-sync 
 * eix-cache does not exist
 * Running eix-update
Reading Portage settings...
Building database (/var/cache/eix/portage.eix)...
[0] "gentoo" /usr/portage/ (cache: metadata-md5-or-flat)
     Reading category 167|167 (100) Finished             
[1] "myportage" /usr/local/myportage (cache: parse|ebuild*#metadata-md5#metadata-flat#assign)
     Reading category 167|167 (100) Finished    
Applying masks...
Calculating hash tables...
Writing database file /var/cache/eix/portage.eix...
cannot open database file /var/cache/eix/portage.eix for writing (mode = 'wb')
 * eix-update failed
 * Time statistics:
     6 seconds for initial eix-update
     6 seconds total

Using “eix-update” failed, too:

root@srv ~ # eix-update 
Reading Portage settings...
Building database (/var/cache/eix/portage.eix)...
[0] "gentoo" /usr/portage/ (cache: metadata-md5-or-flat)
     Reading category 167|167 (100) Finished             
[1] "myportage" /usr/local/myportage (cache: parse|ebuild*#metadata-md5#metadata-flat#assign)
     Reading category 167|167 (100) Finished    
Applying masks...
Calculating hash tables...
Writing database file /var/cache/eix/portage.eix...
cannot open database file /var/cache/eix/portage.eix for writing (mode = 'wb')

Output 2 – Successful update with eix

root@srv ~ # eix-update 
Reading Portage settings...
Building database (/var/cache/eix/portage.eix)...
[0] "gentoo" /usr/portage/ (cache: metadata-md5-or-flat)
     Reading category 167|167 (100) Finished             
[1] "myportage" /usr/local/myportage (cache: parse|ebuild*#metadata-md5#metadata-flat#assign)
     Reading category 167|167 (100) Finished    
Applying masks...
Calculating hash tables...
Writing database file /var/cache/eix/portage.eix...
Database contains 19544 packages in 167 categories

Kernel loads only single processor on multi-processor system – ACPI: x2apic entry ignored

There are multiple reports of this issue with different processors:

a kernel, which worked perfectly on multiple systems, loads on our new server and only one processor is shown.

Just for the record, SMP is enabled in the kernel (and in the BIOS – Hyper-Threading and multiple cores are enabled, too):

root@srv ~ # zcat /proc/config.gz | grep 'CONFIG_SMP'
CONFIG_SMP=y

The problem is the x2APIC support in the BIOS of your server.

Apparently, our kernel (version 4.18.12) was missing the kernel feature:

root@srv ~ # zcat /proc/config.gz | grep -i 'x2apic'
root@srv

You can see that no kernel configuration entry “CONFIG_X86_X2APIC=y” is shown by the above command.

And if your BIOS enables the x2APIC support, you may end up with just one processor under Linux.
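A quick way to confirm you are hitting this case (a sketch; the exact dmesg wording may vary between kernel versions, but the “x2apic entry ignored” lines are what to look for):

root@srv ~ # grep -c ^processor /proc/cpuinfo
1
root@srv ~ # dmesg | grep -i x2apic
ACPI: x2apic entry ignored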

This was the case in our server. The x2APIC support is enabled in the BIOS and our kernel (version 4.18.12) does not have CONFIG_X86_X2APIC enabled.
To fix this issue, first disable the feature in the BIOS – all your processors will then show up and can be used to compile your new kernel fast (in case you use a custom kernel, of course). Then enable the CONFIG_X86_X2APIC feature in the kernel configuration, which is under
Kernel Configuration —> [*] Support x2apic. The asterisk means it is enabled, so build your kernel. Check and enable “Device Drivers –> IOMMU Hardware Support –> Support for Interrupt Remapping”, too. After booting the new kernel, you can re-enable x2APIC in the BIOS.
Here you can see how to enable and disable processor x2APIC support in HP ProLiant DL160 Gen9 (2 processors Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz) – Enable or Disable the processor x2APIC support in HP ProLiant DL160 Gen9.
Keep on reading!

bind – dump cache, how much memory might be occupied by the query cache

It is difficult to understand how BIND manages the occupied memory of your server. And most of the problems with DNS forwarders are memory related – the memory grows enormously, and if you try limiting it you might end up with a DNS server dropping some connections.
So if you have a DNS BIND9 server (especially a forwarder), you can dump the query cache into a file and check the size of the file. That way you get a tentative figure for the memory usage of your BIND9 server.

rndc dumpdb

This dumps the query cache.

A real-world example. Always include “-all” to be sure the whole cache is dumped to the file!

root@srv1 # rndc dumpdb -all
root@srv1 # ls -altr /var/bind/
total 141596
drwxrwxr-x 2 bind bind      4096 Jan 25 00:56 pri
drwxrwxr-x 2 bind bind      4096 Jan 25 00:56 dyn
-rw-rwxr-- 1 bind bind      3289 Jan 25 00:58 root.cache
drwxr-xr-x 5 bind bind      4096 Feb 13 05:15 ..
drwxrwxr-x 2 bind bind     12288 May  7 10:00 sec
drwxrwxr-x 5 bind bind      4096 May  8 03:52 .
-rw-r--r-- 1 bind bind 144810299 May  8 03:56 named_dump.db

As you can see, our dump file is around 139 Mbytes in size, so you can expect at least 140 Mbytes of memory to be used for the BIND9 query cache. In your case, you can track the footprint of the named process against the size of the dump file.
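To compare the dump size with the actual memory footprint of the named process, you can check its resident set size (a quick sketch using standard procps options on Linux; RSS is reported in kilobytes):

root@srv1 # ps -o pid,rss,vsz,cmd -C named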

Here is what you can find in the named_dump.db file:

; Zone dump of '10.10.10.in-addr.arpa/IN/america'
;
10.10.10.in-addr.arpa.                      86400 IN SOA      ns1.exa-ns5.com. wdns.exa-ns5.com. 2065407385 60 30 2419200 30
10.10.10.in-addr.arpa.                      1800 IN NS        ns1.exa-ns5.com.
10.10.10.in-addr.arpa.                      1800 IN NS        ns2.exa-ns5.com.
10.10.10.in-addr.arpa.                      1800 IN NS        ns4.exa-ns5.com.
10.10.10.in-addr.arpa.                      1800 IN NS        ns5.exa-ns5.com.
1.10.10.10.in-addr.arpa.                    86400 IN PTR      1.example.com.
2.10.10.10.in-addr.arpa.                    86400 IN PTR      2.example.com.
;
; Zone dump of '10.10.11.in-addr.arpa/IN/america'
;
10.10.11.in-addr.arpa.                      86400 IN SOA      ns1.exa-ns5.com. wdns.exa-ns5.com. 2065407385 60 30 2419200 30
10.10.11.in-addr.arpa.                      1800 IN NS        ns1.exa-ns5.com.
10.10.11.in-addr.arpa.                      1800 IN NS        ns2.exa-ns5.com.
10.10.11.in-addr.arpa.                      1800 IN NS        ns4.exa-ns5.com.
10.10.11.in-addr.arpa.                      1800 IN NS        ns5.exa-ns5.com.
18.10.10.11.in-addr.arpa.                   86400 IN PTR      ns1.exa-ns5.com.
19.10.10.11.in-addr.arpa.                   86400 IN PTR      ns2.exa-ns5.com.

; Zone dump of 'example.com/IN/america'
;
example.com.                                    180 IN SOA        ns1.exa-ns5.com. support.example.com. 2065407734 60 30 2419200 30
example.com.                                    1800 IN NS        ns1.exa-ns5.com.
example.com.                                    1800 IN NS        ns2.exa-ns5.com.
example.com.                                    1800 IN NS        ns4.exa-ns5.com.
example.com.                                    1800 IN NS        ns5.exa-ns5.com.
example.com.                                    180 IN MX         1 ASPMX.L.GOOGLE.COM.
example.com.                                    180 IN MX         5 ALT1.ASPMX.L.GOOGLE.COM.
example.com.                                    180 IN MX         5 ALT2.ASPMX.L.GOOGLE.COM.
example.com.                                    180 IN MX         10 ASPMX2.GOOGLEMAIL.COM.
example.com.                                    180 IN MX         10 ASPMX3.GOOGLEMAIL.COM.
*.210.example.com.                            180 IN A          10.10.10.10
*.2107.example.com.                           180 IN A          10.10.10.134
*.2109.example.com.                           180 IN A          10.10.10.138
*.2115.example.com.                           180 IN A          10.10.10.98
*.2117.example.com.                           180 IN A          10.10.10.99
*.2119.example.com.                           180 IN A          10.10.11.2
*.2131.example.com.                           180 IN A          10.10.11.6
*.2246.example.com.                           180 IN A          10.11.11.13
*.2260.example.com.                           180 IN A          10.11.12.184
*.2271.example.com.                           180 IN A          10.11.13.158
*.2298.example.com.                           180 IN A          10.11.14.14
*.2292.example.com.                           180 IN A          10.10.15.65
*.2296.example.com.                           180 IN A          10.10.10.100

Here is the syntax

You can dump only the cache or only the zones, and you can limit the dump to one or more views.

  dumpdb [-all|-cache|-zones] [view ...]
                Dump cache(s) to the dump file (named_dump.db).
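For example, based on the syntax above, dumping only the cache of the “america” view seen in the earlier output would look like this (a sketch; replace the view name with your own):

root@srv1 # rndc dumpdb -cache america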

List all your files (and directories) with file size over FTP without ls -R (recursive)

A great piece of software is

lftp – sophisticated file transfer program

This little console tool can ease your life significantly with its many enhancements over the simple FTP protocol. This tip is for those who want to list all the files in a directory or in the entire FTP account, but do not have an ls command with recursive abilities. The only option then would be to manually go through all the directories to fetch the listing information, but this can be done automatically by

lftp, using its “find” command; if you add the “-l” argument, the output is like “ls -al” – file or directory, file permissions, user and group, file size, date and file name are shown on a single line for each file.

Just execute the command with the proper credentials and the starting directory of your choice. The command output can even be piped to another command.
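A minimal sketch of such a one-liner (the host, credentials and output file are placeholders; adjust them to your FTP account):

# list the whole account recursively, ls -al style, into a local file
lftp -u myuser,mypassword -e "find -l /; quit" ftp.example.com > full-listing.txt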
Keep on reading!

bind – dns server queries statistics with statistics-file

There is an option “statistics-file” in the BIND9 configuration for query statistics. It will give you statistics for:

  • Incoming Requests – total number of queries
  • Incoming Queries – queries by record type
  • Outgoing Queries – queries by record type per view
  • Name Server Statistics – extended query statistics by network connection type (UDP, TCP, IPv4 or IPv6 interface), by type of answer (authoritative, non-authoritative and so on) and more
  • Zone Maintenance Statistics – transfer and system queries
  • Resolver Statistics – recursive queries
  • Cache DB RRsets – cached resources records sets
  • Socket I/O Statistics – statistics for UDP and TCP (both IPv4 and IPv6) sockets and connections opened, closed or failed
  • Per Zone Query Statistics – so you can see how many queries you have for a zone in a view (and the transfers, if the server is a slave)

In named.conf:

options {
....
    statistics-file "/var/log/named.stats";
    zone-statistics yes;
....

But if you check in /var/log, this file might be missing even if your BIND server has been running for months!

This is because the statistics file is generated on request, and it is a snapshot of the moment you make the request.

Requesting that the BIND server generate the file is pretty easy:

root@srv ~ # rndc stats
root@srv ~ #

There is no standard output, and you should have the stats file generated:

root@srv ~ # ls -altr /var/log/named.stats 
-rw-r--r-- 1 named named 174997 Apr  7 01:43 /var/log/named.stats

Each generated report is appended to the file with a UNIX timestamp:

....
--- Statistics Dump --- (1550561292)
+++ Statistics Dump +++ (1551233218)
....
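Because reports are appended, the file keeps growing with every “rndc stats”. Here is a small sketch to print only the most recent report (assuming the +++/--- markers shown above delimit each dump):

awk '/^\+\+\+ Statistics Dump/{buf=""} {buf=buf $0 "\n"} END{printf "%s", buf}' /var/log/named.stats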

Keep on reading!

Gentoo kde-frameworks/kdewebkit failed compilation with “Qt5WebKit could not be found because dependency is required”

Updating the KDE Plasma Desktop on our Gentoo workstations this time failed with:

CMake Error at /usr/share/cmake/Modules/CMakeFindDependencyMacro.cmake:48 (find_package):
  Found package configuration file:

    /usr/lib64/cmake/Qt5WebKit/Qt5WebKitConfig.cmake

  but it set Qt5WebKit_FOUND to FALSE so package "Qt5WebKit" is considered to
  be NOT FOUND.  Reason given by package:

  Qt5WebKit could not be found because dependency is required to have exact
  version 5.11.x.

It was strange, because the previous emerge included the Qt upgrade from the old 5.11.2 to 5.12.1, and this dependency should have been resolved properly before:

emerge -vau $(qlist -IC|grep dev-qt|sort|uniq)

But apparently, despite emerge building all the Qt libraries in dependency order, “dev-qt/qtwebkit” was built against the old Qt libraries. And this is exactly what the above error says!

The solution is really simple: just rebuild dev-qt/qtwebkit

root@srv ~ # emerge -va dev-qt/qtwebkit

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild   R    ] dev-qt/qtwebkit-5.212.0_pre20180120:5/5.212::gentoo  USE="X geolocation hyphen jit multimedia opengl printsupport qml -gles2 -gstreamer -nsplugin 
-orientation -webp" 0 KiB

Total: 1 package (1 reinstall), Size of downloads: 0 KiB

Would you like to merge these packages? [Yes/No] yes
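If more packages turn out to be linked against the old Qt libraries, a broader (hypothetical for this case) cleanup is to let revdep-rebuild from app-portage/gentoolkit find and rebuild everything still linked against the old Qt5WebKit library:

root@srv ~ # revdep-rebuild --library 'libQt5WebKit.so.5'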

Keep on reading!

nginx with php fpm (fastcgi) and the warning – an upstream response is buffered to a temporary file /var/cache/nginx/fastcgi_temp

As the web grows and technology advances, the page size of web sites also grows, or sometimes you might just want to output a big chunk of data from your application server – PHP-FPM, for example (but it could be any other: Ruby, Python, C, Django and more).
Here is a fast configuration tip (note this is not the proxy-related warning!):

The default nginx buffers per CGI connection are too small

Here is what to do in your nginx configuration file:
First, look for a line like “include /etc/nginx/fastcgi_params;” and add the following lines after it (or edit them if they already exist):

        fastcgi_buffer_size 16k;
        fastcgi_buffers 32 16k;

Check out more about the buffers here: http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_buffers
The warning should stop; if it does not, you can try raising the values further. With the values above, nginx can hold up to 32 × 16k = 512K of a response in memory, which covers the ~320K response shown below. Larger buffers consume more memory, but they can lower the IO usage of your disks and improve the performance of your site or whatever backend works behind nginx!
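For context, here is a sketch of where these directives usually live in a PHP-FPM location block (the fastcgi_pass address matches the upstream shown in the logs below; the rest is a hypothetical minimal example):

        location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_buffer_size 16k;
                fastcgi_buffers 32 16k;
        }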

Here is the warning in our nginx error logs. We got this warning when using PHP-FPM and the PHP output size was 325965 bytes (~320K).

2019/04/04 09:56:05 [warn] 24451#24451: *44269838 an upstream response is buffered to a temporary file /var/cache/nginx/fastcgi_temp/0/12/0019966120 while reading upstream, client: 10.10.10.10, server: srv17.srv.en, request: "GET /api/20140102/product HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "srv17.srv.en"
2019/04/04 09:56:07 [warn] 24451#24451: *44269849 an upstream response is buffered to a temporary file /var/cache/nginx/fastcgi_temp/2/12/0019966122 while reading upstream, client: 10.10.10.11, server: srv17.srv.en, request: "GET /api/20140102/product HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "srv17.srv.en"
2019/04/04 09:56:09 [warn] 24450#24450: *44269856 an upstream response is buffered to a temporary file /var/cache/nginx/fastcgi_temp/7/12/0019966127 while reading upstream, client: 10.10.10.12, server: srv17.srv.en, request: "GET /api/20140102/product HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "srv17.srv.en"