make the Gluster daemon resolve the proper hostnames of your peers

This is a useful tip for GlusterFS nodes. When adding a peer to a Gluster cluster you may use its hostname (or IP), but the Gluster daemon on the added server tries to resolve a hostname from the IP address that contacts it (or, if the cluster has multiple peers, multiple reverse lookups happen).
Here is a simple example. The cluster will have two peers (srv1.example.com and srv2.example.com):
Add the peer srv2.example.com to your cluster from srv1.example.com (at this point, the cluster consists only of the local Gluster daemon):

[root@srv1 ~]# gluster peer probe srv2.example.com
peer probe: success.
[root@srv1 ~]# gluster peer status
Number of Peers: 1

Hostname: srv2.example.com
Uuid: 8322b61c-a94d-491b-afc9-9f10eb8e8b92
State: Peer in Cluster (Connected)

And when you check the status of the cluster on the second server srv2.example.com, you see that it uses the PTR record of the first server:

[root@srv2 ~]# gluster peer status
Number of Peers: 1

Hostname: static.123.123.123.123.clients.your-server.de
Uuid: 3d273834-eca6-4997-871f-1a282ca90fb0
State: Peer in Cluster (Connected)

You see the hostname is a temporary name static.123.123.123.123.clients.your-server.de – the PTR record of srv1.example.com. You may have problems in the future if you leave it like that, and it is a really uninformative name to have in your cluster’s configuration. Changing a peer hostname in a cluster is difficult and dangerous, so the better option is to change the PTR records of the servers’ IPs; but if you cannot do that, or it is too slow to arrange, you can just use the “/etc/hosts” file!

Use “/etc/hosts” to make the Gluster daemon resolve the proper hostnames of your peers!

Edit “/etc/hosts” on the second (peer) server (and optionally on the first) – add the line below and do not remove the existing entries. Replace the IP with your first server’s IP and hostname.

123.123.123.123 srv1.example.com

Then add it to the cluster again from the first server and check the peer status on the second server:

[root@srv2 ~]# gluster peer status
Number of Peers: 1

Hostname: srv1.example.com
Uuid: 3d273834-eca6-4997-871f-1a282ca90fb0
State: Peer in Cluster (Connected)

And on the first server:

[root@srv1 ~]# gluster peer status
Number of Peers: 1

Hostname: srv2.example.com
Uuid: 8322b61c-a94d-491b-afc9-9f10eb8e8b92
State: Peer in Cluster (Connected)

Now both servers show the right hostnames for their peers, and these hostnames will be used in the Gluster configuration saved on the servers.

In fact, it is a good idea to add all your cluster peers to the “/etc/hosts” file on all servers:

123.123.123.123 srv1.example.com
124.124.124.124 srv2.example.com
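
A minimal sketch (assuming the example IPs and hostnames above) of adding the entries and confirming that the names now resolve locally:

# on both servers: add the peers to /etc/hosts (hypothetical IPs from the example)
echo "123.123.123.123 srv1.example.com" >> /etc/hosts
echo "124.124.124.124 srv2.example.com" >> /etc/hosts
# confirm the names resolve to the expected addresses
getent hosts srv1.example.com srv2.example.com
# and verify the peer names Gluster now reports
gluster peer status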

Minimal quagga bgpd configuration to run and remote configure it

There are three steps to configure your Quagga bgpd daemon so that it can run and be configured remotely. The idea of this article is to show how to run Quagga bgpd with a minimal configuration, so you can then hand the credentials over to a network administrator.
Summary – 3 files to change (a minimal sketch of all three follows the list):

  1. /etc/quagga/daemons – enable BGPD daemon
  2. /etc/quagga/debian.conf – which IP to listen to
  3. /etc/quagga/bgpd.conf – BGP daemon configuration
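
For illustration only, here is a minimal sketch of the three files, assuming a Debian-style Quagga installation and hypothetical values – local AS 65001, listen address/router ID 10.0.0.1, one neighbor 10.0.0.2 in AS 65002 and a vty password. Adjust everything to your network:

# /etc/quagga/daemons – enable only the daemons you need
zebra=yes
bgpd=yes

# /etc/quagga/debian.conf – bind the daemons to a specific IP (hypothetical 10.0.0.1)
vtysh_enable=yes
zebra_options=" --daemon -A 10.0.0.1"
bgpd_options=" --daemon -A 10.0.0.1"

# /etc/quagga/bgpd.conf – minimal BGP configuration (hypothetical AS numbers and password)
hostname bgpd
password mysecretpassword
!
router bgp 65001
 bgp router-id 10.0.0.1
 neighbor 10.0.0.2 remote-as 65002
!
line vty
!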

Here are the steps:
Keep on reading!

Install Fedora Workstation 30 (Gnome GUI)

This tutorial will show you the simple steps of installing a modern Linux distribution like Fedora 30 Workstation with Gnome as the graphical user interface. First, we present the basic steps for installing the operating system alongside your existing operating systems (here we also have Windows 10), and then you can see some screenshots of the installed system and its look and feel. We have other tutorials showing more screenshots of the installed and working Fedora 29 (Gnome and KDE Plasma) – so you can decide which of them to try first – coming soon.

The Fedora 30 Workstation comes with

  • Xorg X server – 1.20.4 (XWayland is used by default)
  • GNOME (the GUI) – 3.32.1
  • Linux kernel – 5.0.9

Check out our article about what software is included – coming soon.

The installation process is very similar to the old Fedora Workstation 27, Fedora Workstation 28 and Fedora Workstation 29; in fact, the main difference is the creation of a user, for which the setup is no longer responsible – the user is created on the first boot after installation. Our system was pretty good – an Asus X399 board with an AMD Ryzen Threadripper 1950X and an NVIDIA 1080 Ti – and the setup loaded successfully with no problems till the end.

We used the following ISO for the installation process:

https://download.fedoraproject.org/pub/fedora/linux/releases/30/Workstation/x86_64/iso/Fedora-Workstation-Live-x86_64-30-1.2.iso

It is a LIVE image, so you can try it before installing. The easiest way is just to download the image, burn it to a DVD disc and then follow the installation below:
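
If you prefer a USB stick to a DVD, here is a minimal sketch of writing the image (the device name /dev/sdX below is hypothetical – identify your stick with lsblk first, because dd overwrites it completely):

# download the ISO and write it to the USB stick (hypothetical /dev/sdX)
wget https://download.fedoraproject.org/pub/fedora/linux/releases/30/Workstation/x86_64/iso/Fedora-Workstation-Live-x86_64-30-1.2.iso
lsblk
dd if=Fedora-Workstation-Live-x86_64-30-1.2.iso of=/dev/sdX bs=4M status=progress oflag=sync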

SCREENSHOT 1) Here is our “UEFI BIOS->Boot->Boot Override” menu – in most modern motherboards you can choose to override the default boot devices.

Choose the “UEFI: HL-DT-STDVDRAM…” entry to boot and install Fedora Workstation 30 with UEFI support. You should do this because most new hardware, like video cards, would not work properly without UEFI mode.

main menu
Boot from DVD/USB Installation

Keep on reading!

using portage eix for the first time – cannot open database file

When installing “app-portage/eix” in Gentoo to manage your portage updates, you might encounter this error when trying to use “eix” for the first time:

Writing database file /var/cache/eix/portage.eix...
cannot open database file /var/cache/eix/portage.eix for writing (mode = 'wb')

The chances are the directory “/var/cache/eix/” is missing, or the user:group of “/var/cache/eix/” is root:root, which is NOT right.

The user:group must be “portage:portage”.

So the solution is really simple:

mkdir -p /var/cache/eix
chown portage:portage /var/cache/eix
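
To double-check the ownership and rerun the update (a quick sketch, assuming the default cache location):

ls -ld /var/cache/eix
# should show something like: drwxr-xr-x 2 portage portage ... /var/cache/eix
eix-update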

Output – the errors you might get

Using eix-sync failed with:

root@srv1 ~ # eix-sync 
 * eix-cache does not exist
 * Running eix-update
Reading Portage settings...
Building database (/var/cache/eix/portage.eix)...
[0] "gentoo" /usr/portage/ (cache: metadata-md5-or-flat)
     Reading category 167|167 (100) Finished             
[1] "myportage" /usr/local/myportage (cache: parse|ebuild*#metadata-md5#metadata-flat#assign)
     Reading category 167|167 (100) Finished    
Applying masks...
Calculating hash tables...
Writing database file /var/cache/eix/portage.eix...
cannot open database file /var/cache/eix/portage.eix for writing (mode = 'wb')
 * eix-update failed
 * Time statistics:
     6 seconds for initial eix-update
     6 seconds total

Using “eix-update” failed, too.

root@srv ~ # eix-update 
Reading Portage settings...
Building database (/var/cache/eix/portage.eix)...
[0] "gentoo" /usr/portage/ (cache: metadata-md5-or-flat)
     Reading category 167|167 (100) Finished             
[1] "myportage" /usr/local/myportage (cache: parse|ebuild*#metadata-md5#metadata-flat#assign)
     Reading category 167|167 (100) Finished    
Applying masks...
Calculating hash tables...
Writing database file /var/cache/eix/portage.eix...
cannot open database file /var/cache/eix/portage.eix for writing (mode = 'wb')

Output 2 – Successful update with eix

root@srv ~ # eix-update 
Reading Portage settings...
Building database (/var/cache/eix/portage.eix)...
[0] "gentoo" /usr/portage/ (cache: metadata-md5-or-flat)
     Reading category 167|167 (100) Finished             
[1] "myportage" /usr/local/myportage (cache: parse|ebuild*#metadata-md5#metadata-flat#assign)
     Reading category 167|167 (100) Finished    
Applying masks...
Calculating hash tables...
Writing database file /var/cache/eix/portage.eix...
Database contains 19544 packages in 167 categories

Enable or Disable the processor x2APIC support in HP ProLiant DL160 Gen9

This article shows how to enable or disable the x2APIC processor feature in the BIOS of an HP ProLiant DL160 Gen9. Generally, on other servers, you should find it under the processor features menu. Here we show how to disable it on an HP ProLiant DL160 Gen9:

STEP 1) To enter the BIOS press F9 during the start-up of the HP server.

main menu
System Utilities

Keep on reading!

Kernel loads only single processor on multi-processor system – ACPI: x2apic entry ignored

There are multiple reports on this issue with different processors:

a kernel, which worked perfectly on multiple systems, boots on our new server and only one processor is shown

Just for the record, SMP is enabled in the kernel (and Hyper-Threading and multiple cores are enabled in the BIOS, too):

root@srv ~ # zcat /proc/config.gz | grep 'CONFIG_SMP'
CONFIG_SMP=y

The problem is x2APIC Support in the BIOS of your server.

Apparently, our kernel (version 4.18.12) is missing the kernel feature:

root@srv ~ # zcat /proc/config.gz | grep -i 'x2apic'
root@srv

You can see that the above command shows no “CONFIG_X86_X2APIC=y” kernel configuration entry.

And if x2APIC support is enabled in your BIOS, you may end up with just one processor under Linux.

This was the case with our server: x2APIC support is enabled in the BIOS, but our kernel (version 4.18.12) does not have CONFIG_X86_X2APIC enabled.
To fix this issue, you might first disable the feature in the BIOS – all your processors will show up again and can be used to compile your new kernel quickly (in case you use a custom kernel). Then enable the kernel feature CONFIG_X86_X2APIC, which is under
Kernel Configuration —> [*] Support x2apic (the asterisk means it is enabled), and build your kernel. Check and enable “Device Drivers –> IOMMU Hardware Support –> Support for Interrupt Remapping”, too.
Here you can see how to enable and disable processor x2APIC support in HP ProLiant DL160 Gen9 (2 processors Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz) – Enable or Disable the processor x2APIC support in HP ProLiant DL160 Gen9.
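
After rebuilding the kernel with the option enabled, a quick sanity check (a sketch, assuming the kernel exports its configuration via /proc/config.gz as in the examples above):

# confirm the rebuilt kernel has x2APIC and interrupt remapping enabled
zcat /proc/config.gz | grep -E 'CONFIG_X86_X2APIC|CONFIG_IRQ_REMAP'
# expected: CONFIG_X86_X2APIC=y and CONFIG_IRQ_REMAP=y
# all processors should be visible again
nproc
grep -c ^processor /proc/cpuinfo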
Keep on reading!

bind – dump cache, how much memory might be occupied by the query cache

It is difficult to understand how BIND manages the memory it occupies on your server. Most of the problems with DNS forwarders are memory related – memory usage grows enormously, and if you try limiting it you might end up with a DNS server dropping some queries.
So if you have a DNS BIND9 server (especially a forwarder), you can dump the query cache into a file and check the size of that file. This gives you a rough estimate of the memory usage of your BIND9 server.

rndc dumpdb

This command dumps the query cache.

A real-world example follows. Always include “-all” to be sure the entire cache is dumped to the file!

root@srv1 # rndc dumpdb -all
root@srv1 # ls -altr /var/bind/
total 141596
drwxrwxr-x 2 bind bind      4096 Jan 25 00:56 pri
drwxrwxr-x 2 bind bind      4096 Jan 25 00:56 dyn
-rw-rwxr-- 1 bind bind      3289 Jan 25 00:58 root.cache
drwxr-xr-x 5 bind bind      4096 Feb 13 05:15 ..
drwxrwxr-x 2 bind bind     12288 May  7 10:00 sec
drwxrwxr-x 5 bind bind      4096 May  8 03:52 .
-rw-r--r-- 1 bind bind 144810299 May  8 03:56 named_dump.db

As you can see, our dump file is around 139 MBytes in size, so you can expect at least 140 MBytes of memory to be used for the BIND9 query cache. In your case, you can track the footprint of the named process alongside the size of the dump file.
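
A simple way to track both numbers side by side (a sketch, assuming the dump file location from the example above):

# dump the whole cache and compare the file size with the resident memory of named
rndc dumpdb -all
ls -lh /var/bind/named_dump.db
ps -o pid,rss,vsz,comm -C named    # RSS/VSZ are in KBytes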

Here is what you can find in the named_dump.db file:

; Zone dump of '10.10.10.in-addr.arpa/IN/america'
;
10.10.10.in-addr.arpa.                      86400 IN SOA      ns1.exa-ns5.com. wdns.exa-ns5.com. 2065407385 60 30 2419200 30
10.10.10.in-addr.arpa.                      1800 IN NS        ns1.exa-ns5.com.
10.10.10.in-addr.arpa.                      1800 IN NS        ns2.exa-ns5.com.
10.10.10.in-addr.arpa.                      1800 IN NS        ns4.exa-ns5.com.
10.10.10.in-addr.arpa.                      1800 IN NS        ns5.exa-ns5.com.
1.10.10.10.in-addr.arpa.                    86400 IN PTR      1.example.com.
2.10.10.10.in-addr.arpa.                    86400 IN PTR      2.example.com.
;
; Zone dump of '10.10.11.in-addr.arpa/IN/america'
;
10.10.11.in-addr.arpa.                      86400 IN SOA      ns1.exa-ns5.com. wdns.exa-ns5.com. 2065407385 60 30 2419200 30
10.10.11.in-addr.arpa.                      1800 IN NS        ns1.exa-ns5.com.
10.10.11.in-addr.arpa.                      1800 IN NS        ns2.exa-ns5.com.
10.10.11.in-addr.arpa.                      1800 IN NS        ns4.exa-ns5.com.
10.10.11.in-addr.arpa.                      1800 IN NS        ns5.exa-ns5.com.
18.10.10.11.in-addr.arpa.                   86400 IN PTR      ns1.exa-ns5.com.
19.10.10.11.in-addr.arpa.                   86400 IN PTR      ns2.exa-ns5.com.

; Zone dump of 'example.com/IN/america'
;
example.com.                                    180 IN SOA        ns1.exa-ns5.com. support.example.com. 2065407734 60 30 2419200 30
example.com.                                    1800 IN NS        ns1.exa-ns5.com.
example.com.                                    1800 IN NS        ns2.exa-ns5.com.
example.com.                                    1800 IN NS        ns4.exa-ns5.com.
example.com.                                    1800 IN NS        ns5.exa-ns5.com.
example.com.                                    180 IN MX         1 ASPMX.L.GOOGLE.COM.
example.com.                                    180 IN MX         5 ALT1.ASPMX.L.GOOGLE.COM.
example.com.                                    180 IN MX         5 ALT2.ASPMX.L.GOOGLE.COM.
example.com.                                    180 IN MX         10 ASPMX2.GOOGLEMAIL.COM.
example.com.                                    180 IN MX         10 ASPMX3.GOOGLEMAIL.COM.
*.210.example.com.                            180 IN A          10.10.10.10
*.2107.example.com.                           180 IN A          10.10.10.134
*.2109.example.com.                           180 IN A          10.10.10.138
*.2115.example.com.                           180 IN A          10.10.10.98
*.2117.example.com.                           180 IN A          10.10.10.99
*.2119.example.com.                           180 IN A          10.10.11.2
*.2131.example.com.                           180 IN A          10.10.11.6
*.2246.example.com.                           180 IN A          10.11.11.13
*.2260.example.com.                           180 IN A          10.11.12.184
*.2271.example.com.                           180 IN A          10.11.13.158
*.2298.example.com.                           180 IN A          10.11.14.14
*.2292.example.com.                           180 IN A          10.10.15.65
*.2296.example.com.                           180 IN A          10.10.10.100

Here is the syntax:

You can also dump only the cache or only the zones, and restrict the dump to specific views.

  dumpdb [-all|-cache|-zones] [view ...]
                Dump cache(s) to the dump file (named_dump.db).

List all your files (and directories) with file size over FTP without ls -R (recursive)

A great piece of software is

lftp – sophisticated file transfer program

This little console tool can ease your life significantly with its many enhancements over the simple FTP protocol. This tip is for those who want to list all their files in a directory, or in the entire FTP account, but do not have an ls command with recursive abilities. The only other option would be to manually go through all the directories to fetch their listings, but this can be done automatically by

lftp, using its custom command “find”; if you add the “-l” argument, the output is like “ls -al” – file or directory, file permissions, user and group, file size, date and file name are shown on a single line for each file.

Just execute the command with the proper credentials and the starting directory of your choice. The command output can even be piped to another command.
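
A minimal sketch (the credentials, host and starting directory /public_html below are hypothetical) that produces the recursive “ls -al”-like listing and saves it to a file:

# recursively list /public_html with sizes and save the output to listing.txt
lftp -u myuser,mypassword ftp.example.com -e "find -l /public_html; quit" > listing.txt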
Keep on reading!

MariaDB/MySQL replication error – Error during XID COMMIT: failed to update GTID state in mysql.gtid_slave_pos

When running in aggressive parallel mode, MariaDB/MySQL replication could fail with:

Last_Errno: 1942
Last_Error: Error during XID COMMIT: failed to update GTID state in mysql.gtid_slave_pos: 1062: Duplicate entry '0-46158188501' for key 'PRIMARY'

This table is used for tracking the replication progress, and you can probably just do:

STOP/START SLAVE, i.e. restart the replication, and it should continue without errors.

MariaDB [(none)]> STOP SLAVE;
Query OK, 0 rows affected (0.08 sec)

MariaDB [(none)]> START SLAVE;
Query OK, 0 rows affected (0.00 sec)

Optimistic or aggressive mode runs potentially conflicting transactions in parallel and sometimes has to roll some of them back. In our case the rollback apparently failed, and STOP/START SLAVE saved the replication.

* Additional thoughts

If you try STOP/START and you get the same error, it is probably worth trying to truncate the table “mysql.gtid_slave_pos” if you do not use the GTID replication feature (“show slave status” says “Using_Gtid: No”). And even if you do use GTID, you could probably stop the replication, “change master” to use the old style and start it again. Switching off the aggressive mode might help, too!
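
If you decide to try it, here is a sketch of that sequence (only when the replica really reports “Using_Gtid: No” – verify against your own setup first):

-- stop the replication thread before touching the tracking table
STOP SLAVE;
-- remove the stale GTID state only if GTID replication is NOT used (Using_Gtid: No)
TRUNCATE TABLE mysql.gtid_slave_pos;
START SLAVE;
-- verify the replication is running again
SHOW SLAVE STATUS\G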
Keep on reading!

Smart Array P440 – create RAID 1+0 (RAID 10) using Smart Storage Administrator

This article shows how to create RAID 1+0 on a Smart Array P440 hardware controller and what kind of migration is possible from RAID 1+0 on this controller.

An existing RAID 1+0 can be migrated to RAID 0, RAID 5 or RAID 6 (so RAID level transformation is possible) with a different strip size (any of those supported by the controller) on-the-fly with no data loss!

The no-data-loss migration applies to the tested server, an HP ProLiant DL160 Gen9 – check the manual for your generation.
You may check our more detailed article on how to start HPE Smart Storage Administrator on your server here: Review of Smart Array P440 on a server HP ProLiant DL160 Gen9 using iLO – create, modify, delete array and view controller settings
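
As an alternative to the GUI, HPE’s ssacli command-line tool can create the same kind of array – a rough sketch with hypothetical controller slot and drive bays (check “ssacli help” and your own drive numbering before running anything):

# show the controller configuration and attached drives (hypothetical slot 0)
ssacli ctrl slot=0 show config
# create a RAID 1+0 logical drive from four hypothetical drives
ssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2,1I:1:3,1I:1:4 raid=1+0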

Here are the steps to create RAID 1+0 using Smart Storage Administrator:

STEP 1) Click on “Create Array” to create a new array.

main menu
Logical Devices

Keep on reading!