Ubuntu with PHP 7.2 and mcrypt module

As mentioned in our previous article PHP 7.2 or PHP 7.3 with mcrypt – manual build, PHP versions 7.2 and 7.3 do not include the PHP mcrypt module. The mcrypt module was part of PHP 5 through 7.1; it was deprecated in 7.1 and removed in 7.2.
In this article we show how to build the mcrypt module for Ubuntu, based on the previous article linked above. We decided to write it because Ubuntu is hugely popular and yet, unlike some other Linux distributions (such as Gentoo, which created a package), it has no PHP mcrypt package in its package system.

For our purposes we use Ubuntu 18.04.2 LTS, and here are the steps to get the mcrypt PHP module:

STEP 1) Update and install the mcrypt library and its header development package

sudo apt update -y
sudo apt install -y libmcrypt-dev

STEP 2) Install the GNU GCC build utilities and the PHP development package

These provide the compiler and the PHP headers needed to build the module.

sudo apt install -y build-essential
sudo apt install -y php7.2-dev
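
You can verify that the build tools match the installed PHP – the API version printed by phpize should correspond to the installed PHP 7.2:

php -v
phpize --version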

STEP 3) Download the PHP mcrypt module and build it.

cd
mkdir mcrypt-php-module-manual
cd mcrypt-php-module-manual
wget https://pecl.php.net/get/mcrypt-1.0.2.tgz
tar xzf mcrypt-1.0.2.tgz
cd mcrypt-1.0.2
phpize
aclocal
libtoolize --force
autoheader
autoconf
./configure
make
sudo make install
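
Before enabling the module permanently, you can load it ad hoc to check that it builds and loads against the current PHP (“make install” has already copied mcrypt.so into PHP’s extension directory):

php -d extension=mcrypt.so -m | grep -i mcrypt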

STEP 4) Load the module in the PHP configuration (we use PHP-FPM and PHP-CLI) and hold the PHP packages so that a future PHP version is not installed when apt upgrade is run

Because we compile the PHP mcrypt module against the specific, currently installed PHP, we do not want PHP to be upgraded when there is an update, leaving the mcrypt module unable to load. Each change of the PHP version (an upgrade) requires a recompile against the new PHP version. For more on holding and unholding Ubuntu packages, see apt-mark – upgrade with the exception of certain packages. Of course, if there is an update for PHP you must install it; just recompile the mcrypt module, too!

echo "extension=mcrypt.so" > 20-mcrypt.ini
sudo cp 20-mcrypt.ini /etc/php/7.2/cli/conf.d/20-mcrypt.ini
sudo cp 20-mcrypt.ini /etc/php/7.2/fpm/conf.d/20-mcrypt.ini
sudo apt-mark hold php-cli php7.2-cli php-fpm php7.2-fpm
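
After copying the configuration files, restart the PHP-FPM service and confirm the module is loaded (the service name is the one shipped by the Ubuntu php7.2-fpm package):

sudo systemctl restart php7.2-fpm
php -m | grep -i mcrypt

And when you do decide to install a PHP update, unhold the packages, upgrade and then repeat STEP 2) and STEP 3) against the new PHP version:

sudo apt-mark unhold php-cli php7.2-cli php-fpm php7.2-fpm
sudo apt upgrade -y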

Keep on reading!

PHP 7.2 or PHP 7.3 with mcrypt – manual build

Newer PHP versions do not include the PHP mcrypt library. The mcrypt module was part of PHP 5 through 7.1; it was deprecated in 7.1 and removed in 7.2. If you open the php.net documentation for the mcrypt PHP functions you will see:

This function has been DEPRECATED as of PHP 7.1.0 and REMOVED as of PHP 7.2.0. Relying on this function is highly discouraged.

The mcrypt module is now in PHP PECL (the repository for PHP extensions) at https://pecl.php.net/package/mcrypt. As you can see in the description, this is a legacy module, which

Provides bindings for the unmaintained libmcrypt.

so it is strongly recommended to replace it with OpenSSL (for example).
Still, if you need this legacy mcrypt module, you may want to build it manually for your currently installed PHP. The generic dependencies are:

  1. libmcrypt and its headers (if the Linux distribution splits the binary and the headers into separate packages)
  2. GNU GCC
  3. PHP 7.2+
  4. the latest mcrypt module source from https://pecl.php.net/package/mcrypt – for example, currently https://pecl.php.net/get/mcrypt-1.0.2.tgz

Then download and build it:
mkdir /root/mcrypt-php-module-manual
cd /root/mcrypt-php-module-manual
wget https://pecl.php.net/get/mcrypt-1.0.2.tgz
tar xzf mcrypt-1.0.2.tgz
cd mcrypt-1.0.2
phpize
aclocal
libtoolize --force
autoheader
autoconf
./configure
make
make install

Do not use “make -j N” (“make -j 8”, for example), because it may fail to compile.
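
You can confirm where “make install” placed the module – the extension directory reported by php-config (part of the PHP development package) should now contain mcrypt.so:

ls -al "$(php-config --extension-dir)"/mcrypt.so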
Keep on reading!

Ubuntu apt – InRelease is not valid yet (invalid for another 151d 18h 5min 59s)

An invalid system time can cause your server (or, more likely, your virtual server or docker instance) to be unable to use Ubuntu’s packaging system apt. This is a typical issue when a virtual or docker instance does not use automatic time synchronization.

It is really important for even small installations and virtualized environments to have automatic time synchronization, or the service they provide could become error-prone over time!

The “apt” command just reports that the repositories are not valid yet:

myuser@my-server-pc:~$ sudo su
root@my-server-pc:/home/myuser# apt update
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Reading package lists... Done                                 
E: Release file for http://archive.ubuntu.com/ubuntu/dists/bionic-updates/InRelease is not valid yet (invalid for another 151d 18h 5min 59s). Updates for this repository will not be applied.
E: Release file for http://archive.ubuntu.com/ubuntu/dists/bionic-backports/InRelease is not valid yet (invalid for another 151d 17h 16min 26s). Updates for this repository will not be applied.
E: Release file for http://archive.ubuntu.com/ubuntu/dists/bionic-security/InRelease is not valid yet (invalid for another 151d 17h 15min 3s). Updates for this repository will not be applied.
root@my-server-pc:/home/myuser# date
Thu Jan 17 15:11:56 UTC 2019

The clock shows 17 January 2019, but it is actually 18 June 2019! This is an Ubuntu virtual server with a minimal installation.

The solution is to synchronize your clock manually or, better, to use a time synchronization service!
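
For example, with systemd both options are one command away (a minimal sketch – see also the systemd-timesyncd article below; adjust the date to the real current time):

# one-off manual fix
date -s "2019-06-18 12:00:00"
# the better way - enable the built-in NTP client
timedatectl set-ntp true
timedatectl status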

Keep on reading!

CentOS 7 dracut-initqueue timeout and could not boot – warning /dev/disk/by-id/md-uuid- does not exist

Let’s say you update your software RAID layout – create, delete or modify an array – and after rebooting the system your server does not start normally. After loading your remote video console (KVM) you see that the boot process reports a missing device and you land in a (dracut) console. Your system is in “Emergency mode”.

The warning:

dracut-initqueue[504]: Warning: dracut-initqueue timeout - starting timeout scripts
dracut-initqueue[504]: Warning: dracut-initqueue timeout - starting timeout scripts
dracut-initqueue[504]: Warning: dracut-initqueue timeout - starting timeout scripts
....
....
dracut-initqueue[504]: Warning: could not boot.
dracut-initqueue[504]: Warning: /dev/disk/by-id/md-uuid-2fdc509e:8dd05ed3:c2350cb4:ea5a620d does not exist
      Starting Dracut Emergency Shell...
Warning: /dev/disk/by-id/md-uuid-2fdc509e:8dd05ed3:c2350cb4:ea5a620d does not exist

Generating "/run/initramfs/rdsosreport.txt"


Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.


dracut:/#

This article is for problems that occur after manipulating storage RAID devices, not the system (root) or boot devices! If the missing device in the warning “/dev/disk/by-id/md-uuid-2fdc509e:8dd05ed3:c2350cb4:ea5a620d does not exist” is either the root or the boot device, the solution proposed here – just exiting the Emergency Shell – would not help! In those cases the problem must be resolved before exiting the Emergency Shell, so that the devices and their file systems are available. There is another article on the subject, which may help the reader in such cases – CentOS 8 dracut-initqueue timeout and could not boot – warning /dev/disk/by-id/md-uuid- does not exist – inactive raids.
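
Before exiting the Emergency Shell you can quickly check which RAID arrays the initramfs actually sees (a sketch – the md-uuid from the warning should appear here once the array is assembled):

cat /proc/mdstat
mdadm --detail --scan
ls /dev/disk/by-id/ | grep md-uuid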

SCREENSHOT 1) The boot process reports multiple warning messages of dracut-initqueue timeout, because a drive cannot be found.

Warning: dracut-initqueue timeout – starting timeout scripts

Keep on reading!

Make systemd save logs on the disk

On some Linux distributions, systemd log files are not saved on the disk but only kept temporarily in memory, and when you reboot all logs are discarded. So the systemd logs are not persistent, which could mean missing important information when you want to check them after booting from a rescue disk, or even after simply rebooting your server. For example,

if some important service failed to start and your server is unreachable, and you boot into a rescue CD, you have no logs to check why the service failed, nor the (error) output from the process of starting the services!

Here is how you can make the systemd logs persistent, i.e. save them on the disk. This is tested on CentOS 7, which by default keeps the systemd logs in memory!

STEP 1) Prepare the systemd log directory

mkdir -p /var/log/journal/
systemd-tmpfiles --create --prefix /var/log/journal/

STEP 2) Edit systemd configuration and reload the daemon

Ensure your configuration uses “Storage=persistent” in /etc/systemd/journald.conf:

grep Storage /etc/systemd/journald.conf
Storage=persistent
systemctl restart systemd-journald

The last line with systemctl restart could be replaced with

killall -USR1 systemd-journald

if you do not want to lose all your current logs in memory!
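
Since the journal now lives on the disk, you may also want to cap its size in /etc/systemd/journald.conf (the values below are just examples – adjust them to your free disk space):

SystemMaxUse=500M
SystemKeepFree=1G

Restart (or signal) systemd-journald after changing them, as shown above.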

Bonus – systemd logs from multiple reboots

Here we have logs from 5 boots. You can also see the correct owner (group systemd-journal) and the SELinux labels of “/var/log/journal/”:

[root@srv ~]# ls -altrZ /var/log/journal/
drwxr-sr-x+ root systemd-journal system_u:object_r:var_log_t:s0   dbd91181db6b4c9f900d9b3a1651a8d5
drwxr-sr-x+ root systemd-journal system_u:object_r:var_log_t:s0   .
drwxr-xr-x. root root            system_u:object_r:var_log_t:s0   ..
[root@srv ~]# journalctl --disk-usage
Archived and active journals take up 112.0M on disk.
[root@srv ~]# journalctl --list-boots
-4 ec4146b78ac944b8a8d4116f259e09ee Thu 2019-06-06 23:39:14 UTC—Thu 2019-06-06 23:39:37 UTC
-3 ae3d39db626c4592aa84cc68072fbb32 Thu 2019-06-06 23:41:03 UTC—Thu 2019-06-06 23:42:13 UTC
-2 68c1ca07c05b4d59adcc9888c50f4065 Thu 2019-06-06 23:42:57 UTC—Fri 2019-06-07 00:13:27 UTC
-1 f7e8da6aaa8740faa05c4985c92023fd Fri 2019-06-07 00:14:08 UTC—Fri 2019-06-07 00:16:33 UTC
 0 45c00dc29e1a48298d9f87f5421468b4 Fri 2019-06-07 00:17:13 UTC—Mon 2019-06-10 01:39:17 UTC
[root@srv ~]# journalctl --boot=-2
-- Logs begin at Thu 2019-06-06 23:39:14 UTC, end at Mon 2019-06-10 01:39:17 UTC. --
Jun 06 23:42:57 srv systemd-journal[133]: Runtime journal is using 8.0M (max allowed 1.5G, trying to leave 2.3G free of 15.6G available → current limit 1.5G).
Jun 06 23:42:57 srv kernel: microcode: microcode updated early to revision 0x710, date = 2013-06-17
Jun 06 23:42:57 srv kernel: Initializing cgroup subsys cpuset
Jun 06 23:42:57 srv kernel: Initializing cgroup subsys cpu
Jun 06 23:42:57 srv kernel: Initializing cgroup subsys cpuacct
Jun 06 23:42:57 srv kernel: Linux version 3.10.0-514.10.2.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 S
Jun 06 23:42:57 srv kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-3.10.0-514.10.2.el7.x86_64 root=UUID=c9bec791-c77d-4189-b18a-9ddc728ee782 ro crashkernel=auto r
Jun 06 23:42:57 srv kernel: e820: BIOS-provided physical RAM map:
....
....
[root@srv ~]# journalctl --boot=-2 -u auditd
-- Logs begin at Thu 2019-06-06 23:39:14 UTC, end at Mon 2019-06-10 01:50:18 UTC. --
Jun 06 23:43:05 srv systemd[1]: Starting Security Auditing Service...
Jun 06 23:43:05 srv auditd[694]: Started dispatcher: /sbin/audispd pid: 698
Jun 06 23:43:05 srv audispd[698]: priority_boost_parser called with: 4
Jun 06 23:43:05 srv audispd[698]: max_restarts_parser called with: 10
Jun 06 23:43:05 srv audispd[698]: audispd initialized with q_depth=150 and 1 active plugins
Jun 06 23:43:05 srv augenrules[695]: /sbin/augenrules: No change
Jun 06 23:43:05 srv auditd[694]: Init complete, auditd 2.6.5 listening for events (startup state enable)
Jun 06 23:43:05 srv augenrules[695]: No rules
Jun 06 23:43:05 srv augenrules[695]: enabled 1
Jun 06 23:43:05 srv augenrules[695]: failure 1
Jun 06 23:43:05 srv augenrules[695]: pid 694
Jun 06 23:43:05 srv augenrules[695]: rate_limit 0
Jun 06 23:43:05 srv augenrules[695]: backlog_limit 320
Jun 06 23:43:05 srv augenrules[695]: lost 0
Jun 06 23:43:05 srv augenrules[695]: backlog 1
Jun 06 23:43:05 srv systemd[1]: Started Security Auditing Service.
Jun 06 23:56:48 srv auditd[694]: The audit daemon is exiting.
Jun 06 23:56:49 srv systemd[1]: Starting Security Auditing Service...
Jun 06 23:56:49 srv auditd[24744]: Started dispatcher: /sbin/audispd pid: 24746
Jun 06 23:56:49 srv audispd[24746]: audispd initialized with q_depth=250 and 1 active plugins
Jun 06 23:56:49 srv auditd[24744]: Init complete, auditd 2.8.4 listening for events (startup state enable)
Jun 06 23:56:49 srv augenrules[24750]: /sbin/augenrules: No change
Jun 06 23:56:49 srv augenrules[24750]: No rules
Jun 06 23:56:49 srv augenrules[24750]: enabled 1
Jun 06 23:56:49 srv augenrules[24750]: failure 1
Jun 06 23:56:49 srv augenrules[24750]: pid 24744
Jun 06 23:56:49 srv augenrules[24750]: rate_limit 0
Jun 06 23:56:49 srv augenrules[24750]: backlog_limit 320
Jun 06 23:56:49 srv augenrules[24750]: lost 0
Jun 06 23:56:49 srv augenrules[24750]: backlog 1
Jun 06 23:56:49 srv systemd[1]: Started Security Auditing Service.
Jun 07 00:13:26 srv systemd[1]: Stopping Security Auditing Service...
Jun 07 00:13:26 srv systemd[1]: Stopped Security Auditing Service.

Now you have logs of your booting process!

The systemd log files are accessible even if you’ve booted from a rescue CD and you chroot in your system!

Be careful with the disk free space when using disk storage for your systemd logs – Clear or delete systemd logs.
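
For example, archived journals can be shrunk on demand by size or by age:

journalctl --vacuum-size=200M
journalctl --vacuum-time=2weeks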

Failed to start Security Audit Service, Authorization Manager and Login Service

A power outage caused one of our servers to shut down unexpectedly, and after it had been powered up the server did not come back online. The server was unreachable and, apparently, the network interfaces were not brought up.
Loading the IPMI KVM Console and rebooting the server, three errors appeared on the screen during the CentOS 7 boot:

[FAILED] Failed to start Security Audit Service.
See 'systemctl status auditd.service' for details.
....
....
[FAILED] Failed to start Authorization Manager.
See 'systemctl status polkit.service' for details.
....
....
[FAILED] Failed to start Login Service.
See 'systemctl status systemd-logind.service' for details.

And after the above last line, the system stopped loading.
The disks are clean, but there was no login service, so you cannot log in to the server with the keyboard and monitor! There was no network, as mentioned above, which meant no way to log in to the server at all. You might not realize it, but if the auditd service is enabled, you probably use SELinux!

SCREENSHOT 1) The three important services failed to start – Security Audit Service, Authorization Manager and Login Service.

So we ended up unable to log in to our server.


We are not sure what exactly caused this problem (it seems strange for a perfectly working SELinux-enabled CentOS 7 server to have mislabeled files in the root file system only because of an unexpected shutdown), but to fix the issue and bring your server back to life

you need a rescue CD/USB/DVD/PXE server to boot from, mount the disks and relabel your root file system.

STEP 1) Boot from a rescue CD/USB/DVD/PXE Server.

In our case, we used the IPMI KVM Console, mounted a Gentoo ISO disk and booted from it to get a bash shell on the system. Our root resides on software RAID 1, so cat /proc/mdstat and mount your root file system somewhere (/mnt/gentoo is there by default…).

STEP 2) Boot into the rescue Gentoo CD and mount your root file system.

Rescue Gentoo CD

STEP 3) Create a file “.autorelabel” in the mount point of your root file system.

So in our case, we mounted our CentOS 7 root file system in /mnt/gentoo, and you must create a file with the path “/mnt/gentoo/.autorelabel”. Then umount and reboot, and a few minutes later your server will be back from the dead. A quick and handy piece of advice: edit your /etc/fstab to mount only the root file system, commenting out all the other big storage mounts – if possible, of course. We have big storage with millions of files in /mnt/storage-01, and we put a “#” to comment out its line – we do not want to wait for the relabeling of that file system, because the problem is apparently in our root file system! If possible, it is highly recommended to relabel only the root file system in such situations, so you regain shell control over your server fast.
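
A minimal sketch of the whole rescue sequence, assuming the CentOS root file system is on the software RAID device /dev/md126 (your device will differ – check /proc/mdstat first):

cat /proc/mdstat
mount /dev/md126 /mnt/gentoo
# optionally comment out the big storage mounts in /mnt/gentoo/etc/fstab here
touch /mnt/gentoo/.autorelabel
umount /mnt/gentoo
reboot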

Bonus – booted in rescue but no logs

OK, we booted into the rescue system and tried to see (with journalctl in the chrooted /mnt/gentoo) what error caused auditd, polkit and systemd-logind to fail to start, but it turned out that by default the systemd logs are not persistent on the disk in CentOS 7, so when you reboot into a rescue system you have no systemd logs from the last boot! As an additional piece of advice, you may consider enabling persistent systemd logs!

Quagga bgpd – check whether the BGP session is established

If your Quagga bgpd daemon is up and running (check out our article Minimal quagga bgpd configuration to run and remote configure it) and you wonder how to check that everything is OK and the BGP session is established, here is a quick command-line tip:

STEP 1) Check if your bgp daemon is connected to a remote bgp server (neighbor)

root@srv ~ # vtysh -c "show bgp neighbors"
BGP neighbor is 10.10.10.10, remote AS 16238, local AS 52218, external link
  BGP version 4, remote router ID 10.10.10.131
  BGP state = Established, up for 2d23h57m
  Last read 00:00:03, hold time is 9, keepalive interval is 3 seconds
  Neighbor capabilities:
    4 Byte AS: advertised
    Route refresh: advertised and received(old & new)
    Address family IPv4 Unicast: advertised and received
    Graceful Restart Capabilty: advertised
  Message statistics:
    Inq depth is 0
    Outq depth is 0
                         Sent       Rcvd
    Opens:                  1          1
    Notifications:          0          0
    Updates:                1          2
    Keepalives:         86323      86049
    Route Refresh:          0          0
    Capability:             0          0
    Total:              86325      86052
  Minimum time between advertisement runs is 30 seconds

 For address family: IPv4 Unicast
  Community attribute sent to this neighbor(both)
  Outbound path policy configured
  Outgoing update prefix filter list is *anydns-pfx
  12 accepted prefixes

  Connections established 1; dropped 0
  Last reset never
Local host: 10.10.10.5, Local port: 40172
Foreign host: 10.10.10.10, Foreign port: 179
Nexthop: 10.10.10.5
Nexthop global: ::
Nexthop local: ::
BGP connection: non shared network
Read thread: on  Write thread: off
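
For scripting or monitoring, you can extract just the session state line (using the example neighbor IP from above):

vtysh -c "show bgp neighbors 10.10.10.10" | grep "BGP state"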

STEP 2) Check the IP routes

root@srv ~ # vtysh -c "show ip bgp"
BGP table version is 0, local router ID is 10.10.10.5
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
              i internal, r RIB-failure, S Stale, R Removed
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 0.0.0.0          10.10.10.10                         0 30627 i
*> 10.10.11.240/28
                    10.10.10.10           0             0 30627 ?
*> 10.10.11.234/31
                    10.10.10.10           0             0 30627 ?
*> 10.10.12.236/31
                    10.10.10.10           0             0 30627 ?
*> 10.10.13.242/32
                    10.10.10.10           0             0 30627 ?
*> 10.10.14.0/24  10.10.10.10           0             0 30627 ?
*> 10.10.15.0/24  10.10.10.10           0             0 30627 ?
*> 10.10.16.0/24  10.10.10.10           0             0 30627 ?
*> 11.11.11.0/24  0.0.0.0                  0         29873 i
*> 10.10.17.64/26 10.10.10.10           0             0 30627 ?
*> 10.10.18.240/29
                    10.10.10.10           0             0 30627 ?
*> 10.10.10.192/26
                    10.10.10.10           0             0 30627 ?
*> 10.10.10.192/26
                    10.10.10.10           0             0 30627 ?

Total number of prefixes 13

vtysh

vtysh is the command-line tool to manage the Quagga BGP daemon locally.

Bonus Configuration

Here is our basic configuration in “/etc/quagga/bgpd.conf”:

hostname ns5.anycast.local1
password pppppppppp
log file /var/log/quagga/bgpd.log

router bgp 52218
bgp router-id 10.10.10.5
network 11.11.11.0/24
neighbor 10.10.10.10 remote-as 16238
neighbor 10.10.10.10 prefix-list anydns-pfx out
!
ip prefix-list anydns-pfx seq 5 permit 11.11.11.0/24
!
line vty

* All IPs are changed.
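
With “line vty” present and a password set, the daemon also accepts remote configuration over its VTY port – bgpd listens on TCP port 2605 by default (the IP is the example one from above):

telnet 10.10.10.5 2605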

simple time synchronization of a server (laptop, desktop) using the built-in systemd-timesyncd service

Here we offer you a relatively new way of keeping your server’s time (or your computer’s and laptop’s) synchronized with a reliable time service on the Internet.

systemd has a built-in feature – a small daemon (systemd-timesyncd) that periodically contacts NTP servers and keeps the server’s clock synchronized with them!

Of course, your Linux distribution must use systemd. This article is for Linux systems using systemd, not for the other init systems (sysvinit, OpenRC, upstart, runit and so on). Most modern Linux distributions use systemd – Fedora, Ubuntu, CentOS, RedHat, Gentoo, SuSe and many more.

Once there were not many options for keeping your server’s clock synced with NTP servers. Now we have simpler programs (some of which, by the way, can act as clients only!) – chrony, openntpd, systemd-timesyncd and more.
This time synchronization service does not open server port 123 – it does not have the server capabilities of an NTP server. So you won’t need any firewall rules (as you would for ntpd). It is a simple client service that syncs your time and keeps it synchronized, with accuracy within about 100ms.

Do not expect complex clock discipline like training or compensating the clock. It just sets the time according to a selected time server from the configuration file “/etc/systemd/timesyncd.conf”. The polling interval is adjusted automatically between the minimal and maximal values from the configuration file, and the daemon decides the actual interval based on the clock drift it estimates. The clock may even run backward if it needs to be set to a time in the past. The quality of the clock source cannot be checked, so in any case you should not expect better than about 100ms accuracy.
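
The NTP servers and the polling limits mentioned above are set in “/etc/systemd/timesyncd.conf”. A minimal example (the server names here are only illustrative – use the ones appropriate for you):

[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
FallbackNTP=2.pool.ntp.org
PollIntervalMinSec=32
PollIntervalMaxSec=2048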

Of course, this service is actively developed, and it has already evolved considerably from the basic client it once was!

Here is how you can enable it, step by step:
Keep on reading!

make the Gluster daemon resolve the proper hostnames of your peers

This is a useful tip for GlusterFS nodes. When adding a peer to a Gluster cluster you may use its hostname (or IP), and the Gluster daemon on the added server tries to resolve a hostname from the IP that contacts it (or, if the cluster has multiple peers, multiple IP lookups happen).
Here is a simple example. The cluster will have two peers (srv1.example.com and srv2.example.com):
Add the peer srv2.example.com to your cluster from srv1.example.com (in fact, at this point the cluster consists only of the local Gluster daemon):

[root@srv1 ~]# gluster peer probe srv2.example.com
peer probe: success.
[root@srv1 ~]# gluster peer status
Number of Peers: 1

Hostname: srv2.example.com
Uuid: 8322b61c-a94d-491b-afc9-9f10eb8e8b92
State: Peer in Cluster (Connected)

And when you check the status of the cluster on the second server, srv2.example.com, it shows the PTR domain of the first server:

[root@srv2 ~]# gluster peer status
Number of Peers: 1

Hostname: static.123.123.123.123.clients.your-server.de
Uuid: 3d273834-eca6-4997-871f-1a282ca90fb0
State: Peer in Cluster (Connected)

You see the hostname is a generic name – static.123.123.123.123.clients.your-server.de, the PTR record of srv1.example.com. You may have problems in the future if you leave it like that, and it is a really uninformative domain name to have in your cluster’s configuration. Changing a peer’s hostname in a cluster is really difficult and dangerous, so the alternative is to change the PTR records of the servers’ IPs; but if you cannot do that, or it would take too long, you can just use the “/etc/hosts” file!

Use “/etc/hosts” to make the Gluster daemon resolve the proper hostnames of your peers!

Edit “/etc/hosts” on the second (peer) server, and on the first one, too (add the line; do not remove the others if they exist). Replace the IP with your first server’s IP and hostname.

123.123.123.123 srv1.example.com

And then add it to the cluster from the first server and check again on the second server:

[root@srv2 ~]# gluster peer status
Number of Peers: 1

Hostname: srv1.example.com
Uuid: 3d273834-eca6-4997-871f-1a282ca90fb0
State: Peer in Cluster (Connected)

And on the first server:

[root@srv1 ~]# gluster peer status
Number of Peers: 1

Hostname: srv2.example.com
Uuid: 8322b61c-a94d-491b-afc9-9f10eb8e8b92
State: Peer in Cluster (Connected)

Now the two servers have the right hostnames for their peers, and these hostnames will be used in the Gluster configuration saved on the servers.

In fact, it is a good idea to add all your cluster peers to “/etc/hosts” on all servers:

123.123.123.123 srv1.example.com
124.124.124.124 srv2.example.com
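
You can verify what each node resolves before (re)probing – entries in “/etc/hosts” take precedence with the default nsswitch configuration:

getent hosts srv1.example.com
getent hosts srv2.example.com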

Minimal quagga bgpd configuration to run and remote configure it

There are three steps to configure your Quagga bgpd daemon so that it runs and can be configured remotely. The idea of this article is to show how to run the Quagga bgpd with a minimal configuration, after which you might hand the credentials to a network administrator.
Summary – 3 files to change:

  1. /etc/quagga/daemons – enable the BGPD daemon (see the sketch after this list)
  2. /etc/quagga/debian.conf – which IP to listen on
  3. /etc/quagga/bgpd.conf – the BGP daemon configuration
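
For example, enabling the BGP daemon in “/etc/quagga/daemons” is a matter of flipping two lines (zebra is usually enabled as well, because bgpd relies on it to install routes; the other daemons stay “no”):

zebra=yes
bgpd=yes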

Here are the steps:
Keep on reading!