aptly mirror: ERROR: unable to update: no candidates for debian-installer/binary-amd64/Packages found

Always check what the source repository supports before trying to mirror it! We lost some time before discovering that our source repository does not support udeb and source packages! If you create a mirror with “-with-sources=true -with-udebs=true”, the update process will require files that may not exist in the source repository if it does not offer udeb or source files, and you’ll end up with a broken mirror and errors about missing files!

Downloading & parsing package files...
Downloading http://aptly.example.com/ubuntu/dists/xenial-myrepos/main/binary-amd64/Packages.bz2...
ERROR: unable to update: no candidates for http://aptly-master.example.com/ubuntu/dists/xenial-myrepo/main/debian-installer/binary-amd64/Packages found

If you get an error that “debian-installer/binary-amd64/Packages” was not found, check whether the source repository offers udeb and/or source packages – it probably does not, so drop your mirror and recreate it with one or both of these options:

-with-sources=false -with-udebs=false
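
For example, a sketch assuming the mirror from the error above is named “xenial-myrepo” and mirrors the “main” component of the “xenial-myrepo” distribution:

aptly mirror drop xenial-myrepo
# recreate without source and udeb packages (these are in fact the defaults)
aptly mirror create -with-sources=false -with-udebs=false xenial-myrepo http://aptly-master.example.com/ubuntu/ xenial-myrepo main
aptly mirror update xenial-myrepo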


aptly – ERROR: unable to remove: published repo with storage:prefix/distribution ./mytest-stable not found

Sometimes the user manual may be unclear, and you came here searching for how to drop a published repository.
We use aptly version 1.3.0, and here is the right syntax to remove a published repository.

First, list the published repositories; then take the two parts around the “/” in the name, swap them, and separate them with a space.

The commands will be:

aptly publish list
Published repositories:
  * <name-distribution>/<release> [amd64] publishes {main: [xenial-<name>]: Some description}
aptly publish drop -force-drop <release> <name-distribution>

“name-distribution” is the “[name-distribution]” part of the URL “http://aptly.example.com/[name-distribution]”. For example, the repository URL of myrepo is “http://aptly.example.com/myrepo”, so the name-distribution is “myrepo”.

A real world example

root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish list
Published repositories:
  * myrepo/stable [amd64] publishes {main: [xenial-myrepo]: Stable myrepo packages}
  * test/test [amd64] publishes {test: [test]: Test repo}
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish list --raw
myrepo stable
test test

We want to remove “myrepo/stable”:

root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop -force-drop stable myrepo
Removing /etc/aptly/.aptly/public/etc/dists...
Removing /etc/aptly/.aptly/public/etc/pool...

The published repository has been removed successfully.
root@srv-aptly:~#

The wrong syntax

You might have tried one of these – that’s why you came here:

root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish list           
Published repositories:
  * myrepo/stable [amd64] publishes {main: [xenial-myrepo]: Stable myrepo packages}
  * test/test [amd64] publishes {test: [test]: Test repo}
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish list --raw
myrepo stable
test test
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop myrepo
ERROR: unable to remove: published repo with storage:prefix/distribution ./myrepo not found
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop myrepo stable
ERROR: unable to remove: published repo with storage:prefix/distribution stable/myrepo not found
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop myrepo-stable
ERROR: unable to remove: published repo with storage:prefix/distribution ./myrepo-stable not found
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop -force-drop myrepo-stable
ERROR: unable to remove: published repo with storage:prefix/distribution ./myrepo-stable not found
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop -force-drop myrepo stable
ERROR: unable to remove: published repo with storage:prefix/distribution stable/myrepo not found
root@srv-aptly:~#

aptly mirror – gpgv: Can’t check signature: public key not found

If you want to mirror repositories from your current aptly server to a new server, you must import the GPG key from your old server, because otherwise you are going to encounter the following error:

gpgv: Signature made Fri 22 Apr 2019 17:35:04 AM UTC using DSA key ID FDC7A25E
gpgv: Can't check signature: public key not found

Looks like some keys are missing in your trusted keyring, you may consider importing them from keyserver:

gpg --no-default-keyring --keyring trustedkeys.gpg --keyserver pool.sks-keyservers.net --recv-keys 181482CCFDC7A25E

Sometimes keys are stored in repository root in file named Release.key, to import such key:

wget -O - https://some.repo/repository/Release.key | gpg --no-default-keyring --keyring trustedkeys.gpg --import

ERROR: unable to fetch mirror: verification of detached signature failed: exit status 2

And the mirror command fails. The problem is that

you must import the GPG key from your old server into trustedkeys.gpg (even if you have already imported it on the new server with apt-key!)

Here is how to list, export and import it (we are going to import it into both the default keyring and trustedkeys.gpg, because it is more convenient, but having it in the default keyring is not mandatory).
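
A minimal sketch, assuming the key ID FDC7A25E from the error above (replace it with your repository’s signing key ID):

# on the old server: find the key and export it from the default keyring
gpg --list-keys
gpg --export --armor FDC7A25E > repo-signing.key
# on the new server: import it into the default keyring and into trustedkeys.gpg
gpg --import repo-signing.key
gpg --no-default-keyring --keyring trustedkeys.gpg --import repo-signing.key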

Install OpenStack swift client only

To manage files in the OpenStack cloud you need the swift client. This is not such a tiny Python tool (it has a lot of dependencies), and it offers the following operations on files (files are objects in OpenStack terms):

  • delete – Delete files, directories and sub-directories. Be careful: with a simple command you can delete all your files at once. “Delete a container or objects within a container.”
  • download – Download files to your local storage. “Download objects from containers.”
  • list – List all files in the main directory (i.e. the container); files in sub-directories cannot be listed directly. The output is one file path per line, or you can enable extended output to show the file size, time modified, the type of the file and the full file path. “Lists the containers for the account or the objects for a container.”
  • post – Creates containers and updates metadata; it can also create security keys for temporary URLs. “Updates meta information for the account, container, or object; creates containers if not present.”
  • copy – Copy files to a new place within a container or to a different container. “Copies object, optionally adds meta”
  • stat – Display information for files, containers and your account. No information is available per directory. Most typically you will read the file length, the type and the modification date. “Displays information for the account, container, or object”
  • upload – Uploads files and whole directories into a container. “Uploads files or directories to the given container.” You can read our article here – Upload files and directories with swift in OpenStack
  • capabilities – List the configuration options of your account, like file size limits, object count limits, request rate limits and so on, and which additional middleware your instance supports, like bulk_delete and bulk_upload (the names are self-explanatory). “List cluster capabilities.”
  • tempurl – Generate a temporary URL for a file, protected with a time validity and a key. “Create a temporary URL.”
  • auth – Show your authentication token and storage URL. “Display auth related environment variables.”

The quoted text after the hyphens is from the swift help command. If you do not know what a container is: in simple words, containers are the main (top-level) directories in your account, shown when you use the list command without any argument (no name after “list”):

myuser@myserver:~$ swift --os-username myusr --os-tenant-name myusr --os-password mypass --os-auth-url https://auth01.example.com:5000/v2.0 list
mycontainer1
anyname
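
For instance, uploading a local directory into one of these containers and checking it afterwards (a hypothetical path, same credentials as above):

# upload the local directory ./mydir into the container "mycontainer1"
swift --os-username myusr --os-tenant-name myusr --os-password mypass --os-auth-url https://auth01.example.com:5000/v2.0 upload mycontainer1 ./mydir
# show information about the container
swift --os-username myusr --os-tenant-name myusr --os-password mypass --os-auth-url https://auth01.example.com:5000/v2.0 stat mycontainer1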

The best way is to install the swift client from the package system of your Linux distribution. The package system ensures the programs you install and upgrade stay consistent across dependency upgrades.

The package “python-swiftclient” depends on multiple Python packages, which might be upgraded later, too, but the package system of a Linux distribution ensures the programs of python-swiftclient will keep working with them. By contrast, if you install it manually, even with “pip” as offered on the OpenStack site, it is unclear what will happen, and your client program from “python-swiftclient” could stop working because of an upgraded dependency library. Of course, the packages from the official package system could be a little older (but probably more stable) than manual installs from the official OpenStack site. Still, if you would like to install it with “pip”, here you can check how to do it: https://www.swiftstack.com/docs/integration/python-swiftclient.html
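
For example, installing it from the package system (the package name assumes Ubuntu/Debian, or CentOS with the OpenStack/EPEL repositories enabled):

# Ubuntu/Debian
apt-get install python-swiftclient
# CentOS/RHEL
yum install python-swiftclient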

Expand disk and the root partition of the QEMU virtual server

This article shows how easy it is to grow the size of a QEMU virtual disk and its partitions (along with the ext4 file system). Of course, you can also use this article as an example of expanding the partitions of a physical disk.

Our setup is a QEMU virtual server using a raw image of 20G, and the steps are as follows:

  1. Stop the virtual server
  2. Resize with qemu-img the raw image of the virtual server
  3. Start the virtual server
  4. Get a root ssh shell (probably by using openssh)
  5. Use parted to resize the partition (and fix the GPT of the disk – now the disk is larger, so the GPT table needs fixing).
  6. Use resize2fs to grow the ext4 file system online.

STEP 1) Power off your virtual server.

The best way is to power it off from within the server with the “poweroff” command. Then check whether the QEMU process on the host server has exited. If the VNC port has been released, it is almost certain the QEMU process has exited.
If you use virsh (i.e. libvirt), you may execute:

virsh shutdown my-private-vm-01
virsh destroy my-private-vm-01

The destroy command ensures there is no QEMU process still operating on the disk image file. But it is dangerous for your data if you issue it on a running virtual server, because you may lose unsaved data.
If you run QEMU manually, wait for the process to exit, or, if you have enabled the management console, connect to it using telnet and just quit – this will destroy the QEMU virtual server process – again, be careful with unsaved data.

[root@lsrv1 ~]# ps axuf|grep qemu
root     15575  2.3 50.1 13061032 8112212 ?    Sl   May08 1522:27 qemu-system-x86_64 -enable-kvm -smp 4,maxcpus=8 -daemonize -vnc :30 -cdrom /mnt/vm/isos/CentOS-7-x86_64-Minimal-1810.iso -drive file=/mnt/vm/images/templatesrv-wordpress.bin,cache=none,aio=threads,if=virtio -boot c -net nic,model=virtio,macaddr=00:00:00:00:00:30 -net tap,ifname=tap30,script=no,downscript=no -balloon virtio -m 8144 -monitor telnet:127.0.0.1:5830,server,nowait
[root@srv-host ~]# telnet 127.0.0.1 5830
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
QEMU 2.0.0 monitor - type 'help' for more information
(qemu) q
Connection closed by foreign host.

If you use a web interface (for example WebVirtMgr) check whether the virtual server is in power-off state.

STEP 2) Resize the image file of the virtual server.

Find where the virtual servers’ image files are located in your installation and use qemu-img. We want to increase the size by 174GB, to 200GB.

qemu-img resize my-private-vm-01.img +174GB
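
You can verify the new size before booting the server (a quick check, not shown in the original log):

qemu-img info my-private-vm-01.img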

STEP 3) Start your server.

Start your server by issuing a command with virsh or QEMU (qemu-system-x86_64), or from a web interface if you use one (like WebVirtMgr).
*virsh and libvirt:

virsh start my-private-vm-01

*Manual start of QEMU emulator – qemu-system-x86_64:

qemu-system-x86_64 -enable-kvm -smp 4,maxcpus=8 -daemonize -vnc :30 -cdrom /mnt/vm/isos/CentOS-7-x86_64-Minimal-1810.iso -drive file=/mnt/vm/images/templatesrv-wordpress.bin,cache=none,aio=threads,if=virtio -boot c -net nic,model=virtio,macaddr=00:00:00:00:00:30 -net tap,ifname=tap30,script=no,downscript=no -balloon virtio -m 8144 -monitor telnet:127.0.0.1:5830,server,nowait

Or just use the web browser and start the virtual server from WebVirtMgr if it is what you use.

STEP 4) Open a shell to your server.

We use the OpenSSH client to connect to our server.

STEP 5) Use parted to resize the partition.

The program “parted” will report that the partition table does not use the whole available disk, which is perfectly normal because we’ve just increased the disk size. Simply confirm the fix of the GPT partition table:

parted /dev/vda
GNU Parted 3.2
Using /dev/vda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Warning: Not all of the space available to /dev/vda appears to be used, you can fix the GPT to use all of the space (an extra 367001600 blocks) or continue with the current
setting? 
Fix/Ignore? Fix                                                           
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2097kB  1049kB                        bios_grub
 2      2097kB  4096MB  4094MB  linux-swap(v1)
 3      4096MB  24.0GB  19.9GB  ext4

(parted) resizepart 3 -1                                                  
Warning: Partition /dev/vda3 is being used. Are you sure you want to continue?
parted: invalid token: -1
Yes/No? Yes
End?  [24.0GB]? -1
(parted) p
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2097kB  1049kB                        bios_grub
 2      2097kB  4096MB  4094MB  linux-swap(v1)
 3      4096MB  215GB   211GB   ext4

(parted) q                                                                
Information: You may need to update /etc/fstab.

There is a warning that the partition is in use, but it is perfectly OK to continue.
*parted: reports “invalid token: -1”, but it is accepted for the “End” parameter.

STEP 6) Resize the ext4 file system online

Use the tool resize2fs to resize EXT4.

resize2fs /dev/vda3
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/vda3 is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 13
The filesystem on /dev/vda3 is now 51428620 (4k) blocks long.

To check the resize operation:

srv1-vm ~ # dmesg|grep EXT4
[  449.330140] EXT4-fs (vda3): resizing filesystem from 4859392 to 51428620 blocks
[  449.936044] EXT4-fs (vda3): resized filesystem to 51428620
srv1-vm ~ # df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           798M  3.6M  795M   1% /run
/dev/vda3       193G  5.7G  180G   4% /
tmpfs           3.9G  196K  3.9G   1% /dev/shm

Output log of the whole resize operation

srv-vm1 ~ # parted /dev/vda
GNU Parted 3.2
Using /dev/vda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 26.8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2097kB  1049kB                        bios_grub
 2      2097kB  4096MB  4094MB  linux-swap(v1)
 3      4096MB  24.0GB  19.9GB  ext4

(parted) q                                                                
srv-vm1 ~ # poweroff
Connection to srv-vm1 closed by remote host.
Connection to srv-vm1 closed.
myuser@gw1:~$ sshh srv1-host
srv1-host ~ # cd /mnt/vm/images
srv1-host images # qemu-img resize srv-vm1.img +174GB
Image resized.
srv1-host images # logout
Connection to srv1-host closed.
myuser@gw1:~$ sshh srv-vm1
srv-vm1 ~ # df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           798M  3.5M  795M   1% /run
/dev/vda3        19G  5.7G   12G  33% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           798M     0  798M   0% /run/user/0
srv-vm1 ~ # parted /dev/vda
GNU Parted 3.2
Using /dev/vda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Warning: Not all of the space available to /dev/vda appears to be used, you can fix the GPT to use all of the space (an extra 367001600 blocks) or continue with the current
setting? 
Fix/Ignore? Fix                                                           
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2097kB  1049kB                        bios_grub
 2      2097kB  4096MB  4094MB  linux-swap(v1)
 3      4096MB  24.0GB  19.9GB  ext4

(parted) q
Information: You may need to update /etc/fstab.

srv-vm1 ~ # parted /dev/vda                                           
GNU Parted 3.2
Using /dev/vda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) resizepart 3 -1                                                  
Warning: Partition /dev/vda3 is being used. Are you sure you want to continue?
parted: invalid token: -1                                                 
Yes/No? Yes                                                               
End?  [24.0GB]? -1                                                        
(parted) p                                                                
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2097kB  1049kB                        bios_grub
 2      2097kB  4096MB  4094MB  linux-swap(v1)
 3      4096MB  215GB   211GB   ext4

(parted) q                                                                
Information: You may need to update /etc/fstab.

srv-vm1 ~ # resize2fs /dev/vda3
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/vda3 is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 13
The filesystem on /dev/vda3 is now 51428620 (4k) blocks long.

srv-vm1 ~ # dmesg|grep EXT4
[  449.330140] EXT4-fs (vda3): resizing filesystem from 4859392 to 51428620 blocks
[  449.936044] EXT4-fs (vda3): resized filesystem to 51428620

srv-vm1 ~ # touch /forcefsck
srv-vm1 ~ # reboot

We rebooted the virtual machine with a forced file system check as a precaution, but it is not required.

Bonus – physical disks setup

Probably you would need an additional first step of copying your old disk to the new disk – basically, there are two ways to do it:

  • blindly copy everything with a hardware duplicator or the Linux “dd” command (see the sketch after this list).
  • use gparted to copy the GPT table and the partitions to the new disk
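
A minimal sketch of the first way with “dd” (the device names are examples – double-check them, since the of= disk is overwritten!):

# clone the old disk onto the new, larger disk
dd if=/dev/sdX of=/dev/sdY bs=4M conv=fsync status=progress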

ansible making a link: error – refusing to convert from file to symlink

A quick note for your ansible scripts, and a reminder of the right syntax for making a link with ansible:

- name: change version
  file: src="path-to-existing-file-or-directory" dest="path-to-the-name-of-the-symlink" state=link
  • src must be an existing file on the file system, given with its full path. The link will point to this file!
  • dest must be the name of the symlink, given with its full path. The task will create the link or change where it points.

A common error is to swap src and dest.

Here is an example of this error:

TASK [PHP-prepare : change version] ****************************************
fatal: [localhost]: FAILED! => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "msg": "refusing to convert from file to symlink for /usr/bin/php7.2", "owner": "root", "path": "/usr/bin/php7.2", "size": 4488224, "state": "file", "uid": 0}

The bad ansible code:

- name: change version
  file: src="/etc/alternatives/php" dest="/usr/bin/php7.2" state=link

The right ansible code:

- name: change version
  file: src="/usr/bin/php7.2" dest="/etc/alternatives/php" state=link
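
The same task in the multi-line YAML form of the file module (equivalent, just easier to read):

- name: change version
  file:
    src: /usr/bin/php7.2        # the existing file the link will point to
    dest: /etc/alternatives/php # the symlink itself
    state: link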

Make systemd save logs on the disk

On some Linux distributions, systemd log files are not saved on your disk, but only temporarily in memory, and when you reboot all logs are discarded. So the systemd logs are not persistent, which could mean missing important information when you want to check them after booting from a rescue disk, or even after you just reboot your server. For example,

if some important service fails to boot and your server is unreachable, and you boot from a rescue CD, you have no logs to check why the service failed, nor the (error) output of the service startup process!

Here is how you can make the systemd logs persistent, i.e. save them on the disk. This was tested on CentOS 7, which by default saves the systemd logs in memory!

STEP 1) Prepare the systemd log directory

mkdir -p /var/log/journal/
systemd-tmpfiles --create --prefix /var/log/journal/

STEP 2) Edit systemd configuration and reload the daemon

And ensure your configuration uses “Storage=persistent” in /etc/systemd/journald.conf

grep Storage /etc/systemd/journald.conf
Storage=persistent
systemctl restart systemd-journald

The last line with systemctl restart could be replaced with

killall -USR1 systemd-journald

if you do not want to lose all your current logs in memory!
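
On systemd versions that support it (CentOS 7 does), you can also explicitly move the runtime journal from memory to /var/log/journal:

journalctl --flush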

Bonus – systemd logs from multiple reboots

Here we have logs from 5 reboots. You can also see the right owner (group systemd-journal) and SELinux labels of “/var/log/journal/”:

[root@srv ~]# ls -altrZ /var/log/journal/
drwxr-sr-x+ root systemd-journal system_u:object_r:var_log_t:s0   dbd91181db6b4c9f900d9b3a1651a8d5
drwxr-sr-x+ root systemd-journal system_u:object_r:var_log_t:s0   .
drwxr-xr-x. root root            system_u:object_r:var_log_t:s0   ..
[root@srv ~]# journalctl --disk-usage
Archived and active journals take up 112.0M on disk.
[root@srv ~]# journalctl --list-boots
-4 ec4146b78ac944b8a8d4116f259e09ee Thu 2019-06-06 23:39:14 UTC—Thu 2019-06-06 23:39:37 UTC
-3 ae3d39db626c4592aa84cc68072fbb32 Thu 2019-06-06 23:41:03 UTC—Thu 2019-06-06 23:42:13 UTC
-2 68c1ca07c05b4d59adcc9888c50f4065 Thu 2019-06-06 23:42:57 UTC—Fri 2019-06-07 00:13:27 UTC
-1 f7e8da6aaa8740faa05c4985c92023fd Fri 2019-06-07 00:14:08 UTC—Fri 2019-06-07 00:16:33 UTC
 0 45c00dc29e1a48298d9f87f5421468b4 Fri 2019-06-07 00:17:13 UTC—Mon 2019-06-10 01:39:17 UTC
[root@srv ~]# journalctl --boot=-2
-- Logs begin at Thu 2019-06-06 23:39:14 UTC, end at Mon 2019-06-10 01:39:17 UTC. --
Jun 06 23:42:57 srv systemd-journal[133]: Runtime journal is using 8.0M (max allowed 1.5G, trying to leave 2.3G free of 15.6G available → current limit 1.5G).
Jun 06 23:42:57 srv kernel: microcode: microcode updated early to revision 0x710, date = 2013-06-17
Jun 06 23:42:57 srv kernel: Initializing cgroup subsys cpuset
Jun 06 23:42:57 srv kernel: Initializing cgroup subsys cpu
Jun 06 23:42:57 srv kernel: Initializing cgroup subsys cpuacct
Jun 06 23:42:57 srv kernel: Linux version 3.10.0-514.10.2.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 S
Jun 06 23:42:57 srv kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-3.10.0-514.10.2.el7.x86_64 root=UUID=c9bec791-c77d-4189-b18a-9ddc728ee782 ro crashkernel=auto r
Jun 06 23:42:57 srv kernel: e820: BIOS-provided physical RAM map:
....
....
[root@srv ~]# journalctl --boot=-2 -u auditd
-- Logs begin at Thu 2019-06-06 23:39:14 UTC, end at Mon 2019-06-10 01:50:18 UTC. --
Jun 06 23:43:05 srv systemd[1]: Starting Security Auditing Service...
Jun 06 23:43:05 srv auditd[694]: Started dispatcher: /sbin/audispd pid: 698
Jun 06 23:43:05 srv audispd[698]: priority_boost_parser called with: 4
Jun 06 23:43:05 srv audispd[698]: max_restarts_parser called with: 10
Jun 06 23:43:05 srv audispd[698]: audispd initialized with q_depth=150 and 1 active plugins
Jun 06 23:43:05 srv augenrules[695]: /sbin/augenrules: No change
Jun 06 23:43:05 srv auditd[694]: Init complete, auditd 2.6.5 listening for events (startup state enable)
Jun 06 23:43:05 srv augenrules[695]: No rules
Jun 06 23:43:05 srv augenrules[695]: enabled 1
Jun 06 23:43:05 srv augenrules[695]: failure 1
Jun 06 23:43:05 srv augenrules[695]: pid 694
Jun 06 23:43:05 srv augenrules[695]: rate_limit 0
Jun 06 23:43:05 srv augenrules[695]: backlog_limit 320
Jun 06 23:43:05 srv augenrules[695]: lost 0
Jun 06 23:43:05 srv augenrules[695]: backlog 1
Jun 06 23:43:05 srv systemd[1]: Started Security Auditing Service.
Jun 06 23:56:48 srv auditd[694]: The audit daemon is exiting.
Jun 06 23:56:49 srv systemd[1]: Starting Security Auditing Service...
Jun 06 23:56:49 srv auditd[24744]: Started dispatcher: /sbin/audispd pid: 24746
Jun 06 23:56:49 srv audispd[24746]: audispd initialized with q_depth=250 and 1 active plugins
Jun 06 23:56:49 srv auditd[24744]: Init complete, auditd 2.8.4 listening for events (startup state enable)
Jun 06 23:56:49 srv augenrules[24750]: /sbin/augenrules: No change
Jun 06 23:56:49 srv augenrules[24750]: No rules
Jun 06 23:56:49 srv augenrules[24750]: enabled 1
Jun 06 23:56:49 srv augenrules[24750]: failure 1
Jun 06 23:56:49 srv augenrules[24750]: pid 24744
Jun 06 23:56:49 srv augenrules[24750]: rate_limit 0
Jun 06 23:56:49 srv augenrules[24750]: backlog_limit 320
Jun 06 23:56:49 srv augenrules[24750]: lost 0
Jun 06 23:56:49 srv augenrules[24750]: backlog 1
Jun 06 23:56:49 srv systemd[1]: Started Security Auditing Service.
Jun 07 00:13:26 srv systemd[1]: Stopping Security Auditing Service...
Jun 07 00:13:26 srv systemd[1]: Stopped Security Auditing Service.

Now you have logs of your booting process!

The systemd log files are accessible even if you’ve booted from a rescue CD and you chroot in your system!

Be careful with the disk free space when using disk storage for your systemd logs – Clear or delete systemd logs.
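
For example, you can trim the on-disk journal by size (or cap it permanently with SystemMaxUse= in /etc/systemd/journald.conf):

# keep at most 100M of archived journal files on disk
journalctl --vacuum-size=100M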

Quagga bgpd – check whether the BGP session is established

If your Quagga bgpd daemon is up and running (check out our article for a Minimal quagga bgpd configuration to run and remote configure it) and you wonder how to check that everything is OK and the BGP session is established, here is a quick command-line tip:

STEP 1) Check if your bgp daemon is connected to a remote bgp server (neighbor)

root@srv ~ # vtysh -c "show bgp neighbors"
BGP neighbor is 10.10.10.10, remote AS 16238, local AS 52218, external link
  BGP version 4, remote router ID 10.10.10.131
  BGP state = Established, up for 2d23h57m
  Last read 00:00:03, hold time is 9, keepalive interval is 3 seconds
  Neighbor capabilities:
    4 Byte AS: advertised
    Route refresh: advertised and received(old & new)
    Address family IPv4 Unicast: advertised and received
    Graceful Restart Capabilty: advertised
  Message statistics:
    Inq depth is 0
    Outq depth is 0
                         Sent       Rcvd
    Opens:                  1          1
    Notifications:          0          0
    Updates:                1          2
    Keepalives:         86323      86049
    Route Refresh:          0          0
    Capability:             0          0
    Total:              86325      86052
  Minimum time between advertisement runs is 30 seconds

 For address family: IPv4 Unicast
  Community attribute sent to this neighbor(both)
  Outbound path policy configured
  Outgoing update prefix filter list is *anydns-pfx
  12 accepted prefixes

  Connections established 1; dropped 0
  Last reset never
Local host: 10.10.10.5, Local port: 40172
Foreign host: 10.10.10.10, Foreign port: 179
Nexthop: 10.10.10.5
Nexthop global: ::
Nexthop local: ::
BGP connection: non shared network
Read thread: on  Write thread: off
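
For a quick one-line check per neighbor (handy in scripts), the summary view shows the state of each session – an Established session shows the number of received prefixes instead of a state name:

vtysh -c "show ip bgp summary"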

STEP 2) Check the IP routes

root@srv ~ # vtysh -c "show ip bgp"
BGP table version is 0, local router ID is 10.10.10.5
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
              i internal, r RIB-failure, S Stale, R Removed
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 0.0.0.0          10.10.10.10                         0 30627 i
*> 10.10.11.240/28
                    10.10.10.10           0             0 30627 ?
*> 10.10.11.234/31
                    10.10.10.10           0             0 30627 ?
*> 10.10.12.236/31
                    10.10.10.10           0             0 30627 ?
*> 10.10.13.242/32
                    10.10.10.10           0             0 30627 ?
*> 10.10.14.0/24  10.10.10.10           0             0 30627 ?
*> 10.10.15.0/24  10.10.10.10           0             0 30627 ?
*> 10.10.16.0/24  10.10.10.10           0             0 30627 ?
*> 11.11.11.0/24  0.0.0.0                  0         29873 i
*> 10.10.17.64/26 10.10.10.10           0             0 30627 ?
*> 10.10.18.240/29
                    10.10.10.10           0             0 30627 ?
*> 10.10.10.192/26
                    10.10.10.10           0             0 30627 ?
*> 10.10.10.192/26
                    10.10.10.10           0             0 30627 ?

Total number of prefixes 13

vtysh

vtysh is the command-line tool to manage the Quagga BGP daemon locally.

Bonus Configuration

Here is our basic configuration in “/etc/quagga/bgpd.conf”:

hostname ns5.anycast.local1
password pppppppppp
log file /var/log/quagga/bgpd.log

router bgp 52218
bgp router-id 10.10.10.5
network 11.11.11.0/24
neighbor 10.10.10.10 remote-as 16238
neighbor 10.10.10.10 prefix-list anydns-pfx out
!
ip prefix-list anydns-pfx seq 5 permit 11.11.11.0/24
!
line vty

* All IPs are changed.

Make the Gluster daemon resolve the proper hostnames of your peers

This is a useful tip for GlusterFS nodes. When adding a peer to a Gluster cluster, you may use its hostname (or IP), and the Gluster daemon on the added server tries to resolve a hostname from the IP that contacts it (or, if the cluster has multiple peers, multiple reverse resolutions happen).
Here is a simple example. The cluster will have two peers (srv1.example.com and srv2.example.com):
Add the peer srv2.example.com to your cluster from srv1.example.com (in fact, at this point the cluster consists only of the local Gluster daemon):

[root@srv1 ~]# gluster peer probe srv2.example.com
peer probe: success.
[root@srv1 ~]# gluster peer status
Number of Peers: 1

Hostname: srv2.example.com
Uuid: 8322b61c-a94d-491b-afc9-9f10eb8e8b92
State: Peer in Cluster (Connected)

But when you check the status of the cluster on the second server, srv2.example.com, the second server shows the PTR record of the first server’s IP:

[root@srv2 ~]# gluster peer status
Number of Peers: 1

Hostname: static.123.123.123.123.clients.your-server.de
Uuid: 3d273834-eca6-4997-871f-1a282ca90fb0
State: Peer in Cluster (Connected)

You see the hostname is a temporary name, static.123.123.123.123.clients.your-server.de – the PTR record of srv1.example.com’s IP. You may have problems in the future if you leave it like that, and it is a really uninformative domain name to have in your cluster’s configuration. Changing a peer’s hostname in a cluster is really difficult and dangerous, so one option is to change the PTR records of the servers’ IPs; but if you cannot do that, or it is too slow, you can just use the “/etc/hosts” file!

Use “/etc/hosts” to make the Gluster daemon resolve the proper hostnames of your peers!

Edit “/etc/hosts” on (the first and) the second (peer) server (add the line below; do not remove the other lines if they exist). Replace the IP with your first server’s IP and hostname.

123.123.123.123 srv1.example.com

Then add it to the cluster on the first server and check again on the second server:

[root@srv2 ~]# gluster peer status
Number of Peers: 1

Hostname: srv1.example.com
Uuid: 3d273834-eca6-4997-871f-1a282ca90fb0
State: Peer in Cluster (Connected)

And on the first server:

[root@srv1 ~]# gluster peer status
Number of Peers: 1

Hostname: srv2.example.com
Uuid: 8322b61c-a94d-491b-afc9-9f10eb8e8b92
State: Peer in Cluster (Connected)

Now the two servers have the right hostnames for their peers, and these hostnames will be used in the Gluster configuration saved on the servers.

In fact, it is a good idea to add all your cluster peers in the “/etc/hosts” on all servers:

123.123.123.123 srv1.example.com
124.124.124.124 srv2.example.com
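
You can verify what a Gluster daemon will resolve with getent, which uses the system resolver and consults “/etc/hosts” first:

getent hosts srv1.example.com
getent hosts srv2.example.com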

Minimal quagga bgpd configuration to run and remote configure it

There are three steps to configure your Quagga bgpd daemon so that it can run and be configured remotely. The idea of this article is to show how to run quagga bgpd with a minimal configuration, so you can then hand over the credentials to a network administrator.
Summary – 3 files to change:

  1. /etc/quagga/daemons – enable BGPD daemon
  2. /etc/quagga/debian.conf – which IP to listen to
  3. /etc/quagga/bgpd.conf – BGP daemon configuration

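A minimal sketch of the three changes (the values are examples consistent with the bonus configuration above – adjust them to your setup):

# 1) /etc/quagga/daemons – enable the BGP daemon
zebra=yes
bgpd=yes

# 2) /etc/quagga/debian.conf – the -A option sets the IP the VTY listens on
bgpd_options=" --daemon -A 10.10.10.5"

# 3) /etc/quagga/bgpd.conf – minimal configuration with a password and remote VTY access
hostname srv
password pppppppppp
router bgp 52218
line vty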