Repairing damaged backup GPT with gdisk

A problem with network shared storage can lead to a damaged file system or even damaged GPT tables, and gdisk may help in this case.
Here is the nasty error:

[root@srv1 ~]# kpartx /dev/loop0
Alternate GPT is invalid, using primary GPT.
loop0p1 : 0 83881984 /dev/loop0 2048
[root@srv1 ~]# parted /dev/loop0
GNU Parted 3.1
Using /dev/loop0
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Error: The backup GPT table is corrupt, but the primary appears OK, so that will be used.
OK/Cancel? OK                                                             
Model: Loopback device (loopback)
Disk /dev/loop0: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  42.9GB  42.9GB               primary

(parted)

So parted reports the backup GPT is damaged, but how can it be fixed? The solution is to open the device with gdisk and issue the write (“w”) command. gdisk also shows the exact GPT error with the verify (“v”) command:

[root@srv1 ~]# gdisk 
GPT fdisk (gdisk) version 0.8.10

Type device filename, or press <Enter> to exit: /dev/loop0
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.

Warning! One or more CRCs don't match. You should repair the disk!

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: damaged

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************

Command (? for help): p
Disk /dev/loop0: 83886080 sectors, 40.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 7EDF123B-FBC4-4C09-B636-922BD165F862
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 83886046
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048        83884031   40.0 GiB    0700  primary

Command (? for help): v

Caution: The CRC for the backup partition table is invalid. This table may
be corrupt. This program will automatically create a new backup partition
table when you save your partitions.

Identified 1 problems!

Command (? for help): p
Disk /dev/loop0: 83886080 sectors, 40.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 7EDF123B-FBC4-4C09-B636-922BD165F862
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 83886046
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048        83884031   40.0 GiB    0700  primary

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/loop0.

Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

And the GPT backup in this loop device is fixed. Executing parted again reports no problems:

[root@srv1 ~]# parted /dev/loop0 print
Model: Loopback device (loopback)
Disk /dev/loop0: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  42.9GB  42.9GB               primary

Verify also reports no errors. More options are available:

[root@srv1 ~]# gdisk /dev/loop0
GPT fdisk (gdisk) version 0.8.10

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): h
b  back up GPT data to a file
c  change a partition's name
d  delete a partition
i  show detailed information on a partition
l  list known partition types
n  add a new partition
o  create a new empty GUID partition table (GPT)
p  print the partition table
q  quit without saving changes
r  recovery and transformation options (experts only)
s  sort partitions
t  change a partition's type code
v  verify disk
w  write table to disk and exit
x  extra functionality (experts only)
?  print this menu

Command (? for help): v

No problems found. 4029 free sectors (2.0 MiB) available in 2
segments, the largest of which is 2015 (1007.5 KiB) in size.

Command (? for help): q
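
The same repair can be scripted; here is a hedged sketch, assuming gdisk accepts its commands on standard input just like in the interactive session above:

printf 'w\nY\n' | gdisk /dev/loop0     # regenerate the backup GPT from the primary table and write it
printf 'v\nq\n' | gdisk /dev/loop0     # verify afterwards; expect "No problems found."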

storcli with multiple disks from different enclosures

Creating a virtual drive with the AVAGO storcli command-line tool under Linux. Two examples are included:

  1. All disks are from one of the enclosures; all of them are listed explicitly.
  2. Disks from two enclosures are included – one controller with two enclosures.

Check out how to Install the new storcli to manage (LSI/AVAGO/Broadcom) MegaRAID controller under CentOS 7
There are 31 disks in the 36 hard disk bays; 5 are left out on purpose for the examples.

The initial state of the controller and the disks:

livecd ~ # /opt/MegaRAID/storcli/storcli /c0 show
Generating detailed summary of the adapter, it may take a while to complete.

CLI Version = 007.0510.0000.0000 May 4, 2018
Operating system = Linux 4.19.72-gentoo
Controller = 0
Status = Success
Description = None

Product Name = LSI 2108 MegaRAID
Serial Number = FW-ABQRCBEAARBWA
SAS Address =  5003048004015f00
PCI Address = 00:06:00:00
System Time = 07/20/2020 22:58:35
Mfg. Date = 00/00/00
Controller Time = 07/20/2020 22:58:36
FW Package Build = 12.15.0-0239
FW Version = 2.130.403-4660
BIOS Version = 3.30.02.2_4.16.08.00_0x06060A05
Driver Name = megaraid_sas
Driver Version = 07.706.03.00-rc1
Vendor Id = 0x1000
Device Id = 0x79
SubVendor Id = 0x15D9
SubDevice Id = 0x700
Host Interface = PCI-E
Device Interface = SAS-6G
Bus Number = 6
Device Number = 0
Function Number = 0
Physical Drives = 31

PD LIST :
=======

-----------------------------------------------------------------------------------
EID:Slt DID State DG     Size Intf Med SED PI SeSz Model                   Sp Type 
-----------------------------------------------------------------------------------
12:0      0 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HUA723020ALA640 D  -    
12:1      1 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    D  -    
12:2      2 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:3      3 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:4      4 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:5      5 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS722T2TALA604    D  -    
12:6      6 UGood -  1.817 TB SATA HDD N   N  512B TOSHIBA DT01ACA200      D  -    
12:7      7 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:8      8 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:9      9 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:10    10 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:11    11 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
37:0     13 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:1     14 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:2     15 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:3     16 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HUA723020ALA640 U  -    
37:4     17 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:6     19 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:7     20 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS722T2TALA604    U  -    
37:8     21 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    U  -    
37:10    23 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:11    24 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:13    26 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:14    27 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    U  -    
37:16    29 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    U  -    
37:17    30 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HUA723020ALA640 U  -    
37:19    32 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:20    33 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:21    34 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS722T2TALA604    U  -    
37:22    35 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:23    36 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    U  -    
-----------------------------------------------------------------------------------

EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info
SeSz-Sector Size|Sp-Spun|U-Up|D-Down/PowerSave|T-Transition|F-Foreign
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded
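
A hedged sketch of the two scenarios listed at the top (the RAID level is only an example – pick the one you need, and adjust the enclosure IDs and slots to the PD list above):

# 1) all disks from a single enclosure (EID 12), listed explicitly
/opt/MegaRAID/storcli/storcli /c0 add vd type=raid6 drives=12:0,12:1,12:2,12:3,12:4,12:5,12:6,12:7,12:8,12:9,12:10,12:11
# 2) disks from two enclosures (EID 12 and EID 37) in one virtual drive
/opt/MegaRAID/storcli/storcli /c0 add vd type=raid6 drives=12:0-11,37:0-4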

Keep on reading!

Cron missing path – executing docker/podman – adding network: failed to locate iptables

If you have ever executed some complex scripts using the cron system, you have inevitably discovered that the Linux environment is different from the one of the login or SSH shell. The different environment tends to lead to a missing or different PATH environment variable! Here is what happens with podman starting a container from a cron script:

time="2020-04-19T20:45:20Z" level=error msg="Error adding network: failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
time="2020-04-19T20:45:20Z" level=error msg="Error while adding pod to CNI network \"podman\": failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
Error: unable to start container "onedrive-cli": error configuring network namespace for container d297cf80db20441d4258a1acc7d810444795d1ca8730ab242d9fe8a13eaa697d: failed to locate iptables: exec: "iptables": executable file not found in $PATH

The iptables executable is missing because the PATH variable is different from the one of the login or SSH shell. Executing the commands or the script under SSH or a login shell will result in no error and a proper podman (docker) execution!

A similar problem could happen with other software trying to execute iptables or another tool that is not found in cron’s PATH, because cron’s environment is very limited and differs from the interactive shell environment.

To ensure the PATH is like the user’s (root’s) environment, just source the “profile” or “.bashrc” file of the current user before executing the script, or in its first lines.
This will do the trick.

. /etc/profile

Or the user’s custom one:

. ~/.bashrc

Or the default OS bashrc:

. /etc/bashrc

The dot may be replaced by “source”:

source /etc/bashrc

All (environment) variables will be available after the source command.
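
A hedged way to capture what cron actually provides, for a comparison like the one below, is a temporary crontab entry that dumps the environment to a file (remove the entry afterwards):

* * * * * /usr/bin/env > /tmp/cron-env.txt 2>&1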

Here is the difference:
The environment without sourcing the profile/bashrc file:

 
LANG=en_US.UTF-8
XDG_SESSION_ID=19118
USER=root
PWD=/root
HOME=/root
SHELL=/bin/sh
SHLVL=1
LOGNAME=root
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
XDG_RUNTIME_DIR=/run/user/0
PATH=/usr/bin:/bin
_=/usr/bin/env

Sourcing the “/etc/profile” file:

LANG=en_US.UTF-8
HISTCONTROL=ignoredups
HOSTNAME=srv.example.com
XDG_SESSION_ID=19165
USER=root
PWD=/root
HOME=/root
MAIL=/var/spool/mail/root
SHELL=/bin/bash
SHLVL=1
LOGNAME=root
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
XDG_RUNTIME_DIR=/run/user/0
PATH=/usr/local/sbin:/usr/sbin:/usr/bin:/bin
HISTSIZE=1000
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/usr/bin/env

Multiple additional environment variables, which could be important for the user’s scripts executed by cron.

And in CentOS 8, iptables happens to be in “/usr/sbin/iptables” – and /usr/sbin is not included in the default cron PATH variable!
Of course, the PATH variable may be set in the cron scheduler with crontab (by just adding a PATH line), but that only works until the next path that is missing from it while present in the user’s shell PATH! It is simply better to ensure the two environments are the same every time by sourcing an environment configuration file such as /etc/profile, the user’s .bashrc, or the default one in /etc/bashrc.
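
A hedged sketch of how the first lines of such a cron script could look (the container name is taken from the error output above):

#!/bin/bash
# source the login profile so PATH includes /usr/sbin and iptables can be found
. /etc/profile
podman start onedrive-cli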

Inactive array – mdadm: Cannot get array info for /dev/md126

Replacing a disk may sometimes be challenging, especially with software RAID. If a software RAID1 has gone inactive, this article might be for you!
After booting from a live CD or a rescue PXE system, all RAID devices came up inactive despite the loaded personalities. We have a similar article on the subject – Recovering MD array and mdadm: Cannot get array info for /dev/md0

livecd ~ # cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] [raid0] [raid1] [raid10] [linear] [multipath] 
md125 : inactive sdb3[1](S)
      1047552 blocks super 1.2
       
md126 : inactive sdb1[1](S)
      52427776 blocks super 1.2
       
md127 : inactive sdb2[1](S)
      16515072 blocks super 1.2
       
unused devices: <none>

The personalities are loaded, which means the kernel modules are loaded successfully – “[raid6] [raid5] [raid4] [raid0] [raid1] [raid10] [linear] [multipath]”. Still, something went wrong, the devices’ personalities are unrecognized and they are in inactive state.
A device in inactive state cannot be recovered and disks cannot be added to it:

livecd ~ # mdadm --add /dev/md125 /dev/sda3
mdadm: Cannot get array info for /dev/md125

In general, to recover a RAID in inactive state (a hedged command sketch for this example follows the list):

  1. Check whether the kernel modules are loaded. If the RAID setup uses RAID1, the “Personalities” line in /proc/mdstat should include it as “[raid1]”
  2. Try to run the device with “mdadm --run”
  3. Add the missing device to the RAID device with “mdadm --add” if the status of the RAID device goes to “active (auto-read-only)” or just “active”.
  4. Wait for the RAID device to recover.
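
A hedged sketch of steps 2-4 with the device names from this example:

mdadm --run /dev/md125             # try to activate the inactive array
cat /proc/mdstat                   # the status should change to active (auto-read-only) or active
mdadm --add /dev/md125 /dev/sda3   # add the missing mirror member
cat /proc/mdstat                   # check the recovery progress until it finishes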

Keep on reading!

display packages from the Gentoo emerge resume list

A quick tip and an easy hack to see the emerge resume list, which will be used by the “emerge --resume” command. There are two resume lists – resume and resume_backup – in the binary file /var/cache/edb/mtimedb.
The first two commands are for Python 3+.
The resume and resume_backup lists

python -c 'import portage; print(portage.mtimedb.get("resume", {}).get("mergelist"))'
python -c 'import portage; print(portage.mtimedb.get("resume_backup", {}).get("mergelist"))'

If you need the commands for Python 2, use the following:

python -c 'import portage; print portage.mtimedb.get("resume", {}).get("mergelist")'
python -c 'import portage; print portage.mtimedb.get("resume_backup", {}).get("mergelist")'

Real world example

The resume list left by interrupting the following emerge command:

desktop ~ # emerge -v --nodeps $(qlist -IC|grep dev-qt)

These are the packages that would be merged, in order:

[ebuild     U  ] dev-qt/designer-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="declarative webkit -debug -test" 8,493 KiB
[ebuild     U  ] dev-qt/linguist-tools-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="qml -debug -test" 0 KiB
[ebuild     U  ] dev-qt/qdbus-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test" 0 KiB
[ebuild     U  ] dev-qt/qtbluetooth-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -qml -test" 2,727 KiB
[ebuild   R    ] dev-qt/qtchooser-66::gentoo  USE="-test" 32 KiB
[ebuild     U  ] dev-qt/qtconcurrent-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test" 48,549 KiB
[ebuild     U  ] dev-qt/qtcore-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="icu -debug (-systemd) -test" 0 KiB
[ebuild     U  ] dev-qt/qtdbus-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test" 0 KiB
[ebuild     U  ] dev-qt/qtdeclarative-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="jit widgets -debug -gles2 -localstorage -test" 20,753 KiB
[ebuild     U  ] dev-qt/qtgraphicaleffects-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test" 13,713 KiB
[ebuild     U  ] dev-qt/qtgui-5.14.0-r1:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="accessibility dbus egl evdev gif ibus jpeg libinput png udev vnc wayland%* xcb -debug -eglfs -gles2 -test -tslib -tuio" 0 KiB
[ebuild     U  ] dev-qt/qthelp-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test" 0 KiB
[ebuild     U  ] dev-qt/qtimageformats-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="mng -debug -test (-jpeg2k%)" 1,768 KiB
[ebuild     U  ] dev-qt/qtlocation-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test" 5,974 KiB
[ebuild   R    ] dev-qt/qtlockedfile-2.4.1_p20171024::gentoo  USE="-doc" 694 KiB
[ebuild     U  ] dev-qt/qtmultimedia-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="alsa gstreamer openal pulseaudio qml widgets -debug -gles2 -test" 3,671 KiB
[ebuild     U  ] dev-qt/qtnetwork-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="libproxy networkmanager ssl -bindist -connman -debug -sctp -test" 0 KiB
[ebuild     U  ] dev-qt/qtopengl-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -gles2 -test" 0 KiB
[ebuild     U  ] dev-qt/qtpaths-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test" 0 KiB
[ebuild     U  ] dev-qt/qtpositioning-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="qml -debug -geoclue -test" 0 KiB
[ebuild     U  ] dev-qt/qtprintsupport-5.14.0-r1:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="cups -debug -gles2 -test" 0 KiB
[ebuild     U  ] dev-qt/qtquickcontrols2-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test -widgets" 7,947 KiB
[ebuild     U  ] dev-qt/qtquickcontrols-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="widgets -debug -test" 5,843 KiB
[ebuild     U  ] dev-qt/qtscript-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="jit scripttools -debug -test" 2,584 KiB
[ebuild     U  ] dev-qt/qtsensors-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -qml -test" 1,996 KiB
[ebuild   R    ] dev-qt/qtsingleapplication-2.6.1_p20171024::gentoo  USE="X -doc" 0 KiB
[ebuild     U  ] dev-qt/qtspeech-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test" 99 KiB
[ebuild     U  ] dev-qt/qtsql-5.14.0:5/5.14.0::gentoo [5.12.2:5/5.12.2::gentoo] USE="mysql sqlite -debug -freetds -oci8 -odbc -postgres -test" 0 KiB
[ebuild     U  ] dev-qt/qtsvg-5.14.0-r1:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test" 1,822 KiB
[ebuild     U  ] dev-qt/qttest-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test" 0 KiB
[ebuild     U  ] dev-qt/qttranslations-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test" 1,302 KiB
[ebuild     U  ] dev-qt/qtvirtualkeyboard-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="spell xcb -debug -handwriting -test" 10,705 KiB
[ebuild     U  ] dev-qt/qtwayland-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="libinput xcomposite -debug -test" 532 KiB
[ebuild     U  ] dev-qt/qtwebchannel-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="qml -debug -test" 192 KiB
[ebuild     U  ] dev-qt/qtwebengine-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="alsa pulseaudio system-icu widgets -bindist -debug -designer -jumbo-build -pax_kernel -system-ffmpeg -test (-geolocation%*)" 235,904 KiB
[ebuild     U  ] dev-qt/qtwebkit-5.212.0_pre20190629:5/5.212::gentoo [5.212.0_pre20180120:5/5.212::gentoo] USE="X geolocation hyphen jit multimedia opengl printsupport qml -gles2 -gstreamer -nsplugin -orientation -webp" 12,166 KiB
[ebuild     U  ] dev-qt/qtwidgets-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="gtk png xcb -debug -gles2 -test" 0 KiB
[ebuild     U  ] dev-qt/qtx11extras-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test" 125 KiB
[ebuild     U  ] dev-qt/qtxml-5.14.0:5/5.14::gentoo [5.12.2:5/5.12::gentoo] USE="-debug -test" 0 KiB
[ebuild     U  ] dev-qt/qtxmlpatterns-5.14.0:5/5.14::gentoo [5.11.1:5/5.11::gentoo] USE="-debug -qml% -test" 1,362 KiB

Total: 40 packages (37 upgrades, 3 reinstalls), Size of downloads: 388,941 KiB


>>> Verifying ebuild manifests
>>> Running pre-merge checks for dev-qt/qtwebkit-5.212.0_pre20190629

>>> Emerging (1 of 40) dev-qt/designer-5.14.0::gentoo
 * Fetching files in the background.
 * To view fetch progress, run in another terminal:
 * tail -f /var/log/emerge-fetch.log
 * qttools-everywhere-src-5.14.0.tar.xz BLAKE2B SHA512 size ;-) ...                                                                                                  [ ok ]
!!! Failed to set new SELinux execution context. Is your current SELinux context allowed to run Portage?
>>> Unpacking source...
>>> Unpacking qttools-everywhere-src-5.14.0.tar.xz to /var/tmp/portage/dev-qt/designer-5.14.0/work
^C

Exiting on signal 2
sandbox:stop  caught signal 2 in pid 22886
sandbox:stop  Send signal 4 more times to force SIGKILL
Sandboxed process killed by signal: Interrupt
^C * The ebuild phase 'die_hooks' has been killed by signal 2.

 * Messages for package dev-qt/designer-5.14.0:

We interrupted it on purpose with Ctrl+C.

The output of the Python commands above:

desktop ~ # python -c 'import portage; print(portage.mtimedb.get("resume", {}).get("mergelist"))'
[['ebuild', '/', 'dev-qt/designer-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/linguist-tools-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qdbus-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtbluetooth-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtchooser-66', 'merge'], ['ebuild', '/', 'dev-qt/qtconcurrent-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtcore-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtdbus-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtdeclarative-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtgraphicaleffects-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtgui-5.14.0-r1', 'merge'], ['ebuild', '/', 'dev-qt/qthelp-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtimageformats-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtlocation-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtlockedfile-2.4.1_p20171024', 'merge'], ['ebuild', '/', 'dev-qt/qtmultimedia-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtnetwork-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtopengl-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtpaths-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtpositioning-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtprintsupport-5.14.0-r1', 'merge'], ['ebuild', '/', 'dev-qt/qtquickcontrols2-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtquickcontrols-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtscript-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtsensors-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtsingleapplication-2.6.1_p20171024', 'merge'], ['ebuild', '/', 'dev-qt/qtspeech-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtsql-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtsvg-5.14.0-r1', 'merge'], ['ebuild', '/', 'dev-qt/qttest-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qttranslations-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtvirtualkeyboard-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtwayland-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtwebchannel-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtwebengine-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtwebkit-5.212.0_pre20190629', 'merge'], ['ebuild', '/', 'dev-qt/qtwidgets-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtx11extras-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtxml-5.14.0', 'merge'], ['ebuild', '/', 'dev-qt/qtxmlpatterns-5.14.0', 'merge']]
desktop ~ # python -c 'import portage; print(portage.mtimedb.get("resume_backup", {}).get("mergelist"))'
None
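
The nested-list output above is a bit hard to read; a hedged one-liner (assuming the same mtimedb “mergelist” structure shown above) prints just the package atoms, one per line:

python -c 'import portage; [print(e[2]) for e in portage.mtimedb.get("resume", {}).get("mergelist") or []]'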

bonding – write error – device or resource busy – operation not permitted

Recently, there was a little bit of confusion when following the article about activating network bonding without ifenslave – How to enable Linux bonding without ifenslave. At first, there were a couple of errors:

livecd ~ # echo balance-alb > /sys/class/net/bond0/bonding/mode
-bash: echo: write error: Device or resource busy
livecd ~ # echo "+enp129s0f0" > /sys/class/net/bond0/bonding/slaves
-bash: echo: write error: Operation not permitted

Or a similar error when changing the bonding mode:

livecd ~ # echo 4 > /sys/class/net/bond0/bonding/mode
-bash: echo: write error: Directory not empty
livecd ~ # echo 802.3ad > /sys/class/net/bond0/bonding/mode
-bash: echo: write error: Directory not empty

The server had just booted into a rescue live CD and there was no active network configuration:

SCREENSHOT 1) Apparently, the /sys/class/net/bond0/bonding/mode and /sys/class/net/bond0/bonding/slaves are in read only state.

No writes mean no new configuration can be applied and the bonding cannot be configured (or reconfigured).


The bonding mode can be changed only when the bonding device is in the DOWN state.

Network interfaces can be added to the bonding device only if they are in the DOWN state, too.

In addition, the bonding mode can only be changed if there are no network interfaces added to the bonding interface.
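
A hedged sketch of an order that satisfies all three rules (the interface names are the ones from this example):

ip link set bond0 down                                    # the bonding device must be DOWN to change its mode
echo "-enp129s0f0" > /sys/class/net/bond0/bonding/slaves  # remove any enslaved interfaces first (skip if none)
echo 802.3ad > /sys/class/net/bond0/bonding/mode          # now the mode change is accepted
ip link set enp129s0f0 down                               # the slave must be DOWN before adding it
echo "+enp129s0f0" > /sys/class/net/bond0/bonding/slaves
ip link set bond0 up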

Keep on reading!

root cannot delete, move or change a file – Operation not permitted or Permission denied – immutable attribute

If you are the root user and some file (or files, or directories) cannot be deleted, removed, renamed or changed, you are probably dealing with the immutable attribute being set (by a colleague of yours – installation setups tend not to set such attributes).

Here is what it looks like to have such a file:

root@srv.remote /etc/apache2/vhosts.d # mv example.com.conf /root/old/apache/
mv: cannot move `example.com.conf' to `/root/old/apache/example.com.conf': Operation not permitted
root@srv.remote /etc/apache2/vhosts.d # lsattr example.com.conf
----i--------e- example.com.conf
root@srv.remote /etc/apache2/vhosts.d # rm example.com.conf
rm: cannot remove `example.com.conf': Operation not permitted
root@srv.remote /etc/apache2/vhosts.d # echo "teeest" >> example.com.conf
-bash: example.com.conf: Permission denied

Here is how to turn the attribute off.

You first need to clear the file’s immutable attribute and then do whatever you intended to do in the first place – delete, rename, change and so on.

chattr -i filename.txt

In continuation of our example above:

root@srv.remote /etc/apache2/vhosts.d # chattr -i example.com.conf
root@srv.remote /etc/apache2/vhosts.d # lsattr example.com.conf
-------------e- example.com.conf
root@srv.remote /etc/apache2/vhosts.d # mv example.com.conf /root/old/apache/
root@srv.remote /etc/apache2/vhosts.d #

As you can see, with no immutable attribute there is no problem moving the file!
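
A couple of hedged extras in the same spirit – finding other immutable files under a directory (lsattr marks them with “i”, as in the output above) and setting the attribute back on if it was there on purpose:

lsattr -R /etc/apache2 2>/dev/null | grep -- '^----i'
chattr +i /etc/apache2/vhosts.d/example.com.conf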

And just a note: you need to install the package e2fsprogs (not always present in the default installation) in your Linux distribution to have lsattr, chattr and more!

Update supermicro X10SLM-F firmware BIOS under Linux with the SUM cli

Here is how we updated our Supermicro X10SLM-F server with the latest firmware available at the time.

Our current BIOS firmware version is 2.0:

[root@srv ~]# lshw|grep -A 14 "core$"
  *-core
       description: Motherboard
       product: X10SLM-F
       vendor: Supermicro
       physical id: 0
       version: 1.02
       serial: 11111111111
       slot: To be filled by O.E.M.
     *-firmware
          description: BIOS
          vendor: American Megatrends Inc.
          physical id: 0
          version: 2.0
          date: 04/24/2014
          size: 64KiB
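
A hedged sketch of the in-band commands with Supermicro’s SUM tool (the BIOS image file name is hypothetical – take it from the BIOS package downloaded for the X10SLM-F):

./sum -c GetBiosInfo                      # show the currently flashed BIOS version
./sum -c UpdateBios --file ./X10SLM7.B21  # flash the new image, then reboot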

Keep on reading!

sed with delimiter – any other single character in replacing words or characters

It is a not so well-known fact that this powerful Unix-world command

sed

can be used with a delimiter other than “/” (when replacing words or characters), which is what nearly 100% of the Internet examples use.
You probably know the syntax from the manual like:

s/regexp/replacement/
              Attempt to match regexp against the pattern space.  If successful, replace that portion matched with replacement.  The replacement may  con‐
              tain  the special character & to refer to that portion of the pattern space which matched, and the special escapes \1 through \9 to refer to
              the corresponding matching sub-expressions in the regexp.

The Internet examples always use “/” as shown in the manual, BUT you CAN use “#” instead of “/”, for example:

s#regexp#replacement#g

or

s:regexp:replacement:g

or you can even use a letter character (not a special character such as “!@#$%^”):

sRregexpRreplacementRg

and with a lower-case “r” (not so readable, as you can see, but possible!):

srregexprreplacementrg

Here are the examples:

sed 's@else@elllllse@g' test.php
sed 's#else#elllllse#g' test.php
sed 's:else:elllllse:g' test.php
sed 's~else~elllllse~g' test.php
sed 's!else!elllllse!g' test.php
sed 'spelsepelllllsepg' test.php
sed 'sRelseRelllllseRg' test.php
#could be paired with "-i", too
sed 'spelsepelllllsepg' -i test.php
sed 'sRelseRelllllseRg' -i test.php

What does this mean for you? The simple implication is that you are not forced to escape the delimiter character in your regexp and replacement parts. Suppose you want to replace part of a Unix/Linux path string such as /home/myuser/Desktop/mydirectory1/myfile.log: if you use the default “/”, you MUST escape all the “/” characters in your string:

sed 's/\/home\/myuser\/Desktop\/mydirectory1\/myfile.log/\/home\/user\/Desktop\/mydirectory2\/myfile.log/g' test.log

versus the simpler and more readable version with “#”

sed 's#/home/myuser/Desktop/mydirectory1/myfile.log#/home/user/Desktop/mydirectory2/myfile.log#g' test.log

And consider the case when you use a variable:

cat file.test | sed "s/\[version\]/${PKGVERSION}/g"
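
The two tricks can be combined; a hedged example with a hypothetical “[installdir]” placeholder and a variable holding a path, so nothing needs escaping:

PKGDIR="/home/user/Desktop/mydirectory2"
sed "s#\[installdir\]#${PKGDIR}#g" file.test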

* additional explanations:

  • /g (or whatever character is used as the delimiter, like “@”, “:”, “~” and so on) means “Apply the replacement to all matches to the regexp, not just the first.”
  • -i (used in some of the examples above) is for in-place replacement – the file given to the sed command will be modified and the replacement saved in the file (the default behavior is to print the modified output to the standard output – the console)

Clear or delete systemd logs

Linux distributions with systemd use the journald service to collect and store the system logs. Here are a couple of tips if you have problems with the space they occupy. All systemd Linux distributions support it – CentOS 7, Ubuntu 16+, Fedora, openSUSE and so on.

TIP 1) Remove the old archived logs older than 10 days

Time-based removal of old logs; it removes the old journal files. This command does not change the configuration, so it only has a one-time effect.

journalctl --vacuum-time=10d

TIP 2) Reduce the old archived logs to a total of 1G

Size-based removal of old logs; it reduces the archived logs to the specified total size. This command does not change the configuration, so it only has a one-time effect.

journalctl --vacuum-size=1024M
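
For a permanent limit instead of the one-off vacuuming above, a hedged sketch is to set SystemMaxUse in /etc/systemd/journald.conf and restart the journal service so the limit is enforced automatically:

# /etc/systemd/journald.conf
SystemMaxUse=1G

systemctl restart systemd-journald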

TIP 3) Show disk usage

[root@srv ~]# journalctl --disk-usage
Archived and active journals take up 785.5M on disk.

TIP 4) Show all logs and information for them

Where the log files are, how much space they occupy, the time period of the entries in them, and more:

[root@srv0 ~]# journalctl --header
File Path: /run/log/journal/bb5aa69bfa2b483386443ede7a92a45a/system.journal
File ID: 7deaff4610a94c82aab85386597e825b
Machine ID: bb5aa69bfa2b483386443ede7a92a45a
Boot ID: e89507fa3f814b8a96f9d70914c120b0
Sequential Number ID: 541a59d9a68342d3980d86e565a48c7d
State: ONLINE
Compatible Flags:
Incompatible Flags: COMPRESSED-XZ
Header size: 240
Arena size: 102956816
Data Hash Table Size: 178744
Field Hash Table Size: 333
Rotate Suggested: no
Head Sequential Number: 1239405
Tail Sequential Number: 1341511
Head Realtime Timestamp: mon 2018-06-25 10:09:09 UTC
Tail Realtime Timestamp: tue 2018-06-28 00:07:10 UTC
Tail Monotonic Timestamp: 1month 2d 21h 6min 47.287s
Objects: 258136
Entry Objects: 102107
Data Objects: 129585
Data Hash Table Fill: 72.5%
Field Objects: 36
Field Hash Table Fill: 10.8%
Tag Objects: 0
Entry Array Objects: 26406
Disk usage: 98.1M

File Path: /run/log/journal/bb5aa69bfa2b483386443ede7a92a45a/system@541a59d9a68342d3980d86e565a48c7d-000000000011512d-00056f3f3ff56172.journal
File ID: a8e1f042bc144df78f37005b0b555a82
Machine ID: bb5aa69bfa2b483386443ede7a92a45a
Boot ID: e89507fa3f814b8a96f9d70914c120b0
Sequential Number ID: 541a59d9a68342d3980d86e565a48c7d
State: ARCHIVED
Compatible Flags:
Incompatible Flags: COMPRESSED-XZ
Header size: 240
Arena size: 102956816
Data Hash Table Size: 178744
Field Hash Table Size: 333
Rotate Suggested: no
Head Sequential Number: 1134893
Tail Sequential Number: 1239404
Head Realtime Timestamp: fri 2018-06-22 18:32:10 UTC
Tail Realtime Timestamp: mon 2018-06-25 10:09:09 UTC
Tail Monotonic Timestamp: 1month 7h 8min 45.991s
Objects: 263445
Entry Objects: 104512
Data Objects: 131953
Data Hash Table Fill: 73.8%
Field Objects: 34
Field Hash Table Fill: 10.2%
Tag Objects: 0
Entry Array Objects: 26944
Disk usage: 98.1M

File Path: /run/log/journal/bb5aa69bfa2b483386443ede7a92a45a/system@541a59d9a68342d3980d86e565a48c7d-00000000000fc083-00056f0d6ef70320.journal
File ID: 007272b1f181487d83116bee96c40c30
Machine ID: bb5aa69bfa2b483386443ede7a92a45a
Boot ID: e89507fa3f814b8a96f9d70914c120b0
Sequential Number ID: 541a59d9a68342d3980d86e565a48c7d
State: ARCHIVED
Compatible Flags:
Incompatible Flags: COMPRESSED-XZ
Header size: 240
Arena size: 102956816
Data Hash Table Size: 178744
Field Hash Table Size: 333
Rotate Suggested: no
Head Sequential Number: 1032323
Tail Sequential Number: 1134892
Head Realtime Timestamp: wed 2018-06-20 07:06:10 UTC
Tail Realtime Timestamp: fri 2018-06-22 18:32:10 UTC
Tail Monotonic Timestamp: 4w 2h 1min 46.980s
Objects: 263998
Entry Objects: 102570
Data Objects: 133456
Data Hash Table Fill: 74.7%
Field Objects: 37
Field Hash Table Fill: 11.1%
Tag Objects: 0
Entry Array Objects: 27933
Disk usage: 98.1M

File Path: /run/log/journal/bb5aa69bfa2b483386443ede7a92a45a/system@541a59d9a68342d3980d86e565a48c7d-00000000000e3f67-00056ee09376d287.journal
File ID: 4c268268a6e34d6297c7ae9ca01fc31b
Machine ID: bb5aa69bfa2b483386443ede7a92a45a
Boot ID: e89507fa3f814b8a96f9d70914c120b0
Sequential Number ID: 541a59d9a68342d3980d86e565a48c7d
State: ARCHIVED
Compatible Flags:
Incompatible Flags: COMPRESSED-XZ
Header size: 240
Arena size: 102956816
Data Hash Table Size: 178744
Field Hash Table Size: 333
Rotate Suggested: yes
Head Sequential Number: 933735
Tail Sequential Number: 1032322
Head Realtime Timestamp: mon 2018-06-18 01:35:09 UTC
Tail Realtime Timestamp: wed 2018-06-20 07:06:10 UTC
Tail Monotonic Timestamp: 3w 4d 14h 35min 47.248s
Objects: 261498
Entry Objects: 98588
Data Objects: 134059
Data Hash Table Fill: 75.0%
Field Objects: 40
Field Hash Table Fill: 12.0%
Tag Objects: 0
Entry Array Objects: 28809
Disk usage: 98.1M

File Path: /run/log/journal/bb5aa69bfa2b483386443ede7a92a45a/system@541a59d9a68342d3980d86e565a48c7d-00000000000cbf3b-00056eb3ea255b2a.journal
File ID: 7019a721bebe4e00b76410a6e7052c6d
Machine ID: bb5aa69bfa2b483386443ede7a92a45a
Boot ID: e89507fa3f814b8a96f9d70914c120b0
Sequential Number ID: 541a59d9a68342d3980d86e565a48c7d
State: ARCHIVED
Compatible Flags:
Incompatible Flags: COMPRESSED-XZ
Header size: 240
Arena size: 102956816
Data Hash Table Size: 178744
Field Hash Table Size: 333
Rotate Suggested: yes
Head Sequential Number: 835387
Tail Sequential Number: 933734
Head Realtime Timestamp: fri 2018-06-15 20:18:10 UTC
Tail Realtime Timestamp: mon 2018-06-18 01:35:09 UTC
Tail Monotonic Timestamp: 3w 2d 9h 4min 45.972s
Objects: 260794
Entry Objects: 98348
Data Objects: 134060
Data Hash Table Fill: 75.0%
Field Objects: 33
Field Hash Table Fill: 9.9%
Tag Objects: 0
Entry Array Objects: 28351
Disk usage: 98.1M

File Path: /run/log/journal/bb5aa69bfa2b483386443ede7a92a45a/system@541a59d9a68342d3980d86e565a48c7d-00000000000b2fac-00056e827d3375dd.journal
File ID: 61c16e8acc364d97b8c0c7cc2afcb6ec
Machine ID: bb5aa69bfa2b483386443ede7a92a45a
Boot ID: e89507fa3f814b8a96f9d70914c120b0
Sequential Number ID: 541a59d9a68342d3980d86e565a48c7d
State: ARCHIVED
Compatible Flags:
Incompatible Flags: COMPRESSED-XZ
Header size: 240
Arena size: 102956816
Data Hash Table Size: 178744
Field Hash Table Size: 333
Rotate Suggested: no
Head Sequential Number: 733100
Tail Sequential Number: 835386
Head Realtime Timestamp: wed 2018-06-13 09:20:08 UTC
Tail Realtime Timestamp: fri 2018-06-15 20:18:10 UTC
Tail Monotonic Timestamp: 3w 3h 47min 46.827s
Objects: 264298
Entry Objects: 102287
Data Objects: 134043
Data Hash Table Fill: 75.0%
Field Objects: 33
Field Hash Table Fill: 9.9%
Tag Objects: 0
Entry Array Objects: 27933
Disk usage: 98.1M

File Path: /run/log/journal/bb5aa69bfa2b483386443ede7a92a45a/system@541a59d9a68342d3980d86e565a48c7d-000000000009966a-00056e4cbbb6ebfc.journal
File ID: c90df7bdc4654ab7a4ac955c74a779a3
Machine ID: bb5aa69bfa2b483386443ede7a92a45a
Boot ID: e89507fa3f814b8a96f9d70914c120b0
Sequential Number ID: 541a59d9a68342d3980d86e565a48c7d
State: ARCHIVED
Compatible Flags:
Incompatible Flags: COMPRESSED-XZ
Header size: 240
Arena size: 102956816
Data Hash Table Size: 178744
Field Hash Table Size: 333
Rotate Suggested: no
Head Sequential Number: 628330
Tail Sequential Number: 733099
Head Realtime Timestamp: sun 2018-06-10 17:12:09 UTC
Tail Realtime Timestamp: wed 2018-06-13 09:19:10 UTC
Tail Monotonic Timestamp: 2w 4d 16h 48min 46.993s
Objects: 263295
Entry Objects: 104770
Data Objects: 131816
Data Hash Table Fill: 73.7%
Field Objects: 37
Field Hash Table Fill: 11.1%
Tag Objects: 0
Entry Array Objects: 26670
Disk usage: 98.1M

File Path: /run/log/journal/bb5aa69bfa2b483386443ede7a92a45a/system@541a59d9a68342d3980d86e565a48c7d-000000000007fd20-00056e16f3135f30.journal
File ID: 32915d651dd84728b9e4a33a97b480cf
Machine ID: bb5aa69bfa2b483386443ede7a92a45a
Boot ID: e89507fa3f814b8a96f9d70914c120b0
Sequential Number ID: 541a59d9a68342d3980d86e565a48c7d
State: ARCHIVED
Compatible Flags:
Incompatible Flags: COMPRESSED-XZ
Header size: 240
Arena size: 102956816
Data Hash Table Size: 178744
Field Hash Table Size: 333
Rotate Suggested: no
Head Sequential Number: 523552
Tail Sequential Number: 628329
Head Realtime Timestamp: fri 2018-06-08 01:02:10 UTC
Tail Realtime Timestamp: sun 2018-06-10 17:12:09 UTC
Tail Monotonic Timestamp: 2w 2d 41min 46.200s
Objects: 263287
Entry Objects: 104778
Data Objects: 131696
Data Hash Table Fill: 73.7%
Field Objects: 36
Field Hash Table Fill: 10.8%
Tag Objects: 0
Entry Array Objects: 26775
Disk usage: 98.1M
[root@srv0 ~]#

* Deleting logs

[root@srv0 ~]# journalctl --vacuum-size=128M
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a33241-000570199db5be3c.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a34463-0005701a0b24d0fc.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a3568c-0005701a79e388c7.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a36899-0005701ae529f439.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a37ac8-0005701b50781f62.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a38cde-0005701bbd3b60c3.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a39ef6-0005701c28dcf40e.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a3b0f1-0005701c9416ee94.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a3c309-0005701cffe19015.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a3d521-0005701d6c79b9cb.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a3e748-0005701dd7c60e79.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a3f92d-0005701e48413eb8.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a40b0a-0005701ec83a0c74.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a41ce2-0005701f3fdcadc6.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a42f04-0005701fabfefbca.journal (8.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a4410e-000570201757a256.journal (120.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a60a8f-0005702ac025bb0c.journal (120.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a7d254-000570359f2bd7eb.journal (120.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000a99a34-000570405b5e812e.journal (120.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000ab639b-0005704b05553c7d.journal (120.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000ad2c3e-00057055cb403fae.journal (120.0M).
Deleted archived journal /var/log/journal/9be717e698354ec481abb641cf4085c3/system@79d0440cc1174c2db132b707a0567bcb-0000000000aef5c8-000570607686fd78.journal (120.0M).
Vacuuming done, freed 960.0M of archived journals from /var/log/journal/9be717e698354ec481abb641cf4085c3.