rsync server under CentOS 8 with SELinux enabled

Here is a quick and useful tip on how to run an rsync daemon under CentOS 8 with SELinux in Enforcing mode.
There are three basic steps:

  1. rsync daemon installation and configuration.
  2. firewall configuration.
  3. SELinux configuration.

STEP 1) rsync daemon installation and configuration.

Under CentOS 8, the rsync daemon files are shipped in a separate RPM package, rsync-daemon (more on the subject – rsync daemon in CentOS 8):

[root@srv ~]# dnf install -y rsync-daemon
Last metadata expiration check: 2:45:48 ago on Thu Apr  7 07:40:42 2022.
Dependencies resolved.
==============================================================================================================
 Package                     Architecture          Version                        Repository             Size
==============================================================================================================
Installing:
 rsync-daemon                noarch                3.1.3-14.el8                   baseos                 43 k

Transaction Summary
==============================================================================================================
Install  1 Package

Total download size: 43 k
Installed size: 17 k
Downloading Packages:
rsync-daemon-3.1.3-14.el8.noarch.rpm                                           98 kB/s |  43 kB     00:00    
--------------------------------------------------------------------------------------------------------------
Total                                                                          81 kB/s |  43 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                      1/1 
  Installing       : rsync-daemon-3.1.3-14.el8.noarch                                                     1/1 
  Running scriptlet: rsync-daemon-3.1.3-14.el8.noarch                                                     1/1 
  Verifying        : rsync-daemon-3.1.3-14.el8.noarch                                                     1/1 

Installed:
  rsync-daemon-3.1.3-14.el8.noarch                                                                            

Complete!
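
The firewall and SELinux steps are covered in detail further in the article. As a minimal sketch, they boil down to something like the following commands (rsyncd is a stock firewalld service definition for port 873/tcp; the SELinux boolean here is just one example – the booleans needed depend on which directories are shared):

firewall-cmd --permanent --add-service=rsyncd
firewall-cmd --reload
setsebool -P rsync_export_all_ro on
systemctl enable --now rsyncd.service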

Keep on reading!

QEMU full virtualization – CPU emulations (enable/disable CPU flags/instruction sets) of QEMU 6.2.0

This article is an updated version of the old QEMU article about the CPU flags available in version 2.0.0 – QEMU full virtualization – CPU emulations (enable/disable CPU flags/instruction sets) of QEMU 2.0.0.
The latest version of QEMU is 6.2.0 and it offers way more CPU flags and features! QEMU can deliver nearly native full virtualization. Here are some important tips to consider for the guest CPU when using QEMU directly (without any virtualization manager like virt-manager, libvirt and so on).

TIP 1) Choose your host CPU emulation

You can see what options are available for host emulation with:

root@srv ~ # qemu-system-x86_64 -cpu help
Available CPUs:
x86 486                   (alias configured by machine type)                        
x86 486-v1                                                                          
x86 Broadwell             (alias configured by machine type)                        
x86 Broadwell-IBRS        (alias of Broadwell-v3)                                   
x86 Broadwell-noTSX       (alias of Broadwell-v2)                                   
x86 Broadwell-noTSX-IBRS  (alias of Broadwell-v4)                                   
x86 Broadwell-v1          Intel Core Processor (Broadwell)                          
x86 Broadwell-v2          Intel Core Processor (Broadwell, no TSX)                  
x86 Broadwell-v3          Intel Core Processor (Broadwell, IBRS)                    
x86 Broadwell-v4          Intel Core Processor (Broadwell, no TSX, IBRS)            
x86 Cascadelake-Server    (alias configured by machine type)                        
x86 Cascadelake-Server-noTSX  (alias of Cascadelake-Server-v3)                          
x86 Cascadelake-Server-v1  Intel Xeon Processor (Cascadelake)                        
x86 Cascadelake-Server-v2  Intel Xeon Processor (Cascadelake) [ARCH_CAPABILITIES]    
x86 Cascadelake-Server-v3  Intel Xeon Processor (Cascadelake) [ARCH_CAPABILITIES, no TSX]
x86 Cascadelake-Server-v4  Intel Xeon Processor (Cascadelake) [ARCH_CAPABILITIES, no TSX]
x86 Conroe                (alias configured by machine type)                        
x86 Conroe-v1             Intel Celeron_4x0 (Conroe/Merom Class Core 2)             
x86 Cooperlake            (alias configured by machine type)                        
x86 Cooperlake-v1         Intel Xeon Processor (Cooperlake)                         
x86 Denverton             (alias configured by machine type)                        
x86 Denverton-v1          Intel Atom Processor (Denverton)                          
x86 Denverton-v2          Intel Atom Processor (Denverton) [no MPX, no MONITOR]     
x86 Dhyana                (alias configured by machine type)                        
x86 Dhyana-v1             Hygon Dhyana Processor                                    
x86 EPYC                  (alias configured by machine type)                        
x86 EPYC-IBPB             (alias of EPYC-v2)                                        
x86 EPYC-Milan            (alias configured by machine type)                        
x86 EPYC-Milan-v1         AMD EPYC-Milan Processor                                  
x86 EPYC-Rome             (alias configured by machine type)                        
x86 EPYC-Rome-v1          AMD EPYC-Rome Processor                                   
x86 EPYC-Rome-v2          AMD EPYC-Rome Processor                                   
x86 EPYC-v1               AMD EPYC Processor                                        
x86 EPYC-v2               AMD EPYC Processor (with IBPB)                            
x86 EPYC-v3               AMD EPYC Processor                                        
x86 Haswell               (alias configured by machine type)                        
x86 Haswell-IBRS          (alias of Haswell-v3)                                     
x86 Haswell-noTSX         (alias of Haswell-v2)                                     
x86 Haswell-noTSX-IBRS    (alias of Haswell-v4)                                     
x86 Haswell-v1            Intel Core Processor (Haswell)                            
x86 Haswell-v2            Intel Core Processor (Haswell, no TSX)                    
x86 Haswell-v3            Intel Core Processor (Haswell, IBRS)                      
x86 Haswell-v4            Intel Core Processor (Haswell, no TSX, IBRS)              
x86 Icelake-Client        (alias configured by machine type)                        
x86 Icelake-Client-noTSX  (alias of Icelake-Client-v2)                              
x86 Icelake-Client-v1     Intel Core Processor (Icelake) [deprecated]               
x86 Icelake-Client-v2     Intel Core Processor (Icelake) [no TSX, deprecated]       
x86 Icelake-Server        (alias configured by machine type)                        
x86 Icelake-Server-noTSX  (alias of Icelake-Server-v2)                              
x86 Icelake-Server-v1     Intel Xeon Processor (Icelake)                            
x86 Icelake-Server-v2     Intel Xeon Processor (Icelake) [no TSX]                   
x86 Icelake-Server-v3     Intel Xeon Processor (Icelake)                            
x86 Icelake-Server-v4     Intel Xeon Processor (Icelake)                            
x86 IvyBridge             (alias configured by machine type)                        
x86 IvyBridge-IBRS        (alias of IvyBridge-v2)                                   
x86 IvyBridge-v1          Intel Xeon E3-12xx v2 (Ivy Bridge)                        
x86 IvyBridge-v2          Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS)                  
x86 KnightsMill           (alias configured by machine type)                        
x86 KnightsMill-v1        Intel Xeon Phi Processor (Knights Mill)                   
x86 Nehalem               (alias configured by machine type)                        
x86 Nehalem-IBRS          (alias of Nehalem-v2)                                     
x86 Nehalem-v1            Intel Core i7 9xx (Nehalem Class Core i7)                 
x86 Nehalem-v2            Intel Core i7 9xx (Nehalem Core i7, IBRS update)          
x86 Opteron_G1            (alias configured by machine type)                        
x86 Opteron_G1-v1         AMD Opteron 240 (Gen 1 Class Opteron)                     
x86 Opteron_G2            (alias configured by machine type)                        
x86 Opteron_G2-v1         AMD Opteron 22xx (Gen 2 Class Opteron)                    
x86 Opteron_G3            (alias configured by machine type)                        
x86 Opteron_G3-v1         AMD Opteron 23xx (Gen 3 Class Opteron)                    
x86 Opteron_G4            (alias configured by machine type)                        
x86 Opteron_G4-v1         AMD Opteron 62xx class CPU                                
x86 Opteron_G5            (alias configured by machine type)                        
x86 Opteron_G5-v1         AMD Opteron 63xx class CPU                                
x86 Penryn                (alias configured by machine type)                        
x86 Penryn-v1             Intel Core 2 Duo P9xxx (Penryn Class Core 2)              
x86 SandyBridge           (alias configured by machine type)                        
x86 SandyBridge-IBRS      (alias of SandyBridge-v2)                                 
x86 SandyBridge-v1        Intel Xeon E312xx (Sandy Bridge)                          
x86 SandyBridge-v2        Intel Xeon E312xx (Sandy Bridge, IBRS update)             
x86 Skylake-Client        (alias configured by machine type)                        
x86 Skylake-Client-IBRS   (alias of Skylake-Client-v2)                              
x86 Skylake-Client-noTSX-IBRS  (alias of Skylake-Client-v3)                              
x86 Skylake-Client-v1     Intel Core Processor (Skylake)                            
x86 Skylake-Client-v2     Intel Core Processor (Skylake, IBRS)                      
x86 Skylake-Client-v3     Intel Core Processor (Skylake, IBRS, no TSX)              
x86 Skylake-Server        (alias configured by machine type)                        
x86 Skylake-Server-IBRS   (alias of Skylake-Server-v2)                              
x86 Skylake-Server-noTSX-IBRS  (alias of Skylake-Server-v3)                              
x86 Skylake-Server-v1     Intel Xeon Processor (Skylake)                            
x86 Skylake-Server-v2     Intel Xeon Processor (Skylake, IBRS)                      
x86 Skylake-Server-v3     Intel Xeon Processor (Skylake, IBRS, no TSX)              
x86 Skylake-Server-v4     Intel Xeon Processor (Skylake, IBRS, no TSX)              
x86 Snowridge             (alias configured by machine type)                        
x86 Snowridge-v1          Intel Atom Processor (SnowRidge)                          
x86 Snowridge-v2          Intel Atom Processor (Snowridge, no MPX)                  
x86 Westmere              (alias configured by machine type)                        
x86 Westmere-IBRS         (alias of Westmere-v2)                                    
x86 Westmere-v1           Westmere E56xx/L56xx/X56xx (Nehalem-C)                    
x86 Westmere-v2           Westmere E56xx/L56xx/X56xx (IBRS update)                  
x86 athlon                (alias configured by machine type)                        
x86 athlon-v1             QEMU Virtual CPU version 2.5+                             
x86 core2duo              (alias configured by machine type)                        
x86 core2duo-v1           Intel(R) Core(TM)2 Duo CPU     T7700  @ 2.40GHz           
x86 coreduo               (alias configured by machine type)                        
x86 coreduo-v1            Genuine Intel(R) CPU           T2600  @ 2.16GHz           
x86 kvm32                 (alias configured by machine type)                        
x86 kvm32-v1              Common 32-bit KVM processor                               
x86 kvm64                 (alias configured by machine type)                        
x86 kvm64-v1              Common KVM processor                                      
x86 n270                  (alias configured by machine type)                        
x86 n270-v1               Intel(R) Atom(TM) CPU N270   @ 1.60GHz                    
x86 pentium               (alias configured by machine type)                        
x86 pentium-v1                                                                      
x86 pentium2              (alias configured by machine type)                        
x86 pentium2-v1                                                                     
x86 pentium3              (alias configured by machine type)                        
x86 pentium3-v1                                                                     
x86 phenom                (alias configured by machine type)                        
x86 phenom-v1             AMD Phenom(tm) 9550 Quad-Core Processor                   
x86 qemu32                (alias configured by machine type)                        
x86 qemu32-v1             QEMU Virtual CPU version 2.5+                             
x86 qemu64                (alias configured by machine type)                        
x86 qemu64-v1             QEMU Virtual CPU version 2.5+                             
x86 base                  base CPU model type with no features enabled              
x86 host                  KVM processor with all supported host features            
x86 max                   Enables all features supported by the accelerator in the current host

Recognized CPUID flags:
  3dnow 3dnowext 3dnowprefetch abm ace2 ace2-en acpi adx aes amd-no-ssb
  amd-ssbd amd-stibp apic arat arch-capabilities avic avx avx2
  avx512-4fmaps avx512-4vnniw avx512-bf16 avx512-fp16 avx512-vp2intersect
  avx512-vpopcntdq avx512bitalg avx512bw avx512cd avx512dq avx512er avx512f
  avx512ifma avx512pf avx512vbmi avx512vbmi2 avx512vl avx512vnni bmi1 bmi2
  bus-lock-detect cid cldemote clflush clflushopt clwb clzero cmov
  cmp-legacy core-capability cr8legacy cx16 cx8 dca de decodeassists ds
  ds-cpl dtes64 erms est extapic f16c flushbyasid fma fma4 fpu fsgsbase
  fsrm full-width-write fxsr fxsr-opt gfni hle ht hypervisor ia64 ibpb ibrs
  ibrs-all ibs intel-pt intel-pt-lip invpcid invtsc kvm-asyncpf
  kvm-asyncpf-int kvm-hint-dedicated kvm-mmu kvm-msi-ext-dest-id
  kvm-nopiodelay kvm-poll-control kvm-pv-eoi kvm-pv-ipi kvm-pv-sched-yield
  kvm-pv-tlb-flush kvm-pv-unhalt kvm-steal-time kvmclock kvmclock
  kvmclock-stable-bit la57 lahf-lm lbrv lm lwp mca mce md-clear mds-no
  misalignsse mmx mmxext monitor movbe movdir64b movdiri mpx msr mtrr
  nodeid-msr npt nrip-save nx osvw pae pat pause-filter pbe pcid pclmulqdq
  pcommit pdcm pdpe1gb perfctr-core perfctr-nb pfthreshold pge phe phe-en
  pks pku pmm pmm-en pn pni popcnt pschange-mc-no pse pse36 rdctl-no rdpid
  rdrand rdseed rdtscp rsba rtm sep serialize sha-ni skinit
  skip-l1dfl-vmentry smap smep smx spec-ctrl split-lock-detect ss ssb-no
  ssbd sse sse2 sse4.1 sse4.2 sse4a ssse3 stibp svm svm-lock svme-addr-chk
  syscall taa-no tbm tce tm tm2 topoext tsc tsc-adjust tsc-deadline
  tsc-scale tsx-ctrl tsx-ldtrk umip v-vmsave-vmload vaes vgif virt-ssbd
  vmcb-clean vme vmx vmx-activity-hlt vmx-activity-shutdown
  vmx-activity-wait-sipi vmx-apicv-register vmx-apicv-vid vmx-apicv-x2apic
  vmx-apicv-xapic vmx-cr3-load-noexit vmx-cr3-store-noexit
  vmx-cr8-load-exit vmx-cr8-store-exit vmx-desc-exit vmx-encls-exit
  vmx-entry-ia32e-mode vmx-entry-load-bndcfgs vmx-entry-load-efer
  vmx-entry-load-pat vmx-entry-load-perf-global-ctrl vmx-entry-load-pkrs
  vmx-entry-load-rtit-ctl vmx-entry-noload-debugctl vmx-ept vmx-ept-1gb
  vmx-ept-2mb vmx-ept-advanced-exitinfo vmx-ept-execonly vmx-eptad
  vmx-eptp-switching vmx-exit-ack-intr vmx-exit-clear-bndcfgs
  vmx-exit-clear-rtit-ctl vmx-exit-load-efer vmx-exit-load-pat
  vmx-exit-load-perf-global-ctrl vmx-exit-load-pkrs
  vmx-exit-nosave-debugctl vmx-exit-save-efer vmx-exit-save-pat
  vmx-exit-save-preemption-timer vmx-flexpriority vmx-hlt-exit vmx-ins-outs
  vmx-intr-exit vmx-invept vmx-invept-all-context vmx-invept-single-context
  vmx-invept-single-context vmx-invept-single-context-noglobals
  vmx-invlpg-exit vmx-invpcid-exit vmx-invvpid vmx-invvpid-all-context
  vmx-invvpid-single-addr vmx-io-bitmap vmx-io-exit vmx-monitor-exit
  vmx-movdr-exit vmx-msr-bitmap vmx-mtf vmx-mwait-exit vmx-nmi-exit
  vmx-page-walk-4 vmx-page-walk-5 vmx-pause-exit vmx-ple vmx-pml
  vmx-posted-intr vmx-preemption-timer vmx-rdpmc-exit vmx-rdrand-exit
  vmx-rdseed-exit vmx-rdtsc-exit vmx-rdtscp-exit vmx-secondary-ctls
  vmx-shadow-vmcs vmx-store-lma vmx-true-ctls vmx-tsc-offset
  vmx-unrestricted-guest vmx-vintr-pending vmx-vmfunc
  vmx-vmwrite-vmexit-fields vmx-vnmi vmx-vnmi-pending vmx-vpid
  vmx-wbinvd-exit vmx-xsaves vmx-zero-len-inject vpclmulqdq waitpkg
  wbnoinvd wdt x2apic xcrypt xcrypt-en xgetbv1 xop xsave xsavec xsaveerptr
  xsaveopt xsaves xstore xstore-en xtpr

The number of supported flags has grown enormously compared to the old versions of QEMU and, in fact, it covers almost all available CPU flags. The supported CPU models are also several times more numerous than before! The list above means the virtual guest machine can use any one of these models, and the guest operating system will get all the flags that model supports. In fact, the guest virtual system will report the selected CPU model from the list above to the guest OS.
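For example, here is a minimal sketch of starting a guest with one of the models above and toggling individual flags with the feature=on|off syntax (the disk image name guest.img and the memory/CPU sizes are just assumptions):

qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
  -cpu Skylake-Client-v1,hle=off,rtm=off \
  -drive file=guest.img,format=qcow2

Disabling the hle and rtm flags by hand mirrors what the various noTSX model variants in the list do.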
Keep on reading!

Minimal network installation of Fedora 35 Server

This tutorial will show you the simple steps of installing a modern Linux distribution – Fedora 35 Server edition. The Fedora line offers many more bleeding-edge Linux technologies than the more enterprise-oriented CentOS of the same RPM Linux family.

In fact, if the user needs a server with the latest stable Linux software, Fedora Server is the right and easy choice!

For example, Fedora 35 Server ships with, and updates to, the latest stable Linux software:

  • Linux kernel: 5.16
  • Python: 3.10.2
  • GLibc: 2.34
  • OpenSSL: 1.1.1l
  • systemd: 249.9

Of course, one can expect the latest versions of GCC (11.2.x), PHP (8.0.16), Go (1.16.14), MySQL Server (8.0.27), PostgreSQL (13.4), Nginx (1.20.2), Apache (2.4.52) and so on. Almost all of them are the latest stable versions on their upstream sites.
Just be careful: the Fedora life cycle is 13 months from the release to the EOL (End of Life)! Of course, a dist-upgrade is supported and, indeed, it has been flawless for years!
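For example, a dist-upgrade to the next release is done with the dnf system-upgrade plugin – a sketch, where 36 stands for the target release version:

dnf install dnf-plugin-system-upgrade
dnf system-upgrade download --releasever=36
dnf system-upgrade reboot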

We used the following ISO for the installation process from https://getfedora.org/en/server/download/:

https://download.fedoraproject.org/pub/fedora/linux/releases/35/Server/x86_64/iso/Fedora-Server-netinst-x86_64-35-1.2.iso

The easiest way is just to download the image and burn it to a DVD disk (a bootable USB flash drive can also be created from this ISO) and then follow the installation below:
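For example, a bootable USB flash drive can be created under Linux with dd – a sketch, assuming the USB stick shows up as /dev/sdX (check with lsblk; all data on the stick will be destroyed):

dd if=Fedora-Server-netinst-x86_64-35-1.2.iso of=/dev/sdX bs=8M status=progress oflag=direct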

SCREENSHOT 1) If you booted from the DVD you would get this first screen – select “Install Fedora 35” and hit Enter


Keep on reading!

Change found sources for kernel version when packages need the kernel sources to compile

Multiple Gentoo packages may need kernel sources to compile. There are packages which are external kernel modules, such as virtualbox-modules, video drivers, wifi drivers, and more. All these packages expect the sources of the currently loaded kernel to be present and will use them when compiling the external kernel module. But sometimes the proper kernel sources are missing – the ones needed to compile the kernel module in such a way that it can be loaded into the currently running kernel.

This article is valid not only for the Gentoo Linux distribution but for any Linux distribution and kernel sources. So, if the user needs properly configured kernel sources for the currently loaded kernel, this is one way to do it right.

Here is an example: the kernel was updated, but no sources were kept, and then VirtualBox needs to be updated to a newer version. With the kernel sources of the currently loaded kernel missing, updating VirtualBox will cause VirtualBox to stop working!

root@srv ~ # uname -a
Linux srv 5.15.5-gentoo #2 SMP Tue Nov 30 16:08:49 EET 2021 x86_64 Intel(R) Core(TM) i7-5500U CPU @ 2.40GHz GenuineIntel GNU/Linux
root@srv ~ # emerge -va app-emulation/virtualbox-modules

These are the packages that would be merged, in order:

[ebuild     U  ] app-emulation/virtualbox-modules-6.1.32:0/6.1::gentoo [6.1.26:0/6.1::gentoo] USE="dist-kernel -pax-kernel" 660 KiB

Total: 1 packages (1 upgrades), Size of downloads: 0 KiB

Would you like to merge these packages? [Yes/No] yes

>>> Verifying ebuild manifests

>>> Running pre-merge checks for app-emulation/virtualbox-6.1.32-r1

>>> Emerging (1 of 1) app-emulation/virtualbox-modules-6.1.32::gentoo
 * Fetching files in the background.
 * To view fetch progress, run in another terminal:
 * tail -f /var/log/emerge-fetch.log
 * vbox-kernel-module-src-6.1.32.tar.xz BLAKE2B SHA512 size ;-) ...                                                                                                                   [ ok ]
 * Determining the location of the kernel source code
 * Found kernel source directory:
 *     /usr/src/linux
 * Found sources for kernel version:
 *     5.14.2-gentoo-x86_64-genkernel-NEW2
 * Checking for suitable kernel configuration options...                                                                                                                              [ ok ]
>>> Unpacking source...
>>> Unpacking vbox-kernel-module-src-6.1.32.tar.xz to /var/tmp/portage/app-emulation/virtualbox-modules-6.1.32/work
>>> Source unpacked in /var/tmp/portage/app-emulation/virtualbox-modules-6.1.32/work
>>> Preparing source in /var/tmp/portage/app-emulation/virtualbox-modules-6.1.32/work ...
>>> Source prepared.
>>> Configuring source in /var/tmp/portage/app-emulation/virtualbox-modules-6.1.32/work ...
>>> Source configured.
>>> Compiling source in /var/tmp/portage/app-emulation/virtualbox-modules-6.1.32/work ...

Here is the problem: the currently loaded kernel is version 5.15.5-gentoo, but the emerge system finds only the sources for 5.14.2-gentoo-x86_64-genkernel-NEW2, which it will use to produce modules for 5.14.2-gentoo-x86_64-genkernel-NEW2. Obviously, modules compiled against the kernel sources of version 5.14.2-gentoo-x86_64-genkernel-NEW2 cannot be loaded into the currently loaded kernel, version 5.15.5-gentoo.
Here is how to fix this:

  1. Get the kernel sources for 5.15.5-gentoo in /usr/src/linux
  2. Save the currently loaded kernel config in /usr/src/linux/.config
  3. Load the configuration and prepare the kernel sources. No need to compile the kernel sources.

STEP 1) Get the kernel sources for 5.15.5-gentoo

emerge -v =gentoo-sources-5.15.5
rm -f /usr/src/linux
ln -s /usr/src/linux-5.15.5-gentoo /usr/src/linux

These commands will install the needed kernel version and create the symlink to the kernel sources.
Of course, change the kernel version to the proper one if needed.
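Alternatively, the /usr/src/linux symlink can be managed with Gentoo's eselect tool instead of the manual rm/ln commands:

eselect kernel list
eselect kernel set linux-5.15.5-gentoo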

STEP 2) Save the currently loaded kernel config in /usr/src/linux/.config

zcat /proc/config.gz > /usr/src/linux/.config

If the /proc/config.gz is missing, copy the configuration from the /boot for the currently loaded kernel:

cat /boot/config-5.15.5-gentoo > /usr/src/linux-5.15.5-gentoo/.config

STEP 3) Load the configuration and prepare the kernel sources.

No need to compile the whole kernel source tree. Just two commands configure and prepare the kernel sources:

cd /usr/src/linux-5.15.5-gentoo
make oldconfig
make prepare

The commands take a short while to complete. Here is the output of the last two commands:

root@srv1 ~ # cd /usr/src/linux-5.15.5-gentoo
root@srv1 linux # make oldconfig
  HOSTCC  scripts/basic/fixdep
  HOSTCC  scripts/kconfig/conf.o
  HOSTCC  scripts/kconfig/confdata.o
  HOSTCC  scripts/kconfig/expr.o
  HOSTCC  scripts/kconfig/lexer.lex.o
  HOSTCC  scripts/kconfig/menu.o
  HOSTCC  scripts/kconfig/parser.tab.o
  HOSTCC  scripts/kconfig/preprocess.o
  HOSTCC  scripts/kconfig/symbol.o
  HOSTCC  scripts/kconfig/util.o
  HOSTLD  scripts/kconfig/conf
#
# configuration written to .config
#
root@srv1 linux # make prepare
  SYNC    include/config/auto.conf.cmd
  HOSTCC  arch/x86/tools/relocs_32.o
  HOSTCC  arch/x86/tools/relocs_64.o
  HOSTCC  arch/x86/tools/relocs_common.o
  HOSTLD  arch/x86/tools/relocs
  HOSTCC  scripts/selinux/genheaders/genheaders
  HOSTCC  scripts/selinux/mdp/mdp
  HOSTCC  scripts/bin2c
  HOSTCC  scripts/kallsyms
  HOSTCC  scripts/sorttable
  HOSTCC  scripts/asn1_compiler
  HOSTCC  scripts/extract-cert
  UPD     include/config/kernel.release
  UPD     include/generated/utsrelease.h
  CC      scripts/mod/empty.o
  HOSTCC  scripts/mod/mk_elfconfig
  MKELF   scripts/mod/elfconfig.h
  HOSTCC  scripts/mod/modpost.o
  CC      scripts/mod/devicetable-offsets.s
  HOSTCC  scripts/mod/file2alias.o
  HOSTCC  scripts/mod/sumversion.o
  HOSTLD  scripts/mod/modpost
  CC      kernel/bounds.s
  UPD     include/generated/bounds.h
  CC      arch/x86/kernel/asm-offsets.s
  UPD     include/generated/asm-offsets.h
  CALL    scripts/checksyscalls.sh
  CALL    scripts/atomic/check-atomics.sh
  DESCEND objtool
  HOSTCC  /usr/src/linux-5.15.5-gentoo/tools/objtool/fixdep.o
  HOSTLD  /usr/src/linux-5.15.5-gentoo/tools/objtool/fixdep-in.o
  LINK    /usr/src/linux-5.15.5-gentoo/tools/objtool/fixdep
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/exec-cmd.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/help.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/pager.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/parse-options.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/run-command.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/sigchain.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/subcmd-config.o
  LD      /usr/src/linux-5.15.5-gentoo/tools/objtool/libsubcmd-in.o
  AR      /usr/src/linux-5.15.5-gentoo/tools/objtool/libsubcmd.a
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/arch/x86/special.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/arch/x86/decode.o
  LD      /usr/src/linux-5.15.5-gentoo/tools/objtool/arch/x86/objtool-in.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/weak.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/check.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/special.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/orc_gen.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/orc_dump.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/builtin-check.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/builtin-orc.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/elf.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/objtool.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/libstring.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/libctype.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/str_error_r.o
  CC      /usr/src/linux-5.15.5-gentoo/tools/objtool/librbtree.o
  LD      /usr/src/linux-5.15.5-gentoo/tools/objtool/objtool-in.o
  LINK    /usr/src/linux-5.15.5-gentoo/tools/objtool/objtool

And from now on, whenever kernel sources are needed to compile modules or libraries against, the proper sources of the currently loaded kernel will be used.
Here is the Gentoo emerge command from the beginning of this article, but this time with the properly configured kernel sources. The VirtualBox modules are compiled against the loaded kernel, so loading them is not an issue anymore!

root@srv ~ # uname -a
Linux srv 5.15.5-gentoo #2 SMP Tue Nov 30 16:08:49 EET 2021 x86_64 Intel(R) Core(TM) i7-5500U CPU @ 2.40GHz GenuineIntel GNU/Linux
root@srv ~ # emerge -va app-emulation/virtualbox-modules

These are the packages that would be merged, in order:

[ebuild     U  ] app-emulation/virtualbox-modules-6.1.32:0/6.1::gentoo [6.1.26:0/6.1::gentoo] USE="dist-kernel -pax-kernel" 660 KiB

Total: 1 packages (1 upgrades), Size of downloads: 0 KiB

Would you like to merge these packages? [Yes/No] yes

>>> Verifying ebuild manifests

>>> Running pre-merge checks for app-emulation/virtualbox-6.1.32-r1

>>> Emerging (1 of 1) app-emulation/virtualbox-modules-6.1.32::gentoo
 * Fetching files in the background.
 * To view fetch progress, run in another terminal:
 * tail -f /var/log/emerge-fetch.log
 * vbox-kernel-module-src-6.1.32.tar.xz BLAKE2B SHA512 size ;-) ...                                                                                                                   [ ok ]
 * Determining the location of the kernel source code
 * Found kernel source directory:
 *     /usr/src/linux
 * Found sources for kernel version:
 *     5.15.5-gentoo-gentoo
 * Checking for suitable kernel configuration options...                                                                                                                              [ ok ]
>>> Unpacking source...
>>> Unpacking vbox-kernel-module-src-6.1.32.tar.xz to /var/tmp/portage/app-emulation/virtualbox-modules-6.1.32/work
>>> Source unpacked in /var/tmp/portage/app-emulation/virtualbox-modules-6.1.32/work
>>> Preparing source in /var/tmp/portage/app-emulation/virtualbox-modules-6.1.32/work ...
>>> Source prepared.
>>> Configuring source in /var/tmp/portage/app-emulation/virtualbox-modules-6.1.32/work ...
>>> Source configured.
>>> Compiling source in /var/tmp/portage/app-emulation/virtualbox-modules-6.1.32/work ...
....
....
root@srv ~ # modprobe vboxdrv
Feb 15 14:15:10 www kernel: vboxdrv: loading out-of-tree module taints kernel.
Feb 15 14:15:10 www kernel: vboxdrv: Found 4 processor cores
Feb 15 14:15:10 www kernel: vboxdrv: TSC mode is Invariant, tentative frequency 2394461773 Hz
Feb 15 14:15:10 www kernel: vboxdrv: Successfully loaded version 6.1.32 r149290 (interface 0x00320000)
Feb 15 14:15:10 www kernel: VBoxNetFlt: Successfully started.

VirtualBox machine boots from USB drive

First, at present, booting directly from USB is impossible with VirtualBox! But there is a really easy workaround using VMDK, which is just a container file describing physical devices (or files) to use in virtual machines like VirtualBox or VMware.
Because the USB drive is just another physical device attached to the machine, this article will help to attach the USB drive to a virtual machine – Add a raw disk to a virtualbox virtual machine. Then boot from the newly attached disk.

Here is the quick tip for the USB drive:

  1. Attach the USB drive and find its device path. Under Windows, it would be something like “\\.\PhysicalDrive3” (open “Disk Management” if not sure) and under Linux it would be /dev/sdc, for example. This is the third disk device (including USB disk devices) connected to the machine.
  2. Make the VMDK from the USB physical device.
    Under Windows:

    VBoxManage.exe internalcommands createrawvmdk -filename "c:\Users\homer\.VirtualBox\windows11pro-install-usb.vmdk" -rawdisk \\.\PhysicalDrive3
    

    Under Linux:

    VBoxManage internalcommands createrawvmdk -filename /home/myuser/.VirtualBox/windows11pro-install-usb.vmdk -rawdisk /dev/sdc
    
  3. Attach it to the virtual machine: Settings -> Storage -> Storage Devices.

    First, a click on “Adds hard disk” would show a menu to add a new hard disk and then a click on “Add” (“Add Disk Image”) shows a file browse dialog to locate the VMDK file.

  4. Boot from this device by selecting it manually from the boot menu (press F12 during boot to open the boot menu) or set the VMDK disk to be on Port 0 in the step above.
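
Note: accessing a raw physical device needs elevated permissions – under Windows, start VirtualBox as Administrator; under Linux, the user running VirtualBox must be able to read and write the device node. On many distributions that means membership in the disk group (the group name and the user myuser here are assumptions for the sketch):

sudo usermod -aG disk myuser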

For more details (not just the commands to generate the VMDK container file) follow the above URL to the proposed article – Add a raw disk to a virtualbox virtual machine.

Install and deploy MySQL 8 InnoDB Cluster with 3 nodes under CentOS 8 and MySQL Router for HA

This article is going to show how to install a MySQL server and deploy a MySQL 8 InnoDB Cluster with three nodes behind a MySQL Router to achieve high availability with a MySQL database back-end.

In really simple words, MySQL 8.0 InnoDB Cluster is just MySQL replication on steroids – i.e., a little additional work between the servers in the group before committing the transactions. It uses the MySQL Group Replication plugin, which allows the group to operate in two different modes:

  1. a single-primary mode with automatic primary election. Only one server gets the updates.
  2. a multi-master mode – all servers accept the updates. For advanced setups.

Group Replication is bi-directional: the servers communicate with each other and use row-based replication to replicate the data. The main limitation is that only the MySQL InnoDB engine is supported, because of its transactions support. So the performance (and most features and caveats) of MySQL InnoDB is not impacted by the cluster setup and overhead, compared to MySQL in replication mode (or single-server setups) from the previous MySQL versions. Still, all read-write transactions commit only after they have been approved by the group – a verification process providing consensus between the servers. In fact, most of the features, like GTIDs and row-based replication (i.e. the different replication modes), were developed and available in older versions. The new part is handled by the Group Communication System (GCS) protocols, which provide a failure detection mechanism, a group membership service, and safe and completely ordered message delivery (more on the subject here: https://dev.mysql.com/doc/refman/8.0/en/group-replication-background.html).
In addition to the group replication, MySQL Router 8.0 provides the HA (high availability). MySQL Router is the program that redirects, fails over, and balances connections to the right server in the group. Clients may connect directly to the servers in the group, but only clients connecting through the MySQL Router get HA, because Group Replication does not have a built-in method for it. It is worth noting there could be many MySQL Routers on different servers; they do not need to communicate or synchronize anything with each other. So the router could be installed on the same server where the application is installed, or on a separate dedicated server, or on every MySQL server in the group.

Key points in this article of MySQL InnoDB Cluster deployment:

  • CentOS 8 Stream is used for the operating system.
  • SELinux tuning to allow the MySQL process to connect to the network.
  • CentOS 8 firewall tuning to unblock the traffic between the nodes.
  • Disabling the mysql DNF module to use the official MySQL repository.
  • Three MySQL 8.0.28 server nodes will be installed.
  • MySQL Shell 8.0 and the dba object in it are used to create and manage the cluster.
  • Three MySQL Routers will be installed, one on each MySQL node.
  • Each server will have the hostnames of all three servers – db-cluster-1, db-cluster-2, db-cluster-3 – in its /etc/hosts file (see the example after this list).
  • The cluster runs group replication with one primary (i.e. master) and two secondary nodes (i.e. slaves).
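
For example, the /etc/hosts file on each node may look like this (the IP addresses are assumptions – replace them with the real addresses of the three nodes):

10.10.10.11 db-cluster-1
10.10.10.12 db-cluster-2
10.10.10.13 db-cluster-3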

STEP 1) Install CentOS 8 Stream.

There is an article about installing CentOS 8 – How to do a network installation of CentOS 8 (8.0.1950) – minimal server installation – and the installation is essentially the same for CentOS 8 Stream.

STEP 2) Prepare the CentOS 8 Stream to install MySQL 8 server.

At present, the latest MySQL Community edition is 8.0.28. The preferred way to install the MySQL server is to download the RPM repository file from the MySQL web site – https://dev.mysql.com/downloads/repo/yum/
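A minimal sketch of the commands (the repository RPM URL should be verified against the download page above; mysql-shell and mysql-router are needed later for creating the cluster and for the HA routing):

dnf -y module disable mysql
dnf -y install https://repo.mysql.com/mysql80-community-release-el8.rpm
dnf -y install mysql-community-server mysql-shell mysql-router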
Keep on reading!

Debug options for LXC and lxc-start when lxc container could not start

Setting up and running an LXC container is really easy, but sometimes it is unclear why the LXC container could not start. Most of the time, there is a generic error, which says nothing about the real reason:

root@srv ~ # lxc-start -n test-lxc
lxc-start: test-lxc: lxccontainer.c: wait_on_daemonized_start: 867 Received container state "ABORTING" instead of "RUNNING"
lxc-start: test-lxc: tools/lxc_start.c: main: 306 The container failed to start
lxc-start: test-lxc: tools/lxc_start.c: main: 309 To get more details, run the container in foreground mode
lxc-start: test-lxc: tools/lxc_start.c: main: 311 Additional information can be obtained by setting the --logfile and --logpriority options

There is no specific reason why the LXC container test-lxc cannot be started and the lxc-start command failed. There is just a suggestion to use the logging options, and here is how the administrator of the box may do it by including the following lxc-start options:

-l DEBUG --logfile=test-lxc.log --logpriority=9
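
So a full debug command line may look like this (-F runs the container in the foreground, which is also handy while debugging):

lxc-start -n test-lxc -F -l DEBUG --logfile=test-lxc.log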

Here is a real-world example of an old kernel trying to run LXC 4.0.
Keep on reading!

Gentoo – bash: su: command not found – missing su flag

Upgrading multiple packages may lead to interesting results, especially if the queue has not finished yet or it fails with an error! Apparently, there are two main ways to have the basic su command installed on the system:

  1. sys-apps/shadow
  2. sys-apps/util-linux

At some point, the default inclusion of the su USE flag in the above packages changed from sys-apps/shadow to sys-apps/util-linux, which may lead to the following interesting error:

user@srv ~ $ su
bash: /bin/su: command not found

Just check which of the above packages includes the su USE flag and re-emerge it. At present, sys-apps/util-linux includes it by default and it should work without any explicit activation in a portage package use file (i.e. /etc/portage/package.use/mybase or /etc/portage/make.conf, for example).
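If the flag ever needs to be forced explicitly, a one-liner like this would do (the file name mybase is just an example):

echo "sys-apps/util-linux su" >> /etc/portage/package.use/mybase
emerge -v1 sys-apps/util-linux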

At the moment, here is the default:

user@srv ~ # emerge -vp shadow util-linux

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild   R    ] sys-apps/shadow-4.11.1:0/4::gentoo  USE="acl (audit) nls pam (selinux) (split-usr) xattr -bcrypt -cracklib -skey -su" 0 KiB
[ebuild   R    ] sys-apps/util-linux-2.37.2-r3::gentoo  USE="(audit) (caps) cramfs hardlink logger ncurses nls pam python readline (selinux) (split-usr) su suid udev (unicode) -build -cryptsetup -fdformat -kill -magic (-rtas) -slang -static-libs -systemd -test -tty-helpers" ABI_X86="32 (64) (-x32)" PYTHON_TARGETS="python3_8 -python3_9 -python3_10" 0 KiB

Total: 2 packages (2 reinstalls), Size of downloads: 0 KiB

The su USE flag is missing (note the “-su”) in sys-apps/shadow and is included in sys-apps/util-linux.

Installing single node Elasticsearch 7.16 and Kibana 7.16 behind nginx web server under CentOS 8

This article will show how to install two major pieces of software – Elasticsearch to store information and Kibana to visualize the information – under CentOS 8. Elasticsearch is ideal for storing big data such as logs from user activities or server logs – one central repository for data, which is properly structured and can be easily accessed and manipulated with various software.
Kibana is used mainly for visualizing the data stored in the Elasticsearch server and for managing the Elasticsearch service over the web.

Here is a simple example: send the web servers' logs to Elasticsearch and visualize statistical data with Kibana.

Using the RPM repository for the two pieces of software is the best option for the installation and future upgrades.

STEP 1) Install the CentOS 8.

How to install CentOS 8 could be found here – How to do a network installation of CentOS 8 (8.0.1950) – minimal server installation.
Or if a container approach is needed, there is a how-to with an LXC container – Run LXC CentOS 8 container with bridged network under CentOS 8.

STEP 2) Install the Elasticsearch.

This installation and configuration is for a single-node server setup.
First, create an RPM repository file /etc/yum.repos.d/elasticsearch.repo and fill it with the Elasticsearch repository information:

[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Then import the Elasticsearch GPG key and install the Elasticsearch software:

[root@loganalyzer ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@loganalyzer ~]# dnf install elasticsearch
Last metadata expiration check: 0:00:19 ago on 11.12.2021 (Sat) 12:43:24 UTC.
Dependencies resolved.
==========================================================================================================================================
 Package            Architecture             Version                     Repository                                Size
==========================================================================================================================================
Installing:
 elasticsearch      x86_64                   7.16.0-1                    elasticsearch                             327 M

Transaction Summary
=========================================================================================================================================
Install  1 Package

Total download size: 327 M
Installed size: 526 M
Is this ok [y/N]: y
Downloading Packages:
elasticsearch-7.16.0-x86_64.rpm                                                                                 43 MB/s | 327 MB     00:07    
------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                           43 MB/s | 327 MB     00:07     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                     1/1 
  Running scriptlet: elasticsearch-7.16.0-1.x86_64                                                                                                                                       1/1 
Creating elasticsearch group... OK
Creating elasticsearch user... OK

  Installing       : elasticsearch-7.16.0-1.x86_64                                                                                                                                       1/1 
  Running scriptlet: elasticsearch-7.16.0-1.x86_64                                                                                                                                       1/1 
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service

Created elasticsearch keystore in /etc/elasticsearch/elasticsearch.keystore

[/usr/lib/tmpfiles.d/elasticsearch.conf:1] Line references path below legacy directory /var/run/, updating /var/run/elasticsearch → /run/elasticsearch; please update the tmpfiles.d/ drop-in file accordingly.

  Verifying        : elasticsearch-7.16.0-1.x86_64                                                                                                                                       1/1 

Installed:
  elasticsearch-7.16.0-1.x86_64                                                                                                                                                              

Complete!

The configuration files are placed in /etc/elasticsearch/:
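For a single-node setup behind the nginx web server, the essential options in /etc/elasticsearch/elasticsearch.yml boil down to something like this minimal sketch (assumed values):

network.host: 127.0.0.1
http.port: 9200
discovery.type: single-node

network.host keeps Elasticsearch listening on localhost only, so the nginx web server can proxy the requests to it, and discovery.type set to single-node skips the cluster bootstrap checks.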
Keep on reading!