MPEG-DASH and ClearKey, CENC DRM encryption with Nginx, bento4 and dashjs under CentOS 8

The purpose of this article is to demonstrate a simple and plain example of ClearKey DRM encryption using a DASH stream.
Usually, ClearKey is used only for testing the encryption keys and the DRM setup, because the decryption key is transferred in plain text to the browser. In simple DRM words, the key is transferred in plain text, and the decryption is not handled by some proprietary module such as a CDM (Content Decryption Module). The CDM is a proprietary module in the browser or the player, which works like a black box when handling the decryption key. The most popular DRMs are Google’s Widevine, Apple’s FairPlay, and Microsoft’s PlayReady, all of which work through a proprietary module – a CDM (Content Decryption Module) – in the browser (or the OS and player).
All three DRMs work in basically the same way:

  • There is an (encryption) key and a keyID, whose purpose is to identify the (encryption) key.
  • The video file is encrypted with the key, and the keyID is embedded in it.
  • The client needs to have the appropriate CDM (Content Decryption Module) to decrypt the video.
  • The clients receive a license from a license server – encrypted data telling the CDM how to decrypt the video identified by the keyID. In fact, the client sends the keyID and receives the proper license (i.e. license binary data) for this keyID. That’s why the keyID is included in the encrypted video. Bear in mind, the CDM is a proprietary Content Decryption Module offered by the creator of the DRM – Google, Apple, Microsoft or another – and it lives in the browser (OS or player). All popular browsers support at least one of the proprietary DRMs.

ClearKey is like the proprietary DRM schemes, but without the CDM (Content Decryption Module).

The “org.w3.clearkey” Key System uses plain-text clear (unencrypted) key(s) to decrypt the source. No additional client-side content protection is required.

So, in general, there is no need for a license server when using ClearKey DRM.
Of course, an additional attempt to hide the plain-text key could be made with an extension to the client’s player, such as a JavaScript module. In general, this approach is perceived to be less secure, because it is much easier to debug JavaScript code on the client side. More on ClearKey: https://www.w3.org/TR/encrypted-media/#clear-key

Here are all the steps from the server till the client to use ClearKey.

STEP 1) Download and install bento4 software.

bento4 is an open-source toolkit for manipulating some of the most common video formats – MP4 and DASH/HLS/CMAF media. The download page is https://www.bento4.com/downloads/ and the Linux binary of the latest stable version is https://www.bok.net/Bento4/binaries/Bento4-SDK-1-6-0-639.x86_64-unknown-linux.zip. There is also a source code snapshot link.
Download the famous Blender video for the demonstration: https://download.blender.org/demo/movies/BBB/bbb_sunflower_1080p_30fps_normal.mp4
Download and unpack the binary Bento4-SDK-1-6-0-639.x86_64-unknown-linux.zip.
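Before going into detail, here is a rough, hedged sketch of how the Bento4 tools are typically chained for a ClearKey/CENC DASH stream – the 128-bit KID and key below are made-up hex values for illustration, and the exact mp4dash options should be verified against the Bento4 documentation:

# unpack the SDK and make the tools available (paths are assumptions)
unzip Bento4-SDK-1-6-0-639.x86_64-unknown-linux.zip
export PATH="$PWD/Bento4-SDK-1-6-0-639.x86_64-unknown-linux/bin:$PATH"

# mp4dash needs a fragmented MP4, so fragment the source video first
mp4fragment bbb_sunflower_1080p_30fps_normal.mp4 bbb_fragmented.mp4

# encrypt with CENC and generate the DASH manifest, using a hypothetical KID:KEY pair
KID=00112233445566778899aabbccddeeff
KEY=ffeeddccbbaa99887766554433221100
mp4dash --encryption-key=${KID}:${KEY} -o dash bbb_fragmented.mp4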
Keep on reading!

Install newer version of python 3.10 under CentOS 8

At present, the default version of python under CentOS 8 is Python 3.6.8, which is 6 years old. More and more Python software needs newer versions, so it is vital for a pretty stable Linux distro to have an easy way to install newer versions of programming languages like Python!
Using Conda it is really easy to manage different environments for different python versions!

Conda is an open source package management system and environment management system that runs on Windows, macOS and Linux.

More on Conda: Installing conda command line in various systems with miniconda and create a simple python environment, and all Conda tags – https://ahelpme.com/category/software/anaconda/. This article is not intended to introduce the reader to Conda, but to show how easy it is to install the newer Python 3.10 under CentOS 8 – and it is easy because of the Conda package management system!

To summarize, the purpose is to have a user with Python 3.10. The user can be an ordinary or an administrative one, or even root.
Using this method, older or newer versions of Python may be installed on the same machine (at the same time).

STEP 1) Install the latest Miniconda3

The installation is easy and for more details check out the first link above.
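For orientation, the whole procedure boils down to a handful of commands (a sketch; the environment name py310 is an arbitrary choice):

# download and run the latest Miniconda3 installer
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
sh Miniconda3-latest-Linux-x86_64.sh

# create an environment with Python 3.10 and activate it
conda create -n py310 python=3.10
conda activate py310
python --version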
Keep on reading!

How to run QEMU full virtualization with MacVTap networking using NetworkManager under CentOS 8

In addition to the previously presented article on the subject, Howto do QEMU full virtualization with MacVTap networking, this one shows how to run a QEMU virtual machine with a MacVTap device in bridge mode on the host server, configured only by using the NetworkManager CLI – nmcli.

It is worth mentioning that MacVTap is a virtual bridge, which makes the host and the guest device show up directly on the switch the host is connected to. So when using QEMU, the guest virtualized system will appear as if it is connected to the host’s switch, with one limitation – the host and the guest cannot communicate with each other. The IPs of the host won’t be reachable from the guest, so NAT (masquerade) between the host and the guest is not possible with this setup. Still, if the NAT server is on another server, or a real IP is planned for the guest, MacVTap is the right functionality to use with the QEMU guest system.

Summary

  1. Add MacVTap device in bridge mode with name macvtap0.
  2. Install QEMU.
  3. Create QEMU local disk.
  4. Run a QEMU virtual server.

STEP 1) Add MacVTap device in bridge mode with name macvtap0

[root@srv ~]# nmcli connection add type macvlan dev enp0s3 mode bridge tap yes ifname macvtap0 con-name macvtap0 ip4 0.0.0.0/24
Connection 'macvtap0' (7a5ef04c-ea98-4642-ac5d-4239f715f631) successfully added.
[root@srv ~]# nmcli con
NAME      UUID                                  TYPE      DEVICE   
enp0s3    09497bbf-da59-42b7-a72c-d69369760b36  ethernet  enp0s3   
macvtap0  7a5ef04c-ea98-4642-ac5d-4239f715f631  macvlan   macvtap0 

First, create a MacVTap device with the name macvtap0 in bridge mode over the network interface enp0s3, and a connection with the name macvtap0. The IP is set to manual mode.
More detailed information on how to create and add MacVTap device with the NetworkManager here – Create MacVTap device using NetworkManager nmcli under CentOS 8
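Looking ahead to STEP 4, running a QEMU guest over macvtap0 usually means passing the matching /dev/tapN character device to QEMU as a file descriptor. A minimal sketch, assuming a hypothetical disk image centos8.qcow2:

# reuse the macvtap0 MAC address and hand its /dev/tapN device to QEMU as fd 3
qemu-system-x86_64 -enable-kvm -m 1024 \
  -drive file=centos8.qcow2,format=qcow2 \
  -device virtio-net-pci,netdev=net0,mac="$(cat /sys/class/net/macvtap0/address)" \
  -netdev tap,id=net0,fd=3 3<>"/dev/tap$(cat /sys/class/net/macvtap0/ifindex)"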

STEP 2) Install QEMU.

Install the QEMU virtual tools under CentOS 8 Stream. At present, the QEMU version is 6.2, which is pretty new.
Keep on reading!

How to run QEMU full virtualization with bridged networking using NetworkManager under CentOS 8

In addition to the previously presented article on the subject, Howto do QEMU full virtualization with bridged networking, this one shows how to run a QEMU virtual machine with bridged networking on the host server, configured only by using the NetworkManager CLI – nmcli.

It is worth mentioning that the bridge interface presented in this article is a local bridge device for the server – no Internet addresses or real (main, Internet-connected) network cards are bound to it. So no MAC addresses of the slave bridged devices will leave the server.
If a network bridge which includes the Internet (main) server network device is needed – for example, to set real IPs in a virtual machine – there is another article on the bridge networking subject: Replace current interface configuration with a bridge device using nmcli (NetworkManager)

Summary

  1. Add bridge and TUN/TAP device.
  2. Install QEMU.
  3. Create QEMU local disk.
  4. Run a QEMU virtual server.

STEP 1) Add bridge and TUN/TAP device.

[root@srv ~]# nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.addresses "192.168.0.1/24"
Connection 'br0' (ad6878c8-1e06-4af8-a81f-1eb39e761df8) successfully added.
[root@srv ~]# nmcli connection up br0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@srv ~]# nmcli connection add type tun ifname tap0 con-name tap0 mode tap owner 0 ip4 0.0.0.0/24
Connection 'tap0' (dacee2be-a14b-4cf5-83d4-96d072a96725) successfully added.
[root@srv ~]# nmcli con add type bridge-slave ifname tap0 master br0
Connection 'bridge-slave-tap0' (66490382-b239-4eb2-ae1d-ee811e39596c) successfully added.
[root@srv ~]# nmcli con
NAME               UUID                                  TYPE      DEVICE 
System eno1        abf4c85b-57cc-4484-4fa9-b4a71689c359  ethernet  eno1   
br0                ad6878c8-1e06-4af8-a81f-1eb39e761df8  bridge    br0    
tap0               dacee2be-a14b-4cf5-83d4-96d072a96725  tun       tap0   
bridge-slave-tap0  66490382-b239-4eb2-ae1d-ee811e39596c  ethernet  -- 

First, a bridge device is added with a manual IP. If the IP is skipped, the bridge interface br0 would have DHCP enabled by default, which may not be desired.
More detailed information on how to create and add TUN/TAP device with the NetworkManager here – Create bridge and add TUN/TAP device using NetworkManager nmcli under CentOS 8
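Looking ahead to STEP 4, the tap0 device created above can be handed to the QEMU guest directly by name. A minimal sketch, assuming a hypothetical disk image centos8.qcow2:

# attach the guest NIC to the pre-created tap0 device (no ifup/ifdown helper scripts)
qemu-system-x86_64 -enable-kvm -m 1024 \
  -drive file=centos8.qcow2,format=qcow2 \
  -device virtio-net-pci,netdev=net0 \
  -netdev tap,id=net0,ifname=tap0,script=no,downscript=no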

STEP 2) Install QEMU.

Install the QEMU virtual tools under CentOS 8 Stream. At present, the QEMU version is 6.2, which is pretty new.
Keep on reading!

rsync server under CentOS 8 with SELinux enabled

Here is a quick and useful tip on how to run an rsync daemon under CentOS 8 with SELinux in Enforcing mode.
There are three basic steps:

  1. rsync daemon installation and configuration.
  2. firewall configuration.
  3. SELinux configuration.

STEP 1) rsync daemon installation and configuration.

Under CentOS 8, the rsync daemon files are in a separate RPM package, rsync-daemon (more on the subject: rsync daemon in CentOS 8):

[root@srv ~]# dnf install -y rsync-daemon
Last metadata expiration check: 2:45:48 ago on Thu Apr  7 07:40:42 2022.
Dependencies resolved.
==============================================================================================================
 Package                     Architecture          Version                        Repository             Size
==============================================================================================================
Installing:
 rsync-daemon                noarch                3.1.3-14.el8                   baseos                 43 k

Transaction Summary
==============================================================================================================
Install  1 Package

Total download size: 43 k
Installed size: 17 k
Downloading Packages:
rsync-daemon-3.1.3-14.el8.noarch.rpm                                           98 kB/s |  43 kB     00:00    
--------------------------------------------------------------------------------------------------------------
Total                                                                          81 kB/s |  43 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                      1/1 
  Installing       : rsync-daemon-3.1.3-14.el8.noarch                                                     1/1 
  Running scriptlet: rsync-daemon-3.1.3-14.el8.noarch                                                     1/1 
  Verifying        : rsync-daemon-3.1.3-14.el8.noarch                                                     1/1 

Installed:
  rsync-daemon-3.1.3-14.el8.noarch                                                                            

Complete!
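The remaining two steps boil down to a minimal configuration file and a few commands. A hedged sketch – the module name mirror and its path are assumptions, and the SELinux boolean should be verified against the local policy:

# /etc/rsyncd.conf – a minimal module definition (example values)
# [mirror]
#     path = /mnt/storage/mirror
#     read only = yes

# open the rsync port (873) in the firewall
firewall-cmd --permanent --add-service=rsyncd
firewall-cmd --reload

# allow the rsync daemon to export files read-only under SELinux Enforcing mode
setsebool -P rsync_export_all_ro on

# enable and start the daemon
systemctl enable --now rsyncd.service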

Keep on reading!

Starting up standalone ClickHouse server with basic configuration in docker

ClickHouse is a powerful column-oriented database written in C++, which generates analytical and statistical reports in real time using SQL statements!

It supports on-the-fly compression of the data, cluster setups of replicas and shards over thousands of servers, and multi-master cluster modes.

ClickHouse is an ideal instrument for weblogs and easy real-time report generation over them! Or for storing data on user behaviour and interactions with websites or applications.
The easiest way to run a ClickHouse instance is within a docker/podman container. The Docker hub hosts official container images maintained by the ClickHouse developers – see the docker run sketch after the list of key points below.
And this article will show how to run a standalone ClickHouse server, how to manage the ClickHouse configuration features, and what obstacles the user may encounter.

Here are some key points:

  • Main server configuration file is config.xml (in /etc/clickhouse-server/config.xml) – all the server’s settings like listening ports, logger, remote access, cluster setup (shards and replicas), system settings (time zone, umask, and more), monitoring, query logs, dictionaries, compressions and so on. Check out the server settings: https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings/
  • The main user configuration file is users.xml (in /etc/clickhouse-server/users.xml), which specifies profiles, users, passwords, ACL, quotas, and so on. It also supports SQL-driven user configuration, check out the available settings and users’ options – https://clickhouse.com/docs/en/operations/settings/settings-users/
  • By default, there is a default user with administrative privileges and no password, which can only connect to the server from the localhost.
  • Do not edit the main configuration file(s). Some options may get deprecated and removed, and the modified configuration file may become incompatible with new releases.
  • Every configuration setting can be overridden with configuration files in config.d/. A good practice is to have one configuration file per setting, which overrides the default one in config.xml. For example:
    root@srv ~ # ls -al /etc/clickhouse-server/config.d/
    total 48
    drwxr-xr-x 2 root root 4096 Nov 22 04:40 .
    drwxr-xr-x 4 root root 4096 Nov 22 04:13 ..
    -rw-r--r-- 1 root root  343 Sep 16  2021 00-path.xml
    -rw-r--r-- 1 root root   58 Nov 22 04:40 01-listen.xml
    -rw-r--r-- 1 root root  145 Feb  3  2020 02-log_to_console.xml
    

    There are three configuration files, which override the default paths (00-path.xml), change the default listen setting (01-listen.xml), and log to the console (02-log_to_console.xml). Here is what to expect in 00-path.xml:

    <yandex>
        <path replace="replace">/mnt/storage/ClickHouse/var/</path>
        <tmp_path replace="replace">/mnt/storage/ClickHouse/tmp/</tmp_path>
        <user_files_path replace="replace">/mnt/storage/ClickHouse/var/user_files/</user_files_path>
        <format_schema_path replace="replace">/mnt/storage/ClickHouse/format_schemas/</format_schema_path>
    </yandex>
    

    So the default settings in config.xml for path, tmp_path, user_files_path and format_schema_path will be replaced with the above values.
    To open ClickHouse to the outer world, i.e. listen on 0.0.0.0, just include a configuration file like 01-listen.xml:

    <yandex>
        <listen_host>0.0.0.0</listen_host>
    </yandex>
    
  • All additional (including user) configuration files are processed, and the result is written to the preprocessed_configs/ directory under the var directory, for example /var/lib/clickhouse/preprocessed_configs/.
  • The configuration directories are re-read every 3600 seconds by default (it can be changed) by the ClickHouse server; on a change in the configuration files, new preprocessed ones are generated, and in most cases the changes are loaded on the fly. Still, there are settings which require a manual restart of the main process. Check out the manual for more details.
  • By default, the logger is set to the trace log level, which may generate an enormous amount of logging data. So just change the setting to something more production-meaningful like the warning level (in config.d/04-part_log.xml):
    <yandex>
        <logger>
            <level>warning</level>
        </logger>
    </yandex>
    
  • ClickHouse default ports:
    • 8123 is the HTTP client port (8443 is the HTTPS). The client can connect with curl or wget or other command-line HTTP(S) clients to manage and insert data in databases and tables.
    • 9000 is the native TCP/IP client port (9440 is the TLS enabled port for this service) to manage and insert data in databases and tables.
    • 9004 is the MySQL protocol port. ClickHouse supports the MySQL wire protocol, which can be enabled with the following configuration:
      <yandex>
          <mysql_port>9004</mysql_port>
      </yandex>
      
    • 9009 is the port ClickHouse uses to exchange data between ClickHouse servers in a cluster setup with replicas/shards.
  • There is a flags directory, in which files with special names may instruct ClickHouse to process commands. For example, creating a blank file with the name /var/lib/clickhouse/flags/force_restore_data will instruct ClickHouse to begin a restore procedure for the server.
  • A good practice is to make a backup of the whole configuration directory, even though the main configuration file(s) are not changed and remain in their original state.
  • The SQL commands supported by the ClickHouse server: https://clickhouse.com/docs/en/sql-reference/ and https://clickhouse.com/docs/en/sql-reference/statements/
  • The basic and fundamental table engine is MergeTree, which is designed for inserting very large amounts of data into a table – https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree/
  • Bear in mind, ClickHouse supports SQL syntax and some of the SQL statements, but the UPDATE and DELETE statements are not supported, just INSERTs! The main idea behind ClickHouse is not to change the data, but only to add to it!
  • Batch INSERTs are the preferred way of inserting data! In fact, there is a recommendation of no more than one INSERT per second in the ClickHouse manual.
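As promised above, here is a minimal sketch of starting a standalone server in docker with a bind-mounted config.d/ directory – the host paths are assumptions, and podman accepts the same syntax:

# run the official image, exposing the HTTP (8123) and native TCP (9000) client ports
docker run -d --name clickhouse \
  -p 8123:8123 -p 9000:9000 \
  -v /srv/clickhouse/config.d:/etc/clickhouse-server/config.d \
  -v /srv/clickhouse/data:/var/lib/clickhouse \
  clickhouse/clickhouse-server

# a quick check over the HTTP client port
curl 'http://127.0.0.1:8123/?query=SELECT%20version()'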

Keep on reading!

QEMU full virtualization – CPU emulations (enable/disable CPU flags/instruction sets) of QEMU 6.2.0

This article is an updated version of the old QEMU article about the CPU flags available in version 2.0.0 – QEMU full virtualization – CPU emulations (enable/disable CPU flags/instruction sets) of QEMU 2.0.0.
The latest version of QEMU is 6.2.0 and it offers way more CPU flags and features! QEMU can be used for nearly native full virtualization. Here are some important tips for the guest CPU to consider when using QEMU directly (without any virtualization manager like virt-manager, libvirt and so on).

TIP 1) Choose your host CPU emulation

You can see what options are available for host emulation with:

root@srv ~ # qemu-system-x86_64 -cpu help
Available CPUs:
x86 486                   (alias configured by machine type)                        
x86 486-v1                                                                          
x86 Broadwell             (alias configured by machine type)                        
x86 Broadwell-IBRS        (alias of Broadwell-v3)                                   
x86 Broadwell-noTSX       (alias of Broadwell-v2)                                   
x86 Broadwell-noTSX-IBRS  (alias of Broadwell-v4)                                   
x86 Broadwell-v1          Intel Core Processor (Broadwell)                          
x86 Broadwell-v2          Intel Core Processor (Broadwell, no TSX)                  
x86 Broadwell-v3          Intel Core Processor (Broadwell, IBRS)                    
x86 Broadwell-v4          Intel Core Processor (Broadwell, no TSX, IBRS)            
x86 Cascadelake-Server    (alias configured by machine type)                        
x86 Cascadelake-Server-noTSX  (alias of Cascadelake-Server-v3)                          
x86 Cascadelake-Server-v1  Intel Xeon Processor (Cascadelake)                        
x86 Cascadelake-Server-v2  Intel Xeon Processor (Cascadelake) [ARCH_CAPABILITIES]    
x86 Cascadelake-Server-v3  Intel Xeon Processor (Cascadelake) [ARCH_CAPABILITIES, no TSX]
x86 Cascadelake-Server-v4  Intel Xeon Processor (Cascadelake) [ARCH_CAPABILITIES, no TSX]
x86 Conroe                (alias configured by machine type)                        
x86 Conroe-v1             Intel Celeron_4x0 (Conroe/Merom Class Core 2)             
x86 Cooperlake            (alias configured by machine type)                        
x86 Cooperlake-v1         Intel Xeon Processor (Cooperlake)                         
x86 Denverton             (alias configured by machine type)                        
x86 Denverton-v1          Intel Atom Processor (Denverton)                          
x86 Denverton-v2          Intel Atom Processor (Denverton) [no MPX, no MONITOR]     
x86 Dhyana                (alias configured by machine type)                        
x86 Dhyana-v1             Hygon Dhyana Processor                                    
x86 EPYC                  (alias configured by machine type)                        
x86 EPYC-IBPB             (alias of EPYC-v2)                                        
x86 EPYC-Milan            (alias configured by machine type)                        
x86 EPYC-Milan-v1         AMD EPYC-Milan Processor                                  
x86 EPYC-Rome             (alias configured by machine type)                        
x86 EPYC-Rome-v1          AMD EPYC-Rome Processor                                   
x86 EPYC-Rome-v2          AMD EPYC-Rome Processor                                   
x86 EPYC-v1               AMD EPYC Processor                                        
x86 EPYC-v2               AMD EPYC Processor (with IBPB)                            
x86 EPYC-v3               AMD EPYC Processor                                        
x86 Haswell               (alias configured by machine type)                        
x86 Haswell-IBRS          (alias of Haswell-v3)                                     
x86 Haswell-noTSX         (alias of Haswell-v2)                                     
x86 Haswell-noTSX-IBRS    (alias of Haswell-v4)                                     
x86 Haswell-v1            Intel Core Processor (Haswell)                            
x86 Haswell-v2            Intel Core Processor (Haswell, no TSX)                    
x86 Haswell-v3            Intel Core Processor (Haswell, IBRS)                      
x86 Haswell-v4            Intel Core Processor (Haswell, no TSX, IBRS)              
x86 Icelake-Client        (alias configured by machine type)                        
x86 Icelake-Client-noTSX  (alias of Icelake-Client-v2)                              
x86 Icelake-Client-v1     Intel Core Processor (Icelake) [deprecated]               
x86 Icelake-Client-v2     Intel Core Processor (Icelake) [no TSX, deprecated]       
x86 Icelake-Server        (alias configured by machine type)                        
x86 Icelake-Server-noTSX  (alias of Icelake-Server-v2)                              
x86 Icelake-Server-v1     Intel Xeon Processor (Icelake)                            
x86 Icelake-Server-v2     Intel Xeon Processor (Icelake) [no TSX]                   
x86 Icelake-Server-v3     Intel Xeon Processor (Icelake)                            
x86 Icelake-Server-v4     Intel Xeon Processor (Icelake)                            
x86 IvyBridge             (alias configured by machine type)                        
x86 IvyBridge-IBRS        (alias of IvyBridge-v2)                                   
x86 IvyBridge-v1          Intel Xeon E3-12xx v2 (Ivy Bridge)                        
x86 IvyBridge-v2          Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS)                  
x86 KnightsMill           (alias configured by machine type)                        
x86 KnightsMill-v1        Intel Xeon Phi Processor (Knights Mill)                   
x86 Nehalem               (alias configured by machine type)                        
x86 Nehalem-IBRS          (alias of Nehalem-v2)                                     
x86 Nehalem-v1            Intel Core i7 9xx (Nehalem Class Core i7)                 
x86 Nehalem-v2            Intel Core i7 9xx (Nehalem Core i7, IBRS update)          
x86 Opteron_G1            (alias configured by machine type)                        
x86 Opteron_G1-v1         AMD Opteron 240 (Gen 1 Class Opteron)                     
x86 Opteron_G2            (alias configured by machine type)                        
x86 Opteron_G2-v1         AMD Opteron 22xx (Gen 2 Class Opteron)                    
x86 Opteron_G3            (alias configured by machine type)                        
x86 Opteron_G3-v1         AMD Opteron 23xx (Gen 3 Class Opteron)                    
x86 Opteron_G4            (alias configured by machine type)                        
x86 Opteron_G4-v1         AMD Opteron 62xx class CPU                                
x86 Opteron_G5            (alias configured by machine type)                        
x86 Opteron_G5-v1         AMD Opteron 63xx class CPU                                
x86 Penryn                (alias configured by machine type)                        
x86 Penryn-v1             Intel Core 2 Duo P9xxx (Penryn Class Core 2)              
x86 SandyBridge           (alias configured by machine type)                        
x86 SandyBridge-IBRS      (alias of SandyBridge-v2)                                 
x86 SandyBridge-v1        Intel Xeon E312xx (Sandy Bridge)                          
x86 SandyBridge-v2        Intel Xeon E312xx (Sandy Bridge, IBRS update)             
x86 Skylake-Client        (alias configured by machine type)                        
x86 Skylake-Client-IBRS   (alias of Skylake-Client-v2)                              
x86 Skylake-Client-noTSX-IBRS  (alias of Skylake-Client-v3)                              
x86 Skylake-Client-v1     Intel Core Processor (Skylake)                            
x86 Skylake-Client-v2     Intel Core Processor (Skylake, IBRS)                      
x86 Skylake-Client-v3     Intel Core Processor (Skylake, IBRS, no TSX)              
x86 Skylake-Server        (alias configured by machine type)                        
x86 Skylake-Server-IBRS   (alias of Skylake-Server-v2)                              
x86 Skylake-Server-noTSX-IBRS  (alias of Skylake-Server-v3)                              
x86 Skylake-Server-v1     Intel Xeon Processor (Skylake)                            
x86 Skylake-Server-v2     Intel Xeon Processor (Skylake, IBRS)                      
x86 Skylake-Server-v3     Intel Xeon Processor (Skylake, IBRS, no TSX)              
x86 Skylake-Server-v4     Intel Xeon Processor (Skylake, IBRS, no TSX)              
x86 Snowridge             (alias configured by machine type)                        
x86 Snowridge-v1          Intel Atom Processor (SnowRidge)                          
x86 Snowridge-v2          Intel Atom Processor (Snowridge, no MPX)                  
x86 Westmere              (alias configured by machine type)                        
x86 Westmere-IBRS         (alias of Westmere-v2)                                    
x86 Westmere-v1           Westmere E56xx/L56xx/X56xx (Nehalem-C)                    
x86 Westmere-v2           Westmere E56xx/L56xx/X56xx (IBRS update)                  
x86 athlon                (alias configured by machine type)                        
x86 athlon-v1             QEMU Virtual CPU version 2.5+                             
x86 core2duo              (alias configured by machine type)                        
x86 core2duo-v1           Intel(R) Core(TM)2 Duo CPU     T7700  @ 2.40GHz           
x86 coreduo               (alias configured by machine type)                        
x86 coreduo-v1            Genuine Intel(R) CPU           T2600  @ 2.16GHz           
x86 kvm32                 (alias configured by machine type)                        
x86 kvm32-v1              Common 32-bit KVM processor                               
x86 kvm64                 (alias configured by machine type)                        
x86 kvm64-v1              Common KVM processor                                      
x86 n270                  (alias configured by machine type)                        
x86 n270-v1               Intel(R) Atom(TM) CPU N270   @ 1.60GHz                    
x86 pentium               (alias configured by machine type)                        
x86 pentium-v1                                                                      
x86 pentium2              (alias configured by machine type)                        
x86 pentium2-v1                                                                     
x86 pentium3              (alias configured by machine type)                        
x86 pentium3-v1                                                                     
x86 phenom                (alias configured by machine type)                        
x86 phenom-v1             AMD Phenom(tm) 9550 Quad-Core Processor                   
x86 qemu32                (alias configured by machine type)                        
x86 qemu32-v1             QEMU Virtual CPU version 2.5+                             
x86 qemu64                (alias configured by machine type)                        
x86 qemu64-v1             QEMU Virtual CPU version 2.5+                             
x86 base                  base CPU model type with no features enabled              
x86 host                  KVM processor with all supported host features            
x86 max                   Enables all features supported by the accelerator in the current host

Recognized CPUID flags:
  3dnow 3dnowext 3dnowprefetch abm ace2 ace2-en acpi adx aes amd-no-ssb
  amd-ssbd amd-stibp apic arat arch-capabilities avic avx avx2
  avx512-4fmaps avx512-4vnniw avx512-bf16 avx512-fp16 avx512-vp2intersect
  avx512-vpopcntdq avx512bitalg avx512bw avx512cd avx512dq avx512er avx512f
  avx512ifma avx512pf avx512vbmi avx512vbmi2 avx512vl avx512vnni bmi1 bmi2
  bus-lock-detect cid cldemote clflush clflushopt clwb clzero cmov
  cmp-legacy core-capability cr8legacy cx16 cx8 dca de decodeassists ds
  ds-cpl dtes64 erms est extapic f16c flushbyasid fma fma4 fpu fsgsbase
  fsrm full-width-write fxsr fxsr-opt gfni hle ht hypervisor ia64 ibpb ibrs
  ibrs-all ibs intel-pt intel-pt-lip invpcid invtsc kvm-asyncpf
  kvm-asyncpf-int kvm-hint-dedicated kvm-mmu kvm-msi-ext-dest-id
  kvm-nopiodelay kvm-poll-control kvm-pv-eoi kvm-pv-ipi kvm-pv-sched-yield
  kvm-pv-tlb-flush kvm-pv-unhalt kvm-steal-time kvmclock kvmclock
  kvmclock-stable-bit la57 lahf-lm lbrv lm lwp mca mce md-clear mds-no
  misalignsse mmx mmxext monitor movbe movdir64b movdiri mpx msr mtrr
  nodeid-msr npt nrip-save nx osvw pae pat pause-filter pbe pcid pclmulqdq
  pcommit pdcm pdpe1gb perfctr-core perfctr-nb pfthreshold pge phe phe-en
  pks pku pmm pmm-en pn pni popcnt pschange-mc-no pse pse36 rdctl-no rdpid
  rdrand rdseed rdtscp rsba rtm sep serialize sha-ni skinit
  skip-l1dfl-vmentry smap smep smx spec-ctrl split-lock-detect ss ssb-no
  ssbd sse sse2 sse4.1 sse4.2 sse4a ssse3 stibp svm svm-lock svme-addr-chk
  syscall taa-no tbm tce tm tm2 topoext tsc tsc-adjust tsc-deadline
  tsc-scale tsx-ctrl tsx-ldtrk umip v-vmsave-vmload vaes vgif virt-ssbd
  vmcb-clean vme vmx vmx-activity-hlt vmx-activity-shutdown
  vmx-activity-wait-sipi vmx-apicv-register vmx-apicv-vid vmx-apicv-x2apic
  vmx-apicv-xapic vmx-cr3-load-noexit vmx-cr3-store-noexit
  vmx-cr8-load-exit vmx-cr8-store-exit vmx-desc-exit vmx-encls-exit
  vmx-entry-ia32e-mode vmx-entry-load-bndcfgs vmx-entry-load-efer
  vmx-entry-load-pat vmx-entry-load-perf-global-ctrl vmx-entry-load-pkrs
  vmx-entry-load-rtit-ctl vmx-entry-noload-debugctl vmx-ept vmx-ept-1gb
  vmx-ept-2mb vmx-ept-advanced-exitinfo vmx-ept-execonly vmx-eptad
  vmx-eptp-switching vmx-exit-ack-intr vmx-exit-clear-bndcfgs
  vmx-exit-clear-rtit-ctl vmx-exit-load-efer vmx-exit-load-pat
  vmx-exit-load-perf-global-ctrl vmx-exit-load-pkrs
  vmx-exit-nosave-debugctl vmx-exit-save-efer vmx-exit-save-pat
  vmx-exit-save-preemption-timer vmx-flexpriority vmx-hlt-exit vmx-ins-outs
  vmx-intr-exit vmx-invept vmx-invept-all-context vmx-invept-single-context
  vmx-invept-single-context vmx-invept-single-context-noglobals
  vmx-invlpg-exit vmx-invpcid-exit vmx-invvpid vmx-invvpid-all-context
  vmx-invvpid-single-addr vmx-io-bitmap vmx-io-exit vmx-monitor-exit
  vmx-movdr-exit vmx-msr-bitmap vmx-mtf vmx-mwait-exit vmx-nmi-exit
  vmx-page-walk-4 vmx-page-walk-5 vmx-pause-exit vmx-ple vmx-pml
  vmx-posted-intr vmx-preemption-timer vmx-rdpmc-exit vmx-rdrand-exit
  vmx-rdseed-exit vmx-rdtsc-exit vmx-rdtscp-exit vmx-secondary-ctls
  vmx-shadow-vmcs vmx-store-lma vmx-true-ctls vmx-tsc-offset
  vmx-unrestricted-guest vmx-vintr-pending vmx-vmfunc
  vmx-vmwrite-vmexit-fields vmx-vnmi vmx-vnmi-pending vmx-vpid
  vmx-wbinvd-exit vmx-xsaves vmx-zero-len-inject vpclmulqdq waitpkg
  wbnoinvd wdt x2apic xcrypt xcrypt-en xgetbv1 xop xsave xsavec xsaveerptr
  xsaveopt xsaves xstore xstore-en xtpr

The number of supported flags has grown enormously compared to the old versions of QEMU and, in fact, it includes almost all available CPU flags. The supported CPU models are also several times more numerous than before! The above list of supported CPUs means the virtual guest machine can use one of them, and the guest operating system will have all the flags that CPU model supports. In fact, the guest virtual system will report the selected CPU from the list above to the guest OS.
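To make this concrete: a CPU model from the list is selected with the -cpu option, and individual flags are toggled with + and - prefixes. A short sketch, assuming a hypothetical disk image disk.qcow2:

# emulate a Skylake server CPU, force the aes flag on and the x2apic flag off
qemu-system-x86_64 -enable-kvm -m 2048 \
  -cpu Skylake-Server-v3,+aes,-x2apic \
  -drive file=disk.qcow2,format=qcow2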
Keep on reading!

Virtualbox machine boots from usb drive

First, at present, booting from USB is impossible with VirtualBox! But there is a really easy workaround using VMDK, which is just a container file describing physical devices (or files) to use in virtual machines like VirtualBox or VMware.
Because the USB drive is just another physical device attached to the machine, this article will help to attach the USB drive to a virtual machine – Add a raw disk to a virtualbox virtual machine. Then boot from the newly attached disk.

Here is the quick tip for the USB drive:

  1. Attach the USB drive and find its device path. Under Windows, it would be something like “\\.\PhysicalDrive3” (open “Disk Management” if not sure) and under Linux it would be /dev/sdc, for example. This is the third disk device (including USB disk devices) connected to the machine.
  2. Make the VMDK from the USB physical device.
    Under Windows:

    VBoxManage.exe internalcommands createrawvmdk -filename "c:\Users\homer\.VirtualBox\windows11pro-install-usb.vmdk" -rawdisk \\.\PhysicalDrive3
    

    Under Linux:

    VBoxManage internalcommands createrawvmdk -filename /home/myuser/.VirtualBox/windows11pro-install-usb.vmdk -rawdisk /dev/sdc
    
  3. Attach it to the virtual machine: Settings -> Storage -> Storage Devices.

    First, a click on “Adds hard disk” shows a menu to add a new hard disk, and then a click on “Add” (“Add Disk Image”) shows a file browse dialog to locate the VMDK file.

    (Screenshots: the main menu and the Storage Devices dialog.)
  4. Boot from this device by selecting it manually from the boot menu (press F12 during boot to open the boot menu) or set the VMDK disk to be on Port 0 in the above step.

For more details (not just the commands to generate the VMDK container file) follow the above URL to the proposed article – Add a raw disk to a virtualbox virtual machine
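The attachment can also be done from the command line with VBoxManage storageattach. A sketch, where the VM name "win11" and the controller name "SATA" are assumptions (check them with VBoxManage showvminfo):

# attach the raw-disk VMDK to port 0 of the SATA controller so the VM boots from it
VBoxManage storageattach "win11" --storagectl "SATA" \
  --port 0 --device 0 --type hdd \
  --medium /home/myuser/.VirtualBox/windows11pro-install-usb.vmdk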

Install and deploy MySQL 8 InnoDB Cluster with 3 nodes under CentOS 8 and MySQL Router for HA

This article is going to show how to install a MySQL server and deploy a MySQL 8 InnoDB Cluster with three nodes behind a MySQL Router to achieve high availability for the MySQL database back-end.

In really simple words, MySQL 8.0 InnoDB Cluster is just MySQL replication on steroids – i.e. a little additional work between the servers in the group before committing the transactions. It uses the MySQL Group Replication plugin, which allows the group to operate in two different modes:

  1. a single-primary mode with automatic primary election. Only one server gets the updates.
  2. a multi-master mode – all servers accept the updates. For advanced setups.

Group Replication is bi-directional; the servers communicate with each other and use row-based replication to replicate the data. The main limitation is that only the MySQL InnoDB engine is supported, because of the transaction support. So the performance (and most features and caveats) of MySQL InnoDB is not impacted by the cluster setup and overhead, compared to MySQL in replication mode (or single-server setups) from previous MySQL versions. Still, all read-write transactions commit only after they have been approved by the group – a verification process providing consensus between the servers. In fact, most of the features like GTIDs and row-based replication (i.e. different replication modes) were developed for and are available in older versions. The new part is handled by the Group Communication System (GCS) protocols, which provide a failure detection mechanism, a group membership service, and safe and completely ordered message delivery (more on the subject here https://dev.mysql.com/doc/refman/8.0/en/group-replication-background.html).
In addition to the group replication, MySQL Router 8.0 provides the HA (high availability). MySQL Router is the program that redirects, fails over, and balances connections to the right server in the group. Clients may connect directly to the servers in the group, but only clients connecting through MySQL Router will have HA, because Group Replication does not have a built-in method for it. It is worth noting there could be many MySQL Routers on different servers; they do not need to communicate or synchronize anything with each other. So the router could be installed where the application is installed, or on a separate dedicated server, or on every MySQL server in the group.

Key points in this article of MySQL InnoDB Cluster deployment:

  • CentOS 8 Stream is used for the operating system.
  • SELinux tuning to allow the MySQL process to connect to the network.
  • CentOS 8 firewall tuning to unblock the traffic between the nodes.
  • Disabling the mysql package system module to use the official MySQL repository.
  • Three MySQL 8.0.28 server nodes will be installed.
  • To create and manage the cluster, MySQL Shell 8.0 and the dba object in it are used.
  • Three MySQL Routers will be installed – one on each MySQL node.
  • Each server will have the hostnames of all three servers in the /etc/hosts file – db-cluster-1, db-cluster-2, db-cluster-3.
  • The cluster is in group replication mode with one primary (i.e. master) and two secondary nodes (i.e. slaves).

STEP 1) Install CentOS 8 Stream.

There is an article about CentOS 8 – How to do a network installation of CentOS 8 (8.0.1950) – minimal server installation – and the installation is essentially the same for CentOS 8 Stream.

STEP 2) Prepare the CentOS 8 Stream to install MySQL 8 server.

At present, the latest MySQL Community edition is 8.0.28. The preferred way to install the MySQL server is to download the RPM repository file from the MySQL web site – https://dev.mysql.com/downloads/repo/yum/
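A hedged sketch of the preparation commands follows – the release RPM file name changes over time (check the repository page above), and the SELinux boolean and firewall ports should be verified for the concrete setup:

# disable the distribution's mysql module and install the official repository and packages
dnf -y module disable mysql
dnf -y install https://dev.mysql.com/get/mysql80-community-release-el8-4.noarch.rpm
dnf -y install mysql-community-server mysql-shell

# allow the MySQL process to make outgoing network connections under SELinux
setsebool -P mysql_connect_any on

# open the MySQL, X Protocol and Group Replication ports between the nodes
firewall-cmd --permanent --add-port={3306,33060,33061}/tcp
firewall-cmd --reload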
Keep on reading!

conda command-line search and install a package in a new environment – tensorflow

Using conda from Anaconda, it is really easy to install even complex environments like TensorFlow on many different Linux distributions and Windows.
The conda utility and its multiple environments guarantee no changes to the package system of the current Linux distribution. Installing operating system updates may break fine-tuned and complex development environments. Installing packages from conda minimizes the OS-related problems and lets the user run complex development setups in various Linux distributions like CentOS, Fedora, Manjaro, Mint, Debian, Ubuntu, Elementary OS with the same command-line interface.
Using pip instead of conda may lead to a broken environment after simple OS package updates.

STEP 1) Install conda command line utility.

The install is easy enough, just follow this article – Installing conda command line in various systems with miniconda and create a simple python environment
The conda command-line utility is installed by Miniconda3.

STEP 2) Search for conda packages.

Use the search command to find packages. All available versions supported by the current installation are displayed.
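A short sketch of the whole flow – the environment name tf is arbitrary, and the versions listed depend on the configured conda channels:

# search for the package and list the available versions
conda search tensorflow

# create a new environment with tensorflow and activate it
conda create -n tf tensorflow
conda activate tf
python -c 'import tensorflow as tf; print(tf.__version__)'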
Keep on reading!