storcli with multiple disks from different enclosures

Creating a virtual drive with the AVAGO storcli command-line tool under Linux. Two examples are included:

  1. All disks are from one of the enclosures. All disks are included explicitly.
  2. Disks from two enclosures are included. One controller with two enclosures.

Check out how to Install the new storcli to manage (LSI/AVAGO/Broadcom) MegaRAID controller under CentOS 7
There are 31 disks in the 36 hard disk bays; 5 are left out on purpose for the examples.

The initial states of the controller and the disks.

livecd ~ # /opt/MegaRAID/storcli/storcli /c0 show
Generating detailed summary of the adapter, it may take a while to complete.

CLI Version = 007.0510.0000.0000 May 4, 2018
Operating system = Linux 4.19.72-gentoo
Controller = 0
Status = Success
Description = None

Product Name = LSI 2108 MegaRAID
Serial Number = FW-ABQRCBEAARBWA
SAS Address =  5003048004015f00
PCI Address = 00:06:00:00
System Time = 07/20/2020 22:58:35
Mfg. Date = 00/00/00
Controller Time = 07/20/2020 22:58:36
FW Package Build = 12.15.0-0239
FW Version = 2.130.403-4660
BIOS Version = 3.30.02.2_4.16.08.00_0x06060A05
Driver Name = megaraid_sas
Driver Version = 07.706.03.00-rc1
Vendor Id = 0x1000
Device Id = 0x79
SubVendor Id = 0x15D9
SubDevice Id = 0x700
Host Interface = PCI-E
Device Interface = SAS-6G
Bus Number = 6
Device Number = 0
Function Number = 0
Physical Drives = 31

PD LIST :
=======

-----------------------------------------------------------------------------------
EID:Slt DID State DG     Size Intf Med SED PI SeSz Model                   Sp Type 
-----------------------------------------------------------------------------------
12:0      0 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HUA723020ALA640 D  -    
12:1      1 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    D  -    
12:2      2 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:3      3 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:4      4 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:5      5 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS722T2TALA604    D  -    
12:6      6 UGood -  1.817 TB SATA HDD N   N  512B TOSHIBA DT01ACA200      D  -    
12:7      7 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:8      8 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:9      9 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:10    10 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:11    11 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
37:0     13 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:1     14 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:2     15 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:3     16 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HUA723020ALA640 U  -    
37:4     17 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:6     19 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:7     20 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS722T2TALA604    U  -    
37:8     21 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    U  -    
37:10    23 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:11    24 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:13    26 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:14    27 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    U  -    
37:16    29 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    U  -    
37:17    30 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HUA723020ALA640 U  -    
37:19    32 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:20    33 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:21    34 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS722T2TALA604    U  -    
37:22    35 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:23    36 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    U  -    
-----------------------------------------------------------------------------------

EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info
SeSz-Sector Size|Sp-Spun|U-Up|D-Down/PowerSave|T-Transition|F-Foreign
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded
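
For reference, the virtual drive is created with storcli's "add vd" command. A minimal sketch of the two cases listed above, assuming a RAID-6 array and the enclosure/slot IDs from the PD LIST above (verify the exact syntax for your controller and firmware with /opt/MegaRAID/storcli/storcli /c0 add vd help):

# Example 1: all disks from enclosure 12, listed explicitly
/opt/MegaRAID/storcli/storcli /c0 add vd type=raid6 name=VD0 drives=12:0-11
# Example 2: disks from both enclosures (12 and 37) on the same controller
/opt/MegaRAID/storcli/storcli /c0 add vd type=raid6 name=VD1 drives=12:0-5,37:0-4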

Keep on reading!

aptly delete a mirror and remove all files

Executing the drop command on a mirror only removes the meta information for the mirror; it does not remove the package files occupying space on the file system.

Dropping a mirror in aptly should be followed by a database cleanup command with aptly:

aptly db cleanup

The Bionic mirrors created in the previous article on the aptly subject – Mirror the official Ubuntu repositories using aptly – will be deleted here, removing all their files with:

aptly@srv:~$ aptly mirror drop bionic-main
Mirror `bionic-main` has been removed.
aptly@srv:~$ aptly mirror drop bionic-security-main
Mirror `bionic-security-main` has been removed.
aptly@srv:~$ aptly mirror drop bionic-universe     
Mirror `bionic-universe` has been removed.
aptly@srv:~$ aptly mirror drop bionic-updates-main
Mirror `bionic-updates-main` has been removed.
aptly@srv:~$ aptly mirror drop bionic-updates-universe
Mirror `bionic-updates-universe` has been removed.
aptly@srv:~$ aptly mirror list
No mirrors found, create one with `aptly mirror create ...`.
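
The same sequence of drop commands could also be scripted; a small sketch assuming the mirror names from above:

for m in bionic-main bionic-security-main bionic-universe bionic-updates-main bionic-updates-universe; do aptly mirror drop "$m"; done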

The occupied space on the disk mounted in /srv is 270G:

aptly@srv:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.9G     0  1.9G   0% /dev
tmpfs           395M  3.5M  391M   1% /run
/dev/sda3        19G  4.6G   13G  27% /
tmpfs           2.0G  204K  2.0G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda4       470G  270G  176G  61% /srv
tmpfs           395M     0  395M   0% /run/user/0
tmpfs           395M     0  395M   0% /run/user/1001

Actually freeing the space on the disk with the aptly db cleanup command:

aptly@srv:~$ aptly db cleanup
Loading mirrors, local repos, snapshots and published repos...
Loading list of all packages...
Deleting unreferenced packages (143121)...
Building list of files referenced by packages...
Building list of files in package pool...
Deleting unreferenced files (194097)...
Disk space freed: 268.80 GiB...
Compacting database...

The occupied space on the disk mounted in /srv is below 2G after the cleaning command:

aptly@srv:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.9G     0  1.9G   0% /dev
tmpfs           395M  3.5M  391M   1% /run
/dev/sda3        19G  4.6G   13G  27% /
tmpfs           2.0G  204K  2.0G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda4       470G    1G  176G   1% /srv
tmpfs           395M     0  395M   0% /run/user/0
tmpfs           395M     0  395M   0% /run/user/1001

Upgrading Ubuntu 18 to Ubuntu 20 – software versions upgrade table – head to head

In the following article a comparison between two LTS versions of Ubuntu is presented – Ubuntu 18.04 LTS (Bionic) versus Ubuntu 20.04 LTS (Focal). The latest versions of Ubuntu 18.04 and Ubuntu 20.04 as of 17.06.2020 are used to generate the software versions below.

In the desktop world upgrading to the new and latest version of a Linux distribution is almost mandatory, but in the server world upgrading is more complicated. The first step in upgrading a server is to check what software versions come with the new distribution version and then check whether the running custom (application) software supports those versions. For example, upgrading to a new distribution version that comes with PHP 7.4 when the current application supports only PHP 7.2 is not very wise, especially since the currently installed version may still have years of support ahead.

Providing a head-to-head version comparison is the main target of this article – a fast check of what versions the user could expect from the new (aka latest) Linux distribution.
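
To check what version of a given package a specific Ubuntu release ships, the Ubuntu archive can also be queried from the current system before upgrading. A small sketch, assuming the devscripts package is installed (rmadison queries the remote archive, so it lists versions for releases other than the running one):

rmadison -u ubuntu php7.4
rmadison -u ubuntu openssl nginx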

Software         Ubuntu 20.04                          Ubuntu 18.04
Linux kernel     5.4.0, 5.6.0                          4.15.0, 4.18.0, 5.0.0, 5.3.0, 5.4.0
libc             2.31                                  2.27
OpenSSL          1.1.1f                                1.0.2n, 1.1.1
GNU GCC          7.5.0, 8.4.0, 9.3.0, 10-20200411      4.8.5, 5.5.0, 6.5.0, 7.5.0, 8.4.0
PHP              7.4                                   7.2
Python           2.7.17, 3.8.2                         2.7.15, 3.6.7
Perl             5.30.0                                5.26.1
Ruby             2.7                                   2.5.1
OpenJDK          8u252-b09, 11.0.7, 13.0.3, 14.0.1     8u252-b09, 11.0.7
Go lang          1.13.8, 1.14.2                        1.8, 1.9, 1.10
Rust             1.41.0                                1.41.0
llvm             6.0.1, 7.0.1, 8.0.1, 9.0.1, 10.0.0    3.7.1, 3.9.1, 4.0.1, 5.0.1, 6.0, 7, 8, 9, 10.0.0
nodejs           10.19.0                               8.10.0
Subversion       1.13                                  1.9.7
Git              2.25.2                                2.17.1
Apache           2.4.41                                2.4.29
Nginx            1.17.10                               1.14.0
MySQL server     8.0.20                                5.7.30
MariaDB          10.3.22                               10.1.44
PostgreSQL       12.2                                  10.12
SQLite           3.31.1                                3.22.0
Xorg X server    1.20.8                                1.19.6
Gnome Shell      3.36.2                                3.28.4

Edit MySQL options in docker (or docker-compose) MySQL containers

Modifying the default options for the docker (podman) MySQL server is essential. The default MySQL options are too conservative and even for simple (automation) tests they could be tuned.
For example, modifying only one or two of the default InnoDB configuration options may lead to multiple times faster execution of the SQL queries and the related automation tests.

Here are three simple ways to modify the (default or current) MySQL my.cnf configuration options:

  • Command-line arguments. All MySQL configuration options can be overridden by passing them on the command line of the mysqld binary. The format is:
    --variable-name=value
    

    and the variable names could be obtained by

    mysqld --verbose --help
    

    and for the live configuration options:

    mysqladmin variables
    
  • Options in an additional configuration file, which will be included in the main configuration. The options in /etc/mysql/conf.d/config-file.cnf take precedence.
  • Replacing the default my.cnf configuration file – /etc/mysql/my.cnf.

Check out also the official page – https://hub.docker.com/_/mysql.
Under CentOS 8, docker is replaced by podman – just replace docker with podman in all of the commands below.

OPTION 1) Command-line arguments.

This is the simplest way of modifying the default my.cnf (the one that comes with the docker image, or the one in the current docker image file). It is fast and easy to use and change, just a bit more typing on the command line. As mentioned above, all MySQL options can be changed by a command-line argument to the mysqld binary. For example:

mysqld --innodb_buffer_pool_size=1024M

It will start the MySQL server with the variable innodb_buffer_pool_size set to 1G. Translated to docker commands (for multiple options just add them at the end of the command):

  • docker run

    root@srv ~ # docker run --name my-mysql -v /var/lib/mysql:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=111111 \
    -d mysql:8 \
    --innodb_buffer_pool_size=1024M \
    --innodb_read_io_threads=4 \
    --innodb_flush_log_at_trx_commit=2 \
    --innodb_flush_method=O_DIRECT
    1bb7f415ab03b8bfd76d1cf268454e3c519c52dc383b1eb85024e506f1d04dea
    root@srv ~ # docker exec -it my-mysql mysqladmin -p111111 variables|grep innodb_buffer_pool_size
    | innodb_buffer_pool_size                                  | 1073741824
    
  • docker-compose:

    # Docker MySQL arguments example
    version: '3.1'
    
    services:
    
      db:
        image: mysql:8
        command: --default-authentication-plugin=mysql_native_password --innodb_buffer_pool_size=1024M --innodb_read_io_threads=4 --innodb_flush_log_at_trx_commit=2 --innodb_flush_method=O_DIRECT
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: 111111
        volumes:
         - /var/lib/mysql_data:/var/lib/mysql
        ports:
          - "3306:3306"
    

    Here is how to run it (the above text file should be named docker-compose.yml and the file should be in the current directory when executing the command below):

    root@srv ~ # docker-compose up
    Creating network "docker-compose-mysql_default" with the default driver
    Creating my-mysql ... done
    Attaching to my-mysql
    my-mysql | 2020-06-16 09:45:35+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.20-1debian10 started.
    my-mysql | 2020-06-16 09:45:35+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
    my-mysql | 2020-06-16 09:45:35+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.20-1debian10 started.
    my-mysql | 2020-06-16T09:45:36.293747Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
    my-mysql | 2020-06-16T09:45:36.293906Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.20) starting as process 1
    my-mysql | 2020-06-16T09:45:36.307654Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
    my-mysql | 2020-06-16T09:45:36.942424Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
    my-mysql | 2020-06-16T09:45:37.136537Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/var/run/mysqld/mysqlx.sock' bind-address: '::' port: 33060
    my-mysql | 2020-06-16T09:45:37.279733Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
    my-mysql | 2020-06-16T09:45:37.306693Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
    my-mysql | 2020-06-16T09:45:37.353358Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.20'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server - GPL.
    

    And check the option:

    root@srv ~ # docker exec -it my-mysql mysqladmin -p111111 variables|grep innodb_buffer_pool_size
    | innodb_buffer_pool_size                                  | 1073741824
    

OPTION 2) Options in an additional configuration file.

Create a MySQL option file with name config-file.cnf:

[mysqld]
innodb_buffer_pool_size=1024M
innodb_read_io_threads=4
innodb_flush_log_at_trx_commit=2
innodb_flush_method=O_DIRECT
  • docker run:
    The source path must be an absolute path!

    docker run --name my-mysql \
    -v /var/lib/mysql_data:/var/lib/mysql \
    -v /etc/mysql/docker-instances/config-file.cnf:/etc/mysql/conf.d/config-file.cnf \
    -e MYSQL_ROOT_PASSWORD=111111 \
    -d mysql:8
    
  • docker-compose:
    The source path does not need to be absolute; it can be relative to the current directory.

    # Docker MySQL arguments example
    version: '3.1'
    
    services:
    
      db:
        container_name: my-mysql
        image: mysql:8
        command: --default-authentication-plugin=mysql_native_password
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: 111111
        volumes:
         - /var/lib/mysql_data:/var/lib/mysql
         - ./config-file.cnf:/etc/mysql/conf.d/config-file.cnf
        ports:
          - "3306:3306"
    

OPTION 3) Replacing the default my.cnf configuration file.

Add the modified options to a my.cnf template file and map it into the container as /etc/mysql/my.cnf. When overwriting the main MySQL option file – my.cnf – you may map the whole /etc/mysql directory (just replace /etc/mysql/my.cnf with /etc/mysql below), too. The source file (or directory) on the host may be any file (or directory), not necessarily /etc/mysql/my.cnf (or /etc/mysql).

  • docker run:
    The source path must be an absolute path.

    docker run --name my-mysql \
    -v /var/lib/mysql_data:/var/lib/mysql \
    -v /etc/mysql/my.cnf:/etc/mysql/my.cnf \
    -e MYSQL_ROOT_PASSWORD=111111 \
    --publish 3306:3306 \
    -d mysql:8
    

    Note: here a new option “--publish 3306:3306” is included to show how to map the ports outside of the container, like all the docker-compose examples here.

  • docker-compose:
    The source path does not need to be absolute; it can be relative to the current directory.

    # Use root/example as user/password credentials
    version: '3.1'
    
    services:
    
      db:
        container_name: my-mysql
        image: mysql:8
        command: --default-authentication-plugin=mysql_native_password
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: 111111
        volumes:
         - /var/lib/mysql_data:/var/lib/mysql
         - ./mysql/my.cnf:/etc/mysql/my.cnf
        ports:
          - "3306:3306"
    

Configure Bond (802.3ad LACP) device in CentOS 8 – configuration files

Upgrading to a bond device is a common step when the server exhausts its current network port bandwidth.
The hardware setup of the bond example here is:

  • two 10G network cards – ens1f0 and ens1f1
  • bond name – bond0
  • bond mode – 802.3ad – Link Aggregation Control Protocol (LACP)

The systemd reconfiguration procedure consists of:

  • Stop the network target
    systemctl stop network
    
  • Set up several configuration files – network device files for the network interfaces and the bonding interface (master and slave devices).
  • Start the network target
    systemctl start network
    

*Note: the 802.3ad bonding mode needs additional configuration on the switch to which the server is connected.

The example here uses CentOS 8 configuration files to make a permanent (i.e. persistent over reboots) bonding configuration.
Check out the official bonding documentation for all modes and options – https://www.kernel.org/doc/Documentation/networking/bonding.txt.

CONF 1) Configure the network interfaces.

The interfaces should stay down at boot (ONBOOT=no) in their configuration files.
Interface 1 – /etc/sysconfig/network-scripts/ifcfg-ens1f0:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens1f0
UUID=3b399a23-570e-45ed-9369-4ff5b87efb2c
DEVICE=ens1f0
ONBOOT=no

Interface 2 – /etc/sysconfig/network-scripts/ifcfg-ens1f1:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens1f1
UUID=ecdc5d5b-9739-4424-9d67-362411974281
DEVICE=ens1f1
ONBOOT=no

CONF 2) Configure bonding master device – create a bonding group bond0

This device should be started up at boot.
Bonding device 1 – with name bond0 – /etc/sysconfig/network-scripts/ifcfg-Bond_connection_1:

BONDING_OPTS="downdelay=200 miimon=100 mode=802.3ad updelay=200"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.10.10.10
PREFIX=24
GATEWAY=10.10.10.1
DNS1=10.10.10.2
DNS2=10.10.10.3
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_PRIVACY=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME="Bond connection 1"
UUID=f0a35f9a-20e4-484e-850c-689128642555
DEVICE=bond0
ONBOOT=yes

BONDING_OPTS holds the specific options for the bonding group bond0; the bonding mode is set here, too.

CONF 3) Configure bonding slave devices – the two network cards

Adding the two network cards to the bonding group bond0. These devices should be started up at boot.
Interface 1 – /etc/sysconfig/network-scripts/ifcfg-bond0_slave_1:

HWADDR=90:E2:BA:8A:13:8C
TYPE=Ethernet
NAME="bond0 slave 1"
UUID=c49e0ced-6411-41fa-9a3b-a01a430664a7
DEVICE=ens1f0
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Interface 2 – /etc/sysconfig/network-scripts/ifcfg-bond0_slave_2:

HWADDR=90:E2:BA:8A:13:8D
TYPE=Ethernet
NAME="bond0 slave 2"
UUID=90de1cad-1d9f-48cb-8e5a-7d8bfdde91d2
DEVICE=ens1f1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
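
After bringing the network back up, the bond state can be verified from the kernel's bonding status file and with the ip tool (standard commands, shown here only as a quick check):

cat /proc/net/bonding/bond0
ip -d link show bond0
ip addr show bond0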

Set up (802.3ad LACP) bonding when installing CentOS 8

This article shows how the user could install CentOS 8 (the steps are the same with CentOS 7) with a more complex network setup such as a bonding device in 802.3ad mode (LACP – Link Aggregation Control Protocol).
The whole installation procedure is not included here, but there are a couple of other articles on the subject “Install CentOS 7 or CentOS 8”:

Similar configuration files will be generated as in Configure Bond (802.3ad LACP) device in CentOS 8 – configuration files

SCREENSHOT 1) Click on “Network and Host Name” to configure the machine networking.

Installation Summary – Network and Host Name

Keep on reading!

Chromium browser in Ubuntu 20.04 LTS without snap to use in docker container

Ubuntu team has its own vision for the snap (https://snapcraft.io/) service and that’s why they have moved the really big and difficult-to-maintain Chromium browser package into a snap package. Unfortunately, snap has many issues with docker containers and, in short, it is really difficult to run snap in a docker container. The user may simply not want to mess with snap packages (despite this being the future according to the Ubuntu team), or, like most developers, may need a browser for tests executed in a container.
Whether you are a developer or an ordinary user, this article is for you if you want the Chromium browser installed under Ubuntu 20.04 LTS without the snap service!
There are multiple options that could end up with a Chromium browser installed on the system without the snap service:

  1. Using a Debian package and the Debian repository. The problem here is that using Ubuntu and Debian repositories simultaneously on one machine is not a good idea, despite the hack of pinning the Debian packages with a low priority – https://askubuntu.com/questions/1204571/chromium-without-snap/1206153#1206153
  2. Using Google Chrome – https://www.google.com/chrome/?platform=linux. It is just a single Debian package, which provides a Chromium-like browser, and all dependencies requesting the Chromium browser package are fulfilled.
  3. Using the Chromium team dev or beta PPA (https://launchpad.net/~chromium-team) for the nearest Ubuntu version, as packages for Focal (Ubuntu 20.04 LTS) are still missing.
  4. more options available?

This article will show how to use the Ubuntu 18 (Bionic) Chromium browser package from the Chromium team beta PPA under Ubuntu 20.04 LTS (Focal). A Bionic package from the dev PPA of the very same Ubuntu Chromium team may be used, too.

All dependencies will be downloaded from Ubuntu 20.04 and just several chromium-* packages will be downloaded from the Bionic repository of the Chromium team PPA. The chances of breaking something are really small compared to option 1 above, which uses the Debian packages and repositories. Hopefully, soon there will be Focal (Ubuntu 20.04 LTS) packages in the Ubuntu Chromium team PPA!

Dockerfile

An example of a Dockerfile installing Chromium (and python3 selenium for automating web browser interactions)

RUN apt-key adv --fetch-keys "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111" \
&& echo 'deb http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic main ' >> /etc/apt/sources.list.d/chromium-team-beta.list \
&& apt update
RUN export DEBIAN_FRONTEND=noninteractive \
&& export DEBCONF_NONINTERACTIVE_SEEN=true \
&& apt-get -y install chromium-browser
RUN apt-get -y install python3-selenium

The first command adds the repository key and the repository to the Ubuntu source lists. Note that we are adding “bionic main”, not “focal main”.
Of all the dependencies of the Bionic chromium-browser, only three packages are pulled from the Bionic repository and all the others are from Ubuntu 20.04 (Focal):

.....
Get:1 http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic/main amd64 chromium-codecs-ffmpeg-extra amd64 84.0.4147.38-0ubuntu0.18.04.1 [1174 kB]
.....
Get:5 http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic/main amd64 chromium-browser amd64 84.0.4147.38-0ubuntu0.18.04.1 [67.8 MB]
.....
Get:187 http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic/main amd64 chromium-browser-l10n all 84.0.4147.38-0ubuntu0.18.04.1 [3429 kB]
.....

Here is the whole Dockerfile sample file:

#
#   Docker file for the image "chromium browser without snap"
#
FROM ubuntu:20.04
MAINTAINER myuser@example.com

#chromium browser
#original PPA repository, use if our local fails
RUN echo "tzdata tzdata/Areas select Etc" | debconf-set-selections && echo "tzdata tzdata/Zones/Etc select UTC" | debconf-set-selections
RUN export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NONINTERACTIVE_SEEN=true
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install gnupg2 apt-utils wget
#RUN wget -O /root/chromium-team-beta.pub "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111" && apt-key add /root/chromium-team-beta.pub
RUN apt-key adv --fetch-keys "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111" && echo 'deb http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic main ' >> /etc/apt/sources.list.d/chromium-team-beta.list && apt update
RUN export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NONINTERACTIVE_SEEN=true && apt-get -y install chromium-browser
RUN apt-get -y install python3-selenium
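
To build and test the image, the standard docker commands can be used (the image name below is only an example):

docker build -t ubuntu2004-chromium-nosnap .
docker run --rm -it ubuntu2004-chromium-nosnap chromium-browser --version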

Desktop install

The desktop installation is almost the same as the Dockerfile above. Just execute the following lines:
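
A sketch of those commands, mirroring the Dockerfile steps above (run as root or prefix each command with sudo; the key and the PPA line are the same as in the Dockerfile):

apt-key adv --fetch-keys "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111"
echo 'deb http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic main' >> /etc/apt/sources.list.d/chromium-team-beta.list
apt update
apt install -y chromium-browser python3-selenium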
Keep on reading!

Copy files with read errors successfully – skipping only errors (i.e. bad sectors)

Sometimes disks have errors or an SSD has a bad NAND cell. Saving the data from the whole disk may not be needed when only a specific file or two are important, and those files cannot be copied by cp or rsync because of an “Unrecovered read error”.
Furthermore, an SSD reallocates bad cells only when there are writes to them, which may not happen for years, while reads may happen every day. Reading from a sector with bad NAND cells results in slow I/O (multiple read commands are executed before giving up). Copying the file to a new place minus only 512 bytes may not harm the data, but this is difficult to do with the generic copy tools.
This article shows how to save single files from a mounted ext4 file system with bad sectors using the ddrescue tool – https://www.gnu.org/software/ddrescue/. In fact, ddrescue can rescue single files or whole devices.

STEP 1) Install ddrescue.

Installing ddrescue is pretty easy. The tool is included in almost all Linux distributions and it doesn’t have many dependencies. Note that there is another tool named dd_rescue, which is different from this one; just follow the link above for the tool used here.
CentOS 7/8 or Fedora:

yum install -y ddrescue

Ubuntu (any version from the last 10 years):

apt install -y gddrescue

Gentoo:

emerge -v ddrescue

STEP 2) Rescuing a single file with read errors because of bad sectors in a mounted file system.

[root@srv Snapshots]# ddrescue -v \{9f02ae0a-6dae-4729-b6a6-ec3f0550f294\}.vdi test2.vdi
GNU ddrescue 1.25
About to copy 15724 MBytes from '{9f02ae0a-6dae-4729-b6a6-ec3f0550f294}.vdi' to 'test2.vdi'
    Starting positions: infile = 0 B,  outfile = 0 B
    Copy block size: 128 sectors       Initial skip size: 384 sectors
Sector size: 512 Bytes

Press Ctrl-C to interrupt
     ipos:   13495 MB, non-trimmed:        0 B,  current rate:       0 B/s
     opos:   13495 MB, non-scraped:        0 B,  average rate:    162 MB/s
non-tried:        0 B,  bad-sector:     8192 B,    error rate:    4608 B/s
  rescued:   15724 MB,   bad areas:        2,        run time:      1m 36s
pct rescued:   99.99%, read errors:       18,  remaining time:          0s
                              time since last successful read:          0s
Finished                                      
[root@srv Snapshots]# ls -al
total 52602944
drwx------. 2 root root        4096 Jun  2 02:22 .
drwxr-xr-x. 4 root root        4096 Jun  1 14:16 ..
-rw-------. 1 root root   459981735 Nov  8  2018 2018-11-08T15-19-17-776317000Z.sav
-rw-------. 1 root root   566704069 Jun  1 14:16 2020-06-01T11-16-05-735318000Z.sav
-rw-------. 1 root root  8329887744 Jun  1 12:53 {3d30ebea-2e2f-4e33-8088-d3d66f315e2c}.vdi
-rw-------. 1 root root 15724445696 Nov  8  2018 {9f02ae0a-6dae-4729-b6a6-ec3f0550f294}.vdi
-rw-------. 1 root root  4012900352 Jun  1 14:16 {f7e72510-7dce-48fd-b62c-630664ad984f}.vdi
-rw-r--r--. 1 root root 15724445696 Jun  2 02:24 test2.vdi
-rw-------. 1 root root  9051041792 Jun  2 02:19 test.vdi
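
ddrescue can rescue a whole block device in the same way; a minimal sketch, assuming the failing disk is /dev/sdb and the replacement disk is /dev/sdc (the third argument is a map file, which allows the copy to be interrupted and resumed):

ddrescue -v /dev/sdb /dev/sdc rescue.map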

Here is an animated gif of the ddrescue procedure:

ddrescue – copy files with bad sectors

Keep on reading!

Change the location of container storage in podman (with SELinux enabled)

There are two main options to change the location of all the containers' storage:

  • bind mount the new location over the default storage directory (see Note 1)
  • Change the path of the location in the configuration file /etc/containers/storage.conf

You should stop all your containers, though it is not strictly mandatory.

Stop the containers (if any) and copy the storage directory, because once the storage path is reconfigured, podman won't access the containers and images left in the old path!
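
A minimal sketch of that preparation, assuming the new location used in this article (podman stop --all stops all running containers; rsync -aX preserves ownership, permissions and the SELinux labels of the copied files):

podman stop --all
rsync -aX /var/lib/containers/storage/ /mnt/mystorage/virtual/storage/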

STEP 1) Change the storage path in the podman configuration file.

If SELinux has been disabled (which is not recommended), it is just a matter of changing a path option in the configuration file /etc/containers/storage.conf:

# Primary Read/Write location of container storage
graphroot = "/var/lib/containers/storage"

Change it to whatever path you like; typically, it should point to the big storage device. In our case, the big storage is mounted under “/mnt/mystorage/virtual/storage”. Change the option to:

# Primary Read/Write location of container storage
graphroot = "/mnt/mystorage/virtual/storage"

Check the running configuration with:

[root@lsrv1 mystorage]# podman info
host:
  BuildahVersion: 1.12.0-dev
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.8-1.el7.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.8, commit: f85c8b1ce77b73bcd48b2d802396321217008762'
  Distribution:
    distribution: '"centos"'
    version: "7"
  MemFree: 191963136
  MemTotal: 16563531776
  OCIRuntime:
    name: runc
    package: runc-1.0.0-67.rc10.el7_8.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 7857680384
  SwapTotal: 8581541888
  arch: amd64
  cpus: 8
  eventlogger: journald
  hostname: lsrv1
  kernel: 3.10.0-1062.9.1.el7.x86_64
  os: linux
  rootless: false
  uptime: 607h 10m 53.36s (Approximately 25.29 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - registry.fedoraproject.org
  - registry.centos.org
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions: {}
  GraphRoot: /mnt/mystorage/virtual/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 0
  RunRoot: /var/run/containers/storage
  VolumePath: /mnt/mystorage/virtual/storage/volumes

Keep on reading!

Software and technical overview of Ubuntu 20.04 LTS server edition

Ubuntu 20.04 LTS server edition offers the following software and versions:

Software

  • linux kernel – 5.4.0 – 5.4.0-29-generic
  • System
    • linux-firmware – 1.187
    • libc – 2.31 – 2.31-0ubuntu9
    • GNU GCC – multiple versions available – 7.5.0, 8.4.0, 9.3.0 and 10-20200411. The exact versions – 7.5.0-6ubuntu2, 8.4.0-3ubuntu2, 9.3.0-10ubuntu2 and 10-20200411-0ubuntu1
    • OpenSSL – 1.1.1f – 1.1.1f-1ubuntu2
    • coreutils – 8.30 – 8.30-3ubuntu2
    • apt – 2.0.2ubuntu0.1
    • rsyslog – 8.2001.0 – 8.2001.0-1ubuntu1
  • Servers
    • Apache – 2.4.41 – 2.4.41-4ubuntu3
    • Nginx – 1.17.10 – 1.17.10-0ubuntu1
    • MySQL server – 8.0.20 – 8.0.20-0ubuntu0.20.04.1
    • MariaDB server – 10.3.22 – 10.3.22-1ubuntu1
    • PostgreSQL – 12.2-4
  • Programming

    • PHP – 7.4 – 7.4.3-4ubuntu1.1
    • python – 3.8.2 (3.8.2-0ubuntu2) and also includes 2.7.17 (2.7.17-2ubuntu4)
    • perl – 5.30.0 and also includes perl 6 6.d-2
    • ruby – 2.7 – 2.7+1
    • OpenJDK – includes multiple versions – 8, 11, 13 and 14. The exact versions are 8u252-b09-1ubuntu1, 11.0.7+10-3ubuntu1, 13.0.3+3-1ubuntu2 and 14.0.1+7-1ubuntu1
    • Go lang – multiple versions – 1.13.8 and 1.14.2. The exact versions – 1.13.8-1ubuntu1 and 1.14.2-1ubuntu1
    • Rust – 1.41.0 – 1.41.0+dfsg1+llvm-0ubuntu2
    • Subversion – 1.13.0 – 1.13.0-3
    • Git – 2.25.1 – 2.25.1-1ubuntu3
    • llvm – multiple versions – 6, 7, 8, 9 and 10. The exact versions – 6.0.1-14, 7.0.1-12, 8.0.1-9, 9.0.1-12, 10.0-50~exp1
  • Graphical User Interface
    • Xorg X server – 1.20.8 – 1.20.8-2ubuntu2
    • GNOME (the GUI) – 3.36.x – Gnome Shell – 3.36.1

Note: Not all of the above software comes installed by default. The versions above are valid for the initial release, so in fact these are the minimal versions you get with Ubuntu 20.04 LTS; installing and updating it after the initial release date may update some of the above packages to newer versions. The installed packages are 582, occupying 11G of space.

During the installation wizard you may want to install the following snap software environments. Of course, this software is available after the installation setup, too.

The test server is equipped with an AMD Threadripper 1950X, which is a 16-core CPU.
Check out Minimal installation of Ubuntu server 20.04 LTS, too.

Keep on reading!