CentOS 8 – add a storage driver (megaraid_sas) when booting the installation disk

Installing CentOS 8 on relatively old hardware may be a real challenge because of old hardware devices such as storage or network controllers, or both.
This article shows how to make the CentOS 8 installation wizard detect the storage – a hardware controller AOC-USAS2LP-H8iR (SMC2108 with an LSI 2108 chip). Unfortunately, the CentOS 8 team (in fact, RHEL 8 removed the support, too) decided to drop support for the LSI SAS2008/2108/2116 storage controllers from the “megaraid_sas” kernel driver. There are still servers in production with similar controllers, which were sold 4-5 years ago by big vendors such as Dell, HP, and so on.

The method here is to boot the installation CD/USB with modified kernel boot parameters that include a URL to a driver update ISO (which contains the megaraid_sas driver).

The proposed way to load the megaraid_sas (or any other) driver includes:

  1. Use an assisted driver update to load an ELRepo driver ISO during the first stage of the CentOS 8 Installation Wizard. ELRepo is a well-known community effort – http://elrepo.org/tiki/tiki-index.php. More on the assisted driver update here – https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/performing_an_advanced_rhel_installation/updating-drivers-during-installation_installing-rhel-as-an-experienced-user#performing-an-assisted-driver-update_updating-drivers-during-installation
  2. Configure the server's network so the driver ISO can be downloaded in the early stage of the CentOS 8 Installation Wizard. Add boot parameters that set up a valid network configuration.

The installation CD/USB can download an ISO with kernel drivers, and, of course, to download a file from the Internet the network must be configured at the earliest stage of the CentOS 8 installation wizard.
The string added to the boot parameters of the CentOS 8 installation CD/USB disk is:

 inst.dd=https://elrepo.org/linux/dud/el8/x86_64/dd-megaraid_sas-07.710.50.00-1.el8_2.elrepo.iso ip=10.10.10.10::10.10.10.1:255.255.255.0::enp8s0f0:off nameserver=8.8.8.8

SCREENSHOT 1) Select “Install CentOS Linux 8” with the arrow keys and hit the “TAB” key to edit the boot parameters.

As shown in the picture, just append “inst.dd=https://elrepo.org/linux/dud/el8/x86_64/dd-megaraid_sas-07.710.50.00-1.el8_2.elrepo.iso ip=10.10.10.10::10.10.10.1:255.255.255.0::enp8s0f0:off nameserver=8.8.8.8”. The “inst.dd” option tells the installation wizard where the driver ISO is located. The “ip” and “nameserver” options set up a working network in the early stage of the CentOS 8 Installation wizard so the driver ISO can be downloaded. Setting the network with these parameters is really important, because the download of the driver ISO happens at this early stage of loading the installation wizard. Replace the IP and the whole network configuration as needed.
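For clarity, the complete boot line after editing might look like the sketch below – the inst.stage2 label depends on the exact CentOS 8 ISO, so keep whatever is already present on the boot line and only append the new parameters:

 vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=CentOS-8-x86_64-dvd quiet inst.dd=https://elrepo.org/linux/dud/el8/x86_64/dd-megaraid_sas-07.710.50.00-1.el8_2.elrepo.iso ip=10.10.10.10::10.10.10.1:255.255.255.0::enp8s0f0:off nameserver=8.8.8.8

The “ip” value follows the dracut syntax ip=<client-IP>:<peer>:<gateway>:<netmask>:<hostname>:<interface>:<autoconf> – here the peer and hostname fields are left empty and autoconf is “off” because a static configuration is provided.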

Installation wizard – edit the boot parameters

Keep on reading!

storcli with multiple disks from different enclosures

Creating a virtual drive with the AVAGO storcli command-line tool under Linux. Two examples are included:

  1. All disks are from one enclosure. All disks are included explicitly.
  2. Disks from two enclosures are included – one controller with two enclosures.

Check out how to install the new storcli to manage an (LSI/AVAGO/Broadcom) MegaRAID controller under CentOS 7.
There are 31 disks in 36 hard disk bays – 5 are missing on purpose for the examples.
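For reference, both cases boil down to the storcli “add vd” sub-command. A minimal, hedged sketch of the syntax – the RAID level, enclosure IDs and slot ranges here are only placeholders, not the exact commands of the examples:

/opt/MegaRAID/storcli/storcli /c0 add vd type=raid10 drives=12:0-11 pdperarray=2
/opt/MegaRAID/storcli/storcli /c0 add vd type=raid10 drives=12:0-11,37:0-4 pdperarray=2

The “drives=” list takes enclosure:slot pairs and ranges, which is how disks from two different enclosures end up in one virtual drive.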

The initial states of the controller and the disks.

livecd ~ # /opt/MegaRAID/storcli/storcli /c0 show
Generating detailed summary of the adapter, it may take a while to complete.

CLI Version = 007.0510.0000.0000 May 4, 2018
Operating system = Linux 4.19.72-gentoo
Controller = 0
Status = Success
Description = None

Product Name = LSI 2108 MegaRAID
Serial Number = FW-ABQRCBEAARBWA
SAS Address =  5003048004015f00
PCI Address = 00:06:00:00
System Time = 07/20/2020 22:58:35
Mfg. Date = 00/00/00
Controller Time = 07/20/2020 22:58:36
FW Package Build = 12.15.0-0239
FW Version = 2.130.403-4660
BIOS Version = 3.30.02.2_4.16.08.00_0x06060A05
Driver Name = megaraid_sas
Driver Version = 07.706.03.00-rc1
Vendor Id = 0x1000
Device Id = 0x79
SubVendor Id = 0x15D9
SubDevice Id = 0x700
Host Interface = PCI-E
Device Interface = SAS-6G
Bus Number = 6
Device Number = 0
Function Number = 0
Physical Drives = 31

PD LIST :
=======

-----------------------------------------------------------------------------------
EID:Slt DID State DG     Size Intf Med SED PI SeSz Model                   Sp Type 
-----------------------------------------------------------------------------------
12:0      0 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HUA723020ALA640 D  -    
12:1      1 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    D  -    
12:2      2 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:3      3 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:4      4 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:5      5 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS722T2TALA604    D  -    
12:6      6 UGood -  1.817 TB SATA HDD N   N  512B TOSHIBA DT01ACA200      D  -    
12:7      7 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:8      8 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:9      9 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:10    10 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
12:11    11 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 D  -    
37:0     13 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:1     14 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:2     15 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:3     16 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HUA723020ALA640 U  -    
37:4     17 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:6     19 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:7     20 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS722T2TALA604    U  -    
37:8     21 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    U  -    
37:10    23 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:11    24 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:13    26 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:14    27 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    U  -    
37:16    29 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    U  -    
37:17    30 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HUA723020ALA640 U  -    
37:19    32 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:20    33 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:21    34 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS722T2TALA604    U  -    
37:22    35 UGood -  1.817 TB SATA HDD N   N  512B Hitachi HDS723020BLA642 U  -    
37:23    36 UGood -  1.817 TB SATA HDD N   N  512B HGST HUS724020ALA640    U  -    
-----------------------------------------------------------------------------------

EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info
SeSz-Sector Size|Sp-Spun|U-Up|D-Down/PowerSave|T-Transition|F-Foreign
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded

Keep on reading!

Upgrading Ubuntu 18 to Ubuntu 20 – software versions upgrade table – head to head

In the following article a comparison between two LTS versions of Ubuntu is presented – Ubuntu 18.04 LTS (Bionic) versus Ubuntu 20.04 LTS (Focal). The latest versions of Ubuntu 18.04 and Ubuntu 20.04 as of 17.06.2020 were used to generate the software versions below.

In the desktop world, upgrading to the new and latest version of a Linux distribution is almost mandatory, but in the server world upgrading is more complicated. The first step in upgrading a server is to check which software versions come with the new distribution version and then check whether the running custom (application) software supports those versions. For example, upgrading to a new distribution version that comes with PHP 7.4 while the current application supports only PHP 7.2 is not very wise, especially when the currently used version may still have years of support ahead.

A head to head version comparison is the main target of this article – a fast check of what versions the user can expect from the new (aka latest) release of the Linux distribution.
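If neither release is installed yet, one quick way to check the candidate version of a package in a given release is to query it from a throwaway container of that release – a sketch assuming Docker is available; openssl and python3 are just example package names:

docker run --rm ubuntu:20.04 bash -c 'apt-get update -qq && apt-cache policy openssl python3'
docker run --rm ubuntu:18.04 bash -c 'apt-get update -qq && apt-cache policy openssl python3'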

Software       | Ubuntu 20.04                        | Ubuntu 18.04
Linux kernel   | 5.4.0, 5.6.0                        | 4.15.0, 4.18.0, 5.0.0, 5.3.0, 5.4.0
libc           | 2.31                                | 2.27
OpenSSL        | 1.1.1f                              | 1.0.2n, 1.1.1
GNU GCC        | 7.5.0, 8.4.0, 9.3.0, 10-20200411    | 4.8.5, 5.5.0, 6.5.0, 7.5.0, 8.4.0
PHP            | 7.4                                 | 7.2
Python         | 2.7.17, 3.8.2                       | 2.7.15, 3.6.7
Perl           | 5.30.0                              | 5.26.1
Ruby           | 2.7                                 | 2.5.1
OpenJDK        | 8u252-b09, 11.0.7, 13.0.3, 14.0.1   | 8u252-b09, 11.0.7
Go lang        | 1.13.8, 1.14.2                      | 1.8, 1.9, 1.10
Rust           | 1.41.0                              | 1.41.0
llvm           | 6.0.1, 7.0.1, 8.0.1, 9.0.1, 10.0.0  | 3.7.1, 3.9.1, 4.0.1, 5.0.1, 6.0, 7, 8, 9, 10.0.0
nodejs         | 10.19.0                             | 8.10.0
Subversion     | 1.13                                | 1.9.7
Git            | 2.25.2                              | 2.17.1
Apache         | 2.4.41                              | 2.4.29
Nginx          | 1.17.10                             | 1.14.0
MySQL server   | 8.0.20                              | 5.7.30
MariaDB        | 10.3.22                             | 10.1.44
PostgreSQL     | 12.2                                | 10.12
SQLite         | 3.31.1                              | 3.22.0
Xorg X server  | 1.20.8                              | 1.19.6
Gnome Shell    | 3.36.2                              | 3.28.4

Configure Bond (802.3ad LACP) device in CentOS 8 – configuration files

Upgrading to a bond device is a common step when the server exhausts its current network port bandwidth.
The hardware setup of the bond example here is:

  • two 10G network cards – ens1f0 and ens1f1
  • bond name – bond0
  • bond mode – 802.3ad – Link Aggregation Control Protocol (LACP)

The systemd reconfiguration procedure consists of:

  • Stop the network service
    systemctl stop network
    
  • Set up the configuration files – the network device files for the two network interfaces and for the bonding interface (master and slave devices).
  • Start the network service
    systemctl start network
    

*Note: the 802.3ad bonding mode needs additional configuration on the switch to which the server is connected.

The example here uses the CentOS 8 network configuration files to make a permanent bonding configuration (i.e. persistent over reboots).
Check out the official bonding documentation for all modes and options – https://www.kernel.org/doc/Documentation/networking/bonding.txt.

CONF 1) Configure the network interfaces.

The interfaces should be left down at boot (ONBOOT=no) in their configuration files.
Interface 1 – /etc/sysconfig/network-scripts/ifcfg-ens1f0:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens1f0
UUID=3b399a23-570e-45ed-9369-4ff5b87efb2c
DEVICE=ens1f0
ONBOOT=no

Interface 2 – /etc/sysconfig/network-scripts/ifcfg-ens1f1:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens1f1
UUID=ecdc5d5b-9739-4424-9d67-362411974281
DEVICE=ens1f1
ONBOOT=no

CONF 2) Configure bonding master device – create a bonding group bond0

This device should be started up at boot.
Bonding device 1 – with name bond0 – /etc/sysconfig/network-scripts/ifcfg-Bond_connection_1:

BONDING_OPTS="downdelay=200 miimon=100 mode=802.3ad updelay=200"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.10.10.10
PREFIX=24
GATEWAY=10.10.10.1
DNS1=10.10.10.2
DNS2=10.10.10.3
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_PRIVACY=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME="Bond connection 1"
UUID=f0a35f9a-20e4-484e-850c-689128642555
DEVICE=bond0
ONBOOT=yes

BONDING_OPTS holds the options specific to the bonding group bond0 – the bonding mode is set here, too.
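Once bond0 is up, the options actually applied by the kernel can be inspected through sysfs – a quick sanity check, not required for the configuration itself:

cat /sys/class/net/bond0/bonding/mode
cat /sys/class/net/bond0/bonding/miimon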

CONF 3) Configure bonding slave devices – the two network cards

Add the two network cards to the bonding group bond0. These devices should be started up at boot.
Interface 1 – /etc/sysconfig/network-scripts/ifcfg-bond0_slave_1:

HWADDR=90:E2:BA:8A:13:8C
TYPE=Ethernet
NAME="bond0 slave 1"
UUID=c49e0ced-6411-41fa-9a3b-a01a430664a7
DEVICE=ens1f0
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Interface 2 – /etc/sysconfig/network-scripts/ifcfg-bond0_slave_2:

HWADDR=90:E2:BA:8A:13:8D
TYPE=Ethernet
NAME="bond0 slave 2"
UUID=90de1cad-1d9f-48cb-8e5a-7d8bfdde91d2
DEVICE=ens1f1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
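
After starting the network again, the bond and the LACP negotiation state can be verified. The commands below should list bond0 with both slaves attached (nmcli is available on a default CentOS 8 installation):

nmcli connection show
cat /proc/net/bonding/bond0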

Set up (802.3ad LACP) bonding when installing CentOS 8

This article shows how the user could install CentOS 8 (the steps are the same for CentOS 7) with a more complex network setup such as a bonding device in 802.3ad mode (LACP – Link Aggregation Control Protocol).
The whole installation procedure is not included here, but there are a couple of other articles on the subject “Install CentOS 7 or CentOS 8”:

Configuration files similar to the ones in Configure Bond (802.3ad LACP) device in CentOS 8 – configuration files will be generated.

SCREENSHOT 1) Click on “Network and Host Name” to configure the machine networking.

Installation Summary – Network and Host Name

Keep on reading!

Chromium browser in Ubuntu 20.04 LTS without snap to use in docker container

The Ubuntu team has its own vision for the snap (https://snapcraft.io/) service and that's why they have moved the really big and difficult to maintain Chromium browser package into a snap package. Unfortunately, snap has many issues with docker containers and, in short, it is quite difficult to run snap in a docker container. The user may simply not want to mess with snap packages (despite this being the future according to the Ubuntu team) or, like most developers, may need a browser for tests executed in a container.
Whether you are a developer or an ordinary user, this article is for you if you want the Chromium browser installed under Ubuntu 20.04 LTS without the snap service!
There are multiple options that could end up with a Chromium browser installed on the system without the snap service:

  1. Using a Debian package and a Debian repository. The problem here is that using an Ubuntu and a Debian repository simultaneously on one machine is not a good idea, despite the hack that gives the Debian packages a low priority – https://askubuntu.com/questions/1204571/chromium-without-snap/1206153#1206153
  2. Using Google Chrome – https://www.google.com/chrome/?platform=linux. It is just a single Debian package, which provides a Chromium-like browser, and the dependencies requiring the Chromium browser package are fulfilled.
  3. Using the Chromium team dev or beta PPA (https://launchpad.net/~chromium-team) with the nearest available release if Ubuntu packages for Focal (Ubuntu 20.04 LTS) are still missing.
  4. more options available?

This article will show how to use the Ubuntu 18.04 (Bionic) Chromium browser package from the Chromium team beta PPA under Ubuntu 20.04 LTS (Focal). A package for another Ubuntu release from the very same Ubuntu Chromium team repository may be used, too.

All dependencies will be downloaded from Ubuntu 20.04 and just several chromium-* packages will be downloaded from the Bionic repository of the Chromium team PPA. The chances of breaking something are really small compared to option 1 above, which uses the Debian packages and repositories. Hopefully, soon there will be Focal (Ubuntu 20.04 LTS) packages in the Ubuntu Chromium team PPA!

Dockerfile

An example of a Dockerfile fragment installing Chromium (and python3-selenium for automating web browser interactions):

RUN apt-key adv --fetch-keys "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111" \
&& echo 'deb http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic main ' >> /etc/apt/sources.list.d/chromium-team-beta.list \
&& apt update
RUN export DEBIAN_FRONTEND=noninteractive \
&& export DEBCONF_NONINTERACTIVE_SEEN=true \
&& apt-get -y install chromium-browser
RUN apt-get -y install python3-selenium

The first command adds the repository key and the repository to the Ubuntu source lists. Note that we are adding “bionic main”, not “focal main”.
Of all the dependencies of the Bionic chromium-browser, only three packages are pulled from the Bionic repository and all the others come from Ubuntu 20.04 (Focal):

.....
Get:1 http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic/main amd64 chromium-codecs-ffmpeg-extra amd64 84.0.4147.38-0ubuntu0.18.04.1 [1174 kB]
.....
Get:5 http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic/main amd64 chromium-browser amd64 84.0.4147.38-0ubuntu0.18.04.1 [67.8 MB]
.....
Get:187 http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic/main amd64 chromium-browser-l10n all 84.0.4147.38-0ubuntu0.18.04.1 [3429 kB]
.....

Here is the whole Dockerfile sample file:

#
#   Docker file for the image "chromium browser without snap"
#
FROM ubuntu:20.04
MAINTAINER myuser@example.com

#chromium browser
#original PPA repository, use if our local fails
RUN echo "tzdata tzdata/Areas select Etc" | debconf-set-selections && echo "tzdata tzdata/Zones/Etc select UTC" | debconf-set-selections
RUN export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NONINTERACTIVE_SEEN=true
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install gnupg2 apt-utils wget
#RUN wget -O /root/chromium-team-beta.pub "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111" && apt-key add /root/chromium-team-beta.pub
RUN apt-key adv --fetch-keys "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xea6e302dc78cc4b087cfc3570ebea9b02842f111" && echo 'deb http://ppa.launchpad.net/chromium-team/beta/ubuntu bionic main ' >> /etc/apt/sources.list.d/chromium-team-beta.list && apt update
RUN export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NONINTERACTIVE_SEEN=true && apt-get -y install chromium-browser
RUN apt-get -y install python3-selenium
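
Building and testing the image could look like this (the image tag is arbitrary):

docker build -t ubuntu-chromium-nosnap .
docker run --rm ubuntu-chromium-nosnap chromium-browser --version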

Desktop install

The desktop installation is almost the same as the Dockerfile above. Just execute the following lines:
Keep on reading!

Copy files with read errors successfully – skipping only errors (i.e. bad sectors)

Sometimes disks develop errors or an SSD has a bad NAND cell. Saving the data of the whole hard disk may not be needed when only a specific file or two are important, and those cannot be copied by cp or rsync because of an “Unrecovered read error”.
Furthermore, an SSD reallocates the bad cells only when there are writes to them, which may not happen for years, while reads may happen every day. Reading from a sector with bad NAND cells results in slow IO (multiple read commands are executed before giving up). Copying the file to a new place while missing only 512 bytes may not harm the data, but it is difficult to do with the generic copy tools.
This article shows how to save single files from a mounted ext4 file system with bad sectors using the ddrescue tool – https://www.gnu.org/software/ddrescue/. In fact, ddrescue can rescue single files or whole devices.

STEP 1) Install ddrescue.

Installing ddrescue is pretty easy. The tool is included in almost all Linux distributions and it doesn't have many dependencies. Note that there is another, different tool called dd_rescue – just follow the link above for the tool used here.
CentOS 7/8 or Fedora:

yum install -y ddrescue

Ubuntu (any release from the last 10 years):

apt install -y gddrescue

Gentoo:

emerge -v ddrescue

STEP 2) Rescuing a single file with read errors because of bad sectors in a mounted file system.

[root@srv Snapshots]# ddrescue -v \{9f02ae0a-6dae-4729-b6a6-ec3f0550f294\}.vdi test2.vdi
GNU ddrescue 1.25
About to copy 15724 MBytes from '{9f02ae0a-6dae-4729-b6a6-ec3f0550f294}.vdi' to 'test2.vdi'
    Starting positions: infile = 0 B,  outfile = 0 B
    Copy block size: 128 sectors       Initial skip size: 384 sectors
Sector size: 512 Bytes

Press Ctrl-C to interrupt
     ipos:   13495 MB, non-trimmed:        0 B,  current rate:       0 B/s
     opos:   13495 MB, non-scraped:        0 B,  average rate:    162 MB/s
non-tried:        0 B,  bad-sector:     8192 B,    error rate:    4608 B/s
  rescued:   15724 MB,   bad areas:        2,        run time:      1m 36s
pct rescued:   99.99%, read errors:       18,  remaining time:          0s
                              time since last successful read:          0s
Finished                                      
[root@srv Snapshots]# ls -al
total 52602944
drwx------. 2 root root        4096 Jun  2 02:22 .
drwxr-xr-x. 4 root root        4096 Jun  1 14:16 ..
-rw-------. 1 root root   459981735 Nov  8  2018 2018-11-08T15-19-17-776317000Z.sav
-rw-------. 1 root root   566704069 Jun  1 14:16 2020-06-01T11-16-05-735318000Z.sav
-rw-------. 1 root root  8329887744 Jun  1 12:53 {3d30ebea-2e2f-4e33-8088-d3d66f315e2c}.vdi
-rw-------. 1 root root 15724445696 Nov  8  2018 {9f02ae0a-6dae-4729-b6a6-ec3f0550f294}.vdi
-rw-------. 1 root root  4012900352 Jun  1 14:16 {f7e72510-7dce-48fd-b62c-630664ad984f}.vdi
-rw-r--r--. 1 root root 15724445696 Jun  2 02:24 test2.vdi
-rw-------. 1 root root  9051041792 Jun  2 02:19 test.vdi
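
If a map (log) file is used from the start, ddrescue can later resume and retry only the failing sectors. A hedged example with three retry passes – the map file name is arbitrary:

ddrescue -v -r3 \{9f02ae0a-6dae-4729-b6a6-ec3f0550f294\}.vdi test2.vdi rescue.map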

Here is an animated gif of the ddrescue procedure:

ddrescue – copy files with bad sectors

Keep on reading!

Software and technical overview of Ubuntu 20.04 LTS server edition

Ubuntu 20.04 LTS server edition offers the following software and versions:

Software

  • linux kernel – 5.4.0 – 5.4.0-29-generic
  • System
    • linux-firmware – 1.187
    • libc – 2.31 – 2.31-0ubuntu9
    • GNU GCC – multiple versions available – 7.5.0, 8.4.0, 9.3.0 and 10-20200411. The exact versions – 7.5.0-6ubuntu2, 8.4.0-3ubuntu2, 9.3.0-10ubuntu2 and 10-20200411-0ubuntu1
    • OpenSSL – 1.1.1f – 1.1.1f-1ubuntu2
    • coreutils – 8.30 – 8.30-3ubuntu2
    • apt – 2.0.2ubuntu0.1
    • rsyslog – 8.2001.0 – 8.2001.0-1ubuntu1
  • Servers
    • Apache – 2.4.41 – 2.4.41-4ubuntu3
    • Nginx – 1.17.10 – 1.17.10-0ubuntu1
    • MySQL server – 8.0.20 – 8.0.20-0ubuntu0.20.04.1
    • MariaDB server – 10.3.22 – 10.3.22-1ubuntu1
    • PostgreSQL – 12.2 – 12.2-4
  • Programming – in Ubuntu 20.04 LTS the user may install:

    • PHP – 7.4 – 7.4.3-4ubuntu1.1
    • python – 3.8.2 (3.8.2-0ubuntu2) and also includes 2.7.17 (2.7.17-2ubuntu4)
    • perl – 5.30.0 and also includes perl 6 6.d-2
    • ruby – 2.7 – 2.7+1
    • OpenJDK – includes multiple versions – 8, 11, 13 and 14. The exact versions are 8u252-b09-1ubuntu1, 11.0.7+10-3ubuntu1, 13.0.3+3-1ubuntu2 and 14.0.1+7-1ubuntu1
    • Go lang – multiple versions – 1.13.8 and 1.14.2. The exact versions – 1.13.8-1ubuntu1 and 1.14.2-1ubuntu1
    • Rust – 1.41.0 – 1.41.0+dfsg1+llvm-0ubuntu2
    • Subversion – 1.13.0 – 1.13.0-3
    • Git – 2.25.1 – 2.25.1-1ubuntu3
    • llvm – multiple versions – 6, 7, 8, 9 and 10. The exact versions – 6.0.1-14, 7.0.1-12, 8.0.1-9, 9.0.1-12, 10.0-50~exp1
  • Graphical User Interface
    • Xorg X server – 1.20.8 – 1.20.8-2ubuntu2
    • GNOME (the GUI) – 3.36.x – Gnome Shell – 3.36.1

Note: Not all of the above software comes installed by default. The versions above are valid for the initial release, so in fact these are the minimal versions you get with Ubuntu 20.04 LTS – installing and updating it after the initial release date may bring newer versions of some of the above packages. The default installation has 582 packages occupying 11G of space.
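These numbers can be roughly reproduced on a freshly installed system by counting the installed packages and checking the used space on the root file system:

dpkg-query -W -f '${binary:Package}\n' | wc -l
df -h /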

During the installation wizard you may want to install some of the offered snap software environments. Of course, this software is available after the installation setup, too.

The test server is equipped with an AMD Threadripper 1950X, which is a 16-core CPU.
Check out Minimal installation of Ubuntu server 20.04 LTS, too.

Keep on reading!

Minimal installation of Ubuntu server 20.04 LTS

This tutorial will show you the simple steps of installing a modern Linux distribution – Ubuntu server 20.04 LTS edition. Most of the default options are followed during the setup configuration for simplicity.

Here are some basic data from the default installation setup settings:

  1. Installed packages – ~582, occupying 11G of space.
  2. 3 partitions when using the automatic partition layout – boot EFI, swap and root.
  3. ext4 is used for the root partition.

We used the following ISO for the installation process – Ubuntu 20.04 LTS (Focal Fossa):

http://releases.ubuntu.com/focal/ubuntu-20.04-live-server-amd64.iso

It is a LIVE image, so you can try it before installing it. The easiest way is just to download the image and burn it to a DVD disk (or write it to a USB stick – see the sketch below) and then follow the installation below:
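A minimal sketch of writing the ISO to a USB stick with dd – /dev/sdX is a placeholder for the USB device and must be double-checked, because dd will overwrite it:

dd if=ubuntu-20.04-live-server-amd64.iso of=/dev/sdX bs=4M status=progress oflag=sync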

SCREENSHOT 1) Boot from the DVD or USB – whatever you made after downloading the ISO file from the official Ubuntu source.

In the image here, the DVD is used to boot the installation in UEFI mode.

Boot from the UEFI DVD

Keep on reading!

collectd nginx plugin: curl_easy_perform failed because of selinux

Enabling the Nginx plugin for collectd under CentOS (or any other system using SELinux) might be confusing for a newbie. Most sources on the Internet just install collectd-nginx:

yum install -y collectd-nginx

and configure it in nginx.conf and collectd.conf. Still, the statistics might not work as expected – collectd may not be able to gather statistics from Nginx.
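For reference, a typical (hedged) pair of configuration snippets – an Nginx stub_status location and the collectd nginx plugin pointing at it; the URL, the location name and the file paths are examples:

# /etc/nginx/nginx.conf – inside the server { } block
location /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}

# /etc/collectd.d/nginx.conf (or collectd.conf)
LoadPlugin nginx
<Plugin nginx>
  URL "http://127.0.0.1/nginx_status"
</Plugin>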

SELinux may prevent the collectd daemon (the plugin) from connecting to Nginx and gathering statistics from the Nginx status page.

Checking the collectd log reveals the problem:
Keep on reading!