gentoo network interface with hyphen in the name

Using OpenRC (i.e. the init system) with network interface names containing special symbols such as a hyphen may lead to errors like “command not found” and “No such file or directory”.

The hyphen in the network interface name must be replaced with an underscore in the configuration file, while the init script file name keeps the hyphen. The reason is that /etc/conf.d/net is sourced as a shell script, and shell variable names may not contain hyphens.
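A quick shell demonstration of why the hyphen breaks the assignment (a minimal sketch using the example address from below):

root@srv ~ # config_mv-eth0="192.168.0.202/24"
bash: config_mv-eth0=192.168.0.202/24: command not found
root@srv ~ # config_mv_eth0="192.168.0.202/24"

The first line is not a valid shell variable name, so the shell tries to execute it as a command; the underscore version is a normal assignment.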

Proper configuration for a network interface with a hyphen in the name – mv-eth0

  • In the configuration file /etc/conf.d/net:
    config_mv_eth0="
    192.168.0.202/24
    "
    routes_mv_eth0="
    default via 192.168.0.1
    "
    
  • The network interface init file name keeps the hyphen:
    root@srv /etc/init.d # ln -s net.lo net.mv-eth0
    

And starting the network is successful:

root@srv ~ # /etc/init.d/net.mv-eth0 start
 * Caching service dependencies ...                                                                                                                                                     [ ok ]
 * Bringing up interface mv-eth0
 *   Caching network module dependencies
 *   192.168.0.202/24 ...                                                                                                                                                               [ ok ]
 *   Adding routes
 *     default via 192.168.0.1 ...                                                                                                                                                      [ ok ]
 *   Waiting for tentative IPv6 addresses to complete DAD (5 seconds) ..

Virtualization software may include untypical characters in network interface names. For example, systemd-nspawn will name the guest’s macvlan network mv-{host_network_name} and the ipvlan one iv-{host_network_name}.

Wrong configuration with a hyphen in the network interface name.

The configuration file /etc/conf.d/net:

config_mv-eth0="
192.168.0.202/24
"
routes_mv-eth0="
default via 192.168.0.1
"

Starting the network with such configuration will result in multiple errors:

root@srv ~ # /etc/init.d/net.mv-eth0 start
 * Caching service dependencies ...
/etc/init.d/../conf.d/net: line 3: config_mv-eth0=
192.168.0.202/24
: No such file or directory
/etc/init.d/../conf.d/net: line 6: $'routes_mv-eth0=\ndefault via 192.168.0.1\n': command not found
/etc/init.d/../conf.d/net: line 3: config_mv-eth0=
192.168.0.202/24
: No such file or directory
/etc/init.d/../conf.d/net: line 6: $'routes_mv-eth0=\ndefault via 192.168.0.1\n': command not found                                                                                  [ ok ]
/etc/init.d/../conf.d/net: line 3: config_mv-eth0=
192.168.0.202/24
: No such file or directory
/etc/init.d/../conf.d/net: line 6: $'routes_mv-eth0=\ndefault via 192.168.0.1\n': command not found
 * net.mv-eth0: error loading /etc/init.d/../conf.d/net
 * ERROR: net.mv-eth0 failed to start

xdg and autostart in Linux X server regardless the desktop environment

There is a tool, xdg, which manages application integration with the different GUI desktops in the Linux world. One of the features it offers is to autostart an application when the X window system starts, so it is perfectly normal to have a bunch of running programs that cannot be found in the window manager settings like KDE System Settings -> Autostart, GNOME Tweak tool’s Autostart, and so on.

xdg offers autostart of Linux applications (mainly desktop ones) when the GUI windowing system starts

There are two main paths to look in for autostart entries (see the quick check after the list):

  1. /etc/xdg/autostart – called system-wide; most applications will place files here when they are installed.
  2. [user’s home]/.config/autostart – the user’s applications to start when the user logs in.
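Both locations hold plain .desktop files, so listing them shows everything scheduled to autostart (a quick check with standard shell commands):

ls -1 /etc/xdg/autostart/*.desktop
ls -1 ~/.config/autostart/*.desktop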

With the xdg autostart feature in mind, the user can explain to himself why windowing systems like KDE or GNOME start tens of applications (not exactly related to the base GUI windowing system).

There is a security problem here: sometimes installing a package will place an autostart file there, because the maintainer decided it is important, but the package might be just a dependency – and the next time the user logs in, an unwanted program might execute and open ports!

For example, Rygel is an open-source UPnP/DLNA MediaServer and it might be installed as a dependency, but it places an autostart file, which starts a UPnP/DLNA server and exports /home/[user’s directory]/Videos, /home/[user’s directory]/Pictures and more to the local network. Another example is the GNOME indexing system tracker and its tracker-store, which may easily eat the RAM, disk, CPU, and battery on a system without GNOME but with a different GUI!

Here is what a typical Ubuntu 18.04 system might autostart

Keep on reading!

starting Hashicorp vault in server mode under docker container

Running Hashicorp Vault in development mode is really easy, but starting the vault in server mode under a docker container involves a few extra steps, described in this article.

There are several simple steps, which are hard to find in one place, to run a Hashicorp vault in server mode (under docker):

  1. Prepare the directories to map in the docker. The data in the directories will be safe and won’t be deleted if the container is deleted.
  2. Prepare an initial base configuration to start the server. Without it, the server won’t start up, even though the configuration is really simple.
  3. Start the Hashicorp vault process in a docker container.
  4. Initialize the vault. During this step, the server will generate the database backend storage (file, in-memory, or cloud backends), 5 unseal keys, and an administrative root token. To manage the vault, an administrative user is required.
  5. Unseal the vault. Decrypt the database backend to make the service usable – at least three commands with three different unseal keys generated during the initialization step.
  6. Log in with the administrative user and enable a vault engine to store values (or generate tokens, passwords, and so on). The example here enables the secrets engine with a key:value backend (see the sketch after this list). Check out the secrets engines – https://www.vaultproject.io/docs/secrets
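A minimal sketch of steps 4–6 with the standard vault CLI inside the container (the container name srv-vault comes from STEP 3 below; VAULT_ADDR points at the plain-HTTP listener from the configuration, vault operator unseal is run three times with different unseal keys, and vault login uses the root token printed by init):

root@srv ~ # docker exec -it srv-vault sh
/ # export VAULT_ADDR='http://127.0.0.1:8200'
/ # vault operator init
/ # vault operator unseal
/ # vault login
/ # vault secrets enable -version=2 kv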

STEP 1) Summary of the mapped directories in the docker

Three directories are preserved:

  • /vault/config – contains configuration files in HCL or JSON format.
  • /vault/data – the place where the encrypted database files will be kept, but only if a file-based storage engine like “file” or “raft” is used. More information here – https://www.vaultproject.io/docs/configuration/storage
  • /vault/logs – for writing persistent audit logs. This feature should be enabled explicitly in the configuration.

The base directory used is /srv/vault/. The three directories are created as follows and will be mapped into the docker container:

mkdir -p /srv/vault/config /srv/vault/data /srv/vault/logs

The server’s directory /srv/vault/config will be mapped to the docker directory /vault/config, and likewise for the other two.

STEP 2) Initial base configuration

The initial configuration file is placed in /vault/config/config.hcl and uses the HCL format – https://github.com/hashicorp/hcl. The initial configuration is minimal:

storage "raft" {
  path    = "/vault/data"
  node_id = "node1"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = 1
}

disable_mlock = true

api_addr = "http://127.0.0.1:8200"
cluster_addr = "https://127.0.0.1:8201"
ui = true

Place the file in /srv/vault/config/config.hcl

STEP 3) Start the Hashicorp vault server in docker

Start the server, mapping the three directories:

root@srv ~ # docker run --cap-add=IPC_LOCK -v /srv/vault/config:/vault/config -v /srv/vault/data:/vault/data -v /srv/vault/logs:/vault/logs --name=srv-vault vault server
==> Vault server configuration:

             Api Address: http://127.0.0.1:8200
                     Cgo: disabled
         Cluster Address: https://127.0.0.1:8201
              Go Version: go1.14.7
              Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: raft (HA available)
                 Version: Vault v1.5.3
             Version Sha: 9fcd81405feb320390b9d71e15a691c3bc1daeef

==> Vault server started! Log data will stream in below:

Keep on reading!

Add cachecade SSD to an existing virtual drive in AVAGO MegaRAID with storcli64

Increase the IO performance of big storage consisting of hard drives by using the CacheCade feature of MegaRAID hardware controllers. Here are the commands to add an SSD as cache for the hard drive storage, with the whole output:

storcli64 /c0 add VD cachecade r0 drives=252:0 WB
storcli64 /c0/v0 set ssdcaching=on

Summary and explanation

  • The controller used is AVAGO MegaRAID SAS9361-4i
  • Three 12TB hard drives and one 1TB SSD.
  • RAID5 of the 3 hard drives. The name of the virtual drive is v0.
  • The CacheCade is in RAID0. The name of the virtual drive is v1.
  • The CacheCade is in WriteBack mode. The WriteBack mode is dangerous when the CacheCade device is RAID0, because there is no redundancy, but it’s OK if the purpose of the machine is just a proxy cache. WriteBack mode gains much more performance from the cache device over the slow storage, but should be carefully planned when using a RAID0 (or single-device) cache.
  • Using storcli – the command-line tool to manage an (LSI) MegaRAID controller.
  • The first command adds the CacheCade RAID0 device. It will create a RAID0 virtual drive named v1.
  • The second command explicitly ensures caching is enabled for the virtual drive that should use it. Just adding a CacheCade device may not attach the cache to the existing virtual drives! A verification sketch follows this list.
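To verify the result, the same storcli tool can list the virtual drives and their cache settings (a hedged sketch – the exact output differs per setup):

storcli64 /c0/vall show
storcli64 /c0/v0 show all

The first command should list v1 as the CacheCade virtual drive, and the second should show the cache state of the data virtual drive v0.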

Output of a real-world example

Keep on reading!

multiple random crashes of firefox in a docker container under Linux

Multiple random crashes may hit Firefox under Linux when it is started in a docker container, with errors of the kind:

[myuser@92ee57f7f63a ~]$ Exiting due to channel error.
Exiting due to channel error.
[myuser@92ee57f7f63a ~]$ firefox 
ExceptionHandler::GenerateDump cloned child 4864
ExceptionHandler::SendContinueSignalToChild sent continue signal to child
ExceptionHandler::WaitForContinueSignal waiting for continue signal...
Exiting due to channel error.
Exiting due to channel error.
Exiting due to channel error.
Exiting due to channel error.
Bus error (core dumped)
[myuser@92ee57f7f63a ~]$ 
###!!! [Parent][MessageChannel] Error: (msgtype=0x5A001C,name=PHttpChannel::Msg_DeleteSelf) Channel error: cannot send/recv
###!!! [Parent][MessageChannel] Error: (msgtype=0x5A001C,name=PHttpChannel::Msg_DeleteSelf) Channel error: cannot send/recv
###!!! [Parent][MessageChannel] Error: (msgtype=0x5A001C,name=PHttpChannel::Msg_DeleteSelf) Channel error: cannot send/recv
###!!! [Parent][MessageChannel] Error: (msgtype=0x5A001C,name=PHttpChannel::Msg_DeleteSelf) Channel error: cannot send/recv
###!!! [Parent][MessageChannel] Error: (msgtype=0x5A001C,name=PHttpChannel::Msg_DeleteSelf) Channel error: cannot send/recv

The chances are really good that just increasing the memory of the /dev/shm device in the docker container will stop these multiple random crashes! The default docker /dev/shm size is really low (only 64Mbytes on the tested machine) and the recommended value is at least 2Gbytes. The more open tabs, the more shared memory is needed.

Such strange and random crashes might cause test cases to fail in an automation testing suite using the Selenium driver to start a headless Firefox instance.

To increase the shared memory size in the docker container, the container should be started with the “--shm-size” option. For example:

docker run -it --shm-size=2048m my-gui-fedora:31
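To confirm the new size took effect, check the mount inside the container (df is a standard tool; the image name is the example from above and is assumed to run the given command directly; the output should look along these lines):

docker run -it --shm-size=2048m my-gui-fedora:31 df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
shm             2.0G     0  2.0G   0% /dev/shm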

If the --shm-size option is missing (probably an old docker version), mapping an already mounted tmpfs directory from the host machine into the container is also a solution:

mkdir /dev/shmdocker
mount -t tmpfs -o size=2048m tmpfs /dev/shmdocker
chmod 1777 /dev/shmdocker
docker run -d -v /dev/shmdocker:/dev/shm my-gui-fedora:31

Remounting in the docker container is also possible:

srv myuser # docker exec -it my-gui bash
[myuser@92ee57f7f63a ~]$ sudo mount -o remount,size=2048m /dev/shm

Note the container should be run with the --privileged option to be able to remount /dev/shm.

The docker build command also has --shm-size, but it is used only for the building process; it does not change the default shared memory size of the containers based on the image afterward.

Using xtrabackup to make fast MySQL backups – backup and restore

Percona provides a really interesting tool for MySQL backups! It works on a live running MySQL server by copying the MySQL binary files from the data directory, and because the tool knows how the engines work (InnoDB, MyISAM, and so on), it can produce point-in-time consistent MySQL data files. Of course, using this tool on a database with only InnoDB tables is the best case, because no write lock will be used.

There are two main tasks when making a backup, plus one that is not mandatory:

  1. Execute xtrabackup with the --backup option to copy the MySQL data files and additional information for the tool.
  2. Execute xtrabackup with the --prepare option to make the MySQL (InnoDB) files ready for use by a MySQL server. The files are made consistent, i.e. the whole backup is consistent despite the different files having been copied at different times.
  3. Execute xtrabackup with the --prepare option again to further prepare the MySQL (InnoDB) files for use by a MySQL server – additional preparation such as the InnoDB log files and more. This step is not mandatory and may be skipped, because the MySQL server that uses the data files will create the InnoDB log files itself.

The directory with the copied MySQL files contains not only the MySQL files but also additional information, plus a backup of my.cnf (the current MySQL configuration). The backup files may be prepared (and restored) on a different server than the original one, on which the backup was made.

xtrabackup may compress on-the-fly while copying the binary files with the --compress option, but then the steps to use (restore) the backup on a new server are different – there is an additional step to decompress before the prepare. So the above steps become (see the command sketch after the list):

  1. xtrabackup --backup --compress – copy and compress on-the-fly.
  2. xtrabackup --decompress – decompress the backup files.
  3. xtrabackup --prepare – make point-in-time consistent MySQL data files, which are ready for a MySQL server.
  4. xtrabackup --prepare – additional preparation to make the start up of the MySQL server with those files faster.
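Put together as commands, the compressed workflow looks like this (a sketch – the paths are the same examples used in STEP 2 below, and --decompress requires the qpress utility installed in STEP 1):

xtrabackup --backup --compress --datadir=/var/lib/mysql/ --target-dir=/mnt/backups/sql-xtrabackup/
xtrabackup --decompress --target-dir=/mnt/backups/sql-xtrabackup/
xtrabackup --prepare --target-dir=/mnt/backups/sql-xtrabackup/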

And here is a real-world example:

STEP 1) Install the xtrabackup utility

yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
yum install -y percona-xtrabackup-24 qpress

The percona-xtrabackup-24 package provides the Percona xtrabackup tool for backups of MySQL 5.1, 5.5, 5.6 and 5.7 servers, as well as Percona Server for MySQL with XtraDB. Install percona-xtrabackup-80 for use with MySQL 8 and later.
The whole output is included in the Bonus 3 section below.

STEP 2) Make backup

Make the backup. The datadir of the MySQL server is needed, plus a new directory for the backup files. The MySQL server keeps running and serving requests.

[root@srv ~]# xtrabackup --backup --slave-info --datadir=/var/lib/mysql/ --target-dir=/mnt/backups/sql-xtrabackup/
xtrabackup: recognized server arguments: --datadir=/var/lib/mysql --log_bin --server-id=1 --open_files_limit=5000 --innodb_buffer_pool_size=256M --innodb_log_buffer_size=32M --innodb_log_files_in_group=2 --innodb_flush_log_at_trx_commit=0 --innodb_file_per_table=1 --innodb_flush_method=O_DIRECT --datadir=/var/lib/mysql/ 
xtrabackup: recognized client arguments: --password=* --backup=1 --slave-info=1 --target-dir=/mnt/backups/sql-xtrabackup/ 
200919 14:35:11  version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' (using password: YES).
200919 14:35:11  version_check Connected to MySQL server
200919 14:35:11  version_check Executing a version check against the server...
200919 14:35:11  version_check Done.
200919 14:35:11 Connecting to MySQL server host: localhost, user: not set, password: set, port: not set, socket: not set
Using server version 5.7.31-log
xtrabackup version 2.4.20 based on MySQL server 5.7.26 Linux (x86_64) (revision id: c8b4056)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql/
xtrabackup: open files limit requested 5000, set to 5000
xtrabackup: using the following InnoDB configuration:
xtrabackup:   innodb_data_home_dir = .
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 50331648
xtrabackup: using O_DIRECT
InnoDB: Number of pools: 1
200919 14:35:11 >> log scanned up to (1135932912)
xtrabackup: Generating a list of tablespaces
InnoDB: Allocated tablespace ID 34 for mywordpress/wp_users, old maximum was 0
200919 14:35:12 [01] Copying ./ibdata1 to /mnt/backups/sql-xtrabackup/ibdata1
200919 14:35:12 >> log scanned up to (1135932912)
200919 14:35:13 >> log scanned up to (1135932912)
200919 14:35:14 [01]        ...done
200919 14:35:14 >> log scanned up to (1135932912)
200919 14:35:15 >> log scanned up to (1135932912)
200919 14:35:16 [01] Copying ./mywordpress/wp_users.ibd to /mnt/backups/sql-xtrabackup/mywordpress/wp_users.ibd
200919 14:35:16 [01]        ...done
200919 14:35:16 [01] Copying ./mywordpress/wp_gglcptch_whitelist.ibd to /mnt/backups/sql-xtrabackup/mywordpress/wp_gglcptch_whitelist.ibd
200919 14:35:16 [01]        ...done
200919 14:35:16 [01] Copying ./mywordpress/wp_usermeta.ibd to /mnt/backups/sql-xtrabackup/mywordpress/wp_usermeta.ibd
200919 14:35:16 [01]        ...done
200919 14:35:16 >> log scanned up to (1135932912)
200919 14:35:17 [01] Copying ./mywordpress/wp_postmeta.ibd to /mnt/backups/sql-xtrabackup/mywordpress/wp_postmeta.ibd
200919 14:35:17 [01]        ...done
200919 14:35:17 [01] Copying ./mywordpress/wp_posts.ibd to /mnt/backups/sql-xtrabackup/mywordpress/wp_posts.ibd
200919 14:35:17 >> log scanned up to (1135932912)
200919 14:35:18 >> log scanned up to (1135932912)
200919 14:35:19 [01]        ...done
200919 14:35:19 >> log scanned up to (1135932912)
.....
.....
200919 14:36:30 [01] Copying ./mysql/columns_priv.MYD to /mnt/backups/sql-xtrabackup/mysql/columns_priv.MYD
200919 14:36:30 [01]        ...done
200919 14:36:30 >> log scanned up to (1135933567)
200919 14:36:31 [01] Copying ./mysql/proxies_priv.frm to /mnt/backups/sql-xtrabackup/mysql/proxies_priv.frm
200919 14:36:31 [01]        ...done
200919 14:36:31 [01] Copying ./mysql/help_category.frm to /mnt/backups/sql-xtrabackup/mysql/help_category.frm
200919 14:36:31 [01]        ...done
200919 14:36:31 [01] Copying ./mysql/proc.frm to /mnt/backups/sql-xtrabackup/mysql/proc.frm
200919 14:36:31 [01]        ...done
200919 14:36:31 Finished backing up non-InnoDB tables and files
200919 14:36:31 [00] Writing /mnt/backups/sql-xtrabackup/xtrabackup_slave_info
200919 14:36:31 [00]        ...done
200919 14:36:31 [00] Writing /mnt/backups/sql-xtrabackup/xtrabackup_binlog_info
200919 14:36:31 [00]        ...done
200919 14:36:31 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS...
xtrabackup: The latest check point (for incremental): '1135933558'
xtrabackup: Stopping log copying thread.
.200919 14:36:31 >> log scanned up to (1135933567)

200919 14:36:32 Executing UNLOCK TABLES
200919 14:36:32 All tables unlocked
200919 14:36:32 [00] Copying ib_buffer_pool to /mnt/backups/sql-xtrabackup/ib_buffer_pool
200919 14:36:32 [00]        ...done
200919 14:36:32 Backup created in directory '/mnt/backups/sql-xtrabackup/'
MySQL binlog position: filename 'srv-bin.000001', position '1004'
200919 14:36:32 [00] Writing /mnt/backups/sql-xtrabackup/backup-my.cnf
200919 14:36:32 [00]        ...done
200919 14:36:32 [00] Writing /mnt/backups/sql-xtrabackup/xtrabackup_info
200919 14:36:32 [00]        ...done
xtrabackup: Transaction log of lsn (1135932903) to (1135933567) was copied.
200919 14:36:33 completed OK!

At the end, the output must contain: “completed OK!”.
The whole output of the command is included in the Bonus 1 section below. An example with compress option enabled is in Bonus 2 section below.
Here is what the backup directory contains:
Keep on reading!

gitlab in podman cannot create unix sockets in glusterfs because of SELinux

Installing gitlab-ee (or gitlab-ce) under CentOS 7 with SELinux enabled (i.e. enforcing mode) left the container looping endlessly, restarting the installation process! There were multiple errors about missing sockets in the podman logs of the gitlab container. Here are some of the errors:
Missing postgresql unix socket in “/var/opt/gitlab/postgresql”:

Recipe: gitlab::database_migrations
  * bash[migrate gitlab-rails database] action run
    [execute] rake aborted!
              PG::ConnectionBad: could not connect to server: No such file or directory
                Is the server running locally and accepting
                connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
              /opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:53:in `block (3 levels) in <top (required)>'
              /opt/gitlab/embedded/bin/bundle:23:in `load'
              /opt/gitlab/embedded/bin/bundle:23:in `<main>'
              Tasks: TOP => gitlab:db:configure
              (See full trace by running task with --trace)
    
    
    Error executing action `run` on resource 'bash[migrate gitlab-rails database]'
.....
.....
Running handlers:
There was an error running gitlab-ctl reconfigure:

bash[migrate gitlab-rails database] (gitlab::database_migrations line 55) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of "bash"  "/tmp/chef-script20200915-35-lemic5" ----
STDOUT: rake aborted!
PG::ConnectionBad: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:53:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Tasks: TOP => gitlab:db:configure
(See full trace by running task with --trace)
STDERR: 
---- End output of "bash"  "/tmp/chef-script20200915-35-lemic5" ----
Ran "bash"  "/tmp/chef-script20200915-35-lemic5" returned 1

Missing redis unix socket in “/var/opt/gitlab/redis”:

Running handlers:
There was an error running gitlab-ctl reconfigure:

redis_service[redis] (redis::enable line 19) had an error: RuntimeError: ruby_block[warn pending redis restart] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/redis/resources/service.rb line 65) had an error: RuntimeError: Execution of the command `/opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket INFO` failed with a non-zero exit code (1)
stdout: 
stderr: Could not connect to Redis at /var/opt/gitlab/redis/redis.socket: No such file or directory

It should be noted that the /var/opt/gitlab directory has been mapped to /mnt/storage/podman/gitlab/data. GlusterFS is used for /mnt/storage, so the gitlab files reside on a GlusterFS volume.
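With SELinux in enforcing mode, the usual first step is to look for denials in the audit log (a generic sketch with the standard CentOS 7 audit tools – the actual denial lines from this case follow below):

ausearch -m avc -ts recent
ausearch -m avc -ts recent | audit2allow

The first command shows the recent SELinux denials and the second summarizes what a local policy would need to allow.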

ERROR 1) Cannot create unix socket.

Checking the /var/log/audit/audit.log revealed the problem immediately:
Keep on reading!

gpg list key and display key details from a file (without importing the key)

Files with GPG keys may hold public or private keys. Here is how to get more information without importing the keys.
The GPG cli can give enough information about a key explored in a file:

  • public or private key
  • encrypted or unencrypted key
  • user id description (including email)
  • key id and issuer fpr v4
  • when the key was generated and when it will expire
  • the algo for the encrypted key
  • more

The key may be in binary or ASCII-armored format – it makes no difference.
Here is the GNU GPG cli command:

gpg --list-packets < ./filewith.key

All examples below are made with gpg (GnuPG) 2.2.19.
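Note that GnuPG 2.2+ also offers a human-friendly summary without importing – an assumed alternative to the low-level packet dump above:

gpg --show-keys ./filewith.key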
Keep on reading!

syslog – UDP local to syslog-ng and send remote. Forward syslog to remote server.

After writing an article about the rsyslog daemon forwarding local UDP logging to a remote server using TCP – UDP local to rsyslog and send remote with TCP and compression – this time the syslog-ng daemon is used, for those who have it as the default in their Linux distribution.
As mentioned in the previous article, always use a non-blocking way of writing logs – UDP locally – and then transfer (forward) the logs to the centralized log server(s). The example here transfers the web server’s access logs to a remote server. The web server is Nginx.
The goal is to use

  • UDP for the client program (Nginx in this case) for non-blocking log writes.
  • TCP between our local machine and the remote syslog server – to be sure not to lose messages on bad connectivity.
  • local caching for our client machine – not to lose messages if the remote syslog is temporarily unreachable.

The configuration and the commands are tested on CentOS 7, CentOS 8, Gentoo and Ubuntu 18 LTS. Check out UDP remote logging here – nginx remote logging to UDP rsyslog server (CentOS 7) to see how to build the server-side part – the syslog server accepting the syslog messages and writing them into files.

STEP 1) Listen for local UDP connections

Configuration file /etc/syslog-ng/syslog-ng.conf

source udp_local {
    network(ip(127.0.0.1) port(514) transport("udp") so_rcvbuf(67108864) log_fetch_limit(1000) max-connections(1000) log-iw-size(1000000));
};
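For completeness, here is a minimal sketch of the matching forwarding part – a TCP destination with a reliable disk buffer for the local caching (the IP address and buffer sizes are placeholder assumptions; the next steps of the article define the real ones):

destination d_remote_tcp {
    network("10.10.10.10" port(514) transport("tcp")
        disk-buffer(mem-buf-size(163840) disk-buf-size(1073741824) reliable(yes)));
};
log { source(udp_local); destination(d_remote_tcp); };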

Keep on reading!

CentOS 8 add a storage driver (megaraid_sas) when booting the installation disk

Installing CentOS 8 on relatively old hardware may be a real challenge because of an old hardware device like storage, network, or both.
This article shows how to make the CentOS 8 Installation wizard detect the storage – a hardware controller AOC-USAS2LP-H8iR (smc2108 with LSI 2108). Unfortunately, the CentOS 8 team (in fact, RHEL 8 removed the support, too) decided to remove support for the LSI SAS2008/2108/2116 storage controllers by removing the “megaraid_sas” kernel driver. There are still servers in production with similar controllers, which were sold 4-5 years ago by the big vendors such as DELL, HP, and so on.

The method here is to boot the installation CD/USB with modified kernel boot parameters that include a URL link to the driver ISO (where the megaraid_sas driver is included).

The offered way to load the megaraid_sas (or any other driver) includes:

  1. Use the assisted driver update to load an elrepo driver ISO during the first stage of the CentOS 8 Installation Wizard. elrepo is a famous community effort – http://elrepo.org/tiki/tiki-index.php. More on the assisted driver update here – https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/performing_an_advanced_rhel_installation/updating-drivers-during-installation_installing-rhel-as-an-experienced-user#performing-an-assisted-driver-update_updating-drivers-during-installation
  2. Configure the network of the server to be able to download the driver ISO in the early stage of the CentOS 8 Installation Wizard. Add boot parameters to set up a valid network configuration.

The installation CD/USB can download an ISO with kernel drivers. And of course, to download a file from the Internet, a network should be set up in the earliest stage of the CentOS 8 installation wizard.
The added string to the boot CD/USB CentOS 8 Installation disk is:

 inst.dd=https://elrepo.org/linux/dud/el8/x86_64/dd-megaraid_sas-07.710.50.00-1.el8_2.elrepo.iso ip=10.10.10.10::10.10.10.1:255.255.255.0::enp8s0f0:off nameserver=8.8.8.8

SCREENSHOT 1) Select “Install CentOS Linux 8” with the arrows and hit the “TAB” button to edit the boot parameters.

As shown in the picture, just add “ inst.dd=https://elrepo.org/linux/dud/el8/x86_64/dd-megaraid_sas-07.710.50.00-1.el8_2.elrepo.iso ip=10.10.10.10::10.10.10.1:255.255.255.0::enp8s0f0:off nameserver=8.8.8.8”. The “inst.dd” option instructs the installation wizard where the driver ISO is located. The “ip” and “nameserver” options just set up a proper network in the early stage of the CentOS 8 Installation wizard to be able to download the driver ISO. Setting the network by these parameters is really important, because the download of the driver ISO happens in this early stage of loading the installation wizard. Replace the IP and the whole network configuration if needed.
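The ip= value follows the dracut static network syntax ip=<client-IP>:[<peer>]:<gateway>:<netmask>:<hostname>:<interface>:<autoconf>; the example above decomposes as:

  • 10.10.10.10 – the client (server) IP address
  • (empty) – peer address, unused here
  • 10.10.10.1 – the default gateway
  • 255.255.255.0 – the netmask
  • (empty) – hostname, unused here
  • enp8s0f0 – the network interface to configure
  • off – no autoconfiguration (static setup)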

(Screenshot: the installation wizard main menu while editing the boot parameters)

Keep on reading!