Upload files and directories with swift in OpenStack

First, you need to install the

swift command line utility

and here is how to do it: Install OpenStack swift client only.
In general you will need:

  1. username (--os-username) – Username
  2. password (--os-password) – Password
  3. authentication url (--os-auth-url) – The URL, which authorizes your requests and generates a security token for your operations. Always use https!
  4. tenant name (--os-tenant-name) – Tenant is like a project.

All of the above information should be available from your OpenStack administrator.
For the examples we assume there is a container “mytest” (it’s like a main directory under the root). You cannot upload files in the root, because this is the place for containers only, i.e. directories. You must always upload files under a container (i.e. a directory aka folder).
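If the container does not exist yet, it could be created first with the same client (a minimal sketch reusing the hypothetical credentials from the example below; swift post creates the container if it is missing):

myuser@myserver:~$ swift --os-username myuser --os-tenant-name mytenant --os-password mypass --os-auth-url https://auth-url.example.com/v2.0/ post mytest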

To upload a single file with swift cli execute:

myuser@myserver:~$ swift --os-username myuser --os-tenant-name mytenant --os-password mypass --os-auth-url https://auth-url.example.com/v2.0/ upload mytest ./file1.log 
file1.log
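To upload a whole directory, point the same command at the directory instead of a file – the swift client walks it recursively and creates the objects under the container. A sketch assuming a local directory ./mydir (output omitted):

myuser@myserver:~$ swift --os-username myuser --os-tenant-name mytenant --os-password mypass --os-auth-url https://auth-url.example.com/v2.0/ upload mytest ./mydir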


aptly – ERROR: unable to remove: published repo with storage:prefix/distribution ./mytest-stable not found

Sometimes the user manual may be unclear, and you came here searching for a solution for dropping a published repository.
We have aptly version 1.3.0, and here is the right syntax to remove a published repository.

First, list the published repositories, then take the “<name-distribution>/<release>” pair, reverse the two parts and replace the “/” with a space.

The commands will be:

aptly publish list
Published repositories:
  * <name-distribution>/<release> [amd64] publishes {main: [xenial-<name>]: Some description}
aptly publish drop -force-drop <release> <name-distribution>

“name-distribution” is the “[name-distribution]” part of the URL “http://aptly.example.com/[name-distribution]”. For example, if the repository URL of myrepo is “http://aptly.example.com/myrepo”, the name-distribution is “myrepo”.

A real world example

root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish list
Published repositories:
  * myrepo/stable [amd64] publishes {main: [xenial-myrepo]: Stable myrepo packages}
  * test/test [amd64] publishes {test: [test]: Test repo}
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish list --raw
myrepo stable
test test

We want to remove “myrepo/stable”:

root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop -force-drop stable myrepo
Removing /etc/aptly/.aptly/public/etc/dists...
Removing /etc/aptly/.aptly/public/etc/pool...

The published repository has been removed successfully.
root@srv-aptly:~#

The wrong syntax

You might have tried one of these; that’s why you came here:

root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish list           
Published repositories:
  * myrepo/stable [amd64] publishes {main: [xenial-myrepo]: Stable myrepo packages}
  * test/test [amd64] publishes {test: [test]: Test repo}
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish list --raw
myrepo stable
test test
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop myrepo
ERROR: unable to remove: published repo with storage:prefix/distribution ./myrepo not found
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop myrepo stable
ERROR: unable to remove: published repo with storage:prefix/distribution stable/myrepo not found
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop myrepo-stable
ERROR: unable to remove: published repo with storage:prefix/distribution ./myrepo-stable not found
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop -force-drop myrepo-stable
ERROR: unable to remove: published repo with storage:prefix/distribution ./myrepo-stable not found
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop -force-drop myrepo stable
ERROR: unable to remove: published repo with storage:prefix/distribution stable/myrepo not found
root@srv-aptly:~#

ansible making a link: error – refusing to convert from file to symlink

A quick note for your ansible scripts, and a reminder that the right syntax for making a link with ansible is:

- name: change version
  file: src="path-to-existing-file-or-directory" dest="path-to-the-name-of-the-symlink" state=link
  • src must be an existing file on the file system, with the full path. The link will point to this file!
  • dest must be the name of the link, with the full path. The task will create the link or change where it points to.

A common error is to swap src and dest.

Here is an example of this error:

TASK [PHP-prepare : change version] ****************************************
fatal: [localhost]: FAILED! => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "msg": "refusing to convert from file to symlink for /usr/bin/php7.2", "owner": "root", "path": "/usr/bin/php7.2", "size": 4488224, "state": "file", "uid": 0}

The bad ansible code:

- name: change version
  file: src="/etc/alternatives/php" dest="/usr/bin/php7.2" state=link

The right ansible code:

- name: change version
  file: src="/usr/bin/php7.2" dest="/etc/alternatives/php" state=link
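The same change could be tried as an ansible ad-hoc command before putting it in a playbook (a sketch; the paths and the PHP version are just the examples from above, and -b is used because /etc/alternatives requires root):

# run the file module ad-hoc against the local host
ansible localhost -b -m file -a "src=/usr/bin/php7.2 dest=/etc/alternatives/php state=link"
# verify where the link points now
ls -l /etc/alternatives/php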

Ubuntu apt – InRelease is not valid yet (invalid for another 151d 18h 5min 59s)

A wrong system time could cause your server (or, more probably, your virtual server or docker instance) to be unable to use Ubuntu’s packaging system apt. It is a typical problem if your virtual or docker instance does not use automatic time synchronization.

It is really important that even small installations and virtualized environments have automatic time synchronization, or the services they provide could become error-prone over time!

The “apt” just reports the repositories are not valid yet:

myuser@my-server-pc:~$ sudo su
root@my-server-pc:/home/myuser# apt update
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Reading package lists... Done                                 
E: Release file for http://archive.ubuntu.com/ubuntu/dists/bionic-updates/InRelease is not valid yet (invalid for another 151d 18h 5min 59s). Updates for this repository will not be applied.
E: Release file for http://archive.ubuntu.com/ubuntu/dists/bionic-backports/InRelease is not valid yet (invalid for another 151d 17h 16min 26s). Updates for this repository will not be applied.
E: Release file for http://archive.ubuntu.com/ubuntu/dists/bionic-security/InRelease is not valid yet (invalid for another 151d 17h 15min 3s). Updates for this repository will not be applied.
root@my-server-pc:/home/myuser# date
Thu Jan 17 15:11:56 UTC 2019

The clock shows 17 January 2019, but the actual date is 18 June 2019! This is an Ubuntu virtual server with a minimal installation.

The solution is to synchronize your clock manually or, better, use a time synchronization service!
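For example, on a systemd-based Ubuntu the clock could be set by hand and automatic NTP synchronization enabled afterwards (the date below is just an example value):

# set the clock manually to an approximately correct value
date --set "2019-06-18 12:00:00"
# enable automatic synchronization via systemd-timesyncd
timedatectl set-ntp true
# apt should work again
apt update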


Make systemd save its logs on the disk

On some Linux distributions, systemd log files are not saved on your disk, but only temporarily in memory, and when you reboot, all logs are discarded. So the systemd logs are not persistent, which could lead to missing important information if you want to check them when booted from a rescue disk, or even if you just reboot your server. For example,

if some important service failed to start and your server is unreachable, and you boot from a rescue CD, you have no logs to check why the service failed, nor the (error) output of the process of starting the services!

Here is how you can make the systemd logs persistent, i.e. save them on the disk. This is tested on CentOS 7, which by default keeps the systemd logs only in memory!

STEP 1) Prepare the systemd log directory

mkdir -p /var/log/journal/
systemd-tmpfiles --create --prefix /var/log/journal/

STEP 2) Edit systemd configuration and reload the daemon

And ensure your configuration uses “Storage=persistent” in /etc/systemd/journald.conf

grep Storage /etc/systemd/journald.conf
Storage=persistent
systemctl restart systemd-journald

The last line with systemctl restart could be replaced with

killall -USR1 systemd-journald

if you do not want to lose all your current logs in memory!
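If Storage is still commented out or set to another value in /etc/systemd/journald.conf, a one-liner like the following could set it (a sketch using sed; review the file after the change):

# uncomment/override the Storage option and set it to persistent
sed -i 's/^#\?Storage=.*/Storage=persistent/' /etc/systemd/journald.conf
grep Storage /etc/systemd/journald.conf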

Bonus – systemd logs from multiple reboots

Here we have logs from 5 reboots. You can also see the right owner and group (root:systemd-journal) and the SELinux labels of “/var/log/journal/”:

[root@srv ~]# ls -altrZ /var/log/journal/
drwxr-sr-x+ root systemd-journal system_u:object_r:var_log_t:s0   dbd91181db6b4c9f900d9b3a1651a8d5
drwxr-sr-x+ root systemd-journal system_u:object_r:var_log_t:s0   .
drwxr-xr-x. root root            system_u:object_r:var_log_t:s0   ..
[root@srv ~]# journalctl --disk-usage
Archived and active journals take up 112.0M on disk.
[root@srv ~]# journalctl --list-boots
-4 ec4146b78ac944b8a8d4116f259e09ee Thu 2019-06-06 23:39:14 UTC—Thu 2019-06-06 23:39:37 UTC
-3 ae3d39db626c4592aa84cc68072fbb32 Thu 2019-06-06 23:41:03 UTC—Thu 2019-06-06 23:42:13 UTC
-2 68c1ca07c05b4d59adcc9888c50f4065 Thu 2019-06-06 23:42:57 UTC—Fri 2019-06-07 00:13:27 UTC
-1 f7e8da6aaa8740faa05c4985c92023fd Fri 2019-06-07 00:14:08 UTC—Fri 2019-06-07 00:16:33 UTC
 0 45c00dc29e1a48298d9f87f5421468b4 Fri 2019-06-07 00:17:13 UTC—Mon 2019-06-10 01:39:17 UTC
[root@srv ~]# journalctl --boot=-2
-- Logs begin at Thu 2019-06-06 23:39:14 UTC, end at Mon 2019-06-10 01:39:17 UTC. --
Jun 06 23:42:57 srv systemd-journal[133]: Runtime journal is using 8.0M (max allowed 1.5G, trying to leave 2.3G free of 15.6G available → current limit 1.5G).
Jun 06 23:42:57 srv kernel: microcode: microcode updated early to revision 0x710, date = 2013-06-17
Jun 06 23:42:57 srv kernel: Initializing cgroup subsys cpuset
Jun 06 23:42:57 srv kernel: Initializing cgroup subsys cpu
Jun 06 23:42:57 srv kernel: Initializing cgroup subsys cpuacct
Jun 06 23:42:57 srv kernel: Linux version 3.10.0-514.10.2.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 S
Jun 06 23:42:57 srv kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-3.10.0-514.10.2.el7.x86_64 root=UUID=c9bec791-c77d-4189-b18a-9ddc728ee782 ro crashkernel=auto r
Jun 06 23:42:57 srv kernel: e820: BIOS-provided physical RAM map:
....
....
[root@srv ~]# journalctl --boot=-2 -u auditd
-- Logs begin at Thu 2019-06-06 23:39:14 UTC, end at Mon 2019-06-10 01:50:18 UTC. --
Jun 06 23:43:05 srv systemd[1]: Starting Security Auditing Service...
Jun 06 23:43:05 srv auditd[694]: Started dispatcher: /sbin/audispd pid: 698
Jun 06 23:43:05 srv audispd[698]: priority_boost_parser called with: 4
Jun 06 23:43:05 srv audispd[698]: max_restarts_parser called with: 10
Jun 06 23:43:05 srv audispd[698]: audispd initialized with q_depth=150 and 1 active plugins
Jun 06 23:43:05 srv augenrules[695]: /sbin/augenrules: No change
Jun 06 23:43:05 srv auditd[694]: Init complete, auditd 2.6.5 listening for events (startup state enable)
Jun 06 23:43:05 srv augenrules[695]: No rules
Jun 06 23:43:05 srv augenrules[695]: enabled 1
Jun 06 23:43:05 srv augenrules[695]: failure 1
Jun 06 23:43:05 srv augenrules[695]: pid 694
Jun 06 23:43:05 srv augenrules[695]: rate_limit 0
Jun 06 23:43:05 srv augenrules[695]: backlog_limit 320
Jun 06 23:43:05 srv augenrules[695]: lost 0
Jun 06 23:43:05 srv augenrules[695]: backlog 1
Jun 06 23:43:05 srv systemd[1]: Started Security Auditing Service.
Jun 06 23:56:48 srv auditd[694]: The audit daemon is exiting.
Jun 06 23:56:49 srv systemd[1]: Starting Security Auditing Service...
Jun 06 23:56:49 srv auditd[24744]: Started dispatcher: /sbin/audispd pid: 24746
Jun 06 23:56:49 srv audispd[24746]: audispd initialized with q_depth=250 and 1 active plugins
Jun 06 23:56:49 srv auditd[24744]: Init complete, auditd 2.8.4 listening for events (startup state enable)
Jun 06 23:56:49 srv augenrules[24750]: /sbin/augenrules: No change
Jun 06 23:56:49 srv augenrules[24750]: No rules
Jun 06 23:56:49 srv augenrules[24750]: enabled 1
Jun 06 23:56:49 srv augenrules[24750]: failure 1
Jun 06 23:56:49 srv augenrules[24750]: pid 24744
Jun 06 23:56:49 srv augenrules[24750]: rate_limit 0
Jun 06 23:56:49 srv augenrules[24750]: backlog_limit 320
Jun 06 23:56:49 srv augenrules[24750]: lost 0
Jun 06 23:56:49 srv augenrules[24750]: backlog 1
Jun 06 23:56:49 srv systemd[1]: Started Security Auditing Service.
Jun 07 00:13:26 srv systemd[1]: Stopping Security Auditing Service...
Jun 07 00:13:26 srv systemd[1]: Stopped Security Auditing Service.

Now you have logs of your booting process!

The systemd log files are accessible even if you’ve booted from a rescue CD and you chroot in your system!

Be careful with the disk free space when using disk storage for your systemd logs – Clear or delete systemd logs.
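For example, the journal size can be capped with the SystemMaxUse= option in /etc/systemd/journald.conf, or old logs can be trimmed on demand (the limits below are just example values):

# trim the archived journals down to roughly 500M
journalctl --vacuum-size=500M
# or keep only the last month of logs
journalctl --vacuum-time=1month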

Quagga bgpd check whether the bgp session is established

If your quagga bgpd daemon is up and running (check out our article for Minimal quagga bgpd configuration to run and remote configure it) and you wonder how to check whether everything is OK and the bgp session is established, here is a quick command-line tip on what you can do:

STEP 1) Check if your bgp daemon is connected to a remote bgp server (neighbor)

root@srv ~ # vtysh -c "show bgp neighbors"
BGP neighbor is 10.10.10.10, remote AS 16238, local AS 52218, external link
  BGP version 4, remote router ID 10.10.10.131
  BGP state = Established, up for 2d23h57m
  Last read 00:00:03, hold time is 9, keepalive interval is 3 seconds
  Neighbor capabilities:
    4 Byte AS: advertised
    Route refresh: advertised and received(old & new)
    Address family IPv4 Unicast: advertised and received
    Graceful Restart Capabilty: advertised
  Message statistics:
    Inq depth is 0
    Outq depth is 0
                         Sent       Rcvd
    Opens:                  1          1
    Notifications:          0          0
    Updates:                1          2
    Keepalives:         86323      86049
    Route Refresh:          0          0
    Capability:             0          0
    Total:              86325      86052
  Minimum time between advertisement runs is 30 seconds

 For address family: IPv4 Unicast
  Community attribute sent to this neighbor(both)
  Outbound path policy configured
  Outgoing update prefix filter list is *anydns-pfx
  12 accepted prefixes

  Connections established 1; dropped 0
  Last reset never
Local host: 10.10.10.5, Local port: 40172
Foreign host: 10.10.10.10, Foreign port: 179
Nexthop: 10.10.10.5
Nexthop global: ::
Nexthop local: ::
BGP connection: non shared network
Read thread: on  Write thread: off
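The key line is “BGP state = Established”. If you only need a quick yes/no answer, for example in a monitoring script, something like this could do (it simply counts the established sessions):

vtysh -c "show bgp neighbors" | grep -c "BGP state = Established"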

STEP 2) Check the IP routes

root@srv ~ # vtysh -c "show ip bgp"
BGP table version is 0, local router ID is 10.10.10.5
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
              i internal, r RIB-failure, S Stale, R Removed
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 0.0.0.0          10.10.10.10                         0 30627 i
*> 10.10.11.240/28
                    10.10.10.10           0             0 30627 ?
*> 10.10.11.234/31
                    10.10.10.10           0             0 30627 ?
*> 10.10.12.236/31
                    10.10.10.10           0             0 30627 ?
*> 10.10.13.242/32
                    10.10.10.10           0             0 30627 ?
*> 10.10.14.0/24  10.10.10.10           0             0 30627 ?
*> 10.10.15.0/24  10.10.10.10           0             0 30627 ?
*> 10.10.16.0/24  10.10.10.10           0             0 30627 ?
*> 11.11.11.0/24  0.0.0.0                  0         29873 i
*> 10.10.17.64/26 10.10.10.10           0             0 30627 ?
*> 10.10.18.240/29
                    10.10.10.10           0             0 30627 ?
*> 10.10.10.192/26
                    10.10.10.10           0             0 30627 ?
*> 10.10.10.192/26
                    10.10.10.10           0             0 30627 ?

Total number of prefixes 13
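There is also a more compact view that could be used to see the neighbor state and the number of received prefixes at a glance (the exact output format depends on the Quagga version):

vtysh -c "show ip bgp summary"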

vtysh

vtysh – the command line tool to manage the Quagga BGP daemon locally.

Bonus Configuration

Here is our basic configuration in “/etc/quagga/bgpd.conf”:

hostname ns5.anycast.local1
password pppppppppp
log file /var/log/quagga/bgpd.log

router bgp 52218
bgp router-id 10.10.10.5
network 11.11.11.0/24
neighbor 10.10.10.10 remote-as 16238
neighbor 10.10.10.10 prefix-list anydns-pfx out
!
ip prefix-list anydns-pfx seq 5 permit 11.11.11.0/24
!
line vty

* All IPs are changed.

Make the Gluster daemon resolve the proper hostnames of your peers

This is a useful tip for GlusterFS nodes. When adding a peer to a gluster cluster you may use its hostname (or IP), and the Gluster daemon on the added server tries to resolve a hostname from the IP that contacts it (or, if the cluster has multiple peers, multiple reverse lookups happen).
Here is a simple example. The cluster will have two peers (srv1.example.com and srv2.example.com):
Add the peer srv2.example.com to your cluster from srv1.example.com (in fact, at this point the cluster consists only of the local Gluster daemon):

[root@srv1 ~]# gluster peer probe srv2.example.com
peer probe: success.
[root@srv1 ~]# gluster peer status
Number of Peers: 1

Hostname: srv2.example.com
Uuid: 8322b61c-a94d-491b-afc9-9f10eb8e8b92
State: Peer in Cluster (Connected)

And when you check the status of the cluster on the second server srv2.example.com, the second server shows the PTR record of the first server:

[root@srv2 ~]# gluster peer status
Number of Peers: 1

Hostname: static.123.123.123.123.clients.your-server.de
Uuid: 3d273834-eca6-4997-871f-1a282ca90fb0
State: Peer in Cluster (Connected)

You see the hostname is a temporary name static.123.123.123.123.clients.your-server.de, the PTR of srv1.example.com. You may have problems in the future if you leave it like that, and it is a really uninformative domain name for your cluster’s configuration. Changing a peer hostname in a cluster is really difficult and dangerous, so one option is to change the PTR records of the servers’ IPs, but if you cannot do that, or it is too slow to do, you can just use the “/etc/hosts” file!

Use “/etc/hosts” to make the Gluster daemon resolve the proper hostnames of your peers!

Edit the “/etc/hosts” on (the first and) the second (peer) server (add the line, do not remove the others if they exist). Replace the IP with your first server’s IP and hostname:

123.123.123.123 srv1.example.com

And then add it to the cluster on the first server and check again on the second server:

[root@srv2 ~]# gluster peer status
Number of Peers: 1

Hostname: srv1.example.com
Uuid: 3d273834-eca6-4997-871f-1a282ca90fb0
State: Peer in Cluster (Connected)

And on the first server:

[root@srv1 ~]# gluster peer status
Number of Peers: 1

Hostname: srv2.example.com
Uuid: 8322b61c-a94d-491b-afc9-9f10eb8e8b92
State: Peer in Cluster (Connected)

Now the two servers have the right hostnames for their peers, and these hostnames will be used in the Gluster configuration saved on the servers.

In fact, it is a good idea to add all your cluster peers to the “/etc/hosts” file on all servers:

123.123.123.123 srv1.example.com
124.124.124.124 srv2.example.com
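Name resolution could then be verified on each node before probing the peers (the hostnames are the example ones from above; gluster pool list also shows the local node):

getent hosts srv1.example.com srv2.example.com
gluster pool list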

List all your files (and directories) with file size over FTP without ls -R (recursive)

A great piece of software is

lftp – sophisticated file transfer program

This little console tool could ease your life significantly with many enhancements over the simple FTP protocol. This tip is for those who want to list all their files in a directory or in the entire FTP account, but do not have an ls command with recursive abilities. The only option then is to manually go through all the directories to fetch the listing information, but this could be done automatically by

lftp using its command “find”, and if you add the “-l” argument the output is like “ls -al” – file or directory, file permissions, user and group, file size, date and file name are shown on a single line for each file.

Just execute the command with proper credentials and the starting directory of your choice. The command output could even be piped to another command.
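A minimal sketch of such a command (the host, credentials and starting directory are placeholders; “find -l” needs a reasonably recent lftp version):

lftp -u myuser,mypass -e "find -l /; quit" ftp.example.com > full-listing.txt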

nginx with php fpm (fastcgi) and the warning – an upstream response is buffered to a temporary file /var/cache/nginx/fastcgi_temp

As the web grows and the technology advances, the page size of web sites also grows, or sometimes you might just want to output a big chunk of data from your application server – PHP-FPM in this example (but it could be any other: Ruby, Python, C, Django and more).
Here is a fast configuration tip (note this is not the proxy-related warning!):

The default nginx buffers per CGI connection are too small

Here is what to do in your nginx configuration file:
First, look for a line “include /etc/nginx/fastcgi_params;” or similar, and add the following directives after it (or edit them if they already exist):

        fastcgi_buffer_size 16k;
        fastcgi_buffers 32 16k;
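After the change, the configuration could be tested and nginx reloaded (assuming a systemd-based setup):

nginx -t && systemctl reload nginx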

Check out more about the buffers here: http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_buffers
The warning should stop; if it does not, you can try raising the values further. It could consume more memory, but it could also lower the IO usage of your disks and improve the performance of your site or whatever backend you run!

Here is the warning in our nginx error logs. We got this warning when using php-fpm and the php output size was 325965 bytes (~320K).

2019/04/04 09:56:05 [warn] 24451#24451: *44269838 an upstream response is buffered to a temporary file /var/cache/nginx/fastcgi_temp/0/12/0019966120 while reading upstream, client: 10.10.10.10, server: srv17.srv.en, request: "GET /api/20140102/product HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "srv17.srv.en"
2019/04/04 09:56:07 [warn] 24451#24451: *44269849 an upstream response is buffered to a temporary file /var/cache/nginx/fastcgi_temp/2/12/0019966122 while reading upstream, client: 10.10.10.11, server: srv17.srv.en, request: "GET /api/20140102/product HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "srv17.srv.en"
2019/04/04 09:56:09 [warn] 24450#24450: *44269856 an upstream response is buffered to a temporary file /var/cache/nginx/fastcgi_temp/7/12/0019966127 while reading upstream, client: 10.10.10.12, server: srv17.srv.en, request: "GET /api/20140102/product HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "srv17.srv.en"

Unpack CentOS 7 initramfs file with and without dracut skipcpio

In CentOS 7 the initramfs consists of two concatenated cpio archives – an uncompressed one with the early CPU microcode and a gzipped one with the actual initramfs root file system. If you want to check what files and probably configuration files are included, you can unpack it, but you should use

the dracut tool skipcpio

/usr/lib/dracut/skipcpio <initramfs-file> | zcat | cpio -id --no-absolute-filenames

The following is the output on a CentOS 7 server:

[root@srv ~]# mkdir initramfs-unpacked
[root@srv ~]# cd initramfs-unpacked/
[root@srv initramfs-unpacked]# /usr/lib/dracut/skipcpio /boot/initramfs-3.10.0-957.10.1.el7.x86_64.img | zcat | cpio -id --no-absolute-filenames
164026 blocks
[root@srv initramfs-unpacked]# ls -al
total 52
drwxr-xr-x. 12 root root 4096  1 Apr 11,48 .
dr-xr-x---.  5 root root 4096  1 Apr 11,48 ..
lrwxrwxrwx.  1 root root    7  1 Apr 11,48 bin -> usr/bin
drwxr-xr-x.  2 root root 4096  1 Apr 11,48 dev
drwxr-xr-x.  9 root root 4096  1 Apr 11,48 etc
lrwxrwxrwx.  1 root root   23  1 Apr 11,48 init -> usr/lib/systemd/systemd
lrwxrwxrwx.  1 root root    7  1 Apr 11,48 lib -> usr/lib
lrwxrwxrwx.  1 root root    9  1 Apr 11,48 lib64 -> usr/lib64
drwxr-xr-x.  2 root root 4096  1 Apr 11,48 proc
drwxr-xr-x.  2 root root 4096  1 Apr 11,48 root
drwxr-xr-x.  2 root root 4096  1 Apr 11,48 run
lrwxrwxrwx.  1 root root    8  1 Apr 11,48 sbin -> usr/sbin
-rwxr-xr-x.  1 root root 3117  1 Apr 11,48 shutdown
drwxr-xr-x.  2 root root 4096  1 Apr 11,48 sys
drwxr-xr-x.  2 root root 4096  1 Apr 11,48 sysroot
drwxr-xr-x.  2 root root 4096  1 Apr 11,48 tmp
drwxr-xr-x.  7 root root 4096  1 Apr 11,48 usr
drwxr-xr-x.  3 root root 4096  1 Apr 11,48 var
[root@srv initramfs-unpacked]# ls -al /boot/
total 114812
dr-xr-xr-x.  6 root root     4096 30 Mar  2,36 .
dr-xr-xr-x. 19 root root     4096 30 Mar  2,37 ..
-rw-r--r--.  1 root root   151923 18 Mar 15,10 config-3.10.0-957.10.1.el7.x86_64
drwxr-xr-x.  3 root root     4096 28 Jan 20,52 efi
drwxr-xr-x.  2 root root     4096 30 Mar  2,29 grub
drwx------.  5 root root     4096 29 Mar 13,50 grub2
-rw-------.  1 root root 44256471 28 Jan 20,57 initramfs-0-rescue-05cb8c7b39fe0f70e3ce97e5beab809d.img
-rw-------.  1 root root 44821343 29 Mar 13,50 initramfs-3.10.0-957.10.1.el7.x86_64.img
-rw-------.  1 root root 10982937 30 Mar  2,36 initramfs-3.10.0-957.10.1.el7.x86_64kdump.img
drwx------.  2 root root    16384 29 Mar 13,46 lost+found
-rw-r--r--.  1 root root   314087 18 Mar 15,10 symvers-3.10.0-957.10.1.el7.x86_64.gz
-rw-------.  1 root root  3544363 18 Mar 15,10 System.map-3.10.0-957.10.1.el7.x86_64
-rwxr-xr-x.  1 root root  6639808 28 Jan 20,57 vmlinuz-0-rescue-05cb8c7b39fe0f70e3ce97e5beab809d
-rwxr-xr-x.  1 root root  6643904 18 Mar 15,10 vmlinuz-3.10.0-957.10.1.el7.x86_64
-rw-r--r--.  1 root root      171 18 Mar 15,10 .vmlinuz-3.10.0-957.10.1.el7.x86_64.hmac

You can see the init is handled by systemd!
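A side note: if you only want to list the files without unpacking anything, dracut’s lsinitrd tool could be used (same initramfs file as above):

lsinitrd /boot/initramfs-3.10.0-957.10.1.el7.x86_64.img | less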

Not using dracut skipcpio

early_cpio – dracut puts this marker file at the beginning of the CentOS 7 initramfs; the prepended (early) cpio archive contains the CPU microcode.
You can check it with the “file” command: if it shows “ASCII cpio archive (SVR4 with no CRC)”, there is a microcode archive prepended to the initramfs file.

And here is how to do it without the dracut skipcpio tool, with an example:

  1. cpio the original initramfs and write down the number of blocks reported
  2. use dd to skip the first blocks from the above step
  3. Uncompress (and unpack) the file created by dd – this is the real initramfs file.

Here is how you can do it:

[root@srv ~]# file /boot/initramfs-3.10.0-957.10.1.el7.x86_64.img
/boot/initramfs-3.10.0-957.10.1.el7.x86_64.img: ASCII cpio archive (SVR4 with no CRC)
[root@srv ~]# mkdir initramfs-unpacked-3
[root@srv ~]# cd initramfs-unpacked-3
[root@srv initramfs-unpacked-3]# cat /boot/initramfs-3.10.0-957.10.1.el7.x86_64.img | cpio -idmv
.
early_cpio
kernel
kernel/x86
kernel/x86/microcode
kernel/x86/microcode/AuthenticAMD.bin
kernel/x86/microcode/GenuineIntel.bin
3412 blocks
[root@srv initramfs-unpacked-3]# dd if=/boot/initramfs-3.10.0-957.10.1.el7.x86_64.img of=initramfs-tmp.img bs=512 skip=3412
84129+1 records in
84129+1 records out
43074399 bytes (43 MB) copied, 0.191311 s, 225 MB/s
[root@srv initramfs-unpacked-3]# ls
early_cpio  initramfs-tmp.img  kernel
[root@srv initramfs-unpacked-3]# file initramfs-tmp.img 
initramfs-tmp.img: gzip compressed data, from Unix, last modified: Fri Mar 29 13:49:41 2019, max compression
[root@srv initramfs-unpacked-3]# zcat ./initramfs-tmp.img | cpio -idm
164026 blocks
[root@srv initramfs-unpacked-3]# ls -al
total 42128
drwxr-xr-x. 13 root root     4096 Apr  1 12:38 .
dr-xr-x---. 10 root root     4096 Apr  1 12:38 ..
lrwxrwxrwx.  1 root root        7 Apr  1 12:38 bin -> usr/bin
drwxr-xr-x.  2 root root     4096 Apr  1 12:38 dev
-rw-r--r--.  1 root root        2 Mar 29 13:49 early_cpio
drwxr-xr-x.  9 root root     4096 Apr  1 12:38 etc
lrwxrwxrwx.  1 root root       23 Apr  1 12:38 init -> usr/lib/systemd/systemd
-rw-r--r--.  1 root root 43074399 Apr  1 12:35 initramfs-tmp.img
drwxr-xr-x.  3 root root     4096 Mar 29 13:49 kernel
lrwxrwxrwx.  1 root root        7 Apr  1 12:38 lib -> usr/lib
lrwxrwxrwx.  1 root root        9 Apr  1 12:38 lib64 -> usr/lib64
drwxr-xr-x.  2 root root     4096 Mar 29 13:49 proc
drwxr-xr-x.  2 root root     4096 Mar 29 13:49 root
drwxr-xr-x.  2 root root     4096 Mar 29 13:49 run
lrwxrwxrwx.  1 root root        8 Apr  1 12:38 sbin -> usr/sbin
-rwxr-xr-x.  1 root root     3117 Nov  2 17:40 shutdown
drwxr-xr-x.  2 root root     4096 Mar 29 13:49 sys
drwxr-xr-x.  2 root root     4096 Mar 29 13:49 sysroot
drwxr-xr-x.  2 root root     4096 Mar 29 13:49 tmp
drwxr-xr-x.  7 root root     4096 Apr  1 12:38 usr
drwxr-xr-x.  3 root root     4096 Apr  1 12:38 var