Copy files with read errors successfully – skipping only the errors (i.e. the bad sectors)

Sometimes disks develop errors, or an SSD has a bad NAND cell. Saving the data of the whole hard disk may not be needed when only a specific file or two are important, and those files cannot be copied with cp or rsync because of an “Unrecovered read error”.
Furthermore, the SSD reallocates bad cells only when there are writes to the cell(s), which may not happen for years, while reads may occur every day. Reading from a sector with bad NAND cells results in slow IO (multiple read commands are executed before the drive gives up). Copying the file to a new place, losing only the 512 bytes of the bad sector, may not harm the data at all, but it is difficult to do with the generic copying tools.
This article shows how to save single files from a mounted ext4 file system with bad sectors using the ddrescue tool – https://www.gnu.org/software/ddrescue/ In fact, ddrescue can rescue single files or whole devices.

STEP 1) Install ddrescue.

Installing ddrescue is pretty easy. The tool is included in almost all Linux distributions and it doesn’t have many dependencies. Note that there is another tool named dd_rescue, which is different from this one; just follow the link above for the tool used here.
CentOS 7/8 or Fedora:

yum install -y ddrescue

Ubuntu (any release from the last 10 years):

apt install -y gddrescue

Gentoo:

emerge -v ddrescue

STEP 2) Rescuing a single file with read errors because of bad sectors in a mounted file system.

[root@srv Snapshots]# ddrescue -v \{9f02ae0a-6dae-4729-b6a6-ec3f0550f294\}.vdi test2.vdi
GNU ddrescue 1.25
About to copy 15724 MBytes from '{9f02ae0a-6dae-4729-b6a6-ec3f0550f294}.vdi' to 'test2.vdi'
    Starting positions: infile = 0 B,  outfile = 0 B
    Copy block size: 128 sectors       Initial skip size: 384 sectors
Sector size: 512 Bytes

Press Ctrl-C to interrupt
     ipos:   13495 MB, non-trimmed:        0 B,  current rate:       0 B/s
     opos:   13495 MB, non-scraped:        0 B,  average rate:    162 MB/s
non-tried:        0 B,  bad-sector:     8192 B,    error rate:    4608 B/s
  rescued:   15724 MB,   bad areas:        2,        run time:      1m 36s
pct rescued:   99.99%, read errors:       18,  remaining time:          0s
                              time since last successful read:          0s
Finished                                      
[root@srv Snapshots]# ls -al
total 52602944
drwx------. 2 root root        4096 Jun  2 02:22 .
drwxr-xr-x. 4 root root        4096 Jun  1 14:16 ..
-rw-------. 1 root root   459981735 Nov  8  2018 2018-11-08T15-19-17-776317000Z.sav
-rw-------. 1 root root   566704069 Jun  1 14:16 2020-06-01T11-16-05-735318000Z.sav
-rw-------. 1 root root  8329887744 Jun  1 12:53 {3d30ebea-2e2f-4e33-8088-d3d66f315e2c}.vdi
-rw-------. 1 root root 15724445696 Nov  8  2018 {9f02ae0a-6dae-4729-b6a6-ec3f0550f294}.vdi
-rw-------. 1 root root  4012900352 Jun  1 14:16 {f7e72510-7dce-48fd-b62c-630664ad984f}.vdi
-rw-r--r--. 1 root root 15724445696 Jun  2 02:24 test2.vdi
-rw-------. 1 root root  9051041792 Jun  2 02:19 test.vdi
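The example above runs ddrescue without a map file. Using one allows the copy to be interrupted and resumed, and the bad areas to be retried in a later pass. A minimal sketch, assuming the arbitrary map file name rescue.map:

# first pass – copy everything that reads cleanly and record bad areas in the map file
ddrescue -v \{9f02ae0a-6dae-4729-b6a6-ec3f0550f294\}.vdi test2.vdi rescue.map
# optional second pass – retry the recorded bad areas up to 3 times
ddrescue -v -r3 \{9f02ae0a-6dae-4729-b6a6-ec3f0550f294\}.vdi test2.vdi rescue.map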

Here is an animated gif of the ddrescue procedure:

ddrescue – copy files with bad sectors

Keep on reading!

Software and technical overview of Ubuntu 20.04 LTS server edition

Ubuntu 20.04 LTS server edition offers the following software and versions:

Software

  • linux kernel – 5.4.0 – 5.4.0-29-generic
  • System
    • linux-firmware – 1.187
    • libc – 2.31 – 2.31-0ubuntu9
    • GNU GCC – multiple versions available – 7.5.0, 8.4.0, 9.3.0 and 10-20200411. The exact versions – 7.5.0-6ubuntu2, 8.4.0-3ubuntu2, 9.3.0-10ubuntu2 and 10-20200411-0ubuntu1
    • OpenSSL – 1.1.1f – 1.1.1f-1ubuntu2
    • coreutils – 8.30 – 8.30-3ubuntu2
    • apt – 2.0.2ubuntu0.1
    • rsyslog – 8.2001.0 – 8.2001.0-1ubuntu1
  • Servers
    • Apache – 2.4.41 – 2.4.41-4ubuntu3
    • Nginx – 1.17.10 – 1.17.10-0ubuntu1
    • MySQL server – 8.0.20 – 8.0.20-0ubuntu0.20.04.1
    • MariaDB server – 10.3.22 – 10.3.22-1ubuntu1
    • PostgreSQL – 12.2-4
  • Programming
      In Ubuntu 20.04 LTS the user may install:

    • PHP – 7.4 – 7.4.3-4ubuntu1.1
    • python – 3.8.2 (3.8.2-0ubuntu2) and also includes 2.7.17 (2.7.17-2ubuntu4)
    • perl – 5.30.0 and also includes perl 6 6.d-2
    • ruby – 2.7 – 2.7+1
    • OpenJDK – includes multiple versions – 8, 11, 13 and 14. The exact versions are 8u252-b09-1ubuntu1, 11.0.7+10-3ubuntu1, 13.0.3+3-1ubuntu2 and 14.0.1+7-1ubuntu1
    • Go lang – multiple versions – 1.13.8 and 1.14.2. The exact versions – 1.13.8-1ubuntu1 and 1.14.2-1ubuntu1
    • Rust – 1.41.0 – 1.41.0+dfsg1+llvm-0ubuntu2
    • Subversion – 1.13.0 – 1.13.0-3
    • Git – 2.25.1 – 2.25.1-1ubuntu3
    • llvm – multiple versions – 6, 7, 8, 9 and 10. The exact versions – 6.0.1-14, 7.0.1-12, 8.0.1-9, 9.0.1-12, 10.0-50~exp1
  • Graphical User Interface
    • Xorg X server – 1.20.8 – 1.20.8-2ubuntu2
    • GNOME (the GUI) – 3.36.x – Gnome Shell – 3.36.1

Note: Not all of the above software comes installed by default. The versions above are valid for the initial release, so in fact these are the minimal versions you get with Ubuntu 20.04 LTS; installing and updating it after the initial release date may bring newer versions of some of the above packages. The installed packages are 582, occupying 11G of space.
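To check these numbers on your own installation, a quick way is:

# count the installed packages
dpkg -l | grep -c '^ii'
# show the used space of the root file system
df -h /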

During the installation wizard you may want to install the following snap software environments. Of course, this software is available after the installation setup, too.

The test server is equipped with an AMD Ryzen Threadripper 1950X, which is a 16-core CPU.
Check out Minimal installation of Ubuntu server 20.04 LTS, too.

Keep on reading!

Minimal installation of Ubuntu server 20.04 LTS

This tutorial will show you the simple steps of installing a modern Linux distribution – Ubuntu server 20.04 LTS edition. We follow most of the default options during the setup configuration for simplicity.

Here are some basic data from the default installation setup settings:

  1. Installed packages – ~582 occupying 11G of space.
  2. 3 partitions when using the automatic partition layout – boot efi, swap and root.
  3. ext4 used for the root partition.

We used the following ISO for the installation process – Ubuntu 20.04 LTS (Focal Fossa):

http://releases.ubuntu.com/focal/ubuntu-20.04-live-server-amd64.iso

It is a LIVE image, so you can try it before installing it. The easiest way is just to download the image, burn it to a DVD (or write it to a USB stick) and then follow the installation below:
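If you prefer a USB stick, a minimal sketch with dd (the /dev/sdX device name is a placeholder – double-check it, all data on the stick will be lost):

dd if=ubuntu-20.04-live-server-amd64.iso of=/dev/sdX bs=4M status=progress oflag=sync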

SCREENSHOT 1) Boot from the disk or USB – whatever you made after downloading the ISO file from Ubuntu official source.

On the image here the DVD is used to boot in UEFI mode installation.

boot uefi dvd

Keep on reading!

collectd nginx plugin: curl_easy_perform failed because of selinux

Enabling the Nginx plugin for collectd under CentOS (or any other system using SELinux) might be confusing for a newbie. Most sources on the Internet would just install collectd-nginx:

yum install -y collectd-nginx

and configure it in nginx.conf and collectd.conf. Still, the statistics might not work as expected – collectd may not be able to gather statistics from Nginx.

SELinux may prevent the collectd daemon (plugin) from connecting to Nginx and gathering statistics from the Nginx stats page.
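Before changing anything, it is worth confirming SELinux really is the culprit. A quick check might look like this (the boolean name below is an assumption – use whatever getsebool reports on your system):

# look for recent SELinux denials mentioning collectd
ausearch -m avc -ts recent | grep collectd
# list collectd-related SELinux booleans, if any
getsebool -a | grep collectd
# if a suitable boolean exists (for example collectd_tcp_network_connect), enable it persistently
setsebool -P collectd_tcp_network_connect on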

Checking the collectd log reveals the problem:
Keep on reading!

Cron missing path – executing docker/podman – adding network: failed to locate iptables

If you have ever executed some complex scripts using the cron system, you have inevitably discovered that the Linux environment is different from the login or ssh shell. The different environment tends to lead to a missing or different PATH variable! Here is what happens with podman starting a container from a cron script:

time="2020-04-19T20:45:20Z" level=error msg="Error adding network: failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
time="2020-04-19T20:45:20Z" level=error msg="Error while adding pod to CNI network \"podman\": failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
Error: unable to start container "onedrive-cli": error configuring network namespace for container d297cf80db20441d4258a1acc7d810444795d1ca8730ab242d9fe8a13eaa697d: failed to locate iptables: exec: "iptables": executable file not found in $PATH

The iptables executable is missing because the PATH variable is different from the one in the login or ssh shell. Executing the commands or the script under ssh or a login shell results in no error and a proper podman (docker) execution!

A similar problem could happen with any other software trying to execute iptables or another tool that is not found in cron’s PATH environment, because cron’s environment is very limited and provides only a minimal PATH.

To ensure the PATH is like the user’s (root) environment just source the “profile” or “.bashrc” file of the current user before the execution of the script or in the first lines of it.
This would do the trick.

. /etc/profile

Or user’s custom

. ~/.bashrc

Or the default OS bashrc

. /etc/bashrc

The dot may be replaced by “source”:

source /etc/bashrc

All (environment) variables will be available after the source command.
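For cron specifically, the sourcing can also live directly in the crontab line. A minimal sketch (the script path is hypothetical):

# run a container-starting script every 10 minutes with a full login environment
*/10 * * * * . /etc/profile; /usr/local/bin/start-onedrive-container.sh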

Here is the difference:
The environment without the sourcing profile/bashrc file:

 
LANG=en_US.UTF-8
XDG_SESSION_ID=19118
USER=root
PWD=/root
HOME=/root
SHELL=/bin/sh
SHLVL=1
LOGNAME=root
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
XDG_RUNTIME_DIR=/run/user/0
PATH=/usr/bin:/bin
_=/usr/bin/env

Sourcing the “/etc/profile” file:

LANG=en_US.UTF-8
HISTCONTROL=ignoredups
HOSTNAME=srv.example.com
XDG_SESSION_ID=19165
USER=root
PWD=/root
HOME=/root
MAIL=/var/spool/mail/root
SHELL=/bin/bash
SHLVL=1
LOGNAME=root
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
XDG_RUNTIME_DIR=/run/user/0
PATH=/usr/local/sbin:/usr/sbin:/usr/bin:/bin
HISTSIZE=1000
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/usr/bin/env

These are multiple additional environment variables, which could be important for the user’s scripts executed by cron.

And in CentOS 8 the iptables happens to be in “/usr/sbin/iptables” – a path /usr/sbin not included in the default cron environment PATH variable!
Of course, the PATH variable may be set directly in the crontab (by just adding the needed path), but that only works until the next path that is missing from it while present in the user’s shell. It is just safer to ensure the two environments are the same every time by sourcing the environment configuration file such as /etc/profile or the user’s .bashrc (or the default one in /etc/bashrc).

lvm2: remove unknown physical volume device in a logical volume with a caching using vgreduce

Following the article SSD cache device to a hard disk drive using LVM the initial setup is:

  1. A single slow 2T hard disk. So no redundancy – if it fails, all data is gone.
  2. An SSD cache device for the slow hard disk above.

And here is how to handle a failure of the slow device! The data is all gone, because there is no redundancy when a single device holds the data; the data is not valuable, though, because it is a cache server. This article will show what to expect when the slow device fails and how to replace it.
The original slow device is missing and has been replaced by a new one, and the partitions are as follows:

  1. /dev/sda4 – the slow device. New device.
  2. /dev/sdb5 – the SSD device. Still in the LVM2 Volume Group and the Logical Volume.

The Physical Volume is missing and marked as “[unknown]”. pvdisplay shows metadata information for the missing device. vgdisplay and lvdisplay show information for the group and the logical volume, but the logical volume is in “NOT available” status. So the logical volume cannot be used, which is kind of normal when only the cache device is present.

Slow drive failure and the LVM2 status

[root@srv ~]# ssm list
------------------------------------------------------
Device        Free        Used      Total  Pool       
------------------------------------------------------
/dev/sda                          2.00 TB             
/dev/sda1  0.00 KB    50.00 GB   50.03 GB  md         
/dev/sda2  0.00 KB    15.75 GB   15.76 GB  md         
/dev/sda3  0.00 KB  1023.00 MB    1.00 GB  md         
/dev/sda4                         1.93 TB             
/dev/sdb                        894.25 GB             
/dev/sdb1  0.00 KB    50.00 GB   50.03 GB  md         
/dev/sdb2  0.00 KB    15.75 GB   15.76 GB  md         
/dev/sdb3  0.00 KB  1023.00 MB    1.00 GB  md         
/dev/sdb4                         1.00 KB             
/dev/sdb5  0.00 KB   675.00 GB  675.00 GB  VG_storage1
/dev/sdb6                       152.46 GB             
[unknown]  0.00 KB     1.93 TB    1.93 TB  VG_storage1
------------------------------------------------------
-----------------------------------------------------
Pool         Type  Devices     Free     Used    Total  
-----------------------------------------------------
VG_storage1  lvm   2        0.00 KB  2.59 TB  2.59 TB  
-----------------------------------------------------
------------------------------------------------------------------------------
Volume      Pool  Volume size  FS       FS size       Free  Type   Mount point
------------------------------------------------------------------------------
/dev/md125  md       50.00 GB  ext4    50.00 GB   44.41 GB  raid1  /          
/dev/md126  md     1023.00 MB  ext4  1023.00 MB  788.84 MB  raid1  /boot      
/dev/md127  md       15.75 GB                               raid1             
/dev/sdb6           152.46 GB  ext4   152.46 GB  145.77 GB                    
------------------------------------------------------------------------------
----------------------------------------------------------------------------------
Snapshot                      Origin               Pool         Volume size  Type   
----------------------------------------------------------------------------------
/dev/VG_storage1/lv_storage1  [lv_storage1_corig]  VG_storage1      1.93 TB  cache  
----------------------------------------------------------------------------------
[root@srv ~]# pvdisplay 
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  --- Physical volume ---
  PV Name               /dev/sdb5
  VG Name               VG_storage1
  PV Size               675.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              172799
  Free PE               0
  Allocated PE          172799
  PV UUID               oLn3hh-ROFU-WSW8-0m8P-YLWY-Akoz-nCxh96
   
  --- Physical volume ---
  PV Name               [unknown]
  VG Name               VG_storage1
  PV Size               1.93 TiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              507188
  Free PE               0
  Allocated PE          507188
  PV UUID               IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd
   
[root@srv ~]# pvscan 
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  PV /dev/sdb5   VG VG_storage1     lvm2 [<675.00 GiB / 0    free]
  PV [unknown]   VG VG_storage1     lvm2 [1.93 TiB / 0    free]
  Total: 2 [2.59 TiB] / in use: 2 [2.59 TiB] / in no VG: 0 [0   ]
[root@srv ~]# vgdisplay 
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  --- Volume group ---
  VG Name               VG_storage1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  10
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                1
  VG Size               2.59 TiB
  PE Size               4.00 MiB
  Total PE              679987
  Alloc PE / Size       679987 / 2.59 TiB
  Free  PE / Size       0 / 0   
  VG UUID               eZ2ZIb-jcDl-kPLj-oFwJ-LLuN-VxLD-rVJclP
   
[root@srv ~]# vgscan 
  Reading volume groups from cache.
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  Found volume group "VG_storage1" using metadata type lvm2
[root@srv ~]# lvdisplay 
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  --- Logical volume ---
  LV Path                /dev/VG_storage1/lv_storage1
  LV Name                lv_storage1
  VG Name                VG_storage1
  LV UUID                NFWlWF-VmSO-HVr4-72RW-YY82-1ax2-92cI6P
  LV Write Access        read/write
  LV Creation host, time srv.example.com, 2019-10-25 17:18:41 +0000
  LV Cache pool name     lv_cache
  LV Cache origin name   lv_storage1_corig
  LV Status              NOT available
  LV Size                1.93 TiB
  Current LE             507188
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
[root@srv ~]# lvscan 
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  inactive          '/dev/VG_storage1/lv_storage1' [1.93 TiB] inherit
[root@srv ~]# lvs
lvs     lvscan  
[root@srv ~]# lvs -a
  WARNING: Device for PV IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd not found or rejected by a filter.
  Couldn't find device with uuid IfF78Q-KV3N-GH94-6wvU-23ku-jC20-S8tcBd.
  LV                  VG          Attr       LSize   Pool       Origin              Data%  Meta%  Move Log Cpy%Sync Convert
  [lv_cache]          VG_storage1 Cwi---C--- 674.90g                                                                       
  [lv_cache_cdata]    VG_storage1 Cwi------- 674.90g                                                                       
  [lv_cache_cmeta]    VG_storage1 ewi-------  48.00m                                                                       
  lv_storage1         VG_storage1 Cwi---C-p-   1.93t [lv_cache] [lv_storage1_corig]                                        
  [lv_storage1_corig] VG_storage1 owi---C-p-   1.93t                                                                       
  [lvol0_pmspare]     VG_storage1 ewi-------  48.00m
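The fix the title refers to is vgreduce with the --removemissing option. A sketch of what it may look like (the --force flag is often needed when the missing PV still holds allocated extents):

# remove the missing (unknown) physical volume from the volume group
vgreduce --removemissing --force VG_storage1
# verify the volume group now references only the remaining device
pvs
vgdisplay VG_storage1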

Keep on reading!

Inactive array – mdadm: Cannot get array info for /dev/md126

Replacing a disk may sometimes be challenging, especially with software RAID. If the software RAID1 went inactive, this article might be for you!
After booting from a LIVECD or a rescue PXE system all RAID devices came up inactive despite the loaded personalities. We have a similar article on the subject – Recovering MD array and mdadm: Cannot get array info for /dev/md0

livecd ~ # cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] [raid0] [raid1] [raid10] [linear] [multipath] 
md125 : inactive sdb3[1](S)
      1047552 blocks super 1.2
       
md126 : inactive sdb1[1](S)
      52427776 blocks super 1.2
       
md127 : inactive sdb2[1](S)
      16515072 blocks super 1.2
       
unused devices: <none>

The personalities are loaded, which means the kernel modules are successfully loaded – “[raid6] [raid5] [raid4] [raid0] [raid1] [raid10] [linear] [multipath]”. Still, something went wrong – the devices’ personality is unrecognized and they are in inactive state.
A device in inactive state cannot be recovered and disks cannot be added to it:

livecd ~ # mdadm --add /dev/md125 /dev/sda3
mdadm: Cannot get array info for /dev/md125

In general, to recover a RAID in inactive state:

  1. Check if the kernel modules are loaded. If the RAID setup uses RAID1, the “Personalities” line in /proc/mdstat should include it as “[raid1]”.
  2. Try to run the device with “mdadm --run”.
  3. Add the missing device to the RAID device with “mdadm --add” if the status of the RAID device goes to “active (auto-read-only)” or just “active” – a short example follows this list.
  4. Wait for the RAID device to recover.
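A short sketch of steps 2 and 3, using the devices from the example above:

# try to run (activate) the inactive array
mdadm --run /dev/md125
# add the missing disk back to the array
mdadm --add /dev/md125 /dev/sda3
# watch the recovery progress
cat /proc/mdstat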

Keep on reading!

Mirror a PPA repositories using aptly – PHP (ppa:ondrej/php)

This is a simple example of how to mirror a PPA repository to a local server. The Ubuntu PPA to mirror is ppa:ondrej/php, which offers the user different PHP versions generally not available in the Ubuntu installation. Of course, the user should be very careful about adding PPA repositories, because they are exactly what the abbreviation stands for – Personal Package Archives.

If you want to know how to install aptly and what it is in brief, you may want to read our previous article – Install aptly under Ubuntu 18 LTS with Nginx serving the packages and the first steps

What we are going to do – this is what you need to have a mirror of an external application repository:

  1. Install aptly in Ubuntu 18 LTS
  2. Create a mirror in aptly
  3. Create a snapshot of the mirror created before
  4. Publish the snapshot to be used in other servers.

and at the last step there is an example of how to use the mirror on your local machines.

STEP 1) Install aptly in Ubuntu 18.04 LTS.

As mentioned already you may follow our article on the subject – Install aptly under Ubuntu 18 LTS with Nginx serving the packages and the first steps. The following steps are based on this installation!
The aptly home directory is in “/srv/aptly”. We use the “aptly” user and change to it to manipulate the aptly installation.
Change the user to aptly, because under this user the mirror process will happen.

root@srv ~ # su - aptly
aptly@srv:~$

STEP 2) Create a mirror in aptly.

Prepare the keys (aptly needs to have the Ubuntu keys in its trustedkeys keyring):

aptly@srv:~$ gpg --no-default-keyring --keyring trustedkeys.gpg --keyserver pool.sks-keyservers.net --recv-keys 4F4EA0AAE5267A6C
gpg: requesting key E5267A6C from hkp server pool.sks-keyservers.net
gpg: key E5267A6C: public key "Launchpad PPA for Ondřej Surý" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

Here we’ve used the method of obtaining the key from a GPG key server, but the key can also be downloaded directly from the original repository, as suggested in the error message below.
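With the key in the trustedkeys keyring, the mirror itself can be created. A hedged sketch of what the command may look like (the mirror name ondrej-php is arbitrary; the archive URL and the bionic distribution follow the standard Launchpad PPA layout):

aptly mirror create -architectures=amd64 ondrej-php http://ppa.launchpad.net/ondrej/php/ubuntu bionic main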
If you are not sure where to download the key, you could always just try to create the mirror (in fact, this is done in STEP 3) and get the error about the missing key along with instructions on how to obtain it:
Keep on reading!

Overwrite Return-Path with postfix because of “550-Sender verification is required but failed”

Sending emails from web applications like PHP may result in the emails being rejected by some servers. Fighting spam leads to too strict filters and rules, which reject the mails even before they reach the anti-spam service of the accepting server. Here is an example of such an error:

Apr  1 04:10:18 srv-mail postfix/pickup[26902]: AB13578FAB3: uid=1015 from=<www-data>
Apr  1 04:10:18 srv-mail postfix/cleanup[21182]: AB13578FAB3: message-id=<20200401041018.AB13578FAB3@www.mydomain.com>
Apr  1 04:10:18 srv-mail postfix/qmgr[6485]: AB13578FAB3: from=<www-data@www.mydomain.com>, size=7923, nrcpt=1 (queue active)
Apr  1 04:10:19 srv-mail postfix/smtp[45689]: AB13578FAB3: to=<mailbox@example.com>, relay=mx.example.com[1.1.1.1]:25, delay=11, delays=0.02/0.01/0.65/10, dsn=5.0.0, status=bounced (host mx.example.com[1.1.1.1] said: 550-Sender verification is required but failed. (ID:550:0:5 550 (smtp1.mx.example.com)): www-data@mydomain.com (in reply to MAIL FROM command))

The receiving server has too strict rules!

It just expects the “From” and the “Return-Path” headers to contain the same string – the sender’s email box.

As you can see from the example above, the application sends all emails (from, let’s say, web forms) from www-data@mydomain.com – www-data is probably the username of the OS user under which the application executes.
Or you may want to overwrite the Return-Path because it contains the username of the application which sent the email – “web”, “apache”, “www-data” and so on.
Here is how to overwrite the Return-Path with postfix mail system.

STEP 1) Edit postfix configuration

Add a line in /etc/postfix/main.cf (it is perfectly fine to be on the last line):

smtp_generic_maps = hash:/etc/postfix/generic

And create the file /etc/postfix/generic with mapping “old@mailbox.com new@mail.com”:

www-data@mydomain.com no-reply@domain.com

The domains of the emails may be different or the same – it doesn’t matter. If you do not know what your “www-data@mydomain.com” address is, the mail logs in /var/log/messages or /var/log/mail might help you find the mailbox, or just send yourself an email and look for the Return-Path.
And a real-world example for /etc/postfix/generic

www-data@www.mydomain.com no-reply@ahelpme.com

STEP 2) Generate the hash file, which postfix will use. Reload the postfix.

Postfix will use the hash file added in the configuration. Just execute:

postmap /etc/postfix/generic

The above command creates a binary file /etc/postfix/generic.db, which will be used by the postfix mail system. Do not edit this file directly. To add an entry, just use a text editor to edit /etc/postfix/generic (without the “.db” suffix) and then reload/restart postfix to enable the new configuration.
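To verify the mapping is actually picked up from the compiled map, postmap can query it directly (addresses from the real-world example above):

# should print no-reply@ahelpme.com
postmap -q "www-data@www.mydomain.com" hash:/etc/postfix/generic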
And reload (or restart) postfix with

systemctl reload postfix

or for init systems:

/etc/init.d/postfix restart

Supermicro H11DSU-iN and dual AMD EPYC 7282 with Linux installed – system (GCC flags), kernel and device information

Here is what you can expect if you have the new Linux kernel 5.5.11 installed on a server with dual AMD EPYC 7282 CPUs.

  • 8x SATA Controllers – SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
  • 4x Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
  • 8x RAM DIMMs in eight-channel mode

lspci – devices

Note there are 8 SATA controllers (SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)) for a total of 14 SATA3, 2 SATA DOM, 4 NVMe!
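A quick way to confirm the count on the running system:

# count the SATA controllers reported by lspci
lspci | grep -c 'SATA controller'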

00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 0
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 1
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 2
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 3
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 4
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 5
00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 6
00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 7
00:19.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 0
00:19.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 1
00:19.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 2
00:19.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 3
00:19.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 4
00:19.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 5
00:19.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 6
00:19.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 7
01:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
01:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
02:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
02:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
02:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Starship USB 3.0 Host Controller
20:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
20:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
20:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
20:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
20:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
20:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
21:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
21:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
22:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
22:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
22:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
22:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Starship USB 3.0 Host Controller
23:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
24:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
40:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
40:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
40:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
40:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
40:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
40:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
40:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
41:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
41:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
41:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
41:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
42:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
42:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
43:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
43:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
44:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
45:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
60:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
60:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
60:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
60:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
60:03.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
60:03.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
60:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:04.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
60:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
60:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
63:00.0 USB controller: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller
64:00.0 USB controller: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller
65:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 04)
66:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
67:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
67:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
68:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
68:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
80:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
80:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
80:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
80:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
81:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
81:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
82:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
82:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
82:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Starship USB 3.0 Host Controller
a0:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
a0:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
a0:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
a0:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
a0:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
a0:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
a0:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
a0:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
a0:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
a0:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
a0:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
a0:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
a0:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
a1:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
a1:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
a2:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
a2:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
a2:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
a2:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Starship USB 3.0 Host Controller
a3:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
a4:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
c0:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
c0:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
c0:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
c0:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
c0:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
c0:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
c1:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
c1:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
c2:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
c2:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
c3:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
c4:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
e0:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
e0:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
e0:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
e0:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
e0:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
e0:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
e0:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
e0:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
e0:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
e0:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
e0:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
e0:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
e0:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
e3:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
e3:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
e4:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
e4:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA

Keep on reading!