aptly publish ERROR: unable to publish: unable to process packages: error linking file to

We’ve encountered the following error when issuing a publish command:

aptly@aptly-server:~$ aptly --config=/mnt/storage/aptly/.aptly.conf publish snapshot xenial-myrepo-initial ubuntu
Loading packages...
Generating metadata files and linking package files...
ERROR: unable to publish: unable to process packages: error linking file to /mnt/storage/aptly/.aptly/public/ubuntu/pool/main/s/sftpcloudfs/sftpcloudfs_0.12.2-2_all.deb: file already exists and is different

The snapshot failed to publish. First, check whether the file is owned by “aptly:aptly” (or whatever user and group your installation uses): if someone has executed aptly commands as the root user, some files may have been created owned by root (or another user), and subsequent commands can fail. In our case the file had the right owner for aptly, and the solution was simply to remove the file manually (yes, it is safe to remove it!); it was recreated by the publish run at the right time. Then execute the publish command again:

aptly@aptly-server:~$ rm /mnt/storage/aptly/.aptly/public/ubuntu/pool/main/s/sftpcloudfs/sftpcloudfs_0.12.2-2_all.deb 
aptly@aptly-server:~$ aptly --config=/mnt/storage/aptly/.aptly.conf publish snapshot xenial-myrepo-initial ubuntu
Loading packages...
Generating metadata files and linking package files...
Finalizing metadata files...
Signing file 'Release' with gpg, please enter your passphrase when prompted:
Clearsigning file 'Release' with gpg, please enter your passphrase when prompted:

Snapshot xenial-myrepo-initial has been successfully published.
Please setup your webserver to serve directory '/mnt/storage/aptly/.aptly/public' with autoindexing.
Now you can add following line to apt sources:
  deb http://your-server/ubuntu/ xenial-myrepo main
  deb-src http://your-server/ubuntu/ xenial-myrepo main
Don't forget to add your GPG key to apt with apt-key.

You can also use `aptly serve` to publish your repositories over HTTP quickly.

Common causes of this error are:

  • File permissions
  • File ownership. As mentioned above, an aptly command executed by another user (such as root) can leave files owned by that user. It is usually a good idea to chown the whole aptly root directory recursively (see the sketch below).
  • Interrupting the publish command execution
  • Interrupting the drop command execution

The solution is simple: just remove the offending file(s) and execute the command again. It is safe to remove the files manually.
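A minimal sketch of the ownership check and fix, assuming the aptly user and group are aptly:aptly and the aptly root directory is /mnt/storage/aptly as in the output above (adjust both to your installation):

aptly@aptly-server:~$ ls -l /mnt/storage/aptly/.aptly/public/ubuntu/pool/main/s/sftpcloudfs/
root@aptly-server:~# chown -R aptly:aptly /mnt/storage/aptly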

Recovering MD array and mdadm: Cannot get array info for /dev/md0

What a case! To make a long story short, one of the disks in a software RAID1 setup went bad, and when we tried to replace it from a rescue Linux console we got this strange error from the MD device:

mdadm: Cannot get array info for /dev/md125

According to /proc/mdstat the device was there, and mdadm -E reported the array was “clean”.

root@631019 ~ # mdadm --add /dev/md125 /dev/sdb2
mdadm: Cannot get array info for /dev/md125

root@631019 ~ # cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10] 
md122 : inactive sda4[0](S)
      33520640 blocks super 1.2
       
md123 : inactive sda5[0](S)
      1914583040 blocks super 1.2
       
md124 : inactive sda3[0](S)
      4189184 blocks super 1.2
       
md125 : inactive sda2[0](S)
      1048512 blocks
       
unused devices: <none>

root@631019 ~ # mdadm -E /dev/sda2
/dev/sda2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : aff708ee:16669ffb:1a120e13:7e9185ae
  Creation Time : Thu Mar 14 15:10:21 2019
     Raid Level : raid1
  Used Dev Size : 1048512 (1023.94 MiB 1073.68 MB)
     Array Size : 1048512 (1023.94 MiB 1073.68 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 126

    Update Time : Thu Jul 11 10:22:17 2019
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : c1ee0a10 - correct
         Events : 103


      Number   Major   Minor   RaidDevice State
this     0       8        2        0      active sync   /dev/sda2

   0     0       8        2        0      active sync   /dev/sda2
   1     1       8       18        1      active sync   /dev/sdb2

The important piece of information here is that the RAID1 array is in an inactive state, which is really strange! It is perfectly normal for it to be started with one disk missing (as you can see, the array consists of 2 disks) and in read-only mode before being mounted, but here it is inactive. The /proc/mdstat output points to an incorrect assembly of all these arrays, probably during the boot of the rescue Linux system – missing configuration, an old version of the mdadm utility, or some other configuration being loaded. In such a state – inactive and, as you can see, with no information about the array type – it is normal for mdadm to report that it cannot get the current array info. The key word here is CURRENT, even though mdadm omits it from the error output:

root@631019 ~ # mdadm --add /dev/md125 /dev/sdb2
mdadm: Cannot get array info for /dev/md125

In fact, mdadm tries to add the disk to the currently loaded configuration, not to the real one on your disks!

The solution

  1. Remove ALL of the currently loaded configuration by issuing multiple stop commands with mdadm; no arrays, inactive or otherwise, should be reported in “/proc/mdstat” afterwards.
  2. Remove (or, better, rename) the mdadm configuration file /etc/mdadm.conf (in some Linux distributions it is /etc/mdadm/mdadm.conf).
  3. Rescan for MD devices with mdadm; it will load the configuration from your disks.
  4. Add the missing partitions to your software RAID devices (see the command sketch below).
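A command sketch of these four steps, using the device and partition names from the example above; the names and the backup file name are only illustrations, and after the rescan the arrays may come up under different mdN names (note the “Preferred Minor : 126” above), so check /proc/mdstat before adding:

root@631019 ~ # mdadm --stop /dev/md122
root@631019 ~ # mdadm --stop /dev/md123
root@631019 ~ # mdadm --stop /dev/md124
root@631019 ~ # mdadm --stop /dev/md125
root@631019 ~ # mv /etc/mdadm.conf /etc/mdadm.conf.bak
root@631019 ~ # mdadm --assemble --scan
root@631019 ~ # cat /proc/mdstat
root@631019 ~ # mdadm --add /dev/md125 /dev/sdb2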


aptly publish: gpg: no default secret key: secret key not available

This is also a common error in a typical aptly installation. The other two common errors related to the GPG keys are “aptly publish: ERROR: unable to initialize GPG signer. Missing pubring.gpg keys” and “aptly mirror – gpgv: Can’t check signature: public key not found”. The secret key is used when you publish a repository (snapshot or mirror).

root@srv-aptly ~ # aptly publish snapshot xenial-myrepo-initial
Loading packages...
Generating metadata files and linking package files...
 15683 / 107250 [====================>--------------------------------------------------------------------------------------------------------------------]  14.62% 2h53m50s 
17025 / 107250 [=====================>--------------------------------------------------------------------------------------------------------------------]  15.87% 3h5m15sFinalizing metadata files...
Signing file 'Release' with gpg, please enter your passphrase when prompted:
gpg: no default secret key: secret key not available
gpg: signing failed: secret key not available
ERROR: unable to publish: unable to detached sign file: exit status 2

You are unable to sign the Release file because the keyring secring.gpg is missing a GPG key. Either create one or import the GPG key from your current server’s secring.gpg keyring (for the root user it is /root/.gnupg/secring.gpg; in general the default path is /[my-aptly-home-directory]/.gnupg/secring.gpg).

Here is an example with two servers, exporting the key from your current server and importing it on your new (second) server:

Export the secring.gpg GPG key from your server
root@srv-aptly-1:~ # gpg --list-keys --keyring secring.gpg
/root/.gnupg/secring.gpg
------------------------
pub   2048D/FDC7A25E 2017-09-16
uid                  My-aptly (aptly key no passphrase) <my-aptly@example.com>

root@srv-aptly-1:~ # gpg --no-default-keyring --keyring secring.gpg --export --armor FDC7A25E > FDC7A25E.key
Copy the file to the second server (FDC7A25E.key) and then import it in keyring secring.gpg
root@srv-aptly-2:~ # cat ./FDC7A25E.key| gpg --no-default-keyring --keyring secring.gpg --import
gpg: key FDC7A25E: public key "My-aptly (aptly key no passphrase) <my-aptly@example.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1

And now you can publish your repository with:

root@srv-aptly-2: ~ # aptly publish snapshot xenial-myrepo-initial ubuntu
Loading packages...
Generating metadata files and linking package files...
Finalizing metadata files...
Signing file 'Release' with gpg, please enter your passphrase when prompted:
Clearsigning file 'Release' with gpg, please enter your passphrase when prompted:

Snapshot xenial-myrepo-initial has been successfully published.
Please setup your webserver to serve directory '/mnt/storage/aptly/.aptly/public' with autoindexing.
Now you can add following line to apt sources:
  deb http://your-server/ubuntu/ xenial-myrepo main
  deb-src http://your-server/ubuntu/ xenial-myrepo main
Don't forget to add your GPG key to apt with apt-key.

You can also use `aptly serve` to publish your repositories over HTTP quickly.

The publish operation completed successfully.

Generate GPG Key

If you came here after installing a new aptly server and got the error mentioned above, you are missing a GPG key in the keyring secring.gpg.

root@srv-aptly: ~# gpg --default-new-key-algo rsa4096 --gen-key --keyring secring.gpg
gpg (GnuPG) 2.2.11; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Note: Use "gpg --full-generate-key" for a full featured key generation dialog.

GnuPG needs to construct a user ID to identify your key.

Real name: My-aptly
Email address: my-aptly@example.com
You selected this USER-ID:
    "MyName <my-aptly@example.com>"

Change (N)ame, (E)mail, or (O)kay/(Q)uit? O
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key B14B67D0CF27191B marked as ultimately trusted
gpg: revocation certificate stored as '/root/.gnupg/openpgp-revocs.d/77EC42A1F16127C83509292BB14B67D0CF27191B.rev'
public and secret key created and signed.

Note that this key cannot be used for encryption.  You may want to use
the command "--edit-key" to generate a subkey for this purpose.
pub   rsa4096 2019-07-08 [SC] [expires: 2021-07-07]
      77EC42A1F16127C83509292BB14B67D0CF27191B
uid                      MyName <my-aptly@example.com>

NOTE

Just to note: all the examples here use the root user, so the GPG keys are for the root user. You may use a different user for the aptly process, and then you must ensure the GPG keys are present for that user (the directories and files are the same, only the home directory differs – the home directory of the aptly user, i.e. “/[my-aptly-home-directory]/.gnupg/secring.gpg”, and “/[my-aptly-home-directory]/.gnupg/” for all the other GPG files).

aptly publish: ERROR: unable to initialize GPG signer. Missing pubring.gpg keys

In continuation of our aptly common mistakes, here is one more, encountered while building a second aptly server to mirror the master (you may hit this error in many other situations, not only when building a mirror aptly server). Again the problem is a GPG key, just like in “aptly mirror – gpgv: Can’t check signature: public key not found”, but this time it occurs when you try to publish a snapshot of your mirrored repository.

By default, aptly uses the GPG key in the keyring pubring.gpg (/root/.gnupg/pubring.gpg for the root user).

Even if you have the same key in other keyrings such as trustedkeys.gpg, you won’t be able to use it for the signing process when publishing the aptly snapshot.

Here is the error:

root@srv-aptly-2:~ # aptly publish snapshot myrepo-initial
ERROR: unable to initialize GPG signer: looks like there are no keys in gpg, please create one (official manual: http://www.gnupg.org/gph/en/manual.html)

The solution is to export the key from the pubring.gpg keyring on the old server and then import it into the pubring.gpg keyring on the new server. After that you won’t receive the error when publishing a snapshot with aptly. If your case is not a second server but your first aptly server, you must generate the GPG key in pubring.gpg (see the end of this section for how to do it and skip the lines below about GPG key export and import).

Export the pubring.gpg GPG key from your server
root@srv-aptly-1:~ # gpg --list-keys --keyring pubring.gpg
/root/.gnupg/pubring.gpg
------------------------
pub   2048D/FDC7A25E 2017-09-16
uid                  My-aptly (aptly key no passphrase) <my-aptly@example.com>

root@srv-aptly-1:~ # gpg --no-default-keyring --keyring pubring.gpg --export --armor FDC7A25E > FDC7A25E.key
Copy the file to the second server (FDC7A25E.key) and then import it in keyring pubring.gpg
root@srv-aptly-2:~ # cat ./FDC7A25E.key| gpg --no-default-keyring --keyring pubring.gpg --import
gpg: key FDC7A25E: public key "My-aptly (aptly key no passphrase) <my-aptly@example.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1
root@srv-aptly-2:~ # aptly publish snapshot myrepo-initial
Loading packages...
Generating metadata files and linking package files...
 15683 / 107250 [====================>--------------------------------------------------------------------------------------------------------------------]  14.62% 2h53m50s 17025 / 107250 [=====================>--------------------------------------------------------------------------------------------------------------------]  15.87% 3h5m15sFinalizing metadata files..

Generate GPG Key

If you came here after installing a new aptly server and got the error mentioned above, you are missing a GPG key in the keyring pubring.gpg.

root@srv-aptly: ~# gpg --default-new-key-algo rsa4096 --gen-key --keyring pubring.gpg
gpg (GnuPG) 2.2.11; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Note: Use "gpg --full-generate-key" for a full featured key generation dialog.

GnuPG needs to construct a user ID to identify your key.

Real name: My-aptly
Email address: my-aptly@example.com
You selected this USER-ID:
    "MyName <my-aptly@example.com>"

Change (N)ame, (E)mail, or (O)kay/(Q)uit? O
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key B14B67D0CF27191B marked as ultimately trusted
gpg: revocation certificate stored as '/root/.gnupg/openpgp-revocs.d/77EC42A1F16127C83509292BB14B67D0CF27191B.rev'
public and secret key created and signed.

Note that this key cannot be used for encryption.  You may want to use
the command "--edit-key" to generate a subkey for this purpose.
pub   rsa4096 2019-07-08 [SC] [expires: 2021-07-07]
      77EC42A1F16127C83509292BB14B67D0CF27191B
uid                      MyName <my-aptly@example.com>

NOTE

Just to note: all the examples here use the root user, so the GPG keys are for the root user. You may use a different user for the aptly process, and then you must ensure the GPG keys are present for that user (the directories and files are the same, only the home directory differs – the home directory of the aptly user, i.e. “/[my-aptly-home-directory]/.gnupg/pubring.gpg”, and “/[my-aptly-home-directory]/.gnupg/” for all the other GPG files).

Really bad performance when going from Write-Back to Write-Through in an LSI controller

Ever wondered what the impact of the write-through mode of an LSI controller is on a real-world streaming server? Wonder no more!

you can end up several times slower with the write-through mode than if your controller were using the write-back mode of the cache

And it can happen at any moment, because when the battery of the LSI controller is charging and you have set “No Write Cache if Bad BBU”, write-through kicks in. Of course, you can schedule the battery charging/discharging process, but in general it will happen, and it will hurt your I/O performance a lot!

In simple words, in write-through mode a write operation is successful only when the controller confirms the write on all disks, no matter whether the data is already in the cache.

This mode puts pressure on the disks, and write-through is a known destroyer of hard disks! You can read a lot of administrators’ feedback on the Internet about disks that crashed while using write-through mode (sometimes several at once on one machine, losing all the data even with the redundancy of RAID setups such as RAID1, RAID5, RAID6 or RAID10).

srv ~ # sudo megacli -ldinfo -lall -aall
                                     
Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :system
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
Size                : 13.781 TB
Sector Size         : 512
Mirror Data         : 13.781 TB
State               : Optimal
Strip Size          : 128 KB
Number Of Drives per span:2
Span Depth          : 6
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Disk's Default
Encryption Type     : None
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only

Exit Code: 0x00

As you can see, our default cache policy is WriteBack with “No Write Cache if Bad BBU”; the BBU is not bad, but it is charging, so the current policy has fallen back to WriteThrough!
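If you accept the risk of losing the data in the controller cache on a power failure, the virtual drives can keep the write cache even while the BBU is bad or charging. A sketch with MegaCli – the property name CachedBadBBU is an assumption to verify against your MegaCli version’s help output before running it:

srv ~ # sudo megacli -LDSetProp CachedBadBBU -LAll -aAll
srv ~ # sudo megacli -LDGetProp -Cache -LAll -aAll

This trades data safety for performance, which is exactly why “No Write Cache if Bad BBU” is the default policy.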

aptly mirror: ERROR: unable to update: no candidates for debian-installer/binary-amd64/Packages found

Always check what the source repository supports before trying to mirror it! We lost some time before discovering that our source repository does not provide udeb and source packages! If you create a mirror with “-with-sources=true -with-udebs=true”, the update process will request files that may not exist in the source repository if it does not offer udeb or source files, and you’ll end up with a broken mirror and an error about a missing file!

Downloading & parsing package files...
Downloading http://aptly.example.com/ubuntu/dists/xenial-myrepos/main/binary-amd64/Packages.bz2...
ERROR: unable to update: no candidates for http://aptly-master.example.com/ubuntu/dists/xenial-myrepo/main/debian-installer/binary-amd64/Packages found

If you get an error about “debian-installer/binary-amd64/Packages” not being found, check whether the source repository offers udeb and/or source packages – probably it does not, so drop your mirror and recreate it with one or both of these options:

-with-sources=false -with-udebs=false
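A sketch of dropping and recreating such a mirror without sources and udebs; the mirror name xenial-myrepo-mirror is a placeholder, while the URL, distribution and component are taken from the error output above:

aptly mirror drop xenial-myrepo-mirror
aptly mirror create -with-sources=false -with-udebs=false xenial-myrepo-mirror http://aptly-master.example.com/ubuntu xenial-myrepo main
aptly mirror update xenial-myrepo-mirror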


Upload files and directories with swift in OpenStack

First, you need to install the swift command line utility, and here is how to do it: Install OpenStack swift client only
In general you will need:

  1. username (--os-username) – Username
  2. password (--os-password) – Password
  3. authentication URL (--os-auth-url) – The URL that authorizes your requests; it issues a security token for your operations. Always use https!
  4. tenant name (--os-tenant-name) – A tenant is like a project.

All of the above information should be available from your OpenStack administrator.
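Instead of passing these options on every command, you can export them once as the standard OS_* environment variables that the swift client reads (the values below are the same placeholders used in the examples that follow):

myuser@myserver:~$ export OS_USERNAME=myuser
myuser@myserver:~$ export OS_TENANT_NAME=mytenant
myuser@myserver:~$ export OS_PASSWORD=mypass
myuser@myserver:~$ export OS_AUTH_URL=https://auth-url.example.com/v2.0/
myuser@myserver:~$ swift list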
For the examples we assume there is a container “mytest” (it is like a main directory under the root). You cannot upload files into the root, because that is the place for containers only, i.e. directories. You must always upload files under a container (i.e. a directory, aka a folder); if the container does not exist yet, create it first as shown below.
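If the container does not exist yet, it can be created first with the post command (the credentials and the container name mytest are the same placeholders as in the upload example below):

myuser@myserver:~$ swift --os-username myuser --os-tenant-name mytenant --os-password mypass --os-auth-url https://auth-url.example.com/v2.0/ post mytest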

To upload a single file with the swift CLI, execute:

myuser@myserver:~$ swift --os-username myuser --os-tenant-name mytenant --os-password mypass --os-auth-url https://auth-url.example.com/v2.0/ upload mytest ./file1.log 
file1.log
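To upload a whole directory, point the upload command at the directory instead of a file; swift walks it recursively and stores each file under its relative path (the directory name mydir is a placeholder):

myuser@myserver:~$ swift --os-username myuser --os-tenant-name mytenant --os-password mypass --os-auth-url https://auth-url.example.com/v2.0/ upload mytest ./mydir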


aptly – ERROR: unable to remove: published repo with storage:prefix/distribution ./mytest-stable not found

Sometimes the user manual may be unclear, and you came here searching for a solution for dropping a published repository.
We have aptly version 1.3.0, and here is the right syntax to remove a published repository.

First list the published repositories, then take the reported “<name-distribution>/<release>” pair in reverse order, replacing the “/” with a space.

The commands will be:

aptly publish list
Published repositories:
  * <name-distribution>/<release> [amd64] publishes {main: [xenial-<name>]: Some description}
aptly publish drop -force-drop <release> <name-distribution>

“name-distribution” is the [name-distribution] part of the URL “http://aptly.example.com/[name-distribution]”. For example, the repository URL of myrepo is “http://aptly.example.com/myrepo”, so the name-distribution is “myrepo”.

A real world example

root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish list
Published repositories:
  * myrepo/stable [amd64] publishes {main: [xenial-myrepo]: Stable myrepo packages}
  * test/test [amd64] publishes {test: [test]: Test repo}
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish list --raw
myrepo stable
test test

We want to remove “myrepo/stable”:

root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop -force-drop stable myrepo
Removing /etc/aptly/.aptly/public/etc/dists...
Removing /etc/aptly/.aptly/public/etc/pool...

The published repository has been removed successfully.
root@srv-aptly:~#

The wrong syntax

You might have tried one of these – that’s why you came here:

root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish list           
Published repositories:
  * myrepo/stable [amd64] publishes {main: [xenial-myrepo]: Stable myrepo packages}
  * test/test [amd64] publishes {test: [test]: Test repo}
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish list --raw
myrepo stable
test test
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop myrepo
ERROR: unable to remove: published repo with storage:prefix/distribution ./myrepo not found
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop myrepo stable
ERROR: unable to remove: published repo with storage:prefix/distribution stable/myrepo not found
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop myrepo-stable
ERROR: unable to remove: published repo with storage:prefix/distribution ./myrepo-stable not found
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop -force-drop myrepo-stable
ERROR: unable to remove: published repo with storage:prefix/distribution ./myrepo-stable not found
root@srv-aptly:~# aptly --config="/etc/aptly/.aptly.conf" publish drop -force-drop myrepo stable
ERROR: unable to remove: published repo with storage:prefix/distribution stable/myrepo not found
root@srv-aptly:~#

aptly mirror – gpgv: Can’t check signature: public key not found

If you want to mirror repositories from your current aptly server to a new server, you must import the GPG key from your old server, because otherwise you are going to encounter the following error:

gpgv: Signature made Fri 22 Apr 2019 17:35:04 AM UTC using DSA key ID FDC7A25E
gpgv: Can't check signature: public key not found

Looks like some keys are missing in your trusted keyring, you may consider importing them from keyserver:

gpg --no-default-keyring --keyring trustedkeys.gpg --keyserver pool.sks-keyservers.net --recv-keys 181482CCFDC7A25E

Sometimes keys are stored in repository root in file named Release.key, to import such key:

wget -O - https://some.repo/repository/Release.key | gpg --no-default-keyring --keyring trustedkeys.gpg --import

ERROR: unable to fetch mirror: verification of detached signature failed: exit status 2

And the mirror command fails. The problem is:

you must import the GPG key from your old server into trustedkeys.gpg (even if you have already imported it on the new server with apt-key!!!)

Here is how to list, export and import it (we are going to import it into the default keyring and into trustedkeys.gpg, because it is more convenient, but importing into the default keyring is not mandatory).
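A sketch following the same export/import pattern as the sections above, with the key ID FDC7A25E from the example output (replace it with yours). Export on the old server:

root@srv-aptly-1:~ # gpg --export --armor FDC7A25E > FDC7A25E.key

Copy FDC7A25E.key to the new server and import it into the default keyring and into trustedkeys.gpg:

root@srv-aptly-2:~ # cat ./FDC7A25E.key | gpg --import
root@srv-aptly-2:~ # cat ./FDC7A25E.key | gpg --no-default-keyring --keyring trustedkeys.gpg --import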

Install OpenStack swift client only

To manage files in the OpenStack cloud you need the swift client. It is not such a tiny Python tool (it has a lot of dependencies), and it offers the following operations on files (files are objects in OpenStack):

  • delete – Delete files, directories and sub-directories. Be careful: with a single command you can delete all your files at once. “Delete a container or objects within a container.”
  • download – Download files to your local storage. “Download objects from containers.”
  • list – List all the files in a main directory (i.e. a container); it cannot list only the files of a sub-directory. The output is one file path per line, or you can enable extended output to show the file size, modification time, file type and full file path. “Lists the containers for the account or the objects for a container.”
  • post – Creates containers and updates metadata; it can also create security keys for temporary URLs. “Updates meta information for the account, container, or object; creates containers if not present.”
  • copy – Copy files to a new place within a container or to a different container. “Copies object, optionally adds meta”
  • stat – Display information for files, containers and your account. No information is available per directory. Most typically you will read the file length, type and modification date. “Displays information for the account, container, or object”
  • upload – Uploads files and whole directories into a container. “Uploads files or directories to the given container.”
  • capabilities – List the configuration options of your account, such as file size limits, object count limits, request rate limits and so on, and which additional middleware your instance supports, like bulk_delete and bulk_upload (the names are self-explanatory). “List cluster capabilities.”
  • tempurl – Generate a temporary URL for a file, protected with a time validity and a key. “Create a temporary URL.”
  • auth – Show your authentication token and storage URL. “Display auth related environment variables.”

The quoted text is from the swift help command. If you do not know what a container is: in simple words, containers are the main sub-directories in your account, which you see when you use the list command without any argument (without adding a name after “list”):

myuser@myserver:~$ swift --os-username myusr --os-tenant-name myusr --os-password mypass --os-auth-url https://auth01.example.com:5000/v2.0 list
mycontainer1
anyname

The best way is to install the swift client from the package system of your Linux distribution. The package system ensures the programs you install and upgrade stay consistent across dependency upgrades.

When you install the package “python-swiftclient” it depends on multiple Python packages, which might be upgraded later, too, but the package system of a Linux distribution ensures the programs of python-swiftclient keep working with them.

On the contrary, if you install it manually, even with “pip” as offered on the OpenStack site, it is unclear what will happen, and your client program from “python-swiftclient” could even stop working because of an upgraded dependency library. Of course, the packages from the official package system could be a little older (but probably more stable) than the manual installs from the official OpenStack site. Still, if you would like to install it with “pip”, here you can check how to do it: https://www.swiftstack.com/docs/integration/python-swiftclient.html
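A sketch of the package install; the package name is an assumption to verify against your distribution (on current Debian/Ubuntu it is usually python3-swiftclient, on older releases and on CentOS/RHEL with the OpenStack repositories it is python-swiftclient):

myuser@myserver:~$ sudo apt install python3-swiftclient

Or, if you go the manual route anyway, with pip (python-keystoneclient is needed for the keystone authentication used in the examples above):

myuser@myserver:~$ pip install python-swiftclient python-keystoneclient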