Debug Ubuntu preseed failure – select and install software

Preparing a preseed file for an unattended installation can sometimes be challenging. This article presents the right way to analyze an installation failure in one of the main steps – “select and install software”.
Here is the Ubuntu preseed file for our Bionic unattended installation, which uses “pkgsel” to install the first packages in the new system:

d-i pkgsel/include string openssh-server wget vim git python gpg ntp
d-i pkgsel/upgrade select full-upgrade
d-i pkgsel/update-policy select unattended-upgrades

When an installation step in the preseed of an unattended installation fails, the setup stops with a “Continue” confirmation.

“select and install software” – the step failed

Here is what you can do to check what exactly fails in step “select and install software”:

  1. Start a shell in the current installation. Press “Ctrl+Alt+F2” to open the shell. You may use “Ctrl+Alt+F3” and “Ctrl+Alt+F4” for two more consoles and “Ctrl+Alt+F1” to return to the installation wizard.
  2. Check /var/log/syslog – this is the file where debconf writes its logging information.
  3. Find the lines where the step “select and install software” starts and look for errors after that. In this file, you can see the titles of all the steps the setup passes through, and they are named the same way as the window titles in the installation wizard (see the quick sketch after this list).
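
A quick, hedged sketch of how such a check could look from the second console (the grep pattern and the line number below are only assumptions for illustration; adjust them to the exact step title and line you see):

# open the second console with Ctrl+Alt+F2, then:
# show the last part of the installer log
tail -n 50 /var/log/syslog
# find where the failing step starts
grep -n "select and install software" /var/log/syslog
# print the log from that line onwards (replace 1234 with the line number found above)
tail -n +1234 /var/log/syslog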

Here is the real-world output:

Press “CTRL+ALT+F2” to start the BusyBox built-in shell, which is ash, not bash!

Be careful, there are some differences between ash and bash.

Installation wizard – BusyBox built-in shell (ash)

The last 20 lines show the problem – pkgsel failed to install packages in the “select and install software” step.

The installation wizard stops.

debconf logging using syslog – pkgsel

The problem is in the “ntp” package – the setup cannot install it because of unmet dependencies.

Because it is not so important to install ntp at this stage, we added the package to the script executed in “preseed/late_command” and removed it from the pkgsel line in the preseed file. In general, our problem occurred because we had set local repositories for the Bionic packages, but the setup cannot update the list of available packages when the Bionic mirror is set to an unofficial local repository.
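
A minimal sketch of how the package could be moved to the late stage (this only illustrates the approach, not the exact line from our preseed file – the in-target install still needs a working repository at that point):

d-i preseed/late_command string in-target apt-get install -y ntp || true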

The package cannot be installed because of unmet dependencies

Install aptly under Ubuntu 18 LTS with nginx serving the packages and the first steps

This article shows how to install the aptly software, which offers easy Debian repository management.
First, a few words about aptly and which tasks are really simple to do with it:

  • Mirror an existing (remote) repository. Make a local copy of Debian or Ubuntu repositories for all your internal infrastructure.
  • Create your own repositories
  • Create snapshots of repositories and mirrors.
  • Merge repositories
  • Make a diff between repositories (in fact, between snapshots of repositories – you may make a mirror of a repository, then make a snapshot of it, and then diff it against some other snapshot to see the changes between different repositories or between the times the snapshots were made).
  • Remove or add individual packages from official mirrored repositories.
  • Use API calls to manage the repositories. The HTTP REST API is still in development, but a big part of it works.

For more information you may visit the official documentation page – https://www.aptly.info/doc/overview/
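
To give an idea of how simple the workflow is, here is a hedged sketch of mirroring a repository and publishing a snapshot of it (the mirror name, URL, distribution and snapshot name are just example values, not a recipe from this article):

# create a local mirror of the bionic main component (example URL and names)
aptly mirror create bionic-main http://archive.ubuntu.com/ubuntu/ bionic main
# download the packages for the mirror
aptly mirror update bionic-main
# take a snapshot of the mirror at this point in time
aptly snapshot create bionic-main-2019-06 from mirror bionic-main
# publish the snapshot so a web server can serve it (publishing normally requires a GPG key for signing)
aptly publish snapshot bionic-main-2019-06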

We are going to install aptly and, although it could be used to serve the repository files itself, we will use the Nginx web server for this work. Nginx is a faster and more reliable web server, with easy installation of SSL certificates for our repositories.
aptly is included in the official Ubuntu repositories in the universe component, but at present it is 2 to 3 versions behind the stable one from the aptly site, so we are going to use their repository to install aptly. Still, if you do not want to use…
Keep on reading!

Install OpenStack swift client only

To manage files in the OpenStack cloud you need the swift client. It is not such a tiny Python tool (it has a lot of dependencies), and it offers the following operations on files (files are objects in OpenStack):

  • delete – Delete files, directories and sub-directories. Be careful: with a simple command you can delete all your files at once. “Delete a container or objects within a container.”
  • download – Download files to your local storage. “Download objects from containers.”
  • list – List all files in the main directory (i.e. the container); files in sub-directories cannot be listed. The output will be one file path per line, or you can enable extended output to show the file size, time modified, the type of the file and the full file path. “Lists the containers for the account or the objects for a container.”
  • post – Creates containers and updates metadata; it can also create security keys for temporary URLs – “Updates meta information for the account, container, or object; creates containers if not present.”
  • copy – Copy files to a new place within a container or to a different container. “Copies object, optionally adds meta”
  • stat – Display information for files, containers and your account. No information is available per directory. Most typically you will read the file length, the type and the modification date. “Displays information for the account, container, or object”
  • upload – Uploads files and whole directories into a container – “Uploads files or directories to the given container.” You can read our article here – Upload files and directories with swift in OpenStack
  • capabilities – List the configuration options of your account like file size limits, file count limits, request-per-second limits and so on, and which additional middleware your instance supports like bulk_delete, bulk_upload (the names are self-explanatory) – “List cluster capabilities.”
  • tempurl – Generate a temporary url for a file protected with time validity and a key – “Create a temporary URL.”
  • auth – Show your authentication token and storage url – “Display auth related environment variables.”

The text in the quotation marks is from the swift help command. If you do not know what a container is, in simple words these are the main sub-directories in your account, the ones you see if you use the list command without any argument (not adding a name after “list”):

myuser@myserver:~$ swift --os-username myusr --os-tenant-name myusr --os-password mypass --os-auth-url https://auth01.example.com:5000/v2.0 list
mycontainer1
anyname
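
For example, downloading a single object from one of those containers could look like this (the container and object names below are just the hypothetical ones from the listing above):

myuser@myserver:~$ swift --os-username myusr --os-tenant-name myusr --os-password mypass --os-auth-url https://auth01.example.com:5000/v2.0 download mycontainer1 path/to/file.txt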

The best way is to install the swift client from the package system of your Linux distribution. The package system ensures the programs you install and upgrade stay consistent across dependency upgrades.

When you install the “python-swiftclient” package it depends on multiple Python packages, which might be upgraded later, too, but the package system of a Linux distribution ensures the programs of python-swiftclient will keep working with them. On the contrary, if you install it manually, even with “pip” as it is offered on the OpenStack site, it is unsure what will happen, and your client program from “python-swiftclient” could even stop working because of an upgraded dependency library. Of course, the packages from the official package system could be a little bit older (but probably more stable) than the manual installs from the OpenStack official site. Still, if you would like to install it with “pip”, here you can check how to do it: https://www.swiftstack.com/docs/integration/python-swiftclient.html
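
Under Ubuntu/Debian, installing it from the distribution’s package system is a single command (this assumes the package is named python-swiftclient, as it is in the Ubuntu repositories at the time of writing):

sudo apt update
sudo apt install -y python-swiftclient
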
Keep on reading!

Ubuntu with PHP 7.2 and mcrypt module

As mentioned in our previous article PHP 7.2 or PHP 7.3 with mcrypt – manual build, the PHP versions 7.2 and 7.3 do not include the PHP mcrypt module. The mcrypt module was part of PHP 5 up to 7.1, in which it was deprecated, and it was removed in 7.2.
In this article we show how to build the mcrypt module for Ubuntu, based on our previous article mentioned above. We decided to make this article because of the great popularity of Ubuntu and because it has no PHP mcrypt package in the Ubuntu package system, unlike other Linux distributions (like Gentoo, which created a package).

For our purpose we use Ubuntu 18.04.2 LTS and here are the steps to get the mcrypt PHP module:

STEP 1) Update and install the mcrypt library and header development package

sudo apt update -y
sudo apt install -y libmcrypt-dev

STEP 2) Install the GNU GCC build utility and the PHP development package

This is the compiler to build the module.

sudo apt install -y build-essential
sudo apt install -y php7.2-dev

STEP 3) Download the PHP mcrypt module and build it.

cd
mkdir mcrypt-php-module-manual
cd mcrypt-php-module-manual
# download and unpack the mcrypt module source from PECL
wget https://pecl.php.net/get/mcrypt-1.0.2.tgz
tar xzf mcrypt-1.0.2.tgz
cd mcrypt-1.0.2
# prepare the PHP extension build environment and generate the configure script
phpize
aclocal
libtoolize --force
autoheader
autoconf
# configure, build and install the module
./configure
make
sudo make install

STEP 4) Load the module in the PHP configuration (we use PHP-FPM and PHP-CLI) and block future PHP versions from being installed when apt upgrade is used.

Because we compile the PHP mcrypt module against the specific, currently installed PHP, we do not want PHP to be upgraded when there is an update and the mcrypt module then failing to load. Each change of the PHP version (an upgrade) would require a recompile against the current PHP version. For more on holding and unholding Ubuntu packages, see our article apt-mark – upgrade with the exception of certain packages. Of course, if there is an update for PHP you must install it, just recompile the mcrypt package, too!

echo "extension=mcrypt.so" > 20-mcrypt.ini
sudo cp 20-mcrypt.ini /etc/php/7.2/cli/conf.d/20-mcrypt.ini
sudo cp 20-mcrypt.ini /etc/php/7.2/fpm/conf.d/20-mcrypt.ini
sudo apt-mark hold php-cli php7.2-cli php-fpm php7.2-fpm
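
To verify the module loads and to pick up the change in PHP-FPM, something like this should be enough (the service name assumes the default php7.2-fpm unit in Ubuntu 18.04):

# check that the CLI PHP sees the new extension
php -m | grep -i mcrypt
# restart PHP-FPM so the web PHP also loads the module
sudo systemctl restart php7.2-fpm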

Keep on reading!

Ubuntu apt – InRelease is not valid yet (invalid for another 151d 18h 5min 59s)

An invalid time could cause your server (or probably your virtual server or docker instance) to be unable to use Ubuntu’s packaging system apt. It is a typical thing if your virtual or docker instance does not use automatic time synchronization.

It is really important for even small installations and virtualized environments to have automatic time synchronization, or the services they provide could become error-prone with time!

apt just reports that the repositories are not valid yet:

myuser@my-server-pc:~$ sudo su
root@my-server-pc:/home/myuser# apt update
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Reading package lists... Done                                 
E: Release file for http://archive.ubuntu.com/ubuntu/dists/bionic-updates/InRelease is not valid yet (invalid for another 151d 18h 5min 59s). Updates for this repository will not be applied.
E: Release file for http://archive.ubuntu.com/ubuntu/dists/bionic-backports/InRelease is not valid yet (invalid for another 151d 17h 16min 26s). Updates for this repository will not be applied.
E: Release file for http://archive.ubuntu.com/ubuntu/dists/bionic-security/InRelease is not valid yet (invalid for another 151d 17h 15min 3s). Updates for this repository will not be applied.
root@my-server-pc:/home/myuser# date
Thu Jan 17 15:11:56 UTC 2019

The clock shows 17 January 2019, but now it is 18 June 2019! This is an Ubuntu virtual server with a minimal installation.

The solution is to synchronize your clock manually or use a service (the better way)!
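
A quick, hedged sketch of both options (the date below is only an example; on systems with systemd, the set-ntp call turns on the built-in systemd-timesyncd client described in the next article):

# one-time manual fix - set the clock by hand (example date)
sudo date -s "2019-06-18 15:11:56"
# the better way - enable automatic synchronization with systemd-timesyncd
sudo timedatectl set-ntp true
timedatectl status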

Keep on reading!

Simple time synchronization of a server (laptop, desktop) using the built-in systemd-timesyncd service

Here we offer you a relatively new way of keeping your server’s time (or your computer and laptop) synchronized with a reliable time service on the Internet.

systemd has a built-in feature – a small daemon (systemd-timesyncd) that periodically contacts NTP servers and keeps the server’s clock synchronized with them!

Of course, you must use systemd in your Linux distribution. This article is for those Linux systems using systemd, not for the other init systems (sysvinit, OpenRC, upstart, runit and so on). Most of the modern Linux distributions use systemd, like Fedora, Ubuntu, CentOS, RedHat, Gentoo, SuSe and many more.

Once there were not many options to keep your server’s clock synced with NTP servers. Now we have simpler programs (some of which, by the way, can act as clients only!) – chrony, openntpd, systemd-timesyncd and more.
This time synchronization service is not going to open server port 123 and it does not have the server capabilities of an NTP server, so you won’t need any firewall rules (like for ntpd). It is a simple client service to sync your time and keep it synchronized all the time, with accuracy of around 100 ms.

Do not expect complex clock discipline like training or compensating the clock. It just sets the time according to a selected time server from the configuration file “/etc/systemd/timesyncd.conf”. The polling interval is automatically adjusted between the minimal and maximal values from the configuration file, and the daemon decides the actual interval based on the drift it estimates in the near term. The clock may even run backwards if it needs to be set to a time in the past. The quality of the clock source cannot be checked, so in any case you should not expect accuracy better than about 100 ms.
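
A minimal sketch of what the configuration could look like (the NTP pool hostnames and interval values are just example values; the option names are the standard ones from timesyncd.conf):

# /etc/systemd/timesyncd.conf - example values
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
FallbackNTP=2.pool.ntp.org
PollIntervalMinSec=32
PollIntervalMaxSec=2048

Then enable the client, restart the service and check the synchronization status:

sudo timedatectl set-ntp true
sudo systemctl restart systemd-timesyncd
timedatectl status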

Of course, this service is actively developed and it has already come a long way from the basic client it once was!

Here is how you can enable it, step by step:
Keep on reading!

SSD cache device to a hard disk drive using LVM

This article shows how simple it is to use an SSD as a cache device for a hard disk drive. We also included statistics and graphs for several days of usage in one of our streaming servers.
Our setup:

  • 1 SSD disk Samsung 480G. It will be used as the writeback cache device!
  • 1 Hard disk drive 1T

We included several graphs of this setup from one of our static media servers serving HLS video streaming.
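
As a rough, hedged sketch of how such a setup could be assembled with LVM cache (the device names, volume group name and sizes here are hypothetical, not the exact commands from the full article):

# /dev/sdb is the 1T hard disk, /dev/sda is the 480G SSD (example names)
pvcreate /dev/sdb /dev/sda
vgcreate vg-storage /dev/sdb /dev/sda
# the data volume lives on the slow hard disk
lvcreate -n lv-data -L 900G vg-storage /dev/sdb
# the cache pool lives on the SSD
lvcreate --type cache-pool -n lv-cache -L 440G vg-storage /dev/sda
# attach the cache pool to the data volume in writeback mode
lvconvert --type cache --cachemode writeback --cachepool vg-storage/lv-cache vg-storage/lv-data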

The effectiveness of the cache is around 2-4 times at least!

Keep on reading!

pycurl.h: fatal error: openssl/ssl.h: No such file or directory

If you encounter this error trying to install a pip module or compile a program in the console, you are surely missing the OpenSSL development packages!
pip may also build packages on your system, and they could depend on generic library headers, like OpenSSL in this case, which the installer (pip) won’t bring in, and it will output an error as you can see below:

myuser@srv # sudo pip install pycurl pygeoip psutil
Collecting pycurl
  Using cached https://files.pythonhosted.org/packages/e8/e4/0dbb8735407189f00b33d84122b9be52c790c7c3b25286826f4e1bdb7bde/pycurl-7.43.0.2.tar.gz
Requirement already satisfied (use --upgrade to upgrade): pygeoip in /usr/local/lib/python2.7/dist-packages
Requirement already satisfied (use --upgrade to upgrade): psutil in /usr/lib/python2.7/dist-packages
Building wheels for collected packages: pycurl
  Running setup.py bdist_wheel for pycurl ... error
  Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-AbCshS/pycurl/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d /tmp/tmpqVNq1upip-wheel- --python-tag cp27:
  Using curl-config (libcurl 7.47.0)
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-2.7
  creating build/lib.linux-x86_64-2.7/curl
  copying python/curl/__init__.py -> build/lib.linux-x86_64-2.7/curl
  running build_ext
  building 'pycurl' extension
  creating build/temp.linux-x86_64-2.7
  creating build/temp.linux-x86_64-2.7/src
  x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DPYCURL_VERSION="7.43.0.2" -DHAVE_CURL_SSL=1 -DHAVE_CURL_OPENSSL=1 -DHAVE_CURL_SSL=1 -I/usr/include/python2.7 -c src/docstrings.c -o build/temp.linux-x86_64-2.7/src/docstrings.o
  In file included from src/docstrings.c:4:0:
  src/pycurl.h:164:28: fatal error: openssl/ssl.h: No such file or directory
  compilation terminated.
  error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
  
  ----------------------------------------
  Failed building wheel for pycurl
  Running setup.py clean for pycurl
Failed to build pycurl
Installing collected packages: pycurl
  Running setup.py install for pycurl ... error
    Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-AbCshS/pycurl/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-oea_jq-record/install-record.txt --single-version-externally-managed --compile:
    Using curl-config (libcurl 7.47.0)
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-2.7
    creating build/lib.linux-x86_64-2.7/curl
    copying python/curl/__init__.py -> build/lib.linux-x86_64-2.7/curl
    running build_ext
    building 'pycurl' extension
    creating build/temp.linux-x86_64-2.7
    creating build/temp.linux-x86_64-2.7/src
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DPYCURL_VERSION="7.43.0.2" -DHAVE_CURL_SSL=1 -DHAVE_CURL_OPENSSL=1 -DHAVE_CURL_SSL=1 -I/usr/include/python2.7 -c src/docstrings.c -o build/temp.linux-x86_64-2.7/src/docstrings.o
    In file included from src/docstrings.c:4:0:
    src/pycurl.h:164:28: fatal error: openssl/ssl.h: No such file or directory
    compilation terminated.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
    
    ----------------------------------------
Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-AbCshS/pycurl/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-oea_jq-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-AbCshS/pycurl/
You are using pip version 8.1.1, however version 18.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command
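
On Debian/Ubuntu the missing openssl/ssl.h header normally comes with the OpenSSL development package, so a hedged sketch of the fix looks like this (the package names assume a Debian-based system; the libcurl development headers are often needed for pycurl as well):

# install the OpenSSL development headers (and, for pycurl, the libcurl ones)
sudo apt update
sudo apt install -y libssl-dev libcurl4-openssl-dev
# retry the pip installation
sudo pip install pycurl pygeoip psutil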

Keep on reading!

New Redis server (4.0.9) in Ubuntu 16.04 LTS

One of our critical services was under Ubuntu 16.04 LTS (and is scheduled for an update, but that is another story!) and, as it always happens, other parts of our systems use newer versions of Ubuntu. But the latest Redis version in Ubuntu 16.04 is 3.0.6 – https://packages.ubuntu.com/xenial-updates/redis-server.
Here you can find all the redis server versions available in the supported Ubuntu distributions: https://packages.ubuntu.com/search?keywords=redis-server&searchon=names
Keep on reading!

apt-mark – upgrade with the exception of certain packages

If you are in a situation where you want to upgrade your system but do not want to upgrade certain software in it, you can just instruct apt not to upgrade these packages with:

apt-mark hold <package name(s)>

Here is how you can block updating 4 packages – ca-certificates, firefox, ghostscript, linux-firmware. First we update and upgrade and you can see there are no packages to keep back, and then we use apt-mark to “hold” the package “linux-firmware” and then ca-certificates, firefox and ghostscript at once. Initiating apt upgrade again will give you “The following packages have been kept back:” and it will include all packages that will not be upgraded (including dependencies which require some of the blocked packages).
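
The sequence described above boils down to something like this (a sketch of the commands only, not the full output from the article):

sudo apt update && sudo apt upgrade
# hold one package, then three more at once
sudo apt-mark hold linux-firmware
sudo apt-mark hold ca-certificates firefox ghostscript
# list the currently held packages
apt-mark showhold
# this upgrade will report "The following packages have been kept back:"
sudo apt upgrade
# later, allow upgrades again
sudo apt-mark unhold linux-firmware ca-certificates firefox ghostscript
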
Keep on reading!