aptly delete a mirror and remove all files

Executing the drop command on a mirror only removes the mirror's meta information; it does not remove the package files occupying space on the file system.

Dropping a mirror in aptly should be followed by a database cleanup with aptly:

aptly db cleanup

The Bionic mirrors created in the previous article on the aptly subject – Mirror the official Ubuntu repositories using aptly – will be deleted here, and all their files removed with:

aptly@srv:~$ aptly mirror drop bionic-main
Mirror `bionic-main` has been removed.
aptly@srv:~$ aptly mirror drop bionic-security-main
Mirror `bionic-security-main` has been removed.
aptly@srv:~$ aptly mirror drop bionic-universe     
Mirror `bionic-universe` has been removed.
aptly@srv:~$ aptly mirror drop bionic-updates-main
Mirror `bionic-updates-main` has been removed.
aptly@srv:~$ aptly mirror drop bionic-updates-universe
Mirror `bionic-updates-universe` has been removed.
aptly@srv:~$ aptly mirror list
No mirrors found, create one with `aptly mirror create ...`.
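
If there are many mirrors to delete, the drops can be scripted; a minimal sketch using the -raw output of aptly mirror list (assuming every mirror on the server should be removed):

aptly@srv:~$ for m in $(aptly mirror list -raw); do aptly mirror drop "$m"; done
aptly@srv:~$ aptly db cleanup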

The occupied space on the disk mounted on /srv is 270G:

aptly@srv:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.9G     0  1.9G   0% /dev
tmpfs           395M  3.5M  391M   1% /run
/dev/sda3        19G  4.6G   13G  27% /
tmpfs           2.0G  204K  2.0G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda4       470G  270G  176G  61% /srv
tmpfs           395M     0  395M   0% /run/user/0
tmpfs           395M     0  395M   0% /run/user/1001

Actually freeing the space on the disk with the aptly cleanup command:

aptly@srv:~$ aptly db cleanup
Loading mirrors, local repos, snapshots and published repos...
Loading list of all packages...
Deleting unreferenced packages (143121)...
Building list of files referenced by packages...
Building list of files in package pool...
Deleting unreferenced files (194097)...
Disk space freed: 268.80 GiB...
Compacting database...

The occupied space on the disk mounted on /srv is below 2G after the cleanup command:

aptly@srv:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.9G     0  1.9G   0% /dev
tmpfs           395M  3.5M  391M   1% /run
/dev/sda3        19G  4.6G   13G  27% /
tmpfs           2.0G  204K  2.0G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda4       470G    1G  445G   1% /srv
tmpfs           395M     0  395M   0% /run/user/0
tmpfs           395M     0  395M   0% /run/user/1001

edit mysql options in docker (or docker-compose)

Modifying the default options of the docker (podman) MySQL server is essential. The default MySQL options are too conservative, and even for simple automation tests they could be a bottleneck.
For example, modifying only one or two of the default InnoDB configuration options may make the execution of SQL queries and the related automation tests multiple times faster.

Here are three simple ways to modify the (default or current) MySQL my.cnf configuration options:

  • Command-line arguments. All MySQL configuration options can be overridden by passing them on the command line of the mysqld binary. The format is:
    --variable-name=value
    

    and the variable names can be obtained with:

    mysqld --verbose --help
    

    and for the live configuration options:

    mysqladmin variables
    
  • Options in an additional configuration file, which will be included in the main configuration. The options in /etc/mysql/conf.d/config-file.cnf take precedence.
  • Replacing the default my.cnf configuration file – /etc/mysql/my.cnf.

Check out also the official page – https://hub.docker.com/_/mysql.
Under CentOS 8 docker is replaced by podman, so just replace docker with podman in all of the commands below.

OPTION 1) Command-line arguments.

This is the simplest way of modifying the default my.cnf (the one that comes with the docker image, or the one in the current docker image). It is fast and easy to use and change, at the cost of a little more typing on the command line. As mentioned above, all MySQL options can be changed by a command-line argument to the mysqld binary. For example:

mysqld --innodb_buffer_pool_size=1024M

It will start the MySQL server with the variable innodb_buffer_pool_size set to 1G. Translating it to (for multiple options, just add them at the end of the command):

  • docker run

    root@srv ~ # docker run --name my-mysql -v /var/lib/mysql:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=111111 \
    -d mysql:8 \
    --innodb_buffer_pool_size=1024M \
    --innodb_read_io_threads=4 \
    --innodb_flush_log_at_trx_commit=2 \
    --innodb_flush_method=O_DIRECT
    1bb7f415ab03b8bfd76d1cf268454e3c519c52dc383b1eb85024e506f1d04dea
    root@srv ~ # docker exec -it my-mysql mysqladmin -p111111 variables|grep innodb_buffer_pool_size
    | innodb_buffer_pool_size                                  | 1073741824
    
  • docker-compose:

    # Docker MySQL arguments example
    version: '3.1'
    
    services:
    
      db:
        image: mysql:8
        command: --default-authentication-plugin=mysql_native_password --innodb_buffer_pool_size=1024M --innodb_read_io_threads=4 --innodb_flush_log_at_trx_commit=2 --innodb_flush_method=O_DIRECT
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: 111111
        volumes:
         - /var/lib/mysql_data:/var/lib/mysql
        ports:
          - "3306:3306"
    

    Here is how to run it (the above text file should be named docker-compose.yml and the file should be in the current directory when executing the command below):

    root@srv ~ # docker-compose up
    Creating network "docker-compose-mysql_default" with the default driver
    Creating my-mysql ... done
    Attaching to my-mysql
    my-mysql | 2020-06-16 09:45:35+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.20-1debian10 started.
    my-mysql | 2020-06-16 09:45:35+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
    my-mysql | 2020-06-16 09:45:35+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.20-1debian10 started.
    my-mysql | 2020-06-16T09:45:36.293747Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
    my-mysql | 2020-06-16T09:45:36.293906Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.20) starting as process 1
    my-mysql | 2020-06-16T09:45:36.307654Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
    my-mysql | 2020-06-16T09:45:36.942424Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
    my-mysql | 2020-06-16T09:45:37.136537Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/var/run/mysqld/mysqlx.sock' bind-address: '::' port: 33060
    my-mysql | 2020-06-16T09:45:37.279733Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
    my-mysql | 2020-06-16T09:45:37.306693Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
    my-mysql | 2020-06-16T09:45:37.353358Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.20'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server - GPL.
    

    And check the option:

    root@srv ~ # docker exec -it my-mysql mysqladmin -p111111 variables|grep innodb_buffer_pool_size
    | innodb_buffer_pool_size                                  | 1073741824
    

OPTION 2) Options in an additional configuration file.

Create a MySQL option file named config-file.cnf:

[mysqld]
innodb_buffer_pool_size=1024M
innodb_read_io_threads=4
innodb_flush_log_at_trx_commit=2
innodb_flush_method=O_DIRECT
  • docker run
    The source path must be an absolute path!

    docker run --name my-mysql \
    -v /var/lib/mysql_data:/var/lib/mysql \
    -v /etc/mysql/docker-instances/config-file.cnf:/etc/mysql/conf.d/config-file.cnf \
    -e MYSQL_ROOT_PASSWORD=111111 \
    -d mysql:8
    
  • docker-compose
    The source path does not have to be an absolute path; it may be relative to the current directory.

    # Docker MySQL arguments example
    version: '3.1'
    
    services:
    
      db:
        container_name: my-mysql
        image: mysql:8
        command: --default-authentication-plugin=mysql_native_password
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: 111111
        volumes:
         - /var/lib/mysql_data:/var/lib/mysql
         - ./config-file.cnf:/etc/mysql/conf.d/config-file.cnf
        ports:
          - "3306:3306"
    

OPTION 3) Replacing the default my.cnf configuration file.

Add the modified options to a my.cnf template file and map it into the container at /etc/mysql/my.cnf. When overwriting the main MySQL option file – my.cnf – you may map the whole /etc/mysql directory (just replace /etc/mysql/my.cnf with /etc/mysql below), too. The source file (or directory) may be any file (or directory) on the host, not necessarily /etc/mysql/my.cnf (or /etc/mysql).

  • docker run:
    The source path must be an absolute path.

    docker run --name my-mysql \
    -v /var/lib/mysql_data:/var/lib/mysql \
    -v /etc/mysql/my.cnf:/etc/mysql/my.cnf \
    -e MYSQL_ROOT_PASSWORD=111111 \
    --publish 3306:3306 \
    -d mysql:8
    

    Note: here a new option “--publish 3306:3306” is included to show how to map the ports out of the container, like all the docker-compose examples here.

  • docker-compose:
    The source path does not have to be an absolute path; it may be relative to the current directory.

    # Use root/example as user/password credentials
    version: '3.1'
    
    services:
    
      db:
        container_name: my-mysql
        image: mysql:8
        command: --default-authentication-plugin=mysql_native_password
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: 111111
        volumes:
         - /var/lib/mysql_data:/var/lib/mysql
         - ./mysql/my.cnf:/etc/mysql/my.cnf
        ports:
          - "3306:3306"
    

Change the location of container storage in podman (with SELinux enabled)

There are two main options to change the location of all the containers' storage:

  • Bind mount the new location over the default storage directory (see Note 1)
  • Change the path of the location in the configuration file /etc/containers/storage.conf

You should stop all your containers, though it is not mandatory.

Stop the containers (if any) and copy the storage directory to the new path, because once the storage path is reconfigured, podman won't access the containers and images left in the old path!
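
A minimal sketch of the copy, assuming the new location is /mnt/mystorage/virtual/storage (the rsync -X flag preserves extended attributes, which matters for the SELinux labels):

podman stop --all
mkdir -p /mnt/mystorage/virtual/storage
rsync -aXS /var/lib/containers/storage/ /mnt/mystorage/virtual/storage/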

STEP 1) Change the storage path in the podman configuration file.

If SELinux has been disabled (which should not be done), it is just a matter of changing a path option in the configuration file /etc/containers/storage.conf:

# Primary Read/Write location of container storage
graphroot = "/var/lib/containers/storage"

Change it to whatever path you like. Typically, it should point to the big storage device. In our case, the big storage is mounted under “/mnt/mystorage/virtual/storage”. Change the option to:

# Primary Read/Write location of container storage
graphroot = "/mnt/mystorage/virtual/storage"

Check the running configuration with:

[root@lsrv1 mystorage]# podman info
host:
  BuildahVersion: 1.12.0-dev
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.8-1.el7.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.8, commit: f85c8b1ce77b73bcd48b2d802396321217008762'
  Distribution:
    distribution: '"centos"'
    version: "7"
  MemFree: 191963136
  MemTotal: 16563531776
  OCIRuntime:
    name: runc
    package: runc-1.0.0-67.rc10.el7_8.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 7857680384
  SwapTotal: 8581541888
  arch: amd64
  cpus: 8
  eventlogger: journald
  hostname: lsrv1
  kernel: 3.10.0-1062.9.1.el7.x86_64
  os: linux
  rootless: false
  uptime: 607h 10m 53.36s (Approximately 25.29 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - registry.fedoraproject.org
  - registry.centos.org
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions: {}
  GraphRoot: /mnt/mystorage/virtual/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 0
  RunRoot: /var/run/containers/storage
  VolumePath: /mnt/mystorage/virtual/storage/volumes

Keep on reading!

Data too large, data for [] would be [] which is larger than the limit of

Rsyslog writing to Elasticsearch could lead to an error for some of the records, which then fail to be saved in the backend:

{ ... { "error": { "root_cause": [ { "type": "circuit_breaking_exception", 
"reason": "[parent] Data too large, data for [<http_request>] would be [1008813778\/962mb], which is larger than the limit of [986061209\/940.3mb], 
real usage: [1008812248\/962mb], new bytes reserved: [1530\/1.4kb], usages [request=0\/0b, fielddata=317\/317b, in_flight_requests=1530\/1.4kb, accounting=178301893\/170mb]",
"bytes_wanted": 1008813778, "bytes_limit": 986061209, "durability": "PERMANENT" }], 
"type": "circuit_breaking_exception", "reason": "[parent] Data too large, data for [<http_request>] would be [1008813778\/962mb], which is larger than the limit of [986061209\/940.3mb], 
real usage: [1008812248\/962mb], new bytes reserved: [1530\/1.4kb], usages [request=0\/0b, fielddata=317\/317b, in_flight_requests=1530\/1.4kb, accounting=178301893\/170mb]",
"bytes_wanted": 1008813778, "bytes_limit": 986061209, "durability": "PERMANENT" }, "status": 429 } }

Unfortunately, such writes are not saved in Elasticsearch and the data is lost.

The problem here is that the Java VM has reached the maximum allowed memory, and more memory should be allowed for the Java Virtual Machine.

Find the Java VM options file for Elasticsearch – jvm.options. In CentOS 7 the file is located in /etc/elasticsearch/jvm.options; set more memory with the variables “-Xms[SIZE]g -Xmx[SIZE]g”, such as:

.....
-Xms4g
-Xmx4g
.....

This will allow 4G “maximum size of total heap space” to be used by the Java Virtual Machine. By default, it is 1G (-Xms1g -Xmx1g). It is a good idea to set it to half of the server's memory. Save and restart the Elasticsearch service as usual:

systemctl restart elasticsearch
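
The effective heap size can also be verified through the Elasticsearch nodes API (a sketch, assuming Elasticsearch listens on localhost:9200):

curl -s 'http://localhost:9200/_nodes/jvm?pretty' | grep heap_max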

You should see the variable in the command line with the ps command:

[root@loganalyzer ~]# ps axuf|grep elasticsearch
elastic+   592 10.8 34.4 168638848 5493156 ?   Ssl  00:56   4:23 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60
-Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 
-Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false 
-Dlog4j2.disable.jmx=true -Djava.locale.providers=COMPAT 
-Xms4g -Xmx4g 
-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
-Djava.io.tmpdir=/tmp/elasticsearch-16851535740012150929 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch 
-XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log 
-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m 
-XX:MaxDirectMemorySize=2147483648 -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch 
-Des.distribution.flavor=default -Des.distribution.type=rpm -Des.bundled_jdk=true 
-cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
elastic+   690  0.0  0.0  70448  4516 ?        Sl   00:56   0:00  \_ /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

The environment variable ES_JAVA_OPTS could be used, too.

ES_JAVA_OPTS="-Xms4g -Xmx4g" ./bin/elasticsearch 

collectd nginx plugin: curl_easy_perform failed because of selinux

Enabling the Nginx plugin for collectd under CentOS (or any other system using SELinux) might be confusing for a newbie. Most sources on the Internet would just install collectd-nginx:

yum install -y collectd-nginx

and configure it in nginx.conf and collectd.conf. Still, the statistics might not work as expected – collectd may not be able to gather statistics from Nginx.
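
For reference, the usual setup is a stub_status location in the Nginx configuration and a matching URL in the collectd nginx plugin; a minimal sketch (the status URL and the allowed address are assumptions):

# nginx.conf – a status location (inside a server block) readable only from localhost
location /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}

# collectd.conf – point the nginx plugin to the status page
LoadPlugin nginx
<Plugin nginx>
  URL "http://127.0.0.1/nginx_status"
</Plugin>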

SELinux may prevent the collectd daemon (plugin) from connecting to Nginx and gathering statistics from the Nginx stats page.

Checking the collectd log shows the problem:
Keep on reading!

aptly remove a package from a repository using the cli

Here is a fast tip – how to remove a package from our local aptly repository:

  1. Remove the package from the local repository.
  2. Create a new snapshot from the local repository.
  3. Publish the snapshot by switching to the newly created snapshot from the above step.

The commands below execute over a repository named xenial-apps to remove the package example-app with version 10.5.1.22-ubuntu20. The snapshot name xenial-apps1588149526 is just a temporary name for the snapshot (the numeric suffix is the Unix timestamp of the current time).

aptly repo remove  xenial-apps 'example-app (= 10.5.1.22-ubuntu20)'
aptly snapshot create xenial-apps1588149526 from repo xenial-apps
aptly publish switch xenial-apps ubuntu xenial-apps1588149526
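
Before switching the published snapshot, the removal can be verified by listing the repository contents (a quick check with the same example names):

aptly repo show -with-packages xenial-apps | grep example-app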

Real world example.

This is the log from our system with just changed names:
Keep on reading!

Kibana server is not ready yet – and Waiting for that migration to complete in the logs

Now, living in the times of the cloud and big data, there comes a moment when the admin may need to save all their logs in a central place! Elasticsearch and Kibana look good for the job! And after months of hassle-free work of Elasticsearch and Kibana, Elasticsearch just stopped working, and after a restart and an upgrade (of both Elasticsearch and Kibana) Kibana showed an error message:

Kibana server is not ready yet

If you have tried stopping/starting Kibana and Elasticsearch and Kibana still shows the above message, here is what you should do:

  1. Check whether the two services – Kibana and Elasticsearch – are running! If one of them is missing, start it.
  2. Check the logs, especially the Elasticsearch logs. The first place to look is the systemd journal with the journalctl program (systemctl status will also point out the problem by showing the last lines of the logs).
  3. Look for the last lines and if they include

    Another Kibana instance appears to be migrating the index

    This article is probably the right place to solve the issue and get your setup started successfully.

STEP 1) Running services and analyzing the logs.

If Kibana and Elasticsearch are managed by systemd, it is easy to access the logs with systemctl and journalctl.
Check whether Kibana and Elasticsearch are running with:

[root@loganalyzer ~]# ps ax|grep elasticsearch|grep -v grep
  258 ?        Ssl  836:31 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.locale.providers=COMPAT -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.io.tmpdir=/tmp/elasticsearch-13303119363353782625 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m -XX:MaxDirectMemorySize=536870912 -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -Des.bundled_jdk=true -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
  360 ?        Sl     0:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
[root@loganalyzer ~]# ps ax|grep kibana|grep -v grep
 1284 ?        Ssl    4:32 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

If one of the two services is missing, you must start it! And second, each service should have only one instance (i.e. one process)!
Then check the Kibana logs with journalctl:

[root@loganalyzer ~]# journalctl -u kibana.service
.....
.....
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","plugins","bfetch"],"pid":1219,"message":"Setting up plugin"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","savedobjects-service"],"pid":1219,"message":"Waiting until all Elasticsearch nodes
 are compatible with Kibana before starting saved objects migrations..."}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","savedobjects-service"],"pid":1219,"message":"Starting saved objects migrations"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","savedobjects-service"],"pid":1219,"message":"Creating index .kibana_task_manager_2
."}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["warning","savedobjects-service"],"pid":1219,"message":"Unable to connect to Elasticsearch
. Error: [resource_already_exists_exception] index [.kibana_task_manager_2/O070AunfSyG6hwd6_pqqRA] already exists, with { index_uuid=\"O070AunfSyG6hwd6_pqqRA\" & index=\".kibana_task_manager
_2\" }"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["warning","savedobjects-service"],"pid":1219,"message":"Another Kibana instance appears to
 be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_2 
and restarting Kibana."}

The systemctl status may be used, too. The error and the index are shown in the last lines of the status output – look below.

The problem here is that there was a migration of the index .kibana_task_manager_2, but it was abandoned for an unknown reason, and now we should delete the index to be able to use our Kibana service. The index might have a different name in your case, but it is the same problem.
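
Before deleting anything, the stuck index can be confirmed by listing the Kibana indices in Elasticsearch (a sketch, using the same Elasticsearch address as in the deletion command below):

[root@loganalyzer ~]# curl -s 'http://192.168.0.2:9200/_cat/indices/.kibana*?v'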

STEP 2) Delete kibana index

Delete the Kibana index in the Elasticsearch backend using curl and an HTTP/HTTPS request such as:

[root@loganalyzer kibana]# curl -XDELETE http://192.168.0.2:9200/.kibana_task_manager_2
{"acknowledged":true}

Keep on reading!

Cron missing path – executing docker/podman – adding network: failed to locate iptables

If you have ever executed some complex scripts using the cron system, you have inevitably discovered that the Linux environment is different from the one of the login or ssh shell. The different environment tends to lead to a missing or different PATH variable! Here is what happens when podman starts a container from a cron script:

time="2020-04-19T20:45:20Z" level=error msg="Error adding network: failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
time="2020-04-19T20:45:20Z" level=error msg="Error while adding pod to CNI network \"podman\": failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
Error: unable to start container "onedrive-cli": error configuring network namespace for container d297cf80db20441d4258a1acc7d810444795d1ca8730ab242d9fe8a13eaa697d: failed to locate iptables: exec: "iptables": executable file not found in $PATH

The iptables executable is missing because the PATH variable is different from the login or ssh shell one. Executing the commands or the script under an ssh or login shell will result in no error and a proper podman (docker) execution!

A similar problem could happen with other software trying to execute iptables or another tool that is not found in cron's PATH, because cron's environment is very limited.

To ensure the PATH is like the user's (root) environment, just source the “profile” or “.bashrc” file of the current user before executing the script, or in its first lines.
This will do the trick.

. /etc/profile

Or the user's custom:

. ~/.bashrc

Or the default OS bashrc:

. /etc/bashrc

The dot may be replaced by “source”:

source /etc/bashrc

All (environment) variables will be available after the source command.
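
For example, the sourcing may also be done directly in the crontab entry itself (a sketch; the script path is hypothetical):

# crontab: run the script with a full, login-like environment
15 3 * * * . /etc/profile; /usr/local/bin/start-containers.sh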

Here is the difference:
The environment without sourcing a profile/bashrc file:

LANG=en_US.UTF-8
XDG_SESSION_ID=19118
USER=root
PWD=/root
HOME=/root
SHELL=/bin/sh
SHLVL=1
LOGNAME=root
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
XDG_RUNTIME_DIR=/run/user/0
PATH=/usr/bin:/bin
_=/usr/bin/env

Sourcing the “/etc/profile” file:

LANG=en_US.UTF-8
HISTCONTROL=ignoredups
HOSTNAME=srv.example.com
XDG_SESSION_ID=19165
USER=root
PWD=/root
HOME=/root
MAIL=/var/spool/mail/root
SHELL=/bin/bash
SHLVL=1
LOGNAME=root
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
XDG_RUNTIME_DIR=/run/user/0
PATH=/usr/local/sbin:/usr/sbin:/usr/bin:/bin
HISTSIZE=1000
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/usr/bin/env

Multiple additional environment variables appear, which could be important for the user's scripts executed by cron.

And in CentOS 8 iptables happens to be in “/usr/sbin/iptables” – and the path /usr/sbin is not included in the default cron PATH variable!
Of course, the PATH variable may be edited in the cron scheduler with crontab (by just setting PATH to a proper value) – until the next path is missing from it while included in the user's path! It is just better to ensure the two environments are the same every time by sourcing an environment configuration file such as /etc/profile or the user's .bashrc (or the default one in /etc/bashrc).

Mirror a PPA repository using aptly – PHP (ppa:ondrej/php)

This is a simple example of how to mirror a PPA repository to a local server. The Ubuntu PPA to mirror is ppa:ondrej/php, which offers the user different PHP versions generally not available in the Ubuntu installation. Of course, the user should be very careful about adding PPA repositories, because they are exactly what the abbreviation stands for – Personal Package Archives.

If you want to know how to install aptly and read a brief description of what it is, you may want to read our previous article – Install aptly under Ubuntu 18 LTS with Nginx serving the packages and the first steps.

What we are going to do – this is what you need in order to have a mirror of an external application repository:

  1. Install aptly in Ubuntu 18 LTS
  2. Create a mirror in aptly
  3. Create a snapshot of the mirror created before
  4. Publish the snapshot to be used in other servers.

and in the last step there is an example of how to use the mirror on your local machines.

STEP 1) Install aptly in Ubuntu 18.04 LTS.

As mentioned already you may follow our article on the subject – Install aptly under Ubuntu 18 LTS with Nginx serving the packages and the first steps. The following steps are based on this installation!
The aptly home directory is in “/srv/aptly”. We use the “aptly” user and change to it to manipulate the aptly installation.
Change the user to aptly, because the mirroring process will happen under this user.

root@srv ~ # su - aptly
aptly@srv:~$

STEP 2) Create a mirror in aptly.

Prepare the keys (aptly needs to have the Ubuntu keys in its trustedkeys keyring):

aptly@srv:~$ gpg --no-default-keyring --keyring trustedkeys.gpg --keyserver pool.sks-keyservers.net --recv-keys 4F4EA0AAE5267A6C
gpg: requesting key E5267A6C from hkp server pool.sks-keyservers.net
gpg: key E5267A6C: public key "Launchpad PPA for Ond\xc5\x99ej Sur\xc3\xbd" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
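
With the key imported, the mirror itself is created with the usual aptly mirror create syntax; a sketch of what it looks like for this PPA (the mirror name and the bionic distribution here are assumptions):

aptly@srv:~$ aptly mirror create ondrej-php http://ppa.launchpad.net/ondrej/php/ubuntu bionic main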

Here we’ve used the method of obtaining the key from a GPG key server, but the key can be downloaded directly from the original repository as suggested in the error message below.
If you are not sure where to download the key from, you could always just try to create the mirror (in fact, this is STEP 3) and get the error for the missing key and how to obtain it:
Keep on reading!

nginx proxy cache and expires directive – pass-through the origin cache control

Proxying static content sometimes requires modifying the expires directive on the proxy server, but sometimes it may just need to pass on the origin's expires header. What if “expires” is defined in the server section and one needs to pass through the value from the origin via the proxy server?

Simply switch off expires with “expires off”. It will disable an earlier definition for the block it is used in, and the proxy answer to the client will include the origin's cache control headers.

It won't disable the cache control in the sense of adding no-cache to the proxy answer to the client. So if “expires” is used in an enclosing block and a pass-through from the origin is required, just make a location block with “expires off”:

        server {
                listen          10.10.10.10:443 ssl http2;
                server_name     srv1.example.com;

                ssl_certificate  /etc/ssl/nginx/srv1.example.com.chain.crt;
                ssl_certificate_key /etc/ssl/nginx/srv1.example.com.key;

                resolver 8.8.8.8;

                client_max_body_size 12m;

                expires         -1;
                root            /mnt/storage/web/root;
                access_log      /mnt/storage/web/logs/srv1.example.com;
                error_log       /mnt/storage/web/logs/srv1.example.com warn;

                location / {
                        #proxy
                        proxy_buffer_size   128k;
                        proxy_buffers   4 256k;
                        proxy_busy_buffers_size   256k;
                        proxy_buffering off;
                        proxy_read_timeout 600;
                        proxy_send_timeout 600;
                        proxy_store off;
                        proxy_cache off;
                        proxy_redirect off;
                        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
                        proxy_no_cache $http_pragma $http_authorization;
                        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
                        proxy_cache_bypass $http_pragma $http_authorization;

                        proxy_pass https://https_backend;
                        proxy_http_version 1.1;
                        proxy_set_header Connection "";
                        proxy_set_header Host $host;
                        proxy_set_header        X-Real-IP-EXAMPLE       $remote_addr;
                        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                }

                location ~* \.jpeg {
                        expires off;
                        #proxy
                        proxy_buffer_size   128k;
                        proxy_buffers   4 256k;
                        proxy_busy_buffers_size   256k;
                        proxy_buffering off;
                        proxy_read_timeout 600;
                        proxy_send_timeout 600;
                        proxy_store off;
                        proxy_cache off;
                        proxy_redirect off;
                        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
                        proxy_no_cache $http_pragma $http_authorization;
                        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
                        proxy_cache_bypass $http_pragma $http_authorization;
                        proxy_ignore_headers "Expires" "Cache-Control";


                        proxy_pass https://https_backend;
                        proxy_http_version 1.1;
                        proxy_set_header Connection "";
                        proxy_set_header Host $host;
                        proxy_set_header        X-Real-IP-EXAMPLE       $remote_addr;
                        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                }

                location ~* \.(jpg|gif|png|css|mcss|js|mjs|woff|woff2)$ {
                        expires 30d;
                        #proxy
                        proxy_buffer_size   128k;
                        proxy_buffers   4 256k;
                        proxy_busy_buffers_size   256k;
                        proxy_buffering off;
                        proxy_read_timeout 600;
                        proxy_send_timeout 600;
                        proxy_store off;
                        proxy_cache off;
                        proxy_redirect off;
                        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
                        proxy_no_cache $http_pragma $http_authorization;
                        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
                        proxy_cache_bypass $http_pragma $http_authorization;

                        proxy_pass http://http_backend;
                        proxy_http_version 1.1;
                        proxy_set_header Connection "";
                        proxy_set_header Host $host;
                        proxy_set_header        X-Real-IP-EXAMPLE       $remote_addr;
                        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                }
        }

The above sample configuration defines 3 location blocks for 3 different “expires” cases:

  1. Uses the server block “expires -1” – a proxy for dynamic code such as the proxied site's application code. No caching.
  2. Uses the location block \.jpeg with “expires off” – the proxy will pass through the value from the origin server. Any value from the origin.
  3. Uses the location block \.(jpg|gif|png|css|mcss|js|mjs|woff|woff2) with “expires 30d” – it will set a +30 days expires header no matter the origin value. The origin server value is replaced by a date 30 days in the future in the cache headers.
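
The three cases can be verified with curl by requesting a matching URL from each location and inspecting the response headers (a sketch, using the example server name and a hypothetical image path):

curl -sI https://srv1.example.com/images/example.jpeg | grep -iE 'expires|cache-control'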