KDE Plasma windows force resize – iKVM virtual keyboard

If you happen to use KDE Plasma these days and you encounter view problems, like not being able to see the whole viewport of a window (especially with Java/GTK based programs?), this tip is for you.

KDE Plasma Desktop offers the ability to force a window to expand to new dimensions.

STEP 1) The Java-based iKVM program window has a handy virtual keyboard.

It can be used to “click on” specific key combinations, which otherwise could be caught by your local system. But sometimes the virtual keyboard window is trimmed and you lose some important keys like Ctrl, Alt, Space, the arrow keys and more (the last row of buttons).

iKVM virtual keyboard trimmed keys

Keep on reading!

Remove disk (all partitions) from software RAID1 with mdadm and change layout of the disk

The following article shows how to remove healthy partitions from software RAID1 devices in order to change the layout of the disk and then add them back to the array.
mdadm is the tool for manipulating software RAID devices under Linux and it is part of all Linux distributions (some don’t install it by default, so it may need to be installed).

Software RAID layout

[root@srv ~]# cat /proc/mdstat 
Personalities : [raid1] 
md125 : active raid1 sda4[1] sdb3[0]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 sdb2[0] sda3[1]
      32867328 blocks super 1.2 [2/2] [UU]
      
md127 : active raid1 sda2[1] sdb1[0]
      52427776 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
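
Before touching the arrays, the detailed state of each one can also be checked with mdadm itself:

mdadm --detail /dev/md125
mdadm --detail --scan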

STEP 1) Make the partitions faulty.

The partitions cannot be removed if they are not faulty.

[root@srv ~]# mdadm --fail /dev/md125 /dev/sdb3
mdadm: set /dev/sdb3 faulty in /dev/md125
[root@srv ~]# mdadm --fail /dev/md126 /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md126
[root@srv ~]# mdadm --fail /dev/md127 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md127
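
Once marked as faulty, the partitions can be detached from their arrays. A short sketch of the likely next step with the same device names (the full procedure continues below):

mdadm /dev/md125 --remove /dev/sdb3
mdadm /dev/md126 --remove /dev/sdb2
mdadm /dev/md127 --remove /dev/sdb1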

Keep on reading!

Build docker image with custom Dockerfile name – docker build requires exactly 1 argument

Docker uses the Dockerfile to build docker images, but what if you want to change the name and (or) the path of this file?
By default, the “docker build” command uses a file named Dockerfile in the directory where you execute “docker build”. There is an option to change the path and name of this special file:

  -f, --file string             Name of the Dockerfile (Default is 'PATH/Dockerfile')

The “-f” option may include a path and a file name, but it is still mandatory to specify the build context (in Docker terminology) at the end of “docker build” – usually the current directory, by adding “.” (the dot at the end of the command).

So if you want to build with a docker file named mydockerfile in the current directory, you must execute:

docker build -f mydockerfile .

If your file is in a sub-directory execute:

docker build -f subdirectory/mydockerfile .

The command will create a docker image in your local repository. Here is the output of the first command:

root@srv:~/docker# docker build -f mydockerfile .
Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM ubuntu:bionic-20191029
bionic-20191029: Pulling from library/ubuntu
7ddbc47eeb70: Pull complete 
c1bbdc448b72: Pull complete 
8c3b70e39044: Pull complete 
45d437916d57: Pull complete 
Digest: sha256:6e9f67fa63b0323e9a1e587fd71c561ba48a034504fb804fd26fd8800039835d
Status: Downloaded newer image for ubuntu:bionic-20191029
 ---> 775349758637
Step 2/3 : MAINTAINER test@example.com
 ---> Running in 5fa42bca749c
Removing intermediate container 5fa42bca749c
 ---> 0a1ffa1728f4
Step 3/3 : RUN apt-get update && apt-get upgrade -y && apt-get install -y git wget
 ---> Running in 2e35040f247c
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
.....
.....
Processing triggers for ca-certificates (20180409) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Removing intermediate container 2e35040f247c
 ---> 2382809739a4
Successfully built 2382809739a4

Here is the image:

root@srv:~# docker images
REPOSITORY                            TAG                 IMAGE ID            CREATED              SIZE
<none>                                <none>              2382809739a4        About a minute ago   186MB

Build command with custom name and registry URL and TAG

root@srv:~# docker build -t gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base:v0.1 -f mydockerfile .
Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM ubuntu:bionic-20191029
 ---> 775349758637
Step 2/3 : MAINTAINER test@example.com
 ---> Using cache
 ---> 0a1ffa1728f4
Step 3/3 : RUN apt-get update && apt-get upgrade -y && apt-get install -y git wget
 ---> Using cache
 ---> 2382809739a4
Successfully built 2382809739a4
Successfully tagged gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base:v0.1
root@srv:~# docker push gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base:v0.1
The push refers to repository [gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base]
7cebba4bf6c3: Pushed 
e0b3afb09dc3: Pushed 
6c01b5a53aac: Pushed 
2c6ac8e5063e: Pushed 
cc967c529ced: Pushed 
v0.1: digest: sha256:acf42078bf46e320c402f09c6417a3dae8992ab4f4f685265486063daf30cb13 size: 1364

The registry URL is “gitlab.ahelpme.com:4567”, the project path is “/root/ubuntu-project/” and the name of the image is “ubuntu18-manual-base” with tag “v0.1”. The build command uses the cache from our first build example above (because the docker file is the same).

Typical errors with “-f”

Here are two errors you may encounter when using “-f” to change the default Dockerfile name:

$ docker build -t gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base:v0.1 -f mydockerfile subdirectory/
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /builds/dev/docker-containers/mydockerfile: no such file or directory

$ docker build -t gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base:v0.1 -f subdirectory/mydockerfile
"docker build" requires exactly 1 argument.
See 'docker build --help'.

Usage:  docker build [OPTIONS] PATH | URL | -

You might think that passing the path and file name to “-f” should be enough, but then the errors above appear – the build context argument is still required!
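
Following the pattern from the examples above, an invocation that avoids both errors passes the Dockerfile path with “-f” and still gives the build context (here the current directory) as the last argument:

docker build -t gitlab.ahelpme.com:4567/root/ubuntu-project/ubuntu18-manual-base:v0.1 -f subdirectory/mydockerfile .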

Our example Dockerfile

This is our simple example docker file:

FROM ubuntu:bionic-20191029
MAINTAINER test@example.com

RUN apt-get update && apt-get upgrade -y && apt-get install -y git wget

We are using the official docker image from Ubuntu. Always use official docker images!

nano – Error opening terminal: unknown under chroot or (docker) container

A quick tip for GNU Nano – the text editor. Have you ever received this terminal error when using nano?

[root@srv ~]# nano                                                        
Error opening terminal: unknown.
[root@srv ~]#

Under a chroot or container (like docker) environment, try exporting the TERM environment variable:

export TERM=linux

And then execute nano again – chances are it will start normally and you can do your work (faster than vim 🙂 ).

You may also set it without exporting it to your shell, just for the nano invocation:

TERM=linux nano
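
If you enter that chroot or container regularly, one option is to persist the variable in the shell profile there (assuming bash is the shell in use):

echo 'export TERM=linux' >> ~/.bashrc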

Save iptables rules over reboots on Ubuntu 16 and Ubuntu 18 – persistent iptables rules

Moving towards firewalld and especially systemd, some good old init scripts went missing! For example, one of those good scripts is the init script for the iptables firewall, which saves the iptables rules and loads them again during boot. With the iptables init script we have persistence of the iptables rules. Meanwhile, we can always call the init script with the “save” argument to update the currently saved rules. Many Linux distributions have this init script – “/etc/init.d/iptables” – but in the systemd world it has been removed and replaced with nothing (probably because you are encouraged to use firewalld, which is not a bad thing!).

There are two packages “iptables-persistent” and “netfilter-persistent”, which work together to have iptables persistence over reboots. The rules are saved and restored automatically during system startup.

First, install “iptables-persistent” and “netfilter-persistent” with

sudo apt install netfilter-persistent iptables-persistent

During the iptables-persistent installation the setup asks whether to save the current iptables rules. Hit “Yes” if you want them saved – they will be automatically loaded the next time the system starts up.

Configuring iptables-persistent setup

So it is safe to install it on a live system – the current iptables rules won’t be deleted.
Second, ensure the service that restores the iptables rules on boot is enabled:

sudo systemctl enable netfilter-persistent

Additional information

Saving the current state of the iptables rules:

myuser@myubuntupc:~$ sudo /usr/sbin/netfilter-persistent save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save

Restore the original state of the iptables rules:

sudo systemctl restart netfilter-persistent

And all commands you can do – start, stop, restart, reload, flush, save. You can use the script directly (it is not mandatory to use systemctl to restart, i.e. restore rules and etc.)

myuser@myubuntupc:~$ sudo /usr/sbin/netfilter-persistent
Usage: /usr/sbin/netfilter-persistent (start|stop|restart|reload|flush|save)

The script netfilter-persistent executes 2 other scripts as plugins:

/usr/share/netfilter-persistent/plugins.d/15-ip4tables
/usr/share/netfilter-persistent/plugins.d/25-ip6tables

The iptables rules are saved respectively in files

/etc/iptables/rules.v4
/etc/iptables/rules.v6

And you can always edit them manually, or save/restore with iptables-save and iptables-restore, redirecting the output to the files above.
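
For example, a quick way to do it by hand (run as root):

iptables-save > /etc/iptables/rules.v4
ip6tables-save > /etc/iptables/rules.v6
iptables-restore < /etc/iptables/rules.v4
ip6tables-restore < /etc/iptables/rules.v6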

The “active (exited)” state is normal. The service is “enabled” as you can see (by default the setup automatically enables the service on Ubuntu, but always check it to be sure – it’s the firewall!).

myuser@myubuntupc:~$ sudo systemctl status netfilter-persistent
● netfilter-persistent.service - netfilter persistent configuration
   Loaded: loaded (/lib/systemd/system/netfilter-persistent.service; enabled; vendor preset: enabled)
   Active: active (exited) since Thu 2019-01-17 20:44:08 EST; 14min ago
 Main PID: 666 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/netfilter-persistent.service

Jan 17 20:44:08 myubuntupc systemd[1]: Starting netfilter persistent configuration...
Jan 17 20:44:08 myubuntupc netfilter-persistent[666]: run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables start
Jan 17 20:44:08 myubuntupc netfilter-persistent[666]: run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables start
Jan 17 20:44:08 myubuntupc systemd[1]: Started netfilter persistent configuration.

nginx proxy cache – log the upstream response server, time, cache status, connect time and more in nginx access logs

The Nginx upstream module exposes embedded variables, which we can log in the Nginx access log files.
Some of the variables are really interesting and could be of great use to system administrators and in general for tuning your systems (a content delivery network, for example). You can log:

  • $upstream_cache_status – the cache status of the object the server served. For each URI the logs show whether the item came from the cache (HIT) or Nginx used an upstream server to get it (MISS)
  • $upstream_response_time – the time the Nginx proxy needed to get the resource from the upstream server
  • $upstream_addr – the Nginx upstream server used for the requested URI in the logs.
  • $upstream_connect_time – the connect time to the specific upstream server used for the request

And there are many more – check the documentation under the heading “Embedded Variables”: http://nginx.org/en/docs/http/ngx_http_upstream_module.html

For example, in peak hours, you can see how the time to get the resource from the upstream servers changes.

And you can subtract that time from the total time your server took to serve the URI to the client.

Of course, you can use this with any upstream setup, not only with the proxy cache! These variables may be used with application backend servers like PHP (FastCGI) application servers and more. A single line in the access log can then carry information not only about the URI but also about the time spent generating the response in the application server.

Example

Logging in JSON format (JSON is just for the example, you can use the default string):

        log_format main3 escape=json '{'
                '"remote_addr":"$remote_addr",'
                '"time_iso8601":"$time_iso8601",'
                '"request_uri":"$request_uri",'
                '"request_length":"$request_length",'
                '"request_method":"$request_method",'
                '"request_time":"$request_time",'
                '"server_port":"$server_port",'
                '"server_protocol":"$server_protocol",'
                '"ssl_protocol":"$ssl_protocol",'
                '"status":"$status",'
                '"bytes_sent":"$bytes_sent",'
                '"http_referer":"$http_referer",'
                '"http_user_agent":"$http_user_agent",'
                '"upstream_response_time":"$upstream_response_time",'
                '"upstream_addr":"$upstream_addr",'
                '"upstream_connect_time":"$upstream_connect_time",'
                '"upstream_cache_status":"$upstream_cache_status",'
                '"tcpinfo_rtt":"$tcpinfo_rtt",'
                '"tcpinfo_rttvar":"$tcpinfo_rttvar"'
                '}';

We included the variables we needed, but there are a lot more – check out the Nginx documentation for the rest.
Just add the above snippet to your Nginx configuration and activate it with the access_log directive:

access_log      /var/log/nginx/example.com-json.log main3;

“main3” is the name of the format and it could be anything you like.

And the logs look like:

{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T13:36:33+00:00","request_uri":"/i/example/bc/bcda7f798ea1c75f18838bc3f0ffbd1c_200.jpg","request_length":"412","request_method":"GET","request_time":"0.325","server_port":"8801","server_protocol":"HTTP/1.1","ssl_protocol":"","status":"404","bytes_sent":"332","http_referer":"https://example.com/test_1","http_user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12","upstream_response_time":"0.324","upstream_addr":"10.10.10.2","upstream_connect_time":"0.077","upstream_cache_status":"MISS","tcpinfo_rtt":"45614","tcpinfo_rttvar":"22807"}
{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T13:36:33+00:00","request_uri":"/i/example/2d/2df5f3dfe1754b3b4ba8ac66159c0384_200.jpg","request_length":"412","request_method":"GET","request_time":"0.242","server_port":"8801","server_protocol":"HTTP/1.1","ssl_protocol":"","status":"404","bytes_sent":"332","http_referer":"https://example.com/test_1","http_user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12","upstream_response_time":"0.242","upstream_addr":"10.10.10.2","upstream_connect_time":"0.000","upstream_cache_status":"MISS","tcpinfo_rtt":"46187","tcpinfo_rttvar":"23093"}
{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T13:36:41+00:00","request_uri":"/flv/example/test_1.ts?st=E05FMg-DSIAgRfVhbadUWQ&e=1568381799&sopt=pdlfwefdfsr","request_length":"357","request_method":"GET","request_time":"0.960","server_port":"8801","server_protocol":"HTTP/1.0","ssl_protocol":"","status":"200","bytes_sent":"3988358","http_referer":"","http_user_agent":"Lavf53.32.100","upstream_response_time":"0.959","upstream_addr":"10.10.10.2","upstream_connect_time":"0.000","upstream_cache_status":"MISS","tcpinfo_rtt":"46320","tcpinfo_rttvar":"91"}
{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T14:09:34+00:00","request_uri":"/flv/example/aee001dce114c88874b306bc73c2d482_1.ts?range=564-1804987","request_length":"562","request_method":"GET","request_time":"0.613","server_port":"8801","server_protocol":"HTTP/1.0","ssl_protocol":"","status":"200","bytes_sent":"5318082","http_referer":"","http_user_agent":"AppleCoreMedia/1.0.0.16E227 (iPad; U; CPU OS 12_2 like Mac OS X; en_us)","upstream_response_time":"","upstream_addr":"","upstream_connect_time":"","upstream_cache_status":"HIT","tcpinfo_rtt":"45322","tcpinfo_rttvar":"295"}

It’s easy to pretty-print them in the console with the “jq” tool:

[root@srv logging]# tail -f 10.10.10.10.log|awk 'BEGIN {FS="{"} {print "{"$2}'|jq "."
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T13:36:33+00:00",
  "request_uri": "/i/example/bc/bcda7f798ea1c75f18838bc3f0ffbd1c_200.jpg",
  "request_length": "412",
  "request_method": "GET",
  "request_time": "0.325",
  "server_port": "8801",
  "server_protocol": "HTTP/1.1",
  "ssl_protocol": "",
  "status": "404",
  "bytes_sent": "332",
  "http_referer": "https://example.com/test_1",
  "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12",
  "upstream_response_time": "0.324",
  "upstream_addr": "10.10.10.2",
  "upstream_connect_time": "0.077",
  "upstream_cache_status": "MISS",
  "tcpinfo_rtt": "45614",
  "tcpinfo_rttvar": "22807"
}
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T13:36:33+00:00",
  "request_uri": "/i/example/2d/2df5f3dfe1754b3b4ba8ac66159c0384_200.jpg",
  "request_length": "412",
  "request_method": "GET",
  "request_time": "0.242",
  "server_port": "8801",
  "server_protocol": "HTTP/1.1",
  "ssl_protocol": "",
  "status": "404",
  "bytes_sent": "332",
  "http_referer": "https://example.com/test_1",
  "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12",
  "upstream_response_time": "0.242",
  "upstream_addr": "10.10.10.2",
  "upstream_connect_time": "0.000",
  "upstream_cache_status": "MISS",
  "tcpinfo_rtt": "46187",
  "tcpinfo_rttvar": "23093"
}
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T13:36:41+00:00",
  "request_uri": "/flv/example/test_1.ts?st=E05FMg-DSIAgRfVhbadUWQ&e=1568381799&sopt=pdlfwefdfsr",
  "request_length": "357",
  "request_method": "GET",
  "request_time": "0.960",
  "server_port": "8801",
  "server_protocol": "HTTP/1.0",
  "ssl_protocol": "",
  "status": "200",
  "bytes_sent": "3988358",
  "http_referer": "",
  "http_user_agent": "Lavf53.32.100",
  "upstream_response_time": "0.959",
  "upstream_addr": "10.10.10.2",
  "upstream_connect_time": "0.000",
  "upstream_cache_status": "MISS",
  "tcpinfo_rtt": "46320",
  "tcpinfo_rttvar": "91"
}
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T14:09:34+00:00",
  "request_uri": "/flv/example/aee001dce114c88874b306bc73c2d482_1.ts?range=564-1804987",
  "request_length": "562",
  "request_method": "GET",
  "request_time": "0.613",
  "server_port": "8801",
  "server_protocol": "HTTP/1.0",
  "ssl_protocol": "",
  "status": "200",
  "bytes_sent": "5318082",
  "http_referer": "",
  "http_user_agent": "AppleCoreMedia/1.0.0.16E227 (iPad; U; CPU OS 12_2 like Mac OS X; en_us)",
  "upstream_response_time": "",
  "upstream_addr": "",
  "upstream_connect_time": "",
  "upstream_cache_status": "HIT",
  "tcpinfo_rtt": "45322",
  "tcpinfo_rttvar": "295"
}

Three misses and one hit – for the hit, three of the upstream variables we used are blank, because the server served the item from its own cache.
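
Having the logs in JSON also makes it easy to compute derived values. For example, the subtraction mentioned earlier (the time spent in Nginx itself versus waiting for the upstream) can be done with a jq one-liner over the access log configured above; it skips the cache hits, where the upstream fields are blank, and assumes a single upstream try per request:

tail -f /var/log/nginx/example.com-json.log | jq -r 'select(.upstream_response_time != "") | "\(.request_uri) \((.request_time|tonumber) - (.upstream_response_time|tonumber))"'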

rsyslog remote logging – prevent local messages from appearing

A tip for those who run a remote server for their log files. When you set up such a remote log server, you probably don’t want local messages to appear in the logging directory (or directories), and here is how you can achieve it:
Above all the rules in the configuration file “/etc/rsyslog.conf” (or wherever it is on your system), include an “if” statement for the local server like this:

# Remote logging
$template HostIPtemp,"/mnt/logging/%FROMHOST-IP%.log"
if ($fromhost-ip != "127.0.0.1" ) then ?HostIPtemp
& stop

The name of the template is “HostIPtemp” and the starting part of the path “/mnt/logging/” may be anything you like.
All remote messages will be written through the template and none of the rules after it will be applied to them, because we use the “stop” instruction.

That’s why this rule must be placed above all the other rules in the configuration – you will probably find a commented line “#### RULES ####” marking where they begin.

The above configuration produces the following directory structure:

[root@srv ~]# ls -altr /mnt/logging/
total 2792
drwxr-xr-x. 7 root root    4096 12 Sep 10,05 ..
drwxr-xr-x. 2 root root    4096 12 Sep 13,01 .
-rw-------. 1 root root 2844525 12 Sep 13,01 10.10.10.10.log
-rw-------. 1 root root 1245633 12 Sep 13,01 10.10.10.11.log
-rw-------. 1 root root 9722578 12 Sep 13,01 10.10.10.12.log
-rw-------. 1 root root 1127231 12 Sep 13,01 10.10.10.13.log
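
After changing the configuration, restart rsyslog so the new rule takes effect (the common service name is used below; adjust it for your distribution):

systemctl restart rsyslog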

Live status information like used space and more for nginx proxy cache

Using the Nginx virtual host traffic status module you can have extended live information for your proxy cache module and the proxy cache upstream servers. We have covered how to install the module here – Install Nginx virtual host traffic status module – traffic information in Nginx and more per server block and upstreams – and this article just shows what information you can get when using the module together with the proxy cache (and its upstream servers).
In general, there is no live information about the Nginx proxy cache. Of course, by the space occupied on disk you can guess how much space your Nginx cache takes (or, when you restart or upgrade Nginx, it reinitializes the cache and writes the numbers to the error log when finished). With the “Nginx virtual host traffic status module” – https://github.com/vozlt/nginx-module-vts – you get an additional status page that also contains information for the proxy cache module (we include only the proxy cache part here; for more, look at the other article mentioned above):

Per upstream server

  • state – up, down, backup server and so on.
  • Response Time – the time the server responded last time. You can use this to see how far away your server is and to detect problems with your upstream connectivity.
  • Weight – the weight of the server in the group. It’s from the configuration file.
  • Max Fails – the maximum number of failed attempts before the upstream is blacklisted for the “Fail Timeout” time. It’s from the configuration file.
  • Fail Timeout – the time for which the server will be blacklisted and also the window within which all the fails (from Max Fails) must occur. It’s from the configuration file.
  • Requests – Total (from the start of the server), Req/s (requests per second at the moment of loading the extended status page) and Time.
  • Responses – Total and split by status code classes – 1xx, 2xx, 3xx, 4xx, 5xx.
  • Traffic – Sent (total sent from the start of the server), Rcvd (total received from the start of the server), Sent/s (sent per second at the moment), Rcvd/s (received per second at the moment).

Per cache – i.e. key zone name

  • Size – Capacity (the capacity from the configuration file) and Used (the used space, updated live!). With this you can see the used space of your zones just by loading a page – the extended status page of this module!
  • Traffic – Sent (total sent from the start of the server), Rcvd (total received from the start of the server), Sent/s (sent per second at the moment), Rcvd/s (received per second at the moment).
  • Cache – Miss, Bypass, Expired, Stale, Updating, Revalidated, Hit, Scarce, Total – they are all self-explanatory and all counters are from the start of the server.

You can compute the effectiveness of your cache for a period of time. For example, you can make different graphs based on this data for long periods and for short periods like peak and off-peak hours. We might have an article on the subject.

SCREENSHOT 1) Cache with three cache zones and two upstream servers – main and backup

As you can see, our biggest zone has 2.92T occupied, which is 100% of the available space, so the cache manager is probably deleting items at the moment. The hits are 24551772 and the total is 28023927, so the ratio is 87.6%! 87.6% of the requests for this zone are served without the need to touch the upstream servers. In the first cache zone we have more aggressive expiry times, so 21% of the requests were updated.
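
The hit ratio arithmetic behind those numbers is simply Hit divided by Total, for example (this prints 87.6%):

awk 'BEGIN { printf "%.1f%%\n", 24551772 / 28023927 * 100 }'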


Keep on reading!

root cannot delete, move or change a file – Operation not permitted or Permission denied – immutable attribute

If you are the root user and some file (or files, or directories) cannot be deleted, moved, renamed or changed, you are probably dealing with the immutable attribute being set (by a colleague of yours – installation setups tend not to set such attributes).

Here is what it looks like to have such a file

root@srv.remote /etc/apache2/vhosts.d # mv example.com.conf /root/old/apache/
mv: cannot move `example.com.conf' to `/root/old/apache/example.com.conf': Operation not permitted
root@srv.remote /etc/apache2/vhosts.d # lsattr example.com.conf
----i--------e- example.com.conf
root@srv.remote /etc/apache2/vhosts.d # rm example.com.conf
rm: cannot remove `example.com.conf': Operation not permitted
root@srv.remote /etc/apache2/vhosts.d # echo "teeest" >> example.com.conf
-bash: example.com.conf: Permission denied

Here is how you can set the attribute off.

You first need to clear the file’s immutable attribute and then do whatever you intended to do in the first place – delete, rename, change and so on.

chattr -i filename.txt

In continuation of our example above:

root@srv.remote /etc/apache2/vhosts.d # chattr -i example.com.conf
root@srv.remote /etc/apache2/vhosts.d # lsattr example.com.conf
-------------e- example.com.conf
root@srv.remote /etc/apache2/vhosts.d # mv example.com.conf /root/old/apache/
root@srv.remote /etc/apache2/vhosts.d #

As you can see: no immutable attribute, no problem moving the file!
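
And if the file should be protected again afterwards, the immutable attribute can be set back with the same tool:

chattr +i /root/old/apache/example.com.conf
lsattr /root/old/apache/example.com.conf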

And just a note: you need to install the package named e2fsprogs (not always in the default installation) on your Linux distribution to have lsattr, chattr and more!

openntpd – immediately sync the clock on startup

Here is our simple tip for your healthy server’s date and time:

Immediately synchronize the clock of your computer when using openntpd (a lightweight alternative to ntpd, typically used in client-only mode).

Use the “-s” option (a lowercase “s”) to instruct the ntpd daemon to synchronize the clock immediately at startup, as soon as it gets a reply from a healthy time server!

-s          Try to set the time immediately at startup, as opposed to slowly adjusting the clock.  ntpd will stay in the foreground for up to 15 seconds waiting
                 for one of the configured NTP servers to reply.

Find the start-up configuration file under “/etc” for your Linux distribution (its name is probably ntpd; for Gentoo it is “/etc/conf.d/ntpd” – the point is to find the start-up configuration script, not ntpd.conf, which is the configuration file of the daemon itself) and include “-s” in NTPD_OPTS:

cat /etc/conf.d/ntpd 
# /etc/conf.d/ntpd: config file for openntpd's ntpd

NTPD_OPTS="-s"

Restart the service.
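
For example (the service name differs between distributions, so adjust it accordingly):

rc-service ntpd restart        # Gentoo / OpenRC, matching the /etc/conf.d/ntpd file above
systemctl restart openntpd     # a systemd-based distribution; the unit name may differ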

If you use it in a virtualized environment like containers (docker, lxc, lxd and so on) or virtual machines (qemu, virtualbox, vmware and so on) and you often suspend the machine, you must manually restart the openntpd service to synchronize the clock when you resume!!! Otherwise you are going to wait for the time to be adjusted slowly, as usual.

Status information

There is a utility to check what’s going on with openntpd – ntpctl. It has only a handful of read-only commands:

usage: ntpctl -s all | peers | Sensors | status

Keep on reading!