nano – Error opening terminal: unknown under chroot or container (like Docker)

A quick tip for GNU nano, the text editor. Have you ever received this terminal error when starting nano?

[root@srv ~]# nano                                                        
Error opening terminal: unknown.
[root@srv ~]#

In a chroot or container (like Docker) environment, try exporting the TERM environment variable:

export TERM=linux

Then execute nano – chances are you will enter nano normally and do your work (faster than vim 🙂 ).

You may also set it just for the nano invocation, without exporting it to your shell:

TERM=linux nano
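
If you enter the chroot or container regularly, you can make this permanent by appending the export to the shell profile inside the environment (a minimal sketch, assuming bash and the root user):

echo 'export TERM=linux' >> /root/.bashrc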

Save iptables rules over reboots on Ubuntu 16 and Ubuntu 18 – persistent iptables rules

With the move towards firewalld and especially systemd, some good old init scripts went missing! One of those good scripts is the init script for the iptables firewall, which saves the iptables rules and loads them again during boot. With the iptables init script we have persistence of the iptables rules, and we can always call it with the “save” argument to update the currently saved rules. Many Linux distributions used to have this init script – “/etc/init.d/iptables” – but in the systemd world it has been removed and replaced with nothing (probably because you are encouraged to use firewalld, which is not a bad thing!).

There are two packages, “iptables-persistent” and “netfilter-persistent”, which work together to provide iptables persistence over reboots. The rules are saved and restored automatically during system startup.

First, install “iptables-persistent” and “netfilter-persistent” with

sudo apt install netfilter-persistent iptables-persistent

During the iptables-persistent installation the setup asks the user to save the current iptables rules. Hit “Yes” if you want to save the current iptables rules, which will be automatically loaded the next time the system starts up.

Screenshot: Configuring iptables-persistent setup

So it is safe to install it on a live system – the current iptables rules won’t be deleted.
Second, ensure the boot script to restore the iptables rules is enabled

sudo systemctl enable netfilter-persistent

Additional information

Saving the current state of the iptables rules:

myuser@myubuntupc:~$ sudo /usr/sbin/netfilter-persistent save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save

Restore the original state of the iptables rules:

sudo systemctl restart netfilter-persistent

All the commands you can use are start, stop, restart, reload, flush and save. You can call the script directly – it is not mandatory to use systemctl to restart (i.e. restore) the rules:

myuser@myubuntupc:~$ sudo /usr/sbin/netfilter-persistent
Usage: /usr/sbin/netfilter-persistent (start|stop|restart|reload|flush|save)

The script netfilter-persistent executes 2 other scripts as plugins:

/usr/share/netfilter-persistent/plugins.d/15-ip4tables
/usr/share/netfilter-persistent/plugins.d/25-ip6tables

The iptables rules are saved respectively in files

/etc/iptables/rules.v4
/etc/iptables/rules.v6

And you can always edit them manually, or save/restore with iptables-save and iptables-restore, redirecting the output to the above files.
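
For example, a manual save and restore with the standard tools could look like this (redirecting to the same files the plugins use):

# save the current rules manually
iptables-save > /etc/iptables/rules.v4
ip6tables-save > /etc/iptables/rules.v6
# restore them later
iptables-restore < /etc/iptables/rules.v4
ip6tables-restore < /etc/iptables/rules.v6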

The “active (exited)” state is normal. The service is “enabled”, as you can see (by default the setup automatically enables the service on Ubuntu, but always check it to be sure – it’s the firewall!).

myuser@myubuntupc:~$ sudo systemctl status netfilter-persistent
● netfilter-persistent.service - netfilter persistent configuration
   Loaded: loaded (/lib/systemd/system/netfilter-persistent.service; enabled; vendor preset: enabled)
   Active: active (exited) since Thu 2019-01-17 20:44:08 EST; 14min ago
 Main PID: 666 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/netfilter-persistent.service

Jan 17 20:44:08 myubuntupc systemd[1]: Starting netfilter persistent configuration...
Jan 17 20:44:08 myubuntupc netfilter-persistent[666]: run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables start
Jan 17 20:44:08 myubuntupc netfilter-persistent[666]: run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables start
Jan 17 20:44:08 myubuntupc systemd[1]: Started netfilter persistent configuration.

nginx proxy cache – log the upstream response server, time, cache status, connect time and more in nginx access logs

The Nginx upstream module exposes embedded variables, which we can log in the Nginx access log files.
Some of the variables are really interesting and could be of great use to system administrators and, in general, to tune your systems (a content delivery network?). For example, you can log

  • $upstream_cache_status – the cache status of the object the server served. For each URI you will have in the logs if the item is from the cache (HIT) or the Nginx used an upstream server to get the item (MISS)
  • $upstream_response_time – the time Nginx proxy needed to get the resource from the upstream server
  • $upstream_addr – the Nginx upstream server used for the requested URI in the logs.
  • $upstream_connect_time – the time spent establishing a connection with the specific upstream server

And many more – check the documentation under the heading “Embedded Variables” at the bottom of http://nginx.org/en/docs/http/ngx_http_upstream_module.html

For example, in peak hours, you can see how the time to get the resource from the upstream servers changes.

And you can subtract that time from the total time your server needed to serve the URI to the client.

Of course, you can use this with any upstream case, not only with proxy cache! These variables may be used with application backend servers like PHP (FastCGI) application servers and more. A single line in the access log file could then contain information not only about the URI but also about the time spent generating the response in the application server.

Example

Logging in JSON format (JSON is just for the example, you can use the default string):

        log_format main3 escape=json '{'
                '"remote_addr":"$remote_addr",'
                '"time_iso8601":"$time_iso8601",'
                '"request_uri":"$request_uri",'
                '"request_length":"$request_length",'
                '"request_method":"$request_method",'
                '"request_time":"$request_time",'
                '"server_port":"$server_port",'
                '"server_protocol":"$server_protocol",'
                '"ssl_protocol":"$ssl_protocol",'
                '"status":"$status",'
                '"bytes_sent":"$bytes_sent",'
                '"http_referer":"$http_referer",'
                '"http_user_agent":"$http_user_agent",'
                '"upstream_response_time":"$upstream_response_time",'
                '"upstream_addr":"$upstream_addr",'
                '"upstream_connect_time":"$upstream_connect_time",'
                '"upstream_cache_status":"$upstream_cache_status",'
                '"tcpinfo_rtt":"$tcpinfo_rtt",'
                '"tcpinfo_rttvar":"$tcpinfo_rttvar"'
                '}';

We included the variables we needed, but there are a lot more – check out the Nginx documentation.
Just add the above snippet to your Nginx configuration and activate it with the access_log directive:

access_log      /var/log/nginx/example.com-json.log main3;

“main3” is the name of the format and it could be anything you like.
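
Note that the log_format directive is only valid in the http context, while access_log can also go in a server or location block. A minimal sketch of where the two pieces fit (the server name and paths are placeholders, and the format string is shortened here):

http {
    # the full log_format snippet from above belongs here, in the http context
    log_format main3 escape=json '{"remote_addr":"$remote_addr","status":"$status"}';

    server {
        listen      80;
        server_name example.com;
        access_log  /var/log/nginx/example.com-json.log main3;
    }
}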

And the logs look like:

{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T13:36:33+00:00","request_uri":"/i/example/bc/bcda7f798ea1c75f18838bc3f0ffbd1c_200.jpg","request_length":"412","request_method":"GET","request_time":"0.325","server_port":"8801","server_protocol":"HTTP/1.1","ssl_protocol":"","status":"404","bytes_sent":"332","http_referer":"https://example.com/test_1","http_user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12","upstream_response_time":"0.324","upstream_addr":"10.10.10.2","upstream_connect_time":"0.077","upstream_cache_status":"MISS","tcpinfo_rtt":"45614","tcpinfo_rttvar":"22807"}
{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T13:36:33+00:00","request_uri":"/i/example/2d/2df5f3dfe1754b3b4ba8ac66159c0384_200.jpg","request_length":"412","request_method":"GET","request_time":"0.242","server_port":"8801","server_protocol":"HTTP/1.1","ssl_protocol":"","status":"404","bytes_sent":"332","http_referer":"https://example.com/test_1","http_user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12","upstream_response_time":"0.242","upstream_addr":"10.10.10.2","upstream_connect_time":"0.000","upstream_cache_status":"MISS","tcpinfo_rtt":"46187","tcpinfo_rttvar":"23093"}
{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T13:36:41+00:00","request_uri":"/flv/example/test_1.ts?st=E05FMg-DSIAgRfVhbadUWQ&e=1568381799&sopt=pdlfwefdfsr","request_length":"357","request_method":"GET","request_time":"0.960","server_port":"8801","server_protocol":"HTTP/1.0","ssl_protocol":"","status":"200","bytes_sent":"3988358","http_referer":"","http_user_agent":"Lavf53.32.100","upstream_response_time":"0.959","upstream_addr":"10.10.10.2","upstream_connect_time":"0.000","upstream_cache_status":"MISS","tcpinfo_rtt":"46320","tcpinfo_rttvar":"91"}
{"remote_addr":"10.10.10.10","time_iso8601":"2019-09-12T14:09:34+00:00","request_uri":"/flv/example/aee001dce114c88874b306bc73c2d482_1.ts?range=564-1804987","request_length":"562","request_method":"GET","request_time":"0.613","server_port":"8801","server_protocol":"HTTP/1.0","ssl_protocol":"","status":"200","bytes_sent":"5318082","http_referer":"","http_user_agent":"AppleCoreMedia/1.0.0.16E227 (iPad; U; CPU OS 12_2 like Mac OS X; en_us)","upstream_response_time":"","upstream_addr":"","upstream_connect_time":"","upstream_cache_status":"HIT","tcpinfo_rtt":"45322","tcpinfo_rttvar":"295"}

It’s easy to pretty-print them in the console with the “jq” tool:

[root@srv logging]# tail -f 10.10.10.10.log|awk 'BEGIN {FS="{"} {print "{"$2}'|jq "."
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T13:36:33+00:00",
  "request_uri": "/i/example/bc/bcda7f798ea1c75f18838bc3f0ffbd1c_200.jpg",
  "request_length": "412",
  "request_method": "GET",
  "request_time": "0.325",
  "server_port": "8801",
  "server_protocol": "HTTP/1.1",
  "ssl_protocol": "",
  "status": "404",
  "bytes_sent": "332",
  "http_referer": "https://example.com/test_1",
  "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12",
  "upstream_response_time": "0.324",
  "upstream_addr": "10.10.10.2",
  "upstream_connect_time": "0.077",
  "upstream_cache_status": "MISS",
  "tcpinfo_rtt": "45614",
  "tcpinfo_rttvar": "22807"
}
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T13:36:33+00:00",
  "request_uri": "/i/example/2d/2df5f3dfe1754b3b4ba8ac66159c0384_200.jpg",
  "request_length": "412",
  "request_method": "GET",
  "request_time": "0.242",
  "server_port": "8801",
  "server_protocol": "HTTP/1.1",
  "ssl_protocol": "",
  "status": "404",
  "bytes_sent": "332",
  "http_referer": "https://example.com/test_1",
  "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/603.3.8 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12",
  "upstream_response_time": "0.242",
  "upstream_addr": "10.10.10.2",
  "upstream_connect_time": "0.000",
  "upstream_cache_status": "MISS",
  "tcpinfo_rtt": "46187",
  "tcpinfo_rttvar": "23093"
}
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T13:36:41+00:00",
  "request_uri": "/flv/example/test_1.ts?st=E05FMg-DSIAgRfVhbadUWQ&e=1568381799&sopt=pdlfwefdfsr",
  "request_length": "357",
  "request_method": "GET",
  "request_time": "0.960",
  "server_port": "8801",
  "server_protocol": "HTTP/1.0",
  "ssl_protocol": "",
  "status": "200",
  "bytes_sent": "3988358",
  "http_referer": "",
  "http_user_agent": "Lavf53.32.100",
  "upstream_response_time": "0.959",
  "upstream_addr": "10.10.10.2",
  "upstream_connect_time": "0.000",
  "upstream_cache_status": "MISS",
  "tcpinfo_rtt": "46320",
  "tcpinfo_rttvar": "91"
}
{
  "remote_addr": "10.10.10.10",
  "time_iso8601": "2019-09-12T14:09:34+00:00",
  "request_uri": "/flv/example/aee001dce114c88874b306bc73c2d482_1.ts?range=564-1804987",
  "request_length": "562",
  "request_method": "GET",
  "request_time": "0.613",
  "server_port": "8801",
  "server_protocol": "HTTP/1.0",
  "ssl_protocol": "",
  "status": "200",
  "bytes_sent": "5318082",
  "http_referer": "",
  "http_user_agent": "AppleCoreMedia/1.0.0.16E227 (iPad; U; CPU OS 12_2 like Mac OS X; en_us)",
  "upstream_response_time": "",
  "upstream_addr": "",
  "upstream_connect_time": "",
  "upstream_cache_status": "HIT",
  "tcpinfo_rtt": "45322",
  "tcpinfo_rttvar": "295"
}

3 misses and 1 hit; for the hit, 3 of the upstream variables we used are blank, because the server took the item from the cache without contacting an upstream.
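
With such structured logs, quick statistics are a one-liner away. For example, counting the cache statuses for a whole log file (a sketch – adjust the file name to your logs):

awk 'BEGIN {FS="{"} {print "{"$2}' 10.10.10.10.log | jq -r '.upstream_cache_status' | sort | uniq -c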

rsyslog remote logging – prevent local messages from appearing

A tip for those who have a remote server for their log files. When you set up a remote log server, you probably don’t want local messages to appear in the logging directory (directories), and here is how you can achieve it:
Above all the rules in the configuration file “/etc/rsyslog.conf” (or wherever it is on your system), include an “if” statement for the local server like this:

# Remote logging
$template HostIPtemp,"/mnt/logging/%FROMHOST-IP%.log"
if ($fromhost-ip != "127.0.0.1" ) then ?HostIPtemp
& stop

The name of the template is “HostIPtemp” and the starting part of the path “/mnt/logging/” may be anything you like.
All the remote messages will be redirected to the template, and the rules after this one won’t be applied to them because we use the “stop” instruction.

That’s why this rule must be placed above all other rules in the configuration – you will probably find a commented line with “#### RULES ####” marking where they begin.

The above configuration results in the following directory structure:

[root@srv ~]# ls -altr /mnt/logging/
total 2792
drwxr-xr-x. 7 root root    4096 12 Sep 10,05 ..
drwxr-xr-x. 2 root root    4096 12 Sep 13,01 .
-rw-------. 1 root root 2844525 12 Sep 13,01 10.10.10.10.log
-rw-------. 1 root root 1245633 12 Sep 13,01 10.10.10.11.log
-rw-------. 1 root root 9722578 12 Sep 13,01 10.10.10.12.log
-rw-------. 1 root root 1127231 12 Sep 13,01 10.10.10.13.log
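
For completeness, the remote messages only arrive if the reception modules are loaded on the log server. A minimal sketch for “/etc/rsyslog.conf” (assuming the standard syslog port 514 over UDP and TCP):

# receive remote syslog messages over UDP and TCP
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")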

Live status information like used space and more for nginx proxy cache

Using the Nginx virtual host traffic status module you can have extended live information for your proxy cache module and the proxy cache upstream servers. We have covered how to install the module here – Install Nginx virtual host traffic status module – traffic information in Nginx and more per server block and upstreams – and this article just shows what information you could have using the module with the proxy cache module (and the upstream servers).
In general, there is no live information about the Nginx proxy cache. Of course, by the space occupied on the disk you can guess how much space is taken by your Nginx cache (or when you restart or upgrade Nginx, it reinitializes the cache and, when finished, writes the numbers to the error log). With the “Nginx virtual host traffic status module” – https://github.com/vozlt/nginx-module-vts – you get an additional status information page containing information for the proxy cache module, too (we included only the proxy cache parts here; for more, look at the other article mentioned above):

Per upstream server

  • state – up, down, backup server and so on.
  • Response Time – the time the server responded last time. You can use this to see how far away your server is and to detect problems with your upstream connectivity.
  • Weight – the weight of the server in the group. It’s from the configuration file.
  • Max Fails – the max fail attempts before it blacklists the upstream for “Fail Timeout” time. It’s from the configuration file.
  • Fail Timeout – the time for which the server will be blacklisted, and also the window in which all the fails (from Max Fails) must occur. It’s from the configuration file.
  • Requests – Total (from the start of the server), Req/s (requests per second at the moment of loading the extended status page), Time.
  • Responses – Total, split by status code class – 1xx, 2xx, 3xx, 4xx, 5xx.
  • Traffic – Sent (total sent from the start of the server), Rcvd (total received from the start of the server), Sent/s (sent per second at the moment), Rcvd/s (received per second at the moment).

Per cache – i.e. key zone name

  • Size – Capacity (from the configuration file) and Used – the used space, updated live! With this you have access to the used space of your zones just by loading the extended status page of this module!
  • Traffic – Sent (total sent from the start of the server), Rcvd (total received from the start of the server), Sent/s (sent per second at the moment), Rcvd/s (received per second at the moment).
  • Cache – Miss, Bypass, Expired, Stale, Updating, Revalidated, Hit, Scarce, Total – they are all self-explanatory and all counters are from the start of the server.

You can compute the effectiveness of your cache for a period of time. For example, you can build different graphs based on this data for long periods and for short periods like peaks and off-peaks. We might have an article on the subject.
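
For automation, the module can also emit the same counters in machine-readable form – assuming the status page is configured under “/status”, a JSON endpoint is available at “/status/format/json” (the exact field layout may vary between module versions):

# dump the per-zone cache counters (used size, hit/miss responses and so on)
curl -s http://srv.local/status/format/json | jq '.cacheZones'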

SCREENSHOT 1) Cache with three cache zones and two upstream servers – main and backup

As you can see, our biggest zone has 2.92T occupied, which is 100% of the available space, so the cache manager is probably deleting items at the moment. The hits are 24551772 and the total is 28023927, so the hit ratio is 87.6% – 87.6% of the requests for this zone are served without the need to touch the upstream servers. In the first cache zone we have more aggressive expiration times, so 21% of the requests were updated.


Keep on reading!

root cannot delete, move or change a file – Operation not permitted or Permission denied – immutable attribute

If you are the root user and some file (files or directories) cannot be deleted, moved, renamed or changed, you are probably dealing with the immutable attribute being set (by a colleague of yours – installation setups tend not to set such attributes).

Here is what it looks like with such a file:

root@srv.remote /etc/apache2/vhosts.d # mv example.com.conf /root/old/apache/
mv: cannot move `example.com.conf' to `/root/old/apache/example.com.conf': Operation not permitted
root@srv.remote /etc/apache2/vhosts.d # lsattr example.com.conf
----i--------e- example.com.conf
root@srv.remote /etc/apache2/vhosts.d # rm example.com.conf
rm: cannot remove `example.com.conf': Operation not permitted
root@srv.remote /etc/apache2/vhosts.d # echo "teeest" >> example.com.conf
-bash: example.com.conf: Permission denied

Here is how you can turn the attribute off.

You need first to remove the file’s immutable attribute and then to do whatever you intended to do in the first place – delete, rename, change and so on:

chattr -i filename.txt

In continuation of our example above:

root@srv.remote /etc/apache2/vhosts.d # chattr -i example.com.conf
root@srv.remote /etc/apache2/vhosts.d # lsattr example.com.conf
-------------e- example.com.conf
root@srv.remote /etc/apache2/vhosts.d # mv example.com.conf /root/old/apache/
root@srv.remote /etc/apache2/vhosts.d #

As you can see – no immutable attribute, no problem moving the file!

And just to note: you need to install a package with the name e2fsprogs (not always in the default installation) in your Linux distribution to have lsattr, chattr and more!
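
If you suspect there are more files like this, you can scan a directory tree for the flag (a sketch – the path is just an example, and the “i” position matches the lsattr output above):

# recursively list attributes and keep only entries with the immutable flag set
lsattr -R /etc/apache2 2>/dev/null | grep -- '^----i'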

openntpd – immediately sync the clock on startup

Here is our simple tip for your healthy server’s date and time:

Immediately synchronize the clock of your computer when using openntpd (a lightweight alternative to ntpd, which can run in client-only mode).

Use the “-s” option (a lowercase “s”) to instruct the ntpd daemon to synchronize the clock immediately after it discovers a healthy time server!

-s      Try to set the time immediately at startup, as opposed to slowly adjusting the clock. ntpd will stay in the foreground for up to 15 seconds waiting for one of the configured NTP servers to reply.

Find the start-up configuration file in your “/etc” (the name depends on your Linux distribution, it is probably ntpd; for Gentoo it is “/etc/conf.d/ntpd”). The thing is to find the start-up configuration script, not ntpd.conf, which is the configuration file for the daemon itself. Then include “-s” in NTPD_OPTS:

cat /etc/conf.d/ntpd 
# /etc/conf.d/ntpd: config file for openntpd's ntpd

NTPD_OPTS="-s"

Restart the service.
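
The exact restart command depends on your init system (the service name may differ per distribution):

# Gentoo / OpenRC
rc-service ntpd restart
# systemd-based distributions (the unit is often named openntpd)
systemctl restart openntpd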

If you use it in a virtualized environment like containers (docker, lxc, lxd and so on) or qemu, virtualbox, vmware and so on, and you often suspend the machine, you must manually restart the openntpd service when you resume it to synchronize the clock!!! Otherwise, you are going to wait for the slow clock adjustment as usual.

Status information

There is a utility to check what’s going on with openntpd – ntpctl. It has only a few read-only commands:

usage: ntpctl -s all | peers | Sensors | status

Keep on reading!

grep – find files, which have no match in their entire content

Here is a quick tip for a very useful option, which is not widely known!
If you want to search for a matching string in a file you can use “grep” to look for lines in the file with the matching string, but what if you would like to search for files, which DO NOT contain the search string?
The Unix-world command grep has the option “-L”, which will output the names of the files not containing the search string:

-L, --files-without-match – Suppress normal output; instead print the name of each input file from which no output would normally have been printed. The scanning will stop on the first match.

The quote is from grep’s man page!
Some simple examples:

myuser@srv:~/tmp$ grep -L DISTRIB_DESCRIPTION *
10.10.10.10
10.10.10.11
10.10.10.12
10.10.10.13
10.10.10.50
myuser@srv:~/tmp$ grep DISTRIB_DESCRIPTION *
10.10.10.14:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.15:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.16:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.17:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.18:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.19:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.20:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.51:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.52:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
10.10.10.53:DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"

As you can see, including the “-L” option outputs only the names of files in which no match is found. Without the option, the output shows all files and the lines matching the search string.
In addition, if you use “-l” (note this time the letter “l” is lowercase), the search in each file stops on the first match, so when searching in multiple files each matching file’s name is printed only once. In contrast, without “-l” the output repeats a file’s name for every matching line:

myuser@srv:~/tmp$ grep VERSION *
10.10.10.14:VERSION="16.04.5 LTS (Xenial Xerus)"
10.10.10.14:VERSION_ID="16.04"
10.10.10.14:VERSION_CODENAME=xenial
10.10.10.15:VERSION="16.04.5 LTS (Xenial Xerus)"
10.10.10.15:VERSION_ID="16.04"
10.10.10.15:VERSION_CODENAME=xenial
10.10.10.16:VERSION="16.04.5 LTS (Xenial Xerus)"
10.10.10.16:VERSION_ID="16.04"
10.10.10.16:VERSION_CODENAME=xenial
myuser@srv:~/tmp$ grep -l VERSION *
10.10.10.14
10.10.10.15
10.10.10.16
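
The option also combines with recursive search. For example, to print all files under a directory that do not contain the string anywhere (the path is just an example):

grep -rL "DISTRIB_DESCRIPTION" ./tmp/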

List Openstack container’s options with the swift command-line client – capabilities command

First, you need to install the swift command-line client to manage your account – see Install OpenStack swift client only.

With the capabilities command you may discover important policies and limits of your account, like:

  • Listing limits – how many files (objects) will be in the output when using the list command.
  • The maximum file size, which is supported by the server.
  • Maximum files (objects) for deletion per a single request – how many files you can delete with a single request. It is very convenient to put thousands of files in one request rather than initiating an HTTP(S) connection for each file (object), which could mean thousands, or even millions, of connections!
  • Additional plugins (in terms of OpenStack – middleware), which are supported.
  • Maximum container name length.

and many more.

In general, you will need:

  1. username (--os-username) – Username
  2. password (--os-password) – Password
  3. authentication url (--os-auth-url) – The URL, which authorizes your requests; it generates a security token for your operations. Always use https!
  4. tenant name (--os-tenant-name) – Tenant is like a project.

All of the above information should be available from your OpenStack administrator.
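
To avoid repeating the credentials on every command, the swift client can also read them from environment variables (a sketch with the placeholder values from the example below):

export OS_USERNAME=myusr
export OS_TENANT_NAME=myusr
export OS_PASSWORD=mypass
export OS_AUTH_URL=https://auth01.example.com:5000/v2.0
swift capabilities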
Here is an example output of the capabilities command:

myuser@myserver:~$ swift --os-username myusr --os-tenant-name myusr --os-password mypass --os-auth-url https://auth01.example.com:5000/v2.0 capabilities
Core: swift
 Options:
  account_autocreate: True
  account_listing_limit: 20000
  allow_account_management: False
  container_listing_limit: 20000
  extra_header_count: 0
  max_account_name_length: 256
  max_container_name_length: 256
  max_file_size: 5368709122
  max_header_size: 8192
  max_meta_count: 90
  max_meta_name_length: 128
  max_meta_overall_size: 4096
  max_meta_value_length: 256
  max_object_name_length: 1024
  policies: [{'name': 'Policy-0', 'default': True}]
  strict_cors_mode: True
Additional middleware: bulk_delete
 Options:
  max_deletes_per_request: 20000
Additional middleware: bulk_upload
 Options:
  max_containers_per_extraction: 20000
  max_failed_extractions: 1000
Additional middleware: container_sync
 Options:
  realms: {}
Additional middleware: crossdomain
Additional middleware: formpost
Additional middleware: keystoneauth
Additional middleware: slo
 Options:
  max_manifest_segments: 1000
  max_manifest_size: 2097152
  min_segment_size: 1048576
Additional middleware: staticweb

You can see various middleware is activated with specific options – bulk_upload to upload multiple files with one request (a list of files), bulk_delete to delete multiple files per request, and so on.

ansible – using ansible vault with copy module to decrypt on-the-fly files

Here is an interesting tip for all who want to protect sensitive information with ansible. Our example is simple enough – we want to protect our private key and decrypt it when installing on the server. The copy ansible module has a decrypt feature, and it can decrypt the file on-the-fly when the task is executed.
Here is how to use ansible vault to encrypt the file with the private key and the ansible playbook file to copy the file.

If you are a newbie in ansible you can check this article – First ansible use – install and execute a single command or multiple tasks in a playbook. There you can see how to create your inventory file (and configure sudo if you remotely log in with an unprivileged user) used here in the example.

STEP 1) Encrypt the file with ansible vault

myuser@srv ~ $ ansible-vault encrypt server.key
New Vault password: 
Confirm New Vault password: 
Encryption successful

You can see the file is now changed and starts with:

myuser@srv ~ $ cat server.key 
$ANSIBLE_VAULT;1.1;AES256
62363263663865646361643461663531373637386631646262366333663831643435633263363336
3735326665326363356566303566626638316662376432640a326362326230353966353431383164
35353531653331306430656562616165353632643330393662313535326438363964303436306639
....
....
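
You do not need to decrypt the file on the control machine to inspect or change it – ansible-vault can do it in place:

# show the decrypted content without modifying the file
ansible-vault view server.key
# edit the content in $EDITOR and re-encrypt it on save
ansible-vault edit server.key
# change the vault password of the file
ansible-vault rekey server.key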

STEP 2) Ansible playbook file to use copy and decrypt option

---
- hosts: all
  tasks:
    - name: Copy server private key
      copy:
        src: server.key            # the vault-encrypted file on the control machine
        dest: /etc/env/server.key
        decrypt: yes               # decrypt the vault-encrypted source on-the-fly
        owner: root
        group: root
        mode: "0400"               # quote the mode - an unquoted 400 is read as a decimal number
        backup: no

STEP 3) Execute the ansible playbook

myuser@srv ~ $ ansible-playbook --ask-vault-pass -l srv3 -i ./inventory.ini ./playbook-example.yml -b
Vault password: 

PLAY [all] *****************************************************************************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************************************************************
ok: [srv3]

TASK [Copy server private key] *********************************************************************************************************************************************
changed: [srv3]

PLAY RECAP *****************************************************************************************************************************************************************
srv3                       : ok=2    changed=1    unreachable=0    failed=0   

And the file on the remote server (srv3 in the example) is decrypted in /etc/env/server.key!
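
For non-interactive runs, the vault password can be supplied from a file instead of the prompt – keep that file private (the path is just an example):

# ~/.vault_pass.txt holds the vault password on a single line
chmod 600 ~/.vault_pass.txt
ansible-playbook --vault-password-file ~/.vault_pass.txt -l srv3 -i ./inventory.ini ./playbook-example.yml -b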