nginx and proxy_cache not growing the cache despite max_size being big enough – shared memory zone to blame

One of our big Nginx cache servers was recently upgraded to 70T of storage, which is pretty good storage for a proxy. In a hurry to put the big storage to use, we changed only the “max_size” option of the proxy_cache_path directive! After a week in production, the proxy reached 23T and just stopped growing for no apparent reason! The disk space and the Nginx max_size were fine – 75T total space and 70T for the proxy cache – but Nginx had not added any more cached data for two days after reaching 23T of occupied space, which was impossible, because all files were kept for 5 years and around 200G of new files were generated per day. There were no errors in the logs, and we even use the “virtual host traffic status module” – live status information such as used space and more for the Nginx proxy cache – but still no clue why it did not grow above this threshold of 23T! Worse, it began to remove cached objects!

proxy_cache_path /mnt/cache levels=1:2 keys_zone=STATIC:900m inactive=42600h max_size=70000g;

It turned out we had exhausted the shared memory zone (keys_zone) limit for the cache zone, and the Nginx cache just stopped growing.

According to the Nginx manual, “one megabyte zone can store about 8 thousand keys”. Apparently, after 23T of files we had passed 7 200 000 keys (900M * 8000 keys per megabyte) and exhausted the limit we configured in the proxy_cache_path line!

The solution is really simple: just increase the size of the shared memory zone (keys_zone) for the cache zone.

proxy_cache_path /mnt/cache levels=1:2 keys_zone=STATIC:4000m inactive=42600h max_size=70000g;

In the past, with a smaller cache (15T), 900 Mbytes were enough for the cache’s shared memory. Now we set it to 4000 Mbytes to be able to store approximately 32 000 000 keys. We had 23T of files with 900M of shared memory for keys (for our setup – yours may differ a lot!), so setting it to 4000M, which is more than 4 times bigger than before, will probably be enough for the remaining free storage to be used to its full extent.
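As a quick sanity check, here is a rough back-of-the-envelope estimate of the required keys_zone size. It is only a sketch: it assumes the documented ~8000 keys per megabyte and an average cached object size of about 3.5 MB (23T divided by the roughly 6.9 million used inodes shown further below); your average object size will be different:

[root@srv ~]# echo "files = 70000 * 1024 / 3.5; files / 8000" | bc
2560

So roughly 2560M would cover 70T of such files in theory, and 4000M leaves a comfortable safety margin.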
Be careful: this operation will trigger the “Nginx cache loader” to load the cache index and may produce heavy IO while it runs!

Nginx shared memory zone size

Nginx workers use shared mappings (mmap), which are different from SYSV and POSIX shared memory (so you cannot use the ipcs tool to check for the shared memory). Instead, you should check how much memory the processes are currently using. Here is how to get the size of the shared memory zone occupied by the Nginx processes – as you can see, each Nginx worker is around 900M in the “RSS” (Resident Set Size) column:

[root@srv ~]# ps -o rss,pid,comm,user,cmd -C nginx
  RSS   PID COMMAND         USER     CMD
904888 3979 nginx           nginx    nginx: worker process
905116 3980 nginx           nginx    nginx: worker process
904828 3981 nginx           nginx    nginx: worker process
905176 3982 nginx           nginx    nginx: worker process
905196 3983 nginx           nginx    nginx: worker process
905008 3984 nginx           nginx    nginx: worker process
904908 3985 nginx           nginx    nginx: worker process
905372 3986 nginx           nginx    nginx: worker process
905088 3987 nginx           nginx    nginx: worker process
902688 3988 nginx           nginx    nginx: worker process
904932 3989 nginx           nginx    nginx: worker process
905032 3990 nginx           nginx    nginx: worker process
26452  3991 nginx           nginx    nginx: cache manager process
33928  8148 nginx           root     nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf

For a single Nginx process:

[root@srv ~]# cat /proc/3981/status |grep RssShmem
RssShmem:         894240 kB

You can check the used inodes of your file system with df to get approximately how many cached files you have:

[root@srv ~]# df -i
Filesystem         Inodes   IUsed      IFree IUse% Mounted on
devtmpfs         16452656     715   16451941   1% /dev
tmpfs            16455999       1   16455998   1% /dev/shm
tmpfs            16455999    1153   16454846   1% /run
tmpfs            16455999      17   16455982   1% /sys/fs/cgroup
/dev/md1          2076704   39687    2037017   2% /
/dev/md3       1214685184 6897020 1207788164   1% /mnt/cache
tmpfs            16455999       5   16455994   1% /run/user/0

The used inodes were around 6 897 020 and had not been growing for days. This number is very close to the maximum number of keys (about 7 200 000) that a 900M key shared memory zone may store!
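To compare the two numbers on the box itself, a small one-liner like the following can be used (a sketch assuming the cache lives on /mnt/cache and the keys_zone is 900m, as in the examples above; df needs GNU coreutils for --output):

[root@srv ~]# df -i --output=iused /mnt/cache | tail -n 1
6897020
[root@srv ~]# echo "900 * 8000" | bc
7200000

Once the used inodes approach the key capacity of the zone, the cache stops growing and the cache manager starts evicting objects.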

Two days after changing the key shared memory zone limit to 4000 Mbytes:

proxy_cache_path /mnt/cache levels=1:2 keys_zone=STATIC:4000m inactive=42600h max_size=70000g;

The Nginx workers passed 900 Mbytes RSS (Resident Set Size) and reached 1 Gbyte. The occupied cache size grew by 1T and continued to grow.

[root@srv ~]# ps -o rss,pid,comm,user,cmd -C nginx
  RSS   PID COMMAND         USER     CMD
52256  8148 nginx           root     nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
1005624 16899 nginx         nginx    nginx: worker process
1005948 16900 nginx         nginx    nginx: worker process
1005936 16901 nginx         nginx    nginx: worker process
1005912 16902 nginx         nginx    nginx: worker process
1005832 16903 nginx         nginx    nginx: worker process
1005836 16904 nginx         nginx    nginx: worker process
1005868 16905 nginx         nginx    nginx: worker process
1005932 16906 nginx         nginx    nginx: worker process
1005796 16907 nginx         nginx    nginx: worker process
1005980 16908 nginx         nginx    nginx: worker process
1005848 16909 nginx         nginx    nginx: worker process
1005888 16910 nginx         nginx    nginx: worker process
26328 16911 nginx           nginx    nginx: cache manager process

The used inodes also increased to 7 484 291, which means the cache added around 600 000 new files.

[root@srv ~]# df -i
Filesystem         Inodes   IUsed      IFree IUse% Mounted on
devtmpfs         16452656     715   16451941    1% /dev
tmpfs            16455999       1   16455998    1% /dev/shm
tmpfs            16455999    1153   16454846    1% /run
tmpfs            16455999      17   16455982    1% /sys/fs/cgroup
/dev/md1          2076704   39690    2037014    2% /
/dev/md3       1214685184 7484582 1207200602    1% /mnt/cache
tmpfs            16455999       5   16455994    1% /run/user/0

how to effectively disable the deletion (purging) of files from nginx proxy_cache (the nginx cache manager process)

At peak times, deleting files can kill your server, and traffic can easily drop to a fraction of its normal level if the nginx cache manager starts deleting files!

The server runs perfectly normally, but suddenly it gets loaded and all nginx processes are in the D (“Disk sleep”) state.

What could it be? What is going on with your proxy server?

Probably the cache is full!

Unfortunately, there is no way to check live how full the cache is – only an upgrade or restart of the nginx process will trigger the nginx cache loader to walk all the cache files and write the cache size to the error log when it finishes – but be careful, the cache loading is also an IO-intensive operation (it stats all the cache files, and they could be millions of images).
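If you can tolerate the extra IO, a plain du over the cache directory (the /mnt/cache path from the examples above) gives a rough figure for the occupied space – run it during off-peak hours, since it also stats every cached file:

[root@srv ~]# du -sh /mnt/cache

Comparing its output with the configured max_size tells you approximately how close the cache is to being full.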

If you are sure the cache manager is to blame for the IO load of your server (probably verified using the method below – Check whether nginx cache manager is deleting files at the moment), you can stop it almost immediately!

Just increase the nginx cache size drastically – add a zero to the maximum cache size (max_size).

Of course, you should have enough free space until you resolve the problem – for example with more servers, manual deletion during off-peak hours, tuning of the cache deletion, or any other solution….
Search for something like

        proxy_cache_path /mnt/cache levels=1:2 keys_zone=CACHESTATICS:900m inactive=710h max_size=400g

And add a zero to the max_size number, like:

        proxy_cache_path /mnt/cache levels=1:2 keys_zone=CACHESTATICS:900m inactive=710h max_size=4000g

The max size will increase from 400G to 4000G (4T)!
This will effectively stop the file deletion, and the nginx cache manager will sleep for a long time before it is invoked again to delete files. This could be a life-saving operation for your server at peak!
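After editing the configuration, a test and a graceful reload are enough to apply the new max_size – a minimal sketch, assuming the configuration file path from the ps output above:

[root@srv ~]# nginx -t -c /etc/nginx/nginx.conf
[root@srv ~]# nginx -s reload

A graceful reload is enough to apply the change – as described below, the cache manager immediately went to sleep after nginx was reloaded with the bigger max_size.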

Here is a real graph from one of our servers – the cache manager started deleting files from the cache and the traffic dropped by 99%!

SCREENSHOT 1) The nginx cache manager just started to delete files from the cache and this operation just killed our server completely.

You can see almost zero bandwidth! The problem was resolved when we reloaded nginx with a bigger cache max_size value. The nginx cache manager immediately went to sleep and there was no more IO from deleting files. The load of the server returned to normal!


SCREENSHOT 2) The hard drives were saturated and the disk IO time maxed out at 10 ms.

Despite the bigger READ and WRITE IOPS there was 95-99% less traffic.


Then you can tune the values for deleting files from the cache – Tune nginx proxy cache – control the cache manager how to delete cached files.

Check whether nginx cache manager is deleting files at the moment

Here is a tip for webmasters (or system admins) to discover whether an nginx using proxy_cache to cache files is deleting files at the moment! There are situations where you may need to know whether the load of a static media server is caused by the cache manager's deletions or by the read and seek operations of serving the static files. The deletion is a really slow and IO-intensive operation, which can greatly impact the performance and traffic of the server.
Find the PID of the nginx “cache manager process” and strace it:

[root@srv ~]# ps axuf|grep nginx
root     31582  0.0  0.0 2906768 25108 ?       Ss   Feb15   0:01 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx    16008  1.9  1.3 2941188 440224 ?      S    16:39   1:33  \_ nginx: worker process
nginx    16009  1.5  1.2 2941188 398836 ?      S    16:39   1:12  \_ nginx: worker process
nginx    16010  0.5  0.7 2941984 239064 ?      S    16:39   0:26  \_ nginx: worker process
nginx    16011  0.7  0.9 2941984 299356 ?      D    16:39   0:35  \_ nginx: worker process
nginx    16012  1.2  1.1 2941188 389540 ?      D    16:39   1:01  \_ nginx: worker process
nginx    16013  2.3  1.5 2941188 487324 ?      D    16:39   1:55  \_ nginx: worker process
nginx    16014  0.0  0.6 2906772 224004 ?      S    16:39   0:01  \_ nginx: cache manager process
[root@srv ~]# strace -f -p 16014
strace: Process 16014 attached
gettid()                                = 16014
write(31, "2019/02/25 18:00:31 [info] 16014"..., 89) = 89
epoll_wait(36, [], 512, 5406)           = 0
unlink("/mnt/cache/0/39/c8ccbbc06d16debb1c8d58ceb6f99390") = 0
unlink("/mnt/cache/0/78/118924d7bf70e20fa8f790c6f9e7c780") = 0
unlink("/mnt/cache/3/ce/fab074cc670e6a80114dcbc398a63ce3") = 0
unlink("/mnt/cache/5/48/0b4e162dd7be8244815721fb7d68e485") = 0
unlink("/mnt/cache/5/56/e5eb4b38c7c8d209d0aabaf79ac02565") = 0
unlink("/mnt/cache/e/c6/207b432fa77375e4eefcaf52db250c6e") = 0
unlink("/mnt/cache/4/6d/ac0db27a03dabc79d869068db1b516d4") = 0
unlink("/mnt/cache/9/e8/91625c6e60de8e5425c4135c7dfb2e89") = 0
unlink("/mnt/cache/b/3c/f3c53000cf0cb20d55d8c09df8a733cb") = 0
unlink("/mnt/cache/f/f7/6f06423cd411b45816969fe020903f7f") = 0
unlink("/mnt/cache/f/50/c9b8ab72821a6e9bcb9c8d4b790dc50f") = 0
unlink("/mnt/cache/6/1f/74b0f1fdf1ac30db6af7793dc15671f6") = 0
unlink("/mnt/cache/0/83/caf199c1b99d438f96caec71bf2ea830") = 0
unlink("/mnt/cache/4/3d/c90f8fbbba4aaf407e386641dc2203d4") = 0
unlink("/mnt/cache/4/ad/d23cf8598020141b2bcec46d2b5cbad4") = 0
unlink("/mnt/cache/d/47/05973bc310503f36c67b7c1c24c8247d") = 0
unlink("/mnt/cache/f/11/e4fcbde8533d89105ab41f22c55e211f") = 0
unlink("/mnt/cache/2/06/29066a58e4116d24266026b4ed1e3062") = 0
epoll_wait(32, [], 512, 50)             = 0
unlink("/mnt/cache/4/6b/9a104ebdf70d00137a88d4584b2bb6b4") = 0
unlink("/mnt/cache/e/95/6d176447f57f21769d86a8f0b2a8b95e") = 0
unlink("/mnt/cache/b/b2/2f6f51163c65ae1fc06a913d6de1ab2b") = 0
unlink("/mnt/cache/a/24/2b058045a23b69de7a4442c9e6fce24a") = 0
unlink("/mnt/cache/7/60/00833e0b236ca8472f5be8227d645607") = 0
unlink("/mnt/cache/a/08/bf00eea300eff97dc4fffa61daaca08a") = 0
unlink("/mnt/cache/2/48/a291d8aca2b6f4f9471686eabe9b2482") = 0
unlink("/mnt/cache/0/e3/2d631adbc3bfdf8e44a51fa5453eee30") = 0
unlink("/mnt/cache/1/3b/08eef7c86c5ece9b5279b304dd86e3b1") = 0
unlink("/mnt/cache/b/a4/03213e4a8a1e8fb17ae698e54e70fa4b") = 0
unlink("/mnt/cache/b/a3/77f1b11811a9cda0ae93c498769f7a3b") = 0
unlink("/mnt/cache/4/01/1d50fac60681ae3263c8875775d20014") = 0
unlink("/mnt/cache/c/94/e71b96cbc65b248bd8e4540cbd69294c") = 0
unlink("/mnt/cache/1/59/99ec58e865b97e217835dd84f5f48591") = 0
unlink("/mnt/cache/4/b8/6a64825ce555b8f2440f051a7f7bcb84") = 0
unlink("/mnt/cache/7/51/fe2acbb895427ed8e406ce7e79d61517") = 0
.....
.....

You can tune the removal of files from the cache with the manager_files, manager_threshold and manager_sleep parameters of proxy_cache_path.
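For illustration only, here is how these parameters fit into the directive – the values below are placeholders to show the syntax, not recommendations; pick values matching your IO capacity:

        proxy_cache_path /mnt/cache levels=1:2 keys_zone=CACHESTATICS:900m inactive=710h max_size=4000g manager_files=200 manager_threshold=400ms manager_sleep=200ms

manager_files limits how many files are deleted in one iteration, manager_threshold limits how long one iteration may run, and manager_sleep is the pause between iterations.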
If you came here searching for information on this topic, you should probably check out these articles, too: how to effectively disable the deletion (purging) of files from nginx proxy_cache (the nginx cache manager process) and Tune nginx proxy cache – control the cache manager how to delete cached files