Stopping the GlusterFS volume releases processes hung in disk sleep

A quick tip for GlusterFS volumes. There are multiple possible reasons for a Linux process to hang in the uninterruptible “Disk Sleep” (D) state, which even kill -9 cannot interrupt:

  • a bug in GlusterFS
  • bad options turned on while the volume is online
  • another device relying on a GlusterFS volume that has become unavailable, as in the dmesg example below.
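
Such stuck processes are easy to spot, because ps reports their state as “D” (uninterruptible sleep). A minimal check, not part of the original session, which lists every task currently in that state:

[root@srv1 ~]# ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'

The WCHAN column hints at the kernel function the process is blocked in. The kernel also logs hung-task warnings for such processes, as in this dmesg example: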
[17294588.184470] INFO: task gdisk:12505 blocked for more than 120 seconds.
[17294588.184538] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[17294588.184628] gdisk           D ffff8ce01fb9acc0     0 12505  26866 0x00000080
[17294588.184780] Call Trace:
[17294588.184844]  [<ffffffffbaed3d81>] ? __wake_up_common_lock+0x91/0xc0
[17294588.184910]  [<ffffffffbb585da9>] schedule+0x29/0x70
[17294588.184974]  [<ffffffffbb5838b1>] schedule_timeout+0x221/0x2d0
[17294588.185037]  [<ffffffffbaed3dc3>] ? __wake_up+0x13/0x20
[17294588.185102]  [<ffffffffc0a05d2e>] ? loop_make_request+0x12e/0x210 [loop]
[17294588.185169]  [<ffffffffbaf06d32>] ? ktime_get_ts64+0x52/0xf0
[17294588.185232]  [<ffffffffbb58549d>] io_schedule_timeout+0xad/0x130
[17294588.185304]  [<ffffffffbb5863dd>] wait_for_completion_io+0xfd/0x140
[17294588.185369]  [<ffffffffbaedb990>] ? wake_up_state+0x20/0x20
[17294588.185468]  [<ffffffffbb157e64>] blkdev_issue_flush+0xb4/0x110
[17294588.185533]  [<ffffffffbb08d335>] blkdev_fsync+0x35/0x50
[17294588.185598]  [<ffffffffbb082f57>] do_fsync+0x67/0xb0
[17294588.185671]  [<ffffffffbb083240>] SyS_fsync+0x10/0x20
[17294588.185734]  [<ffffffffbb592ed2>] system_call_fastpath+0x25/0x2a

The dmesg log above shows the gdisk process stuck in the “Disk Sleep” state because of a loop device backed by a file on an unavailable GlusterFS volume; the kernel repeats the hung-task warning every 120 seconds for as long as the process stays blocked. kill -9 won’t help: the process remains in this bad state, and even a reboot may be difficult to perform.
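
To confirm which file backs the offending loop device, losetup lists all configured loop devices together with their backing files (a quick check added here for illustration; the device name and path are whatever your setup uses):

[root@srv1 ~]# losetup -a

If the backing file resides under the mount point of the unavailable GlusterFS volume, that loop device is the culprit. Once confirmed, stop the volume: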

[root@srv1 ~]# gluster volume stop VOL2 
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: VOL2: success
[root@srv1 ~]# gluster volume start VOL2 
volume start: VOL2: success

The solution is to stop the GlusterFS volume: all processes blocked on bad devices such as the one above will be released. After the stop command is issued, the processes either carry on executing or finish their execution. The GlusterFS volume can be started again immediately after the stop!
NOTE: executing the stop command affects all servers using this volume. The volume becomes inaccessible for all of them!
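
After the restart, a quick sanity check (assuming the same process, PID 12505 from the trace above) confirms that the volume is online again and that the process has left the “D” state:

[root@srv1 ~]# gluster volume status VOL2
[root@srv1 ~]# ps -o pid,stat,cmd -p 12505

gluster volume status reports the state of the bricks, and ps should now show a state of “S” (sleeping) or “R” (running), or nothing at all if the process finished its work after being released.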