ALTER TABLE error 1215 (HY000): Cannot add foreign key constraint

Adding a foreign key may result in the following error:

mysql> ALTER TABLE `table1` ADD CONSTRAINT FK_table2_id FOREIGN KEY (`table2_id`) REFERENCES table2(`ID`);
ERROR 1215 (HY000): Cannot add foreign key constraint

This error can occur in a couple of cases and it is not very informative, but basically it says there is some kind of compatibility issue between table1.table2_id and table2.ID. The three most common problems, provided the tables and columns exist, are:

  1. One of the two tables is not InnoDB. Check with:
    SHOW CREATE TABLE `table1`;
    
  2. There is a value (or values) in table1.table2_id, which does not exist in table2.ID.
    SELECT `table1`.`table2_id`, `table2`.`ID` FROM `table1` LEFT JOIN `table2` ON `table1`.`table2_id`=`table2`.`ID`;
    

    Execute the left join to check whether there are rows with NULL in table2.ID: every such NULL marks a table1.table2_id value missing from table2. Either insert records with the missing IDs in table2.ID, or stop this kind of check temporarily with:

    SET FOREIGN_KEY_CHECKS=0;
    

    Change it back to 1 after the ALTER clause.

  3. The type or the attributes of the columns are different. An attribute such as UNSIGNED may cause a problem, too!
    ALTER TABLE `table1` CHANGE `table2_id` `table2_id` INT(11) UNSIGNED NOT NULL; 
    

    In this case, the column base types are the same, but the attributes differ and the MySQL server throws the error! Change the signed int to unsigned with the above command. Once table1.table2_id and table2.ID have the same type and the same attributes, the ALTER command will execute successfully.
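To compare the two column definitions quickly, the information_schema can be queried. A minimal sketch, assuming the example tables live in a database named mydb (substitute the real schema name):

mysql mydb -e "SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, IS_NULLABLE
  FROM information_schema.COLUMNS
  WHERE TABLE_SCHEMA = 'mydb'
    AND ((TABLE_NAME = 'table1' AND COLUMN_NAME = 'table2_id')
      OR (TABLE_NAME = 'table2' AND COLUMN_NAME = 'ID'));"

COLUMN_TYPE shows the full definition, for example int(10) unsigned, so an attribute mismatch like the one above stands out immediately.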

Here is the initial structure of the tables above:

CREATE TABLE `table1` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `table2_id` int(11) NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=220 DEFAULT CHARSET=utf8

CREATE TABLE `table2` (
  `ID` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `col1` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=642 DEFAULT CHARSET=utf8 

And after a successful ALTER statement, table1 will look like this:

ALTER TABLE `table1` ADD CONSTRAINT `FK_table2_id` FOREIGN KEY (`table2_id`) REFERENCES `table2`(`ID`);
Query OK, 216 rows affected (0.92 sec)
Records: 216  Duplicates: 0  Warnings: 0

CREATE TABLE `table1` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `table2_id` int(11) NOT NULL,
  PRIMARY KEY (`ID`),
  KEY `FK_table2_id` (`table2_id`),
  CONSTRAINT `FK_table2_id` FOREIGN KEY (`table2_id`) REFERENCES `table2` (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=220 DEFAULT CHARSET=utf8

Gentoo emerge virtualbox – Mesa / GLU: Mesa not found or Mesa headers not found

Emerging the package app-emulation/virtualbox fails with the following error:

Checking for Mesa / GLU: 
  Mesa not found at -L/usr/X11R6/lib -L/usr/X11R6/lib64 -L/usr/local/lib -lXext -lX11 -lGL -I/usr/local/include or Mesa headers not found
  Check the file /var/tmp/portage/app-emulation/virtualbox-6.1.18/work/VirtualBox-6.1.18/configure.log for detailed error information.
Check /var/tmp/portage/app-emulation/virtualbox-6.1.18/work/VirtualBox-6.1.18/configure.log for details
 * ERROR: app-emulation/virtualbox-6.1.18::gentoo failed (configure phase):
 *   (no error message)
 * 
 * Call stack:
 *     ebuild.sh, line  125:  Called src_configure
 *   environment, line 5504:  Called doecho './configure' '--with-gcc=x86_64-pc-linux-gnu-gcc' '--with-g++=x86_64-pc-linux-gnu-g++' '--disable-dbus' '--disable-kmods' '--disable-alsa' '--disable-docs' '--disable-devmapper' '--disable-pulse' '--disable-python' '--enable-webservice' '--enable-vnc'
 *   environment, line 1538:  Called die
 * The specific snippet of code:
 *       "$@" || die

The configure script reports that Mesa is missing, but the package media-libs/mesa is installed, and reinstalling it does not fix the problem.
Further investigation in the logs, by checking configure.log, reveals the real problem:

srv ~ # tail -n 16 /var/tmp/portage/app-emulation/virtualbox-6.1.18/work/VirtualBox-6.1.18/configure.log
***** Checking Mesa / GLU *****
compiling the following source file:
#include <cstdio>
#include <X11/Xlib.h>
#include <GL/glx.h>
#include <GL/glu.h>
extern "C" int main(void)
{
  return 0;
}
using the following command line:
x86_64-pc-linux-gnu-g++  -fPIC -g -O -Wall -o /var/tmp/portage/app-emulation/virtualbox-6.1.18/work/VirtualBox-6.1.18/.tmp_out /var/tmp/portage/app-emulation/virtualbox-6.1.18/work/VirtualBox-6.1.18/.tmp_src.cc "-L/usr/X11R6/lib -L/usr/X11R6/lib64 -L/usr/local/lib -lXext -lX11 -lGL -I/usr/local/include"
/var/tmp/portage/app-emulation/virtualbox-6.1.18/work/VirtualBox-6.1.18/.tmp_src.cc:4:10: fatal error: GL/glu.h: No such file or directory
    4 | #include <GL/glu.h>
      |          ^~~~~~~~~~
compilation terminated.

The GLU part of Mesa is missing. In Gentoo, GLU (https://gitlab.freedesktop.org/mesa/glu) is not included in media-libs/mesa; it is a separate package, media-libs/glu.

The solution is to emerge media-libs/glu first and then app-emulation/virtualbox:

emerge -v media-libs/glu

Other Linux distributions may include GLU in their main mesa package.
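Whichever package provides it, the GLU headers and the pkg-config entry can be confirmed before re-running the build. A quick check (a sketch; the glu package installs glu.pc on typical setups):

ls -l /usr/include/GL/glu.h
pkg-config --modversion glu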

The conclusion here is to always check configure.log, because it reports the exact error; do not trust only the generic output of the configure script.

Stopping the glusterfs volume releases processes hung in disk sleep

A quick tip for GlusterFS volumes. There are multiple possible reasons for a Linux process to hang in “Disk Sleep” state, which even kill -9 cannot interrupt:

  • a bug in GlusterFS
  • bad options turned on while the volume is online
  • another device relying on a GlusterFS volume, which has become unavailable.

Here is an example dmesg log of such a hang:

[17294588.184470] INFO: task gdisk:12505 blocked for more than 120 seconds.
[17294588.184538] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[17294588.184628] gdisk           D ffff8ce01fb9acc0     0 12505  26866 0x00000080
[17294588.184780] Call Trace:
[17294588.184844]  [<ffffffffbaed3d81>] ? __wake_up_common_lock+0x91/0xc0
[17294588.184910]  [<ffffffffbb585da9>] schedule+0x29/0x70
[17294588.184974]  [<ffffffffbb5838b1>] schedule_timeout+0x221/0x2d0
[17294588.185037]  [<ffffffffbaed3dc3>] ? __wake_up+0x13/0x20
[17294588.185102]  [<ffffffffc0a05d2e>] ? loop_make_request+0x12e/0x210 [loop]
[17294588.185169]  [<ffffffffbaf06d32>] ? ktime_get_ts64+0x52/0xf0
[17294588.185232]  [<ffffffffbb58549d>] io_schedule_timeout+0xad/0x130
[17294588.185304]  [<ffffffffbb5863dd>] wait_for_completion_io+0xfd/0x140
[17294588.185369]  [<ffffffffbaedb990>] ? wake_up_state+0x20/0x20
[17294588.185468]  [<ffffffffbb157e64>] blkdev_issue_flush+0xb4/0x110
[17294588.185533]  [<ffffffffbb08d335>] blkdev_fsync+0x35/0x50
[17294588.185598]  [<ffffffffbb082f57>] do_fsync+0x67/0xb0
[17294588.185671]  [<ffffffffbb083240>] SyS_fsync+0x10/0x20
[17294588.185734]  [<ffffffffbb592ed2>] system_call_fastpath+0x25/0x2a
[17294708.187598] INFO: task gdisk:12505 blocked for more than 120 seconds.
[17294708.187664] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[17294708.187753] gdisk           D ffff8ce01fb9acc0     0 12505  26866 0x00000080
[17294708.187905] Call Trace:
[17294708.187968]  [<ffffffffbaed3d81>] ? __wake_up_common_lock+0x91/0xc0
[17294708.188033]  [<ffffffffbb585da9>] schedule+0x29/0x70
[17294708.188096]  [<ffffffffbb5838b1>] schedule_timeout+0x221/0x2d0
[17294708.188159]  [<ffffffffbaed3dc3>] ? __wake_up+0x13/0x20
[17294708.188223]  [<ffffffffc0a05d2e>] ? loop_make_request+0x12e/0x210 [loop]
[17294708.188289]  [<ffffffffbaf06d32>] ? ktime_get_ts64+0x52/0xf0
[17294708.188352]  [<ffffffffbb58549d>] io_schedule_timeout+0xad/0x130
[17294708.188416]  [<ffffffffbb5863dd>] wait_for_completion_io+0xfd/0x140
[17294708.188480]  [<ffffffffbaedb990>] ? wake_up_state+0x20/0x20
[17294708.188545]  [<ffffffffbb157e64>] blkdev_issue_flush+0xb4/0x110
[17294708.188624]  [<ffffffffbb08d335>] blkdev_fsync+0x35/0x50
[17294708.188690]  [<ffffffffbb082f57>] do_fsync+0x67/0xb0
[17294708.188754]  [<ffffffffbb083240>] SyS_fsync+0x10/0x20
[17294708.188828]  [<ffffffffbb592ed2>] system_call_fastpath+0x25/0x2a

The above example of a dmesg log shows the gdisk process stuck in “Disk Sleep” state because of a loop device backed by a file on an unavailable GlusterFS volume! kill -9 won’t help; the process will remain in this bad state, and even a server restart would be difficult to perform!
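To list all processes currently stuck in uninterruptible sleep, something like the following can be used (a minimal sketch with standard procps tools; the wchan column may hint where in the kernel each process is blocked):

ps axo pid,stat,wchan:30,comm | awk 'NR==1 || $2 ~ /^D/'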

[root@srv1 ~]# gluster volume stop VOL2 
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: VOL2: success
[root@srv1 ~]# gluster volume start VOL2 
volume start: VOL2: success

The solution is to stop the GlusterFS volume, which releases all processes blocked on such bad devices. After the stop command is issued, the blocked processes either carry on executing or finish. There is no problem starting the GlusterFS volume again immediately after the stop!
NOTE: executing the STOP command affects all servers using this volume. The volume becomes inaccessible for all of them!

Repairing damaged backup GPT with gdisk

Problems with network shared storage could lead to a damaged file system or even a damaged GPT, and gdisk may help in such a case.
Here is the nasty error:

[root@srv1 ~]# kpartx /dev/loop0
Alternate GPT is invalid, using primary GPT.
loop0p1 : 0 83881984 /dev/loop0 2048
[root@srv1 ~]# parted /dev/loop0
GNU Parted 3.1
Using /dev/loop0
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Error: The backup GPT table is corrupt, but the primary appears OK, so that will be used.
OK/Cancel? OK                                                             
Model: Loopback device (loopback)
Disk /dev/loop0: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  42.9GB  42.9GB               primary

(parted)

So parted reports the backup GPT is damaged, but how can it be fixed? The solution is to use gdisk and its write (“w”) command. gdisk also shows the exact error in the GPT table with the “v” (verify) command:

[root@srv1 ~]# gdisk 
GPT fdisk (gdisk) version 0.8.10

Type device filename, or press <Enter> to exit: /dev/loop0
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.

Warning! One or more CRCs don't match. You should repair the disk!

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: damaged

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************

Command (? for help): p
Disk /dev/loop0: 83886080 sectors, 40.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 7EDF123B-FBC4-4C09-B636-922BD165F862
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 83886046
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048        83884031   40.0 GiB    0700  primary

Command (? for help): v

Caution: The CRC for the backup partition table is invalid. This table may
be corrupt. This program will automatically create a new backup partition
table when you save your partitions.

Identified 1 problems!

Command (? for help): p
Disk /dev/loop0: 83886080 sectors, 40.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 7EDF123B-FBC4-4C09-B636-922BD165F862
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 83886046
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048        83884031   40.0 GiB    0700  primary

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/loop0.

Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

And the backup GPT on this loop device is fixed. Executing parted again reports no problems:

[root@srv1 ~]# parted /dev/loop0 print
Model: Loopback device (loopback)
Disk /dev/loop0: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  42.9GB  42.9GB               primary

Verify also reports no errors. More options are available:

[root@srv1 ~]# gdisk /dev/loop0
GPT fdisk (gdisk) version 0.8.10

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): h
b  back up GPT data to a file
c  change a partition's name
d  delete a partition
i  show detailed information on a partition
l  list known partition types
n  add a new partition
o  create a new empty GUID partition table (GPT)
p  print the partition table
q  quit without saving changes
r  recovery and transformation options (experts only)
s  sort partitions
t  change a partition's type code
v  verify disk
w  write table to disk and exit
x  extra functionality (experts only)
?  print this menu

Command (? for help): v

No problems found. 4029 free sectors (2.0 MiB) available in 2
segments, the largest of which is 2015 (1007.5 KiB) in size.

Command (? for help): q
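The same verification can be run non-interactively with sgdisk, the scriptable counterpart shipped with gdisk (a sketch; it prints the same kind of problem report as the interactive “v” command):

sgdisk --verify /dev/loop0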

Force losetup detach of local file after it became unavailable

Mounting a file as a loop device is a simple enough operation! But what if the file just disappears because the network storage has become unreachable?

In our case, the shared network storage became unavailable and the ext4 file system on the loop device went read-only! Unfortunately, after the shared storage was reset and mounted in the same place, the loop device (/dev/loop0) still kept the old file descriptor to the unavailable file, and losetup could not detach the device at all.

root@srv ~# losetup -d /dev/loop0
root@srv ~# losetup -l
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0         0      0         0  0 /mnt/storage4/servers/test_raw_image.img
[root@lsrv1 ~]# kpartx -df /dev/loop0
read error, sector 0
read error, sector 1
read error, sector 29

The above commands could not detach the device. Unfortunately, losetup does not have a force detach option, so the server ended up with a blocked loop0 device pointing to an unavailable file. kpartx does not work, either.

The solution is to use dmsetup!

root@srv ~# dmsetup remove /dev/mapper/loop0p1
root@srv ~# losetup -l
root@srv ~#

And there is no loop device any more. The stale loop device has been removed successfully! Now loop0 can be used for another device, and in many cases this helps to umount the file system!
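Before removing anything, the device-mapper entries created by kpartx can be listed to find the exact name to remove (a sketch using standard dmsetup commands):

dmsetup ls
dmsetup info loop0p1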

libelf was not found in the pkg-config search path

Building from source under CentOS, the user may stumble upon compilation errors, most of which are caused by missing -devel packages. Here is such an example, where the name of the missing library is not so easy to find:

[/tmp/netdata-libbpf-El77Ld/libbpf-0.0.9_netdata-1/src]# env CFLAGS=-fPIC CXXFLAGS= LDFLAGS= BUILD_STATIC_ONLY=y OBJDIR=build DESTDIR=.. make install 
Package libelf was not found in the pkg-config search path.
Perhaps you should add the directory containing `libelf.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libelf' found
mkdir -p build/staticobjs
cc -I. -I../include -I../include/uapi -DCOMPAT_NEED_REALLOCARRAY -fPIC -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64   -c bpf.c -o build/staticobjs/bpf.o
cc -I. -I../include -I../include/uapi -DCOMPAT_NEED_REALLOCARRAY -fPIC -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64   -c btf.c -o build/staticobjs/btf.o
btf.c:17:18: fatal error: gelf.h: No such file or directory
 #include <gelf.h>
                  ^
compilation terminated.
make: *** [build/staticobjs/btf.o] Error 1
 FAILED   

The missing development package is named elfutils-libelf-devel. Installing the package with yum or dnf will resolve the above error:

yum install -y elfutils-libelf-devel

Or for CentOS 8 and newer Fedora versions:

dnf install -y elfutils-libelf-devel
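After installing the package, pkg-config should resolve libelf. A quick check (a sketch, assuming the .pc file landed in the default pkg-config search path):

pkg-config --modversion libelf
pkg-config --cflags --libs libelf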

VBoxManage: error: Failed to initialize COM! NS_ERROR_FILE_TARGET_DOES_NOT_EXIST (0x80520006)

What an error! VirtualBox stopped loading at all! This is the error:

myuser@srv ~ $ VBoxManage list vms
VBoxManage: error: Failed to initialize COM! (hrc=NS_ERROR_FILE_TARGET_DOES_NOT_EXIST)

and here with the GUI, which offers a little more information, including the error ID:

[screenshot: VBoxManage: error: Failed to initialize COM! NS_ERROR_FILE_TARGET_DOES_NOT_EXIST (0x80520006)]

This error occurs despite the successfully loaded VirtualBox kernel modules!

The two errors are the same and hint that a file (or files?) is missing. In our case, it appeared after upgrading from virtualbox-bin (a binary package) to virtualbox (a package, which actually builds VirtualBox on the system) under Gentoo, but apparently this error could happen to anyone who builds the VirtualBox bundle themselves.

The solution is simple: just DO NOT CHANGE the installation path of the VirtualBox software, which by default is /opt/VirtualBox.

The path is hardcoded in the sources and cannot be changed! As of version 6.1.16, we could not find a build option to change it! Of course, symlinking /opt/VirtualBox elsewhere would do the trick to move the installation physically away from /opt, but the path /opt/VirtualBox itself must remain valid.

In Gentoo, the emerge default installation goes to “/usr/lib64/virtualbox”, so the maintainer of the package changed the path and VirtualBox stopped working! A user who builds and installs it in any other location should link /opt/VirtualBox to the installation directory. For example, in Gentoo, the fix is:

root@srv ~ $ ln -s /usr/lib64/virtualbox /opt/VirtualBox
root@srv ~ $ exit
myuser@srv ~ $ VBoxManage list vms
"gentoo_raw" {55346caf-04db-4d88-831a-111111111111}
"diskless" {44346caf-c952-5555-b8a3-111111111111}
"diskless-linux" {44346caf-424f-487f-ae8d-111111111111}
"centos7-netinstall" {44346caf-4e86-441b-8d1e-111111111111}

A simple link brings VirtualBox back to life.

Bonus

Probably, there is a configuration file “xpti.dat” in two or more locations, ~/.config/VirtualBox/xpti.dat and ~/.VirtualBox/xpti.dat, which is generated on every start of VirtualBox. In the file there are configuration lines like:
Keep on reading!

gitlab in podman cannot create unix sockets in glusterfs because of SELinux

Installing gitlab-ee (or gitlab-ce) under CentOS 7 with SELinux enabled (i.e. enforcing mode) left the container endlessly looping, restarting the installation process! There were multiple errors for missing sockets in the podman logs of the gitlab container. Here are some of the errors:
Missing PostgreSQL unix socket in “/var/opt/gitlab/postgresql”:

Recipe: gitlab::database_migrations
  * bash[migrate gitlab-rails database] action run
    [execute] rake aborted!
              PG::ConnectionBad: could not connect to server: No such file or directory
                Is the server running locally and accepting
                connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
              /opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:53:in `block (3 levels) in <top (required)>'
              /opt/gitlab/embedded/bin/bundle:23:in `load'
              /opt/gitlab/embedded/bin/bundle:23:in `<main>'
              Tasks: TOP => gitlab:db:configure
              (See full trace by running task with --trace)
    
    
    Error executing action `run` on resource 'bash[migrate gitlab-rails database]'
.....
.....
Running handlers:
There was an error running gitlab-ctl reconfigure:

bash[migrate gitlab-rails database] (gitlab::database_migrations line 55) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of "bash"  "/tmp/chef-script20200915-35-lemic5" ----
STDOUT: rake aborted!
PG::ConnectionBad: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:53:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Tasks: TOP => gitlab:db:configure
(See full trace by running task with --trace)
STDERR: 
---- End output of "bash"  "/tmp/chef-script20200915-35-lemic5" ----
Ran "bash"  "/tmp/chef-script20200915-35-lemic5" returned 1

Missing Redis unix socket in “/var/opt/gitlab/redis”:

Running handlers:
There was an error running gitlab-ctl reconfigure:

redis_service[redis] (redis::enable line 19) had an error: RuntimeError: ruby_block[warn pending redis restart] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/redis/resources/service.rb line 65) had an error: RuntimeError: Execution of the command `/opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket INFO` failed with a non-zero exit code (1)
stdout: 
stderr: Could not connect to Redis at /var/opt/gitlab/redis/redis.socket: No such file or directory

It should be noted that the /var/opt/gitlab directory has been mapped to /mnt/storage/podman/gitlab/data. GlusterFS is used for /mnt/storage, so the gitlab files reside on a GlusterFS volume.

ERROR 1) Cannot create unix socket.

Checking /var/log/audit/audit.log revealed the problem immediately.
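The SELinux denials can be pulled out of the audit log directly with ausearch (a sketch, assuming auditd and the audit userspace tools are installed):

ausearch -m avc -ts recent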
Keep on reading!

Copy files with read errors successfully – skipping only errors (i.e. bad sectors)

Sometimes disks develop errors, or an SSD has a bad NAND cell. Saving the whole hard disk may not be needed when only a specific file or two matter, yet those files cannot be copied by cp or rsync because of an “Unrecovered read error”.
Furthermore, an SSD reallocates bad cells only when the cell is written, which may not happen for years, while reads may hit it every day. Reading from a sector with bad NAND cells results in slow IO (multiple read commands are executed before giving up). Copying the file to a new place missing only 512 bytes may not harm the data, but this is difficult to do with the generic copy tools.
This article shows how to save single files from a mounted ext4 file system with bad sectors using the ddrescue tool (https://www.gnu.org/software/ddrescue/). In fact, ddrescue can save single files or whole devices.

STEP 1) Install ddrescue.

Installing ddrescue is pretty easy. The tool is included in almost all Linux distributions and it doesn’t have many dependencies. Note there is another tool named dd_rescue, which is different from this one; just follow the link above for the tool used here.
CentOS 7/8 or Fedora:

yum install -y ddrescue

Ubuntu (all versions from the last 10 years, where the package is named gddrescue):

apt install -y gddrescue

Gentoo:

emerge -v ddrescue

STEP 2) Rescuing a single file with read errors because of bad sectors in a mounted file system.

[root@srv Snapshots]# ddrescue -v \{9f02ae0a-6dae-4729-b6a6-ec3f0550f294\}.vdi test2.vdi
GNU ddrescue 1.25
About to copy 15724 MBytes from '{9f02ae0a-6dae-4729-b6a6-ec3f0550f294}.vdi' to 'test2.vdi'
    Starting positions: infile = 0 B,  outfile = 0 B
    Copy block size: 128 sectors       Initial skip size: 384 sectors
Sector size: 512 Bytes

Press Ctrl-C to interrupt
     ipos:   13495 MB, non-trimmed:        0 B,  current rate:       0 B/s
     opos:   13495 MB, non-scraped:        0 B,  average rate:    162 MB/s
non-tried:        0 B,  bad-sector:     8192 B,    error rate:    4608 B/s
  rescued:   15724 MB,   bad areas:        2,        run time:      1m 36s
pct rescued:   99.99%, read errors:       18,  remaining time:          0s
                              time since last successful read:          0s
Finished                                      
[root@srv Snapshots]# ls -al
total 52602944
drwx------. 2 root root        4096 Jun  2 02:22 .
drwxr-xr-x. 4 root root        4096 Jun  1 14:16 ..
-rw-------. 1 root root   459981735 Nov  8  2018 2018-11-08T15-19-17-776317000Z.sav
-rw-------. 1 root root   566704069 Jun  1 14:16 2020-06-01T11-16-05-735318000Z.sav
-rw-------. 1 root root  8329887744 Jun  1 12:53 {3d30ebea-2e2f-4e33-8088-d3d66f315e2c}.vdi
-rw-------. 1 root root 15724445696 Nov  8  2018 {9f02ae0a-6dae-4729-b6a6-ec3f0550f294}.vdi
-rw-------. 1 root root  4012900352 Jun  1 14:16 {f7e72510-7dce-48fd-b62c-630664ad984f}.vdi
-rw-r--r--. 1 root root 15724445696 Jun  2 02:24 test2.vdi
-rw-------. 1 root root  9051041792 Jun  2 02:19 test.vdi
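For bigger or flakier sources it is worth giving ddrescue a third argument, a mapfile, so an interrupted copy can be resumed and only the remaining bad areas retried (a sketch with hypothetical file names):

ddrescue -v source.vdi rescued.vdi rescue.map
# a second run retries the remaining bad sectors three more times
ddrescue -v -r3 source.vdi rescued.vdi rescue.map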

Here is an animated gif of the ddrescue procedure:

[animated gif: ddrescue – copy files with bad sectors]

Keep on reading!

Data too large, data for [] would be [] which is larger than the limit of

Rsyslog writing to Elasticsearch may trigger an error for some of the records, which then fail to be saved in the backend:

{ ... { "error": { "root_cause": [ { "type": "circuit_breaking_exception", 
"reason": "[parent] Data too large, data for [<http_request>] would be [1008813778\/962mb], which is larger than the limit of [986061209\/940.3mb], 
real usage: [1008812248\/962mb], new bytes reserved: [1530\/1.4kb], usages [request=0\/0b, fielddata=317\/317b, in_flight_requests=1530\/1.4kb, accounting=178301893\/170mb]",
"bytes_wanted": 1008813778, "bytes_limit": 986061209, "durability": "PERMANENT" }], 
"type": "circuit_breaking_exception", "reason": "[parent] Data too large, data for [<http_request>] would be [1008813778\/962mb], which is larger than the limit of [986061209\/940.3mb], 
real usage: [1008812248\/962mb], new bytes reserved: [1530\/1.4kb], usages [request=0\/0b, fielddata=317\/317b, in_flight_requests=1530\/1.4kb, accounting=178301893\/170mb]",
"bytes_wanted": 1008813778, "bytes_limit": 986061209, "durability": "PERMANENT" }, "status": 429 } }

Unfortunately, such writes are not saved in Elasticsearch and the data is lost.

The problem here is that the Java VM has reached its maximum allowed memory, so the Java Virtual Machine should be allowed to use more.

Find the Java VM options file for Elasticsearch, jvm.options. In CentOS 7 the file is located at /etc/elasticsearch/jvm.options; set more memory with the variables “-Xms[SIZE]g -Xmx[SIZE]g”, such as:

.....
-Xms4g
-Xmx4g
.....

This will allow a 4G “maximum size of total heap space” to be used by the Java Virtual Machine. By default, it is 1G (-Xms1g -Xmx1g). It is a good idea to set it to half of the server’s memory. Save and restart the Elasticsearch service as usual:

systemctl restart elasticsearch

The variable should be visible in the command line output of the ps command:

[root@loganalyzer ~]# ps axuf|grep elasticsearch
elastic+   592 10.8 34.4 168638848 5493156 ?   Ssl  00:56   4:23 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60
-Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 
-Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false 
-Dlog4j2.disable.jmx=true -Djava.locale.providers=COMPAT 
-Xms4g -Xmx4g 
-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
-Djava.io.tmpdir=/tmp/elasticsearch-16851535740012150929 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch 
-XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log elasticsearch
-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m 
-XX:MaxDirectMemorySize=2147483648 -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch 
-Des.distribution.flavor=default -Des.distribution.type=rpm -Des.bundled_jdk=true 
-cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
elastic+   690  0.0  0.0  70448  4516 ?        Sl   00:56   0:00  \_ /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
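Alternatively, the effective heap limit can be queried from the running node over the REST API (a sketch, assuming Elasticsearch listens on localhost:9200):

curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.max'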

The environment variable ES_JAVA_OPTS could be used, too:

ES_JAVA_OPTS="-Xms4g -Xmx4g" ./bin/elasticsearch