There is a MySQL 8 InnoDB Cluster of three servers, and one of the servers crashed because of bad RAM. The same setup is described here – Install and deploy MySQL 8 InnoDB Cluster with 3 nodes under CentOS 8 and MySQL Router for HA. The failed server was restarted without a clean shutdown, and after booting up, the MySQL Cluster node tried to recover automatically, but the recovery process failed and the node left the group of three servers:
2022-05-31T04:00:00.322469Z 24 [ERROR] [MY-011620] [Repl] Plugin group_replication reported: 'Fatal error during the incremental recovery process of Group Replication. The server will leave the group.'
2022-05-31T04:00:00.322489Z 24 [Warning] [MY-011645] [Repl] Plugin group_replication reported: 'Skipping leave operation: concurrent attempt to leave the group is on-going.'
2022-05-31T04:00:00.322500Z 24 [ERROR] [MY-011712] [Repl] Plugin group_replication reported: 'The server was automatically set into read only mode after an error was detected.'
2022-05-31T04:00:03.448475Z 0 [System] [MY-011504] [Repl] Plugin group_replication reported: 'Group membership changed: This member has left the group.'
The recovery process proposed here follows these steps:
- Connect with mysqlsh (MySQL Shell) to a MySQL instance that is currently part of the cluster group. The member that left the group is no longer part of it, though the MySQL Cluster status still shows it in the cluster topology, but with an error.
- Remove the bad instance from the MySQL Cluster with removeInstance
- Add the instance back with addInstance and the recovery process will kick in. The type of the recovery process is chosen automatically if not specified. In this case, the setup chose Incremental state recovery over the (full) clone mode.
- Initiate the cluster rescan operation to recover the group replication and the MySQL Cluster.
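The four steps above can be sketched as one MySQL Shell session (the host names and the clusteradmin user are the ones used in this setup; adapt them to your cluster):

```shell
# Connect to a HEALTHY member of the cluster, not the failed one
mysqlsh clusteradmin@db-cluster-1

# Then, at the MySQL Shell JS prompt:
#   var cluster = dba.getCluster()
#   cluster.removeInstance('db-cluster-3:3306')            // answer 'y' to remove the metadata
#   cluster.addInstance('clusteradmin@db-cluster-3:3306')  // distributed recovery kicks in
#   cluster.rescan()                                       // re-add the recovered member to the metadata
#   cluster.status()                                       // verify all members are ONLINE
```

Each of these steps is shown with its full console output below.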
Summary of the recovery process
- The recovery process was successful.
- The distributed recovery with Incremental state recovery took almost 24 hours for a 200 Mbyte database, which is really strange and really slow. The instance uses ordinary disks, not SSDs, and a 1 Gbps network.
- No need to change or manage the MySQL Router in any of the steps or the recovery stages. It handled the situation from the very beginning by removing the bad instance and then adding it again only after the recovery process had finished successfully.
- MySQL Shell should be connected to a healthy instance that is currently part of the cluster.
In the console output below, all commands and important lines are highlighted.
STEP 1) Remove the bad instance from the cluster.
The status of the cluster with the bad instance.
[root@db-cluster-3 ~]# mysqlsh
MySQL Shell 8.0.28

Copyright (c) 2016, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
MySQL JS > \connect clusteradmin@db-cluster-1
Creating a session to 'clusteradmin@db-cluster-1'
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 39806649 (X protocol)
Server version: 8.0.28 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
MySQL db-cluster-1:33060+ ssl JS > var cluster = dba.getCluster()
MySQL db-cluster-1:33060+ ssl JS > cluster.status()
{
    "clusterName": "mycluster1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "db-cluster-1:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active.",
        "topology": {
            "db-cluster-1:3306": {
                "address": "db-cluster-1:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "db-cluster-2:3306": {
                "address": "db-cluster-2:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "db-cluster-3:3306": {
                "address": "db-cluster-3:3306",
                "instanceErrors": [
                    "ERROR: group_replication has stopped with an error."
                ],
                "memberRole": "SECONDARY",
                "memberState": "ERROR",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "(MISSING)",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "db-cluster-1:3306"
}
Remove the bad instance from the cluster:
[root@db-cluster-3 ~]# mysqlsh
MySQL Shell 8.0.28

Copyright (c) 2016, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
MySQL JS > \connect clusteradmin@db-cluster-1
Creating a session to 'clusteradmin@db-cluster-1'
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 39806650 (X protocol)
Server version: 8.0.28 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
MySQL db-cluster-1:33060+ ssl JS > var cluster = dba.getCluster()
MySQL db-cluster-1:33060+ ssl JS > cluster.removeInstance('db-cluster-3:3306')
ERROR: db-cluster-3:3306 is reachable but has state ERROR
To safely remove it from the cluster, it must be brought back ONLINE. If not possible, use the 'force' option to remove it anyway.
Do you want to continue anyway (only the instance metadata will be removed)? [y/N]: y
The instance will be removed from the InnoDB cluster. Depending on the instance
being the Seed or not, the Metadata session might become invalid. If so, please
start a new session to the Metadata Storage R/W instance.
NOTE: Transaction sync was skipped
* Instance 'db-cluster-3:3306' is attempting to leave the cluster...
The instance 'db-cluster-3:3306' was successfully removed from the cluster.
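As the output above notes, an instance that is not even reachable cannot be removed this way; the documented force option of removeInstance drops only the cluster metadata for it. A sketch:

```shell
# Inside mysqlsh, connected to a healthy member:
#   cluster.removeInstance('db-cluster-3:3306', {force: true})
# Only the metadata is removed; the instance itself must be cleaned up
# separately before it can be added back to the cluster.
```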
MySQL db-cluster-1:33060+ ssl JS > cluster.status()
{
    "clusterName": "mycluster1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "db-cluster-1:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "db-cluster-1:3306": {
                "address": "db-cluster-1:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "db-cluster-2:3306": {
                "address": "db-cluster-2:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "db-cluster-1:3306"
}
STEP 2) Add the instance back to the cluster
Add the same instance back without doing anything to it, such as removing files or resetting the MySQL datadir.
[root@db-cluster-3 ~]# mysqlsh
MySQL Shell 8.0.28

Copyright (c) 2016, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
MySQL JS > \connect clusteradmin@db-cluster-1
Creating a session to 'clusteradmin@db-cluster-1'
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 39806650 (X protocol)
Server version: 8.0.28 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
MySQL db-cluster-1:33060+ ssl JS > var cluster = dba.getCluster()
MySQL db-cluster-1:33060+ ssl JS > cluster.addInstance('clusteradmin@db-cluster-3:3306')

The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'db-cluster-3:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.

Incremental state recovery was selected because it seems to be safely usable.

Validating instance configuration at db-cluster-3:3306...

This instance reports its own address as db-cluster-3:3306

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'db-cluster-3:33061'. Use the localAddress option to override.

A new instance will be added to the InnoDB cluster. Depending on the amount of data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Incremental state recovery is now in progress.

* Waiting for distributed recovery to finish...
NOTE: 'db-cluster-3:3306' is being recovered from 'db-cluster-2:3306'
* Distributed recovery has finished
Cluster.addInstance: db-cluster-1:3306: The client was disconnected by the server because of inactivity. See wait_timeout and interactive_timeout for configuring this behavior. (MYSQLSH 4031)
MySQL db-cluster-1:33060+ ssl JS >
After almost 24 hours, the recovery process finished successfully for a single 200 Mbyte database (with 6 Gbytes of binary logs available). The recovery used the Incremental state recovery mode, which was chosen automatically by the recovery process. The user may force another mode, such as the (full) clone.
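Given how slow the incremental recovery was here, it may be worth forcing the (full) clone mode instead, which copies a physical snapshot from a donor and does not replay the binary logs; the recoveryMethod option of addInstance controls this:

```shell
# Inside mysqlsh, connected to a healthy member:
#   cluster.addInstance('clusteradmin@db-cluster-3:3306', {recoveryMethod: 'clone'})
# 'incremental' may also be set explicitly to skip the interactive choice.
```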
As stated in the MySQL Shell output, the user may press Ctrl+C to leave the console; the recovery process will continue in the background.
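While the recovery runs in the background, its progress can be checked with plain SQL on any healthy member; the performance_schema table below is the standard Group Replication membership view:

```shell
mysql -u clusteradmin -p -e \
  "SELECT MEMBER_HOST, MEMBER_STATE FROM performance_schema.replication_group_members;"
# The joining node is expected to show MEMBER_STATE = 'RECOVERING'
# until the distributed recovery finishes and it turns ONLINE.
```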
STEP 3) Recover the cluster with 3 healthy instances.
Now, just rescan the MySQL Cluster to add the recovered instance:
[root@db-cluster-3 ~]# mysqlsh
MySQL Shell 8.0.28

Copyright (c) 2016, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
MySQL JS > \connect clusteradmin@db-cluster-1
Creating a session to 'clusteradmin@db-cluster-1'
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 39806650 (X protocol)
Server version: 8.0.28 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
MySQL db-cluster-1:33060+ ssl JS > var cluster = dba.getCluster()
MySQL db-cluster-1:33060+ ssl JS > cluster.status()
{
    "clusterName": "mycluster1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "db-cluster-1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "db-cluster-1:3306": {
                "address": "db-cluster-1:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "db-cluster-2:3306": {
                "address": "db-cluster-2:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "db-cluster-3:3306": {
                "address": "db-cluster-3:3306",
                "instanceErrors": [
                    "WARNING: Instance is not managed by InnoDB cluster. Use cluster.rescan() to repair."
                ],
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "db-cluster-1:3306"
}
MySQL db-cluster-1:33060+ ssl JS > cluster.rescan()
Rescanning the cluster...
Result of the rescanning operation for the 'mycluster1' cluster:
{
    "name": "mycluster1",
    "newTopologyMode": null,
    "newlyDiscoveredInstances": [
        {
            "host": "db-cluster-3:3306",
            "member_id": "99856952-90ae-11ec-9a5f-fafd8f1acc17",
            "name": null,
            "version": "8.0.28"
        }
    ],
    "unavailableInstances": [],
    "updatedInstances": []
}

A new instance 'db-cluster-3:3306' was discovered in the cluster.
Would you like to add it to the cluster metadata? [Y/n]: Y
Adding instance to the cluster metadata...
The instance 'db-cluster-3:3306' was successfully added to the cluster metadata.
MySQL db-cluster-1:33060+ ssl JS > cluster.status()
{
    "clusterName": "mycluster1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "db-cluster-1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "db-cluster-1:3306": {
                "address": "db-cluster-1:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "db-cluster-2:3306": {
                "address": "db-cluster-2:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "db-cluster-3:3306": {
                "address": "db-cluster-3:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "db-cluster-1:3306"
}
MySQL db-cluster-1:33060+ ssl JS >
Now the cluster is in ONLINE state and can tolerate ONE instance failure, because all three instances are healthy.
MySQL Router logs
Here are the MySQL Router logs. They show the member leaving the group, then the start of the recovery process, and then the addition of the recovered instance after the rescan:
2022-05-31 03:56:01 metadata_cache INFO [7fd45edfe640] Potential changes detected in cluster 'mycluster1' after metadata refresh
2022-05-31 03:56:01 metadata_cache INFO [7fd45edfe640] Metadata for cluster 'mycluster1' has 3 member(s), single-primary: (view_id=0)
2022-05-31 03:56:01 metadata_cache INFO [7fd45edfe640]     db-cluster-1:3306 / 33060 - mode=RW
2022-05-31 03:56:01 metadata_cache INFO [7fd45edfe640]     db-cluster-2:3306 / 33060 - mode=RO
2022-05-31 03:56:01 metadata_cache INFO [7fd45edfe640]     db-cluster-3:3306 / 33060 - mode=n/a
2022-05-31 03:56:18 metadata_cache WARNING [7fd45edfe640] Member db-cluster-3:3306 (99856952-90ae-11ec-9a5f-fafd8f1acc17) defined in metadata not found in actual Group Replication
2022-05-31 03:56:18 metadata_cache INFO [7fd45edfe640] Potential changes detected in cluster 'mycluster1' after metadata refresh
2022-05-31 03:56:18 metadata_cache INFO [7fd45edfe640] Metadata for cluster 'mycluster1' has 3 member(s), single-primary: (view_id=0)
2022-05-31 03:56:18 metadata_cache INFO [7fd45edfe640]     db-cluster-1:3306 / 33060 - mode=RW
2022-05-31 03:56:18 metadata_cache INFO [7fd45edfe640]     db-cluster-2:3306 / 33060 - mode=RO
2022-05-31 03:56:18 metadata_cache INFO [7fd45edfe640]     db-cluster-3:3306 / 33060 - mode=n/a
2022-05-31 04:00:01 metadata_cache INFO [7fd45edfe640] Potential changes detected in cluster 'mycluster1' after metadata refresh
2022-05-31 04:00:01 metadata_cache INFO [7fd45edfe640] Metadata for cluster 'mycluster1' has 3 member(s), single-primary: (view_id=0)
2022-05-31 04:00:01 metadata_cache INFO [7fd45edfe640]     db-cluster-1:3306 / 33060 - mode=RW
2022-05-31 04:00:01 metadata_cache INFO [7fd45edfe640]     db-cluster-2:3306 / 33060 - mode=RO
2022-05-31 04:00:01 metadata_cache INFO [7fd45edfe640]     db-cluster-3:3306 / 33060 - mode=n/a
....
....
....
2022-06-07 23:42:47 metadata_cache INFO [7fd45edfe640] GR member db-cluster-3:3306 (99856952-90ae-11ec-9a5f-fafd8f1acc17) Recovering, missing in the metadata, ignoring
2022-06-07 23:42:47 metadata_cache INFO [7fd45edfe640] Potential changes detected in cluster 'mycluster1' after metadata refresh
2022-06-07 23:42:47 metadata_cache INFO [7fd45edfe640] Metadata for cluster 'mycluster1' has 2 member(s), single-primary: (view_id=0)
2022-06-07 23:42:47 metadata_cache INFO [7fd45edfe640]     db-cluster-1:3306 / 33060 - mode=RW
2022-06-07 23:42:47 metadata_cache INFO [7fd45edfe640]     db-cluster-2:3306 / 33060 - mode=RO
2022-06-09 00:57:24 metadata_cache INFO [7fd45edfe640] Potential changes detected in cluster 'mycluster1' after metadata refresh
2022-06-09 00:57:24 metadata_cache INFO [7fd45edfe640] Metadata for cluster 'mycluster1' has 3 member(s), single-primary: (view_id=0)
2022-06-09 00:57:24 metadata_cache INFO [7fd45edfe640]     db-cluster-1:3306 / 33060 - mode=RW
2022-06-09 00:57:24 metadata_cache INFO [7fd45edfe640]     db-cluster-2:3306 / 33060 - mode=RO
2022-06-09 00:57:24 metadata_cache INFO [7fd45edfe640]     db-cluster-3:3306 / 33060 - mode=RO
Errors
There are some errors along the way. As can be seen, rejoinInstance does not work, and getting a cluster object in a MySQL Shell connected to the bad instance is impossible.
[root@db-cluster-3 ~]# mysqlsh
MySQL Shell 8.0.28

Copyright (c) 2016, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
MySQL JS > \connect clusteradmin@db-cluster-3
Creating a session to 'clusteradmin@db-cluster-3'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 30 (X protocol)
Server version: 8.0.28 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
MySQL db-cluster-3:33060+ ssl JS > var cluster = dba.getCluster()
WARNING: Cluster error connecting to target: MYSQLSH 51002: Group replication does not seem to be active in instance 'db-cluster-3:3306'
Dba.getCluster: Group replication does not seem to be active in instance 'db-cluster-3:3306' (MYSQLSH 51002)
MySQL JS > \connect clusteradmin@db-cluster-1
Creating a session to 'clusteradmin@db-cluster-1'
Please provide the password for 'clusteradmin@db-cluster-1': ********************
Save password for 'clusteradmin@db-cluster-1'? [Y]es/[N]o/Ne[v]er (default No): Y
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 39796906 (X protocol)
Server version: 8.0.28 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
MySQL db-cluster-1:33060+ ssl JS > var cluster = dba.getCluster()
MySQL db-cluster-1:33060+ ssl JS > cluster.status()
{
    "clusterName": "mycluster1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "db-cluster-1:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active.",
        "topology": {
            "db-cluster-1:3306": {
                "address": "db-cluster-1:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "db-cluster-2:3306": {
                "address": "db-cluster-2:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "db-cluster-3:3306": {
                "address": "db-cluster-3:3306",
                "instanceErrors": [
                    "ERROR: group_replication has stopped with an error."
                ],
                "memberRole": "SECONDARY",
                "memberState": "ERROR",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "(MISSING)",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "db-cluster-1:3306"
}
MySQL db-cluster-1:33060+ ssl JS > cluster.rejoinInstance("db-cluster-3")
Rejoining instance 'db-cluster-3:3306' to cluster 'mycluster1'...
Cluster.rejoinInstance: The group_replication_group_name cannot be changed when Group Replication is running (MYSQLSH 3093)
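The rejoinInstance error ('group_replication_group_name cannot be changed when Group Replication is running') indicates that the plugin is still running, in its failed state, on the bad instance. A hedged sketch of the usual prerequisite, stopping Group Replication on that node first, in case a rejoin is attempted before falling back to the removeInstance/addInstance approach used in this article:

```shell
# On the BAD instance (db-cluster-3), stop the broken Group Replication first:
mysql -u clusteradmin -p -e "STOP GROUP_REPLICATION;"
# Then retry from a healthy member inside mysqlsh:
#   cluster.rejoinInstance('db-cluster-3:3306')
```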
Logs
The MySQL 8 InnoDB Cluster error log after the server crash. The instance tried to recover without success:
2022-05-31T03:57:40.946686Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.28) starting as process 683
2022-05-31T03:57:42.897304Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2022-05-31T03:57:58.593777Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2022-05-31T03:58:05.242610Z 0 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2022-05-31T03:58:15.151129Z 0 [System] [MY-010229] [Server] Starting XA crash recovery...
2022-05-31T03:58:15.154662Z 0 [System] [MY-010232] [Server] XA crash recovery finished.
2022-05-31T03:58:18.129707Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2022-05-31T03:58:18.129771Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2022-05-31T03:58:18.807347Z 0 [Warning] [MY-010604] [Repl] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=db-cluster-3-relay-bin' to avoid this problem.
2022-05-31T03:58:53.692200Z 0 [Warning] [MY-010818] [Server] Error reading GTIDs from relaylog: -1
2022-05-31T03:58:55.727411Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2022-05-31T03:58:55.727479Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.28'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server - GPL.
2022-05-31T03:58:56.201675Z 11 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2022-05-31T03:59:04.227610Z 2 [System] [MY-011511] [Repl] Plugin group_replication reported: 'This server is working as secondary member with primary member address db-cluster-1:3306.'
2022-05-31T03:59:04.245369Z 0 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Shutting down an outgoing connection. This happens because something might be wrong on a bi-directional connection to node db-cluster-1:33061. Please check the connection status to this member'
2022-05-31T03:59:05.228068Z 0 [System] [MY-013471] [Repl] Plugin group_replication reported: 'Distributed recovery will transfer data using: Incremental recovery from a group donor'
2022-05-31T03:59:05.509694Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to db-cluster-1:3306, db-cluster-2:3306, db-cluster-3:3306 on view 16451843215018933:65.'
2022-05-31T04:00:00.296337Z 13 [ERROR] [MY-010596] [Repl] Error reading relay log event for channel 'group_replication_applier': corrupted data in log event
2022-05-31T04:00:00.296387Z 13 [ERROR] [MY-013121] [Repl] Slave SQL for channel 'group_replication_applier': Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, the server was unable to fetch a keyring key required to open an encrypted relay log file, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave. Error_code: MY-013121
2022-05-31T04:00:00.322163Z 13 [ERROR] [MY-011451] [Repl] Plugin group_replication reported: 'The applier thread execution was aborted. Unable to process more transactions, this member will now leave the group.'
2022-05-31T04:00:00.322230Z 11 [ERROR] [MY-011452] [Repl] Plugin group_replication reported: 'Fatal error during execution on the Applier process of Group Replication. The server will now leave the group.'
2022-05-31T04:00:00.322317Z 11 [ERROR] [MY-011712] [Repl] Plugin group_replication reported: 'The server was automatically set into read only mode after an error was detected.'
2022-05-31T04:00:00.322360Z 13 [ERROR] [MY-010586] [Repl] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'FIRST' position 0
2022-05-31T04:00:00.322440Z 24 [ERROR] [MY-011622] [Repl] Plugin group_replication reported: 'Unable to evaluate the group replication applier execution status. Group replication recovery will shutdown to avoid data corruption.'
2022-05-31T04:00:00.322469Z 24 [ERROR] [MY-011620] [Repl] Plugin group_replication reported: 'Fatal error during the incremental recovery process of Group Replication. The server will leave the group.'
2022-05-31T04:00:00.322489Z 24 [Warning] [MY-011645] [Repl] Plugin group_replication reported: 'Skipping leave operation: concurrent attempt to leave the group is on-going.'
2022-05-31T04:00:00.322500Z 24 [ERROR] [MY-011712] [Repl] Plugin group_replication reported: 'The server was automatically set into read only mode after an error was detected.'
2022-05-31T04:00:03.448475Z 0 [System] [MY-011504] [Repl] Plugin group_replication reported: 'Group membership changed: This member has left the group.'