This article shows how to install a MySQL server and deploy a MySQL 8 InnoDB Cluster with three nodes behind a MySQL Router, to achieve high availability with a MySQL database back-end.
In really simple words, MySQL 8.0 InnoDB Cluster is MySQL replication on steroids – i.e. some additional work between the servers in the group before committing the transactions. It uses the MySQL Group Replication plugin, which allows the group to operate in two different modes:
- a single-primary mode with automatic primary election. Only one server accepts updates.
- a multi-primary (multi-master) mode – all servers accept updates. For advanced setups.
Group Replication is bi-directional: the servers communicate with each other and use row-based replication to replicate the data. The main limitation is that only the MySQL InnoDB engine is supported, because of its transaction support. So the performance (and most features and caveats) of MySQL InnoDB is not impacted by the cluster setup and its overhead, compared to MySQL in replication mode (or single-server setups) from previous MySQL versions. Still, all read-write transactions commit only after they have been approved by the group – a verification process providing consensus between the servers. In fact, most of the underlying features, such as GTIDs and row-based replication (i.e. the different replication modes), were developed for and are available in older versions. The new part is handled by the Group Communication System (GCS) protocols, which provide a failure detection mechanism, a group membership service, and safe and completely ordered message delivery (more on the subject here: https://dev.mysql.com/doc/refman/8.0/en/group-replication-background.html).
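As a side note, once a cluster is running (later in this article), the state of the group can be inspected with plain SQL from any member. A minimal sketch, using a performance_schema table and a system variable that ship with stock MySQL 8.0 Group Replication:

-- run in the MySQL console on any group member
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, MEMBER_ROLE
  FROM performance_schema.replication_group_members;
-- ON means single-primary mode, OFF means multi-primary mode
SELECT @@group_replication_single_primary_mode;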
In addition to Group Replication, MySQL Router 8.0 provides the HA – high availability. MySQL Router is the program that redirects, fails over, and balances connections to the right server in the group. Clients may connect directly to the servers in the group, but only clients that connect through MySQL Router get HA, because Group Replication does not have a built-in method for it. It is worth noting that there can be many MySQL Routers on different servers; they do not need to communicate or synchronize anything with each other. So the router could be installed on the same machine as the application, on a separate dedicated server, or on every MySQL server in the group.
Key points of the MySQL InnoDB Cluster deployment in this article:
- CentOS 8 Stream is used for the operating system.
- SELinux tuning to allow the MySQL process to connect to the network.
- CentOS 8 firewall tuning to allow traffic between the nodes.
- Disabling the mysql module of the CentOS 8 package system to use the official MySQL repository.
- Three MySQL 8.0.28 server nodes will be installed.
- MySQL Shell 8.0 and its dba object are used to create and manage the cluster.
- A MySQL Router will be installed on each of the three MySQL nodes.
- Each server will have the hostnames of all three servers in its /etc/hosts file – db-cluster-1, db-cluster-2, db-cluster-3.
- The cluster uses group replication with one primary (i.e. master) and two secondary nodes (i.e. slaves).
STEP 1) Install CentOS 8 Stream.
There is an article about CentOS 8 – How to do a network installation of CentOS 8 (8.0.1950) – minimal server installation – and the installation of CentOS 8 Stream is essentially the same.
STEP 2) Prepare the CentOS 8 Stream to install MySQL 8 server.
At present, the latest MySQL Community edition is 8.0.28. The preferred way to install the MySQL server is to download the RPM repository file from the MySQL web site – https://dev.mysql.com/downloads/repo/yum/
[root@db-cluster-1 ~]# wget https://repo.mysql.com/mysql80-community-release-el8-3.noarch.rpm --2022-02-17 15:56:11-- https://repo.mysql.com/mysql80-community-release-el8-3.noarch.rpm Resolving repo.mysql.com (repo.mysql.com)... 2.18.233.231 Connecting to repo.mysql.com (repo.mysql.com)|2.18.233.231|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 14100 (14K) [application/x-redhat-package-manager] Saving to: ‘mysql80-community-release-el8-3.noarch.rpm’ mysql80-community-release-e 100%[=========================================>] 13.77K --.-KB/s in 0s 2022-02-17 15:56:12 (257 MB/s) - ‘mysql80-community-release-el8-3.noarch.rpm’ saved [14100/14100] [root@db-cluster-1 ~]# dnf install -y ./mysql80-community-release-el8-3.noarch.rpm Last metadata expiration check: 1:18:27 ago on Thu 17 Feb 2022 02:37:58 PM UTC. Dependencies resolved. ============================================================================================================== Package Architecture Version Repository Size ============================================================================================================== Installing: mysql80-community-release noarch el8-3 @commandline 14 k Transaction Summary ============================================================================================================== Install 1 Package Total size: 14 k Installed size: 7.5 k Downloading Packages: Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : mysql80-community-release-el8-3.noarch 1/1 Verifying : mysql80-community-release-el8-3.noarch 1/1 Installed: mysql80-community-release-el8-3.noarch Complete! [root@db-cluster-1 ~]#
To use this repository – the official MySQL repository – the mysql module of the CentOS 8 package system must be disabled, or MySQL Community Server 8 won't be available for installation.
[root@db-cluster-1 ~]# dnf module disable mysql MySQL 8.0 Community Server 4.3 MB/s | 2.3 MB 00:00 MySQL Connectors Community 702 kB/s | 80 kB 00:00 MySQL Tools Community 3.7 MB/s | 432 kB 00:00 Dependencies resolved. ============================================================================================================== Package Architecture Version Repository Size ============================================================================================================== Disabling modules: mysql Transaction Summary ============================================================================================================== Is this ok [y/N]: y Complete! [root@db-cluster-1 ~]#
STEP 3) Install MySQL and configure the MySQL server.
If it is a clean install, a couple of dependencies will be installed:
[root@db-cluster-1 ~]# dnf install -y mysql-community-server Last metadata expiration check: 0:01:13 ago on Thu 17 Feb 2022 04:03:26 PM UTC. Dependencies resolved. ============================================================================================================== Package Arch Version Repository Size ============================================================================================================== Installing: mysql-community-server x86_64 8.0.28-1.el8 mysql80-community 53 M Installing dependencies: groff-base x86_64 1.22.3-18.el8 baseos 1.0 M libaio x86_64 0.3.112-1.el8 baseos 33 k mysql-community-client x86_64 8.0.28-1.el8 mysql80-community 14 M mysql-community-client-plugins x86_64 8.0.28-1.el8 mysql80-community 2.4 M mysql-community-common x86_64 8.0.28-1.el8 mysql80-community 633 k mysql-community-icu-data-files x86_64 8.0.28-1.el8 mysql80-community 2.1 M mysql-community-libs x86_64 8.0.28-1.el8 mysql80-community 1.5 M net-tools x86_64 2.0-0.52.20160912git.el8 baseos 322 k numactl-libs x86_64 2.0.12-13.el8 baseos 36 k perl-Carp noarch 1.42-396.el8 baseos 30 k perl-Data-Dumper x86_64 2.167-399.el8 baseos 58 k perl-Digest noarch 1.17-395.el8 appstream 27 k perl-Digest-MD5 x86_64 2.55-396.el8 appstream 37 k perl-Encode x86_64 4:2.97-3.el8 baseos 1.5 M perl-Errno x86_64 1.28-421.el8 baseos 76 k perl-Exporter noarch 5.72-396.el8 baseos 34 k perl-File-Path noarch 2.15-2.el8 baseos 38 k perl-File-Temp noarch 0.230.600-1.el8 baseos 63 k perl-Getopt-Long noarch 1:2.50-4.el8 baseos 63 k perl-HTTP-Tiny noarch 0.074-1.el8 baseos 58 k perl-IO x86_64 1.38-421.el8 baseos 142 k perl-MIME-Base64 x86_64 3.15-396.el8 baseos 31 k perl-Net-SSLeay x86_64 1.88-1.module_el8.4.0+517+be1595ff appstream 379 k perl-PathTools x86_64 3.74-1.el8 baseos 90 k perl-Pod-Escapes noarch 1:1.07-395.el8 baseos 20 k perl-Pod-Perldoc noarch 3.28-396.el8 baseos 86 k perl-Pod-Simple noarch 1:3.35-395.el8 baseos 213 k perl-Pod-Usage noarch 4:1.69-395.el8 baseos 34 k perl-Scalar-List-Utils x86_64 3:1.49-2.el8 baseos 68 k perl-Socket x86_64 4:2.027-3.el8 baseos 59 k perl-Storable x86_64 1:3.11-3.el8 baseos 98 k perl-Term-ANSIColor noarch 4.06-396.el8 baseos 46 k perl-Term-Cap noarch 1.17-395.el8 baseos 23 k perl-Text-ParseWords noarch 3.30-395.el8 baseos 18 k perl-Text-Tabs+Wrap noarch 2013.0523-395.el8 baseos 24 k perl-Time-Local noarch 1:1.280-1.el8 baseos 34 k perl-URI noarch 1.73-3.el8 appstream 116 k perl-Unicode-Normalize x86_64 1.25-396.el8 baseos 82 k perl-constant noarch 1.33-396.el8 baseos 25 k perl-interpreter x86_64 4:5.26.3-421.el8 baseos 6.3 M perl-libnet noarch 3.11-3.el8 appstream 121 k perl-libs x86_64 4:5.26.3-421.el8 baseos 1.6 M perl-macros x86_64 4:5.26.3-421.el8 baseos 72 k perl-parent noarch 1:0.237-1.el8 baseos 20 k perl-podlators noarch 4.11-1.el8 baseos 118 k perl-threads x86_64 1:2.21-2.el8 baseos 61 k perl-threads-shared x86_64 1.58-2.el8 baseos 48 k Installing weak dependencies: perl-IO-Socket-IP noarch 0.39-5.el8 appstream 47 k perl-IO-Socket-SSL noarch 2.066-4.module_el8.4.0+517+be1595ff appstream 298 k perl-Mozilla-CA noarch 20160104-7.module_el8.3.0+416+dee7bcef appstream 15 k Enabling module streams: perl 5.26 perl-IO-Socket-SSL 2.066 perl-libwww-perl 6.34 Transaction Summary ============================================================================================================== Install 51 Packages Total download size: 88 M Installed size: 388 M Downloading Packages: (1/51): perl-Digest-1.17-395.el8.noarch.rpm 368 kB/s | 27 kB 00:00 (2/51): 
perl-Digest-MD5-2.55-396.el8.x86_64.rpm 353 kB/s | 37 kB 00:00 (3/51): perl-IO-Socket-IP-0.39-5.el8.noarch.rpm 350 kB/s | 47 kB 00:00 (4/51): perl-Mozilla-CA-20160104-7.module_el8.3.0+416+dee7bcef.noarch.rpm 366 kB/s | 15 kB 00:00 (5/51): perl-IO-Socket-SSL-2.066-4.module_el8.4.0+517+be1595ff.noarch.rpm 2.0 MB/s | 298 kB 00:00 (6/51): perl-Net-SSLeay-1.88-1.module_el8.4.0+517+be1595ff.x86_64.rpm 3.1 MB/s | 379 kB 00:00 (7/51): perl-URI-1.73-3.el8.noarch.rpm 1.0 MB/s | 116 kB 00:00 (8/51): perl-libnet-3.11-3.el8.noarch.rpm 1.2 MB/s | 121 kB 00:00 (9/51): libaio-0.3.112-1.el8.x86_64.rpm 434 kB/s | 33 kB 00:00 (10/51): numactl-libs-2.0.12-13.el8.x86_64.rpm 686 kB/s | 36 kB 00:00 (11/51): groff-base-1.22.3-18.el8.x86_64.rpm 5.1 MB/s | 1.0 MB 00:00 (12/51): net-tools-2.0-0.52.20160912git.el8.x86_64.rpm 1.9 MB/s | 322 kB 00:00 (13/51): perl-Carp-1.42-396.el8.noarch.rpm 285 kB/s | 30 kB 00:00 (14/51): perl-Data-Dumper-2.167-399.el8.x86_64.rpm 591 kB/s | 58 kB 00:00 (15/51): perl-Exporter-5.72-396.el8.noarch.rpm 647 kB/s | 34 kB 00:00 (16/51): perl-Errno-1.28-421.el8.x86_64.rpm 672 kB/s | 76 kB 00:00 (17/51): perl-Encode-2.97-3.el8.x86_64.rpm 7.1 MB/s | 1.5 MB 00:00 (18/51): perl-File-Path-2.15-2.el8.noarch.rpm 328 kB/s | 38 kB 00:00 (19/51): perl-File-Temp-0.230.600-1.el8.noarch.rpm 524 kB/s | 63 kB 00:00 (20/51): perl-Getopt-Long-2.50-4.el8.noarch.rpm 717 kB/s | 63 kB 00:00 (21/51): perl-HTTP-Tiny-0.074-1.el8.noarch.rpm 609 kB/s | 58 kB 00:00 (22/51): perl-IO-1.38-421.el8.x86_64.rpm 1.3 MB/s | 142 kB 00:00 (23/51): perl-MIME-Base64-3.15-396.el8.x86_64.rpm 289 kB/s | 31 kB 00:00 (24/51): perl-PathTools-3.74-1.el8.x86_64.rpm 874 kB/s | 90 kB 00:00 (25/51): perl-Pod-Escapes-1.07-395.el8.noarch.rpm 208 kB/s | 20 kB 00:00 (26/51): perl-Pod-Perldoc-3.28-396.el8.noarch.rpm 935 kB/s | 86 kB 00:00 (27/51): perl-Pod-Simple-3.35-395.el8.noarch.rpm 2.0 MB/s | 213 kB 00:00 (28/51): perl-Pod-Usage-1.69-395.el8.noarch.rpm 334 kB/s | 34 kB 00:00 (29/51): perl-Scalar-List-Utils-1.49-2.el8.x86_64.rpm 640 kB/s | 68 kB 00:00 (30/51): perl-Socket-2.027-3.el8.x86_64.rpm 617 kB/s | 59 kB 00:00 (31/51): perl-Storable-3.11-3.el8.x86_64.rpm 1.1 MB/s | 98 kB 00:00 (32/51): perl-Term-ANSIColor-4.06-396.el8.noarch.rpm 541 kB/s | 46 kB 00:00 (33/51): perl-Term-Cap-1.17-395.el8.noarch.rpm 268 kB/s | 23 kB 00:00 (34/51): perl-Text-ParseWords-3.30-395.el8.noarch.rpm 191 kB/s | 18 kB 00:00 (35/51): perl-Text-Tabs+Wrap-2013.0523-395.el8.noarch.rpm 249 kB/s | 24 kB 00:00 (36/51): perl-Time-Local-1.280-1.el8.noarch.rpm 346 kB/s | 34 kB 00:00 (37/51): perl-Unicode-Normalize-1.25-396.el8.x86_64.rpm 831 kB/s | 82 kB 00:00 (38/51): perl-constant-1.33-396.el8.noarch.rpm 265 kB/s | 25 kB 00:00 (39/51): perl-macros-5.26.3-421.el8.x86_64.rpm 1.1 MB/s | 72 kB 00:00 (40/51): perl-parent-0.237-1.el8.noarch.rpm 333 kB/s | 20 kB 00:00 (41/51): perl-libs-5.26.3-421.el8.x86_64.rpm 7.4 MB/s | 1.6 MB 00:00 (42/51): perl-podlators-4.11-1.el8.noarch.rpm 1.3 MB/s | 118 kB 00:00 (43/51): perl-interpreter-5.26.3-421.el8.x86_64.rpm 16 MB/s | 6.3 MB 00:00 (44/51): perl-threads-2.21-2.el8.x86_64.rpm 267 kB/s | 61 kB 00:00 (45/51): perl-threads-shared-1.58-2.el8.x86_64.rpm 174 kB/s | 48 kB 00:00 (46/51): mysql-community-common-8.0.28-1.el8.x86_64.rpm 5.9 MB/s | 633 kB 00:00 (47/51): mysql-community-client-plugins-8.0.28-1.el8.x86_64.rpm 11 MB/s | 2.4 MB 00:00 (48/51): mysql-community-icu-data-files-8.0.28-1.el8.x86_64.rpm 12 MB/s | 2.1 MB 00:00 (49/51): mysql-community-libs-8.0.28-1.el8.x86_64.rpm 8.7 MB/s | 1.5 MB 00:00 (50/51): 
mysql-community-client-8.0.28-1.el8.x86_64.rpm 18 MB/s | 14 MB 00:00 (51/51): mysql-community-server-8.0.28-1.el8.x86_64.rpm 36 MB/s | 53 MB 00:01 -------------------------------------------------------------------------------------------------------------- Total 24 MB/s | 88 MB 00:03 MySQL 8.0 Community Server 3.0 MB/s | 3.1 kB 00:00 Importing GPG key 0x3A79BD29: Userid : "MySQL Release Engineering <mysql-build@oss.oracle.com>" Fingerprint: 859B E8D7 C586 F538 430B 19C2 467B 942D 3A79 BD29 From : /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql-2022 Key imported successfully MySQL 8.0 Community Server 1.9 MB/s | 1.9 kB 00:00 Importing GPG key 0x5072E1F5: Userid : "MySQL Release Engineering <mysql-build@oss.oracle.com>" Fingerprint: A4A9 4068 76FC BD3C 4567 70C8 8C71 8D3B 5072 E1F5 From : /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql Key imported successfully Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : mysql-community-common-8.0.28-1.el8.x86_64 1/51 Installing : mysql-community-client-plugins-8.0.28-1.el8.x86_64 2/51 Installing : mysql-community-libs-8.0.28-1.el8.x86_64 3/51 Running scriptlet: mysql-community-libs-8.0.28-1.el8.x86_64 3/51 Installing : mysql-community-client-8.0.28-1.el8.x86_64 4/51 Installing : mysql-community-icu-data-files-8.0.28-1.el8.x86_64 5/51 Installing : numactl-libs-2.0.12-13.el8.x86_64 6/51 Running scriptlet: numactl-libs-2.0.12-13.el8.x86_64 6/51 Installing : net-tools-2.0-0.52.20160912git.el8.x86_64 7/51 Running scriptlet: net-tools-2.0-0.52.20160912git.el8.x86_64 7/51 Installing : libaio-0.3.112-1.el8.x86_64 8/51 Installing : groff-base-1.22.3-18.el8.x86_64 9/51 Installing : perl-Digest-1.17-395.el8.noarch 10/51 .... .... perl-threads-shared-1.58-2.el8.x86_64 Complete! [root@db-cluster-1 ~]#
Some of the installation output is trimmed.
Now MySQL 8 is installed, and on the first start it will create the system databases and some unique IDs in /var/lib/mysql/.
First start of the server:
[root@db-cluster-1 ~]# systemctl start mysqld [root@db-cluster-1 ~]# systemctl enable mysqld [root@db-cluster-1 ~]# systemctl status mysqld ● mysqld.service - MySQL Server Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2022-02-11 16:17:18 UTC; 9min 45s ago Docs: man:mysqld(8) http://dev.mysql.com/doc/refman/en/using-systemd.html Process: 747 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS) Main PID: 817 (mysqld) Status: "Server is operational" Tasks: 39 (limit: 11381) Memory: 590.9M CGroup: /system.slice/mysqld.service └─817 /usr/sbin/mysqld Feb 17 16:11:38 db-cluster-1 systemd[1]: Starting MySQL Server... Feb 17 16:12:37 db-cluster-1 systemd[1]: Started MySQL Server. [root@db-cluster-1 ~]# ls -altr /var/lib/mysql/ total 188884 drwxr-xr-x. 23 root root 4096 Feb 17 16:05 .. -rw-r-----. 1 mysql mysql 56 Feb 17 16:11 auto.cnf -rw-r-----. 1 mysql mysql 50331648 Feb 17 16:11 ib_logfile1 -rw-r-----. 1 mysql mysql 8585216 Feb 17 16:11 '#ib_16384_1.dblwr' drwxr-x---. 2 mysql mysql 4096 Feb 17 16:12 performance_schema -rw-------. 1 mysql mysql 1676 Feb 17 16:12 ca-key.pem -rw-r--r--. 1 mysql mysql 1112 Feb 17 16:12 ca.pem -rw-------. 1 mysql mysql 1680 Feb 17 16:12 server-key.pem -rw-r--r--. 1 mysql mysql 1112 Feb 17 16:12 server-cert.pem -rw-------. 1 mysql mysql 1680 Feb 17 16:12 client-key.pem -rw-r--r--. 1 mysql mysql 1112 Feb 17 16:12 client-cert.pem -rw-r--r--. 1 mysql mysql 452 Feb 17 16:12 public_key.pem -rw-------. 1 mysql mysql 1680 Feb 17 16:12 private_key.pem drwxr-x---. 2 mysql mysql 4096 Feb 17 16:12 mysql drwxr-x---. 2 mysql mysql 4096 Feb 17 16:12 sys -rw-r-----. 1 mysql mysql 5515 Feb 17 16:12 ib_buffer_pool drwxr-x---. 2 mysql mysql 4096 Feb 17 16:12 '#innodb_temp' -rw-r-----. 1 mysql mysql 16 Feb 17 16:12 binlog.index -rw-r-----. 1 mysql mysql 157 Feb 17 16:12 binlog.000001 -rw-------. 1 mysql mysql 5 Feb 17 16:12 mysql.sock.lock srwxrwxrwx. 1 mysql mysql 0 Feb 17 16:12 mysql.sock drwxr-x--x. 6 mysql mysql 4096 Feb 17 16:12 . -rw-r-----. 1 mysql mysql 12582912 Feb 17 16:12 ibtmp1 -rw-r-----. 1 mysql mysql 25165824 Feb 17 16:12 mysql.ibd -rw-r-----. 1 mysql mysql 12582912 Feb 17 16:12 ibdata1 -rw-r-----. 1 mysql mysql 16777216 Feb 17 16:14 undo_001 -rw-r-----. 1 mysql mysql 196608 Feb 17 16:14 '#ib_16384_0.dblwr' -rw-r-----. 1 mysql mysql 16777216 Feb 17 16:14 undo_002 -rw-r-----. 1 mysql mysql 50331648 Feb 17 16:14 ib_logfile0 [root@db-cluster-1 ~]#
STEP 4) Install MySQL Shell to create and manage the MySQL 8 InnoDB Cluster.
MySQL Shell 8.0 is the program with which the user interacts with the cluster – https://dev.mysql.com/doc/mysql-shell/8.0/en/
[root@db-cluster-1 ~]# dnf install -y mysql-shell Last metadata expiration check: 2:18:03 ago on Thu 17 Feb 2022 05:44:58 PM UTC. Dependencies resolved. ============================================================================================================== Package Architecture Version Repository Size ============================================================================================================== Installing: mysql-shell x86_64 8.0.28-1.el8 mysql-tools-community 17 M Installing dependencies: python39-libs x86_64 3.9.7-1.module_el8.6.0+930+10acc06f appstream 8.2 M python39-pip-wheel noarch 20.2.4-6.module_el8.6.0+930+10acc06f appstream 1.3 M python39-setuptools-wheel noarch 50.3.2-4.module_el8.6.0+930+10acc06f appstream 497 k Installing weak dependencies: python39 x86_64 3.9.7-1.module_el8.6.0+930+10acc06f appstream 33 k python39-pip noarch 20.2.4-6.module_el8.6.0+930+10acc06f appstream 2.0 M python39-setuptools noarch 50.3.2-4.module_el8.6.0+930+10acc06f appstream 871 k Enabling module streams: python39 3.9 Transaction Summary =============================================================================================================== Install 7 Packages Total download size: 30 M Installed size: 150 M Downloading Packages: (1/7): python39-3.9.7-1.module_el8.6.0+930+10acc06f.x86_64.rpm 375 kB/s | 33 kB 00:00 (2/7): python39-pip-wheel-20.2.4-6.module_el8.6.0+930+10acc06f.noarch.rpm 6.7 MB/s | 1.3 MB 00:00 (3/7): python39-setuptools-50.3.2-4.module_el8.6.0+930+10acc06f.noarch.rpm 10 MB/s | 871 kB 00:00 (4/7): python39-pip-20.2.4-6.module_el8.6.0+930+10acc06f.noarch.rpm 4.1 MB/s | 2.0 MB 00:00 (5/7): python39-setuptools-wheel-50.3.2-4.module_el8.6.0+930+10acc06f.noarch.rpm 2.7 MB/s | 497 kB 00:00 (6/7): python39-libs-3.9.7-1.module_el8.6.0+930+10acc06f.x86_64.rpm 11 MB/s | 8.2 MB 00:00 (7/7): mysql-shell-8.0.28-1.el8.x86_64.rpm 27 MB/s | 17 MB 00:00 --------------------------------------------------------------------------------------------------------------- Total 21 MB/s | 30 MB 00:01 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. 
Running transaction Preparing : 1/1 Installing : python39-setuptools-wheel-50.3.2-4.module_el8.6.0+930+10acc06f.noarch 1/7 Installing : python39-pip-wheel-20.2.4-6.module_el8.6.0+930+10acc06f.noarch 2/7 Installing : python39-libs-3.9.7-1.module_el8.6.0+930+10acc06f.x86_64 3/7 Installing : python39-3.9.7-1.module_el8.6.0+930+10acc06f.x86_64 4/7 Running scriptlet: python39-3.9.7-1.module_el8.6.0+930+10acc06f.x86_64 4/7 Installing : python39-setuptools-50.3.2-4.module_el8.6.0+930+10acc06f.noarch 5/7 Running scriptlet: python39-setuptools-50.3.2-4.module_el8.6.0+930+10acc06f.noarch 5/7 Installing : python39-pip-20.2.4-6.module_el8.6.0+930+10acc06f.noarch 6/7 Running scriptlet: python39-pip-20.2.4-6.module_el8.6.0+930+10acc06f.noarch 6/7 Installing : mysql-shell-8.0.28-1.el8.x86_64 7/7 Running scriptlet: mysql-shell-8.0.28-1.el8.x86_64 7/7 Verifying : python39-3.9.7-1.module_el8.6.0+930+10acc06f.x86_64 1/7 Verifying : python39-libs-3.9.7-1.module_el8.6.0+930+10acc06f.x86_64 2/7 Verifying : python39-pip-20.2.4-6.module_el8.6.0+930+10acc06f.noarch 3/7 Verifying : python39-pip-wheel-20.2.4-6.module_el8.6.0+930+10acc06f.noarch 4/7 Verifying : python39-setuptools-50.3.2-4.module_el8.6.0+930+10acc06f.noarch 5/7 Verifying : python39-setuptools-wheel-50.3.2-4.module_el8.6.0+930+10acc06f.noarch 6/7 Verifying : mysql-shell-8.0.28-1.el8.x86_64 7/7 Installed: mysql-shell-8.0.28-1.el8.x86_64 python39-3.9.7-1.module_el8.6.0+930+10acc06f.x86_64 python39-libs-3.9.7-1.module_el8.6.0+930+10acc06f.x86_64 python39-pip-20.2.4-6.module_el8.6.0+930+10acc06f.noarch python39-pip-wheel-20.2.4-6.module_el8.6.0+930+10acc06f.noarch python39-setuptools-50.3.2-4.module_el8.6.0+930+10acc06f.noarch python39-setuptools-wheel-50.3.2-4.module_el8.6.0+930+10acc06f.noarch Complete!
STEP 4.1) Add the domains and IPs to /etc/hosts.
Of course, this step is not mandatory – the domains could be resolvable through DNS – but domains in /etc/hosts are resolved faster, with only a local lookup. In fact, by using /etc/hosts for the cluster domain names, the IPs may differ between the nodes, but such a setup is more complex and beyond this article's scope.
192.168.0.11 db-cluster-1
192.168.0.12 db-cluster-2
192.168.0.13 db-cluster-3
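A quick sanity check on each node confirms that all three names resolve to the expected IPs from the file above:

getent hosts db-cluster-1 db-cluster-2 db-cluster-3
# each line should print the IP from /etc/hosts next to the hostname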
STEP 4.2) SELinux configuration.
SELinux tuning is important for the cluster to work properly:
[root@db-cluster-1 ~]# getenforce
Enforcing
[root@db-cluster-1 ~]# setsebool -P mysql_connect_any 1
If getenforce reports Enforcing, the setsebool command is mandatory for the cluster to run flawlessly.
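To verify that the boolean was set persistently (the -P flag of setsebool writes it to the policy), it can be queried back:

getsebool mysql_connect_any
# should report: mysql_connect_any --> on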
STEP 4.3) Firewall configuration.
The cluster is in a local trusted network, so the three IPs or the whole network could be added to the built-in trusted zone with:
firewall-cmd --permanent --zone=trusted --add-source=192.168.0.0/24
firewall-cmd --reload
or only for the servers' IPs, assuming the IPs of the three nodes are 192.168.0.11, 192.168.0.12 and 192.168.0.13:
firewall-cmd --permanent --zone=trusted --add-source=192.168.0.11
firewall-cmd --permanent --zone=trusted --add-source=192.168.0.12
firewall-cmd --permanent --zone=trusted --add-source=192.168.0.13
firewall-cmd --reload
STEP 4.4) MySQL generic configuration.
When the SELinux and firewall configurations are ready, tune the MySQL generic and some InnoDB configuration variables for the purposes of the setup. For example, add to the bottom of the file /etc/my.cnf (under the section "[mysqld]"):
#my configs
skip-external-locking
skip-name-resolve
skip-character-set-client-handshake
key_buffer_size = 256M
max_allowed_packet = 256M
table_open_cache = 500
open_files_limit = 5000
sort_buffer_size = 2048K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 1024K
myisam_sort_buffer_size = 64M
max_connections = 150
max_heap_table_size = 512M
innodb_buffer_pool_size = 2048M
innodb_log_buffer_size = 32M
innodb_log_files_in_group = 2
innodb_flush_log_at_trx_commit = 0
innodb_lock_wait_timeout = 50
innodb_file_per_table
innodb_thread_concurrency = 0
innodb_flush_method = O_DIRECT
The innodb_buffer_pool_size is set to roughly 70% of the amount of RAM.
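These variables are read at startup, so restart the server after editing /etc/my.cnf. Once the root password is known (it is set in the next step), one or two values can be verified took effect; a minimal check:

systemctl restart mysqld
mysql -uroot -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'"
mysql -uroot -p -e "SHOW VARIABLES LIKE 'max_allowed_packet'"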
STEP 4.5) Set the MySQL root password.
The MySQL initialization process sets a temporary password for the root account, which should be changed via the MySQL console. The password is logged in the MySQL log file – /var/log/mysqld.log. Search for it with:
[root@db-cluster-1 ~]# grep -i "temporary password" /var/log/mysqld.log
2022-02-17T16:12:02.627695Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: Z5+fYVe_s=ZO
And then change the password with the MySQL SQL console (the temporary password is needed for login):
[root@db-cluster-1 ~]# mysql -uroot -p"Z5+fYVe_s=ZO" mysql: [Warning] Using a password on the command line interface can be insecure. Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 10 Server version: 8.0.28 Copyright (c) 2000, 2022, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'tee9wie2Pa4aevai#g&i'; Query OK, 0 rows affected (0.08 sec) mysql> ^DBye
It is convenient to add the password to the user's home .my.cnf file (in this case, for the root system account – /root/.my.cnf):
[client]
password="tee9wie2Pa4aevai#g&i"
Entering the MySQL console will then not require the MySQL root password, because the console reads it from this file.
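Because the file now holds the root password in plain text, it is a good idea to make it readable only by its owner:

chmod 600 /root/.my.cnf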
STEP 5) Check the configuration for MySQL 8.0 InnoDB Cluster and create a cluster administrative account.
The MySQL Shell command "mysqlsh" is used to configure and manage the MySQL 8.0 InnoDB Cluster using JavaScript syntax. The user accesses the MySQL InnoDB Cluster functionality with the "dba" object through "mysqlsh".
The first thing to do before creating the MySQL 8.0 InnoDB Cluster is to check whether the current MySQL configuration is ready and compliant with the cluster setup. Enter the MySQL Shell and execute the following command:
[root@db-cluster-1 ~]# mysqlsh MySQL Shell 8.0.28 Copyright (c) 2016, 2022, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type '\help' or '\?' for help; '\quit' to exit. MySQL JS > dba.checkInstanceConfiguration("root@localhost") Please provide the password for 'root@localhost': ******************** Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No): No Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster... ERROR: New account(s) with proper source address specification to allow remote connection from all instances must be created to manage the cluster. Dba.checkInstanceConfiguration: User 'root' can only connect from 'localhost'. (RuntimeError)
A cluster administrative account, which can connect from the network, must be created. Then the MySQL readiness for the MySQL 8.0 InnoDB Cluster is checked with the cluster's administrative account. The local root account is used to create the cluster administrative account, here named clusteradmin (the name may be different):
[root@db-cluster-1 ~]# mysqlsh MySQL Shell 8.0.28 Copyright (c) 2016, 2022, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type '\help' or '\?' for help; '\quit' to exit. MySQL JS > dba.configureInstance("root@localhost") Please provide the password for 'root@localhost': ******************** Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No): N Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster... This instance reports its own address as db-cluster-1:3306 Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed. ERROR: User 'root' can only connect from 'localhost'. New account(s) with proper source address specification to allow remote connection from all instances must be created to manage the cluster. 1) Create remotely usable account for 'root' with same grants and password 2) Create a new admin account for InnoDB cluster with minimal required grants 3) Ignore and continue 4) Cancel Please select an option [1]: 2 Please provide an account name (e.g: icroot@%) to have it created with the necessary privileges or leave empty and press Enter to cancel. Account Name: clusteradmin@% Password for new account: ******************** Confirm password: ******************** applierWorkerThreads will be set to the default value of 4. NOTE: Some configuration options need to be fixed: +----------------------------------------+---------------+----------------+--------------------------------------------------+ | Variable | Current Value | Required Value | Note | +----------------------------------------+---------------+----------------+--------------------------------------------------+ | binlog_transaction_dependency_tracking | COMMIT_ORDER | WRITESET | Update the server variable | | enforce_gtid_consistency | OFF | ON | Update read-only variable and restart the server | | gtid_mode | OFF | ON | Update read-only variable and restart the server | | server_id | 1 | <unique ID> | Update read-only variable and restart the server | +----------------------------------------+---------------+----------------+--------------------------------------------------+ Some variables need to be changed, but cannot be done dynamically on the server. Do you want to perform the required configuration changes? [y/n]: y Do you want to restart the instance after configuring it? [y/n]: y Cluster admin user 'clusteradmin'@'%' created. Configuring instance... The instance 'db-cluster-1:3306' was configured to be used in an InnoDB cluster. Restarting MySQL... NOTE: MySQL server at db-cluster-1:3306 was restarted.
The MySQL Shell command dba.configureInstance("root@localhost") offers to (a non-interactive variant is sketched after this list):
- create an admin account (i.e. an administrative cluster account) for the InnoDB cluster with the minimal required grants;
- check the MySQL configuration for compliance with MySQL InnoDB Cluster;
- fix any non-compliant configuration and restart the MySQL server.
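For repeatable deployments, the same step can be run without the interactive prompts. A minimal sketch using the documented clusterAdmin options of dba.configureInstance(); the password below is a placeholder:

// run inside mysqlsh; creates the cluster admin account, fixes the
// non-compliant variables and restarts the server in one call
dba.configureInstance("root@localhost", {clusterAdmin: "'clusteradmin'@'%'", clusterAdminPassword: "[strong-password-here]", restart: true})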
Using the newly created admin user, it is easy to check whether dba.configureInstance() fixed the issues reported above:
[root@db-cluster-1 ~]# mysqlsh MySQL Shell 8.0.28 Copyright (c) 2016, 2022, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type '\help' or '\?' for help; '\quit' to exit. MySQL JS > dba.checkInstanceConfiguration("clusteradmin@db-cluster-1") Please provide the password for 'clusteradmin@db-cluster-1': ******************** Save password for 'clusteradmin@db-cluster-1'? [Y]es/[N]o/Ne[v]er (default No): Y Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster... This instance reports its own address as db-cluster-1:3306 Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed. Checking whether existing tables comply with Group Replication requirements... No incompatible tables detected Checking instance configuration... Instance configuration is compatible with InnoDB cluster The instance 'db-cluster-1:3306' is valid to be used in an InnoDB cluster. { "status": "ok" } MySQL JS >
This server is ready for creating the MySQL InnoDB Cluster. It is a good idea to let mysqlsh remember the password of clusteradmin@db-cluster-1 for future use (the "Save password" prompt in the output above).
STEP 6) Create the MySQL Cluster.
It is a simple operation that creates a group replication with the first server as master, i.e. the primary in the group. Note: first connect to MySQL with the cluster administrative account, then issue the create command with a logical name for the cluster.
[root@db-cluster-1 ~]# mysqlsh MySQL Shell 8.0.28 Copyright (c) 2016, 2022, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type '\help' or '\?' for help; '\quit' to exit. MySQL JS > \connect clusteradmin@db-cluster-1 Creating a session to 'clusteradmin@db-cluster-1' Fetching schema names for autocompletion... Press ^C to stop. Your MySQL connection id is 12 (X protocol) Server version: 8.0.28 MySQL Community Server - GPL No default schema selected; type \use <schema> to set one. MySQL db-cluster-1:33060+ ssl JS > dba.createCluster("mycluster1") A new InnoDB cluster will be created on instance 'db-cluster-1:3306'. Validating instance configuration at db-cluster-1:3306... This instance reports its own address as db-cluster-1:3306 Instance configuration is suitable. NOTE: Group Replication will communicate with other members using 'db-cluster-1:33061'. Use the localAddress option to override. Creating InnoDB cluster 'mycluster1' on 'db-cluster-1:3306'... Adding Seed Instance... Cluster successfully created. Use Cluster.addInstance() to add MySQL instances. At least 3 instances are needed for the cluster to be able to withstand up to one server failure. <Cluster:mycluster1> MySQL db-cluster-1:33060+ ssl JS > var cluster = dba.getCluster() MySQL db-cluster-1:33060+ ssl JS > cluster.status() { "clusterName": "mycluster1", "defaultReplicaSet": { "name": "default", "primary": "db-cluster-1:3306", "ssl": "REQUIRED", "status": "OK_NO_TOLERANCE", "statusText": "Cluster is NOT tolerant to any failures.", "topology": { "db-cluster-1:3306": { "address": "db-cluster-1:3306", "memberRole": "PRIMARY", "mode": "R/W", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" } }, "topologyMode": "Single-Primary" }, "groupInformationSourceMember": "db-cluster-1:3306" } MySQL db-cluster-1:33060+ ssl JS >
The cluster is up and running with only one node, which is the read/write primary node (note the mode: R/W). By default, the cluster runs with one primary node and multiple nodes (in this case, two) in secondary mode, which execute only reads (i.e. SELECTs).
STEP 7) Install and configure the remaining two cluster nodes.
Now it is time to repeat the above steps two more times, without the last one – STEP 6) Create the MySQL Cluster. Here are just the commands, without their output, for clarity:
wget https://repo.mysql.com/mysql80-community-release-el8-3.noarch.rpm
dnf install -y ./mysql80-community-release-el8-3.noarch.rpm
dnf module disable mysql
dnf install -y mysql-community-server
systemctl start mysqld
dnf install -y mysql-shell
echo "
192.168.0.11 db-cluster-1
192.168.0.12 db-cluster-2
192.168.0.13 db-cluster-3" >> /etc/hosts
setsebool -P mysql_connect_any 1
firewall-cmd --permanent --zone=trusted --add-source=192.168.0.0/24
firewall-cmd --reload
#tune the MySQL configuration /etc/my.cnf file with generic variables
grep -i "temporary password" /var/log/mysqld.log
mysql -uroot -p"[temporary password from above command]"
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'tee9wie2Pa4aevai#g&i';
mysql> quit
#add the new root password in /root/.my.cnf
mysqlsh
MySQL JS > dba.configureInstance("root@localhost")
Please select an option [1]: 2
Account Name: clusteradmin@%
Password for new account: ********************
Confirm password: ********************
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y
MySQL JS > dba.checkInstanceConfiguration("clusteradmin@db-cluster-3")
{
    "status": "ok"
}
STEP 8) Add the two MySQL nodes to the cluster.
Now it is time to add the MySQL nodes to the cluster created in STEP 6). When adding a node, a recovery process happens, and there are two different modes – Clone and Incremental. The Clone mode is the safer way to add a node and, in general, it is used when adding a completely new node.
It is worth mentioning that the cloning process when adding a node (instance) to the cluster is a complex procedure, which may add users and temporary configuration; if interrupted, the process may not be able to restart properly!
[root@db-cluster-1 ~]# mysqlsh MySQL Shell 8.0.28 Copyright (c) 2016, 2022, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type '\help' or '\?' for help; '\quit' to exit. MySQL JS > \connect clusteradmin@db-cluster-1 Creating a session to 'clusteradmin@db-cluster-1' Fetching schema names for autocompletion... Press ^C to stop. Your MySQL connection id is 8 (X protocol) Server version: 8.0.28 MySQL Community Server - GPL No default schema selected; type \use <schema> to set one. MySQL db-cluster-1:33060+ ssl JS > var cluster = dba.getCluster() MySQL db-cluster-1:33060+ ssl JS > cluster.status() { "clusterName": "mycluster1", "defaultReplicaSet": { "name": "default", "primary": "db-cluster-1:3306", "ssl": "REQUIRED", "status": "OK_NO_TOLERANCE", "statusText": "Cluster is NOT tolerant to any failures.", "topology": { "db-cluster-1:3306": { "address": "db-cluster-1:3306", "memberRole": "PRIMARY", "mode": "R/W", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" } }, "topologyMode": "Single-Primary" }, "groupInformationSourceMember": "db-cluster-1:3306" } MySQL db-cluster-1:33060+ ssl JS > cluster.addInstance('clusteradmin@db-cluster-2:3306') NOTE: The target instance 'db-cluster-2:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it. The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'db-cluster-2:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'. The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'. Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C Validating instance configuration at db-cluster-2:3306... This instance reports its own address as db-cluster-2:3306 Instance configuration is suitable. NOTE: Group Replication will communicate with other members using 'db-cluster-2:33061'. Use the localAddress option to override. A new instance will be added to the InnoDB cluster. Depending on the amount of data on the cluster this might take from a few seconds to several hours. Adding instance to the cluster... Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background. Clone based state recovery is now in progress. NOTE: A server restart is expected to happen as part of the clone process. If the server does not support the RESTART command or does not come back after a while, you may need to manually start it back. * Waiting for clone to finish... NOTE: db-cluster-2:3306 is being cloned from db-cluster-1:3306 ** Stage DROP DATA: Completed ** Clone Transfer FILE COPY ############################################################ 100% Completed PAGE COPY ############################################################ 100% Completed REDO COPY ############################################################ 100% Completed NOTE: db-cluster-2:3306 is shutting down... * Waiting for server restart... 
ready * db-cluster-2:3306 has restarted, waiting for clone to finish... ** Stage RESTART: Completed * Clone process has finished: 72.61 MB transferred in about 1 second (~72.61 MB/s) State recovery already finished for 'db-cluster-2:3306' The instance 'db-cluster-2:3306' was successfully added to the cluster. MySQL db-cluster-1:33060+ ssl JS > cluster.status() { "clusterName": "mycluster1", "defaultReplicaSet": { "name": "default", "primary": "db-cluster-1:3306", "ssl": "REQUIRED", "status": "OK_NO_TOLERANCE", "statusText": "Cluster is NOT tolerant to any failures.", "topology": { "db-cluster-1:3306": { "address": "db-cluster-1:3306", "memberRole": "PRIMARY", "mode": "R/W", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" }, "db-cluster-2:3306": { "address": "db-cluster-2:3306", "memberRole": "SECONDARY", "mode": "R/O", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" } }, "topologyMode": "Single-Primary" }, "groupInformationSourceMember": "db-cluster-1:3306" } MySQL db-cluster-1:33060+ ssl JS >
And add the third server, db-cluster-3, with the same command (resume in the above MySQL Shell session – mysqlsh – or connect again):
[root@db-cluster-1 ~]# mysqlsh MySQL Shell 8.0.28 Copyright (c) 2016, 2022, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type '\help' or '\?' for help; '\quit' to exit. MySQL JS > \connect clusteradmin@db-cluster-1 Creating a session to 'clusteradmin@db-cluster-1' Fetching schema names for autocompletion... Press ^C to stop. Your MySQL connection id is 51 (X protocol) Server version: 8.0.28 MySQL Community Server - GPL No default schema selected; type \use <schema> to set one. MySQL db-cluster-1:33060+ ssl JS > var cluster = dba.getCluster() MySQL db-cluster-1:33060+ ssl JS > cluster.status() { "clusterName": "mycluster1", "defaultReplicaSet": { "name": "default", "primary": "db-cluster-1:3306", "ssl": "REQUIRED", "status": "OK_NO_TOLERANCE", "statusText": "Cluster is NOT tolerant to any failures.", "topology": { "db-cluster-1:3306": { "address": "db-cluster-1:3306", "memberRole": "PRIMARY", "mode": "R/W", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" }, "db-cluster-2:3306": { "address": "db-cluster-2:3306", "memberRole": "SECONDARY", "mode": "R/O", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" } }, "topologyMode": "Single-Primary" }, "groupInformationSourceMember": "db-cluster-1:3306" } MySQL db-cluster-1:33060+ ssl JS > cluster.addInstance('clusteradmin@db-cluster-3:3306') NOTE: The target instance 'db-cluster-3:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it. The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'db-cluster-3:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'. The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'. Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C Validating instance configuration at db-cluster-3:3306... This instance reports its own address as db-cluster-3:3306 Instance configuration is suitable. NOTE: Group Replication will communicate with other members using 'db-cluster-3:33061'. Use the localAddress option to override. A new instance will be added to the InnoDB cluster. Depending on the amount of data on the cluster this might take from a few seconds to several hours. Adding instance to the cluster... Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background. Clone based state recovery is now in progress. NOTE: A server restart is expected to happen as part of the clone process. If the server does not support the RESTART command or does not come back after a while, you may need to manually start it back. * Waiting for clone to finish... 
NOTE: db-cluster-3:3306 is being cloned from db-cluster-2:3306 ** Stage DROP DATA: Completed ** Clone Transfer FILE COPY ############################################################ 100% Completed PAGE COPY ############################################################ 100% Completed REDO COPY ############################################################ 100% Completed NOTE: db-cluster-3:3306 is shutting down... * Waiting for server restart... ready * db-cluster-3:3306 has restarted, waiting for clone to finish... ** Stage RESTART: Completed * Clone process has finished: 72.61 MB transferred in 5 sec (14.52 MB/s) Incremental state recovery is now in progress. * Waiting for distributed recovery to finish... NOTE: 'db-cluster-3:3306' is being recovered from 'db-cluster-2:3306' * Distributed recovery has finished The instance 'db-cluster-3:3306' was successfully added to the cluster. MySQL db-cluster-1:33060+ ssl JS > cluster.status() { "clusterName": "mycluster1", "defaultReplicaSet": { "name": "default", "primary": "db-cluster-1:3306", "ssl": "REQUIRED", "status": "OK", "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", "topology": { "db-cluster-1:3306": { "address": "db-cluster-1:3306", "memberRole": "PRIMARY", "mode": "R/W", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" }, "db-cluster-2:3306": { "address": "db-cluster-2:3306", "memberRole": "SECONDARY", "mode": "R/O", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" }, "db-cluster-3:3306": { "address": "db-cluster-3:3306", "memberRole": "SECONDARY", "mode": "R/O", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" } }, "topologyMode": "Single-Primary" }, "groupInformationSourceMember": "db-cluster-1:3306" } MySQL db-cluster-1:33060+ ssl JS >
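With all three members ONLINE, the primary role can also be moved between the nodes without downtime. A short sketch using the Cluster.setPrimaryInstance() method of MySQL Shell, available for single-primary clusters:

// inside mysqlsh, after var cluster = dba.getCluster()
cluster.setPrimaryInstance('db-cluster-2:3306')   // promote db-cluster-2 to PRIMARY
cluster.status()                                  // db-cluster-1 should now be SECONDARY (R/O)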
STEP 9) Install and run MySQL Router.
After installation, MySQL Router should be bootstrapped to generate a configuration against the current cluster.
MySQL Router depends only on the MySQL InnoDB Cluster metadata schema, so multiple instances on different hosts can be used simultaneously.
First, install the MySQL Router.
[root@db-cluster-1 ~]# dnf install -y mysql-router-community Last metadata expiration check: 0:44:22 ago on Fri 18 Feb 2022 11:23:25 AM UTC. Dependencies resolved. ============================================================================================================== Package Architecture Version Repository Size ============================================================================================================== Installing: mysql-router-community x86_64 8.0.28-1.el8 mysql-tools-community 3.5 M Transaction Summary ============================================================================================================== Install 1 Package Total download size: 3.5 M Installed size: 14 M Downloading Packages: mysql-router-community-8.0.28-1.el8.x86_64.rpm 5.0 MB/s | 3.5 MB 00:00 -------------------------------------------------------------------------------------------------------------- Total 5.0 MB/s | 3.5 MB 00:00 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Running scriptlet: mysql-router-community-8.0.28-1.el8.x86_64 1/1 Installing : mysql-router-community-8.0.28-1.el8.x86_64 1/1 Running scriptlet: mysql-router-community-8.0.28-1.el8.x86_64 1/1 Verifying : mysql-router-community-8.0.28-1.el8.x86_64 1/1 Installed: mysql-router-community-8.0.28-1.el8.x86_64 Complete!
Second, bootstrap the router to create the initial configuration and the needed user(s):
[root@db-cluster-1 ~]# mysqlrouter --bootstrap clusteradmin@localhost:3306 --user mysqlrouter --conf-use-sockets --account routerfriend1 --account-create if-not-exists Please enter MySQL password for clusteradmin: # Bootstrapping system MySQL Router instance... Please enter MySQL password for routerfriend1: - Creating account(s) (only those that are needed, if any) - Verifying account (using it to run SQL queries that would be run by Router) - Storing account in keyring - Adjusting permissions of generated files - Creating configuration /etc/mysqlrouter/mysqlrouter.conf Existing configuration backed up to '/etc/mysqlrouter/mysqlrouter.conf.bak' # MySQL Router configured for the InnoDB Cluster 'mycluster1' After this MySQL Router has been started with the generated configuration $ /etc/init.d/mysqlrouter restart or $ systemctl start mysqlrouter or $ mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf InnoDB Cluster 'mycluster1' can be reached by connecting to: ## MySQL Classic protocol - Read/Write Connections: localhost:6446, /tmp/mysql.sock - Read/Only Connections: localhost:6447, /tmp/mysqlro.sock ## MySQL X protocol - Read/Write Connections: localhost:6448, /tmp/mysqlx.sock - Read/Only Connections: localhost:6449, /tmp/mysqlxro.sock [root@db-cluster-1 ~]#
It generates the configuration in /etc/mysqlrouter/mysqlrouter.conf and some more files, like the keys and certificates needed for SSL connections:
# File automatically generated during MySQL Router bootstrap
[DEFAULT]
name=system
user=mysqlrouter
keyring_path=/var/lib/mysqlrouter/keyring
master_key_path=/etc/mysqlrouter/mysqlrouter.key
connect_timeout=15
read_timeout=30
dynamic_state=/var/lib/mysqlrouter/state.json
client_ssl_cert=/var/lib/mysqlrouter/router-cert.pem
client_ssl_key=/var/lib/mysqlrouter/router-key.pem
client_ssl_mode=PREFERRED
server_ssl_mode=AS_CLIENT
server_ssl_verify=DISABLED

[logger]
level=INFO

[metadata_cache:mycluster1]
cluster_type=gr
router_id=1
user=routerfriend1
metadata_cluster=mycluster1
ttl=0.5
auth_cache_ttl=-1
auth_cache_refresh_interval=2
use_gr_notifications=0

[routing:mycluster1_rw]
bind_address=0.0.0.0
bind_port=6446
socket=/tmp/mysql.sock
destinations=metadata-cache://mycluster1/?role=PRIMARY
routing_strategy=first-available
protocol=classic

[routing:mycluster1_ro]
bind_address=0.0.0.0
bind_port=6447
socket=/tmp/mysqlro.sock
destinations=metadata-cache://mycluster1/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=classic

[routing:mycluster1_x_rw]
bind_address=0.0.0.0
bind_port=6448
socket=/tmp/mysqlx.sock
destinations=metadata-cache://mycluster1/?role=PRIMARY
routing_strategy=first-available
protocol=x

[routing:mycluster1_x_ro]
bind_address=0.0.0.0
bind_port=6449
socket=/tmp/mysqlxro.sock
destinations=metadata-cache://mycluster1/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=x

[http_server]
port=8443
ssl=1
ssl_cert=/var/lib/mysqlrouter/router-cert.pem
ssl_key=/var/lib/mysqlrouter/router-key.pem

[http_auth_realm:default_auth_realm]
backend=default_auth_backend
method=basic
name=default_realm

[rest_router]
require_realm=default_auth_realm

[rest_api]

[http_auth_backend:default_auth_backend]
backend=metadata_cache

[rest_routing]
require_realm=default_auth_realm

[rest_metadata_cache]
require_realm=default_auth_realm
The last step is to start the MySQL Router and open the router's ports in the firewall:
[root@db-cluster-1 ~]# systemctl start mysqlrouter [root@db-cluster-1 ~]# systemctl status mysqlrouter ● mysqlrouter.service - MySQL Router Loaded: loaded (/usr/lib/systemd/system/mysqlrouter.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2022-02-18 12:13:22 UTC; 3s ago Main PID: 2272 (mysqlrouter) Status: "running" CGroup: /system.slice/mysqlrouter.service └─2272 /usr/bin/mysqlrouter Feb 18 12:13:22 db-cluster-1 systemd[1]: Starting MySQL Router... Feb 18 12:13:22 db-cluster-1 systemd[1]: Started MySQL Router. [root@db-cluster-1 ~]# systemctl enable mysqlrouter Created symlink /etc/systemd/system/multi-user.target.wants/mysqlrouter.service → /usr/lib/systemd/system/mysqlrouter.service. [root@db-cluster-1 ~]# firewall-cmd --permanent --zone=public --add-port=6446/tcp success [root@db-cluster-1 ~]# firewall-cmd --permanent --zone=public --add-port=6447/tcp success [root@db-cluster-1 ~]# firewall-cmd --permanent --zone=public --add-port=6448/tcp success [root@db-cluster-1 ~]# firewall-cmd --permanent --zone=public --add-port=6449/tcp success [root@db-cluster-1 ~]# firewall-cmd --reload success
Port 6446 is for read/write operations and port 6447 is only for read operations of the classic MySQL protocol; the other two ports are for the MySQL X protocol. Note that, by executing the above firewall rules, all IPs are allowed to contact the specified ports. Additional limitations may be applied – limit the IPs that may contact the MySQL Router with:
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="1.1.1.0/24" port protocol="tcp" port="6446" accept'
firewall-cmd --reload
Only the IPs within the range 1.1.1.0/24 can then contact port 6446.
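To confirm the routing actually works, connect through both classic ports and check which back-end answers and in which mode. A minimal sketch, assuming the router runs locally and reusing the clusteradmin account created earlier (it is allowed to connect from any host):

mysql -u clusteradmin -p -h 127.0.0.1 -P 6446 -e "SELECT @@hostname, @@read_only"
# expected: the primary answers, with read_only = 0
mysql -u clusteradmin -p -h 127.0.0.1 -P 6447 -e "SELECT @@hostname, @@read_only"
# expected: a secondary answers, with read_only = 1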
Repeat this last step on every server where MySQL Router is installed. It may be a good idea to run the router as a local service next to the application, so the application fails over properly when a MySQL cluster node fails.
Bonus – Troubleshooting
Errors like the following appear if MySQL cannot connect to port 33061, used for communication in the group replication, because of a firewall (see STEP 4.3) or SELinux misconfiguration (see STEP 4.2):
2022-02-17T09:23:28.266341Z 27 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL '__mysql_innodb_cluster_creating_cluster__' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. 2022-02-17T09:23:28.301711Z 27 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.' 2022-02-17T09:23:28.305522Z 31 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. 2022-02-17T09:23:28.451550Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error testing to the local group communication engine instance.' 2022-02-17T09:23:28.520774Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061' 2022-02-17T09:23:33.697625Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error testing to the local group communication engine instance.' 2022-02-17T09:23:33.775694Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061' 2022-02-17T09:23:38.885940Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error testing to the local group communication engine instance.' 2022-02-17T09:23:38.971057Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061' 2022-02-17T09:23:44.140848Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error testing to the local group communication engine instance.' 2022-02-17T09:23:44.211101Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061' 2022-02-17T09:23:49.318153Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error testing to the local group communication engine instance.' 2022-02-17T09:23:49.390162Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061' 2022-02-17T09:23:54.563024Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error testing to the local group communication engine instance.' 2022-02-17T09:23:54.632728Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061' 2022-02-17T09:23:59.744722Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error testing to the local group communication engine instance.' 2022-02-17T09:23:59.815798Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061' 2022-02-17T09:24:04.906002Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error testing to the local group communication engine instance.' 2022-02-17T09:24:04.997848Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. 
Local port: 33061' 2022-02-17T09:24:10.060337Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error testing to the local group communication engine instance.' 2022-02-17T09:24:10.128839Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061' 2022-02-17T09:24:15.256115Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error testing to the local group communication engine instance.' 2022-02-17T09:24:15.321630Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33061' 2022-02-17T09:24:28.351894Z 27 [ERROR] [MY-011640] [Repl] Plugin group_replication reported: 'Timeout on wait for view after joining group' 2022-02-17T09:24:28.351997Z 27 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
It is important to tune the default MySQL generic variables, because some of the cluster procedures need big buffers, such as max_allowed_packet. A bigger value is needed for the cloning process:
NOTE: db-cluster-2:3306 is being cloned from db-cluster-1:3306
ERROR: The clone process has failed: Clone Donor Error: 3957 : Clone needs max_allowed_packet value to be 2097152 or more. Current value is 1048576. (3862)
Cluster.addInstance: Clone Donor Error: 3957 : Clone needs max_allowed_packet value to be 2097152 or more. Current value is 1048576. (RuntimeError)
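If the donor is already running, max_allowed_packet is a dynamic variable, so it can also be raised without editing /etc/my.cnf and restarting the server; SET PERSIST in MySQL 8.0 keeps the value across restarts:

mysql> SET PERSIST max_allowed_packet = 256*1024*1024;
mysql> SHOW VARIABLES LIKE 'max_allowed_packet';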
If the process of adding an instance is interrupted, it leads to errors like the following:
[root@db-cluster-1 ~]# mysqlsh MySQL Shell 8.0.28 Copyright (c) 2016, 2022, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type '\help' or '\?' for help; '\quit' to exit. MySQL JS > \connect clusteradmin@db-cluster-1 Creating a session to 'clusteradmin@db-cluster-1' Fetching schema names for autocompletion... Press ^C to stop. Your MySQL connection id is 63 (X protocol) Server version: 8.0.28 MySQL Community Server - GPL No default schema selected; type \use <schema> to set one. MySQL db-cluster-1:33060+ ssl JS > var cluster = dba.getCluster() MySQL db-cluster-1:33060+ ssl JS > cluster.status() { "clusterName": "mycluster1", "defaultReplicaSet": { "name": "default", "primary": "db-cluster-1:3306", "ssl": "REQUIRED", "status": "OK_NO_TOLERANCE", "statusText": "Cluster is NOT tolerant to any failures.", "topology": { "db-cluster-1:3306": { "address": "db-cluster-1:3306", "memberRole": "PRIMARY", "mode": "R/W", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" }, "db-cluster-2:3306": { "address": "db-cluster-2:3306", "instanceErrors": [ "ERROR: GR Recovery channel receiver stopped with an error: error connecting to master 'mysql_innodb_cluster_364432059@db-cluster-1:3306' - retry-time: 60 retries: 1 message: Access denied for user 'mysql_innodb_cluster_364432059'@'192.168.0.1' (using password: YES) (1045) at 2022-02-18 10:44:06.692599", "NOTE: instance server_id is not registered in the metadata. Use cluster.rescan() to update the metadata." ], "memberRole": "SECONDARY", "mode": "R/O", "readReplicas": {}, "recovery": { "receiverError": "error connecting to master 'mysql_innodb_cluster_364432059@db-cluster-1:3306' - retry-time: 60 retries: 1 message: Access denied for user 'mysql_innodb_cluster_364432059'@'192.168.0.1' (using password: YES)", "receiverErrorNumber": 1045, "state": "CONNECTION_ERROR" }, "recoveryStatusText": "Distributed recovery in progress", "role": "HA", "status": "RECOVERING", "version": "8.0.28" } }, "topologyMode": "Single-Primary" }, "groupInformationSourceMember": "db-cluster-1:3306" } MySQL db-cluster-1:33060+ ssl JS > cluster.removeInstance("clusteradmin@db-cluster-2") ERROR: The instance db-cluster-2:3306 does not belong to the cluster. Cluster.removeInstance: Metadata for instance db-cluster-2:3306 not found (MYSQLSH 51104)
Removing the instance reports that it does not belong to the cluster, the cloning process cannot proceed because of a missing user, and so on.
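One possible way out of such a half-added state (a hedged sketch – the exact cleanup depends on what the interrupted clone left behind) is to refresh the metadata, force the removal and retry the add:

// inside mysqlsh, connected as clusteradmin to the primary
var cluster = dba.getCluster()
cluster.rescan()                                              // refresh the cluster metadata
cluster.removeInstance('db-cluster-2:3306', {force: true})    // drop the stale member even if unreachable
cluster.addInstance('clusteradmin@db-cluster-2:3306')         // retry with a clean clone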
Thank you very much for the comprehensive guide. From my understanding, this is a multi-master 3-node setup where the delta data keeps replicating to the other two nodes. Another piece of information I was looking for is how to point an app to this setup as a back-end in high-availability mode. I have a scenario where I can place these servers in a load balancer pool and use a virtual IP on top. If I do that, would an Active/Active configuration work? Right now we have a two-node multi-master setup load balanced on F5, and that is active/passive, and we are fed up with that, because we always have to keep 01 as primary and 02 as slave for reasons only my DB knows. The cut over to 02 is always manual, and the business doesn't like that dependency, as when one of the nodes goes down, switching involves everyone jumping on a call. In other words, can I take your setup and implement it in a fire-and-forget way on the HA side?
What we have done in our organization is to have a local MySQL Router on each application server, and the application software connects via 127.0.0.1 to ports 6446 and 6447. Here in the article the MySQL Routers are installed on every MySQL InnoDB Cluster node, but in fact MySQL Router may be installed anywhere, and there is no limit on how many MySQL Routers you may have (I think it is mentioned somewhere in the MySQL Router documentation). So if you install a MySQL Router on each node and a virtual IP on top of them, the Active/Active configuration should work flawlessly. Our setup is slightly different, but it has been working for a whole year and it has survived several node failures, switching properly every time without any manual intervention.