Configuring Percona XtraDB Cluster on CentOS

In this tutorial you will learn how to install and configure a three-node Percona XtraDB Cluster on CentOS 6.8 servers, using the packages from the Percona repositories. The following nodes are used throughout:

  • Node 1: host name percona1, IP address 192.168.70.71
  • Node 2: host name percona2, IP address 192.168.70.72
  • Node 3: host name percona3, IP address 192.168.70.73

Prerequisites

The following things are required for this tutorial:

  • All three nodes with CentOS 6.8 installed.
  • The firewall on all nodes configured to allow connections on ports 3306, 4444, 4567 and 4568.
  • SELinux on all nodes disabled (example commands for these last two points follow below).
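The sketch below shows one way to satisfy the firewall and SELinux prerequisites, assuming the stock iptables service on CentOS 6; adapt it to your own firewall tooling and repeat it on every node.

# Allow the PXC ports: 3306 (MySQL), 4444 (SST), 4567 (group communication), 4568 (IST)
[root@percona1 ~]# iptables -I INPUT -p tcp -m multiport --dports 3306,4444,4567,4568 -j ACCEPT
[root@percona1 ~]# service iptables save
# Turn SELinux off for the running session and disable it permanently (applies after reboot)
[root@percona1 ~]# setenforce 0
[root@percona1 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config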

Step 1. Installing PXC

Install Percona XtraDB Cluster on all three nodes, as sketched below.
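A minimal sketch, assuming PXC 5.6 and the current percona-release repository package (both the URL and the package name are assumptions; check the Percona download pages for your version):

# Install the Percona yum repository definition, then the PXC packages
[root@percona1 ~]# yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
[root@percona1 ~]# yum install Percona-XtraDB-Cluster-56

Repeat the installation on percona2 and percona3.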

Step 2. Configuring the first node

Individual nodes must be configured before you can bootstrap the cluster.

  1. Make sure the configuration file ‘/etc/my.cnf’ on the first node (percona1) contains the following:
[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node 1 address
wsrep_node_address=192.168.70.71
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"
  2. Start the first node with the following command:
[root@percona1 ~]# /etc/init.d/mysql bootstrap-pxc

Note that if you are using CentOS 7, the bootstrap service has to be used instead.

[root@percona1 ~]# systemctl start mysql@bootstrap.service

This command starts the node with the ‘wsrep_cluster_address’ variable overridden to ‘gcomm://’, which bootstraps the cluster; if the node is restarted later, you will not need to change the configuration file.
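Equivalently, on setups where the bootstrap-pxc action is unavailable, a common manual approach (a sketch only, not required if the command above works) is to point the node at an empty cluster address in ‘/etc/my.cnf’, start MySQL normally, and restore the full address list afterwards:

# Temporary bootstrap value in /etc/my.cnf; revert to the full IP list
# once the other nodes have joined.
wsrep_cluster_address=gcomm://
[root@percona1 ~]# /etc/init.d/mysql start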

  3. After starting the first node, you can check the cluster status at any time with the following command:
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |
...
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
...
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
...
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
40 rows in set (0.01 sec)

This output shows that the cluster has been successfully bootstrapped.
Note that it is not recommended to leave the root account with an empty password. You can change the password with the commands shown below.

mysql@percona1> UPDATE mysql.user SET password=PASSWORD("Passw0rd") where user='root';
mysql@percona1> FLUSH PRIVILEGES;
To initiate a State Snapshot Transfer with XtraBackup, create a new user with the required privileges:
mysql@percona1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cret';
mysql@percona1> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql@percona1> FLUSH PRIVILEGES;

Note that the MySQL root account can also be used to perform SST, but it is more secure to use a different (non-root) account for this.
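If you want to sanity-check the SST credentials before adding more nodes, one option is a throwaway local backup with XtraBackup, which is the tool the xtrabackup-v2 method invokes (the scratch directory ‘/tmp/sst-test’ is an arbitrary choice for this test):

[root@percona1 ~]# innobackupex --user=sstuser --password=s3cret /tmp/sst-test/

A run that finishes with ‘completed OK!’ confirms the user and its privileges are sufficient.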

Step 3. Configuring the second node

  1. Make sure the configuration file ‘/etc/my.cnf’ on the second node (percona2) contains the following:
[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node 2 address
wsrep_node_address=192.168.70.72
# Cluster name
wsrep_cluster_name=my_centos_cluster
# SST method
wsrep_sst_method=xtrabackup-v2
#Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"
  2. Start the second node with the following command:
[root@percona2 ~]# /etc/init.d/mysql start
  3. After the server starts, it will receive SST automatically, which means the second node no longer has an empty root password. To connect to the cluster and check the status, use the root password you set on the first node. Cluster status can be checked on both nodes; the output below is an example from the second node (percona2).
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |
...
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
...
| wsrep_cluster_size         | 2                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
...
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
40 rows in set (0.01 sec)

The output shows that the new node has been successfully added to the cluster.
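If you only want to verify that the node joined, rather than read the full wsrep output, you can query the single status variable from the shell, using the root password set on the first node:

[root@percona2 ~]# mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 2     |
+--------------------+-------+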

Step 4. Configuring the third node

  1. Make sure the MySQL configuration file ‘/etc/my.cnf’ on the third node (percona3) contains the following:
[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #3 address
wsrep_node_address=192.168.70.73
# Cluster name
wsrep_cluster_name=my_centos_cluster
# SST method
wsrep_sst_method=xtrabackup-v2
#Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"
  2. Start the third node with the following command:
[root@percona3 ~]# /etc/init.d/mysql start
  3. After the third node has started, it will receive SST automatically. Cluster status can now be checked on all three nodes; the output below is an example from the third node (percona3).
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |
...
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
...
| wsrep_cluster_size         | 3                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
...
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
40 rows in set (0.01 sec)

The results will confirm that the third node has successfully joined the cluster.

Step 5. Testing replication

To test replication, you will create a new database on the second node, create a table for that database on the third node, and add a couple of records to the table on the first node.

  1. First, create a new database on the second node:
mysql@percona2> CREATE DATABASE percona;
Query OK, 1 row affected (0.01 sec)
  2. Then, create a table on the third node:
mysql@percona3> USE percona;
Database changed
mysql@percona3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
Query OK, 0 rows affected (0.05 sec)
  3. Insert records on the first node:
mysql@percona1> INSERT INTO percona.example VALUES (1, 'percona1');
Query OK, 1 row affected (0.02 sec)
  4. Retrieve all the rows from the table on the second node:
mysql@percona2> SELECT * FROM percona.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       1 | percona1  |
+---------+-----------+
1 row in set (0.00 sec)
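Since Galera replication is virtually synchronous, the same query should return the identical row on every node once they are synced; for example, on the third node:

mysql@percona3> SELECT * FROM percona.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       1 | percona1  |
+---------+-----------+
1 row in set (0.00 sec)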

Conclusion

This simple process verifies that the nodes in your cluster are working as intended and staying synchronized.
