This article explains how to install and configure a MySQL cluster based on two servers, so that MySQL can keep running even when either server fails or goes down.
Notice! Although this is a MySQL cluster based on two servers, a third server is required as a management node. In theory the management node can be shut down once the cluster has finished starting, but it is not recommended to do so. It is theoretically possible to build a MySQL cluster with only two servers, but with such an architecture the cluster cannot keep working once one server goes down, which defeats the purpose of a cluster. For this reason a third server is needed to run as the management node. In addition, many readers may not have three physical servers available, so the experiment can also be conducted in VMware or another virtual machine.
The following assumes these three servers:
Server1: mysql1.vmtest.net 192.168.0.1
Server2: mysql2.vmtest.net 192.168.0.2
Server3: mysql3.vmtest.net 192.168.0.3
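If these host names are not resolvable through DNS in your environment, you may want to map them in /etc/hosts on each of the three machines first. A minimal sketch matching the addresses above (the short aliases are an assumption for convenience; the names themselves are just this article's examples):
192.168.0.1 mysql1.vmtest.net mysql1
192.168.0.2 mysql2.vmtest.net mysql2
192.168.0.3 mysql3.vmtest.net mysql3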
Server1 and Server2 serve as the servers that actually make up the MySQL cluster. The requirements for Server3 as the management node are lower: only small adjustments to the Server3 system are needed and there is no need to install MySQL on it. Server3 can be a machine with a lower specification and can run other services at the same time.
2. Install MySQL on Server1 and Server2
=================================
Download mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz. Note: it must be the max version of MySQL; the Standard version does not support clustering! The following steps need to be performed once on each of Server1 and Server2.
# mv mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz /usr/local/
# cd /usr/local/
# groupadd mysql
# useradd -g mysql mysql
# tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# rm -f mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# mv mysql-max-4.1.9-pc-linux-gnu-i686 mysql
# cd mysql
# scripts/mysql_install_db --user=mysql
# chown -R root .
# chown -R mysql data
# chgrp -R mysql .
# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
# chmod +x /etc/rc.d/init.d/mysqld
# chkconfig --add mysqld
Do not start MySQL at this time!
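As an optional sanity check (assuming a Red Hat style init system, which the chkconfig and /etc/rc.d paths above imply), you can confirm that the service is registered for startup without actually running it:
# chkconfig --list mysqld
# ps aux | grep mysqld
The first command should list the runlevels mysqld is enabled for; the second should show no running mysqld process yet (apart from the grep itself).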
3. Install and configure the management node server (Server3)
=====================================
As the management node server, Server3 only needs two files: ndb_mgm and ndb_mgmd. Download mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz as before.
# mkdir /usr/src/mysql-mgm
# cd /usr/src/mysql-mgm
# tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# cd mysql-max-4.1.9-pc-linux-gnu-i686
# mv bin/ndb_mgm .
# mv bin/ndb_mgmd .
# chmod +x ndb_mg*
# mv ndb_mg* /usr/bin/
# cd
# rm -rf /usr/src/mysql-mgm
Now start creating the configuration file for this management node server:
# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# vi config.ini
Add the following content to config.ini:
[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Server
[NDB_MGMD]
HostName=192.168.0.3 #The IP address of the management node server Server3
# Storage Engines
[NDBD]
HostName=192.168.0.1 #The IP address of MySQL cluster Server1
DataDir= /var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.2 #The IP address of MySQL cluster Server2
DataDir=/var/lib/mysql-cluster
# The two [MYSQLD] sections below could list the host names of Server1 and Server2.
# However, to make it faster to replace a failed server in the cluster, it is recommended to leave them blank; otherwise this configuration must be changed whenever a server is replaced.
[MYSQLD]
[MYSQLD]
After saving and exiting, start the management node server on Server3 (run the command from /var/lib/mysql-cluster so that ndb_mgmd can find config.ini in the current directory):
# ndb_mgmd
Note that ndb_mgmd starts only the management node service, not the management client, so you will not see any startup output.
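To confirm that the management daemon actually came up before moving on, a quick process check is enough:
# ps aux | grep ndb_mgmd
If ndb_mgmd is not listed, check that config.ini is in the directory you started it from.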
4. Configure the cluster servers and start MySQL
=============================
The following changes need to be made on both Server1 and Server2:
# vi /etc/my.cnf
[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3 #Server3's IP address
[mysql_cluster]
ndb-connectstring=192.168.0.3 #Server3's IP address
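For reference, the connect string can also name the management server's port explicitly; 1186 is the default, as the management client output in the next section shows. This more explicit form should be equivalent:
ndb-connectstring=192.168.0.3:1186 #same host, with the default management port spelled out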
After saving and exiting, create the data directory and start MySQL:
# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# /usr/local/mysql/bin/ndbd --initial
# /etc/rc.d/init.d/mysqld start
You can add /usr/local/mysql/bin/ndbd to /etc/rc.local so that it starts at boot. Note: you only need the --initial parameter the first time you start ndbd, or after making changes to config.ini on Server3!
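If you want ndbd started at boot as suggested, one minimal way (assuming your distribution executes /etc/rc.local at the end of boot, as Red Hat style systems do) is:
# echo '/usr/local/mysql/bin/ndbd' >> /etc/rc.local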
5. Check working status
================
Return to the management node server Server3 and start the management client:
# /usr/bin/ndb_mgm
Type the show command to view the current working status (the following is an example of the status output):
[root@mysql3 root]# /usr/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
--------------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.1 (Version: 4.1.9, Nodegroup: 0, Master)
id=3 @192.168.0.2 (Version: 4.1.9, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.3 (Version: 4.1.9)
[mysqld(API)] 2 node(s)
id=4 (Version: 4.1.9)
id=5 (Version: 4.1.9)
ndb_mgm>
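You can leave the management client with the quit command. If your ndb_mgm build supports the -e option (later releases do; check ndb_mgm --help on your version), the same status check can also be run non-interactively, which is handy for scripting:
# /usr/bin/ndb_mgm -e show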
If everything above looks fine, you can now test MySQL. Note that this document does not set a root password for MySQL; it is recommended that you set the MySQL root password on Server1 and Server2 yourself. On Server1:
# /usr/local/mysql/bin/mysql -u root -p
> use test;
> CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
> INSERT INTO ctest () VALUES (1);
> SELECT * FROM ctest;
You should see 1 row returned (the value 1). If this works, switch to Server2 and repeat the same test to observe the effect. If that succeeds, execute an INSERT on Server2 and then switch back to Server1 to check whether the new row is visible there as well. If there are no problems, congratulations on your success!
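For concreteness, the round trip described above might look like this on Server2 (same client path as on Server1; the second row value is just an example):
# /usr/local/mysql/bin/mysql -u root -p
> use test;
> SELECT * FROM ctest; -- should return the row inserted on Server1
> INSERT INTO ctest () VALUES (2);
Switching back to Server1, SELECT * FROM ctest; should now return both rows.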
6. Destructive testing
==============
Unplug the network cable of Server1 or Server2 and observe whether the other cluster server keeps working normally (you can test with a SELECT query). After the test, plug the cable back in. If you have no access to the physical server and therefore cannot unplug a network cable, you can test like this instead. On Server1 or Server2:
# ps aux | grep ndbd
You will see all the ndbd process information:
root 5578 0.0 0.3 6220 1964 ? S 03:14 0:00 ndbd
root 5579 0.0 20.4 492072 102828 ? R 03:14 0:04 ndbd
root 23532 0.0 0.1 3680 684 pts/1 S 07:59 0:00 grep ndbd
Then kill the ndbd processes to take this cluster server down:
# kill -9 5578 5579
Afterwards, use a SELECT query on the other cluster server to test, and execute the show command in the management client on the management node server to see the status of the failed server. Once the test is complete, you only need to restart the ndbd process on the failed server:
# ndbd
Attention! As mentioned above, there is no need to add the --initial parameter this time! At this point, the MySQL cluster configuration is complete!
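As a final check, running show again in the management client on Server3 should show the restarted node reconnected under [ndbd(NDB)]:
ndb_mgm> show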