This article describes how to configure a three-node MySQL 5 database cluster: two storage nodes and one
management node.
In addition, we set up a high-availability load balancer consisting of two nodes, using the Ultra Monkey package,
which provides "heartbeat" (used to check whether the other node is still alive) and
"ldirectord" (which distributes requests to the MySQL cluster nodes).
In this article each node runs Debian Sarge Linux; the installation on other Linux distributions may differ
slightly. We use MySQL version 5.0.19. If you do not want to use MySQL 5, you can also use MySQL 4.1,
although I have not tested that. This article is a practical guide; it does not go into much theory. You can find plenty of theory about clustering online.
1. Servers
I use the following Debian servers, all on the same network segment (in this example: 192.168.0.x):
sql1.test.com: 192.168.0.101 MySQL Cluster Node 1
sql2.test.com: 192.168.0.102 MySQL Cluster Node 2
loadb1.test.com: 192.168.0.103 Load Balancer 1 / MySQL cluster management server
loadb2.test.com: 192.168.0.104 Load Balancer 2
In addition, we need a virtual IP address: 192.168.0.105. It will be used for load balancing the MySQL cluster so that
applications can access the cluster through a single, unified IP address.
Even though we only want two nodes in the MySQL cluster, we still need a third node, the MySQL cluster management server,
for one main reason: if one of the two cluster nodes goes down while the management server is not running,
the data on the two cluster nodes can become inconsistent ("split brain"). We also need it to configure the MySQL cluster.
So our installation generally requires five machines:
2 MySQL Cluster nodes + 1 Cluster Management Server + 2 Load Balancers = 5
Because the cluster management server uses very few resources and would otherwise sit mostly idle, we can put our first
load balancer on the same machine, which saves us one server; ideally we only need four machines.
2. Configuring the MySQL Cluster Management Server
First we need to download MySQL 5.0.19 and install the cluster management server (ndb_mgmd) and the cluster management client (ndb_mgm,
which can be used to monitor the cluster's operation). The following steps are carried out on loadb1.test.com (192.168.0.103):
loadb1.test.com:
mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-max-5.0.19-linux-i686-glibc23.tar.gz/from/http://www.mirrorservice.org/sites/ftp.mysql.com/
tar xvfz mysql-max-5.0.19-linux-i686-glibc23.tar.gz
cd mysql-max-5.0.19-linux-i686-glibc23
mv bin/ndb_mgm /usr/bin
mv bin/ndb_mgmd /usr/bin
chmod 755 /usr/bin/ndb_mg*
cd /usr/src
rm -rf /usr/src/mysql-mgm
Next, we must create the cluster configuration file, /var/lib/mysql-cluster/config.ini:
loadb1.test.com:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi config.ini
----------
[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=192.168.0.103
# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.0.101
DataDir=/var/lib/mysql-cluster
[NDBD]
# IP address of the second storage node
HostName=192.168.0.102
DataDir=/var/lib/mysql-cluster
# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]
----------
In a real deployment, replace the IP addresses in this file with your own.
Then we start the cluster management server:
loadb1.test.com:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
We also want the management server to start automatically at boot, so we create a very simple init
script and the corresponding startup links:
loadb1.test.com:
echo 'ndb_mgmd -f /var/lib/mysql-cluster/config.ini' > /etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd
update-rc.d ndb_mgmd defaults
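To confirm that the management daemon is actually running and listening on its default port 1186, a quick check (assuming the net-tools package, which is installed by default on Sarge) is:
loadb1.test.com:
netstat -tlnp | grep 1186
You should see ndb_mgmd listed as the process bound to that port.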
3. Configuring the MySQL Cluster Nodes (Storage Nodes)
Now we will install mysql-max-5.0.19 on sql1.test.com and sql2.test.com respectively:
sql1.test.com / sql2.test.com:
groupadd mysql
useradd -g mysql mysql
cd /usr/local/
wget http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-max-5.0.19-linux-i686-glibc23.tar.gz/from/http://www.mirrorservice.org/sites/ftp.mysql.com/
tar xvfz mysql-max-5.0.19-linux-i686-glibc23.tar.gz
ln -s mysql-max-5.0.19-linux-i686-glibc23 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root:mysql .
chown -R mysql data
cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server
update-rc.d mysql.server defaults
cd /usr/local/mysql/bin
mv * /usr/bin
cd ../
rm -fr /usr/local/mysql/bin
ln -s /usr/bin /usr/local/mysql/bin
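As a quick sanity check that the binaries ended up on the PATH, you can run (the exact version strings depend on the package you downloaded):
sql1.test.com / sql2.test.com:
mysql --version
which ndbd
mysql should report a 5.0.19 distribution, and which should print /usr/bin/ndbd.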
Then we create the MySQL configuration file /etc/my.cnf on each node:
vi /etc/my.cnf
--
[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=192.168.0.103
[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=192.168.0.103
--
Make sure you enter the correct IP address of the cluster management server here.
Next we create the data directory and start the MySQL server on each cluster node:
sql1.test.com / sql2.test.com:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
ndbd --initial
/etc/init.d/mysql.server start
(Please remember: only use ndbd --initial the very first time you start ndbd, or after /var/lib/mysql-cluster/config.ini
on loadb1.test.com has changed.)
Now, it’s time to set the password for the MySQL root account:
sql1.test.com / sql2.test.com:
mysqladmin -u root password yourrootsqlpassword
We want the cluster nodes to start at boot, so we create an ndbd init script and the corresponding
system startup links:
sql1.test.com / sql2.test.com:
echo 'ndbd' > /etc/init.d/ndbd
chmod 755 /etc/init.d/ndbd
update-rc.d ndbd defaults
4. Testing the MySQL Cluster
Our MySQL Cluster configuration is complete, now it’s time to test it. On the cluster management server
(loadb1.test.com), run the cluster management client ndb_mgm to check whether the cluster nodes are connected:
loadb1.test.com:
ndb_mgm
You should see something like this:
-- NDB Cluster -- Management Client --
ndb_mgm>
Enter
show;
at the command line. The output should look like this:
ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.101 (Version: 5.0.19, Nodegroup: 0, Master)
id=3 @192.168.0.102 (Version: 5.0.19, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.103 (Version: 5.0.19)
[mysqld(API)] 2 node(s)
id=4 @192.168.0.101 (Version: 5.0.19)
id=5 @192.168.0.102 (Version: 5.0.19)
ndb_mgm>
If you see that all of your nodes are connected, then everything is working well!
Enter
quit;
to exit the ndb_mgm client console.
Now we create a test database on the sql1.test.com node, create a test table, and
fill in some test data:
mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
CREATE TABLE testtable (i INT) ENGINE=NDBCLUSTER;
INSERT INTO testtable () VALUES (1);
SELECT * FROM testtable;
quit;
(Note the CREATE statement: every table we want to cluster must use ENGINE=NDBCLUSTER!
If you use any other engine, clustering will not work!)
The result of SELECT should be:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.03 sec)
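As the note above says, only NDBCLUSTER tables are clustered. If you already have tables that were created with another storage engine, you can in principle convert them with an ALTER TABLE statement (shown here for a hypothetical table called mytable; keep in mind that NDB in this version has its own limitations, e.g. no foreign key support):
ALTER TABLE mytable ENGINE=NDBCLUSTER;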
Now we create the same database on the sql2.test.com node (yes, we still have to create the database, but afterwards
testtable and its data will already be available on sql2.test.com, because testtable uses ENGINE=NDBCLUSTER):
sql2.test.com:
mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
SELECT * FROM testtable;
The result of SELECT should be the same as the result of sql1.test.com above:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.04 sec)
We see that the data has been replicated from the sql1.test.com node to the sql2.test.com node. Now we
insert another row into the testtable:
sql2.test.com:
INSERT INTO testtable () VALUES (2);
quit;
Now let us return to the sql1.test.com node and check whether we can see the newly inserted row:
sql1.test.com:
mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;
You should see output like this:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.05 sec)
So every cluster node has the same data!
Now let's see what happens if we stop node 1 (sql1.test.com). Run:
sql1.test.com:
killall ndbd
and check
ps aux | grep ndbd | grep -iv grep
to see that all ndbd processes have ended. If you still see ndbd processes, run
killall ndbd
again until all ndbd processes are terminated.
Now check the cluster status on our management server (loadb1.test.com):
loadb1.test.com:
ndb_mgm
Type
show;
on the ndb_mgm console. You should see this:
ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from 192.168.0.101)
id=3 @192.168.0.102 (Version: 5.0.19, Nodegroup: 0, Master)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.103 (Version: 5.0.19)
[mysqld(API)] 2 node(s)
id=4 @192.168.0.101 (Version: 5.0.19)
id=5 @192.168.0.102 (Version: 5.0.19)
ndb_mgm>
As you can see, the sql1.test.com node is no longer connected.
Enter:
quit;
to exit the ndb_mgm console.
Let us check the sql2.test.com node:
sql2.test.com:
mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;
The result of the SELECT query should still be:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.17 sec)
OK, all tests went fine, so let us start the sql1.test.com node again:
sql1.test.com:
ndbd
5. How to Restart the Cluster
Now let us imagine that you need to restart the MySQL cluster, for example because /var/lib/mysql-cluster/config.ini
on loadb1.test.com has changed, or for some other reason. To do this, you use
the ndb_mgm cluster management client on loadb1.test.com:
loadb1.test.com:
ndb_mgm
At the ndb_mgm console, type
shutdown;
You will see a message like this:
ndb_mgm> shutdown;
Node 3: Cluster shutdown initiated
Node 2: Node shutdown completed.
2 NDB Cluster node(s) have shutdown.
NDB Cluster management server shutdown.
ndb_mgm>
This means that the cluster nodes sql1.test.com and sql2.test.com, as well as the cluster management server, have all been shut down.
Run
quit;
to exit the ndb_mgm console.
To start the cluster management server, do this on loadb1.test.com:
loadb1.test.com:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
and then run on sql1.test.com and sql2.test.com:
sql1.test.com / sql2.test.com:
ndbd
Or, if you have just modified /var/lib/mysql-cluster/config.ini on loadb1.test.com:
ndbd --initial
After this, you can check on loadb1.test.com whether the cluster has restarted:
loadb1.test.com:
ndb_mgm
In the ndb_mgm console, enter
show;
to view the current status of the cluster. It may take a moment for all nodes to report being connected after restarting.
Enter:
quit;
to exit the ndb_mgm console.
6. Configuring Load Balancing
Our MySQL cluster is now complete and you could start using it right away.
However, we do not yet have a single, separate IP address for accessing the cluster, which means you would have to configure
part of your applications to use MySQL cluster node 1 (sql1.test.com) and another part to use node 2 (sql2.test.com).
Of course, all applications could simply use one node, but if you do not split the load between the cluster nodes,
what is the point of having a cluster? Another question is: what happens if one cluster node goes down?
Applications configured to use that node would stop working entirely.
The solution is to put a load balancer in front of the MySQL cluster to balance the load between the cluster nodes.
The load balancer provides a shared virtual IP address, and all your applications use this virtual IP address to
access the cluster. If one of the nodes goes down, your applications will still work, because the load balancer redirects
requests to the other, working node.
Now the load balancer itself becomes a single point of failure. What if the load balancer breaks? Therefore we will
configure two load balancers in an active/passive setup, which means we have one active load balancer and a second one in
hot standby that becomes active if the active one fails. Each load balancer uses heartbeat to check
whether the other is still alive, and each also runs ldirectord, which is responsible for distributing traffic to the cluster nodes.
Both heartbeat and ldirectord are included in the Ultra Monkey installation package we will install.
It is very important that the system kernel of loadb1.test.com and loadb2.test.com nodes supports IPVS (IP Virtual Server).
IPVS performs load balancing at the transport layer of the Linux kernel.
6.1 Install Ultra Monkey
OK, now let’s start: First we enable IPVS on the loadb1.test.com and loadb2.test.com nodes:
loadb1.test.com / loadb2.test.com:
modprobe ip_vs_dh
modprobe ip_vs_ftp
modprobe ip_vs
modprobe ip_vs_lblc
modprobe ip_vs_lblcr
modprobe ip_vs_lc
modprobe ip_vs_nq
modprobe ip_vs_rr
modprobe ip_vs_sed
modprobe ip_vs_sh
modprobe ip_vs_wlc
modprobe ip_vs_wrr
To enable IPVS modules at startup, we enumerate the modules in /etc/modules:
loadb1.test.com / loadb2.test.com:
vi /etc/modules
ip_vs_dh
ip_vs_ftp
ip_vs
ip_vs_lblc
ip_vs_lblcr
ip_vs_lc
ip_vs_nq
ip_vs_rr
ip_vs_sed
ip_vs_sh
ip_vs_wlc
ip_vs_wrr
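To verify that IPVS is actually available, you can check for /proc/net/ip_vs, which only exists once the ip_vs module has been loaded:
loadb1.test.com / loadb2.test.com:
cat /proc/net/ip_vs
lsmod | grep ip_vs
If cat prints an (empty) virtual server table instead of an error, IPVS support is working.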
Now we edit /etc/apt/sources.list, add the Ultra Monkey repositories, and then install Ultra Monkey:
loadb1.test.com / loadb2.test.com:
vi /etc/apt/sources.list
deb http://www.ultramonkey.org/download/3/ sarge main
deb-src http://www.ultramonkey.org/download/3 sarge main
apt-get update
apt-get install ultramonkey libdbi-perl libdbd-mysql-perl libmysqlclient14-dev
Ultra Monkey is now installed. If you saw the following warning during installation:
| libsensors3 not functional
|
| It appears that your kernel is not compiled with sensors support. As a
| result, libsensors3 will not be functional on your system.
|
| If you want to enable it, have a look at "I2C Hardware Sensors Chip
| support" in your kernel configuration.
You can ignore it.
Answer the following question:
Do you want to automatically load IPVS rules on boot?
<-- No
Select a daemon method.
<-- none
The libdbd-mysql-perl package we just installed does not work with MySQL 5 (and our MySQL cluster uses MySQL 5),
so we install the newest DBD::mysql Perl package:
loadb1.test.com / loadb2.test.com:
cd /tmp
wget
tar xvfz DBD-mysql-3.0002.tar.gz
cd DBD-mysql-3.0002
perl Makefile.PL
make
make install
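You can quickly verify that Perl now picks up the new DBD::mysql module (this just prints the installed module version, which should match the tarball you downloaded):
perl -MDBD::mysql -e 'print "$DBD::mysql::VERSION\n"'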
We must also enable packet forwarding:
loadb1.test.com / loadb2.test.com:
vi /etc/sysctl.conf
# Enables packet forwarding
net.ipv4.ip_forward = 1
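The setting in /etc/sysctl.conf only takes effect at boot; to apply it immediately without rebooting, run:
loadb1.test.com / loadb2.test.com:
sysctl -p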
6.2 Configuring heartbeat
We configure heartbeat by creating three files (the files of loadb1.test.com and loadb2.test.com must be exactly the same):
loadb1.test.com / loadb2.test.com:
vi /etc/ha.d/ha.cf
logfacility local0
bcast eth0
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node loadb1
node loadb2
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
Remember: the node names (loadb1 and loadb2 in this example) must match the output of
uname -n
on the respective machines. Apart from this, we do not need to make any other changes to the file.
vi /etc/ha.d/haresources
loadb1
ldirectord::ldirectord.cf
LVSSyncDaemonSwap::master
IPaddr2::192.168.0.105/24/eth0/192.168.0.255
Here you must list one of the load balancer node names (in this case: loadb1) together with
the virtual IP address (192.168.0.105) and the broadcast address (192.168.0.255).
vi /etc/ha.d/authkeys
auth 3
3 md5 somerandomstring
somerandomstring is the password that the two heartbeat daemons on loadb1 and loadb2 use to authenticate each other.
Choose your own string here. You can pick from three authentication mechanisms; I usually use md5.
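If you want a quick way to generate a random string for this purpose, one option that uses only standard tools is:
dd if=/dev/urandom count=1 bs=64 2>/dev/null | md5sum | awk '{ print $1 }'
Copy the resulting string into the authkeys file on both load balancers.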
/etc/ha.d/authkeys must be readable only by root, so we do this:
loadb1.test.com / loadb2.test.com:
chmod 600 /etc/ha.d/authkeys
6.3 Configuring ldirectord
Now we create the configuration file for ldirectord, the load balancer:
loadb1.test.com / loadb2.test.com:
vi /etc/ha.d/ldirectord.cf
# Global Directives
checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes
virtual = 192.168.0.105:3306
        service = mysql
        real = 192.168.0.101:3306 gate
        real = 192.168.0.102:3306 gate
        checktype = negotiate
        login = "ldirector"
        passwd = "ldirectorpassword"
        database = "ldirectordb"
        request = "SELECT * FROM connectioncheck"
        scheduler = wrr
Please fill in the correct virtual IP address (192.168.0.105) and the correct IP address of the MySQL cluster node (192.168.0.101 and 192.168.0.102).
3306 is the default port when MySQL is running. We also specified a MySQL user (ldirector) and password (ldirectorpassword), a database (ldirectordb) and
a SQL query. ldirectord uses this information to test the MySQL cluster nodes and check whether they are available. Next we will use the ldirector user to
create the ldirectordb database.
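Once the ldirector user and the ldirectordb database exist (we create them in the next step), you can simulate the check that ldirectord performs by running the same query by hand from a load balancer, for example against the first cluster node:
mysql -h 192.168.0.101 -u ldirector -p ldirectordb -e "SELECT * FROM connectioncheck"
If that returns a row, ldirectord's negotiate check will succeed for that node as well.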
Now we create the necessary system startup links for heartbeat and remove those of ldirectord (because ldirectord will be started by heartbeat):
loadb1.test.com / loadb2.test.com:
update-rc.d -f heartbeat remove
update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
update-rc.d -f ldirectord remove
6.4 Creating the ldirectordb Database
Next we create the ldirectordb database on the MySQL cluster nodes sql1.test.com and sql2.test.com. This database will be used by our load balancers to check
the availability of MySQL cluster nodes.
sql1.test.com:
mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
USE ldirectordb;
CREATE TABLE connectioncheck (i INT) ENGINE=NDBCLUSTER;
INSERT INTO connectioncheck () VALUES (1);
quit;
sql2.test.com:
mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
quit;
6.5 Preparing the MySQL Cluster Nodes for Load Balancing
Finally we must configure the MySQL Cluster nodes sql1.test.com and sql2.test.com to accept the virtual IP address 192.168.0.105.
sql1.test.com / sql2.test.com:
apt-get install iproute
Add the following content in /etc/sysctl.conf:
vi /etc/sysctl.conf
# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1
# When an arp request is received on eth0, only respond if that address is
# configured on eth0. In particular, do not respond if the address is
# configured on lo
net.ipv4.conf.eth0.arp_ignore = 1
# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_ignore = 1
# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2
# When making an ARP request sent through eth0, always use an address that
# is configured on eth0 as the source address of the ARP request. If this
# is not set, and packets are being sent out eth0 for an address that is on
# lo, and an arp request is required, then the address on lo will be used.
# As the source IP address of arp requests is entered into the ARP cache on
# the destination, it has the effect of announcing this address. This is
# not desirable in this case as addresses on lo on the real-servers should
# be announced only by the linux-director.
net.ipv4.conf.eth0.arp_announce = 2
# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_announce = 2
sysctl -p
Then add the following to /etc/network/interfaces:
sql1.test.com / sql2.test.com:
vi /etc/network/interfaces
auto lo:0
iface lo:0 inet static
address 192.168.0.105
netmask 255.255.255.255
pre-up sysctl -p > /dev/null
ifup lo:0
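You can verify that the virtual IP address is now configured on the loopback device:
sql1.test.com / sql2.test.com:
ip addr sh lo
The output should include 192.168.0.105 on the lo:0 interface.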
7. Starting the Load Balancers and Testing
Now we start the two load balancers:
loadb1.test.com / loadb2.test.com:
/etc/init.d/ldirectord stop
/etc/init.d/heartbeat start
If you do not see any errors, restart each load balancer now:
loadb1.test.com / loadb2.test.com:
shutdown -r now
After the restart we can check whether both load balancers work as expected:
loadb1.test.com / loadb2.test.com:
ip addr sh eth0
The active load balancer should list the virtual IP address (192.168.0.105):
2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:16:3e:45:fc:f8 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.103/24 brd 192.168.0.255 scope global eth0
inet 192.168.0.105/24 brd 192.168.0.255 scope global secondary eth0
The passive (hot standby) load balancer should show something like this:
2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:16:3e:16:c1:4e brd ff:ff:ff:ff:ff:ff
inet 192.168.0.104/24 brd 192.168.0.255 scope global eth0
loadb1.test.com / loadb2.test.com:
ldirectord ldirectord.cf status
Output on active load balancer:
ldirectord for /etc/ha.d/ldirectord.cf is running with pid: 1603
Output on passive load balancer:
ldirectord is stopped for /etc/ha.d/ldirectord.cf
loadb1.test.com / loadb2.test.com:
ipvsadm -L -n
Output on the active load balancer:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.0.105:3306 wrr
-> 192.168.0.101:3306 Route 1 0 0
-> 192.168.0.102:3306 Route 1 0 0
Output on the passive load balancer:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
loadb1.test.com / loadb2.test.com:
/etc/ha.d/resource.d/LVSSyncDaemonSwap master status
Output on active load balancer:
master running
(ipvs_syncmaster pid: 1766)
Output on passive load balancer:
master stopped
(ipvs_syncbackup pid: 1440)
If your tests are all normal, you can now access the MySQL database from other servers on the same network (192.168.0.x) using the virtual IP address 192.168.0.105:
mysql -h 192.168.0.105 -u ldirector -p
(Please remember: your MySQL client must be at least version 4.1; older versions will not work with MySQL 5.)
You can now shut down one of the MySQL cluster nodes to test; you should still be able to connect to the MySQL database.
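For example, a quick end-to-end check against the check table we created earlier, using the ldirector account:
mysql -h 192.168.0.105 -u ldirector -p ldirectordb -e "SELECT * FROM connectioncheck"
As long as at least one cluster node is up, this query should return a row through the virtual IP address.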
8. Notes
When running a MySQL Cluster, here are some important things to remember:
- All data is stored in memory! Therefore you need plenty of RAM on your cluster nodes. The formula for
the memory required on each data node is:
(SizeOfDatabase * NumberOfReplicas * 1.1) / NumberOfDataNodes
So if you have a 1 GB database, each node should have 1.1 GB of memory!
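To make the arithmetic explicit for our setup (NoOfReplicas=2 in config.ini and two data nodes):
(1 GB * 2 * 1.1) / 2 = 1.1 GB per data node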
- The cluster management node listens on port 1186, and anyone can connect to it. This is clearly insecure, so you should run your MySQL cluster on an isolated network.