
{{>toc}} 

Here are the steps (down to the commands to run as root) to install and configure the MariaDB Galera Clustered Database server software on Debian 7 running across a 3 Node Cluster (3 nodes to increase DB consistency and response time, and to reduce split-brain occurrences that force full DB re-syncs).

Also note that currently (in MariaDB 5.5 at least) the MariaDB Galera Cluster only supports the InnoDB/XtraDB storage engine, so ensure that the Databases to be Clustered use InnoDB as the table storage engine.
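A quick way to check for tables that are not InnoDB (and to convert one) might look like the following; @example_db@ and @example_table@ are placeholder names:
<pre>
mysql -uroot -p -e "SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE ENGINE <> 'InnoDB' AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema');"
mysql -uroot -p -e "ALTER TABLE example_db.example_table ENGINE=InnoDB;"
</pre>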

Some Linux or system-level requirements and pre-install setup tasks are:

# Ensure the /etc/hosts file is the same on each Node and that it contains entries for each DB Node's IP address and hostname (both the short node hostname and its fully qualified hostname).
 <pre> 
 vi /etc/hosts 
 </pre> 
 #* And add/modify it similar to the following 
 <pre> 
 192.168.0.1 db1 db1.example.com 
 192.168.0.2 db2 db2.example.com 
 192.168.0.3 db3 db3.example.com 
 </pre> 
# Ensure that each Node's Firewall permits the other Nodes' IP addresses access to ports 4444 and 4567, as well as the standard MySQL port of 3306 (a sample rule set is sketched after this list).
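As an illustration only, assuming a plain iptables setup (rather than a dedicated firewall manager), the rules on db1 might look something like this; adjust the source addresses to your own nodes:
<pre>
iptables -A INPUT -s 192.168.0.2 -p tcp -m multiport --dports 3306,4444,4567 -j ACCEPT
iptables -A INPUT -s 192.168.0.3 -p tcp -m multiport --dports 3306,4444,4567 -j ACCEPT
</pre>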

h2. Install and Configure MariaDB Galera on Node 1

 * Install prerequisite Debian Packages: 
 <pre> 
 apt-get -y install perl python-software-properties rsync 
 </pre> 

 * Setup the MariaDB 5.5 Repository: 
 <pre> 
 apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db 
 add-apt-repository 'deb http://ftp.osuosl.org/pub/mariadb/repo/5.5/debian wheezy main' 
 </pre> 

 * Install MariaDB Galera: 
 <pre> 
 apt-get install mariadb-galera-server galera 
 </pre> 

 Once all 3 Nodes have the above software installed and the initial MySQL settings applied, bring them all up and online (mysql running with the default Debian settings on each node is ok at this point). 

Install the relevant Database content on the “first” Node, which we’ll call “Node 1” or reference as “db1”; this may well involve using and knowing the MySQL ‘root’ username and its password.
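For example, loading an existing dump onto db1 might look like the following (@example_db@ and @example_db.sql@ are placeholder names):
<pre>
mysql -uroot -p -e "CREATE DATABASE example_db;"
mysql -uroot -p example_db < example_db.sql
</pre>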

* After that, on Node 1, configure it as the initiating node of the Galera Cluster by creating the following configuration file and then restarting Node 1's MySQL server:
 <pre> 
 vi /etc/mysql/conf.d/cluster.cnf 
 </pre> 
 #* And add the following: 
 <pre> 
 [mysqld] 
 binlog_format=ROW 
 default-storage-engine=innodb 
 innodb_autoinc_lock_mode=2 
 bind-address=0.0.0.0 

 # Galera Provider Configuration 
 wsrep_provider=/usr/lib/galera/libgalera_smm.so 
 #wsrep_provider_options="gcache.size=32G" 

 # Galera Cluster Configuration 
 wsrep_cluster_name="example_cluster" 

# Initialize the cluster; switch to the full node list below after --wsrep-new-cluster has been run and the other nodes have joined.
wsrep_cluster_address="gcomm://"

 # Set cluster node addresses (hostnames work too) 
 #wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2,192.168.0.3" 

# Galera Synchronization Configuration
 wsrep_sst_method=rsync 
 #wsrep_sst_auth=user:pass 

 # Galera Node Configuration 
 wsrep_node_address="192.168.0.1" 
 wsrep_node_name="db1.example.com" 
 </pre> 

* Restart the MySQL Server on Node 1 with the above @/etc/mysql/conf.d/cluster.cnf@ file installed:
 <pre> 
 service mysql stop 
 service mysql start --wsrep-new-cluster 
 </pre> 

* Check how the MySQL Galera Cluster server is doing after the restart with:
 <pre> 
 tail -n 20 /var/log/mysql/mysqld_safe.log 
 </pre> 
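You can also query the cluster status directly on Node 1; at this stage @wsrep_cluster_size@ should report 1 and @wsrep_local_state_comment@ should report Synced:
<pre>
mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_%';" | egrep "wsrep_cluster_size|wsrep_local_state_comment"
</pre>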

h2. Configure Node 2

 * Install prerequisite Debian Packages: 
 <pre> 
 apt-get -y install perl python-software-properties rsync 
 </pre> 

 * Setup the MariaDB 5.5 Repository: 
 <pre> 
 apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db 
 add-apt-repository 'deb http://ftp.osuosl.org/pub/mariadb/repo/5.5/debian wheezy main' 
 </pre> 

 * Install MariaDB Galera: 
 <pre> 
 apt-get install mariadb-galera-server galera 
 </pre> 

With Node 1 now running a single-node Galera Cluster, you can configure and start Nodes 2 and 3 with the following commands, run and verified first on Node 2 and then on Node 3:

* Create the MariaDB Cluster config file, listing the Node IP addresses in the ‘gcomm’ Cluster address. On Node 2, do:
 <pre> 
 vi /etc/mysql/conf.d/cluster.cnf 
 </pre> 
 #* And add the following: 
 <pre> 
 [mysqld] 
 binlog_format=ROW 
 default-storage-engine=innodb 
 innodb_autoinc_lock_mode=2 
 bind-address=0.0.0.0 

 # Galera Provider Configuration 
 wsrep_provider=/usr/lib/galera/libgalera_smm.so 
 #wsrep_provider_options="gcache.size=32G" 

 # Galera Cluster Configuration 
 wsrep_cluster_name="example_cluster" 
 wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2,192.168.0.3" 

# Galera Synchronization Configuration
 wsrep_sst_method=rsync 
 #wsrep_sst_auth=user:pass 

 # Galera Node Configuration 
 wsrep_node_address="192.168.0.1" 
 wsrep_node_name="db2.example.com" 
 </pre> 

 * Restart the MySQL Server on Node 2 with the above @/etc/mysql/conf.d/cluster.cnf@ file installed: 
 <pre> 
 service mysql restart 
 </pre> 

* Check how the MySQL Galera Cluster server is doing after the restart (note: any UUIDs are replaced with '...' in the output below):
 <pre> 
 cat /var/log/mysql/mysqld_safe.log 
 </pre> 
#* You should see log file entries about each node joining and being synchronised into the Galera MySQL Cluster, with entries like:
 <pre> 
 [Note] WSREP: declaring ... stable 
 [Note] WSREP: Node ... state prim 
 [Note] WSREP: (..., 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://192.168.0.3:4567 
 [Note] WSREP: view(view_id(PRIM,...,11) memb { 
 ..., 
 ..., 
 } joined { 
 } left { 
 } partitioned { 
 ..., 
 }) 
 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2 
 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID. 
 [Note] WSREP: forgetting ... (tcp://192.168.0.3:4567) 
 [Note] WSREP: (..., 'tcp://0.0.0.0:4567') turning message relay requesting off 
 [Note] WSREP: STATE EXCHANGE: sent state msg: ... 
 [Note] WSREP: STATE EXCHANGE: got state msg: ... from 0 (node1.domain.name) 
 [Note] WSREP: STATE EXCHANGE: got state msg: ... from 1 (node2.domain.name) 
 [Note] WSREP: Quorum results: 
 version = 2, 
 component = PRIMARY, 
 conf_id = 8, 
 members = 2/2 (joined/total), 
 act_id = 162, 
 last_appl. = 0, 
 protocols = 0/4/2 (gcs/repl/appl), 
 group UUID = ... 
 [Note] WSREP: Flow-control interval: [23, 23] 
 [Note] WSREP: New cluster view: global state: ...:162, view# 9: Primary, number of nodes: 2, my index: 1, protocol version 2 
 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification. 
 [Note] WSREP: Assign initial position for certification: 162, protocol version: 2 
 14:24:37 [Note] WSREP: cleaning up ... (tcp://192.168.0.3:4567) 
 [Note] WSREP: declaring ... stable 
 [Note] WSREP: declaring ... stable 
 [Note] WSREP: Node ... state prim 
 [Note] WSREP: view(view_id(PRIM,...,12) memb { 
 ..., 
 ..., 
 ..., 
 } joined { 
 } left { 
 } partitioned { 
 }) 
 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 3 
 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID. 
 [Note] WSREP: STATE EXCHANGE: sent state msg: ... 
 [Note] WSREP: STATE EXCHANGE: got state msg: ... from 0 (node1.domain.name) 
 [Note] WSREP: STATE EXCHANGE: got state msg: ... from 1 (node2.domain.name) 
 [Note] WSREP: STATE EXCHANGE: got state msg: ... from 2 (node3.domain.name) 
 [Note] WSREP: Quorum results: 
 version = 2, 
 component = PRIMARY, 
 conf_id = 9, 
 members = 3/3 (joined/total), 
 act_id = 162, 
 last_appl. = 0, 
 protocols = 0/4/2 (gcs/repl/appl), 
 group UUID = ... 
 [Note] WSREP: Flow-control interval: [28, 28] 
 [Note] WSREP: New cluster view: global state: ...:162, view# 10: Primary, number of nodes: 3, my index: 1, protocol version 2 
 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification. 
 [Note] WSREP: Assign initial position for certification: 162, protocol version: 2 
 [Note] WSREP: Member 2 (node3.domain.name) synced with group. 
 </pre> 

* Again, use the mysql client on Node 2 to query Node 2’s Galera Server:
 <pre> 
mysql -uroot -pMYSQLROOTPASSWD -e "SHOW STATUS LIKE 'wsrep_%';"
 </pre> 

With Nodes 1 and 2 now running a two-node Galera Cluster, you can configure and start Node 3 with the following commands, run and verified on Node 3.

 h2. Configure Node 3 

 * Install prerequisite Debian Packages: 
 <pre> 
 apt-get -y install perl python-software-properties rsync 
 </pre> 

 * Setup the MariaDB 5.5 Repository: 
 <pre> 
 apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db 
 add-apt-repository 'deb http://ftp.osuosl.org/pub/mariadb/repo/5.5/debian wheezy main' 
 </pre> 

 * Install MariaDB Galera: 
 <pre> 
 apt-get install mariadb-galera-server galera 
 </pre> 

* Create the MariaDB Cluster config file, listing the Node IP addresses in the ‘gcomm’ Cluster address. On Node 3, do:
 <pre> 
 vi /etc/mysql/conf.d/cluster.cnf 
 </pre> 
 #* And add the following: 
 <pre> 
 [mysqld] 
 binlog_format=ROW 
 default-storage-engine=innodb 
 innodb_autoinc_lock_mode=2 
 bind-address=0.0.0.0 

 # Galera Provider Configuration 
 wsrep_provider=/usr/lib/galera/libgalera_smm.so 
 #wsrep_provider_options="gcache.size=32G" 

 # Galera Cluster Configuration 
 wsrep_cluster_name="example_cluster" 
 wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2,192.168.0.3" 

# Galera Synchronization Configuration
 wsrep_sst_method=rsync 
 #wsrep_sst_auth=user:pass 

 # Galera Node Configuration 
 wsrep_node_address="192.168.0.1" 
 wsrep_node_name="db3.example.com" 
 </pre> 

 * Restart the MySQL Server on Node 3 with the above @/etc/mysql/conf.d/cluster.cnf@ file installed: 
 <pre> 
 service mysql restart 
 </pre> 

Check the @mysqld_safe.log@ log files on all nodes to see that Node 3 also joins and synchronises with the Cluster.

At this point you have all 3 Nodes in the Galera MySQL Cluster, and you can now update each Node's @cluster.cnf@ file (starting with Node 1) to set the IP Addresses of all Nodes in the Cluster.

* On Node 1, change the @wsrep_cluster_address@ line in @/etc/mysql/conf.d/cluster.cnf@ to list all of the Node addresses:
 <pre> 
wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2,192.168.0.3"
 </pre> 

And then restart MySQL on Node 1, checking the relevant log files and using the mysql client to query the Cluster membership once Node 1's mysql server software has restarted, with:
 <pre> 
 service mysql restart 
 </pre> 

 * Check the log for any problems: 
 <pre> 
 tail -f /var/log/mysql/mysqld_safe.log 
 </pre> 

 * Check the status of the Galera cluster 
 <pre> 
 mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_%';" 
 </pre> 

 h2. Finalize the Galera Cluster configuration 

Once Node 1 has been reconfigured to know of all Cluster Members, repeat the step above on Node 2 and Node 3, setting Node 2 and Node 3's *wsrep_cluster_address* in @/etc/mysql/conf.d/cluster.cnf@ to:
 <pre> 
wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2,192.168.0.3"
 </pre> 

Then restart their MySQL Servers and check that each of them rejoins the live Cluster.
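On each of Node 2 and Node 3 in turn, that amounts to something like:
<pre>
service mysql restart
tail -n 20 /var/log/mysql/mysqld_safe.log
mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
</pre>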

Finally, you should now have a running live 3 Node MariaDB Galera MySQL Cluster; the status on each Node should list 3 members of the Cluster, which you can check via:
 <pre> 
 mysql -uroot -pMYSQLROOTPASSWD -e "SHOW STATUS LIKE 'wsrep_%';" | egrep "wsrep_incoming_addresses|wsrep_cluster_size" 
 </pre> 
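With all three nodes joined, the output should look something like this (the addresses will match your own nodes):
<pre>
wsrep_cluster_size        3
wsrep_incoming_addresses  192.168.0.1:3306,192.168.0.2:3306,192.168.0.3:3306
</pre>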

 h3. Transferring the Debian Maintenance Configuration 

Currently, Ubuntu and Debian's MariaDB servers use a special maintenance user to do routine maintenance. Some tasks that fall outside of the maintenance category are also run as this user, including important functions like stopping MySQL.

With our cluster environment being shared between the individual nodes, the maintenance user, which has randomly generated login credentials on each node, will be unable to execute commands correctly. Only the initial server will have the correct maintenance credentials, since the others will attempt to use their local settings to access the shared cluster environment. You can fix this by simply copying the contents of the maintenance file to each individual node:

* On the server that bootstrapped the cluster, open the Debian maintenance configuration file:
 <pre> 
 sudo nano /etc/mysql/debian.cnf 
 </pre> 
 #* The file will look similar to the following: 
 <pre> 
 [client] 
 host       = localhost 
 user       = debian-sys-maint 
password   = VRP84dIknkX31uOf
 socket     = /var/run/mysqld/mysqld.sock 
 [mysql_upgrade] 
 host       = localhost 
 user       = debian-sys-maint 
password   = VRP84dIknkX31uOf
 socket     = /var/run/mysqld/mysqld.sock 
 basedir    = /usr 
 </pre> 
* We simply need to copy this information and paste it into the same file on each node (or copy the file across the network, as sketched below).
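For example, assuming root SSH access between the nodes, running something like the following on the bootstrap node would copy the file to db2 and db3:
<pre>
scp /etc/mysql/debian.cnf root@db2:/etc/mysql/debian.cnf
scp /etc/mysql/debian.cnf root@db3:/etc/mysql/debian.cnf
</pre>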

 h3. Automatic Cluster Initialization at Boot 

Newer versions of Galera have options that can enable the cluster to start at boot. If you shut down all nodes, you effectively terminate the cluster (not the data, of course, but the running cluster), hence the right way is to start all the nodes with @gcomm://<node1 address>,<node2 address>,...?pc.wait_prim=no@ again.

* On every node in the Galera cluster, edit the MySQL cluster configuration file:
 <pre> 
 vi /etc/mysql/conf.d/cluster.cnf 
 </pre> 
#* And modify the @wsrep_cluster_address@ parameter:
 <pre> 
 wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2,192.168.0.3?pc.wait_prim=no" 
 </pre> 

* On one of the nodes, set @wsrep_provider_options="pc.bootstrap=true"@ so that it will bootstrap the primary component:
 <pre> 
 vi /etc/mysql/conf.d/cluster.cnf 
 </pre> 
#* And add the @wsrep_provider_options@ parameter:
 <pre> 
 wsrep_provider_options="pc.bootstrap=true" 
 </pre> 
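If you would rather not leave that setting in the configuration file permanently, the same provider option can also be set at runtime from the mysql client on that one node, for example:
<pre>
mysql -uroot -p -e "SET GLOBAL wsrep_provider_options='pc.bootstrap=true';"
</pre>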

 h2. Resources 

 * https://www.digitalocean.com/community/tutorials/how-to-configure-a-galera-cluster-with-mariadb-on-ubuntu-12-04-servers 
 * https://mariadb.com/kb/en/mariadb/documentation/replication/galera/getting-started-with-mariadb-galera-cluster/
