Support #699

Setup a MariaDB Galera Cluster on Debian 8

Added by Daniel Curtis about 9 years ago. Updated over 7 years ago.

Status: Closed
Priority: Normal
Assignee:
Category: Database Server
Target version:
Start date: 11/22/2015
Due date:
% Done: 50%
Estimated time: 2.00 h
Spent time:

Description

Here’s a step-by-step guide to installing and configuring MariaDB Galera Cluster database server software on Debian 8.

Prepare the Environments

NOTE: All three nodes should have their environments set up in the same way.

  • Make sure the system is up to date:
    sudo apt-get update && sudo apt-get upgrade
    
  • Install prerequisite Debian Packages:
    sudo apt-get install software-properties-common python-software-properties rsync
    
  • Setup the MariaDB 10.0 Repository:
    sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db
    sudo add-apt-repository 'deb http://ftp.osuosl.org/pub/mariadb/repo/10.0/debian jessie main'
    sudo apt-get update
    
  • Install MariaDB 10 with Galera:
    sudo apt-get install mariadb-galera-server galera
    
  • Ensure the /etc/hosts file is the same on each node and that it contains an entry for each DB node's IP address and hostname:
    vi /etc/hosts
    
    • And add/modify it similar to the following:
      192.168.0.1 db1 db1.example.com
      192.168.0.2 db2 db2.example.com
      192.168.0.3 db3 db3.example.com
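
  • Verify that each hostname resolves as expected; getent hosts consults /etc/hosts as well as DNS:
    getent hosts db1 db2 db3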
      

Configure Node 1

  • Create a mysql cluster config:
    vi /etc/mysql/conf.d/cluster.cnf
    
    • And add the following:
      [mysqld]
      binlog_format=ROW
      default-storage-engine=innodb
      innodb_autoinc_lock_mode=2
      query_cache_size=0
      query_cache_type=0
      bind-address=0.0.0.0
      
      # Galera Provider Configuration
      wsrep_provider=/usr/lib/galera/libgalera_smm.so
      wsrep_cluster_name="example_cluster" 
      wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2,192.168.0.3" 
      wsrep_sst_method=rsync
      wsrep_node_address="192.168.0.1" 
      wsrep_node_name="db1.example.com" 
      
  • Stop the MySQL server and bootstrap the cluster on Node 1:
    service mysql stop
    service mysql bootstrap
    
  • Check that the Galera server on Node 1 came up correctly after the bootstrap:
    tail -n 20 /var/log/mysql/mysqld_safe.log
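
  • Optionally, confirm that the freshly bootstrapped cluster has exactly one member; wsrep_cluster_size should report 1 at this stage:
    mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"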
    

Configure Node 2

  • Create the MariaDB cluster config file on Node 2, listing all three node IP addresses in the ‘gcomm’ cluster address:
    vi /etc/mysql/conf.d/cluster.cnf
    
    • And add the following:
      [mysqld]
      binlog_format=ROW
      default-storage-engine=innodb
      innodb_autoinc_lock_mode=2
      query_cache_size=0
      query_cache_type=0
      bind-address=0.0.0.0
      
      # Galera Provider Configuration
      wsrep_provider=/usr/lib/galera/libgalera_smm.so
      wsrep_cluster_name="example_cluster" 
      wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2,192.168.0.3" 
      wsrep_sst_method=rsync
wsrep_node_address="192.168.0.2"
      wsrep_node_name="db2.example.com" 
      
  • Restart the MySQL Server on Node 2 with the above /etc/mysql/conf.d/cluster.cnf file installed:
    service mysql restart
    
  • Use the mysql client on Node 2 to check the Galera cluster status:
    mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_%';" 
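
  • As a quick replication smoke test, create a database on Node 2 and confirm it appears on Node 1 (the database name playground is only an example):
    mysql -uroot -p -e "CREATE DATABASE playground;"
    
    • Then, on Node 1:
      mysql -uroot -p -e "SHOW DATABASES LIKE 'playground';"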
    

Configure Node 3

  • Create the MariaDB cluster config file on Node 3, again listing all three node IP addresses in the ‘gcomm’ cluster address:
    vi /etc/mysql/conf.d/cluster.cnf
    
    • And add the following:
      [mysqld]
      binlog_format=ROW
      default-storage-engine=innodb
      innodb_autoinc_lock_mode=2
      query_cache_size=0
      query_cache_type=0
      bind-address=0.0.0.0
      
      # Galera Provider Configuration
      wsrep_provider=/usr/lib/galera/libgalera_smm.so
      wsrep_cluster_name="example_cluster" 
      wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2,192.168.0.3" 
      wsrep_sst_method=rsync
wsrep_node_address="192.168.0.3"
      wsrep_node_name="db3.example.com" 
      
  • Restart the MySQL Server on Node 3 with the above /etc/mysql/conf.d/cluster.cnf file installed:
    service mysql restart
    

Check the mysqld_safe.log log files on all nodes to confirm that Node 3 also joins and synchronises with the cluster.

At this point all three nodes are in the Galera MySQL cluster. Since each node's cluster.cnf already lists the IP addresses of all nodes, no further changes to the cluster addresses are needed.

  • Check the log for any problems:
    tail -f /var/log/mysql/mysqld_safe.log
    
  • Check the status of the Galera cluster:
    mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_%';" 
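
  • On a healthy three-node cluster the key status variables should look roughly like the following (exact values vary per node):
    wsrep_cluster_size           3
    wsrep_local_state_comment    Synced
    wsrep_connected              ON
    wsrep_ready                  ON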
    

Debian Maintenance Configuration

Currently, Ubuntu's and Debian's MariaDB servers use a special maintenance user to perform routine maintenance. Some tasks that fall outside the maintenance category are also run as this user, including important functions such as stopping MySQL.

With our cluster environment being shared between the individual nodes, the maintenance user, which has randomly generated login credentials on each node, will be unable to execute commands correctly. Only the initial server will have the correct maintenance credentials, since the others will attempt to use their local settings to access the shared cluster environment. We can fix this by simply copying the contents of the maintenance file to each individual node:

  • On the server that bootstrapped the cluster, open the Debian maintenance configuration file:
    sudo nano /etc/mysql/debian.cnf
    
    • The file will look similar to the following:
      [client]
      host     = localhost
      user     = debian-sys-maint
      password = VRP84dIknkX31uOf
      socket   = /var/run/mysqld/mysqld.sock
      [mysql_upgrade]
      host     = localhost
      user     = debian-sys-maint
      password = VRP84dIknkX31uOf
      socket   = /var/run/mysqld/mysqld.sock
      basedir  = /usr
      
  • We simply need to copy this information and paste it into the same file on each of the other nodes.
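
    • For example, assuming root SSH access between the nodes (the hostnames db2 and db3 match the /etc/hosts entries above):
      sudo scp /etc/mysql/debian.cnf root@db2:/etc/mysql/debian.cnf
      sudo scp /etc/mysql/debian.cnf root@db3:/etc/mysql/debian.cnf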

Automatic Cluster Initialization at Boot

Newer versions of Galera have options that enable the cluster to start at boot. If you shut down all nodes, you have effectively terminated the cluster (not the data, of course, but the running cluster), so the right way is to start all the nodes with gcomm://<node1 address>,<node2 address>,...?pc.wait_prim=no again.

  • On every node in the Galera cluster, edit the mysql cluster configuration file:
    vi /etc/mysql/conf.d/cluster.cnf
    
    • And modify the wsrep_cluster_address parameter:
      wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2,192.168.0.3?pc.wait_prim=no" 
      
  • On one of the nodes, enable the pc.bootstrap provider option. Edit the mysql cluster configuration file:
    vi /etc/mysql/conf.d/cluster.cnf
    
    • And add the wsrep_provider_options parameter:
      wsrep_provider_options="pc.bootstrap=true" 
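
  • Alternatively, pc.bootstrap can be enabled at runtime from the mysql client, which avoids an edit-and-restart cycle:
    mysql -uroot -p -e "SET GLOBAL wsrep_provider_options='pc.bootstrap=true';"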
      

(Optional) Setup Xtrabackup

Installing xtrabackup is a decent way to increase performance and add a minor level of security to the Galera cluster. Xtrabackup is maintained by Percona, so we need to add their repository, install the xtrabackup package, and finally modify the cluster.cnf file to use xtrabackup.

  • Add the Percona repository keys:
    apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
    
  • Create the Percona repository list file:
    vi /etc/apt/sources.list.d/percona.list
    
    • And add the following:
      deb http://repo.percona.com/apt jessie main
      deb-src http://repo.percona.com/apt jessie main
      
  • Refresh the local cache:
    apt-get update
    
  • Install xtrabackup:
    apt-get install xtrabackup
    
  • On every node in the Galera cluster, edit the mysql cluster configuration file:
    vi /etc/mysql/conf.d/cluster.cnf
    
    • And modify the wsrep_sst_method parameter:
      wsrep_sst_method=xtrabackup
      
    • NOTE: If using authentication, make sure to add the following parameter to the cluster.cnf file:
      wsrep_sst_auth=username:SuperSecretPassword
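
    • The account named in wsrep_sst_auth must also exist in MySQL on the donor node with the privileges xtrabackup needs; a minimal sketch (the username and password here are placeholders):
      mysql -uroot -p -e "GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'username'@'localhost' IDENTIFIED BY 'SuperSecretPassword'; FLUSH PRIVILEGES;"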
      

#1

Updated by Daniel Curtis about 9 years ago

  • Description updated (diff)
  • Status changed from New to In Progress
  • % Done changed from 0 to 50
#2

Updated by Daniel Curtis over 7 years ago

  • Status changed from In Progress to Closed
