
Scenarios

The T.Lion with T.Brother combination requires you to start one node of the cluster as a reference point before the remaining nodes are able to join and form the cluster. This process is known as cluster bootstrap. Bootstrapping is the initial step of introducing a database node as the Primary Component; the other nodes then use it as a reference point to sync up their data.

When the cluster is started with the bootstrap command on a node, that particular node assumes the Primary state (check the value of wsrep_cluster_status). The remaining nodes are started normally; they automatically look for the existing Primary Component and join it to form a cluster. Once done, data synchronization starts between the joiner and the donor.
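
For example, you can check both the cluster state and the node state from the MySQL client on any running node (a minimal check; for a healthy, bootstrapped node the expected values are Primary and Synced):

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';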

Important:

You should only bootstrap the cluster when starting a new cluster or when there are no other nodes in Primary state. Otherwise you might end up with split clusters or data loss.

The following examples illustrate when to bootstrap the three-node cluster (T.Lion, T.Brother, Arbitrator) based on node state (wsrep_local_state_comment) and cluster state (wsrep_cluster_status):

T.Lion disappears from the cluster


It might be a result of a power outage, hardware failure, kernel panic, mysqld crash, kill -9 on the mysqld PID, and the like. The two remaining nodes notice the connectivity problem and try re-connecting to it. After several timeouts, they remove the failed node from the cluster. As there are still enough nodes (2 out of 3), no service disruption happens. T.Brother assumes the MASTER state and starts serving users.
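
You can confirm on either remaining node that the failed node has been dropped from the cluster (in this scenario the value goes from 3 down to 2):

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';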

Bootstrap flow: Restart the INITIALIZED node once the failure is fixed; it then joins the cluster automatically.

T.Brother and Arbitrator disappear


T.Lion is not able to form the quorum alone, so the cluster switches into non-primary mode. The mysqld process on the T.Lion server keeps running, but the database refuses SQL queries. Read queries are processed until the node confirms that it cannot reach the lost nodes, but all write operations are blocked immediately.
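
You can verify this state on T.Lion (status queries still work in non-primary mode); the node reports that it is not part of a Primary Component and is not ready to serve regular queries:

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';
SHOW GLOBAL STATUS LIKE 'wsrep_ready';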

TASSTA services stop working. T.Lion waits for its peers to show up again. If that happens, the cluster is restored automatically.

If T.Brother and Arbitrator are functional and see each other (the T.Lion node was just cut off due to a network failure), they keep functioning as they still form the quorum. If T.Brother crashed or was turned off due to a power outage, you need to enable the primary component on the T.Lion node before bringing T.Brother back:

SET GLOBAL wsrep_provider_options='pc.bootstrap=true';
Important:

Double-check that the failed nodes are really down before executing this command. Otherwise, you will most likely end up with two clusters holding different data.

Bootstrap flow:

  1. Bootstrap the INITIALIZED node (for example, with pc.bootstrap=true as shown above).
  2. Once done, start the failed nodes.

All nodes went down without proper shutdown procedures


This might happen in case of a data center power failure, a software problem that crashes all nodes, or when data consistency is compromised and the cluster detects that each node has different data. In each of those cases, the grastate.dat file is not updated and does not contain a valid sequence number (seqno).
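
A grastate.dat file after an unclean shutdown typically looks like the following (the version and uuid values are illustrative); the seqno of -1 shows that the file holds no valid position:

# GALERA saved state
version: 2.1
uuid:    6e2b37b5-0c13-11ec-8cde-000000000000
seqno:   -1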

As data consistency is not guaranteed, find the node with the most recent data and bootstrap the cluster using it. Before starting the mysql daemon on any node, extract the last sequence number by checking the transactional state. Bootstrap from the most recent node first and then start the others.

Bootstrap flow:

  1. Bootstrap the most recent node using pc.bootstrap=true.
  2. Start the remaining nodes, one at a time.

T.Lion or T.Brother is gracefully stopped


In this case, the other nodes receive a "goodbye" message from the stopped node. The cluster size is reduced, and some properties, such as quorum calculation or auto-increment, are adjusted automatically. Once the node is started again, it joins the cluster.
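
On any running node you can list the members that are still part of the cluster (the addresses returned depend on your deployment):

SHOW GLOBAL STATUS LIKE 'wsrep_incoming_addresses';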

Bootstrap flow: Start the node.

Both T.Brother and Arbitrator are gracefully stopped


The cluster size is reduced to 1, so even the single remaining node forms a Primary Component and keeps serving client requests. To get the nodes back into the cluster, simply start them. However, the remaining node switches to the Donor/Desynced state, as it has to provide a state transfer to at least the first joining node. It is still possible to read from and write to it during that process, but it may be much slower, depending on how large a state transfer it needs to send.
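
While the state transfer is running, you can watch the donor's state on the remaining node; it reports Donor/Desynced during the transfer and returns to Synced afterwards:

SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';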

Bootstrap flow: Start the nodes, one at a time.

All nodes are gracefully stopped


You should re-initialize the cluster. During a clean shutdown, a node writes its last executed position into the grastate.dat file. By comparing the seqno value inside, you can find out which node is the most recent one. The cluster must be bootstrapped using that node; otherwise, newer nodes would have to perform a full SST to join a cluster initialized from the older one, and the newer transactions would be lost.
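
After a clean shutdown, grastate.dat on each node contains a valid position, for example (uuid and seqno are illustrative); the node with the highest seqno holds the most recent data and should be bootstrapped first:

# GALERA saved state
version: 2.1
uuid:    6e2b37b5-0c13-11ec-8cde-000000000000
seqno:   312457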

Bootstrap flow:

  1. Bootstrap the most recent node.
  2. Start the remaining nodes, one at a time.