Testing Galera in Kubernetes

I deployed a 3-node Galera cluster in Kubernetes. Galera clusters MariaDB or MySQL, letting you read and write to every node while remaining consistent (ACID) at all times. Kubernetes is an orchestration platform for deploying containerized applications.

Here are key features of Galera:

– It clusters stock MariaDB instances using a replication plug-in, so you are still running standard MariaDB.

– It uses the InnoDB storage engine, which has been my default since it was introduced and is now the out-of-the-box default. InnoDB is what brought ACID transactions to MySQL long ago.

– Every node acts as both master and replica (multi-master replication), so you can write to any node.

– Unlike typical horizontal clustering, which generally offers only eventual consistency, Galera provides consistency across nodes at all times.

From a functional perspective, this means you can continue to use it for OLTP applications that require ACID guarantees.

Its primary benefit shows up when a node fails: as long as quorum is met (a majority of nodes still up), the database remains available for transactions.

Enter Kubernetes (K8S), and a node failure is remedied as quickly as K8S can manage. If I kill a node, K8S brings it back up within a minute or two. In the meantime, the other 2 of 3 nodes remain up and continue to serve transactions, since 2/3 is a majority. This is the primary benefit of Galera, and Kubernetes is an ideal environment for it.
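
For example, here's a minimal sketch of that failure test, assuming the Galera StatefulSet is named mysql so the pods are mysql-0 through mysql-2 (adjust to match the YAMLs you deployed):

# Simulate a node failure by deleting one pod (pod names assume a StatefulSet named 'mysql')
kubectl delete pod mysql-2

# Watch Kubernetes recreate it; mysql-0 and mysql-1 keep serving traffic in the meantime
kubectl get pods -w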

While Galera doesn’t provide load balancing, K8S does: inside the cluster you connect to a single Service name, and K8S routes the connection to a node that is currently available.
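
As a sketch, assuming the manifests create a Service named mysql in the default namespace, an application running inside the cluster simply connects to the Service name and lets Kubernetes route it to an available pod:

# From a pod inside the cluster (the Service name is an assumption; check with 'kubectl get svc')
mysql -h mysql.default.svc.cluster.local -P 3306 -u root -p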

I tested this by adding a row on one of the surviving nodes while a node I had just killed was still down and being recreated automatically by K8S. When the killed node was restored, it too had the new row in the table. So, recovering nodes “catch up” on missed transactions automatically.

I have not measured the performance impact, but guaranteeing consistency across nodes 100% of the time has a cost compared to a horizontal database with eventual consistency. Still, performance is likely to be better than a single node, since replication can be extremely efficient (think low-level processing, without having to duplicate query processing). Your primary benefit, though, is higher availability.

Testing in Kubernetes

If you’d like to give it a whirl, here are instructions for how to test it. 

Create a Kubernetes cluster and deploy a 3-node Galera cluster to it. I had no problem deploying Galera to a cluster in Google Cloud using these 3 YAMLs.
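
The file names below are placeholders for whichever three manifests you use; applying them looks roughly like this:

# Apply the three manifests (placeholder names)
kubectl apply -f galera-service.yaml
kubectl apply -f galera-configmap.yaml
kubectl apply -f galera-statefulset.yaml

# Wait until all three pods report Running
kubectl get pods -w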

To view the cluster in the Kubernetes dashboard, run

kubectl proxy

Access via

http://localhost:8001/ui

To use the Skip option on the dashboard login screen and still have admin privileges, load dashboard-admin.yaml, which you can create per these instructions.
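
As a rough sketch, a dashboard-admin.yaml typically just binds the dashboard's service account to the cluster-admin role; the exact manifest depends on the instructions you follow, and this assumes the dashboard runs as the kubernetes-dashboard service account in kube-system:

# Assumption: the dashboard runs as the kubernetes-dashboard service account in kube-system
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system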

To test from a local database client, create a port-forward rule. Here I use a different local port because my machine has its own MariaDB server listening on 3306.

# Listen on port 13306 locally for port 3306 of pod 'mysql-0'
kubectl port-forward mysql-0 13306:3306

You can easily kill the port-forward and change the target pod to jump from one instance to another. When I killed mysql-2, I inserted a row via mysql-0 while mysql-2 was still down. Then, when mysql-2 was back up, I changed the port-forward to mysql-2 to verify it had the row inserted while it was down. Alternatively, you can port-forward to all 3 pods on 3 different ports, as shown below.
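
For example, to keep all three pods reachable at once (the local ports are arbitrary choices):

# Forward a distinct local port to each pod
kubectl port-forward mysql-0 13306:3306 &
kubectl port-forward mysql-1 13307:3306 &
kubectl port-forward mysql-2 13308:3306 &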

To connect, use this from a local client instance where you have MariaDB or MySQL installed:

mysql -h 127.0.0.1 -P 13306 -u root -p

To test the Galera cluster, you can follow these instructions.
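
If you just want a quick sanity check, here is a sketch using the port-forwards above (the galera_test schema is something you create just for the test; Galera replicates best when every table has a primary key):

# Confirm all 3 nodes have joined (should report wsrep_cluster_size = 3)
mysql -h 127.0.0.1 -P 13306 -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"

# Write on the node behind port 13306...
mysql -h 127.0.0.1 -P 13306 -u root -p -e "CREATE DATABASE IF NOT EXISTS galera_test; CREATE TABLE IF NOT EXISTS galera_test.t (id INT PRIMARY KEY, note VARCHAR(50)); INSERT INTO galera_test.t VALUES (1, 'hello from mysql-0');"

# ...and read it back on a different node (behind port 13307)
mysql -h 127.0.0.1 -P 13307 -u root -p -e "SELECT * FROM galera_test.t;"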

Cleanup

In addition to deleting the test cluster, you’ll need to delete the Persistent Volumes, which you can find under Google’s Compute Engine Disks if you are using GCP.
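
A rough outline of the cleanup, with placeholder names, zone, and label (check kubectl get pvc and gcloud compute disks list for the real ones):

# Delete the PersistentVolumeClaims the StatefulSet created (the label is an assumption)
kubectl delete pvc -l app=mysql

# Delete the test cluster itself (placeholder name and zone)
gcloud container clusters delete galera-test --zone us-central1-a

# Any leftover disks will show up here
gcloud compute disks list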
