my: a command-line tool for MariaDB Clusters

I’ve posted the code for my MariaDB Cluster command-line tool, called my. It does a bunch of stuff, but its main purpose is to let you easily monitor replication cluster-wide while working in the shell.

Here’s an example of this screen…

hostname port  cons  u_cons  role  repl_detail                                       lag  gtid    read_only
master1  3306  7     0       ms    master2.ucid.local:3306 mysql-bin.000046 7296621  0    0-2-4715491  OFF
master2  3306  33    20      ms    master1.ucid.local:3306 mysql-bin.000052 1031424  0    0-2-4715491  OFF
slave1   3306  5     0       ms    master1.ucid.local:3306 mysql-bin.000052 1031424  0    0-2-4715491  ON
slave2   3306  29    19      ms    master2.ucid.local:3306 mysql-bin.000046 7296621  0    0-2-4715491  ON
backup   3306  5     0       ms    master2.ucid.local:3306 mysql-bin.000046 7296621  0    0-2-4715491  ON

This screen will handle hosts that are down, identify ones where MariaDB isn’t running, and highlight replication lag or errors, as well as multi-master setups. See the README for details on how to get started.
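As a sketch of the kind of per-host summary the screen above displays, the snippet below builds one display row from fields in MariaDB’s SHOW SLAVE STATUS output and flags lag. This is an illustration, not the tool’s actual code; the lag threshold is invented.

```python
# Hypothetical sketch of one row of the replication screen above: given fields
# from SHOW SLAVE STATUS, build a display line and flag excessive lag.
# Field names follow MariaDB's SHOW SLAVE STATUS; the threshold is an assumption.

LAG_WARN_SECONDS = 30  # assumed threshold, not taken from the tool itself

def summarise_replica(hostname, port, status):
    """Build a one-line replication summary for a single host."""
    repl_detail = "{}:{} {} {}".format(
        status["Master_Host"], status["Master_Port"],
        status["Relay_Master_Log_File"], status["Exec_Master_Log_Pos"])
    lag = status["Seconds_Behind_Master"]
    flag = " !LAG!" if lag is not None and lag > LAG_WARN_SECONDS else ""
    return "{}  {}  {}  {}{}".format(hostname, port, repl_detail, lag, flag)

row = summarise_replica("slave1", 3306, {
    "Master_Host": "master1.ucid.local", "Master_Port": 3306,
    "Relay_Master_Log_File": "mysql-bin.000052", "Exec_Master_Log_Pos": 1031424,
    "Seconds_Behind_Master": 0,
})
print(row)  # slave1  3306  master1.ucid.local:3306 mysql-bin.000052 1031424  0
```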

Notes from the field: CockroachDB Cluster Setup

Download the CockroachDB Binary

Perform on each node.

tar xvzf cockroach-latest.linux-amd64.tgz
mv cockroach-latest.linux-amd64/cockroach /usr/bin/
chmod +x /usr/bin/cockroach

Create cockroach user and directories

Perform on each node.

groupadd cockroach
useradd -r -m -d /home/cockroach -g cockroach cockroach
su - cockroach
cd /home/cockroach
mkdir -p certs my-safe-directory cockroach_db

Check NTP status

Check that NTP is running and configured correctly. CockroachDB relies on synchronised clocks to function correctly.

systemctl status ntpd

Secure the Cluster

Perform on each node.

Copy all keys generated on the first host to the others, but regenerate the node certificates on each host (i.e. re-run the create-node command). For further details see Secure a Cluster.

cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key # These keys, in both dirs, need to be copied to each host
ls -l certs
cockroach cert create-node localhost $(hostname) --certs-dir=certs --ca-key=my-safe-directory/ca.key
ls -l certs
cockroach cert create-client root --certs-dir=certs --ca-key=my-safe-directory/ca.key --overwrite
ls -l certs

Start the nodes


Perform on node1.

su - cockroach
cockroach start --background --host=node1 --http-host=node1 --port=26257 --http-port=8080 --store=/home/cockroach/cockroach_db --certs-dir=/home/cockroach/certs

Perform on node2.

su - cockroach
cockroach start --background --host=node2 --http-host=node2 --port=26257 --http-port=8080 --store=/home/cockroach/cockroach_db --join=node1:26257 --certs-dir=/home/cockroach/certs

Perform on node3.

su - cockroach
cockroach start --background --host=node3 --http-host=node3 --port=26257 --http-port=8080 --store=/home/cockroach/cockroach_db --join=node1:26257 --certs-dir=/home/cockroach/certs

Check the status of the cluster

sudo su - cockroach
cockroach node ls --certs-dir=certs --host node1
cockroach node status --certs-dir=certs --host node1

Create a cron to start CockroachDB on boot

Create the file /etc/cron.d/cockroach_start on each node, containing the appropriate cron entry from the ones below…


@reboot cockroach       cockroach start --background --host=node1 --http-host=node1 --port=26257 --http-port=8080 --store=/home/cockroach/cockroach_db --join="node2:26257,node3:26257" --certs-dir=/home/cockroach/certs


@reboot cockroach       cockroach start --background --host=node2 --http-host=node2 --port=26257 --http-port=8080 --store=/home/cockroach/cockroach_db --join="node1:26257,node3:26257" --certs-dir=/home/cockroach/certs


@reboot cockroach       cockroach start --background --host=node3 --http-host=node3 --port=26257 --http-port=8080 --store=/home/cockroach/cockroach_db --join="node1:26257,node2:26257" --certs-dir=/home/cockroach/certs

Reboot the nodes to ensure CockroachDB comes up and all nodes rejoin the cluster successfully.
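On hosts running systemd, a unit file is a common alternative to the @reboot cron entries. The sketch below reuses the node1 flags from above (adjust --host and --join per node); note that under systemd the process runs in the foreground, so --background is dropped. This is an assumption-laden sketch, not a tested unit file.

```ini
# /etc/systemd/system/cockroach.service -- sketch; flags copied from the node1
# example above, adjust per node. Enable with: systemctl enable cockroach
[Unit]
Description=CockroachDB node
After=network-online.target ntpd.service

[Service]
User=cockroach
ExecStart=/usr/bin/cockroach start --host=node1 --http-host=node1 --port=26257 --http-port=8080 --store=/home/cockroach/cockroach_db --join=node2:26257,node3:26257 --certs-dir=/home/cockroach/certs
Restart=on-failure

[Install]
WantedBy=multi-user.target
```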

MongoDB: Making the most of a 2 Data-Centre Architecture

There’s a big initiative at my employers to improve the uptime of the services we provide. The goal is 100% uptime as perceived by the customer. There’s obviously a certain level of flexibility one could take in the interpretation of this. I choose to be as strict as I can about this to avoid any disappointments! I’ve decided to work on this in the context of our primary MongoDB Cluster. Here is a logical view of the current architecture, spread over two data centres;

MongoDB Cluster Architecture Two Data Centres

What happens with this architecture?

If DC1 goes down, shard0 and shard2 are both read-only while shard1 remains read/write. DC2 contains only a single config server, so some meta-data operations will be unavailable. If DC2 goes down, shard0 and shard2 remain read/write while shard1 becomes read-only. The two config servers hosted in DC1 mean cluster meta-data operations remain available.
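The read-only behaviour above follows directly from replica-set majorities: a shard stays writable only if a strict majority of its members survive. A small sketch (the member placement is assumed from the behaviour described, since the diagram isn’t reproduced here):

```python
# Sketch: which replica sets can still elect a primary when a DC fails?
# Member placement below is assumed for illustration: 3 members per shard,
# with each shard's majority in one DC or the other.

def writable_after_dc_loss(members_per_dc, lost_dc):
    """A replica set stays writable iff a strict majority of members survive."""
    total = sum(members_per_dc.values())
    surviving = total - members_per_dc.get(lost_dc, 0)
    return surviving > total // 2

shards = {
    "shard0": {"DC1": 2, "DC2": 1},  # assumed: majority in DC1
    "shard1": {"DC1": 1, "DC2": 2},  # assumed: majority in DC2
    "shard2": {"DC1": 2, "DC2": 1},  # assumed: majority in DC1
}

for name, placement in shards.items():
    print(name, "writable after DC1 loss:", writable_after_dc_loss(placement, "DC1"))
```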

What are the options we can consider when working within the constraints of a two data-centre architecture?

  1. Do nothing and depend on the failed data-centre coming back online quickly. Obviously not an ideal solution. If either data-centre goes down we suffer some form of impairment to the customer.
  2. Nominate one site as the PRIMARY data-centre which will contain enough shard members, for each shard, to achieve quorum should the secondary site go down. Under this architecture we remain fully available assuming the PRIMARY data centre remains up. The entire cluster becomes read-only if the main data-centre goes down. To achieve our goal, of 100% uptime, we have to hope our PRIMARY Data centre never goes down. We could request that application developers make some changes to cope with short periods of write unavailability.
  3. A third option, which would actually work combined with both of the above approaches, would be to force a replicaset to accept writes when only a single server is available. You need to be careful here and be sure the other nodes in the replicaset aren’t receiving any writes. We can follow the procedure Reconfigure a replicaset with unavailable members to make this happen. This would require manual intervention and there would be some work to reconfigure the replicaset when the offline nodes returned.

Clearly none of these options are ideal. In the event of a data-centre outage we are likely to suffer some form of impairment to service. The only exception is if the SECONDARY data-centre goes down in #2, so that strategy depends, to some extent, on luck. With fully documented, and tested, failover procedures we can minimise this downtime. The goal of 100% uptime seems pretty tricky to achieve without moving to a 3 data-centre architecture.


MongoDB and the occasionally naughty query

It’s no secret that databases like uniqueness and high cardinality. Low cardinality columns do not make good candidates for indexes. A recent issue I had with MongoDB proved that NoSQL is no different in this regard.

The MongoDB Query Planner is quite simple and works as follows (simplified)…

  • If there are no usable indexes; perform a collection scan.
  • If there is one usable index; use that index.
  • If there is more than one usable index; generate candidate plans, score plans and cache them, use the plan with the best score.
  • Periodically or due to certain actions (creating, dropping indexes); purge plans from cache and re-evaluate possible plans again.

This fairly simple method generally seems to perform well but I have observed occasional problems resulting from this method.

Assuming the following document schema;

{
     "_id": XXXXXXXXX,
     "email1": "",
     "email2": "",
     "identifier": "XXXXXXXX"
}

Assume the following indexes are defined;

{ "email1": 1, "email2": 1 }
{ "identifier": 1 }

The value for “identifier” was originally specified to be unique but was changed after developer feedback as they said this will “not always” be the case.

Let’s assume we have the following query;

db.collection.find({ "email1": XXXXXX, "email2": XXXXXX, "identifier": XXXXXX });

For several months this query performed well, always selecting the “identifier” index. As time went on we started to notice that some slow instances of this query were creating higher load. Upon initial investigation we discovered that MongoDB was selecting the index on (email1, email2) when the index on “identifier” was clearly the better choice. Often there would be tens of thousands of documents for a given combination of (email1, email2) and just a single document for the “identifier” value. Why was MongoDB making this bad choice?

After further investigation we discovered queries like this…

db.collection.find({ "email1": XXXXXX, "email2": XXXXXX, "identifier": "welcome-message" })

The key here was the value of “identifier”. There were a few hundred thousand documents with the value “welcome-message” in the system. When one of these queries happened to come in just as MongoDB was re-evaluating plans for this query shape, MongoDB sometimes decided that “identifier” was not the best choice of index. While that was certainly true for that instance of the query, it wasn’t for the vast majority of them. So 99% of these queries, which used to scan just one index key / document, now scanned hundreds of thousands!
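The mechanism can be shown with a toy model. This is not MongoDB’s actual planner and the document counts are invented; it just illustrates how scoring plans on one unrepresentative query, then caching the winner, penalises every later query of the same shape.

```python
# Toy model of the pitfall above -- not MongoDB's real planner, just the idea:
# plans are scored on the query that triggers (re)planning, and the winner is
# cached and reused for later queries of the same shape.

# Documents examined per index for a given "identifier" value (made-up numbers).
def docs_examined(index, identifier):
    if index == "identifier":
        # unique for normal values, hugely duplicated for "welcome-message"
        return 300_000 if identifier == "welcome-message" else 1
    if index == "email1_email2":
        return 40_000  # many docs share an (email1, email2) pair

def best_plan(identifier):
    """Score candidate plans on this one query; the winner gets cached."""
    return min(["identifier", "email1_email2"],
               key=lambda ix: docs_examined(ix, identifier))

cached = best_plan("welcome-message")     # re-planning triggered by the bad value
print(cached)                             # email1_email2 -- wrong for most queries
print(docs_examined(cached, "user-123"))  # 40000 examined instead of 1
```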

So how did we fix it? We thought about changing our indexes to…

{ "email1": 1, "email2": 1, "identifier": 1 }

But that would have taken us too long to deploy across our multiple shards and been quite disruptive. We could also have implemented index filters, but decided against them as they don’t persist through restarts. As a short-term fix we deleted the documents with the “welcome-message” identifier, and the developers agreed to append a random code to the end of the identifier, i.e. “welcome-message-s6geR4tgds36”. This quickly resolved the problem and we’ve had no repeat instances since. Databases, including NoSQL ones, love uniqueness!
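The suffix fix can be as simple as the sketch below; the developers’ actual implementation isn’t shown in the post, so this is only an illustration of the idea.

```python
# Sketch of the identifier fix: append a random suffix so values stay
# high-cardinality. Not the developers' actual implementation.
import secrets

def unique_identifier(base):
    """Return base plus a random URL-safe suffix, e.g. welcome-message-s6geR4tgds36."""
    return "{}-{}".format(base, secrets.token_urlsafe(9))

print(unique_identifier("welcome-message"))
```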

In the future MongoDB may implement something like the column statistics that many traditional databases employ to solve these problems. In the meantime we all need to be a little more aware of our indexes and the data being pushed into our systems.

A few Splunk queries for MongoDB logs

Here are a few Splunk queries I’ve used to supply data for a dashboard I use to manage a MongoDB Cluster.

Election events

If any MongoDB elections happen at 3AM on a Wednesday night, I want to know about it. This query, added to a single-value panel, allows me to do this easily…

host=mongo* source=/var/log/mongo*.log "Starting an election" | stats count


I also want to know about any rollbacks that happen during an election…

host=mongo* source=/var/log/mongo*.log "beginning rollback" | stats count

Log message with severity ERROR

Count log messages with ERROR severity

host=mongo* source="/var/log/mongodb/*.log" | rex "(?<timestamp>^\d\d\d\d-\d\d-\d\dT\d\d:\d\d:\d\d.\d\d\d\+\d\d\d\d) (?<severity>.) (?<component>\S*) " | where severity="E" | stats count
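The rex pattern above maps to an ordinary regex, so it can be checked offline against a sample pre-4.4-style MongoDB log line (the log line here is invented for illustration):

```python
# Check the severity/component extraction from the Splunk rex above against a
# sample MongoDB log line (the log line itself is invented for illustration).
import re

PATTERN = re.compile(
    r"(?P<timestamp>^\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d{3}\+\d{4}) "
    r"(?P<severity>.) (?P<component>\S*) ")

line = "2019-03-01T03:12:45.123+0000 E STORAGE  [initandlisten] example error message"
m = PATTERN.match(line)
print(m.group("severity"), m.group("component"))  # E STORAGE
```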

Chunk moves initiated

Have any chunks moved in the period…

host=mongo* source="/var/log/mongodb/*.log" "moving chunk" | stats count

State changes

How many state changes, i.e. PRIMARY -> SECONDARY, occurred in the period…

host=mongo* "is now in state" | stats count