Spotting missing indexes for MariaDB & MySQL

Here’s a query I use with MySQL / MariaDB to spot columns that might need indexing. It joins a few information_schema views to flag any columns whose names end in the characters “id” but are not indexed. The ORDINAL_POSITION = 1 condition checks that the column sits at the head of an index, so a column that only appears at position 2 or later in a composite index is still reported as unindexed.

SELECT t.TABLE_SCHEMA, t.TABLE_NAME, c.COLUMN_NAME, IFNULL(kcu.CONSTRAINT_NAME, 'Not indexed') AS `Index`
FROM information_schema.TABLES t
INNER JOIN information_schema.`COLUMNS` c
	ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
	AND c.TABLE_NAME = t.TABLE_NAME
	AND c.COLUMN_NAME LIKE '%Id'
LEFT JOIN information_schema.`KEY_COLUMN_USAGE` kcu
	ON kcu.TABLE_SCHEMA = t.TABLE_SCHEMA
	AND kcu.TABLE_NAME = t.TABLE_NAME
	AND kcu.COLUMN_NAME = c.COLUMN_NAME
	AND kcu.ORDINAL_POSITION = 1	
WHERE kcu.TABLE_SCHEMA IS NULL
AND t.TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql');

The idea here is that a column named something like business_key_id is probably a key and should be indexed. This gives us a list of columns that we can consider for indexing. As usual, your mileage may vary according to your own database design. Do you have any better ideas?
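To act on a row from that report you’d add an index on the flagged column. For illustration only (the orders table and customer_id column here are made up, not from the report)…

ALTER TABLE `orders` ADD INDEX `idx_customer_id` (`customer_id`);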


Moving an InnoDB Database with FLUSH TABLES ... FOR EXPORT

If we wanted to move a large InnoDB database our options used to be very limited: essentially we had to mysqldump the single database or move the entire tablespace. I did have an idea for moving a single InnoDB database by copying files, but only ever tried it out with TokuDB. The method worked but seemed to frighten the developers, so it’s something I never pursued beyond proof of concept.

With the FLUSH TABLES ... FOR EXPORT feature we have another option, which may be more convenient in some cases. Here’s a practical example of the feature…

We’re using the sakila database and MariaDB 10.0.15, but this should also work with MySQL 5.6 onwards. Note that this procedure requires that innodb_file_per_table is enabled, and the tables will be read-only for the duration of the export.
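You can confirm the prerequisite before starting…

SHOW VARIABLES LIKE 'innodb_file_per_table';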

First we need to generate a list of tables…

USE sakila
 
SET SESSION group_concat_max_len = 10240;
 
SELECT GROUP_CONCAT("`", TABLE_NAME, "`")
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_TYPE = 'BASE TABLE'
AND `ENGINE` = 'InnoDB';

From this output we can build the statement to export the tables…

FLUSH TABLES `actor`,`address`,`category`,`city`,`country`,`customer`,`film`,`film_actor`,`film_category`,`inventory`,`language`,`payment`,`rental`,`staff`,`store` FOR EXPORT;
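If you’d rather not paste the list by hand, the same information_schema query can build the whole statement for you…

SELECT CONCAT("FLUSH TABLES ",
		GROUP_CONCAT("`", TABLE_NAME, "`"),
		" FOR EXPORT;")
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_TYPE = 'BASE TABLE'
AND `ENGINE` = 'InnoDB';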

Create a database on the destination server you will be moving the tables to.

CREATE DATABASE sakila2;
 
USE sakila2;

Copy the structure of your database to sakila2 using your preferred method. Then execute the following queries; they generate the statements that will allow us to discard and import the tablespaces. Save the output.

USE sakila2;
 
SELECT CONCAT("ALTER TABLE `", TABLE_NAME, "` DISCARD TABLESPACE;")
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_TYPE = 'BASE TABLE'
AND `ENGINE` = 'InnoDB';
 
SELECT CONCAT("ALTER TABLE `", TABLE_NAME, "` IMPORT TABLESPACE;")
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_TYPE = 'BASE TABLE'
AND `ENGINE` = 'InnoDB';
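For the structure copy mentioned above, one possible sketch (assuming mysqldump, direct connectivity between the servers, and a dba account; the credentials are placeholders)…

mysqldump -h source-server -u dba -pXXXXXXXX --no-data sakila | mysql -h destination-server -u dba -pXXXXXXXX sakila2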

Discard the tablespaces on the destination server so the new ones can be imported…

SET @@session.foreign_key_checks = 0;
ALTER TABLE `actor` DISCARD TABLESPACE;
ALTER TABLE `address` DISCARD TABLESPACE;
ALTER TABLE `category` DISCARD TABLESPACE;
ALTER TABLE `city` DISCARD TABLESPACE;
ALTER TABLE `country` DISCARD TABLESPACE;
ALTER TABLE `customer` DISCARD TABLESPACE;
ALTER TABLE `film` DISCARD TABLESPACE;
ALTER TABLE `film_actor` DISCARD TABLESPACE;
ALTER TABLE `film_category` DISCARD TABLESPACE;
ALTER TABLE `inventory` DISCARD TABLESPACE;
ALTER TABLE `language` DISCARD TABLESPACE;
ALTER TABLE `payment` DISCARD TABLESPACE;
ALTER TABLE `rental` DISCARD TABLESPACE;
ALTER TABLE `staff` DISCARD TABLESPACE;
ALTER TABLE `store` DISCARD TABLESPACE;
SET @@session.foreign_key_checks = 1;

While the table locks are still held we need to copy all of the ibd and cfg files to the destination server. In later versions of MySQL / MariaDB you don’t strictly need the cfg files, as the table structures can be discovered.

scp /var/lib/mysql/sakila/*.{ibd,cfg} destination-server:/var/lib/mysql/sakila2/

Once the copy is complete unlock the tables at the source…

UNLOCK TABLES;

You probably need to change the ownership of the files at the destination…

chown -R mysql:mysql /var/lib/mysql/sakila2

Now we should be ready to import the tablespaces…

USE sakila2;
 
SET @@session.foreign_key_checks = 0;
ALTER TABLE `actor` IMPORT TABLESPACE;
ALTER TABLE `address` IMPORT TABLESPACE;
ALTER TABLE `category` IMPORT TABLESPACE;
ALTER TABLE `city` IMPORT TABLESPACE;
ALTER TABLE `country` IMPORT TABLESPACE;
ALTER TABLE `customer` IMPORT TABLESPACE;
ALTER TABLE `film` IMPORT TABLESPACE;
ALTER TABLE `film_actor` IMPORT TABLESPACE;
ALTER TABLE `film_category` IMPORT TABLESPACE;
ALTER TABLE `inventory` IMPORT TABLESPACE;
ALTER TABLE `language` IMPORT TABLESPACE;
ALTER TABLE `payment` IMPORT TABLESPACE;
ALTER TABLE `rental` IMPORT TABLESPACE;
ALTER TABLE `staff` IMPORT TABLESPACE;
ALTER TABLE `store` IMPORT TABLESPACE;
SET @@session.foreign_key_checks = 1;

Now your exported tables should be ready for action. If you want to make sure then run some CHECK TABLE statements against your data…

SELECT CONCAT("CHECK TABLE ", TABLE_NAME, ";")
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_TYPE = 'BASE TABLE';

A MariaDB Multi-Master setup example

Here’s a very quick example of how to set up Multi-Master replication with MariaDB. It’s light on detail to focus only on the steps you actually need to take, so have a good read of the documentation before attempting this. The example also uses GTIDs, so you’ll need some understanding of those as well.

The example here is pretty simple: two master servers replicating to a single slave. From master1 we will replicate all databases; from master2 we will replicate a single database to the same slave. I’ve assumed you’re working on the slave itself, as I reference it by 127.0.0.1.

First dump and load the databases we want…

# all databases from one server
mysqldump -h master1 -P3306 --master-data=2 --routines --single-transaction --all-databases -u dba -pXXXXXXXX > all_databases.sql
cat all_databases.sql | mysql -h 127.0.0.1 -P3306
 
# Single database from another master
mysqldump -h master2 -P3306 --master-data=2 --routines --single-transaction --databases mydb -u dba -pXXXXXXXX > single_database.sql
cat single_database.sql | mysql -h 127.0.0.1 -P3306

Extract the GTID co-ordinates…

head -n 30 all_databases.sql | grep -i gtid
head -n 30 single_database.sql | grep -i gtid

You’ll see something like this displayed…

-- GTID to start replication from
-- SET GLOBAL gtid_slave_pos='XXXX-XXXX-XXXX';
-- GTID to start replication from
-- SET GLOBAL gtid_slave_pos='XXXX-XXXX-XXXX';

Extract the GTID positions and combine them into a statement like the one below to execute on the slave…

SET GLOBAL gtid_slave_pos='XXXX-XXXX-XXXX,XXXX-XXXX-XXXX';
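The extract-and-combine step can be scripted. Here’s a rough shell sketch; the header files and GTID values below are invented for illustration, so substitute your real dump files and positions…

```shell
# Write sample dump headers like the ones mysqldump produces
# (the GTID values here are made up for the example).
printf -- "-- SET GLOBAL gtid_slave_pos='0-101-5000';\n" > all_databases_header.txt
printf -- "-- SET GLOBAL gtid_slave_pos='1-102-7000';\n" > single_database_header.txt

# Pull out each position and join them with a comma.
pos1=$(grep -o "gtid_slave_pos='[^']*'" all_databases_header.txt | cut -d"'" -f2)
pos2=$(grep -o "gtid_slave_pos='[^']*'" single_database_header.txt | cut -d"'" -f2)
echo "SET GLOBAL gtid_slave_pos='${pos1},${pos2}';"
```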

Configure the slave to point at both masters…

CHANGE MASTER 'mirror_master1' TO
			MASTER_HOST='master1',
			MASTER_PORT=3306,
			MASTER_USER='replication',
			MASTER_PASSWORD='XXXXXXXX',
			MASTER_USE_GTID=slave_pos;
 
CHANGE MASTER 'single_database' TO
			MASTER_HOST='master2',
MASTER_PORT=3306,
			MASTER_USER='replication',
			MASTER_PASSWORD='XXXXXXXX',
			MASTER_USE_GTID=slave_pos;

Run the following statement so the slave will only process binlog entries for the appropriate database from master2… (you’d also want this in your cnf file)

SET GLOBAL single_database.replicate_do_db='mydb';

Now you can start the slave threads…

START ALL SLAVES;

Double check the status of both connections to make sure it has worked…

SHOW ALL SLAVES STATUS \G

Identify Cross Database Foreign Keys

I’ve blogged before about cross-database foreign keys and what I think of them. I had a developer wanting to check for such references between two databases today. Here’s what I came up with to do this…

SELECT t.NAME AS TABLE_NAME,
		i.NAME AS INDEX_NAME,
		f.*
FROM information_schema.`INNODB_SYS_FOREIGN` f
INNER JOIN information_schema.`INNODB_SYS_INDEXES` i
	ON i.`NAME` = SUBSTRING(f.`ID`, LOCATE('/', f.`ID`) + 1)
INNER JOIN information_schema.`INNODB_SYS_TABLES` t
	ON t.TABLE_ID = i.TABLE_ID
WHERE (f.FOR_NAME LIKE 'db1%' AND f.REF_NAME LIKE 'db2%')
OR (f.REF_NAME LIKE 'db1%' AND f.FOR_NAME LIKE 'db2%');
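A quick way to see the query in action is to create a deliberate cross-database reference; the throwaway db1 / db2 schemas below are invented just for this test, and the child table should then show up in the results…

CREATE DATABASE db1;
CREATE DATABASE db2;

CREATE TABLE db1.parent (id INT PRIMARY KEY) ENGINE=InnoDB;

CREATE TABLE db2.child (
	id INT PRIMARY KEY,
	parent_id INT,
	FOREIGN KEY (parent_id) REFERENCES db1.parent (id)
) ENGINE=InnoDB;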

N.B. This query only applies to the InnoDB storage engine.


The Journey of a MacBook Pro

I don’t normally post stuff like this but I thought it was quite interesting. The journey of my new MacBook Pro is truly a reflection of the globalised world we now live in.

[Image: map showing the journey of the MacBook Pro]

The list of cities it visited, in reverse order, is…

  • Watford, United Kingdom
  • Stansted, United Kingdom
  • Barking, United Kingdom
  • Koeln, Germany
  • Warsaw, Poland
  • Almaty, Kazakhstan
  • Incheon, Korea, Republic of
  • Shanghai, China