MyISAM key cache stats with information_schema.KEY_CACHES

Following on from a post last week on INNODB_BUFFER_PAGE queries, I thought I’d look at the equivalent for the MyISAM key cache. The information_schema.KEY_CACHES table is MariaDB-only at the moment.

SELECT KEY_CACHE_NAME,
	  FULL_SIZE / 1024 / 1024 AS key_buffer_mb,
	  USED_BLOCKS * BLOCK_SIZE / 1024 / 1024 AS used_mb,
	  UNUSED_BLOCKS * BLOCK_SIZE / 1024 / 1024 AS unused_mb,
	  100 - ((`READS` / READ_REQUESTS) * 100) AS read_cache_hit_rate,
	  (DIRTY_BLOCKS / (USED_BLOCKS + UNUSED_BLOCKS)) * 100 AS dirty_percentage,
	  READ_REQUESTS / (`READ_REQUESTS` + WRITE_REQUESTS) AS read_write_ratio
FROM information_schema.`KEY_CACHES`;

Most of the data returned should be self-explanatory. The read_write_ratio indicates the proportion of reads versus writes: for example, 1.0 indicates 100% reads, while 0.75 would indicate 75% reads and 25% writes.
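
If you’re on MySQL, which doesn’t have the KEY_CACHES table, you can get roughly the same picture for the default key cache from the Key_% status counters (a sketch; note these cover the default cache only, not any named caches):

SHOW GLOBAL STATUS LIKE 'Key%';
-- read cache hit rate is then 100 - ((Key_reads / Key_read_requests) * 100)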


INNODB_BUFFER_PAGE queries

If you want to get some high-level statistics on the buffer pool in MySQL / MariaDB you can use the INNODB_BUFFER_POOL_STATS table in the information_schema database.
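
For example, something along these lines gives a quick per-pool summary (column names as per the 5.6 / 10.0 documentation, so double-check against your version):

SELECT POOL_ID,
	  POOL_SIZE,
	  FREE_BUFFERS,
	  DATABASE_PAGES,
	  MODIFIED_DATABASE_PAGES,
	  HIT_RATE
FROM information_schema.INNODB_BUFFER_POOL_STATS;
-- the sizes here are page counts, not bytes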

If you need a little more detail then the INNODB_BUFFER_PAGE table is a good place to start. This view contains details about each page in the buffer pool, so we can use it to get anything from database-level figures all the way down to individual indexes. Here are a few queries to get you started (they assume the default 16 KB innodb_page_size)…

Buffer pool consumption by database

SELECT bp.POOL_ID, 
		NULLIF(SUBSTRING(bp.TABLE_NAME, 1, LOCATE(".", bp.TABLE_NAME) - 1), '') AS db_name, 
		(COUNT(*) * 16) / 1024 / 1024 AS buffer_pool_consumption_gb
FROM information_schema.`INNODB_BUFFER_PAGE` bp
GROUP BY bp.POOL_ID, db_name
ORDER BY buffer_pool_consumption_gb DESC;

The ‘NULL’ database here will consist of various InnoDB internal structures, for example the undo log. You can inspect these pages individually with…

SELECT *
FROM information_schema.`INNODB_BUFFER_PAGE` bp
WHERE bp.TABLE_NAME IS NULL;
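
Or, for a summary rather than individual rows, group them by page type (a small variation on the queries above):

SELECT bp.PAGE_TYPE,
	  (COUNT(*) * 16) / 1024 AS consumption_mb
FROM information_schema.`INNODB_BUFFER_PAGE` bp
WHERE bp.TABLE_NAME IS NULL
GROUP BY bp.PAGE_TYPE
ORDER BY consumption_mb DESC;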

Buffer pool consumption by database/table & page type

SELECT bp.POOL_ID, 
		NULLIF(SUBSTRING(bp.TABLE_NAME, 1, LOCATE(".", bp.TABLE_NAME) - 1), '') AS db_name, 
		bp.PAGE_TYPE, 
		bp.TABLE_NAME, 
		(COUNT(*) * 16) / 1024 / 1024 AS buffer_pool_consumption_gb
FROM information_schema.`INNODB_BUFFER_PAGE` bp
GROUP BY bp.POOL_ID, db_name, bp.PAGE_TYPE, bp.TABLE_NAME
ORDER BY bp.POOL_ID, db_name, bp.PAGE_TYPE, buffer_pool_consumption_gb DESC;

Buffer pool consumption by index

SELECT bp.POOL_ID, 
		NULLIF(SUBSTRING(bp.TABLE_NAME, 1, LOCATE(".", bp.TABLE_NAME) - 1), '') AS db_name, 
		bp.TABLE_NAME, 
		bp.INDEX_NAME, (COUNT(*) * 16) / 1024 / 1024 AS buffer_pool_consumption_gb
FROM information_schema.`INNODB_BUFFER_PAGE` bp
GROUP BY bp.POOL_ID, db_name, bp.TABLE_NAME, bp.INDEX_NAME
ORDER BY bp.POOL_ID, db_name, buffer_pool_consumption_gb DESC;

Missing InnoDB information_schema Views in MariaDB

While working on a MariaDB 10.0.14 instance today I noticed the INNODB_% tables were missing from information_schema, even though the InnoDB plugin itself was clearly loaded.

SHOW PLUGINS;
Name                         Status    Type                Library          License  
InnoDB                       ACTIVE    STORAGE ENGINE      ha_innodb.so     GPL      

Checking the documentation I could see that these views are supplied by the InnoDB plugin itself, so the solution should be in the form of an INSTALL PLUGIN statement. I executed the following query on another MariaDB instance of the same version that did have the INNODB_% tables in information_schema…

SELECT CONCAT("INSTALL PLUGIN ", PLUGIN_NAME, " SONAME 'ha_innodb.so';") 
FROM information_schema.PLUGINS
WHERE PLUGIN_NAME LIKE 'INNODB_%';

This generated the following statements, which I then executed on the instance that was missing the views to create the INNODB_% tables…

INSTALL PLUGIN INNODB_TRX SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_LOCKS SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_LOCK_WAITS SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_CMP SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_CMP_RESET SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_CMPMEM SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_CMPMEM_RESET SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_CMP_PER_INDEX SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_CMP_PER_INDEX_RESET SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_BUFFER_PAGE SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_BUFFER_PAGE_LRU SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_BUFFER_POOL_STATS SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_METRICS SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_FT_DEFAULT_STOPWORD SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_FT_DELETED SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_FT_BEING_DELETED SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_FT_CONFIG SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_FT_INDEX_CACHE SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_FT_INDEX_TABLE SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_SYS_TABLES SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_SYS_TABLESTATS SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_SYS_INDEXES SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_SYS_COLUMNS SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_SYS_FIELDS SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_SYS_FOREIGN SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_SYS_FOREIGN_COLS SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_SYS_TABLESPACES SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_SYS_DATAFILES SONAME 'ha_innodb.so';
INSTALL PLUGIN INNODB_CHANGED_PAGES SONAME 'ha_innodb.so';

Spotting missing indexes for MariaDB & MySQL

Here’s a query I use for MySQL / MariaDB to spot any columns that might need indexing. It uses a handful of information_schema views to flag any columns whose names end with the characters “id” and are not indexed. The ORDINAL_POSITION condition checks that the column sits at the head of an index, so a column that only appears in an index at position 2 or higher doesn’t count as indexed.

SELECT t.TABLE_SCHEMA, t.TABLE_NAME, c.COLUMN_NAME, IFNULL(kcu.CONSTRAINT_NAME, 'Not indexed') AS `Index`
FROM information_schema.TABLES t
INNER JOIN information_schema.`COLUMNS` c
	ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
	AND c.TABLE_NAME = t.TABLE_NAME
	AND c.COLUMN_NAME LIKE '%Id'
LEFT JOIN information_schema.`KEY_COLUMN_USAGE` kcu
	ON kcu.TABLE_SCHEMA = t.TABLE_SCHEMA
	AND kcu.TABLE_NAME = t.TABLE_NAME
	AND kcu.COLUMN_NAME = c.COLUMN_NAME
	AND kcu.ORDINAL_POSITION = 1	
WHERE kcu.TABLE_SCHEMA IS NULL
AND t.TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql');

The idea here is that any column named something like business_key_id should probably be indexed, so this gives us a list of candidates to consider. As usual, your mileage may vary according to your own database design. Do you have any better ideas?
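
One caveat with the query above: KEY_COLUMN_USAGE only covers columns that belong to a constraint (primary key, unique or foreign key), so a column that is covered only by a plain secondary index can still show up as ‘Not indexed’. Swapping the join to information_schema.STATISTICS catches any index; here’s a sketch of the same query with that change:

SELECT t.TABLE_SCHEMA, t.TABLE_NAME, c.COLUMN_NAME, IFNULL(s.INDEX_NAME, 'Not indexed') AS `Index`
FROM information_schema.TABLES t
INNER JOIN information_schema.`COLUMNS` c
	ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
	AND c.TABLE_NAME = t.TABLE_NAME
	AND c.COLUMN_NAME LIKE '%Id'
LEFT JOIN information_schema.`STATISTICS` s
	ON s.TABLE_SCHEMA = t.TABLE_SCHEMA
	AND s.TABLE_NAME = t.TABLE_NAME
	AND s.COLUMN_NAME = c.COLUMN_NAME
	AND s.SEQ_IN_INDEX = 1
WHERE s.TABLE_SCHEMA IS NULL
AND t.TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql');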


Moving an InnoDB Database with FLUSH TABLES .. FOR EXPORT

In the past, if we wanted to move a large InnoDB database our options were very limited. Essentially we had to mysqldump the single database or move the entire tablespace. I did have an idea for moving a single InnoDB database by copying files, but I only ever tried it out with TokuDB. The method worked but seemed to frighten the developers, so it’s something I never pursued beyond proof of concept.

With the FLUSH TABLES .. FOR EXPORT feature we have another option which may be more convenient in some cases. Here’s a practical example of this feature…

We’re using the sakila database and MariaDB 10.0.15, but it should also work with MySQL 5.6 onwards. Note that this procedure requires innodb_file_per_table to be enabled, and the tables will be read-only for the duration of the export.
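
You can confirm that setting before you start:

SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table';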

First we need to generate a list of the InnoDB tables in the database…

USE sakila;
 
SET SESSION group_concat_max_len = 10240;
 
SELECT GROUP_CONCAT("`", TABLE_NAME, "`")
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_TYPE = 'BASE TABLE'
AND `ENGINE` = 'InnoDB';

From this output we can build the statement to export the tables…

FLUSH TABLES `actor`,`address`,`category`,`city`,`country`,`customer`,`film`,`film_actor`,`film_category`,`inventory`,`language`,`payment`,`rental`,`staff`,`store` FOR EXPORT;
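
Alternatively, you can have the server build the whole statement in one go (a sketch that simply wraps the query above; it relies on the group_concat_max_len we set earlier):

SELECT CONCAT("FLUSH TABLES ", GROUP_CONCAT("`", TABLE_NAME, "`"), " FOR EXPORT;")
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_TYPE = 'BASE TABLE'
AND `ENGINE` = 'InnoDB';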

Create a database on the destination server that you will be moving the tables to.

CREATE DATABASE sakila2;
 
USE sakila2;

Copy the structure of your database to sakila2 using your preferred method (one option is sketched below). Then execute the following queries on the destination. These will generate the statements that will allow us to discard and then import the tablespaces. Save the output.
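
For the structure copy, mysqldump with --no-data is one option (a sketch; adjust host names and credentials for your environment):

mysqldump --no-data sakila | mysql -h destination-server sakila2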

USE sakila2;
 
SELECT CONCAT("ALTER TABLE `", TABLE_NAME, "` DISCARD TABLESPACE;")
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_TYPE = 'BASE TABLE'
AND `ENGINE` = 'InnoDB';
 
SELECT CONCAT("ALTER TABLE `", TABLE_NAME, "` IMPORT TABLESPACE;")
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_TYPE = 'BASE TABLE'
AND `ENGINE` = 'InnoDB';

Discard the tablespaces on the destination server so the new ones can be imported…

SET @@session.foreign_key_checks = 0;
ALTER TABLE `actor` DISCARD TABLESPACE;
ALTER TABLE `address` DISCARD TABLESPACE;
ALTER TABLE `category` DISCARD TABLESPACE;
ALTER TABLE `city` DISCARD TABLESPACE;
ALTER TABLE `country` DISCARD TABLESPACE;
ALTER TABLE `customer` DISCARD TABLESPACE;
ALTER TABLE `film` DISCARD TABLESPACE;
ALTER TABLE `film_actor` DISCARD TABLESPACE;
ALTER TABLE `film_category` DISCARD TABLESPACE;
ALTER TABLE `inventory` DISCARD TABLESPACE;
ALTER TABLE `language` DISCARD TABLESPACE;
ALTER TABLE `payment` DISCARD TABLESPACE;
ALTER TABLE `rental` DISCARD TABLESPACE;
ALTER TABLE `staff` DISCARD TABLESPACE;
ALTER TABLE `store` DISCARD TABLESPACE;
SET @@session.foreign_key_checks = 1;

While the table locks are still held we need to copy all of the .ibd and .cfg files to the destination server. In later versions of MySQL / MariaDB you don’t strictly need the .cfg files as the table structures can be discovered.

scp /var/lib/mysql/sakila/{actor,address,category,etc}.{ibd,cfg} user@destination-server:/var/lib/mysql/sakila2

Once the copy is complete unlock the tables at the source…

UNLOCK TABLES;

You probably need to change the ownership of the files at the destination…

chown -R mysql:mysql /var/lib/mysql/sakila2

Now we should be ready to import the tablespaces…

USE sakila2;
 
SET @@session.foreign_key_checks = 0;
ALTER TABLE `actor` IMPORT TABLESPACE;
ALTER TABLE `address` IMPORT TABLESPACE;
ALTER TABLE `category` IMPORT TABLESPACE;
ALTER TABLE `city` IMPORT TABLESPACE;
ALTER TABLE `country` IMPORT TABLESPACE;
ALTER TABLE `customer` IMPORT TABLESPACE;
ALTER TABLE `film` IMPORT TABLESPACE;
ALTER TABLE `film_actor` IMPORT TABLESPACE;
ALTER TABLE `film_category` IMPORT TABLESPACE;
ALTER TABLE `inventory` IMPORT TABLESPACE;
ALTER TABLE `language` IMPORT TABLESPACE;
ALTER TABLE `payment` IMPORT TABLESPACE;
ALTER TABLE `rental` IMPORT TABLESPACE;
ALTER TABLE `staff` IMPORT TABLESPACE;
ALTER TABLE `store` IMPORT TABLESPACE;
SET @@session.foreign_key_checks = 1;

Now your imported tables should be ready for action. If you want to make sure, run some CHECK TABLE statements against your data…

SELECT CONCAT("CHECK TABLE ", TABLE_NAME, ";")
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_TYPE = "BASE TABLE";

UPDATE – 2014-02-20 Corrected scp command typo.