6 Useful Bash tips I wish I knew from day zero

Here are a few bash tricks I wish I’d been shown when I first picked up the shell. Please share any additional favorites you have.

Repeat the last command with sudo

How often do you type…

yum install long-list packages-devel

Only to be told…

You need to be root to perform this command.

Execute the following to install your packages…

sudo !!

The !! expands to the previous command executed in the shell, so sudo !! re-runs it with sudo prepended.
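The expansion can be previewed without executing anything. Here's a small sketch using bash's history builtin (the yum line is just a stand-in for your previous command)…

```shell
# history -p performs history expansion and prints the result
# without running it (scripts must enable history explicitly;
# interactive shells already have it on)
set -o history
history -s "yum install long-list packages-devel"  # pretend this was the last command
history -p 'sudo !!'   # prints: sudo yum install long-list packages-devel
```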

Save a readonly file in vi/vim

How many times have you opened a file in vi/vim, made lots of changes, only to be told it’s read-only when you attempt to save? Enter this in command mode to get around this…

:w !sudo tee %



Alias long commands

It can be time-consuming to retype long commands or search through your command history for them. Set up an alias instead.

alias shortcut="cmd -with=1 --lots=2 --of=3 -options=4 | piped -a -b -c"

The cmd with all of its options can now be executed as…

shortcut

Much simpler! Add these to your .bash_profile file to make them available permanently.
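As a quick sketch (the ll name is just an illustration): note that non-interactive scripts only expand aliases when expand_aliases is set, whereas interactive shells expand them by default…

```shell
shopt -s expand_aliases   # only needed in scripts, not interactive shells
alias ll='ls -lh'
ll /tmp                   # runs: ls -lh /tmp
```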

Clear your terminal window

I used to use clear for this, but that just shimmies everything upwards; the old output is still there when you scroll back. Use…

reset

…to actually clear the terminal screen.

Display output to a screen and save to a file

For a long time I was copying and pasting terminal output to save into text files. Then I discovered tee. Do this instead…

ls -lh | tee output.txt
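By default tee overwrites the file on each run; the -a flag appends instead, which is handy when collecting output from several commands…

```shell
ls -lh /etc | tee /tmp/output.txt      # overwrites /tmp/output.txt
ls -lh /var | tee -a /tmp/output.txt   # appends to the same file
```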

Command-line calculator

For simple calculations on the command-line you can use bc.

echo "2 * 2" | bc



The behavior of gtid_strict_mode in MariaDB

GTIDs in MariaDB are a nice improvement to replication and make fail-over a simpler process. I struggled a little with the explanation of gtid_strict_mode and what to expect. So I thought I’d run through a simple scenario to make my own understanding clear.

In this scenario I have created two instances of MySQL: primary and secondary. I have set up replication using the standard binlog filename & position, and then switched to GTID using slave_pos. I will cover the behavior of replication with gtid_strict_mode off and on, how problems are caused, how you might like to recover, and how to avoid problems.
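For reference, the switch from filename & position to GTID replication is done on the secondary with (MariaDB syntax)…

```sql
STOP SLAVE;
CHANGE MASTER TO master_use_gtid = slave_pos;
START SLAVE;
```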

gtid_strict_mode = 0

Consider the starting gtids for our primary and secondary instances…

primary: gtid_current_pos = 1-1-2146
secondary: gtid_slave_pos & gtid_current_pos = 1-1-2146

Run a few transactions on the primary…

mysql> CREATE DATABASE rhys;
mysql> USE rhys;
mysql> INSERT INTO t1 VALUES (1);
mysql> INSERT INTO t1 VALUES (2);
mysql> INSERT INTO t1 VALUES (3);

The gtid state now corresponds to…

primary: gtid_current_pos = 1-1-2151 (as are gtid_binlog_pos, gtid_binlog_state)
secondary: gtid_current_pos = 1-1-2151 (as are gtid_binlog_pos & gtid_slave_pos)
gtid_binlog_state = 1-2-2146,1-1-2151

Run a few transactions on the secondary. We would consider these to be errant transactions on our slave server.

mysql> USE rhys;
mysql> INSERT INTO t1 VALUES (101);
mysql> INSERT INTO t1 VALUES (102);
mysql> INSERT INTO t1 VALUES (103);
mysql> INSERT INTO t1 VALUES (104);
mysql> INSERT INTO t1 VALUES (105);
mysql> INSERT INTO t1 VALUES (999);

The gtid state is now as follows…

primary: gtid_current_pos = 1-1-2151 (as are gtid_binlog_pos, gtid_binlog_state)
secondary: gtid_binlog_pos = 1-2-2157
gtid_binlog_state = 1-1-2151, 1-2-2157
gtid_current_pos = 1-2-2157
gtid_slave_pos = 1-1-2151

Run another transaction on the primary. Note the next generated gtid here would be 1-1-2152

mysql> USE rhys;
mysql> INSERT INTO t1 VALUES (1001);

What’s our gtid state?

primary: gtid_current_pos = 1-1-2152 (as are gtid_binlog_pos, gtid_binlog_state)
secondary: gtid_binlog_pos = 1-1-2152
gtid_binlog_state = 1-2-2157,1-1-2152
gtid_current_pos = 1-1-2152
gtid_slave_pos = 1-1-2152

SHOW SLAVE STATUS on the secondary will show the slave thread running happily. In other words, when gtid_strict_mode = 0, the replication thread behaves in a similar way to standard replication. We just have some gtid order checking and extra validation when failing over. But errant transactions are allowed on slave servers.
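The various positions quoted above can be inspected on either server with…

```sql
-- Inspect the GTID state (MariaDB)
SELECT @@gtid_current_pos, @@gtid_binlog_pos, @@gtid_slave_pos;
SHOW GLOBAL VARIABLES LIKE 'gtid%';
```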

gtid_strict_mode = 1

Now set gtid_strict_mode = 1 in my.cnf and restart both instances. Run a transaction on the primary…

mysql> USE rhys;
mysql> INSERT INTO t1 VALUES (100001);

Our gtid state is now as follows…

primary: gtid_current_pos = 1-1-2153 (as are gtid_binlog_pos, gtid_binlog_state)
secondary: gtid_binlog_pos = 1-1-2152
gtid_binlog_state = 1-2-2157,1-1-2153
gtid_current_pos = 1-1-2153
gtid_slave_pos = 1-1-2153

SHOW SLAVE STATUS will show the slave thread running happily even after we have put it into gtid_strict_mode. This is because the errant transactions happened before strict mode was enabled, so they get a pass on this occasion. Execute an errant transaction on the secondary MariaDB instance…

mysql> USE rhys;
mysql> INSERT INTO t1 VALUES (17);

SHOW SLAVE STATUS will continue to show the slave thread is running happily. The gtid state is as follows…

primary: gtid_current_pos = 1-1-2153 (as are gtid_binlog_pos, gtid_binlog_state)
secondary: gtid_binlog_pos = 1-2-2158
gtid_binlog_state = 1-1-2153,1-2-2158
gtid_current_pos = 1-2-2158
gtid_slave_pos = 1-1-2153

Next, execute a transaction on the primary…

mysql> USE rhys;
mysql> INSERT INTO t1 VALUES (8888);

Our gtid state is..

primary: gtid_current_pos = 1-1-2154 (as are gtid_binlog_pos, gtid_binlog_state)
secondary: gtid_binlog_pos = 1-2-2158
gtid_binlog_state = 1-1-2153,1-2-2158
gtid_current_pos = 1-2-2158
gtid_slave_pos = 1-1-2153

Note the gtid_current_pos for the primary has incremented, but the gtid_slave_pos for the secondary has not. Something has happened: SHOW SLAVE STATUS reveals the slave thread on the secondary has failed with the error…

An attempt was made to binlog GTID 1-1-2154 which would create an out-of-order sequence number with existing GTID 1-2-2158, and gtid strict mode is enabled.

How can this be fixed?

The smartass answer here would be: don’t execute errant transactions on your slave servers. But since we’re here already, let’s have a look at some alternatives…

The skip trick below works, but you have to repeat it for every transaction until the GTIDs no longer clash. Don’t forget this skips entire transactions, not individual statements.

mysql> STOP SLAVE;
mysql> SET GLOBAL sql_slave_skip_counter = 1;
mysql> START SLAVE;

Alternatively if there are a lot of transactions it may be easier to do…

mysql> SET GLOBAL gtid_strict_mode = 0;

Once the primary gtid sequence is higher the slave won’t complain any more. We can then switch back to gtid_strict_mode = 1.

It’s also possible to switch back to binlog filename & positions, replicate past the problem, and then switch back to gtid style replication. Whatever approach you take you have a data verification exercise to get cracking on.

Strategies to avoid this

Having no writes on slaves is the ideal situation, but perhaps we’re forced to allow them for various reasons. Instead we may wish to consider…

1. Execute SET SESSION gtid_domain_id = 0 before your statement. Using a domain_id that does not exist elsewhere in your topology avoids any clashes. Be careful about your replication hierarchy (i.e. slaves of slaves).
2. Execute SET SESSION sql_log_bin = 0 before your statement. No gtid generated, no problem. This can be dangerous though.
3. Be aware of which statements & actions will increment the gtid counter. You might not expect it, but OPTIMIZE TABLE and ANALYZE TABLE will increment the sequence number.
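A minimal sketch of option 2, reusing the t1 table from the scenario above (the value is arbitrary)…

```sql
-- No binlog event means no GTID is generated for this statement,
-- so gtid_strict_mode has nothing to complain about later.
SET SESSION sql_log_bin = 0;
INSERT INTO t1 VALUES (12345);
SET SESSION sql_log_bin = 1;
```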

I’d be interested to know any thoughts you might have on this.

Better “read_only” slaves in MariaDB / MySQL

It’s always been an annoyance that read_only in MySQL actually means “read only apart from those with the SUPER priv”. Now I know it’s best practice not to give this permission out to users but sometimes we are stuck with the choices others have made.

We now have another option in the form of tx_read_only which was introduced in MariaDB 10 and MySQL 5.6.5. If we set this variable to ON then any writes will be rejected with…

ERROR 1792 (25006): Cannot execute statement in a READ ONLY transaction.

This also applies to users with the SUPER priv, although such a user can still perform some actions permitted by that privilege, i.e. SET GLOBAL var, PURGE BINARY LOGS, CHANGE MASTER TO, etc. Check your specific use case. Unfortunately this also prevents replication from working. We can get around this restriction using the init_slave variable and the following SQL…


SET SESSION TRANSACTION READ WRITE;

Now the replication thread will work! This all goes to hell if a user logs in and executes the same statement for their own session…

SET SESSION TRANSACTION READ WRITE;
You may be able to check the performance_schema to see if you have any users doing this…

SELECT digest_text, count_star
FROM performance_schema.`events_statements_summary_by_digest`
WHERE digest_text LIKE '%READ%WRITE%';

It may be possible to override this using init_connect if you have any clients executing this by default as part of the connection setup (I need to test this). It would also be a good idea to keep using read_only as well. The my.cnf settings for this would be…

transaction_read_only = 1
read_only = 1

tx_read_only doesn’t seem to work in the cnf file for some reason; transaction_read_only is the name that works there. This is an inconsistency they may fix in a future version. There are obviously still holes in this approach, but I’m hopeful this will make my setup a little more resilient.

Table & Tablespace encryption in MariaDB 10.1.3

Here’s just a few notes detailing my investigations into table & tablespace encryption in MariaDB 10.1.3.

Table Encryption

First, generate an encryption key…

linux> openssl enc -aes-256-cbc -k secretPassword -P -md sha1

Next build your key file from the above output. Each line should be in the following format…

<encryption key id>;<hex-encoded key>

This will look something like…

1;<the hex string from the key= line above>

Where 1 is the id of the key. I can’t see anything specific in the documentation, but presumably you’d just start a new line for each key. Place this into a file called key.txt. It’s worth noting that you must generate the enc file in the directory you will point MariaDB to. Decryption doesn’t seem to work after the files are moved.

linux> openssl enc -aes-256-cbc -md sha1 -k secretPassword2 -in key.txt -out key.enc
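Before cleaning up, it’s worth checking the encrypted copy can actually be decrypted again; the same openssl command with -d reverses the operation. A self-contained round-trip sketch (the key line here is a dummy value)…

```shell
# Round-trip check: encrypt a key file, decrypt it, compare the two
echo "1;abc123" > key.txt   # stand-in for your real key file
openssl enc -aes-256-cbc -md sha1 -k secretPassword2 -in key.txt -out key.enc
openssl enc -d -aes-256-cbc -md sha1 -k secretPassword2 -in key.enc -out key.check
diff key.txt key.check && echo "key file round-trips OK"
```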

Don’t forget to clean up the server of any unencrypted key files. You may wish to keep copies securely on a different server. Now we need to configure MariaDB…

linux> vi /etc/my.cnf.d/server.cnf

Add the following options…

[mysqld]
plugin_load_add = file_key_management
file_key_management_filename = /home/mdb/key.enc
file_key_management_filekey = secretPassword2
Note that the filekey password will be visible to anyone on the server. In MariaDB 10.1.4 we can set it with a file path, i.e. FILE:/path/to/pwd.txt, and it would be a sensible idea to do so for production. Now create and view some data using the table encryption feature…

linux> systemctl restart mysql
mariadb> CREATE DATABASE encrypted;
mariadb> USE encrypted;
mariadb> CREATE TABLE test (id INT, str VARCHAR(50)) PAGE_ENCRYPTION = 1;
mariadb> INSERT INTO test VALUES (101, 'Hello, World!');
mariadb> SELECT * FROM test;
# data will display
mariadb> exit

Let’s simulate someone stealing the data files and attempting to access them. We can do this by simply renaming the key file and restarting the MariaDB server…

linux> mv /home/mdb/key.enc /home/mdb/_key.enc
linux> systemctl restart mysql
linux> mysql -u root -p
mariadb> USE encrypted;
mariadb> SELECT * FROM test;

We will receive the following error…

ERROR 1932 (42S02): Table ‘encrypted.test’ doesn’t exist in engine

The error log will also contain lots of messages complaining about missing ibd files, tablespace corruption and so on. Now put the key back…

linux> mv /home/mdb/_key.enc /home/mdb/key.enc
linux> systemctl restart mysql
linux> mysql -u root -p

The data can be accessed again…

mariadb> USE encrypted;
mariadb> SELECT * FROM test;

To turn off encryption on individual tables, use ALTER TABLE…

mariadb> ALTER TABLE test PAGE_ENCRYPTION = 0;
Tablespace Encryption

I also had a quick stab at tablespace encryption but ran into a whole series of problems. Lots of the documentation appears to be wrong. For example innodb-encrypt-tables insists it must be set with a value while the documentation shows otherwise. Looks like there’s been further work on encryption for 10.1.4 so I’ll have another go with that.

UPDATE 2015-04-18: I had a play with MariaDB 10.1.4. Various issues found. See the bug report.

Elasticsearch: Turn off index replicas

If you’re playing with elasticsearch on a single host you may notice your cluster health is always yellow. This is probably because your indexes are set to have one replica but there’s no other node to replicate it to.

To confirm whether this is the case you can look in elasticsearch-head. In the Overview tab you should see a bunch of index shards marked as “Unassigned”. If the unassigned shards make up 50% of the total, you probably have one shard and one replica per index. To make the status green, change replicas to zero…

curl -XPUT 'http://localhost:9200/_settings' -d '
{
    "index" : {
        "number_of_replicas" : 0
    }
}'
Please note this will affect all indexes. Do not execute this on a production cluster and be sure to back up any data you don’t want to lose.
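If you’d rather leave other indexes alone, the same settings endpoint can be scoped to a single index (myindex here is a placeholder for your index name)…

```shell
curl -XPUT 'http://localhost:9200/myindex/_settings' -d '
{
    "index" : { "number_of_replicas" : 0 }
}'
```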