Modifying elasticsearch index settings

To view the settings of an index, run the following at the command line…

curl -XGET http://hostname:9200/indexname/_settings

From here you can identify the setting you need and modify it as you wish. This example sets the number of replicas to zero.

curl -XPUT http://hostname:9200/indexname/_settings -d '{ "index": {"number_of_replicas":"0"}}'
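Other dynamic index settings can be changed in the same way. As an illustration (the hostname, index name, and 30s value are placeholders; index.refresh_interval is another setting that can be updated dynamically), this would reduce how often the index is refreshed:

```
curl -XPUT http://hostname:9200/indexname/_settings -d '{ "index": {"refresh_interval":"30s"}}'
```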

For further details see the manual.


Removing logstash indices from elasticsearch

I’ve been playing with EFK and elasticsearch ended up eating all of the RAM on my test system. I discovered this was because it was attempting to cache all of these indexes. Since this is a test system I’m not too bothered about keeping a long history here, so I wrote this bash script to remove logstash indexes from elasticsearch, then compress and archive them. This reduces the memory pressure and gives a better working system. Explanatory comments are included.

#!/bin/bash
 
#######################################
# Author: Rhys Campbell               #
# Created: 2014-08-06                 #
# Description: Removes indices older  #
# than N days from elasticsearch      #
# memory and archives them using lzma #
# compression.                        #
#######################################
INDICIES_PREFIX="logstash"; # Index name prefix
INDICIES_ROOT="/data/elasticsearch/data/elasticsearch/nodes/0/indices/$INDICIES_PREFIX"; # Daily indices root
DAYS=5; # Days of indexes to keep
ARCHIVE="/data/elasticsearch/data/elasticsearch/nodes/0/indices/archive"; # archive location
ES_URL="http://hostname:9200/";
 
logger -t elasticsearch "Beginning archiving of elasticsearch indices.";
 
for DIR in `find "$INDICIES_ROOT"* -maxdepth 0 -mtime +"$DAYS"`;
do
    # Close the index in elasticsearch (removing it from memory) so its files can be archived
    INDEX_NAME=`basename "$DIR"`;
    REMOVAL_URL="$ES_URL$INDEX_NAME/_close";
    curl -XPOST "$REMOVAL_URL";
EXIT=$?;
    if [ "$EXIT" -ne 0 ]; then
        logger -t elasticsearch "ERROR: Removal of elasticsearch index at $REMOVAL_URL failed. Exit Code = $EXIT.";
        exit $EXIT;
    else
        logger -t elasticsearch "Successfully removed $REMOVAL_URL from elasticsearch.";
    fi;
    # Now archive the directory
    tar --lzma --remove-files -cvf "$DIR".lzma "$DIR";
    EXIT=$?;
    if [ "$EXIT" -ne 0 ]; then
        logger -t elasticsearch "ERROR: lzma compression of elasticsearch index file encountered an error. Exit Code = $EXIT.";
        exit $EXIT;
    else
        logger -t elasticsearch "Compressed elasticsearch index $INDEX_NAME successfully.";
    fi;
    mv "$DIR".lzma "$ARCHIVE";
    EXIT=$?;
    if [ "$EXIT" -ne 0 ]; then
        logger -t elasticsearch "ERROR: Could not move $DIR.lzma to archive location $ARCHIVE. Exit Code = $EXIT.";
        exit $EXIT;
    else
        logger -t elasticsearch "Removal and archiving of the elasticsearch index $INDEX_NAME completed successfully.";
    fi;
 
done
 
logger -t elasticsearch "Completed archiving of elasticsearch indicies.";
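One caveat: the mtime test above can be fooled if index files are touched after creation. Since logstash names its daily indices logstash-YYYY.MM.DD, an alternative sketch (the index names below are made-up examples) selects old indices by name instead; the zero-padded date suffix sorts lexicographically, so a plain string comparison against a cutoff date works:

```shell
#!/bin/bash
# Select logstash indices older than N days by name rather than mtime.
DAYS=5;
# GNU date: the daily index suffix format, N days ago
CUTOFF=$(date -d "$DAYS days ago" +%Y.%m.%d);
for NAME in logstash-2014.08.01 logstash-2014.08.06; do # example names
    SUFFIX=${NAME#logstash-}; # strip the prefix, leaving YYYY.MM.DD
    if [[ "$SUFFIX" < "$CUTOFF" ]]; then # lexicographic compare works for zero-padded dates
        echo "$NAME is older than $DAYS days";
    fi;
done
```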

TSQL: Estimated database restore completion

Here’s a query giving you the approximate percentage completed, and the estimated finish time, of any database restores happening on a SQL Server instance…

SELECT  st.[text],
		r.percent_complete, 
		DATEADD(SECOND, r.estimated_completion_time/1000, GETDATE()) AS estimated_completion_time,
		r.total_elapsed_time
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.[sql_handle]) st
WHERE [command] = 'RESTORE DATABASE';

The resultset will look something like below…

text				percent_complete	estimated_completion_time	total_elapsed_time
RESTORE DATABASE d...		47.57035		2014-08-08 13:49:48.373		958963
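If you want to keep an eye on this from a shell, the same sort of query can be run through sqlcmd (the server name is a placeholder and -E assumes a trusted connection):

```
sqlcmd -S hostname -E -Q "SELECT r.percent_complete, DATEADD(SECOND, r.estimated_completion_time/1000, GETDATE()) AS estimated_completion_time FROM sys.dm_exec_requests r WHERE r.[command] = 'RESTORE DATABASE';"
```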

Monitoring fluentd with Nagios

Here are just a few Nagios command strings you can use to monitor fluentd. I’ve thrown in a check for elasticsearch in case you’re monitoring an EFK system.

For checking td-agent. We should have two processes, a parent and a child…

/usr/local/nagios/libexec/check_procs -w 2:2 -C ruby -a td-agent

For checking vanilla fluentd. Be aware your version name may differ…

/usr/local/nagios/libexec/check_procs -w 2:2 -C fluentd1.9

Check tcp ports. Your requirements will vary…

/usr/local/nagios/libexec/check_tcp -H hostname -p 24224
/usr/local/nagios/libexec/check_tcp -H hostname -p 24230
/usr/local/nagios/libexec/check_tcp -H hostname -p 42185
/usr/local/nagios/libexec/check_tcp -H hostname -p 42186
/usr/local/nagios/libexec/check_tcp -H hostname -p 42187

For checking there is an elasticsearch process…

/usr/local/nagios/libexec/check_procs -w 1:1 -C java -a elasticsearch
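To wire one of these into Nagios proper, you’d wrap it in a command definition and a service definition. A sketch (the command and host names here are made-up; $USER1$ is the standard macro for the libexec path):

```
define command{
    command_name    check_fluentd_procs
    command_line    $USER1$/check_procs -w 2:2 -C ruby -a td-agent
    }

define service{
    use                     generic-service
    host_name               fluentd-host
    service_description     td-agent processes
    check_command           check_fluentd_procs
    }
```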

TSQL: Database Mirroring with Certificates

Here’s some more TSQL for the 70-462 exam. The script shows the actions needed to configure database mirroring using certificates for authentication. Explanatory notes are included but you’re likely to need the training materials for this to make sense. TSQL is not included for the backup/restore parts needed for database mirroring.

SELECT *
FROM sys.symmetric_keys;
GO
 
-- Create a database master key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Secret1234';
GO
 
-- Create certificate (SQL-A)
CREATE CERTIFICATE SQL_A_Cert 
WITH SUBJECT = 'My Mirroring certificate'
GO
 
-- Create certificate (SQL-B)
CREATE CERTIFICATE SQL_B_Cert 
WITH SUBJECT = 'My Mirroring certificate'
GO
 
-- Endpoint (SQL-A) certificate authentication
CREATE ENDPOINT Endpoint_Mirroring
AS TCP (LISTENER_IP = ALL, LISTENER_PORT = 7024)
FOR DATABASE_MIRRORING (AUTHENTICATION = CERTIFICATE SQL_A_Cert, ROLE = ALL);
GO
 
-- Endpoint (SQL-B) certificate authentication
CREATE ENDPOINT Endpoint_Mirroring
AS TCP (LISTENER_IP = ALL, LISTENER_PORT = 7024)
FOR DATABASE_MIRRORING (AUTHENTICATION = CERTIFICATE SQL_B_Cert, ROLE = ALL);
GO
 
-- Backup certificate SQL-A
BACKUP CERTIFICATE SQL_A_Cert TO FILE = 'C:\backup\SQL_A_Cert.cer';
 
-- Backup certificate SQL-B
BACKUP CERTIFICATE SQL_B_Cert TO FILE = 'C:\backup\SQL_B_Cert.cer';
 
-- SQL-A create login for sql_b
CREATE LOGIN SQL_B_login WITH PASSWORD = 'Pa$$w0rd';
GO
 
CREATE USER SQL_B_user FROM LOGIN SQL_B_login;
GO
 
-- SQL-B create login for sql_a
CREATE LOGIN SQL_A_login WITH PASSWORD = 'Pa$$w0rd';
GO
 
CREATE USER SQL_A_user FROM LOGIN SQL_A_login;
GO
 
-- create cert on sql-a from sql_b backup
CREATE CERTIFICATE SQL_B_Cert
AUTHORIZATION SQL_B_user
FROM FILE = 'c:\backup\sql_b_cert.cer';
GO
 
-- create cert on sql-b from sql_a backup
CREATE CERTIFICATE SQL_A_Cert
AUTHORIZATION SQL_A_user
FROM FILE = 'c:\backup\sql_a_cert.cer';
GO
 
-- on sql-a
GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO SQL_B_login;
GO
 
-- on sql-b
GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO SQL_A_login;
GO
 
-- on sql-b
SELECT *
FROM sys.endpoints;
ALTER ENDPOINT Endpoint_Mirroring STATE = STARTED;
ALTER DATABASE [AdventureMirror] SET PARTNER = 'TCP://sql-a:7024';
GO
 
-- on sql-a
SELECT *
FROM sys.endpoints;
ALTER ENDPOINT Endpoint_Mirroring STATE = STARTED;
 
ALTER DATABASE [AdventureMirror] SET PARTNER = 'TCP://sql-b:7024';
GO
 
-- DMVs to check the setup
SELECT *
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;
 
SELECT *
FROM sys.database_mirroring_endpoints;