Check an HTML page with check_http

With the check_http Nagios plugin we can check that a URL returns an OK status code as well as verify that the page contains a certain string of text. The usage format is as follows…

/usr/local/nagios/libexec/check_http -H hostname  -r search_string

For example…

/usr/local/nagios/libexec/check_http -H www.youdidwhatwithtsql.com -r "wordpress"

If you want to make the check case-insensitive then change to…

/usr/local/nagios/libexec/check_http -H www.youdidwhatwithtsql.com -R "wordpress"
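If you want to run this as a scheduled check rather than by hand, you can wrap it in a Nagios command and service definition. Here’s a minimal sketch; the command name, host name and service template below are just placeholders for whatever your setup uses…

define command {
    command_name    check_http_content
    command_line    $USER1$/check_http -H $HOSTADDRESS$ -r $ARG1$
}

define service {
    use                     generic-service
    host_name               webserver01
    service_description     HTTP content check
    check_command           check_http_content!wordpress
}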

Happy monitoring!


Modifying elasticsearch index settings

To view the settings of an index run the following at the command-line…

curl -XGET http://hostname:9200/indexname/_settings

From here you can identify the setting you need and modify it as you wish. This example sets the number of replicas to zero.

curl -XPUT http://hostname:9200/indexname/_settings -d '{ "index": {"number_of_replicas":"0"}}'
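The same PUT pattern works for other dynamic index settings, so once you’ve identified the setting you need you can adapt the example above. As a sketch, assuming you wanted to slow the refresh interval down on the same index (the 30s value here is just an example)…

curl -XPUT http://hostname:9200/indexname/_settings -d '{ "index": {"refresh_interval":"30s"}}'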

For further details see the manual.


Removing logstash indices from elasticsearch

I’ve been playing with EFK and elasticsearch ended up eating all of the RAM on my test system. I discovered this was because it was attempting to cache all of these indices. Since this is a test system I’m not too bothered about keeping a long history here, so I wrote this bash script to remove logstash indices from elasticsearch, then compress and archive them. This reduces the memory pressure and gives a better working system. Explanatory comments are included.

#!/bin/bash
 
#######################################
# Author: Rhys Campbell               #
# Created: 2014-08-06                 #
# Description: Removes indices with   #
# a modified date > N days from ES    #
# memory and archives them using lzma #
# compression.                        #
#######################################
INDICIES_PREFIX="logstash"; # Indices name prefix
INDICIES_ROOT="/data/elasticsearch/data/elasticsearch/nodes/0/indices/$INDICIES_PREFIX"; # Daily indices root
DAYS=5; # Days of indexes to keep
ARCHIVE="/data/elasticsearch/data/elasticsearch/nodes/0/indices/archive"; # archive location
ES_URL="http://hostname:9200/";
 
logger -t elasticsearch "Begining archiving of elasticsearch indicies.";
 
for DIR in `find "$INDICIES_ROOT"* -maxdepth 0 -mtime +"$DAYS"`;
do
    # Close the index to remove it from elasticsearch memory
    INDEX_NAME=`basename "$DIR"`;
    REMOVAL_URL="$ES_URL$INDEX_NAME/_close";
    #curl -XPOST "$REMOVAL_URL"; # Uncomment this line. WordPress balks on this for some reason
    EXIT=$?;
    if [ "$EXIT" -ne 0 ]; then
        logger -t elasticsearch "ERROR: Removal of elasticsearch index at $REMOVAL_URL failed Exit Code = $EXIT.";
        exit $EXIT;
    else
        logger -t elasticsearch "Successfully removed $REMOVAL_URL from elasticsearch.";
    fi;
    # Now archive the directory
    tar cvf "$DIR".lzma "$DIR" --lzma --remove-files;
    EXIT=$?;
    if [ "$EXIT" -ne 0 ]; then
        logger -t elasticsearch "ERROR: lzma compression of elasticsearch index file encountered an error. Exit Code = $EXIT.";
        exit $EXIT;
    else
        logger -t elasticsearch "Compressed elasticsearch index $INDEX_NAME successfully.";
    fi;
    mv "$DIR".lzma "$ARCHIVE";
    EXIT=$?;
    if [ "$EXIT" -ne 0 ]; then
        logger -t elasticsearch "ERROR: Could not move $DIR.lzma to archive location $ARCHIVE. Exit Code = $EXIT.";
        exit $EXIT;
    else
        logger -t elasticsearch "Removal and archiving of the elasticsearch index $INDEX_NAME completed successfully.";
    fi;
 
done
 
logger -t elasticsearch "Completed archiving of elasticsearch indicies.";

TSQL: Estimated database restore completion

Here’s a query providing the approximate percentage completed, and estimated finish time, of any database restores happening on a SQL Server instance…

SELECT  st.[text],
		r.percent_complete, 
		DATEADD(SECOND, r.estimated_completion_time/1000, GETDATE()) AS estimated_completion_time,
		r.total_elapsed_time
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.[sql_handle]) st
WHERE [command] = 'RESTORE DATABASE';

The resultset will look something like below…

text				percent_complete	estimated_completion_time	total_elapsed_time
RESTORE DATABASE d...		47.57035		2014-08-08 13:49:48.373		958963
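
Note that total_elapsed_time is reported in milliseconds, and the same DMVs cover backups as well, so a variant along these lines can be handier. The minutes conversion and the extra BACKUP DATABASE filter are my own additions rather than part of the original query…

SELECT  st.[text],
		r.percent_complete,
		DATEADD(SECOND, r.estimated_completion_time/1000, GETDATE()) AS estimated_completion_time,
		r.total_elapsed_time / 60000.0 AS elapsed_minutes
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.[sql_handle]) st
WHERE [command] IN ('RESTORE DATABASE', 'BACKUP DATABASE');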

Monitoring fluentd with Nagios

Here are just a few Nagios command strings you can use to monitor fluentd. I’ve thrown in a check for elasticsearch in case you’re monitoring an EFK system.

For checking td-agent. We should have 2 processes, a parent and a child…

/usr/local/nagios/libexec/check_procs -w 2:2 -C ruby -a td-agent

For checking vanilla fluentd. Be aware your version name may differ…

/usr/local/nagios/libexec/check_procs -w 2:2 -C fluentd1.9

Check TCP ports. Your requirements will vary…

/usr/local/nagios/libexec/check_tcp -H hostname -p 24224
/usr/local/nagios/libexec/check_tcp -H hostname -p 24230
/usr/local/nagios/libexec/check_tcp -H hostname -p 42185
/usr/local/nagios/libexec/check_tcp -H hostname -p 42186
/usr/local/nagios/libexec/check_tcp -H hostname -p 42187

For checking there is an elasticsearch process…

/usr/local/nagios/libexec/check_procs -w 1:1 -C java -a elasticsearch
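
If fluentd and elasticsearch live on a remote host, you would normally expose these checks through NRPE and call them with check_nrpe from the Nagios server. A rough sketch of the nrpe.cfg entries on the monitored host; the command names are just placeholders…

command[check_td_agent]=/usr/local/nagios/libexec/check_procs -w 2:2 -C ruby -a td-agent
command[check_fluentd_forward]=/usr/local/nagios/libexec/check_tcp -H localhost -p 24224
command[check_elasticsearch_proc]=/usr/local/nagios/libexec/check_procs -w 1:1 -C java -a elasticsearch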