Browse Tag: backup

Backup/restore Elasticsearch index

[UPDATED 2017-03-09]
I still get comments/questions regarding this process I hacked together many moons ago. I must ask anybody who’s looking for a way to back up Elasticsearch indices to STOP and not follow the process described — it was for ES 0.00000000001, written back in 2011. You should not do what I suggest here! I’m saving this purely for historical purposes.

What you should do instead is save your events in flat text — in Logstash, output to both your ES index for searching via Kibana or whatnot, and also output your event to a flat file, likely periodic (per-day or month or whatever). Backup and archive these text files, since they compress quite well. When you want to restore data from a period, just re-process it through Logstash — CPU is cheap nowadays with cloud instances! The data is the important part — processed or not, if you have the data in an easily stored format, you can re-process it.
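To make that concrete, re-feeding one day of archived events back through Logstash can be as simple as something like the following (the paths and the reprocess.conf name are placeholders, not anything standard; the config just needs a stdin input plus whatever filters and outputs you normally run):

# decompress one day's archived events and push them back through Logstash
# (archive path, config path, and install location are examples only)
zcat /var/log/archive/events-2017.03.08.log.gz | /opt/logstash/bin/logstash -f /etc/logstash/reprocess.conf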

[Original post as follows]

I’ve been spending a lot of time with Elasticsearch recently, as I’ve been implementing logstash for our environment. Logstash, by the way, is a billion times awesome and I can’t recommend it enough for large-scale log management/search. Elasticsearch is pretty awesome too, but considering the sheer amount of data I’m putting into it, I’m not satisfied with its replication-based redundancy — I need backups that I can save and restore at will. Since logstash creates a new Elasticsearch index for each day’s worth of logs, I want the ability to back up and restore arbitrary indices.

Elasticsearch has the concept of a gateway: you can configure one that maintains index metadata and takes regular snapshots (“regularly” as in every 10 seconds by default). The docs recommend using S3 as a gateway, meaning every 10s it’ll ship data up to S3 for backup purposes, and if a node ever needs to recover data, it can just look to S3, get the metadata, and fill in the data from that source. However, this model does not support the “rotation”-style backup and restore I’m looking for, and it can’t keep up with the rate of data I’m sending it (my daily indices are about 15GB apiece, which works out to about 400k log entries an hour).

So I’ve come up with a pair of scripts that allow me to manage logstash/Elasticsearch index data, allowing for arbitrary restore of an index, as well as rotation so as to keep the amount of data that Elasticsearch keeps track of manageable. As always, I wrote my scripts for my environment, so I take no responsibility if they do not work in yours and instead destroy all your data (a distinct possibility). I include these scripts here because I spent a while trying to figure this out and couldn’t find any information elsewhere on the net.

The following script backs up today’s logstash index. I’m hopeless with timezones, so I somehow managed to ship my logs to logstash in GMT, which means my “day” ends at 5pm local, when logstash closes its index and opens a new one for the new day. Shortly after logstash closes an index (stops writing to it, not “close” in the Elasticsearch sense), I run the following script from cron; it backs up the index, backs up the metadata, creates a restore script, and pushes it all up to S3 (a sample crontab entry follows the script):

#!/bin/bash
# herein we back up our indexes! this script should run at like 6pm or something, after logstash
# rotates to a new ES index and there's no new data coming in to the old one. we grab the metadata,
# compress the data files, create a restore script, and push it all up to S3.

TODAY=`date +"%Y.%m.%d"`
INDEXNAME="logstash-$TODAY" # this had better match the index name in ES
INDEXDIR="/usr/local/elasticsearch/data/logstash/nodes/0/indices/"
BACKUPCMD="/usr/local/backupTools/s3cmd --config=/usr/local/backupTools/s3cfg put"
BACKUPDIR="/mnt/es-backups/"
YEARMONTH=`date +"%Y-%m"`
S3TARGET="s3://backups/elasticsearch/$YEARMONTH/$INDEXNAME"

# create mapping file with index settings. this metadata is required by ES to use the index file data
echo -n "Backing up metadata... "
curl -XGET -o /tmp/mapping "http://localhost:9200/$INDEXNAME/_mapping?pretty=true" > /dev/null 2>&1
sed -i '1,2d' /tmp/mapping # strip the first two lines (outer brace and index name) of the metadata
# prepend hardcoded settings metadata to the index-specific mappings; use > (not >>) so a stale
# /tmp/mappost left over from a failed earlier run can't contaminate this one
echo '{"settings":{"number_of_shards":5,"number_of_replicas":1},"mappings":{' > /tmp/mappost
cat /tmp/mapping >> /tmp/mappost
echo "DONE!"

# now lets tar up our data files. these are huge, so lets be nice
echo -n "Backing up data files (this may take some time)... "
mkdir -p $BACKUPDIR
cd $INDEXDIR
nice -n 19 tar czf $BACKUPDIR/$INDEXNAME.tar.gz $INDEXNAME 
echo "DONE!"

echo -n "Creating restore script... "
# time to create our restore script! oh god scripts creating scripts, this never ends well...
cat << EOF > $BACKUPDIR/$INDEXNAME-restore.sh
#!/bin/bash
# this script requires $INDEXNAME.tar.gz and will restore it into elasticsearch
# it is ESSENTIAL that the index you are restoring does NOT exist in ES. delete it
# if it does BEFORE trying to restore data.

# create index and mapping
echo -n "Creating index and mappings... "
curl -XPUT 'http://localhost:9200/$INDEXNAME/' -d '`cat /tmp/mappost`' > /dev/null 2>&1
echo "DONE!"

# extract our data files into place. the tarball must sit in the same directory as this restore script
echo -n "Restoring index (this may take a while)... "
SCRIPTDIR=\$(cd \$(dirname \$0) && pwd)
cd $INDEXDIR
tar xzf \$SCRIPTDIR/$INDEXNAME.tar.gz
echo "DONE!"

# restart ES to allow it to open the new dir and file data
echo -n "Restarting Elasticsearch... "
/etc/init.d/es restart
echo "DONE!"
EOF
echo "DONE!" # restore script done

# push both tar.gz and restore script to s3
echo -n "Saving to S3 (this may take some time)... "
$BACKUPCMD $BACKUPDIR/$INDEXNAME.tar.gz $S3TARGET.tar.gz
$BACKUPCMD $BACKUPDIR/$INDEXNAME-restore.sh $S3TARGET-restore.sh
echo "DONE!"

# cleanup tmp files
rm /tmp/mappost
rm /tmp/mapping
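
For reference, the crontab entry for the backup looks something like this (the script path and the 18:05 run time are placeholders; the only requirement is that it fires shortly after logstash rolls to the new day’s index at 5pm local):

# run the index backup shortly after the GMT day rolls over (path and time are examples)
5 18 * * * /usr/local/backupTools/backup-es-index.sh >> /var/log/backup-es-index.log 2>&1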

Restoring from this data is just as you would expect — download the backed up index.tar.gz and the associated restore.sh to the same directory, chmod +x the restore.sh, then run it. It will automagically create the index and put the data in place. This has the benefit of making backed up indices portable — you can “export” them from one ES cluster and import them to another.
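
For the curious, a restore session looks something like this (the bucket path follows the layout the backup script uses; the exact filenames depend on which day you’re pulling back, and your s3cmd needs to be configured for the bucket):

# pull down one day's backup plus its restore script from S3, then run it
s3cmd get s3://backups/elasticsearch/2011-10/logstash-2011.10.14.tar.gz
s3cmd get s3://backups/elasticsearch/2011-10/logstash-2011.10.14-restore.sh
chmod +x logstash-2011.10.14-restore.sh
./logstash-2011.10.14-restore.sh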

As mentioned, because of logstash, I have daily indices that I back up; I also rotate them to prevent ES from having to search through billions of gigs of data over time. I keep 8 days worth of logs in ES (due to timezone issues) by doing the following:

#!/bin/bash
# Performs 'rotation' of ES indices. Maintains only 8 indices (roughly a week) of logstash logs; this
# script is to be run at midnight daily and removes the oldest one (as well as any 1970s-era log
# indices, as these are a product of timestamp fail). Please note the insane amount of error-checking
# in this script, as ES would rather delete everything than nothing...

# Before we do anything, let's get rid of any nasty 1970s-era indices we have floating around
TIMESTAMPFAIL=`curl -s localhost:9200/_status?pretty=true |grep index |grep log |sort |uniq |awk -F\" '{print $4}' |grep 1970 |wc -l`
# test the count with -gt 0; wc -l always prints something, so a bare -n test would always be true
if [ "$TIMESTAMPFAIL" -gt 0 ]
	then
		echo "Indices with screwed-up timestamps found; removing"
		curl -s localhost:9200/_status?pretty=true |grep index |grep log |sort |uniq |awk -F\" '{print $4}' |grep 1970 | while read line
			do
				echo -n "Deleting index $line: "
				curl -s -XDELETE "http://localhost:9200/$line/"
				echo "DONE!"
			done
fi


# Get list of indices; should we rotate?
INDEXCOUNT=`curl -s localhost:9200/_status?pretty=true |grep index |grep log |sort |uniq |awk -F\" '{print $4}' |wc -l`
if [ "$INDEXCOUNT" -lt 9 ]
	then
		echo "8 or fewer indices, bailing with no action"
		exit 0
	else
		echo "More than 8 indices, time to do some cleaning"
		
		# Let's do some cleaning!
		OLDESTLOG=`curl -s localhost:9200/_status?pretty=true |grep index |grep log |sort |uniq |awk -F\" '{print $4}' |head -n1`
		echo -n "Deleting oldest index, $OLDESTLOG: "
		curl -s -XDELETE http://localhost:9200/$OLDESTLOG/
		echo "DONE!"
fi

Sometimes, due to the way my log entries get to logstash, the timestamp is mangled, and logstash, bless its heart, tries so hard to index it anyway. Since logstash keys its indices on timestamps, though, this means every once in a while I get an index dated 1970 with one or two entries in it. There’s no harm save the overhead of an extra index, but it does mean I can’t back those indices up or make any assumptions about index names. So I nuke the 1970s indices from orbit, and then, if there are more than 8 indices in logstash, drop the oldest. I run this script at midnight daily, after the index backup. Hugest caveat in the world about the rotation: running `curl -s -XDELETE http://localhost:9200/logstash-2011.10.14/` will delete the index logstash-2011.10.14, as you’d expect. However, if that $OLDESTLOG variable is somehow mangled or empty and the command that actually runs is `curl -s -XDELETE http://localhost:9200//`, you will delete all of your indices. Just a friendly warning!
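
A minimal guard along these lines would help; this is just a sketch, not something that’s in the rotation script above:

# refuse to delete anything unless $OLDESTLOG actually looks like a logstash index name
if [ -z "$OLDESTLOG" ] || [[ "$OLDESTLOG" != logstash-* ]]
	then
		echo "Refusing to delete: '$OLDESTLOG' does not look like a logstash index" >&2
		exit 1
fi
curl -s -XDELETE "http://localhost:9200/$OLDESTLOG/"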