Command-line Elasticsearch client

We use the ELK stack extensively at my job, thanks to my evangelizing and endless hard work. With all our servers logging to logstash and getting pushed into Elasticsearch, logging into servers via ssh just to check logs is a thing of the past. To help push that ideology along, I’ve hacked up a simple bash script that queries Elasticsearch and returns results in a manner that mimics running `tail` on a server’s logs. It quite literally just runs a query against Elasticsearch’s HTTP API, but I added some niceties so folks can make queries to ES without having to read a novel on how to do so.

[kstedman@kalembp:~/bin] $ ./ -h 10.x.x.x:9200 -q "+host:xxx6039 +type:syslog" -t 1000 -n 4
2014-07-09T05:05:26.000000+00:00 xxx6039 snmpd[15279]: Connection from UDP: [10.x.x.x]:57258
2014-07-09T05:05:26.000000+00:00 xxx6039 snmpd[15279]: Received SNMP packet(s) from UDP: [10.x.x.x]:57258
2014-07-09T05:05:26.000000+00:00 xxx6039 snmpd[15279]: Connection from UDP: [10.x.x.x]:57258
2014-07-09T05:05:27.000000+00:00 xxx6039 snmpd[15279]: Connection from UDP: [10.x.x.x]:57258

Of course, since this is plain text on the console, you can feed it into other scripts and sed/grep/awk it to your heart’s content. It requires Python (for datetime and json.tool). Enjoy!
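For instance (using as a stand-in name for the script, since yours may differ), the output slices up like any other text stream:

# count hits per source port from snmpd over the last hour (purely illustrative)
./ -h 10.x.x.x:9200 -q "+host:xxx6039 +type:syslog" -t 60 -n 1000 | grep snmpd | awk -F: '{print $NF}' | sort | uniq -c | sort -rn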

# Search server logs from the comfort of your terminal!
# This is a command-line wrapper for Elasticsearch's RESTful API.
# This is super-beta, version .000001-alpha. Questions/comments/hatemail to Kale Stedman,
# I'm so sorry. You should probably pipe the output to less.
# usage: ./ -u $USER -p $PASS -h es-hostname -q "$query" -t $time -n 500
# ex: ./ -u kstedman -p hunter2 -h -q "program:crond" -t 5 -n 50
# -h host      The Elasticsearch host you're trying to connect to.
# -u username  Optional: If your ES cluster is proxied through apache and you have http auth enabled, username goes here
# -p password  Optional: If your ES cluster is proxied through apache and you have http auth enabled, password goes here
# -q query     Optional: Query to pass to ES. If not given, "*" will be used.
# -t timeframe Optional: How far back to search. Value is in minutes. If not given, defaults to 5.
# -n results   Optional: Number of results to return. If not given, defaults to 500.

# Declare usage fallback/exit
usage() { echo "Usage: $0 -h host [ -u USER ] [ -p PASS ] [ -q "QUERY" ] [ -t TIMEFRAME ] [ -n NUMRESULTS ]" 1>&2; exit 1; }

# Parse options
while getopts ":u:p:h:q:t:n:" o; do
    case "${o}" in
        u) u=${OPTARG} ;;
        p) p=${OPTARG} ;;
        h) h=${OPTARG} ;;
        q) q=${OPTARG} ;;
        t) t=${OPTARG} ;;
        n) n=${OPTARG} ;;
        *) usage ;;
    esac
done
shift $((OPTIND-1))

if [ -z "${p}" ] && [ ! -z "${u}" ] ; then
  echo -n "Password: "
  read -s p

# Check for required variables
if [ -z "${h}" ] ; then
  usage
fi

# Set defaults if not set
if [ -z "${n}" ] ; then
  n=500   # default: 500 results returned
fi

if [ -z "${q}" ] ; then
  q="*"   # default: query "*"
fi

if [ -z "${t}" ] ; then
  t=5     # default: 5 minutes ago
fi

# cross-platform time compatibilities
FROMDATE=`python -c "from datetime import date, datetime, time, timedelta; print ( - timedelta(minutes=${t})).strftime('%s')"`
NOWDATE=`python -c "from datetime import date, datetime, time, timedelta; print'%s')"`

# Build query
# NOTE: this query body is a best guess: a query_string for ${q}, filtered to an
# @timestamp range between ${FROMDATE} and ${NOWDATE}, returning ${n} hits oldest-first.
# Tweak it for your ES version. (FROMDATE/NOWDATE are epoch seconds; ES date ranges
# want millis, hence the appended 000.)
query='{ "query": { "filtered": { "query": { "query_string": { "query": "'"${q}"'" } }, "filter": { "range": { "@timestamp": { "gte": '"${FROMDATE}"'000, "lte": '"${NOWDATE}"'000 } } } } }, "size": '"${n}"', "sort": [ { "@timestamp": { "order": "asc" } } ] }'

if [ ! -z "${u}" ] ; then
  URL="http://${u}:${p}@${h}/_search"
else
  URL="http://${h}/_search"
fi
# run query and prettify the output
curl -s -XGET "${URL}" -d ''"${query}"'' | python -mjson.tool |grep '"message"' | awk -F\: -v OFS=':' '{ $1=""; print $0}' | sed -e 's/^: "//g' | sed -e 's/", $//g' | sed -e 's/\\n/\
/g'

long-running bash command notifier for osx

I stumbled across this fantastic blog post that offers a clever bash script to notify you of the completion of long-running commands in your bash shell. I made a couple tweaks to make it work for OSX, and gave it a little blacklist (I usually run `less’ or `vim’ for >10 seconds, for example).

Requires growl and growlnotify, bash, and this clever pre-exec hook for bash. Download that pre-exec hook:

mkdir -p ~/src/shell-tools
curl > ~/src/shell-tools/preexec.bash

Now copy and paste this into ~/src/shell-tools/long-running.bash:

# Source this, and then run notify_when_long_running_commands_finish_install
# Relies on
# Full credit to
# Modified slightly for OSX support and blacklist (see the egrep loop in the
# precmd() function)

if [ -f ~/src/shell-tools/preexec.bash ]; then
    . ~/src/shell-tools/preexec.bash
else
    echo "Could not find preexec.bash"
fi


function notify_when_long_running_commands_finish_install() {
    local RUNNING_COMMANDS_DIR=~/.cache/running-commands
    mkdir -p $RUNNING_COMMANDS_DIR

    # Clean out pid files left behind by shells that are no longer running.
    for pid_file in $RUNNING_COMMANDS_DIR/*; do
        local pid=$(basename $pid_file)
        # If $pid is numeric, then check for a running bash process.
        case $pid in
        ''|*[!0-9]*) local numeric=0 ;;
        *) local numeric=1 ;;
        esac

        if [[ $numeric -eq 1 ]]; then
            local command=$(ps -o command= $pid)
            if [[ $command != $BASH ]]; then
                rm -f $pid_file
            fi
        fi
    done

    # Assumed defaults: how long (in seconds) a command must run before we
    # notify, and the per-shell file that records the command being run.
    LONG_RUNNING_COMMAND_TIMEOUT=10
    _LAST_COMMAND_STARTED_CACHE=$RUNNING_COMMANDS_DIR/$$

    function precmd () {

        if [[ -r $_LAST_COMMAND_STARTED_CACHE ]]; then

            local last_command_started=$(head -1 $_LAST_COMMAND_STARTED_CACHE)
            local last_command=$(tail -n +2 $_LAST_COMMAND_STARTED_CACHE)

            if [[ -n $last_command_started ]]; then
                local now=$(date -u +%s)
                local time_taken=$(( $now - $last_command_started ))
                if [[ $time_taken -gt $LONG_RUNNING_COMMAND_TIMEOUT ]]; then
                  # Skip the notification for interactive programs on the blacklist;
                  # they are expected to run for a long time.
                  if [ `echo "$last_command" | egrep -c "less|more|vi|vim|man|ssh"` == 0 ] ; then
                    growlnotify \
                        -m "$last_command completed in $time_taken seconds" \
                        "Command complete:"
                  fi
                fi
            fi
            # No command is running, so clear the cache.
            echo -n > $_LAST_COMMAND_STARTED_CACHE
        fi
    }

    function preexec () {
        date -u +%s > $_LAST_COMMAND_STARTED_CACHE
        echo "$1" >> $_LAST_COMMAND_STARTED_CACHE
    }

    # preexec_install comes from preexec.bash and hooks the precmd/preexec
    # functions above into the shell.
    preexec_install
}


Finally, source it by adding the following to your ~/.bash_profile:

. ~/src/shell-tools/preexec.bash
. ~/src/shell-tools/long-running.bash

also: site redesign! (read: i installed a new theme from the gallery, go team)

Über-simple generic RHEL/CentOS init script

Fill in the indicated bits, drop it in /etc/rc.d/init.d/, chmod +x, and away you go!

#!/bin/bash
# chkconfig: 2345 90 90
# description: program_name
# Provides: program_name
# Required-Start: network
# Required-Stop: network
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Description: Start the program

### Fill in these bits:
NAME="program_name"
USER="kale"                      # account the program runs as (assumed placeholder)
PGREP_STRING="program_name.jar"  # pattern used to find/kill the process (assumed placeholder)
PID_FILE="/var/run/$NAME/$"   # only referenced by the stop routine (assumed placeholder)
START_CMD="java -jar /home/kale/bin/program_name.jar > /var/log/program_name/program_name.log 2>&1 &"

### No further muckin' about needed!
CUR_USER=`whoami`


killproc() {
  pkill -u $USER -f $PGREP_STRING
}

start_daemon() {
  eval "$*"
}

log_success_msg() {
  echo "$*"
  logger "$_"
}

log_failure_msg() {
  echo "$*"
  logger "$_"
}

check_proc() {
  pgrep -u $USER -f $PGREP_STRING >/dev/null
}

start_script() {
  if [ "${CUR_USER}" != "root" ] ; then
    log_failure_msg "$NAME can only be started as 'root'."
    exit -1
  fi

  check_proc
  if [ $? -eq 0 ]; then
    log_success_msg "$NAME is already running."
    exit 0
  fi

  [ -d /var/run/$NAME ] || (mkdir /var/run/$NAME )

  # make go now
  start_daemon /bin/su $USER -c "$START_CMD"

  # Sleep for a while to see if anything cries
  sleep 5

  check_proc
  if [ $? -eq 0 ]; then
    log_success_msg "Started $NAME."
  else
    log_failure_msg "Error starting $NAME."
    exit -1
  fi
}

stop_script() {
  if [ "${CUR_USER}" != "root" ] ; then
    log_failure_msg "You do not have permission to stop $NAME."
    exit -1
  fi

  check_proc
  if [ $? -eq 0 ]; then
    killproc -p $PID_FILE >/dev/null

    # Make sure it's dead before we return
    check_proc
    until [ $? -ne 0 ]; do
      sleep 1
      check_proc
    done

    check_proc
    if [ $? -eq 0 ]; then
      log_failure_msg "Error stopping $NAME."
      exit -1
    else
      log_success_msg "Stopped $NAME."
    fi
  else
    log_failure_msg "$NAME is not running or you don't have permission to stop it"
  fi
}

check_status() {
  check_proc
  if [ $? -eq 0 ]; then
    log_success_msg "$NAME is running."
  else
    log_failure_msg "$NAME is stopped."
    exit -1
  fi
}

case "$1" in
    echo "Usage: $0 {start|stop|restart|status}"
    exit 1

exit 0
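Assuming you saved it as /etc/rc.d/init.d/program_name (matching the placeholder name above), wiring it into chkconfig and taking it for a spin looks like this:

chmod +x /etc/rc.d/init.d/program_name
chkconfig --add program_name
chkconfig program_name on
service program_name start
service program_name status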

sed and newlines

sed’s really bad when it comes to newlines — and especially so on OSX. This snippet works quite well for “multiline” sedding:


cat test |sed -e ':a' -e 'N' -e '$!ba' -e 's/s1\n        /s1, /g' 

poops1, poop
butts1, butt
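For context, given the output above, the input file test must look something like the following (a label line followed by an indented continuation line); the ':a' / 'N' / '$!ba' dance just slurps the whole file into one pattern space so the embedded newlines can be matched:

poops1
        poop
butts1
        butt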

bash, /dev/tcp and you

found during a random goog search on Dave Smith’s Blog:

exec 3<>/dev/tcp/
echo -e "GET / HTTP/1.1\n\n" >&3
cat <&3

seriously, wrap your head around that. /dev/tcp isn't a real device; it's a magical pseudodevice that bash intercepts and opens a socket as requested.
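A self-contained variant you can paste straight into a bash prompt (localhost and port 22 are just example targets) is a quick port check:

# the exec fails if nothing is listening, so the exit status tells you whether the port is open
if (exec 3<>/dev/tcp/localhost/22) 2>/dev/null ; then
    echo "port 22 is open"
else
    echo "port 22 is closed"
fi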

Add fields to a MySQL table without doing an ALTER TABLE

I have a database table that was created about 2 years ago and has been filling up quite quickly over the years. These days, it’s massive. Our database dumps are 68gb uncompressed, and 60gb of that is this table. It’s used quite regularly, as it contains all of the error reports we receive, but to call it “unwieldy” is an understatement.

I was content to just let sleeping dogs lie, but alas — one of my devs needs a couple extra fields added to the table for more data and sorting and whatnot. If this wasn’t a 60gb table in our production database, I’d happily run an ALTER TABLE and call it a day. (In fact, I attempted to do this — and then the site went down because the whole db was locked. oops)

Instead, I discovered a better way to add fields while retaining both uptime and data (!). MySQL’s CREATE TABLE command actually has a lot of interesting functionality that allows me to do this:

CREATE TABLE errors2 (
  id INT NOT NULL AUTO_INCREMENT,  -- assumed definition; match the column type from SHOW CREATE TABLE errors
  keywords VARCHAR(255), 
  errorid VARCHAR(64), 
  stacktrace TEXT, 
  is_silent BOOL, 
  PRIMARY KEY (id), 
  KEY playerid (playerid,datecreate), 
  KEY datecreate (datecreate), 
  KEY hidden (hidden,datecreate), 
  KEY hidden_debug (hidden,is_debug,datecreate)
) AUTO_INCREMENT=<value from SHOW CREATE TABLE errors>
SELECT * FROM errors;

What this CREATE TABLE statement does is create a new table with 5 explicitly-specified fields (keywords, errorid, stacktrace, is_silent, and id). Four of these are what I wanted to add; ‘id’ exists in the original table, but I specify it here because I need to make it AUTO_INCREMENT (as this is a table setting, not a bit of data or schema that can be copied). Additional keys are specified verbatim from a SHOW CREATE TABLE errors (the original table), as is the AUTO_INCREMENT value.

After specifying my table creation variables, I perform a SELECT on the original table. MySQL is smart enough to know that if I’m SELECTing during a CREATE TABLE, I probably want any applicable table schema copied as well, so it does exactly that — copies over any columns missing from the schema I specified in my CREATE statement. Even better, because the various keys were specified, the indexes get copied over as well.

The result? An exact copy of the original table — with four additional fields added. All that’s left is to clean up:

DROP TABLE errors;
RENAME TABLE errors2 TO errors;
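If you'd rather keep the old data around until you're sure, and avoid the brief window where no errors table exists, RENAME TABLE can swap both names in a single atomic statement instead of the DROP-then-RENAME above:

RENAME TABLE errors TO errors_old, errors2 TO errors;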

And that, as they say, is that.

EC2 metadata get

Today I learned about the EC2 metadata service. Try it from any EC2 instance: hit the metadata root for the full list of metadata objects, or request a specific key for the public IP, for example.
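Presumably the commands the post had in mind are the standard metadata endpoints, along these lines:

# list the available metadata keys
# grab a specific one: the public IP, for example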

Backup/restore Elasticsearch index

[UPDATED 2017-03-09]
I still get comments/questions regarding this process I hacked together many moons ago. I must request that anybody who’s looking for a way to backup Elasticsearch indices STOP and do not follow the process described — it was for ES 0.00000000001, written back in 2011. You should not do what I suggest here! I’m saving this purely for historical purposes.

What you should do instead is save your events in flat text — in Logstash, output to both your ES index for searching via Kibana or whatnot, and also output your event to a flat file, likely periodic (per-day or month or whatever). Backup and archive these text files, since they compress quite well. When you want to restore data from a period, just re-process it through Logstash — CPU is cheap nowadays with cloud instances! The data is the important part — processed or not, if you have the data in an easily stored format, you can re-process it.
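In Logstash terms, that dual output looks roughly like the following. This is only a sketch: option names vary between Logstash versions, and the archive path here is just an example.

output {
  # searchable copy for Kibana
  elasticsearch { }

  # flat-file archive, one file per day; compress and ship these off-box
  file {
    path => "/var/log/archive/events-%{+YYYY-MM-dd}.json"
  }
}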

[Original post as follows]

I’ve been spending a lot of time with Elasticsearch recently, as I’ve been implementing logstash for our environment. Logstash, by the way, is a billion times awesome and I can’t recommend it enough for large-scale log management/search. Elasticsearch is pretty awesome too, but considering the sheer amount of data I was putting into it, I don’t feel satisfied with its replication-based redundancy — I need backups that I can save and restore at will. Since logstash creates a new Elasticsearch index for each day worth of logs, I want the ability to backup and restore arbitrary indices.

Elasticsearch has a concept of a gateway, wherein you can configure a gateway that maintains metadata and snapshots are regularly taken. “Regularly” as in every 10 seconds by default. The docs recommend using S3 as a gateway, meaning every 10s it’ll ship data up to S3 for backup purposes, and if a node ever needs to recover data, it can just look to S3 and get the metadata and fill in data from that source. However, this model does not support the “rotation”-style backup and restore I’m looking for, and it can’t keep up with the rate of data I’m sending it (my daily indices are about 15gb apiece, making for about 400k log entries an hour).

So I’ve come up with a pair of scripts that allow me to manage logstash/Elasticsearch index data, allowing for arbitrary restore of an index, as well as rotation so as to keep the amount of data that Elasticsearch keeps track of manageable. As always, I wrote my scripts for my environment, so I take no responsibility if they do not work in yours and instead destroy all your data (a distinct possibility). I include these scripts here because I spent a while trying to figure this out and couldn’t find any information elsewhere on the net.

The following script backs up today’s logstash index. I’m hopeless with timezones, so I managed to somehow ship my logs to logstash in GMT, which means my “day” ends at 5pm, when logstash closes its index and opens a new one for the new day. Shortly after logstash closes an index (stops writing to it, not “close” in the Elasticsearch sense), I run the following script from cron; it backs up the index, backs up the metadata, creates a restore script, and sticks it all in S3:

# herein we backup our indexes! this script should run at like 6pm or something, after logstash
# rotates to a new ES index and theres no new data coming in to the old one. we grab metadatas,
# compress the data files, create a restore script, and push it all up to S3.

TODAY=`date +"%Y.%m.%d"`
INDEXNAME="logstash-$TODAY" # this had better match the index name in ES
BACKUPCMD="/usr/local/backupTools/s3cmd --config=/usr/local/backupTools/s3cfg put"
YEARMONTH=`date +"%Y-%m"`
BACKUPDIR="/mnt/es-backups/$YEARMONTH"  # local staging dir for the tarball (assumed path; adjust)

# create mapping file with index settings. this metadata is required by ES to use index file data
echo -n "Backing up metadata... "
curl -XGET -o /tmp/mapping "http://localhost:9200/$INDEXNAME/_mapping?pretty=true" > /dev/null 2>&1
sed -i '1,2d' /tmp/mapping #strip the first two lines of the metadata
echo '{"settings":{"number_of_shards":5,"number_of_replicas":1},"mappings":{' >> /tmp/mappost 
# prepend hardcoded settings metadata to index-specific metadata
cat /tmp/mapping >> /tmp/mappost
echo "DONE!"

# now lets tar up our data files. these are huge, so lets be nice
echo -n "Backing up data files (this may take some time)... "
mkdir -p $BACKUPDIR
nice -n 19 tar czf $BACKUPDIR/$INDEXNAME.tar.gz $INDEXNAME 
echo "DONE!"

echo -n "Creating restore script... "
# time to create our restore script! oh god scripts creating scripts, this never ends well...
cat << EOF >> $BACKUPDIR/$   # (restore-script filename assumed)
#!/bin/bash
# this script requires $INDEXNAME.tar.gz and will restore it into elasticsearch
# it is ESSENTIAL that the index you are restoring does NOT exist in ES. delete it
# if it does BEFORE trying to restore data.

# create index and mapping
echo -n "Creating index and mappings... "
curl -XPUT 'http://localhost:9200/$INDEXNAME/' -d '`cat /tmp/mappost`' > /dev/null 2>&1
echo "DONE!"

# extract our data files into place
echo -n "Restoring index (this may take a while)... "
tar xzf $BACKUPDIR/$INDEXNAME.tar.gz
echo "DONE!"

# restart ES to allow it to open the new dir and file data
echo -n "Restarting Elasticsearch... "
/etc/init.d/es restart
echo "DONE!"
echo "DONE!" # restore script done

# push both tar.gz and restore script to s3
echo -n "Saving to S3 (this may take some time)... "
$BACKUPCMD $BACKUPDIR/$INDEXNAME.tar.gz s3://<your-bucket>/elasticsearch/$YEARMONTH/   # bucket/path is a placeholder -- point it at your own
$BACKUPCMD $BACKUPDIR/$ s3://<your-bucket>/elasticsearch/$YEARMONTH/
echo "DONE!"

# cleanup tmp files
rm /tmp/mappost
rm /tmp/mapping

Restoring from this data is just as you would expect — download the backed-up index tarball and its associated restore script to the same directory, chmod +x the restore script, then run it. It will automagically create the index and put the data in place. This has the benefit of making backed-up indices portable — you can “export” them from one ES cluster and import them into another.

As mentioned, because of logstash, I have daily indices that I back up; I also rotate them to prevent ES from having to search through billions of gigs of data over time. I keep 8 days worth of logs in ES (due to timezone issues) by doing the following:

# Performs 'rotation' of ES indices. Maintains only 8 indicies (1 week) of logstash logs; this script
# is to be run at midnight daily and removes the oldest one (as well as any 1970s-era log indices,
# as these are a product of timestamp fail).  Please note the insane amount of error-checking
# in this script, as ES would rather delete everything than nothing...

# Before we do anything, let's get rid of any nasty 1970s-era indices we have floating around
TIMESTAMPFAIL=`curl -s localhost:9200/_status?pretty=true |grep index |grep log |sort |uniq |awk -F\" '{print $4}' |grep 1970 |wc -l`
if [ $TIMESTAMPFAIL -gt 0 ] ; then
		echo "Indices with screwed-up timestamps found; removing"
		curl -s localhost:9200/_status?pretty=true |grep index |grep log |sort |uniq |awk -F\" '{print $4}' |grep 1970 | while read line
		do
				echo -n "Deleting index $line: "
				curl -s -XDELETE http://localhost:9200/$line/
				echo "DONE!"
		done
fi

# Get list of indices; should we rotate?
INDEXCOUNT=`curl -s localhost:9200/_status?pretty=true |grep index |grep log |sort |uniq |awk -F\" '{print $4}' |wc -l`
if [ $INDEXCOUNT -lt "9" ] ; then
		echo "Less than 8 indices, bailing with no action"
		exit 0
else
		echo "More than 8 indices, time to do some cleaning"
		# Let's do some cleaning!
		OLDESTLOG=`curl -s localhost:9200/_status?pretty=true |grep index |grep log |sort |uniq |awk -F\" '{print $4}' |head -n1`
		echo -n "Deleting oldest index, $OLDESTLOG: "
		curl -s -XDELETE http://localhost:9200/$OLDESTLOG/
		echo "DONE!"
fi

Sometimes, due to the way my log entries get to logstash, the timestamp is mangled, and logstash, bless its heart, tries so hard to index it. Since logstash is keyed on timestamps, though, this means every once in a while I get an index dated 1970 with one or two entries. There’s no harm save for any overhead of having an extra index, but it also makes it impossible to back those up or to be able to make any assumptions about the index names. I nuke the 1970s indices from orbit, and then, if there are more than 8 indices in logstash, drop the oldest. I run this script at midnight daily, after index backup. Hugest caveat in the world about the rotation: running `curl -s -XDELETE http://localhost:9200/logstash-10.14.2011/’ will delete index logstash-10.14.2011, as you’d expect. However, if that variable $OLDESTLOG is mangled somehow and this command is run: `curl -s -XDELETE http://localhost:9200//’, you will delete all of your indices. Just a friendly warning!
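One cheap guard against that failure mode is to refuse to run the DELETE unless the variable actually looks like a logstash index, something like:

# bail out unless $OLDESTLOG looks like a logstash index name
case "$OLDESTLOG" in
	logstash-*)
		curl -s -XDELETE "http://localhost:9200/$OLDESTLOG/"
		;;
	*)
		echo "Refusing to delete suspicious index name: '$OLDESTLOG'"
		exit 1
		;;
esac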

Renaming a node in chef

Too bad there’s no `knife node rename`, eh?

Here’s what you gotta do instead:

knife client delete oldname
knife node delete oldname

On the node itself:

rm /etc/chef/client.pem
sed -i 's/oldname/newname/g' /etc/chef/client.rb
ls /etc/chef/validation.pem # ensure it's there!
chef-client -N newname

This will register the new node name with chef. The runlist will be empty, so you’ll have to rebuild it. Voila!

Apache request-based throttling

Ok, theoretically my last post about mod_rpaf was supposed to lead to mod_qos working. It did, in the most technical way… it just made it instantly obvious that mod_qos was not the solution I was looking for! mod_qos performs QoS on a URI but applies it to all connecting clients, not just offenders. It’s best used for resource limiting, not for the abuse-stopping API throttling I’m after.

I grudgingly turned to mod_security. I’ve known all along that mod_security would be the best tool to help me reach my goal; however, mod_security is the least user-friendly piece of software that I’ve ever used, with a highly esoteric language and odd processing rules. Forced to sit down and make it work, however, I’ve come up with a few rules that may help others who wish to perform request-based throttling.

SecAction "phase:2,pass,nolog,initcol:IP=%{REQUEST_HEADERS.X-Forwarded-For}"
SecAction "phase:2,nolog,setvar:IP.hitcount=+1,deprecatevar:IP.hitcount=1/1"
SecRule IP:hitcount "@gt 3" "phase:2,pause:3000,nolog,allow,msg:'API abuser, throttling'"

First, I initialize a collection called “IP”, based on the X-Forwarded-For header. Because I’m using mod_rpaf, I could technically use the remote address, but “just in case” I opted for the X-Forwarded-For, since that’s much more important to me. It also prevents the load balancer from getting blocked… ever.

Second line is where I do the IP increment — and decrement. As you can see, for every hit from that IP I increment the IP.hitcount variable by 1; the ‘deprecatevar:IP.hitcount=1/1’ tells the variable to decrement the count by one per second. If the user makes one hit per second, they will never hit the limit. If they make 2 hits per second, the net gain will be 1 the first second, 2 the next, 3 the next, and so on.

The last line, of course, is where we do our test. If the hitcount is greater than 3, I’m allowing the request to go through, but adding a 3000ms pause — 3 seconds.

I configured these rules within my VirtualHost definition, and used Location tags to specify the URIs that require throttling. It works like a champ. In each of the rules, I’ve specified ‘nolog’, as it’s pretty spammy, though you’ll want to change that to ‘log’ for testing. Because I’m disabling mod_security’s spammy logging, I’m timing requests with a custom log format:

LogFormat "%h %l %u %t \"%r\" %>s %B \"%{Referer}i\" \"%{User-Agent}i\" %D" combined-time
CustomLog "/var/log/httpd/access_log" combined-time

The %D at the end of the LogFormat spits out the total time taken by Apache to fulfill the request in microseconds, which will include the artificial delay. With this CustomLog definition, you can now easily visualize throttled requests:

tail -f access_log |awk '($NF > 3000000)'
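Putting it together, the rules above wrapped in a Location block look roughly like this (the URI is hypothetical; use whatever endpoints you need to throttle):

<Location /api/v1/expensive-endpoint>
    SecAction "phase:2,pass,nolog,initcol:IP=%{REQUEST_HEADERS.X-Forwarded-For}"
    SecAction "phase:2,nolog,setvar:IP.hitcount=+1,deprecatevar:IP.hitcount=1/1"
    SecRule IP:hitcount "@gt 3" "phase:2,pause:3000,nolog,allow,msg:'API abuser, throttling'"
</Location>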

mod_rpaf and Amazon ELB

Amazon’s ELB service is nice — magical load balancers that just work, sitting in front of your servers, that you can update and modify on a whim. Of course, because it’s a load balancer (a distributed load balancer infrastructure, to be more precise), Apache and other applications sitting behind it see all the incoming traffic as coming from the load balancer — i.e., $REMOTE_ADDR holds one of the load balancer’s internal addresses instead of the end client’s public IP.

This is normal behavior when sitting behind a load balancer, and it’s also normal behavior for the load balancer to encapsulate the original client IP in an X-Forwarded-For header. Using Apache, we can, for example, modify LogFormat definitions to account for this, logging %{X-Forwarded-For}i to log the end user’s IP.
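For example, a combined-style format that records the forwarded client address instead of %h might look like this (the nickname is arbitrary):

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined-xff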

Where this falls short, however, is when you want to *do* things with the originating IP beyond logging. The real-world scenario I ran into was using mod_qos to do rate-limiting based on URIs within Apache — mod_qos tests against the remote IP, not the X-Forwarded-For, so using the module as is, I’m unable to apply any QoS rules against anything beyond the load balancer… which of course defeats the purpose.

Luckily, I’m not the only person to have ever run into this issue. The Apache module mod_rpaf is explicitly designed to address this type of situation by translating the X-Forwarded-For header into the remote address as Apache expects, so that other modules can properly run against the originating IP — not the load balancer.

ELB makes implementation of mod_rpaf much more difficult than it should be, however. ELB is architected as a large network of load balancers, such that incoming outside requests bounce around a bit within the ELB infrastructure before being passed to your instance. Each “bounce” adds an additional IP to X-Forwarded-For, essentially chaining proxies. Additionally, there are hundreds of internal IPs within ELB that would need to be accounted for to use mod_rpaf as is, as you must specify the proxy IPs to strip.

So I patched up mod_rpaf to work with ELB. I’ve been running it for a day or so in dev and it appears to be working as expected, passing the original client value to mod_qos (and mod_qos testing and working against that), but of course if you run into issues, please let me know (because your issues will probably show up in my environment as well).

Here is the patch:

--- mod_rpaf-2.0.c	2008-01-01 03:05:40.000000000 +0000
+++ mod_rpaf-2.0.c~	2011-08-25 20:04:39.000000000 +0000
@@ -136,13 +136,25 @@
 static int is_in_array(const char *remote_ip, apr_array_header_t *proxy_ips) {
-    int i;
+   /* int i;
     char **list = (char**)proxy_ips->elts;
     for (i = 0; i < proxy_ips->nelts; i++) {
         if (strcmp(remote_ip, list[i]) == 0)
             return 1;
     return 0;
+    */
+    return 1;
+static char* last_not_in_array(apr_array_header_t *forwarded_for,
+			       apr_array_header_t *proxy_ips) {
+    int i;
+    for (i = (forwarded_for->nelts)-1; i > 0; i--) {
+	if (!is_in_array(((char **)forwarded_for->elts)[i], proxy_ips))
+	    break;
+    }
+    return ((char **)forwarded_for->elts)[i];
 static apr_status_t rpaf_cleanup(void *data) {
@@ -161,7 +173,7 @@
     if (!cfg->enable)
         return DECLINED;
-    if (is_in_array(r->connection->remote_ip, cfg->proxy_ips) == 1) {
+    /* if (is_in_array(r->connection->remote_ip, cfg->proxy_ips) == 1) { */
         /* check if cfg->headername is set and if it is use
            that instead of X-Forwarded-For by default */
         if (cfg->headername && (fwdvalue = apr_table_get(r->headers_in, cfg->headername))) {
@@ -183,7 +195,8 @@
             rcr->old_ip = apr_pstrdup(r->connection->pool, r->connection->remote_ip);
             rcr->r = r;
             apr_pool_cleanup_register(r->pool, (void *)rcr, rpaf_cleanup, apr_pool_cleanup_null);
-            r->connection->remote_ip = apr_pstrdup(r->connection->pool, ((char **)arr->elts)[((arr->nelts)-1)]);
+            /* r->connection->remote_ip = apr_pstrdup(r->connection->pool, ((char **)arr->elts)[((arr->nelts)-1)]); */
+            r->connection->remote_ip = apr_pstrdup(r->connection->pool, last_not_in_array(arr, cfg->proxy_ips));
             r->connection->remote_addr->sa.sin.sin_addr.s_addr = apr_inet_addr(r->connection->remote_ip);
             if (cfg->sethostname) {
                 const char *hostvalue;
@@ -201,7 +214,7 @@
-    }
+    /* } */
     return DECLINED;

Or, if you’d prefer ez-mode, I rolled some RPMs of mod_rpaf that include this patch:


And, for completeness, mod_rpaf.conf:

LoadModule rpaf_module        modules/

RPAFenable On
RPAFsethostname On
RPAFproxy_ips 10.
RPAFheader X-Forwarded-For

Extra logging wrapper script for SES Postfix transport

I’m using Amazon’s SES service for my servers’ emails. To implement it, rather than re-writing all of our code to hook into the SES API, I simply configured Postfix to use the example script provided by Amazon. It works fine and dandy, with mails happily going out to their intended recipients via SES.

However, that’s not good enough for me. If you send a mail through SES and it bounces, you’ll receive the bounce message at the original From: address, as expected. But because a lot of ISPs/ESPs strip the original To: header in their bounce templates to prevent backscatter, and because SES mangles the message ID set on the email by Postfix (replacing it with its own), it’s very possible to get bounce messages that carry no information about the intended recipient. How do you do bounce management when you have nothing that links the bounce to the original email you sent?

While Amazon strips the message ID assigned by Postfix, it adds its own message ID — AWSMessageID. This value is returned by the SES API when you submit an email to the service; the provided example scripts, however, don’t do anything with this ID.

To address this issue in my environment, I wrote the following script, which I set as my Postfix transport in place of the stock Amazon example script:

# send mail via SES and create a log with returned messageid for bounce processing

MAILFROM=$1   # sender address, passed as the first argument by the Postfix pipe transport
RCPTTOLOG=`echo $* | awk '{$1=""; print $0}' | awk '{sub(/^[ \t]+/, "")};1'`
RCPTTO=`echo $RCPTTOLOG | sed -e 's/\ /,/g'`
TIMESTAMP=`date +"%Y-%m-%d %H:%M:%S"`
THEMAIL=`cat -`
SUBJECT=`echo "$THEMAIL" |awk '($0 ~ /Subject: /) {$1=""; print $0}' |awk '{sub(/^[ \t]+/, "")};1'`

# Hand the message off to Amazon's SES sender script here, capturing its response
# (which includes the SES message ID) in $OUTPUT; $ACCESS should point at the file
# holding your AWS key and secret. The exact invocation depends on your setup.

if echo "$OUTPUT" |grep -q Error ; then
	exit 1 # SES error, postfix should defer this msg
fi

MESSAGEID=`echo $OUTPUT |awk '{print $4}' |awk -F\> '{print $2}' |awk -F\< '{print " AWSMessageID=" $1}'`

# log
echo "$TIMESTAMP from=$MAILFROM to=\"$RCPTTOLOG\" subject=\"$SUBJECT\" $MESSAGEID" >> /var/log/ses_maillog

Set ACCESS to the location of the file containing your AWS key and secret, and of course configure paths as need be. The transport should be configured as follows in your Postfix

aws-email  unix  -       n       n       -       -       pipe
  flags=R user=mail argv=/usr/local/amazon/ ${sender} ${recipient}

You’ll get a log file at /var/log/ses_maillog that looks something like this:

2011-08-23 16:26:24 to="" subject="this is my email subject"  AWSMessageID=00000131f8f261e2-75f27db7-b6d2-43ca-9c26-9a4a92ecbfd0-000000
2011-08-23 16:26:23 to="" subject="Re: this is my email subject"  AWSMessageID=00000131f8f761b9-acfceec3-73ab-4d5e-8959-f7bb9ee00665-000000
2011-08-23 16:26:25 to="" subject="another email subject"  AWSMessageID=00000131f8f76669-1540d563-41c0-4ba9-adc0-122ee41f4b28-000000

Now you can grep grep grep away for the AWSMessageID to match the one in the bounce email to find the original recipient and update your lists accordingly.
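For example, to pull the original recipient for one of the message IDs above back out of the log:

grep 'AWSMessageID=00000131f8f261e2-75f27db7-b6d2-43ca-9c26-9a4a92ecbfd0-000000' /var/log/ses_maillog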

Add domains and users

Quick one-liner to take a list of domains and create Apache vhosts from a template, create users, set their home dirs and permissions, etc.

cat domains.out |while read line ; do DOMAIN=$line ; NODOTDOMAIN=`echo $DOMAIN | sed -e 's/\.//g'` ; mkdir -p /var/www/vhosts/$DOMAIN ; sed -e "s/$DOMAIN/g" /etc/httpd/vhost.d/default.vhost > /etc/httpd/vhost.d/$DOMAIN.conf ; useradd -d /var/www/vhosts/$DOMAIN $NODOTDOMAIN ; chown $NODOTDOMAIN:$NODOTDOMAIN /var/www/vhosts/$DOMAIN ; PASSWERD=`head -n 50 /dev/urandom | tr -dc A-Za-z0-9 | head -c8` ; echo $PASSWERD | passwd $NODOTDOMAIN --stdin ; echo "Domain: $DOMAIN" ; echo "User: $NODOTDOMAIN" ; echo "Password: $PASSWERD" ; echo ; done
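Unrolled into something more readable, it looks like the following. Note that the sed substitution assumes your default.vhost template contains a literal placeholder token (I'm assuming here) for the domain:

#!/bin/bash
# For each domain: create a vhost from the template, a matching system user, and a random password.
while read DOMAIN ; do
  NODOTDOMAIN=`echo $DOMAIN | sed -e 's/\.//g'`
  mkdir -p /var/www/vhosts/$DOMAIN
  # swap the assumed placeholder token in the template for the real domain
  sed -e "s/$DOMAIN/g" /etc/httpd/vhost.d/default.vhost > /etc/httpd/vhost.d/$DOMAIN.conf
  useradd -d /var/www/vhosts/$DOMAIN $NODOTDOMAIN
  chown $NODOTDOMAIN:$NODOTDOMAIN /var/www/vhosts/$DOMAIN
  PASSWERD=`head -n 50 /dev/urandom | tr -dc A-Za-z0-9 | head -c8`
  echo $PASSWERD | passwd $NODOTDOMAIN --stdin
  echo "Domain: $DOMAIN"
  echo "User: $NODOTDOMAIN"
  echo "Password: $PASSWERD"
  echo
done < domains.out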