Browse Category: one-liners

sed and newlines

sed’s really bad when it comes to newlines — and especially so on OSX. This snippet works quite well for “multiline” sedding:

test:
poops1
        poop
butts1
        butt

cat test |sed -e ':a' -e 'N' -e '$!ba' -e 's/s1\n        /s1, /g' 

output:
poops1, poop
butts1, butt
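For reference, those four -e expressions build a loop that slurps the whole file into sed’s pattern space, so the substitution can see the embedded newlines. Here’s the same command with each piece annotated (filename passed directly instead of through cat):

# :a      define a label named "a"
# N       append the next input line to the pattern space (lines joined by \n)
# $!ba    if this isn't the last line, branch back to label "a"
# s/.../  with the whole file in one pattern space, the substitution can match across \n
sed -e ':a' -e 'N' -e '$!ba' -e 's/s1\n        /s1, /g' test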

EC2 metadata get

Today I learned about the EC2 metadata service. Try it from any EC2 instance!

curl http://169.254.169.254/latest/meta-data/

for the list of metadata objects

curl http://169.254.169.254/latest/meta-data/public-ipv4

for the public IP, for example!
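A couple of other endpoints I find handy. This is a minimal sketch, assuming the instance still answers plain (IMDSv1) requests; newer instances that enforce IMDSv2 want a session-token header first:

# both paths are documented metadata endpoints
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo "$INSTANCE_ID is running in $AZ"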

Add domains and users

Quick one-liner to take a list of domains and create Apache vhosts from a template, create users, set their home dirs, permissions, etc.


cat domains.out |while read line ; do DOMAIN=$line ; NODOTDOMAIN=`echo $DOMAIN | sed -e 's/\.//g'` ; mkdir -p /var/www/vhosts/$DOMAIN ; sed -e "s/domain.com/$DOMAIN/g" /etc/httpd/vhost.d/default.vhost > /etc/httpd/vhost.d/$DOMAIN.conf ; useradd -d /var/www/vhosts/$DOMAIN $NODOTDOMAIN ; chown $NODOTDOMAIN:$NODOTDOMAIN /var/www/vhosts/$DOMAIN ; PASSWERD=`head -n 50 /dev/urandom | tr -dc A-Za-z0-9 | head -c8` ; echo $PASSWERD | passwd $NODOTDOMAIN --stdin ; echo "Domain: $DOMAIN" ; echo "User: $NODOTDOMAIN" ; echo "Password: $PASSWERD" ; echo ; done
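For readability, here’s the same loop spread over multiple lines with comments; nothing new, though note that passwd --stdin is a RHEL/CentOS-ism (Debian-family boxes would want chpasswd instead):

while read -r DOMAIN ; do
    NODOTDOMAIN=$(echo "$DOMAIN" | sed -e 's/\.//g')                   # strip dots for the username
    mkdir -p "/var/www/vhosts/$DOMAIN"                                  # docroot
    sed -e "s/domain.com/$DOMAIN/g" /etc/httpd/vhost.d/default.vhost \
        > "/etc/httpd/vhost.d/$DOMAIN.conf"                             # vhost config from the template
    useradd -d "/var/www/vhosts/$DOMAIN" "$NODOTDOMAIN"                 # user homed in the docroot
    chown "$NODOTDOMAIN:$NODOTDOMAIN" "/var/www/vhosts/$DOMAIN"
    PASSWERD=$(head -n 50 /dev/urandom | tr -dc A-Za-z0-9 | head -c8)   # random 8-character password
    echo "$PASSWERD" | passwd "$NODOTDOMAIN" --stdin                    # --stdin is RHEL/CentOS-specific
    echo "Domain: $DOMAIN" ; echo "User: $NODOTDOMAIN" ; echo "Password: $PASSWERD" ; echo
done < domains.out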

Enumerate columns for awk

I’m bad at counting, so when I’m using awk to print specific fields, I end up with greasy fingerprints on my screen as I manually count out each field. Thanks to my colleague James, here’s a script that counts for you!

awk 'NR == 1 { for (i=1;i<=NF;i++) {printf i " "} print ""} {print}' | column -t

Works with STDIN as is, assuming default field separator (space):

[kale@superhappykittymeow log]# tail -n 1 xferlog |awk 'NR == 1 { for (i=1;i<=NF;i++) {printf i " "} print ""} {print}' | column -t
1    2    3   4         5     6  7          8    9                          10  11  12  13  14    15   16  17  18
Sat  Jun  19  13:19:25  2010  1  127.0.0.1  220  /var/www/poop/wp-rss2.php  b   _   i   r   root  ftp  0   *   c

Or, if you're lazy like me, encapsulate it in an alias (note the '\'' dance needed to nest the single quotes):

alias count='awk '\''NR == 1 { for (i=1;i<=NF;i++) {printf i " "} print ""} {print}'\'' | column -t'

[kale@superhappykittymeow log]# tail -n 1 xferlog | count
1    2    3   4         5     6  7          8    9                          10  11  12  13  14    15   16  17  18
Sat  Jun  19  13:19:25  2010  1  127.0.0.1  220  /var/www/poop/wp-rss2.php  b   _   i   r   root  ftp  0   *   c
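If the nested single quotes in that alias get fiddly, a shell function does the same job with no escaping games:

count() { awk 'NR == 1 { for (i=1;i<=NF;i++) {printf i " "} print ""} {print}' | column -t ; }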

Can’t fork?

Can’t fork but need to see what’s going on? Hint: a box that can’t fork can often `exec’.

Here are a pair of slick bash functions that can be lifesavers in dire situations:

`ls’:

$ myls() { while [ $# -ne 0 ] ; do echo "$1" ; shift ; done ; }
$ myls /etc/s*
/etc/services
/etc/shells
/etc/syslog.conf

`cat’:

$ mycat() { while IFS="" read -r l ; do echo "$l" ; done < "$1" ; }
$ mycat /etc/shells
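And to make the `exec' hint concrete: exec replaces the running shell with the command, so no fork is needed; the trade-off is that your shell is gone once the command exits, so save it for a last look around:

$ exec ps auxww    # runs in place of the shell; the session ends when ps does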

Run Urchin on-demand for all profiles at once

There’s no built-in way in Urchin to re-run the processing job for all domains (such as after fixing a problem). This can, however, be done on the command line with a while loop:

[code lang="bash"]ls -alh ../usr/local/urchin/data/reports/ |awk '{print $NF}' |while read line ; do /usr/local/urchin/bin/urchin -p"$line" ; done[/code]

WHOIS visiting your site?

I’m fond of WHOIS data for getting an idea of who’s visiting a site, though most WHOIS servers return output that’s full of disclaimers and irrelevant detail. Instead, I much prefer Team Cymru’s batch WHOIS lookup server, whois.cymru.com.

First, extract your IPs:
[code lang="bash"]F=ips.out ; echo "begin">>$F ; echo "verbose">>$F ; awk '{print $1}' tech-access_log |sort |uniq>>$F ; echo "end" >>$F[/code]

Now send them to Cymru for processing:
[code lang="bash"]nc whois.cymru.com 43 < $F | sort > whois.out[/code]

Review whois.out at your leisure for detailed IP information. It’s well formatted, which makes it easy to script against:

91      | 128.113.197.128  | 128.113.0.0/16      | US | arin     | 1986-02-27 | RPI-AS - Rensselaer Polytechnic Institute
91      | 128.113.247.58   | 128.113.0.0/16      | US | arin     | 1986-02-27 | RPI-AS - Rensselaer Polytechnic Institute
9121    | 88.232.9.77      | 88.232.0.0/17       | TR | ripencc  | 2005-10-27 | TTNET TTnet Autonomous System
9       | 128.2.161.88     | 128.2.0.0/16        | US | arin     | 1984-04-17 | CMU-ROUTER - Carnegie Mellon University
9136    | 91.186.50.28     | 91.186.32.0/19      | DE | ripencc  | 2006-11-07 | WOBCOM WOBCOM GmbH - www.wobcom.de
9143    | 212.203.31.1     | 212.203.0.0/19      | NL | ripencc  | 2000-08-08 | ZIGGO Ziggo - tv, internet, telefoon

Easier-to-read MySQL “show table status”

[code lang="bash"]mysqlshow --status db_name |sort -n -k10 |awk -F\| '($6 !~ /0/)' |awk -F\| '{print $2 " " $6 " " $7 " " $14}' |egrep -v "^ "[/code]

Creates a much easier-to-read view of the output of “show table status”:

Name                    Rows   Avg_row_length   Update_time         
 wp_users                1      140              2009-08-08 04:13:07 
 wp_links                9      106              2009-10-16 12:57:32 
 wp_comments             14     464              2009-11-28 16:09:43 
 wp_usermeta             15     166              2009-11-29 06:41:19 
 wp_term_taxonomy        53     40               2009-11-20 14:06:21 
 wp_postmeta             141    46               2009-11-29 06:44:05 
 wp_options              172    4624             2009-11-29 06:40:59 
 wp_term_relationships   357    21               2009-11-21 02:35:42 
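If you’d rather have MySQL do the work, roughly the same view can be pulled from information_schema; a sketch, with db_name standing in for your database:

[code lang="bash"]mysql -e "SELECT table_name, table_rows, avg_row_length, update_time FROM information_schema.tables WHERE table_schema='db_name' AND table_rows > 0 ORDER BY table_rows"[/code]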

Calculate SMTP and POP3/IMAP bandwidth from qmail logs

[code lang="bash"](echo "smtp: `(cat maillog maillog.processed && zcat maillog.processed.*) | grep bytes | grep qmail: | awk '{sum=sum+$11} END { print sum}'`" && (cat maillog maillog.processed && zcat maillog.processed.*) | grep pop3 | grep LOGOUT | awk '{print $13,$14}' | sed 's/,//g;s/....=//g' | awk '{sumrcvd=sumrcvd+$1; sumsent=sumsent+$2} END {print "rcvd: ",sumrcvd,"\n" "sent: ",sumsent}') | awk '{total=total+$2; print} END {print "total: ",total/1024/1024 "MB"}'[/code]

This ugly one-liner comes to us courtesy of Chuck. Plesk calculates bandwidth statistics by literally reading the raw log files and performing math based on the byte totals noted in the log entries. This beast will run against the Plesk maillogs and give you a pretty summary of mail bandwidth:

[code]smtp: 397852373
rcvd: 228219
sent: 211813204
total: 581.64MB[/code]

Auto-iptables off IPs with high connection counts

via Paul (lovepig.org):

[code lang="bash"]netstat -npa --inet | grep :80 | sed 's/:/ /g' | awk '{print $6}' | sort | uniq -c | sort -n | while read line; do one=`echo $line | awk '{print $1}'`; two=`echo $line | awk '{print $2}'`; if [ $one -gt 100 ];
then iptables -I INPUT -s $two -j DROP; fi; done; iptables-save | grep -P '^-A INPUT' | sort | uniq -c | sort -n | while read line; do oneIp=`echo $line | awk '{print $1}'`; twoIp=`echo $line | awk '{print $5}'`; if [ $oneIp -gt 1 ]; then iptables -D INPUT -s $twoIp -j DROP; fi; done[/code]

This one-liner is quite effective when tossed into a file and run as a cronjob once per minute. Any IP with more than 100 concurrent connections (far more than any one IP should ever have on a standard webserver, quite honestly) will be blocked via iptables. As a cronjob, this script is extremely effective at dealing with small-to-midsize DDoSes: too much traffic for Apache or whatever service to handle, but not enough to saturate the pipe.
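For example, dropped into /etc/cron.d with the one-liner saved as a script (the path and filename here are just placeholders):

[code lang="bash"]* * * * * root /usr/local/sbin/block-floods.sh[/code]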

Obtaining Plesk user for a domain

…for a list of domains, without digging through the database!

[code lang="bash"]cat domains | sort |uniq |while read line ; do ls -ld /home/httpd/vhosts/$line/httpdocs |awk '{print $3}' ; done[/code]

‘domains’, of course, is a text file with a list of domains hosted on the server. Can be populated in whatever way you need. Easily plugged into other Plesk utilities (such as changing Plesk FTP passwords).

Combining text files as columns

To combine two (or more) text files as individual columns in the same file, such as:

file1:

[code]foo
foo1
foo2
foo3[/code]

file2:

[code]foobar
foobar1
foobar2
foobar3[/code]

into:

[code]foo foobar
foo1 foobar1
foo2 foobar2
foo3 foobar3[/code]

rather than using an ugly combination of sed and awk, you can use the `paste’ command:

[code lang="bash"]paste file1 file2[/code]
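By default paste joins the lines with a tab; if you want the single-space output shown above, or something like CSV, -d sets the delimiter:

[code lang="bash"]paste -d' ' file1 file2    # space-separated
paste -d, file1 file2      # comma-separated[/code]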

Curl with postdata and cookies

Great for command-line logging into sites to pull content for whatever reason.

[code lang="bash"]curl -c cookies.txt -d "username=username&password=password&action=login" -o /home/kale/outputfile.txt "http://www.domain.com/authenticated_page.php?foo=bar"[/code]

Of course, you’ll have to look at the source of the target site’s login page to see what variables it wants. I use it to grab a Cacti-generated graph that is normally password-protected so it can be included on another site: a cron’d script runs a line like the one above to log in and save the image locally.
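If the protected page won’t take credentials and hand back the content in one shot, the cookie jar can be reused across requests: log in once with -c to save the session cookie, then fetch with -b to send it back (the URLs here are placeholders):

[code lang="bash"]curl -c cookies.txt -d "username=username&password=password&action=login" "http://www.domain.com/login.php"
curl -b cookies.txt -o graph.png "http://www.domain.com/graph_image.php?graph_id=42"[/code]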
