Tuesday 31 December 2013

cassandra sstable version in filename

Look at the filename of the sstable: you'll see some letters between the hyphens/dashes, i.e. "-"; these are the version of your sstable. See below for possible values.

See: https://github.com/QwertyManiac/cassandra-cdh4/blob/master/src/java/org/apache/cassandra/io/sstable/Descriptor.java

Btw, cassandra "support" loves to send you to look at code, so download the code and get familiar with the basic structure (somehow).



        public static final Version LEGACY = new Version("a"); // "pre-history"
        // b (0.7.0): added version to sstable filenames
        // c (0.7.0): bloom filter component computes hashes over raw key bytes instead of strings
        // d (0.7.0): row size in data component becomes a long instead of int
        // e (0.7.0): stores undecorated keys in data and index components
        // f (0.7.0): switched bloom filter implementations in data component
        // g (0.8): tracks flushed-at context in metadata component
        // h (1.0): tracks max client timestamp in metadata component
        // hb (1.0.3): records compression ratio in metadata component
        // hc (1.0.4): records partitioner in metadata component
        // hd (1.0.10): includes row tombstones in maxtimestamp
        // he (1.1.3): includes ancestors generation in metadata component
        // hf (1.1.6): marker that replay position corresponds to 1.1.5+ millis-based id (see CASSANDRA-4782)
        // ia (1.2.0): column indexes are promoted to the index file
        // records estimated histogram of deletion times in tombstones
        // bloom filter (keys and columns) upgraded to Murmur3
        // ib (1.2.1): tracks min client timestamp in metadata component
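If you just want the version letters on the command line, they can be pulled straight out of an old-style sstable filename (the `<Keyspace>-<CF>-<version>-<generation>-<Component>` layout of this era); the filename below is a made-up example:

```shell
# Old-style sstable names: <Keyspace>-<CF>-<version>-<generation>-<Component>
# (example filename is hypothetical)
f="MyKeyspace-MyCF-hd-5-Data.db"
echo "$f" | awk -F- '{print $3}'   # -> hd
```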

Thursday 26 December 2013

Friday 6 December 2013

aws ec2 cli filter by tag name and value

  1. aws ec2 describe-instances --filter Name=tag:Name,Values=ADS-prod-ads
The horrible syntax of "and" filters:
  1.  aws ec2 describe-instances 
    1. --filter 
      1. '{"Name":"tag:backup","Values":["yes"]}' 
      2. '{"Name":"instance-state-name","Values":["running"]}' 
    2. non-breakout view below
aws ec2 describe-instances --filter '{"Name":"tag:backup","Values":["yes"]}' '{"Name":"instance-state-name","Values":["running"]}'
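Since each --filter argument is a JSON document, quoting mistakes are easy to make; a quick sanity check before running the command (assumes python3 is on the box) is:

```shell
# Validate each filter JSON snippet before handing it to the aws cli
for f in '{"Name":"tag:backup","Values":["yes"]}' \
         '{"Name":"instance-state-name","Values":["running"]}'; do
  echo "$f" | python3 -m json.tool >/dev/null || echo "bad JSON: $f"
done
```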

Saturday 30 November 2013

Wednesday 27 November 2013

Chromecast tricks

  1. is casting a tab
    1. don't do full screen
      1. instead, shrink browser window to size of original video
        1. what is put on the screen is relative to the window size
        2. seems to save on bandwidth
          1. passed through your wireless router
          2. less broken stream therefore

Friday 8 November 2013

Building latest collectd on Amazon Linux server in AWS

  1. yum install byacc flex automake libtool libgcrypt-devel glib2-devel libtool-ltdl-devel perl-ExtUtils-MakeMaker
    1. comments below imply this may be necessary: apt-get install bison
  2. git clone https://github.com/collectd/collectd.git
  3. cd collectd
  4. ./build.sh
  5. ./configure
  6. make 
 Some helpful optional libs; install them, then start the build over to pick them up
  1. yum install lvm2-devel net-snmp-devel liboping-devel libpcap-devel libesmtp-devel libcurl-devel libmnl-devel
Dated: Nov, 2013

Openbox: monitor laptop battery via CLI and notify-send


while (true);do acpi -b | perl -n -e 's/.*?(\d+)%.*/$1/;chomp;print "$_...";if ($_ <= 15) {`notify-send batalert:$_`};';sleep 180;done

Would like a beep, but can't get one, no PC speaker on MacBook Air and mplayer buffers.
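The percentage-extraction in that one-liner can be unpacked into a small shell function; the sample acpi line below is just illustrative:

```shell
#!/usr/bin/env bash
# Pull the charge percentage out of an "acpi -b" status line
battery_pct() {
  grep -o '[0-9]\+%' | head -1 | tr -d '%'
}

# On a real laptop: acpi -b | battery_pct
# Demo on a sample line:
echo "Battery 0: Discharging, 42%, 03:10:57 remaining" | battery_pct   # -> 42
```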

Wednesday 6 November 2013

Change X / Openbox screen brightness on CLI

  1. sudo apt-get -y install xbacklight
  2. xbacklight +10
  3. xbacklight +10
  4. xbacklight +10
  5. xbacklight -10
  6. xbacklight -10
  7. xbacklight -10

Tuesday 5 November 2013

Recover wireless of Ubuntu on MacBook Air 4,2

  1. sudo apt-get --reinstall install bcmwl-kernel-source
  2. sudo modprobe -r b43 ssb wl brcmfmac brcmsmac bcma
  3. sudo modprobe wl
Broadcom 802.11 Linux STA wireless driver source

wicd: Could not connect to wicd's D-bus interface

I don't know why, she swallowed the fly.

After

  1. Wicd needs to access your computer's network cards
  2. Could not connect to wicd's D-Bus interface

Do

  1. sudo mv -v /etc/resolv.conf /etc/resolv.conf.backup
  2. sudo ln -s /run/resolvconf/resolv.conf /etc/resolv.conf
  3. sudo rm -v /var/lib/wicd/resolv.conf.orig
  4. sudo service wicd start
  5. yum -y install wicd-gtk
  6. wicd-gtk

Saturday 2 November 2013

Zenoss on Amazon Linux

Someone saved my life tonight, Sugarbear: http://binarynature.blogspot.com/2012/11/zenoss-core-4-installation.html

Also taken: http://magazine.redhat.com/2007/12/04/hacking-rpms-with-rpmrebuild/

WARNING: USE AT OWN RISK
  1. remove non-critical, unmet dependencies in rpm
    1. get good version of rpmrebuild
      1. wget ftp://fr2.rpmfind.net/linux/opensuse/ports/update/12.3/noarch/rpmrebuild-2.9-7.4.1.noarch.rpm
      2. yum localinstall rpmrebuild-2.9-7.4.1.noarch.rpm
    2. wget http://downloads.sourceforge.net/project/zenoss/zenoss-4.2/zenoss-4.2.4/4.2.4-1897/zenoss_core-4.2.4-1897.el6.x86_64.rpm
    3. rpmrebuild -e -n -p zenoss_core-4.2.4-1897.el6.x86_64.rpm
      1. remove lines
        1. Requires:      libgcj
        2. %dir %attr(0755, root, root) "/etc/sudoers.d"
        3. # ensure that the system uses the /etc/sudoers.d directory
        4. SUDOERSD_TOKEN="#includedir /etc/sudoers.d"
        5. SUDOERSD_FOUND=`/bin/egrep "^$SUDOERSD_TOKEN" /etc/sudoers`
        6. if [ -z "$SUDOERSD_FOUND" ]; then
        7.    echo "# zenoss rpm, ensure that /etc/sudoers.d loads" >> /etc/sudoers
        8.    echo $SUDOERSD_TOKEN >> /etc/sudoers
        9. fi
      2. save file and "continue"
      3. note place resultant file is created, copy to local working directory
        1. if you can't find it where it is supposed to be
          1. yum -y install mlocate
          2. updatedb
          3. locate zenoss
  2. ntp
    1. ntpq -pn
    2. make sure ntp is installed and running
  3. java
    1. remove openjdk and install oracle java
      1. yum -y remove java jdk
      2. wget -N -O jre-6u31-linux-x64-rpm.bin http://javadl.sun.com/webapps/download/AutoDL?BundleId=59622
      3. chmod +x jre-6u31-linux-x64-rpm.bin
      4. ./jre-6u31-linux-x64-rpm.bin
    2. verify
      1. java -version
      2. echo $JAVA_HOME
    3. cover missing libgcj, not strictly necessary
      1. yum -y install ecj
      2. only for running java classes natively instead of using JVM
        1. about as obscure as it gets
  4. Add extra repos
    1. rpm --import http://dev.zenoss.org/yum/RPM-GPG-KEY-zenoss
    2. yum-config-manager --enable epel
    3. yum --nogpgcheck -y localinstall http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
  5. install some deps
    1. yum -y install memcached net-snmp
  6. start some dep services
    1. service memcached start
    2. service snmpd start
    3. chkconfig memcached on
    4. chkconfig snmpd on
  7. mysql
    1. yum -y install mysql-server
    2. service mysqld start
    3. chkconfig mysqld on
  8. install amazon linux compatible zenoss rpm you created above
    1. yum --nogpgcheck localinstall zenoss-4.2.4-1897.el6.x86_64.rpm
    2. echo > /opt/zenoss/var/zenpack_actions.txt
      1. you REALLY don't want these right now, install by-hand later
  9. start more dep services
    1. service rabbitmq-server start
    2. chkconfig rabbitmq-server on
  10. start and stop zenoss core, dbs and files are created
    1. service zenoss start;sleep 5;service zenoss stop
  11. post zenoss updates as zenoss user, then back to root user
    1. su -l zenoss
    2. wget --no-check-certificate https://raw.github.com/osu-sig/zenoss-autodeploy-4.2.3/master/secure_zenoss.sh
    3. sh secure_zenoss.sh
    4. egrep 'user|password' $ZENHOME/etc/global.conf | grep -v admin
    5. zodbpw=$(grep zodb-password $ZENHOME/etc/global.conf | awk '{print $2}')
    6. sed -i.orig "5s/zenoss/$zodbpw/" $ZENHOME/etc/zodb_db_{main,session}.conf
    7. tail -n +1 $ZENHOME/etc/zodb_db_{main,session}.conf
    8. exit
  12. set mysql password for zenoss user using password generated above
    1. mysql -u root -p
    2. SET PASSWORD FOR 'zenoss'@'localhost' = PASSWORD('18zmcTgYsA+AjczljwQd');
      1. sub your password in instead
  13. create zenoss user for rabbitmq
    1. vim set-rabbitmq-perms.sh
      1. grab code below and put in here
    2. sh set-rabbitmq-perms.sh
    3. service rabbitmq-server restart
  14. start zenoss final time
    1. service zenoss restart
    2. chkconfig zenoss on
    3. verify
      1. su -l zenoss -c 'zenoss status'
      2. rabbitmqctl -p /zenoss list_queues
  15. Add this line back to the end of /etc/sudoers
    1. #includedir /etc/sudoers.d
    2. it was removed during the rpm tweak in step 1
NOTE: this is probably not perfect, but very close, please, send corrections

###################################################
## code for set-rabbitmq-perms.sh
#!/usr/bin/env bash
set -e
ZENHOME="/opt/zenoss"
VHOSTS="/zenoss"
USER="zenoss"
PASS="grep amqppassword \$ZENHOME/etc/global.conf | awk '{print \$2}'"
if [ $(id -u) -eq 0 ]
then
  RABBITMQCTL=$(which rabbitmqctl)
  $RABBITMQCTL stop_app
  $RABBITMQCTL reset
  $RABBITMQCTL start_app
  $RABBITMQCTL add_user "$USER" "$(su -l zenoss -c "$PASS")"
  for vhost in $VHOSTS; do
    $RABBITMQCTL add_vhost "$vhost"
    $RABBITMQCTL set_permissions -p "$vhost" "$USER" '.*' '.*' '.*'
  done
  exit 0
else
  echo "Error: Run this script as the root user." >&2
  exit 1
fi

Listen to internet radio without Flash on Mac and Linux CLI

NOTE: nice classical music option! https://github.com/klutometis/radio/blob/master/radio.sh
  1. OS type
    1. Mac
      1. brew install mplayer
        1. See this if you hit the bug: https://github.com/mxcl/homebrew/issues/23503
    2. Linux (Debian-based)
      1. apt-get install mplayer
  2. play on CLI
    1. mplayer <stream>
      1. e.g. mplayer http://www.antenne.de/webradio/channels/das-schlager-karussell.m3u 
Or/else:
  1. paste stream link into a browser directly
  2. use VLC to play stream

Read ext4 on Mac


  1. install xcode
  2. install brew
  3. brew install ext4fuse
    1. follow any instructions on changing permissions
  4. ext4fuse <device> <mountpoint>

Wednesday 30 October 2013

Latest aws cli tools on Redhat

  1. wget https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
  2. unzip awscli-bundle.zip
  3. sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
 Result is in /usr/local/bin, so set up your PATH accordingly.

Wednesday 23 October 2013

aws cli run-instances block-device-mappings ephemeral encrypted

aws --version => aws-cli/1.1.1 Python/2.6.8 Linux/3.4.43-43.43.amzn1.x86_64
  1. aws ec2 run-instances 
    1. --image-id 
      1. ami-eeff1122 
    2. --instance-type 
      1. m2.2xlarge 
    3. --security-group-ids 
      1. sg-eeff1122
    4. --subnet-id 
      1. subnet-eeff1122
    5. --private-ip-address 
      1. 10.0.0.2
    6. --user-data 
      1. file://meta_myserver.txt 
    7. --block-device-mappings 
      1. '[{ "DeviceName":"/dev/sdb", "VirtualName":"ephemeral0" }]'
For 50G EBS attached on boot (auto-deleted on terminate unless you override), block device mapping becomes:
  1.  '[{ "DeviceName":"/dev/sdb", "VirtualName":"ephemeral0" },{"DeviceName":"/dev/sdc","Ebs":{"VolumeSize":50}}]'
WARNING: "Ebs" is very case sensitive here.

To encrypt the Ebs volume, add "Encrypted": true to the device params like so:
  1.  {"DeviceName":"/dev/sdc","Ebs":{"VolumeSize":50,"Encrypted": true}}


Tuesday 15 October 2013

Use rvm in cron

  1. rvm list
    1. find what looks like your gems set, e.g
        1. ruby-1.9.3-p194
  2. echo $rvm_path/bin, e.g.
    1. /usr/lib/rvm/bin
  3. tack on the output of #1 to the output of #2, e.g.
    1. /usr/lib/rvm/bin/ruby-1.9.3-p194
  4. your cron entry should be the result of #3 followed by your ruby script, e.g.
    1. 0 0 * * * /usr/lib/rvm/bin/ruby-1.9.3-p194 /opt/mydir/myscript.rb

Test exim from CLI without "mail" command

If you don't have "mail" on the CLI for whatever, weird (Redhat-based) reasons, circumvent like so:
  1. /path/to/exim -v 'user@domain'
  2. type a multi-line message here ending with blank line
  3. hit ^D to end message and send
  4. you should be returned to shell
Taken: http://atmail.com/kb/2008/testing-email-with-exim/

Saturday 5 October 2013

Edit files on a remote server via your Mac using ssh, sshfs and brew


  1. install latest xcode
  2. install brew
  3. install sshfs using brew
    1. make sure to change any permissions specified
  4. mkdir mytmpdir
  5. sshfs -o uid=<your local numerical id> root@<remote server>:<remote dir> mytmpdir
    1. e.g. sshfs -o uid=501 root@10.1.0.100:images mytmpdir
  6. edit files that appear in tmpdir, and when you save them, the remote files will be updated
Unmount
  1. umount mytmpdir

Friday 4 October 2013

Simple unbound upstart script

  1. put below in /var/tmp/unbound.conf
  2. pkill unbound
  3. lsof -nP -i :53
  4. pgrep unbound
  5. cp -v /var/tmp/unbound.conf /etc/init/
  6. start unbound
  7. status unbound
  8. status unbound
  9. start unbound
start on runlevel [3]
expect fork
exec unbound

Thursday 3 October 2013

Sanity of growing a striped LVM volume

Quote:

However, with LVM you can easily grow a logical volume. But, you cannot use stripe mapping to add a drive to an existing striped logical volume because you can’t interleave the existing stripes with the new stripes. This link explains it fairly concisely.

    “In LVM 2, striped LVs can be extended by concatenating another set of devices onto the end of the first set. So you can get into a situation where your LV is a 2 stripe set concatenated with a linear set concatenated with a 4 stripe set.”

Taken: Pick Your Pleasure: RAID-0 mdadm Striping or LVM Striping?

Tuesday 1 October 2013

Create isolated bucket on S3

  1. setup
    1. create IAM group
      1. add simple, custom policy below
      2.  do not add any other policies to group
    2. create IAM user and put in above IAM group
      1. create and download key and secret for user
    3. create bucket "mybucket01" in S3
      1. you don't have to touch perms of bucket itself
  2. client
    1. install s3fox addon for Firefox from www.s3fox.net
      1. older versions FAIL! get it only at www.s3fox.net
    2. open s3fox addon
      1. Firefox -> Tools -> S3 Organizer
    3. add only one user to "Manage Accounts" using user key and secret
    4. in right-hand window of s3fox add "/mybucket01" NOT "/"
      1. "/" will give you "Access Denied"
        1. because user does not have perms to list root buckets, only itself
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::mybucket01"
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::mybucket01",
                "arn:aws:s3:::mybucket01/*"
            ]
        }
    ]
}   

Snapshot AWS instance store as AMI

  1. install api-tools
  2. install ami-tools
  3. generate key / cert
  4. create IAM user 
  5. upload cert
  6. java install / export JAVA_HOME
  7. export key and secret
  8. ec2-bundle-vol 
    1. --user  <AWS acct #> 
    2. --privatekey /myhome/my-key.pem
    3. --cert /myhome/my-cert.pem
    4. --arch x86_64
    5. --destination /var/tmp
    6. --exclude
      1. /backup,
      2. /mnt,
      3. /swapfile
  9. ec2-upload-bundle
    1. --manifest /var/tmp/image.manifest.xml
    2. --bucket mybucket/hostname
    3. --access-key <AWS Key>
    4. --secret-key <AWS Secret>
    5. --location EU
  10. ec2-register
    1. --region eu-west-1
    2. --name "myaminame"
    3. --description "Backing up hostname"
    4. mybucket/hostname/image.manifest.xml
Taken:
  1. http://www.dowdandassociates.com/content/howto-create-an-instance-store-backed-amazon-ec2-ami/
  2. http://www.dowdandassociates.com/content/howto-install-aws-cli-amazon-elastic-compute-cloud-ec2-ami-tools/
  3. http://www.dowdandassociates.com/content/howto-install-aws-cli-amazon-elastic-compute-cloud-ec2-api-tools/
NOTE: the above links' content has typos in very essential parts; proofread all steps

Monday 30 September 2013

Monitoring Zookeeper

Option 1
  1. yum -y install git
  2. cd
  3. mkdir bin
  4. mkdir tools
  5. cd tools
  6. git clone https://github.com/phunt/zktop.git
  7. nice updatedb
  8. locate zoo.cfg
    1. jot this path down for step below
    2. let's call it "mypathtozoocfg"
    3. the name of your zk conf may vary, adjust if so
  9. cd
  10. cd bin
  11. ln -s /root/tools/zktop/zktop.py .
    1. make sure you put the '.' on the end of that command
  12. /root/bin/zktop.py --config /<mypathtozoocfg>/zoo.cfg
Option 2, by hand
  1. echo srvr | nc localhost 2181
  2. echo stat | nc localhost 2181
  3. echo cons | nc localhost 2181
  4. etc.
  5. Try: watch -d "echo stat | nc localhost 2181" 
    1. on all zk nodes 
    2. in separate terms
Break-out
  1. srvr
    1. version
    2. latencies
    3. received client requests
    4. sent client responses and notifications
    5. outstanding requests
    6. zxid, cluster id
    7. mode in cluster, leader or follower
    8. node count (?)
  2. stat
    1. similar to srvr
      1. but has actual connections listed by IP near top
Taken: http://phunt1.wordpress.com/2010/03/29/monitoring-zookeeper-3-3-even-more-cussin/

Friday 20 September 2013

Set IPs on vagrant-lxc VMs

Cross-communication is always nice.

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  config.vm.define "web", primary: true do |web|
    web.vm.box = "quantal64"
    web.vm.provider :lxc do |lxc|
      lxc.customize 'network.ipv4', '10.0.3.100/32'
    end
  end

  config.vm.define "db" do |db|
    db.vm.box = "quantal64"
    db.vm.provider :lxc do |lxc|
      lxc.customize 'network.ipv4', '10.0.3.101/32'
    end
  end

end


FYI, versioning info:

vagrant -v
Vagrant 1.3.3

vagrant plugin list
vagrant-lxc (0.6.0)

Tuesday 6 August 2013

vncserver on Amazon Linux on Amazon's AWS

NOTE: this first step may be outdated in new versions of Amazon Linux, which provides libjpeg-turbo.

First, get a good version of libjpeg-turbo:
  1. yum clean all
  2. yum --enablerepo=amzn-preview install libjpeg-turbo
  3. see: https://forums.aws.amazon.com/thread.jspa?threadID=121128
Necessary packages:
  1. gnutls-2.8.5-10.el6.x86_64.rpm
  2. libfontenc-1.0.5-2.el6.x86_64.rpm
  3. libtasn1-2.3-3.el6.x86_64.rpm
  4. libxdmcp-1.1.1-7.ram1.x86_64.rpm
  5. libXfont-1.4.5-2.el6.x86_64.rpm
  6. libxkbfile-1.0.6-1.1.el6.x86_64.rpm
  7. libXmu-1.1.1-2.el6.x86_64.rpm
  8. mesa-dri-drivers-9.0-0.7.el6.x86_64.rpm
  9. pixman-0.26.2-5.el6_4.x86_64.rpm
  10. tigervnc-license-1.3.0-16.el6.noarch.rpm
  11. tigervnc-server-1.3.0-16.el6.x86_64.rpm
  12. tigervnc-server-minimal-1.3.0-16.el6.x86_64.rpm
  13. xkeyboard-config-2.6-6.el6.noarch.rpm
  14. xorg-x11-proto-devel-7.6-25.el6.noarch.rpm
  15. xorg-x11-xauth-1.0.2-7.1.el6.x86_64.rpm
  16. xorg-x11-xkb-utils-7.7-4.el6.x86_64.rpm
Necessary packages for fluxbox:
  1. pyxdg-0.18-1.el6.noarch.rpm
Here's a link to all the packages needed: rpms

Start by trying to install tigervnc-server. That will fail. Follow the dependencies down, installing packages one by one as they come up. If they fail, install their dependencies first.

These packages were special cases
  1. mesa-dri-drivers-9.0-0.7.el6.x86_64.rpm
    1. just "--nodeps" install this one
    2. the libraries of the dependencies are never called from my experience
  2. libXtst-1.2.1-2.el6.x86_64.rpm
    1. there is a more recent version of this already in Amazon Linux, so just skip this one and "--nodeps" the parent package, which is libXmu-1.1.1-2.el6.x86_64.rpm
If libXdmcp.so.6 is not found, add its location to /etc/ld.so.conf and run "ldconfig"; e.g., if the location is /usr/lib/libXdmcp.so.6, add the line "/usr/lib".

Once I got the vncserver up with

  vncserver :66 -localhost

it was listening on port 5966 (5900 + the display number 66, the VNC port) and also on 6066 (6000 + 66, the X server's own port). The 5966 port is the one you want.

See previous post on Redhat VNC for details, click here.

Wednesday 31 July 2013

DHCP on CLI for Ubuntu-like systems


  1. Add these lines to /etc/network/interfaces, or tweak existing eth0 lines
    1. auto eth0
    2. iface eth0 inet dhcp
  2. bring it up
    1. sudo ifup eth0
  3. bring it down
    1. sudo ifdown eth0
  4. add some stuff to /etc/dhcp/dhclient.conf
    1. interface "eth0" {
    2.     prepend domain-name-servers 192.168.100.10, 8.8.8.8, 8.8.4.4;
    3.     supersede domain-search "mydom.com", "mydom-vpc.internal";
    4. }
  5. flush on occasion
    1. ip addr flush eth0

Saturday 27 July 2013

Match different sets of equally distributed things into groups: hosts and weeks in the year

Use modulo if you want to match up sets of things into groups.

They have to be equally distributed by number.

Here it is with modulo 3: hosts on left, weeks of the year on right.
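A shell sketch of the same idea, with made-up hostnames: index each host, take index mod 3, and every host with the same remainder lands in the same group of weeks.

```shell
#!/usr/bin/env bash
# Round-robin items into N groups with modulo
group_of() {  # usage: group_of <index> <num_groups>
  echo $(( $1 % $2 ))
}

hosts=(web01 web02 web03 db01 db02 db03)   # hypothetical hosts
for i in "${!hosts[@]}"; do
  echo "${hosts[$i]} -> group $(group_of "$i" 3)"
done
```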


Wednesday 24 July 2013

Clear CLI with 1000 blank lines

For when you really want older output way out of your way, e.g., debugging, copying/pasting.
  1. for i in {1..1000};do echo;done
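An equivalent without the loop, using printf's argument-reuse trick:

```shell
# %.0s consumes one argument per repetition but prints nothing,
# so only the literal \n is emitted, once per brace-expanded argument
printf '\n%.0s' {1..1000}
```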

Thursday 18 July 2013

Zenoss: remodel all Linux servers at once



  1. su - zenoss
  2. zenmodeler run --path=/Server/Linux 

Wednesday 17 July 2013

Mac "Screen Sharing" using vncviewer and ssh tunnel

Problem: Remmina (or similar) fails to connect to your Mac over Cisco wireless (or whatever router is the problem), and you are tired of having to physically connect your Mac to make VNC work.

  1. make sure "Screen Sharing" is on as normal on your Mac
  2. ssh -L5900:localhost:5900 192.168.X.X
    1. replace 192.168.X.X with the IP of your Mac
    2. "Remote Login" is on
    3. set display to "Scaled" and "1024x768" if your Mac is just an Outlook client now
  3. vncviewer -encodings 'copyrect tight zrle hextile' localhost:5900

Monday 15 July 2013

Selenium test on CLI in 5 minutes using Java

This took me two weeks to nail down looking at it here and there.
  1. mkdir stests
  2. cd stests
  3. wget http://selenium.googlecode.com/files/selenium-java-2.33.0.zip
  4. unzip selenium-java-2.33.0.zip
  5. mkdir jars
  6. find selenium-2.33.0 -type f -name '*jar' -exec mv -v {} jars \;
  7. rm -rfv selenium-*
  8. mkdir src
  9. vi src/MyTest.java
    1. use below code
  10. mkdir out
  11. javac -d out -cp 'jars/*' src/MyTest.java
    1. ignore "Notes" output
  12. cd out
  13. rm -rfv org
  14. java -cp '../jars/*:.' MyTest

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedCondition;
import org.openqa.selenium.support.ui.WebDriverWait;

public class MyTest  {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://www.google.com");
        WebElement element = driver.findElement(By.name("q"));
        element.sendKeys("Cheese!");
        element.submit();
        System.out.println("Page title is: " + driver.getTitle());
        (new WebDriverWait(driver, 10)).until(new ExpectedCondition<Boolean>() {
            public Boolean apply(WebDriver d) {
                return d.getTitle().toLowerCase().startsWith("cheese!");
            }
        });
        System.out.println("Page title is: " + driver.getTitle());
        driver.quit();
    }
}

Saturday 6 July 2013

Could not update ICEauthority file /home/myuser/.ICEauthority


Your home directory perms got messed up somehow
  1. sudo chown myuser:myuser /home/myuser
If it still doesn't work, this may be necessary as well
  1. sudo chmod 750 /home/myuser

Friday 5 July 2013

apache-cassandra11 conflicts with apache-cassandra11-1.1.11-1.noarch

The Error:

Running rpm_check_debug
ERROR with rpm_check_debug vs depsolve:
apache-cassandra11 conflicts with apache-cassandra11-1.1.11-1.noarch

Solution:
  1. wget http://rpm.riptano.com/community/noarch/apache-cassandra11-1.1.11-1.noarch.rpm
  2. rpm -i apache-cassandra11-1.1.11-1.noarch.rpm --nodeps
This might be necessary before step 2, but make sure you back up your data first, even this should not delete it, you never know:
  1. yum remove apache-cassandra11

Thursday 4 July 2013

Poke an ssh tunnel to your house


  1. remote server
    1. ssh -R 19999:localhost:22 myhomeuser@myhome.domain.org
  2. home server
    1. ssh myremoteuser@localhost -p 19999

  1. use below on your remote server to keep connection open
    1. ~/.ssh/config
Host myhome.domain.org
  User myhomeuser
  ServerAliveInterval 60

Wednesday 3 July 2013

bash: funky hostname expansion in for loop


  1. for host in myhosts-{2{2,{6..9}},3{2,5,{7..9}}}.mydomain.com
    1. do 
      1. echo $host
      2. echo "ssh $host cmd"
  2. done
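To see what the pattern expands to before looping over it, echo it, or capture it in an array:

```shell
# Preview the brace expansion; no ssh involved
echo myhosts-{2{2,{6..9}},3{2,5,{7..9}}}.mydomain.com

# Or capture it for counting/inspection
hosts=(myhosts-{2{2,{6..9}},3{2,5,{7..9}}}.mydomain.com)
echo "${#hosts[@]} hosts"   # -> 10 hosts
```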

Thursday 27 June 2013

Redhat: vnc to remote server

NOTE: if the vncserver insists on starting on a port other than 5966, like 6099, wipe the ~/.vnc directory and start over again. If that doesn't help, change the second instance of 5966 below to 6066 in the port forwarding ssh command, e.g. '-L 5966:localhost:6066'.
  1.  as remote root on myhost
    1. yum install tigervnc
    2. yum install tigervnc-server
    3. yum install libXfont pixman
    4. yum install fluxbox
    5. yum install firefox
  2. as a remote user, myuser, on myhost
    1. vncserver :66 -localhost
      1. set a password, call it mypassword
  3. as local user
    1. ssh -L 5966:localhost:5966 myuser@myhost
      1. leave running and do next step in another local term
    2. vncviewer -encodings 'copyrect tight zrle hextile' localhost:5966
      1. authenticate with mypassword
  4. as a remote user, myuser, on myhost
    1. export DISPLAY=:66
    2. xterm &
    3. fluxbox &
    4. firefox &

Wednesday 26 June 2013

xvfb


  1. sudo apt-get install xfonts-100dpi xfonts-75dpi xfonts-scalable xfonts-cyrillic
  2. sudo apt-get install xvfb
  3. sudo apt-get install xtightvncviewer
  4. apt-get install x11vnc
  5. sudo apt-get install fluxbox
  6. export DISPLAY=:1
  7. Xvfb :1 -screen 0 1024x768x16 &
  8. fluxbox &
  9. x11vnc -display :1 -bg -nopw -listen localhost -xkb
  10. export DISPLAY=:0
  11. vncviewer -encodings 'copyrect tight zrle hextile' localhost:5900
  12. you should see fluxbox running within another window which you can navigate
  13. try, for fun
    1. export DISPLAY=:1
    2. xterm
    3. pkill fluxbox
    4. fluxbox &
Gets weirder
  1. close the above vncviewer window
  2. x11vnc -display :1 -bg -nopw -listen localhost -xkb
  3. x11vnc -display :1 -bg -nopw -listen localhost -xkb
  4. x11vnc -display :1 -bg -nopw -listen localhost -xkb
  5. vncviewer -encodings 'copyrect tight zrle hextile' localhost:5900
  6. vncviewer -encodings 'copyrect tight zrle hextile' localhost:5901
  7. vncviewer -encodings 'copyrect tight zrle hextile' localhost:5902
Launch apps in the new display
  1. sudo apt-get install firefox
  2. DISPLAY=:1 firefox &
Note: if vncviewer is not available on the CLI, use xtightvncviewer instead, same thing

See: http://en.wikipedia.org/wiki/Xvfb

Tuesday 25 June 2013

Zenoss: multi-graph report


Note: this used version 3.x; navigation in other versions may vary slightly
  1. Make a group
    1. create a new group under Infrastructure
    2. name it group001
    3. drag-and-drop a bunch of related servers into it
  2. Reports -> Multi-Graph Reports, left-nav
    1. Add Multi-Graph Report, bottom left-nav '+' sign
      1. name it report001
      2. Collections
        1. Add Collection
          1. def: a collection is just a previously defined set of devices
            1. like your group001
          2. name it collection001
            1. Group, in drop-down
            2. click on group001
            3. Add to Collection
        2. nav in 3.x sucks, click back on report name
          1. upper middle-nav "breadcrumb"
      3. Graph Definitions
        1. Add Graph
          1. name it graph001
          2. Graph Points
            1. Add DataPoint
              1. laLoadInt15_laLoadInt15
                1. hard to go wrong with this data point
                2. later, you can explore others
                3. naming can be very, very ugly
                  1. e.g. os/interfaces/eth0/ifOutOctets_ifOutOctets
          3. nav sucks, click back on report name in breadcrumb
      4. Graph Groups
        1. Add Graph Group
          1. name it graphgroup001
          2. select
            1. collection: collection001
            2. graph definition: graph001
            3. method: All devices on single graph
            4. save
        2. nav sucks, click back on report name in breadcrumb
    2. View Report, upper, upper left-nav
      1. should see some points plotted on a graph for all servers in your "group"
      2. one can always go back to reports main screen to view report

Wednesday 19 June 2013

tsunami-udp: faster than rsync


  1. build
    1. sudo apt-get install git gcc
    2. sudo apt-get install automake autoconf
    3. git clone git://github.com/rriley/tsunami-udp.git
    4. cd tsunami-udp
    5. ./recompile.sh
    6. sudo make install
  2. run
    1. you'll need a port open to allow direct connection from client to server
      1. unfortunately, this doesn't work through NAT firewalls alone
      2. firewall / port forwarding
        1. to server, TCP, 46224 by default
        2. to client, UDP, 46224 by default
    2. start up server
      1. tsunamid myfile.gz
    3. connect with client
      1. tsunami set rate 5M connect myserver.domain.com get myfile.gz
      2. it will flood your connection if you don't set the rate properly
  3. documentation
    1. http://tsunami-udp.cvs.sourceforge.net/viewvc/tsunami-udp/docs/USAGE.txt
    2. splits files automatically
    3. allows wildcards ("*") in the server and client commands
      1. client will auto-find all files served, one after the next
      2. escape it with a backslash, i.e. get \*, in the client command
        1. so bash doesn't interpret the asterisk
  4. undocumented
    1. doesn't do subdirectories, better tar that up and have plenty of disk space

Tuesday 18 June 2013

bash substring matching


  1. #!/bin/bash
  2. [[ "$(hostname -s)" =~ dev ]] && exit
  3. echo "we are not a dev host"
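A regex isn't required for a plain substring test; [[ ]] also does glob matching, which reads more clearly:

```shell
#!/usr/bin/env bash
# Glob (pattern) match inside [[ ]]: the * must be unquoted
h="myhost-dev-03"   # hypothetical hostname
if [[ "$h" == *dev* ]]; then
  echo "dev host"
else
  echo "not a dev host"
fi
```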

Monday 17 June 2013

telnet vs netcat


  1. netcat
    1. prints only what is sent by the remote host
  2. telnet
    1. not suitable for arbitrary binary data 
      1. reserves some bytes as control characters 
    2. quits when its input runs out
      1. you may not see what the other end sends
    3. doesn't do UDP

Friday 14 June 2013

Fix bad/wrong aclocal version during make


  1. autoreconf -fi
    1. updates generated configuration files
This was necessary when building tsunami-udp from cvs repository, the configure files were old/incompatible.

Taken: http://stackoverflow.com/questions/8865093/should-a-configure-script-be-distributed-if-configure-ac-is-available

Thursday 13 June 2013

Exclude domains in your google search results


  1.  Put '-' in front of 'site:' operator
    1. e.g.
      1. Try: how to learn tibco -site:tibco.com -site:tibcommunity.com
      2. searches for materials on "how to learn tibco" while ignoring all Tibco's noise
FYI: It seems there is a copyright, so searching for "SOA" instead might lead to more books with desired material covered.

Monday 10 June 2013

Out of inodes: file write error (No space left on device)


  1. df -hi
    1. proves you are out of inodes or not
    2. cause is most likely tons of small files in some "problem directory", poke around
  2. find <random_dir> -type f | wc -l
    1. gives a count of the files in that subdir
    2. common problem dirs
      1. /var/spool/<XYZ>
      2. /tmp
  3. find <problem_dir> -type f -delete
    1. deletes one file at a time
    2. rm with a wildcard like * can fail with "argument list too long" because the shell has to expand every filename first
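To find the problem directory faster, count files per candidate directory and sort; the directory list here is just a guess at common offenders:

```shell
#!/usr/bin/env bash
# Count regular files under a directory (each file costs an inode)
count_files() {
  find "$1" -xdev -type f 2>/dev/null | wc -l | tr -d ' '
}

for d in /var/spool /tmp /var/tmp; do
  echo "$(count_files "$d") $d"
done | sort -rn
```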

git push/pull just current branch


  1. git config --global push.default tracking
    1. "tracking" is the older name for what newer git calls "upstream"
FYI, this setting is saved in ~/.gitconfig. There is no pull.default; git pull already follows the current branch's tracking (upstream) configuration.

Thursday 6 June 2013

Zenoss: Linux SSH commands


  1. On CLI
    1. su - zenoss
    2. zenpack --list
    3. wget http://community.zenoss.org/servlet/JiveServlet/download/3435-6-2917/ZenPacks.zenoss.LinuxMonitor-1.1.5-py2.6.egg.zip
      1. unzip
      2. zenpack --install ZenPacks.zenoss.LinuxMonitor-1.1.5-py2.6.egg
    4. wget http://community.zenoss.org/servlet/JiveServlet/download/3493-6-3219/ZenPacks.community.LinuxMonitorAddOn-1.0-py2.6.egg.zip
      1. unzip 
      2. zenpack --install ZenPacks.community.LinuxMonitorAddOn-1.0-py2.6.egg
    5. Restart zenoss so all stuff is picked up
      1. shouldn't be necessary, but Monitoring Templates were missing/erroring for me without
  2. Via web interface
    1. Drag-and-drop server from Device list into Interface -> Device classes -> Server -> SSH -> Linux
    2. Set that server's Configuration Properties
      1. zCommandUsername
      2. zCommandPassword
      3. This requires that you have at least one user that can SSH in via a password

Tuesday 4 June 2013

dead simple irc gui client

apt-get install lostirc

Sunday 2 June 2013

Fetch Cassandra keyspaces and column families from nodetool command via Ruby

#!/usr/bin/ruby

require 'logger'

log = Logger.new('/var/log/cassandra/repair.log', 'daily')
log.level = Logger::INFO
log.datetime_format = "%Y-%m-%d %H:%M:%S"

keyspaces = {}

result = %x[nodetool cfstats | egrep 'Keyspace:|Column Family:']
result = result.gsub(/\s/, '') # strip all whitespace, so "Column Family:" becomes "ColumnFamily:"
#log.debug(result.inspect)

result.split("Keyspace:").each do | keyspace |
  #log.debug(keyspace.inspect)
  keyname = keyspace.split("ColumnFamily:")[0]
  next if (keyname == nil)
  next if (keyname == 'OpsCenter' or keyname == 'system')
  #log.debug(keyname.inspect)
  cfs = keyspace.split("ColumnFamily:").drop(1)
  keyspaces[keyname] = cfs
end
#log.debug(keyspaces.inspect)

keyspaces.keys.each {|x|
  keyspaces[x].each do |y|
    log.info("Repair start: #{x} #{y}")
#    result = %x[nodetool getcompactionthreshold #{x} #{y}]
#    log.info(result)
    log.info("Repair end: #{x} #{y}")
  end
}

Wednesday 29 May 2013

Direct ssh to a server via proxy using putty/plink on Windows


  1. Make sure seamless ssh keys are set up to your bastion server for your username
    1. Not covered here
    2. See: http://www.ualberta.ca/CNS/RESEARCH/LinuxClusters/pka-putty.html
  2. Session -> Host Name -> mytargetserver.mydomain.com
  3. Connection -> Proxy
    1. Proxy Type -> Local
    2. Telnet command, or local proxy command 
      1. "c:/program files (x86)/putty/plink.exe" myproxy.mydomain.com -l myusername -agent -nc %host:%port
        1. adjust this path to plink.exe to match your local setup
          1. hint: install the complete putty install package, not just putty
  4. Tunnels
    1.   L8081 mytargetserver.mydomain.com:8081
Hint: always hit "Save", no matter what you do, or however inconvenient it was designed to be.

Another example: plink -L 127.0.0.1:1433:mysqlserver.com:1433 admin@google.com -i myprivkeyfile

Friday 24 May 2013

Show progress during dd copy

kill -USR1  <pid of dd>
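On newer GNU coreutils (8.24+), dd can also report progress on its own with status=progress, no signal needed; a small sketch with a throwaway file:

```shell
# status=progress prints byte counts and throughput to stderr while copying
dd if=/dev/zero of=/tmp/dd_demo bs=1M count=8 status=progress
wc -c < /tmp/dd_demo   # → 8388608
rm /tmp/dd_demo
```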

Thursday 23 May 2013

In-memory page states and kscand


  1. kscand task
    1. periodically sweeps through all the pages in memory
    2. notes "last access time"
      1. was accessed?
        1. increments page's age counter
      2. wasn't accessed?
        1. decrements page's age counter
      3. age counter at zero
        1. move page to inactive dirty state
In-memory page states
  1. free
    1. begin in this state
    2. not being used
    3. available for allocation, i.e. made active
  2. active
    1. allocated
    2. actively in use
  3. inactive dirty
    1. has fallen into disuse
    2. candidate for removal from main memory
  4. inactive laundered
    1. interim state
    2. contents are being moved to disk
      1. when disk I/O operation complete
        1. moved to the inactive clean state
      2. if, during the disk operation, the page is accessed
        1. moved back into the active state
  5. inactive clean
    1. laundering succeeded, i.e. contents in sync with copy on disk
    2. may be 
      1. deallocated
      2. overwritten
Taken: http://www.redhat.com/magazine/001nov04/features/vm/

Wednesday 22 May 2013

LVM crypt disks on Linux/AWS



  1. dd if=/dev/urandom of=/keys/xvdm.key bs=1024 count=4
  2. dd if=/dev/urandom of=/keys/xvdn.key bs=1024 count=4
  3. cryptsetup --verbose -y luksFormat /dev/xvdm /keys/xvdm.key
  4. cryptsetup --verbose -y luksFormat /dev/xvdn /keys/xvdn.key
  5. cryptsetup luksOpen /dev/xvdm cryptm --key-file /keys/xvdm.key
  6. cryptsetup luksOpen /dev/xvdn cryptn --key-file /keys/xvdn.key
  7. pvcreate /dev/mapper/cryptm /dev/mapper/cryptn
  8. Add entries to /etc/crypttab for reboots and test somehow
    1. cryptm /dev/xvdm /keys/xvdm.key luks
    2. cryptn /dev/xvdn /keys/xvdn.key luks
Complete LVM setup and add entries to /etc/fstab.

Hint: don't make one, single typo...ever.

Thursday 16 May 2013

Double looping with bash

Neat:
  1. for ITEM in $(find /cassandra/data -type d -name snapshots)
    1. do for DIR in $(find ${ITEM} -maxdepth 1 -mindepth 1 -type d -mtime -1)
      1. do echo $ITEM $DIR
    2. done
  2. done
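The same nesting as a copy-pasteable script, pointed at a scratch tree (the cassandra paths are swapped for a temp dir and the -mtime filter is dropped, so it is safe to try):

```shell
# build a fake layout: two "snapshots" dirs, each holding one subdirectory
base=$(mktemp -d)
mkdir -p "$base/ks1/snapshots/snap_a" "$base/ks2/snapshots/snap_b"

# outer loop finds every snapshots dir; inner loop lists its direct subdirs
for ITEM in $(find "$base" -type d -name snapshots); do
  for DIR in $(find "$ITEM" -maxdepth 1 -mindepth 1 -type d); do
    echo "$ITEM $DIR"
  done
done
# prints one line per snapshot subdirectory

rm -rf "$base"
```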

Tuesday 14 May 2013

Confluence: Lock wait timeout exceeded; try restarting transaction

WARNING! Atlassian themselves recommend STRONGLY against this procedure. If you do anything at all, do only the step that shows you which table is locking. DO NOT DELETE anything unless you are 100% confident you can reverse your deletions. DO NOT DELETE, DO NOT DELETE!

Seeing this?

2013-05-14 16:39:55,581 ERROR [QuartzScheduler_Worker-1] [sf.hibernate.util.JDBCExceptionReporter] logExceptions Lock wait timeout exceeded; try restarting transaction
2013-05-14 16:39:55,581 ERROR [QuartzScheduler_Worker-1] [sf.hibernate.impl.SessionImpl] execute Could not synchronize database state with session


The first is actually reported from MySQL itself, the second from Hibernate, which wraps databases for Java apps.


If you are desperate, try deleting all rows from mysql's crowd.cwd_membership table after backing it up, worked for me, syncs started working again in under 16ms.
  1. mysqldump crowd | bzip2 -c > /mnt/dump_crowd_`date +%Y%m%d`.sql.bz2
  2. mysql crowd -e 'delete from cwd_membership'
If that doesn't help, try deleting old users from any confluence groups that are still in your LDAP dir, be brutal. While you're at it, delete old users from LDAP, period.

To spot the problem table (in case some other table is the culprit), this might help:
  1. watch "mysql -e 'show processlist'"
  2. Then, run LDAP sync update via admin web GUI, and watch to see which table is locking
Other things you might be seeing in your logs if you have this issue:

"batch failed falling back to individual processing java.lang.RuntimeException: could not flush session"
"Error occurred while refreshing the cache for directory"
"synchroniseCache full synchronisation for directory [ XXXX ] starting"
"could not insert: [com.atlassian.crowd.embedded.hibernate2.HibernateMembership#YYYYY]"
"Lock wait timeout exceeded; try restarting transaction"
"Could not synchronize database state with session"
"could not flush session"


Monday 13 May 2013

Put stuff on your Nexus 4


  1. apt-get install gmtp
  2. Make sure your "Storage" is in MTP mode
P.S. Or, if you have access to a Mac: "Android File Transfer"

Saturday 11 May 2013

Tuesday 7 May 2013

EC2 server to VPC private instance via VPC NAT instance


  1. iptables -t nat -A PREROUTING -s 23.23.23.23/32 -d 10.0.0.254/32 -i eth0 -p tcp -m tcp --sport 1024:65535 --dport 3306 -j DNAT --to-destination 10.0.12.10:3306
    1. 23.23.23.23 is your external server's public IP address
    2. 10.0.0.254 is your VPC NAT instance's IP address in the public subnet
    3. 10.0.12.10 is the VPC IP address of your server in a private subnet
    4. 3306 is the port your service is listening on

Monday 6 May 2013

ec2-create-image: attached EBS volumes are snapshot and mapped

"ec2-create-image does snapshot the attached EBS volumes and add a block device mapping for those snapshots in the created AMI"
Taken: https://forums.aws.amazon.com/message.jspa?messageID=211674

Nicer settings for cssh: terminal_font, terminal_size, terminal_args

.clusterssh/config

  1. terminal_font=5x8
  2. terminal_size=140x48
  3. terminal_args=-fg green
  4. auto_close=1

Sunday 5 May 2013

Slow SSH: one possible solution, set "UseDNS" to "no"


  1. In sshd_config on the target server, set "UseDNS" to "no", and restart sshd

Friday 3 May 2013

mysqldump between two servers over ssh


  1. set up ssh keys so server1 user can ssh to a server2
  2. set $HOME/.my.cnf so both users can get into respective mysql cli without passwords
    1. see below for sample file
  3. create the new, empty database on server2, receiving server
  4. from server1
    1. mysqldump mydatabase | ssh server2 mysql mydatabase

# $HOME/.my.cnf
[client]
password=myusersmysqlpassword

Openfire: use your 3rd-party, signed SSL cert

PLEASE LET ME KNOW IF YOU HAVE FIXES FOR THIS WITH LATEST VERSIONS
  1. default keytool password is "changeit"
    1. use it for all password prompts
    2. works 99%
    3. if it doesn't work, ask around, poke around
  2. Get keytool command in your PATH
  3. Use Openfire's web interface to "generate self-signed certificates"
    1. NOTE: "import a signed certificate and its private key"
      1. broken
        1. says certs were loaded in green, but shows no result in "Server Certificates" list
      2. whole reason for this post
  4. find existing keystores on your chat server
    1. nice updatedb
    2. locate keystore
    3. locate truststore
    4. here, we'll assume /opt/openfire/resources/security
  5. list the "domain" Openfire used for the "generate self-signed certificates" action above
    1. keytool -list -v -keystore /opt/openfire/resources/security/keystore | grep rsa
      1. e.g.: Alias name: my.domain.com_rsa
    2. remember this for a later step
  6. load your CAs root cert into the truststore
    1. first, see if it is there
      1. keytool -list -v -keystore /opt/openfire/resources/security/truststore | grep "Issuer:"
    2. if not, download it from your CA, and
      1. keytool -import -alias myCAsRootCertAlias -file myCAsRootCert.crt -keystore /opt/openfire/resources/security/truststore
      2. verify
  7. create a p12 with your key, cert and CA's cert
    1. openssl pkcs12 -export -in myCert.crt -inkey myKey.key -out myP12.p12 -name my.domain.com_rsa -CAfile myCAsCert.crt -caname root
  8. dump it to a new keystore
    1. keytool -importkeystore -deststorepass changeit -destkeypass changeit -destkeystore mykeystore -srckeystore myP12.p12 -srcstoretype PKCS12 -srcstorepass changeit -alias my.domain.com_rsa
  9. cp -v /opt/openfire/resources/security/keystore /opt/openfire/resources/security/keystore_2013xxyy
  10. cp -v mykeystore /opt/openfire/resources/security/keystore
  11. restart openfire

Sunday 28 April 2013

Ubuntu: convert desktop to server fast

Below as root:
  1. apt-get remove ubuntu-desktop
  2. apt-get install linux-server linux-image-server
  3. apt-get purge lightdm
  4. /etc/default/grub, change matching lines to below
    1. #GRUB_HIDDEN_TIMEOUT [comment it out]
    2. GRUB_CMDLINE_LINUX_DEFAULT=""
    3. GRUB_TERMINAL=console
  5. update-grub
  6. reboot

Thursday 25 April 2013

tcpdump HTTP headers


  1. tcpdump -vvvs 1024 -l -A port 80 | egrep '^[A-Z][a-zA-Z\-]+:|GET|POST'
    1. Match your port, here it is 80, could be 8080 or 443, e.g.

Edit remote files with local editor using ssh and sshfs


  1. apt-get -y install sshfs
  2. Add your local user to the fuse group
  3. mkdir ~/mylocaldir
  4. sshfs -o idmap=user mylocaluser@myremoteserver.com:/remotepath ~/mylocaldir
  5. Edit files under ~/mylocaldir, and as you save them, they are automatically updated in /remotepath
Note: the "-o uid=500" option can be used if you get permission errors, but replace "500" with your local user id number

Errors
  1. "Couldn't read packet: Connection reset by peer"
    1. change this line in your /etc/ssh/sshd_config file to match what's here
      1. Subsystem sftp internal-sftp
    2. happens on RedHat Enterprise 6.1 for sure

Quick CLI screenshots on Linux or Openbox / Fluxbox


  1. sudo apt-get -y install imagemagick eog
  2. import myscreenshot.jpg
    1. select portion of screen with the crosshairs
  3. eog myscreenshot.jpg

Meetings


  1. Who is participating and do I know what each of them wants to get out of this meeting? 
  2. What are my goals and what's the minimum that I want to achieve? 
  3. Can I give in on certain points?
  4. Are there issues I won't budge on?
  5. What are next steps after the meeting?
  6. Who will ultimately decide whether I get what I want or not?
  7. Are there things I don't want to lay out on the table and not discuss in this meeting?
  8. Who should do most of the talking?

Wednesday 24 April 2013

keytool: put your SSL key into a new keystore


  1. openssl pkcs12 -export -in mycert.crt -inkey mykey.key -out myp12blob.p12 -name mykeystorealias -CAfile mycascert.crt
    1. Set the password to "changeit"
  2. keytool -importkeystore -deststorepass changeit -destkeypass changeit -destkeystore mykeystore -srckeystore myp12blob.p12 -srcstoretype PKCS12 -srcstorepass changeit -alias myalias
  3. keytool -list -v -keystore mykeystore

One-liner, CLI web server on port 8000


  1. python -m SimpleHTTPServer
    1. that's Python 2; on Python 3, use: python3 -m http.server

Friday 12 April 2013

Cassandra in 30 seconds


  1. writes 
    1. writes entries directly to disk without checking if they already exist
    2. does fancy indexing of entries
    3. returns a write "OK" to the writing client after a quorum of nodes have confirmed
  2. reads
    1. tries to return the newest entry when client does a read
    2. has methods to eventually get the newest entry to return even if old ones still around
  3. replication
    1. stores entries to multiple nodes if replication is turned on
  4. deletes
    1. doesn't officially delete, just marks dead entries with a "tombstone"
    2. compaction is what gets rid of old versions of entries and dead entries
  5. balancing
    1. automatically fills in data holes if a node disappears
    2. automatically spreads data if new nodes are added
  6. resurrection
    1. 3-nodes: X, Y, Z, all replicate all data
    2. server X goes down
    3. delete goes to Y and Z for key A
    4. Y and Z are "compacted"
      1. i.e., redundant keys & tombstones cleaned up / removed
      2. key A is completely gone as far as  Y and Z know
    5. X comes up and has value for key A
    6. A is back! resurrected from the dead! life sucks.
    7. NOTE: if Y and Z didn't have tombstones removed, they would have had a date that was more recent than X's key A entry, so they would have invalidated X's key A. But, they are gone after a compaction or cleanup.

Move huge directory on the root partition to a huge non-root partition


Assumption: /mnt is a huge disk partition separate from the / partition  (aka root partition)
  1. mkdir -p /mnt/home/myfatdirectory
  2. kill all processes that have open files to /home/myfatdirectory
    1. lsof /home/myfatdirectory
    2. make sure you get ZERO results, i.e. no processes have open files in this directory
  3. mv /home/myfatdirectory /home/myfatdirectory_old 
  4. mkdir -p /home/myfatdirectory
  5. mount --bind /mnt/home/myfatdirectory /home/myfatdirectory
  6. add to bottom of /etc/fstab, so the mount is picked up on reboot
    1. /mnt/home/myfatdirectory /home/myfatdirectory none bind 0 0
NOTES:
  1. fix perms as necessary by interleaving your own steps into the above
  2. for the paranoid: you might want to make sure fstab entries work fine on reboot


Monday 8 April 2013

Recover accidentally deleted file as long as some process still has it open, on Linux


  1. lsof | grep myfile
    1. the second column is the process id
    2. the number in the fourth column is the file descriptor
  2. cp /proc/<process id>/fd/<file descriptor> myfile.saved
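A Linux-only demo you can run end to end: delete a file while a file descriptor is still open on it, then copy it back out of /proc (all paths here are throwaway):

```shell
f=$(mktemp)
echo "precious data" > "$f"
exec 3< "$f"                  # hold fd 3 open on the file
rm "$f"                       # directory entry gone, data still on disk
cp "/proc/$$/fd/3" /tmp/recovered_demo
cat /tmp/recovered_demo       # → precious data
exec 3<&-                     # close the fd
rm /tmp/recovered_demo
```

In real recoveries the process holding the file is some daemon, so you use its pid from lsof instead of $$.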

Wednesday 3 April 2013

Build unbound from source on redhat/centos

NOTE: unbound is now available via epel repo on Amazon Linux
    1. install requirements
      1. yum groupinstall "Development Tools"
      2. yum install openssl-devel
      3. yum install expat-devel
    2. build
      1. ldns
        1. wget http://www.nlnetlabs.nl/downloads/ldns/ldns-1.6.16.tar.gz
        2. tar zxvf ldns-1.6.16.tar.gz
        3. cd ldns-1.6.16/
        4. ./configure --disable-gost --disable-ecdsa
        5. make
        6. make install
      2. unbound
        1. wget http://unbound.net/downloads/unbound-latest.tar.gz
        2. tar zxvf unbound-latest.tar.gz
        3. cd unbound-1.4.20/
        4. ./configure --disable-gost --disable-ecdsa
        5. make
        6. make install
    3. add libs to system lib path
      1. vi /etc/ld.so.conf.d/ldnsandunbound.conf
        1. add this one line
          1. /usr/local/lib
      2. sudo ldconfig
    4. add unbound user
      1. adduser --system unbound
    5. tweak config
      1. vi /usr/local/etc/unbound/unbound.conf
        1. see simple sample below
    6. run
      1. unbound
    7. check
      1. lsof -nP -i :53
    8. stop
      1. pkill unbound
    9. restart
      1. unbound
    server:
            verbosity: 1
            interface: 0.0.0.0
            access-control: 10.0.0.0/16 allow
    forward-zone:
           name: "my-vpc.internal"
           forward-addr: 252.252.199.199
           forward-first: no

    Taken: https://calomel.org/unbound_dns.html

    Tuesday 2 April 2013

    Set up private, internal DNS for your VPC using Route 53 and unbound

    CRITICAL: AWS now offers internal VPC DNS! Below is no longer necessary AFAIK. Woo hoo!

    http://aws.amazon.com/about-aws/whats-new/2014/11/05/amazon-route-53-now-supports-private-dns-with-amazon-vpc/

    BELOW IS DEPRECATED!
    1. create a Hosted Zone, something like "mydomain.internal"
    2. get the IP addresses of the name servers assigned to your new zone
      1. STRIP OFF '.' at the end of the name servers or BOOM!
    3.  create a new DHCP Options Set
      1. add the IP addresses you gathered above to the domain-name-servers field
    4. Change DHCP Options Set of your VPC by right-clicking it
    5. run sudo dhclient on any already-running instance in the VPC to pick up changes
    6. debug changes have taken place on an instance: cat /etc/resolv.conf

    RECOMMEND ALTERNATE SOLUTION: here's a sample unbound.conf I ended up using for a DNS forwarding server within my VPC -- see comments below. I adjusted the "options set" to point at this DNS server instead, 10.0.0.254 in my case.

    NOTE: Btw, unbound is available under "epel" yum repo.

    server:
            verbosity: 1
            interface: 0.0.0.0
            access-control: 10.0.0.0/16 allow
    forward-zone:
           name: "mydomain.internal"
           forward-host: ns-123.awsdns-12.com
           forward-host: ns-234.awsdns-34.biz
           forward-host: ns-567.awsdns-56.net
           forward-host: ns-890.awsdns-78.org
           forward-first: no 
     

    See also:

    unbound, custom records:  http://sysadminandnetworking.blogspot.com/2014/05/unbound-custom-records.html
    unbound, default to google: http://sysadminandnetworking.blogspot.com/2014/05/unbound-default-to-googles-dns.html

    Thursday 28 March 2013

    Run one command on many Linux servers, install pssh, works on Mac


    1. sudo easy_install pip
    2. sudo pip install pssh
    3. Create a file with the list of servers you want to control, call it servers or something similar
    4. pssh -h servers "date"
    5. Put your ssh pub key up to all of them
      1. pssh -h servers -i "echo 'ssh-rsa AA...wh me@myfqdn' >> /home/user/.ssh/authorized_keys"
    Taken: http://kaspergrubbe.dk/2012/using-pssh-for-executing-parallel-ssh-commands/

    Note: csshX is very nice if you want to see all terminals at once as you type, more later

    Wednesday 27 March 2013

    github and multiple accounts, git keeps asking for password

    Taken: http://net.tutsplus.com/tutorials/tools-and-tips/how-to-work-with-github-and-multiple-accounts/
    1. ssh-keygen -t rsa -C "me@mycompany.com" -f ~/.ssh/id_rsa_mycompany
    2. ssh-add ~/.ssh/id_rsa_mycompany
    3. Add below to ~/.ssh/config
    4. git clone git@github-mycompany:mycompany/myrepo.git
    Host github-mycompany
      HostName github.com
      User git
      IdentityFile ~/.ssh/id_rsa_mycompany

    Monday 25 March 2013

    Generate gpg keys, upload to server, pull from server, from CLI


    1. gpg --gen-key
    2. gpg --list-keys
    3. gpg --keyserver pgp.mit.edu --send-keys '62E49F5A'
      1. that funky number is listed in the output of "list-keys", just look carefully
        1. your funky number will be unique
        2. should be 8 hex digits
    4. gpg --keyserver pgp.mit.edu --search-keys 'youremail@yahoo.com'
    5. gpg --keyserver pgp.mit.edu --search-keys 'yourgirl@yahoo.com'
    6. gpg --keyserver pgp.mit.edu --recv-keys 1F3B6ACA
      1. Get her key with the ID you saw in previous step
    7. Use keys to encrypt content
      1. Can be encrypted for multiple people in one go, and only those listed can open the result

    Friday 15 March 2013

    Searching with an LDAP filter


    1. Set the dn you wish to search through
      1. e.g., ou=Employees,dc=mycompaniesdomain,dc=com
    2. Set the filter
      1. e.g., (&(objectclass=inetorgperson)(uid=myfirstname.mylastname))
        1. inetorgperson is an LDAP standard "object", btw, there are a bunch of others
    Btw: one can also -- quick and dirty -- dump the whole LDAP db to an LDIF file, and do a text search on that.

    Simple Ruby email out localhost:25, no OpenSSL::SSL::SSLError, no tlsconnect error

    Notes:
    1. This skips the common OpenSSL::SSL::SSLError / tlscommon errors somehow, see below for error output.
    2. DON'T use pony's "smtp" hash option, it has the same problem. Notice it is missing here!
    Steps:
    1. gem install pony
    2. take below code 
      1. put in ~/bin/mail_test.rb
      2. tweak for your environment
      3. chmod +x ~/bin/mail_test.rb 

    https://github.com/pcharlesleddy/misc/blob/master/mail_test.rb

    #!/usr/bin/ruby

    require 'rubygems'
    require 'pony'

    mystring = "a\nb\nc"

    Pony.mail(:to => 'abc@efg.org', :from => 'me@example.com', :subject => 'Test mail script', :body => 'Hello there.', :attachments => {"mail_test.txt" => File.read("/home/me/bin/mail_test.rb"), "mystring.txt" => mystring})


    Common, irritating tlscommon error:

    /usr/lib/ruby/1.8/openssl/ssl-internal.rb:123:in `post_connection_check': hostname was not match with the server certificate (OpenSSL::SSL::SSLError)
    from /usr/lib/rvm/gems/ruby-1.9.3-p194/gems/mail-2.5.3/lib/mail/core_extensions/smtp.rb:17:in `tlsconnect'
    from /usr/lib/ruby/1.8/net/smtp.rb:562:in `do_start'
    #!/usr/bin/ruby
    from /usr/lib/ruby/1.8/net/smtp.rb:525:in `start'
    from /usr/lib/rvm/gems/ruby-1.9.3-p194/gems/mail-2.5.3/lib/mail/network/delivery_methods/smtp.rb:136:in `deliver!'
    from /usr/lib/rvm/gems/ruby-1.9.3-p194/gems/mail-2.5.3/lib/mail/message.rb:245:in `deliver!'
    from /usr/lib/rvm/gems/ruby-1.9.3-p194/gems/pony-1.4/lib/pony.rb:166:in `deliver'
    from /usr/lib/rvm/gems/ruby-1.9.3-p194/gems/pony-1.4/lib/pony.rb:138:in `mail'

    Generate IAM certs for users on AWS


    1. openssl genrsa 1024 > username-env-pk.pem
      1. pk stands for private key
    2. openssl req -new -x509 -nodes -sha1 -days 365 -key username-env-pk.pem -outform PEM > username-env-cert.pem
      1. valid for 365 days
    3. Paste username-env-cert.pem in to the AWS Signing Certificates area for that user
    4. Give user both username-env-pk.pem and username-env-cert.pem, and wish them luck
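The same two openssl steps end to end, made non-interactive with -subj (an addition here; the original req command prompts for the subject fields) and with throwaway filenames:

```shell
# pk = private key; the second command self-signs an X.509 cert from it
openssl genrsa 1024 > demo-pk.pem
openssl req -new -x509 -nodes -sha1 -days 365 -key demo-pk.pem \
    -subj "/CN=demo-user" -outform PEM > demo-cert.pem
openssl x509 -in demo-cert.pem -noout -subject   # subject line shows CN=demo-user
rm demo-pk.pem demo-cert.pem
```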

    Redirect all command output, stdout/stderr, to a file on Linux

    puppet agent --test --noop >/var/tmp/puppet_noop_20130315 2>&1

    Notes:
    1. The 2>&1 redirects stderr to wherever stdout points
      1. stdout points to the console by default unless you change that
      2. here stdout is redirected to a file under /var/tmp
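A quick demo of the ordering, with a throwaway path:

```shell
# > sends stdout to the file first, then 2>&1 points stderr at the same place
{ echo "to stdout"; echo "to stderr" >&2; } > /tmp/both_demo 2>&1
cat /tmp/both_demo
# → to stdout
# → to stderr
rm /tmp/both_demo
```

Note that the order matters: writing 2>&1 before the > would leave stderr on the console.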


    vagrant on aws


    1. vagrant plugin install vagrant-aws
    2. vagrant box add aws001 https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
    3. vagrant init
    4. Adapt below and put in the "Vagrantfile" file
    5. vagrant up --provider=aws
    6. vagrant ssh
    7. vagrant destroy
    Vagrant.configure("2") do |config|
      config.vm.box = "aws001"

      config.vm.provider :aws do |aws|
        aws.access_key_id = "<your_aws_key_id>"
        aws.secret_access_key = "<your_aws_secret>"
        aws.keypair_name = "<your_keypair_name>"
        aws.ssh_private_key_path = "/home/<you>/.ssh/<your_keypair_name>.pem"

        aws.region = "eu-west-1"
        aws.ami = "ami-01080b75"
        aws.ssh_username = "ubuntu"
      end
    end

    Thursday 14 March 2013

    2G swap file


    1. dd if=/dev/zero of=/swapfile bs=1M count=2048
    2. mkswap /swapfile
    3. swapon /swapfile
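The bs x count arithmetic, checked at a small, safe scale (2G = bs=1M x count=2048; below, a throwaway 4 MiB file):

```shell
# 4 blocks of 1 MiB = 4194304 bytes
dd if=/dev/zero of=/tmp/swap_demo bs=1M count=4 2>/dev/null
wc -c < /tmp/swap_demo   # → 4194304
rm /tmp/swap_demo
```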

    Wednesday 13 March 2013

    Get provisioned public key for AWS EC2 instance via curl

    curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key

    Tuesday 12 March 2013

    Specify ssh key when using rsync

    WARNING: don't use ~ and don't use double quotes.
    1. rsync -av -e 'ssh -i /home/me/.ssh/id_rsa_other' root@logging.gumby.com:/remotedir/ /localdir/
    Also, some alternative port:
    1. rsync -av -e 'ssh -p 2221' root@logging.gumby.com:/remotedir/ /localdir/

    Tuesday 5 March 2013

    Build rpm of monit 5.5

    1. Download https://github.com/pcharlesleddy/misc/blob/master/monit.spec
      1. change "_topdir" to match your local system.
    2. cd into what you set _topdir to
    3. mkdir -p {BUILD,RPMS,SOURCES,SPECS,SRPMS,tmp}
    4. Download monit-5.5.tar.gz file and put it in the SOURCES directory
    5. Put monit.spec in the SPECS directory
    6. rpmbuild -v -bb --clean SPECS/monit.spec
      1. yum -y install rpmdevtools
    7. Output should mention where the rpm ended up
    8. rpm -qlp on the rpm file to see what's in it
    Gory details: http://fedoraproject.org/wiki/How_to_create_an_RPM_package

    Notes
    1. "%setup -q" means be "quiet" when untarring, not that interesting, but people use it a lot

      Thursday 28 February 2013

      Exim: rewrite "From" field


      Amazon's AWS SES service requires email be addressed from a particular user.
      1. Add to begin rewrite section
        1. *  root@abc.com  Ffrs
      2. Reload exim

      Wednesday 27 February 2013

      Right-click with Mac trackpad

      Try selecting something and clicking anywhere on the trackpad with TWO fingers.

      AWS SES with exim4 on debian-based Linux



      1. apt-get install exim4
      2. dpkg-reconfigure exim4-config
        1. Select: internet site; mail is sent and received directly using SMTP
        2. IP-addresses to listen on for incoming SMTP connections:
          1. 127.0.0.1 ; ::1 (it's the default anyways)
        3. Take most defaults
        4. Split configuration into small files?
          1. NO!
      3. lsof -nP -i :25
        1. Make sure you aren't allowing the world to connect!
          1. 127.0.0.1:25 is good
      4. AWS -> SES
        1. Verified Senders
          1. Verify one of your existing email addresses
        2. SMTP Settings
          1. Create My SMTP Credentials
          2. Use downloaded credentials.csv file contents for below steps
      5. Edit /etc/exim4/exim4.conf.template
        1. Find ALREADY existing line "public_name = LOGIN"
          1. change to "public_name = OLD_LOGIN"
        2. Add below sections to existing sections in the file
        3. Use info from credentials.csv in place of pointy brackets, e.g. <aws_id> 
      6. service exim4 stop
      7. service exim4 start
      8. tail -F /var/log/exim4/mainlog
        1. Keep this running in another terminal while you do the below
      9. echo test001 | mail -r <email_you_verified_with_aws> -s "Test subject" <a_test_email_address>
        1. e.g. echo test001 | mail -r bob@myawsverified.com -s test bob@testaccount.com
      10. See this to set the "From:" field to your verified user for every email sent from your system
        1. http://sysadminandnetworking.blogspot.com/2013/02/exim-rewrite-from-field.html


      begin routers

      send_via_ses:
        driver = manualroute
        domains = ! +local_domains
        transport = ses_smtp
        route_list = * email-smtp.us-east-1.amazonaws.com


      begin transports

      ses_smtp:
        driver = smtp
        port = 25
        hosts_require_auth = $host_address
        hosts_require_tls = $host_address


      begin authenticators

      ses_login:
        driver = plaintext
        public_name = LOGIN
        client_send = : <aws_id> : <aws_secret>

      Saturday 23 February 2013

      JMX ports to open in firewall for jconsole to Cassandra


      1. Port 7199
        1. Used for about a dozen packets when JMX connection first made
          1. A handshake of sorts
          2. Probably sets up the agreement on which high port to connect to, used below
            1. Similar to SIP
            2. Similar to old FTP
        2. Not used again after initial handshake
      2. Port range 55000 to 55999
        1. To see these packets, on JVM server
          1. tcpdump -nn ! port 22 and host <jconsole client IP> (not literal, replace this)
      3. If jconsole starts showing graphs, you are connected
      To run jconsole directly on the server via VNC, see this article: http://sysadminandnetworking.blogspot.com/

      Tricks and Tips
      1. If you don't want to expose 1000 ports to the world for some reason
        1. Open all ports on firewall in front of JVM server
        2. On JVM server: tcpdump -nn ! port 22 and host <jconsole client IP>
        3. Start jconsole connection on client machine
        4. Watch to see which port JVM server is trying to reach jconsole client via
        5. Close all but that port in the firewall, will be between 55000-55999
      2. Do a local experiment to a local JVM JMX-able application if unsure of good jconsole connection result
      3. Get your external IP from where you are running jconsole client
        1. CLI: curl http://ipaddr.me
        2. Or web browser: http://ipaddr.me

      What's an MBean or JavaBean?

      A fancy name for a Java class that:
      1. is serializable
        1. means you can write/read the contents directly to disk as is
      2. has a 0-argument constructor
      3. has getter and setter methods
        1. Have you ever tweaked an MBean value via jconsole before, btw?
      Understand, there was major hype for Java back in circa 2000 that didn't quite pan out as expected.

      Friday 22 February 2013

      Increase bash history size on Mac


      1. Add below lines to end of ~/.bash_profile
      2. Source result or log out and back in
        1. source ~/.bash_profile
      3. Test
        1. echo $HISTFILESIZE
        2. echo $HISTSIZE
      export HISTFILESIZE=2500

      export HISTSIZE=""

      Thursday 14 February 2013

      rsync: include only these


      1. rsync -avP --include='*/' [set of includes] --exclude='*'
        1. for example
          1. rsync -n -avP --include='*/' --include='*.dat' --include='*.idx' --exclude='*' /informatica/ /backup/informatica/
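A throwaway-tree demo of why the order matters: '*/' lets rsync descend into subdirectories, the named patterns admit files, and the final exclude drops everything else (rsync must be installed; paths are made up):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/sub"
: > "$src/a.dat"; : > "$src/sub/b.idx"; : > "$src/skip.txt"
# copies a.dat and sub/b.idx, leaves skip.txt behind
rsync -a --include='*/' --include='*.dat' --include='*.idx' --exclude='*' "$src/" "$dst/"
find "$dst" -type f
rm -rf "$src" "$dst"
```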


      Wednesday 6 February 2013

      Linux equivalent of updatedb on Mac

      sudo /usr/libexec/locate.updatedb

      Disable CAPS LOCK key on MacBook Pro


      1. System Preferences -> Keyboard -> Modifier Keys
      2. MAKE SURE YOU PICK THE RIGHT KEYBOARD
        1. "Select Keyboard"
      3. Caps Lock Key -> No Action
      Might as well do all the other keyboards while you're at it, eh?

      Sunday 3 February 2013

      Fix jEdit fonts for MacBook Pro with Retina display


      From: https://bugs.eclipse.org/bugs/show_bug.cgi?id=382972#c4
      Assumptions: jEdit was installed into /Applications directory.


      1. Close any running jEdit
      2. Edit /Applications/jEdit.app/Contents/Info.plist
        1. Near the end of the file, insert the two <key>/<true/> lines shown below just above the closing </dict> and </plist> lines, and save
      3. Drag jEdit to desktop
      4. Start jEdit and see if fonts fixed
      5. If fixed, drag jEdit back to /Applications, and retest.

      <key>NSHighResolutionCapable</key>
      <true/>
      </dict>
      </plist>

      Friday 1 February 2013

      Open snmp requests to world


      1. Open snmp requests to world
        1. snmpd.conf
          1. rocommunity public 0.0.0.0/0
          2. #agentAddress  udp:127.0.0.1:161
            1. comment this line out if it exists so snmpd listens to the world
      2. Make sure snmpd is listening on all interfaces
        1. lsof -nP -i
          1. snmpd  ......stuff in here........  UDP *:161
      3. Test from another server
        1. snmpwalk -cpublic -v1 <IP address serving snmp request>

      Thursday 31 January 2013

      Vagrant: how to set vm memory and force gui mode


      # -*- mode: ruby -*-
      # vi: set ft=ruby :

      Vagrant::Config.run do |config|

        config.vm.define :zenoss do |zenoss_config|
            zenoss_config.vm.box  = "quantal64"
            zenoss_config.vm.network :hostonly, "10.66.66.10"
            zenoss_config.vm.forward_port 80, 8885
            zenoss_config.vm.customize ["modifyvm", :id, "--memory", 1024]
        end

        config.vm.define :desktop do |desktop_config|
            desktop_config.vm.box  = "quantal64"
            desktop_config.vm.network :hostonly, "10.66.66.20"
            desktop_config.vm.boot_mode = :gui
        end

      end

      Friday 25 January 2013

      How to convert encryption keys: RSA to PEM



      1. RSA to PEM
        1. ssh-keygen -t rsa
        2. openssl rsa -in ~/.ssh/id_rsa -outform pem > id_rsa.pem
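A self-contained dry run of the conversion in a throwaway directory. Note: newer OpenSSH writes private keys in its own format by default, so -m PEM is added here (an assumption beyond the original two commands) to keep the key readable by openssl:

```shell
tmp=$(mktemp -d)

# Generate a passphrase-less RSA keypair, non-interactively, in PEM format
ssh-keygen -q -t rsa -b 2048 -m PEM -N '' -f "$tmp/id_rsa"

# Convert/normalize the private key to PEM
openssl rsa -in "$tmp/id_rsa" -outform pem > "$tmp/id_rsa.pem" 2>/dev/null

# The result should be a PEM-encoded private key
grep 'PRIVATE KEY' "$tmp/id_rsa.pem"
```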

      Thursday 17 January 2013

      Asterisk pre-reqs for compiling on Debian/Ubuntu

      apt-get -y install make libncurses-dev libxml2-dev sqlite3 libsqlite3-dev libiksemel-dev libssl-dev  subversion

      Thursday 10 January 2013

      Redirection on CLI: greater-thans, ampersands and numbers

      1. myCLIapp > /dev/null 2>&1
        1. Order is important, don't reverse the redirects
          1. See below
        2. First redirect sends STDOUT to kernel's blackhole equivalent
          1. result:  STDOUT is forgotten, never shown
          2. STDOUT is implied when no number before '>'
        3. Second redirect sends STDERR to where STDOUT points
          1. STDERR goes where STDOUT goes
          2. So STDERR ends up in blackhole too
          3. The ampersand is necessary
            1. to specify this is a "file handle" 
            2. and not a filename
      2. Reverse mistake
        1. myCLIapp 2>&1 > /dev/null 
          1. Read left to right
          2. 2>&1
            1. First sends STDERR to where STDOUT currently points
              1. Which is still the terminal at that point
          3. > /dev/null
            1. STDOUT is implied since no number before '>'
            2. Send STDOUT to /dev/null
          4. Result is
            1. STDERR being displayed
            2. STDOUT being sent to blackhole
      3. Remember
        1. if no number is given before '>', then STDOUT is implied
      http://www.tldp.org/LDP/abs/html/io-redirection.html
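The ordering rules above can be demonstrated with a tiny function that writes to both streams (emit is a made-up name for illustration):

```shell
# emit writes one line to STDOUT and one to STDERR
emit() { echo "to-stdout"; echo "to-stderr" >&2; }

# Correct order: STDOUT goes to the blackhole first, then STDERR follows it
emit > /dev/null 2>&1            # prints nothing

# Reversed order: STDERR is pointed at the terminal-bound STDOUT
# before STDOUT itself is redirected, so only STDOUT is silenced
emit 2>&1 > /dev/null            # prints "to-stderr"
```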

      Tuesday 8 January 2013

      Quick start howto for dirvish on Debian

      1. Prep
        1. Here, there is one backup server and two client servers that need to be backed up
        2. Make sure root user on backup server can ssh to mybox01.mydomain.net and mybox02.mydomain.net without password
          1. Later, much later, see online for better, more secure ways
        3. install dirvish and rsync on server, rsync on clients
          1. apt-get install dirvish
          2. apt-get install rsync
      2. Clients
        1. The below are just example directories, use your own, the ones you want backed up
        2. mkdir -p /data/backups/
        3. mkdir -p /data/backups/etc
        4. mkdir -p /data/backups/opt
        5. rsync -av /etc/ /data/backups/etc/
        6. rsync -av /opt/ /data/backups/opt/
        7. Make a cron job to do the rsyncs above nightly
      3. Server
        1. mkdir -p /backup/dirvish/mybox01/dirvish
        2. mkdir -p /backup/dirvish/mybox02/dirvish
        3. vi /backup/dirvish/mybox01/dirvish/default.conf
          1. get contents below
        4. vi /backup/dirvish/mybox02/dirvish/default.conf
          1. get contents below
        5. dirvish --vault mybox01 --init
        6. dirvish --vault mybox02 --init
      4. Verify
        1. Backed up files should now be under /backup/dirvish on the backup server
          1. tree /backup/ -d -L 3
          2. find /backup/dirvish -ls | less
      5. Tell dirvish to do nightly pull
        1. vi  /etc/dirvish/master.conf.mybackup
          1. get contents below
      6. Tomorrow, verify the pull worked, and next week too
      7. Exclusion and expire options
        1. Research options one can add to master.conf.mybackup
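For step 2.7 on each client, a nightly cron entry might look like the following (the file path and run time are assumptions, not from the original):

```shell
# /etc/cron.d/backup-stage -- hypothetical nightly staging rsyncs at 01:30
30 1 * * * root rsync -a /etc/ /data/backups/etc/ && rsync -a /opt/ /data/backups/opt/
```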
      Server files

      /etc/dirvish/master.conf

      bank:
          /backup/dirvish

      Exclude:
          lost+found/


      /etc/dirvish/master.conf.mybackup

      Runall:
          mybox01
          mybox02

      /backup/dirvish/mybox01/dirvish/default.conf

      client: mybox01.mydomain.net
      tree: /data/backups

      xdev: true
      index: gzip


      /backup/dirvish/mybox02/dirvish/default.conf

      client: mybox02.mydomain.net
      tree: /data/backups

      xdev: true
      index: gzip



      Friday 4 January 2013

      Install lex on Debian

      sudo apt-get install byacc flex

      Thursday 3 January 2013

      Quick start how-to graphite base install on Debian

      UPDATE: Also see Latest Graphite on Amazon Linux at AWS.

      This only gets graphite working on your local Linux box. Left to user to translate to remote server installation thereafter.

      NOTE: Do everything as root user

      1. make sure apache2 is installed and working with wsgi
        1. apt-get install apache2 -y
        2. apt-get install libapache2-mod-wsgi -y
        3. a2enmod wsgi
        4. wsgi needs a place for its sockets
          1. mkdir /etc/apache2/run
          2. mkdir /var/run/wsgi
          3. chmod 777 /etc/apache2/run /var/run/wsgi
          4. this seems undocumented, thanks!
      2. Install dependencies
        1. apt-get install -y libapache2-mod-wsgi python-twisted python-memcache python-pysqlite2 python-simplejson
        2. apt-get install -y python2.6 python-pip python-cairo python-django python-django-tagging
      3. Install graphite elements
        1. mkdir -p /root/graphite-install
        2. cd /root/graphite-install
        3. git clone https://github.com/graphite-project/graphite-web.git
        4. git clone https://github.com/graphite-project/carbon.git
        5. git clone https://github.com/graphite-project/whisper.git
        6. git clone https://github.com/graphite-project/ceres
        7. cd /root/graphite-install/whisper
        8. git checkout 0.9.x
        9. python setup.py install
        10. cd /root/graphite-install/ceres
        11. python setup.py install
        12. cd /root/graphite-install/carbon
        13. git checkout 0.9.x
        14. python setup.py install
        15. cd /root/graphite-install/graphite-web
        16. git checkout 0.9.x
        17. python check-dependencies.py
          1. fix any missing dependencies
          2. ignore warnings if certain you don't need a feature
        18. python setup.py install
      4. Setup configuration files and permissions
        1. cp -v /opt/graphite/conf/carbon.conf.example /opt/graphite/conf/carbon.conf
        2. cp -v /opt/graphite/conf/storage-schemas.conf.example /opt/graphite/conf/storage-schemas.conf
        3. Create Django database
          1. cp -v /opt/graphite/webapp/graphite/local_settings.py.example /opt/graphite/webapp/graphite/local_settings.py
            1. Edit this file by uncommenting the entire DATABASES section
              1. Otherwise the next commands may fail with a bizarre error
          2. cd /opt/graphite/webapp/graphite
          3. python manage.py syncdb
            1. Make sure NO errors
            2. Create a user/pass you'll never forget
        4. chown -Rv www-data:www-data /opt/graphite/storage/
          1. EXTREMELY IMPORTANT
      5. Apache
        1. Add below Apache configuration as a virtual host (details not covered here)
          1. Or do this and struggle to make it work
            1. cp -v /opt/graphite/examples/example-graphite-vhost.conf /etc/apache2/sites-available/graphite
        2. Every "directory" and "location" block in the virtual host file needs these entries, or you'll get permission denied errors
          1. apache versions before 2.4
            1. Order deny,allow
            2. Allow from all
          2. apache versions 2.4 and after
            1. Require all granted
        3. a2ensite graphite
        4. cp -v /opt/graphite/conf/graphite.wsgi.example /opt/graphite/conf/graphite.wsgi
        5. apache2ctl -S
          1. Check all is OK
        6. service apache2 stop
        7. service apache2 start
        8. If you have trouble check permissions under this directory and/or run this again
          1. chown -Rv www-data:www-data /opt/graphite/storage/
          2. Some files may get created by root that should be owned by the www-data user
      6. Start carbon daemon
        1. cd /opt/graphite
        2. /opt/graphite/bin/carbon-cache.py start
          1. verify with:
            1. lsof -nP -i :2003
          2. TASK: automate this to start on reboot somehow
      7. Add to /etc/hosts
        1. 127.0.1.3 graphite
      8. tail -F /opt/graphite/storage/log/webapp/error.log
      9. python /opt/graphite/examples/example-client.py
      10. Hit http://graphite
        1. Should show initial graphite interface
      11. View results
        1. Click down into "system" -> loadavg_5min
        2. Find "Select Recent Data" icon in upper-left toolbar
          1. Set to 10 mins
        3. You should see lines appearing as the script runs and feeds data to Graphite via Carbon
      12. Check out giraffe once you get some stats to shove in it
        1. https://github.com/kenhub/giraffe
        2. Just dump the files you get from the git clone into a directory under your default Apache install and twiddle with dashboard.js until you see something
      13. Try this a few times on the CLI of your graphite server every few minutes
        1. echo "system.logs.changed_last10 `find /var/log -mmin -10 | wc -l` `date +%s`" | nc -w 1 localhost 2003
        2. Try similar commands to pass in other stats
          1. The echo must output a single numeric value
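The plaintext protocol used above is just "path value epoch\n"; a small helper makes it harder to garble (metric_line is a made-up name for illustration):

```shell
# metric_line NAME VALUE -> "NAME VALUE <epoch-seconds>"
metric_line() {
  printf '%s %s %s\n' "$1" "$2" "$(date +%s)"
}

# Same stat as above, via the helper (pipe to nc as before):
# metric_line system.logs.changed_last10 "$(find /var/log -mmin -10 | wc -l)" | nc -w 1 localhost 2003
```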

      WSGISocketPrefix /var/run/wsgi
      
      <VirtualHost *:80>
          ServerName graphite
          DocumentRoot "/opt/graphite/webapp"
          ErrorLog /opt/graphite/storage/log/webapp/error.log
          CustomLog /opt/graphite/storage/log/webapp/access.log common
          
          WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
          WSGIProcessGroup graphite
          WSGIApplicationGroup %{GLOBAL}
          WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL}
          WSGIScriptAlias / /opt/graphite/conf/graphite.wsgi 
      
          Alias /content/ /opt/graphite/webapp/content/
          <Location "/content/">
              SetHandler None
          </Location>
          
          Alias /media/ "@DJANGO_ROOT@/contrib/admin/media/"
          <Location "/media/">
              SetHandler None
          </Location>
          
          <Directory /opt/graphite/conf/>
              Order deny,allow
              Allow from all
          </Directory>
      </VirtualHost>
      
