Showing posts from 2013

cassandra sstable version in filename

Look in the filename of the sstable, you'll see some letters between hyphens/dashes, ie "-"; these are the version of your sstable, see below for possible values.


Btw, cassandra "support" loves to send you to look at code, so download the code and get familiar with the basic structure (somehow).

public static final Version LEGACY = new Version("a"); // "pre-history" // b (0.7.0): added version to sstable filenames // c (0.7.0): bloom filter component computes hashes over raw key bytes instead of strings // d (0.7.0): row size in data component becomes a long instead of int // e (0.7.0): stores undecorated keys in data and index components // f (0.7.0): switched bloom filter implementations in data component // g (0.8): tracks flushed-at context in metadata component …

Mac spotlight equivalent for Openbox: dmenu_run

apt-get install suckless-tools dmenu_run

aws ec2 cli filter by tag name and value

aws ec2 describe-instances --filter Name=tag:Name,Values=ADS-prod-ads The horrible syntax of "and" filters:
 aws ec2 describe-instances --filter '{"Name":"tag:backup","Values":["yes"]}' '{"Name":"instance-state-name","Values":["running"]}' non-breakout view below aws ec2 describe-instances --filter '{"Name":"tag:backup","Values":["yes"]}' '{"Name":"instance-state-name","Values":["running"]}'

Completely wipe Chrome data on Mac

find ~/Library/ -type d -name 'Chrome' -exec rm -rfv {} \;

Chromecast tricks

is casting a tabdon't do full screeninstead, shrink browser window to size of original videowhat is put on the screen is relative to the window sizeseems to save on bandwidthpassed through your wireless routerless broken stream therefore

Building latest collectd on Amazon Linux server in AWS

yum install byacc flex automake libtool libgcrypt-devel glib2-devel libtool-ltdl-devel perl-ExtUtils-MakeMakercomments below imply this may be necessary: apt-get install bison git clone collectd./   Some helpful libs, start over to use them
yum install lvm2-devel net-snmp-devel liboping-devel libpcap-devel libesmtp-devel libcurl-devel libmnl-devel Dated: Nov, 2013

Openbox: monitor laptop battery via CLI and notify-send

while (true);do acpi -b | perl -n -e 's/.*?(\d+)%.*/$1/;chomp;print "$_...";if ($_ <= 15) {`notify-send batalert:$_`};';sleep 180;done
Would like a beep, but can't get one, no PC speaker on MacBook Air and mplayer buffers.

Change X / Openbox screen brightness on CLI

sudo apt-get -y install xbacklightxbacklight +10xbacklight +10xbacklight +10xbacklight -10xbacklight -10xbacklight -10

Recover wireless of Ubuntu on MacBook Air 4,2

sudo apt-get --reinstall install bcmwl-kernel-sourcesudo modprobe -r b43 ssb wl brcmfmac brcmsmac bcmasudo modprobe wl Broadcom 802.11 Linux STA wireless driver source

wicd: Could not connect to wicd's D-bus interface

I don't know why, she swallowed the fly.


Wicd needs to access your computer's network cardsCould not connect to wicd's D-Bus interface

sudo mv -v /etc/resolv.conf /etc/resolv.conf.backupsudo ln -s /run/resolvconf/resolv.conf /etc/resolv.confsudo rm -v /var/lib/wicd/resolv.conf.origsudo service wicd startyum -y install wicd-gtkwicd-gtk Taken:

Zenoss on Amazon Linux

Someone saved my life tonight, Sugarbear:

Also taken:

remove non-critical, unmet dependencies in rpmget good version of rpmrebuildwget localinstall rpmrebuild-2.9-7.4.1.noarch.rpmwget -e -n -p zenoss_core-4.2.4-1897.el6.x86_64.rpmremove linesRequires:      libgcj%dir %attr(0755, root, root) "/etc/sudoers.d"# ensure that the system uses the /etc/sudoers.d directorySUDOERSD_TOKEN="#includedir /etc/sudoers.d"SUDOERSD_FOUND=`/bin/egrep "^$SUDOERSD_TOKEN" /etc/sudoers`if [ -z "$SUDOERSD_FOUND" ]; then   echo "# zenoss rpm, ensure that /etc/sudoers.d loads"…

Listen to internet radio without Flash on Mac and Linux CLI

NOTE: nice classical music option!
OS typeMacbrew install mplayerSee this if you hit the bug: (Debian-based)apt-get install mplayerplay on CLImplayer <stream>e.g. mplayer  Or/else: paste stream link into a browser directlyuse VLC to play stream

Read ext4 on Mac

install xcodeinstall brewbrew install ext4fusefollow any instructions on changing permissionsext4fuse <device> <mountpoint>

Latest aws cli tools on Redhat

wget awscli-bundle.zipsudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws  Result is in /usr/local/bin, so set up your PATH accordingly.

aws cli run-instances block-device-mappings ephemeral encrypted

aws --version => aws-cli/1.1.1 Python/2.6.8 Linux/3.4.43-43.43.amzn1.x86_64
aws ec2 run-instances --image-id ami-eeff1122 --instance-type m2.2xlarge --security-group-ids sg-eeff1122 --subnet-id subnet-eeff1122 --private-ip-address file://meta_myserver.txt --block-device-mappings '[{ "DeviceName":"/dev/sdb", "VirtualName":"ephemeral0" }]' For 50G EBS attached on boot (auto-deleted on terminate unless you override), block device mapping becomes:
 '[{ "DeviceName":"/dev/sdb", "VirtualName":"ephemeral0" },{"DeviceName":"/dev/sdc","Ebs":{"VolumeSize":50}}]' WARNING: "Ebs" is very case sensitive here.

To encrypt the Ebs volume, add "Encrypted": true to the device params like so:
 {"DeviceName":"/dev/sdc","Ebs":{"VolumeSize":50,"Encrypted": true}}

Use rvm in cron

rvm listfind what looks like your gems set, e.gruby-1.9.3-p194 echo $rvm_path/bin, e.g./usr/lib/rvm/bin tack on the output of #1 to the output of #2, e.g./usr/lib/rvm/bin/ruby-1.9.3-p194your cron entry should be the result of #3 followed by your ruby script, e.g.0 0 * * * /usr/lib/rvm/bin/ruby-1.9.3-p194 /optt/mydir/myscript.rb

Test exim from CLI without "mail" command

If you don't have "mail" on the CLI for whatever, weird (Redhat-based) reasons, circumvent like so:
/path/to/exim -v 'user@domain'type a multi-line message here ending with blank linehit ^D to end message and sendyou should be returned to shell Taken:

Edit files on a remote server via your Mac using ssh, sshfs and brew

install latest xcodeinstall brewinstall sshfs using brewmake sure to change any permissions specified mkdir mytmpdirsshfs -o uid=<your local numerical id> root@<remote server>:<remote dir> mytmpdire.g. sshfs -o uid=501 root@ mytmpdiredit files that appear in tmpdir, and when you save them, the remote files will be updated Unmount umount mytmpdir

Simple unbound upstart script

put below in /var/tmp/unbound.confpkill unboundlsof -nP -i :53pgrep unboundcp -v /var/tmp/unbound.conf /etc/init/start unboundstatus unboundstatus unboundstart unboundstart on runlevel [3] expect fork exec unbound

Sanity of growing a stripped LVM volume


However, with LVM you can easily grow a logical volume. But, you cannot use stripe mapping to add a drive to an existing striped logical volume because you can’t interleave the existing stripes with the new stripes. This link explains it fairly concisely.

    “In LVM 2, striped LVs can be extended by concatenating another set of devices onto the end of the first set. So you can get into a situation where your LV is a 2 stripe set concatenated with a linear set concatenated with a 4 stripe set.”

Taken: Pick Your Pleasure: RAID-0 mdadm Striping or LVM Striping?

Create isolated bucket on S3

setupcreate IAM groupadd simple, custom policy below do not add any other policies to groupcreate IAM user and put in above IAM groupcreate and download key and secret for usercreate bucket "mybucket01" in S3you don't have to touch perms of bucket itself clientinstall s3fox addon for Firefox from www.s3fox.netolder versions FAIL! get it only at www.s3fox.netopen s3fox addonFirefox -> Tools -> S3 Organizeradd only one user to "Manage Accounts" using user key and secretin right-hand window of s3fox add "/mybucket01" NOT "/""/" will give you "Access Denied"because user does not have perms to list root buckets, only itself { "Statement": [ { "Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "arn:aws:s3:::mybucket01" }, { "Effect": "Allow", …

Snapshot AWS instance store as AMI

install api-toolsinstall ami-tools generate key / certcreate IAM user upload cert java install / export JAVA_HOMEexport key and secretec2-bundle-vol --user  <AWS acct #> --privatekey /myhome/my-key.pem--cert /myhome/my-cert.pem--arch x86_64--destination /var/tmp--exclude/backup,/mnt,/swapfileec2-upload-bundle--manifest /var/tmp/image.manifest.xml--bucket mybucket/hostname--access-key <AWS Key>--secret-key <AWS Secret>--location EUec2-register--region eu-west-1--name "myaminame"--description "Backing up hostname"mybucket/hostname/image.manifest.xml Taken: NOTE: the above link's content has typos in very essential parts, proof all steps

Monitoring Zookeeper

Option 1
yum -y install gitcdmkdir binmkdir toolscd toolsgit clone updatedblocate zoo.cfgjot this path down for step belowlet's call it "mypathtozoocfg"the name of your zk conf may vary, adjust if so cdcd binln -s /root/tools/zktop/ .make sure you put the '.' on the end of that command /root/bin/ --config /<mypathtozoocfg>/zoo.cfg Option 2, by hand
echo srvr | nc localhost 2181echo stat | nc localhost 2181echo cons | nc localhost 2181etc. Try: watch -d "echo stat | nc localhost 2181" on all zk nodes in separate terms Break-out
srvrversionlatenciesreceived client requestssent client responses and notificationsoutstanding requestszxid, cluster idmode in cluster, leader or followernode count (?)statsimilar to srvrbut has actual connections listed by IP near top Taken:

Set IPs on vagrant-lxc VMs

Cross-communication is always nice.

VAGRANTFILE_API_VERSION = "2" Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| config.vm.define "web", primary: true do |web| = "quantal64" web.vm.provider :lxc do |lxc| lxc.customize 'network.ipv4', '' end end config.vm.define "db" do |db| = "quantal64" db.vm.provider :lxc do |lxc| lxc.customize 'network.ipv4', '' end end end

FYI, versioning info:

vagrant -v Vagrant 1.3.3 vagrant plugin list vagrant-lxc (0.6.0)

vncserver on Amazon Linux on Amazon's AWS

NOTE: this first step may be outdated in new versions of Amazon Linux, which provides libjpeg-turbo.

First, get a good version of libjpeg-turbo:
yum clean allyum --enablerepo=amzn-preview install libjpeg-turbosee: Necessary packages:
gnutls-2.8.5-10.el6.x86_64.rpmlibfontenc-1.0.5-2.el6.x86_64.rpmlibtasn1-2.3-3.el6.x86_64.rpmlibxdmcp-1.1.1-7.ram1.x86_64.rpmlibXfont-1.4.5-2.el6.x86_64.rpmlibxkbfile-1.0.6-1.1.el6.x86_64.rpmlibXmu-1.1.1-2.el6.x86_64.rpmmesa-dri-drivers-9.0-0.7.el6.x86_64.rpmpixman-0.26.2-5.el6_4.x86_64.rpmtigervnc-license-1.3.0-16.el6.noarch.rpmtigervnc-server-1.3.0-16.el6.x86_64.rpmtigervnc-server-minimal-1.3.0-16.el6.x86_64.rpmxkeyboard-config-2.6-6.el6.noarch.rpmxorg-x11-proto-devel-7.6-25.el6.noarch.rpmxorg-x11-xauth-1.0.2-7.1.el6.x86_64.rpmxorg-x11-xkb-utils-7.7-4.el6.x86_64.rpm Necessary packages for fluxbox: pyxdg-0.18-1.el6.noarch.rpm Here's a link to all the packages needed: rpms
Start by trying to instal…

DHCP on CLI for Ubuntu-like systems

Add these lines to /etc/network/interfaces, or tweak existing eth0 linesauto eth0iface eth0 inet dhcpbring it upsudo ifup eth0bring it downsudo ifdown eth0add some stuff to /etc/dhcp/dhclient.confinterface "eth0" {    prepend domain-name-servers,,;    supersede domain-search "", "mydom-vpc.internal";}flush on occasionip addr flush eth0

Match different sets of equally distributed things into groups: hosts and weeks in the year

Use modulo if you want to match up sets of things into groups.

They have to be equally distributed by number.

Here it is with modulo 3: hosts on left, weeks of the year on right.

Clear CLI with 1000 blank lines

For when you really want older output way out of your way, e.g., debugging, copying/pasting.
for i in {1..1000};do echo;done

Zenoss: remodel all Linux servers at once

su - zenosszenmodeler run --path=/Server/Linux 

Mac "Screen Sharing" using vncviewer and ssh tunnel

problem: Remmina (or similar) fail to connect to Mac over Cisco wireless (or whatever router is the problem), and you are tired of having to physically connect your Mac to make VNC work.

make sure "Screen Sharing" is on as normal on your Macssh -L5900:localhost:5900 192.168.X.Xreplace 192.168.X.X with the IP of your Mac"Remote Login" is onset display to "Scaled" and "1024x768" if your Mac is just an Outlook client nowvncviewer -encodings 'copyrect tight zrle hextile' localhost:5900

Selenium test on CLI in 5 minutes using Java

This took me two weeks to nail down looking at it here and there.
mkdir stestscd stestswget selenium-java-2.33.0.zipmkdir jarsfind selenium-2.33.0 -type f -name '*jar' -exec mv -v {} jars \;rm -rfv selenium-*mkdir srcvi src/Test.javause below codemkdir outjavac -d out -cp 'jars/*' src/MyTest.javaignore "Notes" outputcd outrm -rfv orgjava -cp '../jars/*:.' MyTest
import org.openqa.selenium.By; import org.openqa.selenium.WebDriver; import org.openqa.selenium.WebElement; import org.openqa.selenium.firefox.FirefoxDriver; import; import;
public class MyTest  {     public static void main(String[] args) {         WebDriver driver = new FirefoxDriver();         driver.get("");         WebElement element = driver.findElement("q"));         element.sendKe…

Could not update ICEauthority file /home/myuser/.ICEauthority

Your home directory perms got messed up somehow
sudo chown myuser:myuser /home/myuser If it still doesn't work, this may be necessary as well sudo chmod 750 /home/myuser

apache-cassandra11 conflicts with apache-cassandra11-1.1.11-1.noarch

The Error:

Running rpm_check_debug ERROR with rpm_check_debug vs depsolve: apache-cassandra11 conflicts with apache-cassandra11-1.1.11-1.noarch
Solution: wget -i apache-cassandra11-1.1.11-1.noarch.rpm --nodeps This might be necessary before step 2, but make sure you back up your data first, even this should not delete it, you never know: yum remove apache-cassandra11

Poke an ssh tunnel to your house

remote serverssh -R 19999:localhost:22 myhomeuser@myhome.domain.orghome serverssh myremoteuser@localhost -p 19999
use below on your remote server to keep connection open~/.ssh/configHost   User myhomeuser   ServerAliveInterval 60

bash: funky hostname expansion in for loop

for host in myhosts-{2{2,{6..9}},3{2,5,{7..9}}}.mydomain.comdo echo $hostecho "ssh $host cmd"done

Redhat: vnc to remote server

NOTE: if the vncserver insists on starting on a port other than 5966, like 6099, wipe the ~/.vnc directory and start over again. If that doesn't help, change the second instance of 5966 below to 6066 in the port forwarding ssh command, e.g. '-L 5966:localhost:6066'.
 as remote root on myhostyum install tigervncyum install tigervnc-serveryum install libXfont pixmanyum install fluxboxyum install firefoxas a remote user, myuser, on myhostvncserver :66 -localhostset a password, call it mypasswordas local userssh -L 5966:localhost:5966myuser@myhostleave running and do next step in another local termvncviewer -encodings 'copyrect tight zrle hextile' localhost:5966authenticate with mypasswordas a remote user, myuser, on myhostexport DISPLAY=:66xterm &fluxbox &firefox &


sudo apt-get install xfonts-100dpi xfonts-75dpi xfonts-scalable xfonts-cyrillicsudo apt-get install xvfbsudo apt-get install xtightvncviewerapt-get install x11vncsudo apt-get install fluxboxexport DISPLAY=:1Xvfb :1 -screen 0 1024x768x16 &fluxbox &x11vnc -display :1 -bg -nopw -listen localhost -xkbexport DISPLAY=:0vncviewer -encodings 'copyrect tight zrle hextile' localhost:5900you should see fluxbox running within another window which you can navigatetry, for funexport DISPLAY=:1xtermpkill fluxboxfluxbox & Gets weirder close the above vncviewer windowx11vnc -display :1 -bg -nopw -listen localhost -xkbx11vnc -display :1 -bg -nopw -listen localhost -xkbx11vnc -display :1 -bg -nopw -listen localhost -xkbvncviewer -encodings 'copyrect tight zrle hextile' localhost:5900vncviewer -encodings 'copyrect tight zrle hextile' localhost:5901vncviewer -encodings 'copyrect tight zrle hextile' localhost:5902 Launch apps in the new display sudo apt-get install…

Zenoss: multi-graph report

Note: used version 3.x, other version nav my vary slightly
Make a groupcreate a new group under Infrastructurename it group001drag-and-drop a bunch of related servers into itReports -> Multi-Graph Reports, left-navAdd Multi-Graph Report, bottom left-nav '+' signname it report001CollectionsAdd Collectiondef: a collection is just a previously defined set of deviceslike your group001name it collection001Group, in drop-downclick on group001Add to Collectionnav in 3.x sucks, click back on report nameupper middle-nav "breadcrumb"Graph DefinitionsAdd Graphname it graph001Graph PointsAdd DataPointlaLoadInt15_laLoadInt15hard to go wrong with this data pointlater, you can explore othersnaming can be very, very uglye.g. os/interfaces/eth0/ifOutOctets_ifOutOctetsnav sucks, click back on report name in breadcrumbGraph GroupsAdd Graph Groupname it graphgroup001selectcollection: collection001graph definition: graph001method: All devices on single graphsavenav sucks, click back o…

tsunami-udp: faster than rsync

buildsudo apt-get install git gccsudo apt-get install automake autoconfgit clone git:// tsunami-udp./recompile.shsudo make installrunyou'll need a port open to allow direct connection from client to serverunfortunately, this doesn't work through NAT firewalls alonefirewall / port forwardingto server, TCP, 46224 by defaultto client, UDP, 46224 by defaultstart up servertsunamid myfile.gzconnect with clienttsunami set rate 5M connect get myfile.gzit will flood your connection if you dont set rate properlydocumentation files automaticallyallows wildcards when running server and client commands, "*", namelyclient will auto-find all files served, one after the nextuse forward-slash, i.e. get \*, for client command so bash doesn't intrepret the asteriskundocumenteddoesn't do subdirectories, better tar that up and have plenty of disk sp…

bash substring matching

#!/bin/bash[[ "$(hostname -s)" =~ $'dev' ]] && exitecho "we are not a dev host"

telnet vs netcat

netcatprints only what is sent by the remote hosttelnetnot suitable for arbitrary binary data reserves some bytes as control characters quits when its input runs outyou may not see what the other end sendsdoesn't do UDP

Fix bad/wrong aclocal version during make

autoreconf -fiupdates generated configuration files This was necessary when building tsunami-udp from cvs repository, the configure files were old/incompatible.


Exclude domains in your google search results

Put '-' in front of 'site:' operatore.g.Try: how to learn tibco -site:tibcommunity.comsearches for materials on "how to learn tibco" while ignoring all Tibco's noise FYI: It seems there is a copyright, so searching for "SOA" instead might lead to more books with desired material covered.

Out of inodes: file write error (No space left on device)

df -hiproves you are out of inodes or notcause is most likely tons of small files in some "problem directory", poke aroundfind <random_dir> -type f | wc -lgive a count of file in that subdircommon problem dirs/var/spool/<XYZ>/tmpfind <problem_dir> -type f -deletedeletes one file at a timerm will get stuck finding files first if you use wildcard like *

git push/pull just current branch

git config --global push.default trackinggit config --global pull.default tracking FYI, these settings are saved in ~/.gitconfig

Zenoss: Linux SSH commands

On CLIsu - zenosszenpack --listwget --install ZenPacks.zenoss.LinuxMonitor-1.1.5-py2.6.eggwget zenpack --install zenoss so all stuff is picked upshouldn't be necessary, but Monitoring Templates were missing/erroring for me withoutVia web interfaceDrag-and-drop server from Device list into Interface -> Device classes -> Server -> SSH -> LinuxSet that servers Configuration PropertieszCommandUsernamezCommandPasswordThis requires that you have at least one user that can SSH in via a password

dead simple irc gui client

apt-get install lostirc

Fetch Cassandra keyspaces and column families from nodetool command via Ruby


require 'logger'

log ='/var/log/cassandra/repair.log', 'daily')
log.level = Logger::INFO
log.datetime_format = "%Y-%m-%d %H:%M:%S"

keyspaces = {}

result = %x[nodetool cfstats | egrep 'Keyspace:|Column Family:']
result = result.gsub(/\s/, '')

result.split("Keyspace:").each do | keyspace |
  keyname = keyspace.split("ColumnFamily:")[0]
  next if (keyname == nil)
  next if (keyname == 'OpsCenter' or keyname == 'system')
  cfs = keyspace.split("ColumnFamily:").drop(1)
  keyspaces[keyname] = cfs

keyspaces.keys.each {|x|
  keyspaces[x].each do |y|"Repair start: #{x} #{y}")
#    result = %x[nodetool getcompactionthreshold #{x} #{y}]
#"Repair end: #{x} #{y}")

Direct ssh to a server via proxy using putty/plink on Windows

Make sure seamless ssh keys are setup to your bastion server for your usernameNot covered hereSee: -> Host Name -> mytargetserver.mydomain.comConnection -> ProxyProxy Type -> LocalTelnet command, or local proxy command c:/program files (x86)/putty/plink.exe -l myusername -agent -nc %host:%portadjust this path to plink.exe to match your local setuphint: install the complete putty install package, not just puttyTunnels  L8081 Hint: always hit "Save", no matter what you do, or however inconvenient it was designed to be.


Show progress during dd copy

kill -USR1  <pid of dd>

In-memory page states and kscand

kscand taskperiodically sweeps through all the pages in memorynotes "last access time"was accessed?increments page's age counterwasn't accessed?decrements page's age counterage counter at zeromove page to inactive dirty state In-memory page states
freebegin in this statenot being usedavailable for allocation, i.e. made activeactiveallocatedactively in useinactive dirtyhas fallen into disusecandidate for removal from main memoryinactive launderedinterim statecontents are being moved to diskwhen disk I/O operation completemoved to the inactive clean stateif, during the disk operation, the page is accessedmoved back into the active stateinactive cleanlaundering succeeded, i.e. contents in sync with copy on diskmay be deallocatedoverwritten Taken:

LVM crypt disks on Linux/AWS

dd if=/dev/urandom of=/keys/xvdm.key bs=1024 count=4dd if=/dev/urandom of=/keys/xvdn.key bs=1024 count=4cryptsetup --verbose -y luksFormat /dev/xvdm /keys/xvdm.keycryptsetup --verbose -y luksFormat /dev/xvdn /keys/xvdn.keycryptsetup luksOpen /dev/xvdm cryptm --key-file /etc/xvdm.keycryptsetup luksOpen /dev/xvdn cryptn --key-file /etc/xvdn.keypvcreate /dev/mapper/cryptm /dev/mapper/cryptnAdd entries to /etc/crypttab for reboots and test somehowcryptm /dev/xvdm /etc/xvdm.key lukscryptn /dev/xvdn /etc/xvdn.key luks Complete LVM setup and add entries to /etc/fstab.
Hint: don't make one, single typo...ever.

Double looping with bash

for ITEM in $(find /cassandra/data -type d -name snapshots)do for DIR in $(find ${ITEM} -maxdepth 1 -mindepth 1 -type d -mtime -1)do echo $ITEM $DIRdonedone

Confluence: Lock wait timeout exceeded; try restarting transaction

WARNING! Atlassian themselves recommend STRONGLY against this procedure. If any action, take the action that shows you which table is locking, DO NOT DELETE anything unless you are 100% confident you can reverse your deletions. DO NOT DELETE, DO NOT DELETE!

Seeing this?

2013-05-14 16:39:55,581 ERROR [QuartzScheduler_Worker-1] [sf.hibernate.util.JDBCExceptionReporter] logExceptions Lock wait timeout exceeded; try restarting transaction
2013-05-14 16:39:55,581 ERROR [QuartzScheduler_Worker-1] [sf.hibernate.impl.SessionImpl] execute Could not synchronize database state with session

The first is actually reported from MySQL itself, the second from Hibernate, which wraps databases for Java apps.

If you are desperate, try deleting all rows from mysql's crowd.cwd_membership table after backing it up, worked for me, syncs started working again in under 16ms.
mysqldump crowd | bzip2 -c > /mnt/dump_crowd_`date +%Y%m%d`.sql.bz2mysql crowd -e 'delete from cwd_membership' If that doe…

Put stuff on your Nexus 4

apt-get install gmtpMake sure your "Storage" is in MTP mode P.S. Or, if you have access to a Mac: "Android File Transfer"

Check if a UDP port is open through a firewall

nmap -sU -p4569 remotehost

EC2 server to VPC private instance via VPC NAT instance

iptables -t nat -A PREROUTING -s -d -i eth0 -p tcp -m tcp --sport 1024:65535 --dport 3306 -j DNAT --to-destination is your external server's public IP address10.0.0.254 is your VPC NAT instance's IP address in the public subnet10.0.12.10 is the VPC IP address of your server in a private subnet3306 is the port your service is listening on

ec2-create-image: attached EBS volumes are snapshot and mapped

"ec2-create-image does snapshot the attached EBS volumes and add a block device mapping for those snapshots in the created AMI"

Nicer settings for cssh: terminal_font, terminal_size, terminal_args


terminal_font=5x8terminal_size=140x48terminal_args=-fg greenauto_close=1

Slow SSH: one possible solution, set "useDNS" to "no"

In sshd_config on the targer server, set "useDNS" to "no", and restart sshd Details:

mysqldump between two servers over ssh

set up ssh keys so server1 user can ssh to a server2set $HOME/.my.cnf so both users can get into respective mysql cli without passwordssee below for sample filecreate the new, empty database on server2, receiving serverfrom server1mysqldump mydatabase | ssh server2 mysql mydatabase Taken:
# $HOME/.my.cnf [client] password=myusersmysqlpassword

Openfire: use your 3rd-party, signed SSL cert

default keytool password is "changeit"use it for all password promptsworks 99%if it doesn't work, ask around, poke aroundGet keytool command in your PATHUse Openfire's web interface to "generate self-signed certificates"NOTE: "import a signed certificate and its private key"brokensays certs were loaded in green, but shows no result in "Server Certificates" listwhole reason for this postfind existing keystores on your chat servernice updatedblocate keystorelocate truststorehere, we'll assume /opt/openfire/resources/securitylist the "domain" Openfire used for the "generate self-signed certificates" action abovekeytool -list -v -keystore /opt/openfire/resources/security/keystore | grep rsae.g.: Alias name: my.domain.com_rsaremember this for a later stepload your CAs root cert into the truststorefirst, see if it is therekeytool -list -v -keystore /opt…

Ubuntu: convert desktop to server fast

Below as root:
apt-get remove ubuntu-desktopapt-get install linux-server linux-image-serverapt-get purge lightdm/etc/default/grub, change matching lines to below#GRUB_HIDDEN_TIMEOUT [comment it out]GRUB_CMDLINE_LINUX_DEFAULT=""GRUB_TERMINAL=consoleupdate-grubreboot Taken:

tcpdump HTTP headers

tcpdump -vvvs 1024 -l -A port 80 | egrep '^[A-Z][a-zA-Z\-]+:|GET|POST'Match your port, here it is 80, could be 8080 or 443, e.g.

Edit remote files with local editor using ssh and sshfs

apt-get -y install sshfsAdd your local user to the fuse groupmkdir ~/mylocaldirsshfs -o idmap=user files under ~/mylocaldir, and as you save them, they are automatically updated in /remotepath Note: the "-o uid=500" can be used if you get permission errors, but replace "500" with you local id number

"Couldn't read packet: Connection reset by peer"change this line in your /etc/ssh/sshd_config file to match what's hereSubsystem sftp internal-sftphappens on RedHat Enterprise 6.1 for sure

Quick CLI screenshots on Linux or Openbox / Fluxbox

sudo apt-get -y install imagemagick eogimport myscreenshot.jpgselect portion of screen with the crosshairseog myscreenshot.jpg


Who is participating and do I know what each of them wants to get out of this meeting? What are my goals and what's the minimum that I want to achieve? Can I give in on certain points?Are there issues I won't budge on?What are next steps after the meeting?Who will ultimately decide whether I get what I want or not?Are there things I don't want to lay out on the table and not discuss in this meeting?Who should do most of the talking? Taken:

keytool: put your SSL key into a new keystore

openssl pkcs12 -export -in mycert.crt -inkey mykey.key -out myp12blob.p12 -name mykeystorealias -CAfile mycascert.crtSet the password to "changeit"keytool -importkeystore -deststorepass changeit -destkeypass changeit -destkeystore mykeystore -srckeystore myp12blob.p12 -srcstoretype PKCS12 -srcstorepass changeit -alias myaliaskeytool -list -v -keystore mykeystore Taken:

One-liner, CLI web server on port 8000

python -m SimpleHTTPServer

Cassandra in 30 seconds

writes writes entries directly to disk without checking if they already existdoes fancy indexing of entriesreturns a write "OK" to the writing client after a quorum of nodes have confirmedreadstries to return the newest entry when client does a readhas methods to eventually get the newest entry to return even if old ones still aroundreplicationstores entries to multiple nodes if replication is turned ondeletesdoesn't offically delete, just marks dead entries with a "tombstone"compaction is what gets rid of old versions of entries and dead entriesbalancingautomatically fills in data holes if a node disappearsautomatically spreads data if new nodes are addedresurrection3-nodes: X, Y, Z, all replicate all dataserver X goes downdelete goes to Y and Z for key AY and Z are "compacted"i.e., redundant keys & tombstones cleaned up / removedkey A is completely gone as far as  Y and Z knowX comes up and has value for key AA is back! resurrected from the dead…

Move huge directory on the root partition to a huge non-root parition

Assumption: /mnt is a huge disk partition separate from the / partition  (aka root partition)
mkdir -p /mnt/home/myfatdirectorykill all processes that have open files to /home/myfatdirectorylsof /home/myfatdirectorymake sure you get ZERO results, ie no processes have open files to this directorymv /home/myfatdirectory /home/myfatdirectory_oldmkdir -p /home/myfatdirectorymount --bind /mnt/home/myfatdirectory /home/myfatdirectoryadd to bottom of /etc/fstab, so the mount is picked up on reboot/mnt/home/myfatdirectory /home/myfatdirectory none bind 0 0 NOTES:
fix perms as necessary by interleaving your own steps into the abovefor the paranoid: you might want to make sure fstab entries work fine on reboot

Find human-readable size of all files in a particular directory which have been modified in the last day and that are over 100 megs

find /var -mtime -1 -type f -size +100M -exec ls -lh {} \;

Recover accidentally deleted file as long as some process still has it open, on Linux

lsof | grep myfilethe second column is the process idthe number in the fourth column is the file descriptorcp /proc/<process id>/fd/<file descriptor> myfile.savedTaken:

Build unbound from source on redhat/centos

NOTE: unbound is now available via epel repo on Amazon Linux
install requirementsyum groupinstall "Development Tools"yum install openssl-develyum install expat-develbuildldnswget zxvf ldns-1.6.16.tar.gzcd ldns-1.6.16/./configure --disable-gost --disable-ecdsamakemake installunboundwget zxvf unbound-latest.tar.gzcd unbound-1.4.20/./configure --disable-gost --disable-ecdsamakemake installadd libs to system lib pathvi /etc/ this one line/usr/local/libsudo ldconfigadd unbound useradduser --system unboundtweak configvi /usr/local/etc/unbound/unbound.confsee simple sample belowrununboundchecklsof -nP -i :53stoppkill unboundrestartunboundserver:         verbosity: 1         interface:         access-control: allow forward-zone:        name: "my-vpc.internal"        forward-addr:        forward-fir…

Set up private, internal DNS for your VPC using Route 53 and unbound

CRITICAL: AWS now offers internal VPC DNS! Below is no longer necessary AFAIK. Woo hoo!

create a Hosted Zone, something like "mydomain.internal"get the IP addresses of the name servers assigned to your new zoneSTRIP OFF '.' at the end of the name servers or BOOM!  create a new DHCP Options Setadd the IP addresses you gathered above to the domain-name-servers fieldChange DHCP Options Set of your VPC by right-clicking itrun sudo dhclient on any already-running instance in the VPC to pick up changesdebug changes have taken place on an instance: cat /etc/resolv.conf
RECOMMEND ALTERNATE SOLUTION: here's a sample unbound.conf I ended up using for a DNS forwarding server within my VPC -- see comments below. I adjusted the "options set" to point at this DNS server instead, in my case.

NOTE: Btw, unbound is available unde…

Run one command on many Linux servers, install pssh, works on Mac

sudo easy_install pipsudo pip install psshCreate a file with the list of servers you want to control, call it servers or something similarpssh -h servers "date"Put your ssh pub key up to all of thempssh -h servers -i "echo 'ssh-rsa AA...whme@myfqdn' >> /home/user/.ssh/authorized_keys" Taken:

Note: csshX is very nice if you want to see all terminals at once as you type, more later

github and multiple accounts, git keeps asking for password

ssh-keygen -t rsa -C "" -f ~/.ssh/id_rsa_mycompanyssh-add ~/.ssh/id_rsa_mycompanyAdd below to ~/.ssh/configgit clone git@github-mycompany:mycompany/myrepo.gitHost github-mycompany   HostName   User git   IdentityFile ~/.ssh/id_rsa_mycompany

Generate gpg keys, upload to server, pull from server, from CLI

gpg --gen-keygpg --list-keysgpg --keyserver --send-keys '62E49F5A'that funky number is listed in the output of "list-keys", just look carefullyyour funky number will be uniqueshould be 8 digits long and hexgpg --keyserver --search-keys ''gpg --keyserver --search-keys ''gpg --keyserver --recv-keys 1F3B6ACAGet her key with the ID you saw in previous stepUse keys to encrypt contentCan be encrypted for multiple people in one go, and only those listed can open the result

Searching with an LDAP filter

Set the dn you wish to search throughe.g., ou=Employees,dc=mycompaniesdomain,dc=comSet the filtere.g., (&(objectclass=inetorgperson)(uid=myfirstname.mylastname))inetorgperson is an LDAP standard "object", btw, there are a bunch of others Btw: one can also -- quick and dirty -- dump the whole LDAP db to a ldif file, and do a text search on that.

Simple Ruby email out localhost:25, no OpenSSL::SSL::SSLError, no tlsconnect error

This skips the common OpenSSL::SSL::SSLError / tlscommon errors somehow, see below for error output.DON'T use pony's "smtp" hash option, it has the same problem. Notice it is missing here! Steps:
gem install ponytake below code put in ~/bin/mail_test.rbtweak for your environmentchmod +x ~/bin/mail_test.rb


require 'rubygems'
require 'pony'

mystring = "a\nb\nc"

Pony.mail(:to => '', :from => '', :subject => 'Test mail script', :body => 'Hello there.', :attachments => {"mail_test.txt" =>"/home/me/bin/mail_test.rb"), "mystring.txt" => mystring})

Common, irritating tlscommon error:
/usr/lib/ruby/1.8/openssl/ssl-internal.rb:123:in `post_connection_check': hostname was not match with the server certificate (OpenSSL::SSL::SSLError) from /usr/lib/rvm/gems…

Generate IAM certs for users on AWS

openssl genrsa 1024 > username-env-pk.pempk stands for private keyopenssl req -new -x509 -nodes -sha1 -days 365 -key username-env-pk.pem -outform PEM > username-env-cert.pemlasts for 365Paste username-env-cert.pem in to the AWS Signing Certificates area for that userGive user both username-env-pk.pem and username-env-cert.pem, and wish them luck

Redirect all command output, stdin/stderr, to a file on Linux

puppet agent --test --noop >/var/tmp/puppet_noop_20130315 2>&1

Notes: The 2>&1 redirects stderr to where stdin pointsstdin points to the console by default unless you change thathere stdin is redirected to a file under /var/tmp

vagrant on aws

vagrant plugin install vagrant-awsvagrant box add aws001 initAdapt below and put in the "Vagrantfile" filevagrant up --provider=awsvagrant sshvagrant destroyVagrant.configure("2") do |config| = "aws001"
  config.vm.provider :aws do |aws|     aws.access_key_id = "<your_aws_key_id>"     aws.secret_access_key = "<your_aws_secret>"     aws.keypair_name = "<your_keypair_name>"     aws.ssh_private_key_path = "/home/<you>/.ssh/<your_keypair_name>.pem"
    aws.region = "eu-west-1"     aws.ami = "ami-01080b75"     aws.ssh_username = "ubuntu"   end end

2G swap file

dd if=/dev/zero of=/swapfile bs=1M count=2048mkswap /swapfileswapon /swapfile

Get provisioned public key for AWS EC2 instance via curl


Specify ssh key when using rsync

WARNING: don't use ~ and don't use double quotes.rsync -av -e 'ssh -i /home/me/.ssh/id_rsa_other' /localdir/Also, some alternative port:rsync -av -e 'ssh -p 2221' /localdir/

Build rpm of monit 5.5

Download "_topdir" to match your local into what you set _topdir tomkdir -p {BUILD,RPMS,SOURCES,SPECS,SRPMS,tmp}Download monit-5.5.tar.gz file and put it in the SOURCES directoryPut monit.spec in the SPECS directoryrpmbuild -v -bb --clean SPECS/monit.specyum -y install rpmdevtoolsOutput should mention where the rpm ended uprpm -qlp on the rpm file to see what's in it Gory details:

"%spec -q" means be "quiet" when untarring, not that interesting, but people use it a lot

Exim: rewrite "From" field

Amazon's AWS SES service requires email be addressed from a particular user.
Add to begin rewrite section*  FfrsReload exim Stolen from:

Right-click with Mac trackpad

Try selecting something and clicking anywhere on the trackpad with TWO fingers.

AWS SES with exim4 on debian-based Linux

apt-get install exim4dpkg-reconfigure exim4-configSelect: internet site; mail is sent and received directly using SMTPIP-addresses to listen on for incoming SMTP connections: ; ::1 (it's the default anyways)Take most defaultsSplit configuration into small files?NO!lsof -nP -i :25Make sure you aren't allowing the world to connect! is goodAWS -> SESVerified SendersVerify one of your existing email addressesSMTP SettingsCreate My SMTP CredentialsUse downloaded credentials.csv file contents for below stepsEdit /etc/exim4/exim4.conf.templateFind ALREADY existing line "public_name = LOGIN"change to "public_name = OLD_LOGIN"Add below sections to existing sections in the fileUse info from credentials.csv in place of pointy brackets, e.g. <aws_id> service exim4 stopservice exim4 starttail -F /var/log/exim4/mainlogKeep this running in another terminal while you do the belowecho test001 | mail -r <email_you_verified_with_aws> -s &qu…

JMX ports to open in firewall for jconsole to Cassandra

Port 7199Used for about a dozen packets when JMX connection first madeA handshake of sortsProbably sets up the agreement on which high port to connect to, used belowSimilar to SIPSimilar to old FTPNot used again after initial handshakePort range 55000 to 55999To see these packets, on JVM servertcpdump -nn ! port 22 and host <jconsole client IP> (not literal, replace this)If jconsole starts showing graphs, you are connected To run jconsole directly on the server via VNC, see this article:

Tricks and Tips
If you don't want to expose 1000 ports to the world for some reasonOpen all ports on firewall in front of JVM serverOn JVM server: tcpdump -nn ! port 22 and host <jconsole client IP>Start jconsole connection on client machineWatch to see which port JVM server is trying to reach jconsole client viaClose all but that port in the firewall, will be between 55000-55999Do a local experiment to a local JVM JMX-able application if unsur…

What's an MBean or JavaBean?

A fancy name for a Java class that:
is serializablemeans you can write/read the contents directly to disk as ishas a 0-argument constructorhas getter and setter methodsHave you ever tweaked a mbean value via jconsole before, btw? Understand, there was major hype for Java back in circa 2000 that didn't quite pan out as expected.

Increase bash history size on Mac

Add below lines to end of ~/.bash_profileSource result or log out and back insource ~/.bash_profileTestecho $HISTFILESIZEecho $HISTSIZE export HISTFILESIZE=2500

export HISTSIZE=""

rsync: include only these

rsync -avP --include=*/ [set of includes] --exclude=* for examplersync -n -avP --include=*/ --include=*.dat  --include=*.idx --exclude=* /informatica/ /backup/informatica/


Linux equivalent of updatedb on Mac

sudo /usr/libexec/locate.updatedb

Disable CAPS LOCK key on MacBook Pro

System Preferences -> Keyboard -> Modifier KeysMAKE SURE YOU PICK THE RIGHT KEYBOARD"Select Keyboard"Caps Lock Key -> No Action Might as well do all the other keyboards while you're at it, eh?

Fix jEdit fonts for MacBook Pro with Retina display

Assumptions: jEdit was installed into /Applications directory.

Close any running jEditEdit /Applications/ end of file, add the top two lines below above the bottom two lines, and saveDrag jEdit to desktopStart jEdit and see if fonts fixedIf fixed, drag jEdit back to /Applications, and retest.

Open snmp requests to world

Open snmp requests to worldsnmpd.confrocommunity public  udp: this line out if it exists so snmpd listens to the worldMake sure snmpd is listening on all interfaceslsof -nP -isnmpd  ......stuff in here........  UDP *:161Test from another serversnmpwalk -cpublic -v1 <IP address serving snmp request>

Vagrant: how to set vm memory and force gui mode

# -*- mode: ruby -*-
# vi: set ft=ruby : do |config|

  config.vm.define :zenoss do |zenoss_config|  = "quantal64" :hostonly, ""
      zenoss_config.vm.forward_port 80, 8885
      zenoss_config.vm.customize ["modifyvm", :id, "--memory", 1024]

  config.vm.define :desktop do |desktop_config|  = "quantal64" :hostonly, ""
      desktop_config.vm.boot_mode = :gui


How to convert encryption keys: RSA to PEM

RSA to PEMssh-keygen -t rsaopenssl rsa -in ~/.ssh/id_rsa -outform pem > id_rsa.pem

Asterisk pre-reqs for compiling on Debian/Ubuntu

apt-get -y install make libncurses-dev libxml2-dev sqlite3 libsqlite3-dev libiksemel-dev libssl-dev  subversion

Redirection on CLI: greater-thans, ampersands and numbers

myCLIapp > /dev/null 2>&1Order is important, don't reverse the redirectsSee belowFirst redirect sends STDOUT to kernel's blackhole equivalentresult:  STDOUT is forgotten, never shownSTDOUT is implied when no number before '>' Second redirect sends STDERR to where STDOUT pointsSTDERR goes where STDOUT goesSo STDERR ends up in blackhole tooThe amphersand is necessaryto specify this is a "file handle" and not a filenameReverse mistakemyCLIapp 2>&1 > /dev/null Read left to right 2>&1Would 1st send STDERR to where STDOUT is pointing currentlyWhich is STDOUT so far> /dev/nullSTDOUT is implied since no number before '>' Send STDOUT to /dev/nullResult is STDERR being displayedSTDOUT being sent to blackholeRememberif no number given before '>', them implied is STDOUT

Quick start howto for divish on Debian

PrepHere, there is one backup server and two client servers that need to be backed up Make sure root user on backup server can ssh to and without passwordLater, much later, see online for better, more secure ways install dirvish and rsync on server, rsync on clients apt-get install dirvishapt-get install rsyncClientsThe below are just example directories, use your own, the ones you want backed upmkdir -p /data/backups/mkdir -p /data/backups/etcmkdir -p /data/backups/var/logrsync -av /etc/ /data/backups/etc/rsync -av /opt/ /data/backups/opt/Make a cron job to do the rsyncs above nightly Servermkdir -p /backup/dirvish/mybox01/dirvishmkdir -p /backup/dirvish/mybox02/dirvish vi /backup/dirvish/mybox01/dirvish/default.confget contents below vi /backup/dirvish/mybox02/dirvish/default.confget contents belowdirvish --vault mybox01 --initdirvish --vault mybox02 --initVerifyBacked up files should now be under /backups/dirvish on backup servertree /backu…

Install lex on Debian

sudo apt-get install byacc flex

Quick start how-to graphite base install on Debian

UPDATE: Also see Latest Graphite on Amazon Linux at AWS.

This only gets graphite working on your local Linux box. Left to user to translate to remote server installation thereafter.

NOTE: Do everything as root user

make sure apache2 is installed and working with wsgiapt-get install apache2 -yapt-get install libapache2-mod-wsgi -ya2enmod wsgiwsgi needs a place for its socketsmkdir /etc/apache2/runmkdir /var/run/wsgi chmod 777 /etc/apache2/run /var/run/wsgithis seems undocumented, thanks! Install dependenciesapt-get install -y libapache2-mod-wsgi python-twisted python-memcache python-pysqlite2 python-simplejsonapt-get install -y python2.6 python-pip python-cairo python-django python-django-taggingInstall graphite elementsmkdir -p /root/graphite-installcd /root/graphite-installgit clone clone clone clone…