Server setup:
./easyrsa init-pki    (don't do this twice!)
./easyrsa build-ca

User key and cert signing request, on a completely separate machine:
./easyrsa init-pki    (don't do this twice!)
./easyrsa gen-req myuser

Server signs the user cert request:
./easyrsa import-req myuser.req myuser
./easyrsa sign-req client myuser
Generate your server key and cert in a similar manner to a user.
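For example, a minimal sketch assuming EasyRSA 3 ("myserver" is a placeholder name; note the cert type is "server", not "client"):
on the server machine: ./easyrsa gen-req myserver nopass
on the CA machine: ./easyrsa import-req myserver.req myserver
on the CA machine: ./easyrsa sign-req server myserver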
Any client with a signed cert may connect to the server. There is no record of the client cert on the server itself; since the server signed the user cert, that is authority enough to validate the user cert.
Only if a user cert needs to be revoked is a "revocation file" (CRL) created on the server; this file disallows that user from connecting. If no users need to be revoked, nothing needs to be done, and nothing about users needs to exist on the server side.
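Revocation itself, a sketch assuming EasyRSA 3 and OpenVPN (the crl-verify line goes in your OpenVPN server config; the path is illustrative):
./easyrsa revoke myuser
./easyrsa gen-crl
crl-verify /etc/openvpn/crl.pem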
Now the files in tmp001 map to your remote /var/www directory, and access to them uses an ssh session maintained in your /tmp directory, i.e. all interactions are performed over the same ssh session.
To see the tmp file, if you just opened the sshfs session within the last 10 minutes:
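A hedged sketch of the whole arrangement (the mount options and user@remote are illustrative, not from the original setup):
mkdir -p tmp001
sshfs -o ControlMaster=auto -o ControlPath=/tmp/%r@%h:%p user@remote:/var/www tmp001
ls -lt /tmp | head    (the ssh control socket should show near the top)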
cp -v /opt/graphite/conf/graphTemplates.conf.example /opt/graphite/conf/graphTemplates.conf
vi /opt/graphite/conf/graphTemplates.conf
change the '[default]' section to '[uggs]'
change another section, e.g. '[solarized-dark]', to '[default]'
reload the dashboard and you should see the changed colors
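For reference, a section in graphTemplates.conf looks roughly like this (keys follow the stock example file; values are illustrative):
[default]
background = black
foreground = white
majorLine = white
minorLine = grey
lineColors = blue,green,red,purple,brown,yellow,aqua,grey,magenta,pink
fontName = Sans
fontSize = 10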
The below is for setting up unicast, not multicast. AWS does not support multicast networking.
Key concepts: gmond daemons run on every server and use C code to collect the server's stats; this data is stored in local memory. Multiple gmonds can send their data on to one central gmond to hold; call this a gmond "bank". This "bank" also uses only memory to store the server stats. gmetad comes along, collects the data from the gmond "banks", and stores it in RRDs, which are files; the web interface uses these RRD files, and it usually runs on the same server as gmetad.
Cluster name: the cluster name is key in grouping data and getting it from gmond to gmond and then on to gmetad. data_source is how gmetad finds the "banks"; and, by the way, you can have redundant "banks" for one cluster data_source.
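In gmetad.conf that looks like this (hostnames are placeholders; the second host is a redundant "bank"):
data_source "mycluster" bank01.example.com:8649 bank02.example.com:8649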
Getting rid of multicast settings: comment out all multicast-related lines in gmond.conf: bind_hostname, mcast_join, bind.
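A sketch of the resulting unicast gmond.conf channels (the cluster name and bank hostname are placeholders; 239.2.11.71 is gmond's stock multicast address, shown commented out):
cluster {
  name = "mycluster"
}
udp_send_channel {
  # mcast_join = 239.2.11.71
  host = bank01.example.com
  port = 8649
}
udp_recv_channel {
  # mcast_join = 239.2.11.71
  # bind = 239.2.11.71
  port = 8649
}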
Local variables for this post, adjust to fit your setup:
OpenVPN client server IP: 192.168.1.200
Remote network: 172.16.1.0/24

On the client server that is using OpenVPN to connect to the remote server (note MASQUERADE lives in the nat table, so -t nat is required):
sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
as the root user, do: echo 1 > /proc/sys/net/ipv4/ip_forward

On your local computer:
Linux: ip route add 172.16.1.0/24 via 192.168.1.200
Mac: route -n add 172.16.1.0/24 192.168.1.200
Now, you should be able to ping from your local computer, through the client machine, and to a server in the remote network. Once that works, try ssh.
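For example (assuming a host at 172.16.1.10 exists on the remote network; the address and username are illustrative):
ping -c 3 172.16.1.10
ssh user@172.16.1.10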
NOTE: take a look at /etc/sysctl.conf if you want ip_forward to persist through reboots of the client server: net.ipv4.ip_forward=1
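To apply that setting without a reboot, standard sysctl usage (equivalent to the echo above):
sudo sysctl -w net.ipv4.ip_forward=1
or, after editing /etc/sysctl.conf:
sudo sysctl -p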
local-zone: "mydomain.internal" static local-data: "app01.mydomain.internal IN A 10.0.0.10" local-data: "app02.mydomain.internal IN A 10.0.0.11" local-data: "biggie01.mydomain.internal IN A 10.0.0.12" local-data: "mysql01.mydomain.internal IN A 10.0.0.20" local-data: "mysql02.mydomain.internal IN A 10.0.0.31" local-data: "apache01.mydomain.internal IN A 10.0.0.200"
Become the zenoss user first
su - zenoss
Remodel a bunch
for i in server1 server2 server3; do zenmodeler run --now -d $i; done
Discover a bunch
for i in server1 server2 server3; do zendisc run --deviceclass=/Server/Linux --device=$i; done
create:
gpg --gen-key
if entropy is taking too long: sudo apt-get install rng-tools
note your key ID from the output: in "pub 4096R/B110C232 2014-04-23", the key ID is B110C232

push:
gpg --send-keys --keyserver keyserver.ubuntu.com B110C232
(replace B110C232 with the key ID from the output above)

more to come
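One way to confirm the push worked, standard gpg usage (not from the original note; same placeholder key ID):
gpg --keyserver keyserver.ubuntu.com --recv-keys B110C232
gpg --fingerprint B110C232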
UPDATE: Amazon Linux is now on Ruby 2.x, so the below is DEPRECATED for new Amazon Linux images. But parts may be useful.

yum -y remove ruby
yum -y install ruby19
gem install --no-rdoc --no-ri puppet --version=3.1.1
/usr/local/bin/puppet -V
add /usr/local/bin to the $PATH of users that need it

vi /usr/local/share/gems1.9/gems/facter-1.7.5/lib/facter/ec2.rb
change line 28 (the existing line 28 is much longer) to:
if (Facter::Util::EC2.can_connect?)

Reference: http://projects.puppetlabs.com/issues/7559
If you manage user passwords with Puppet:
yum -y install ruby19-devel
yum -y groupinstall "Developer Tools"
gem install --no-rdoc --no-ri ruby-shadow
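ruby-shadow is what lets Puppet's user resource set password hashes on Linux; a minimal sketch of such a manifest (the username and hash are placeholders):
user { 'deploy':
  ensure   => present,
  password => '$6$examplesalt$examplehash...',   # pre-hashed, not plaintext
}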
lxc-create -n puppetmaster01 -t debian
lxc-create -n puppetclient01 -t debian
look in /etc/default/lxc-net and find the subnet defined by LXC_NETWORK
vi /var/lib/lxc/puppetmaster01/config
add an IP address ending in .100 on that subnet, for example: lxc.network.ipv4 = 10.0.1.100/24
vi /var/lib/lxc/puppetclient01/config
add an IP address ending in .101 on that subnet, for example: lxc.network.ipv4 = 10.0.1.101/24
lxc-start -d -n puppetmaster01
don't forget the "-d" or you'll be stuck in a tty session
lxc-start -d -n puppetclient01
lxc-attach -n puppetmaster01
apt-get install puppetmaster
lxc-attach -n puppetclient01
apt-get install puppet
vi /etc/hosts and add an entry "puppet" pointing at the puppetmaster
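To confirm both containers are running and have their addresses (flag support varies by LXC version):
sudo lxc-ls --fancy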
WARNING: with distro "saucy" as your container/host system, dnsmasq is broken and VMs cannot get a DHCP IP address from dnsmasq. To attempt a fix, try:
sudo iptables -t mangle -A POSTROUTING -o lxcbr0 -p udp --dport bootpc -j CHECKSUM --checksum-fill
then refresh the VM IP: stop and start the VM, or kill the existing dhclient process on the VM, an…