Let’s encrypt!

“The Let’s Encrypt project aims to make encrypted connections to World Wide Web servers ubiquitous. By getting rid of payment, web server configuration, validation emails, and dealing with expired certificates, it is meant to significantly lower the complexity of setting up and maintaining TLS encryption.” (Wikipedia)

Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG), a public benefit organization. Major sponsors are the Electronic Frontier Foundation (EFF), the Mozilla Foundation, Akamai, and Cisco Systems. Other partners include the certificate authority IdenTrust, the University of Michigan (U-M), the Stanford Law School, the Linux Foundation as well as Stephen Kent from Raytheon/BBN Technologies and Alex Polvi from CoreOS.

So, in summary: with Let’s Encrypt you can request a valid SSL certificate, free of charge, issued by a recognized CA, while avoiding the mess of emails, requests, and forms typical of the classic CA issuers’ webpages.

The rest of the article shows the steps required for a complete setup of an Nginx server using Let’s Encrypt issued certificates:

Install the Let’s Encrypt setup script (certbot-auto):

cd /usr/local/sbin
sudo wget https://dl.eff.org/certbot-auto
sudo chmod a+x /usr/local/sbin/certbot-auto

Preparing the system for it:

sudo mkdir /var/www/letsencrypt
sudo chgrp www-data /var/www/letsencrypt

A special directory must be accessible to the certificate issuer, so add this location to the Nginx server block (its root is the webroot created above):

location ~ /.well-known {
    root /var/www/letsencrypt;
    allow all;
}

Restart the service. Let’s Encrypt will then be able to access this domain and verify the authenticity of the request:

sudo service nginx restart

Note: at this point you need a valid DNS A record pointing to the Nginx server host. In the examples below I will use the mydomain.com domain.

Request a valid certificate for your domain:

sudo certbot-auto certonly -a webroot --webroot-path=/var/www/letsencrypt -d mydomain.com

If everything is ok, you will get a valid certificate issued for your domain:

/etc/letsencrypt/live/mydomain.com/fullchain.pem
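The issued certificate can be inspected with openssl to check its subject and expiry dates. The sketch below generates a throwaway self-signed certificate so it can be run anywhere; on a real server, point openssl at the fullchain.pem path above instead:

```shell
# Stand-in certificate so the example is self-contained (90 days, like Let's Encrypt)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
    -out /tmp/demo.crt -days 90 -subj "/CN=mydomain.com"
# Show whom the certificate was issued for and when it expires
openssl x509 -in /tmp/demo.crt -noout -subject -dates
```

The notAfter date tells you when renewal is due; Let’s Encrypt certificates expire after 90 days, hence the cron-based renewal shown later.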

You will need a Diffie-Hellman PEM file for the cryptographic key exchange:

sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
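Generating 2048-bit parameters can take several minutes. Once generated, the file can be sanity-checked with openssl; the sketch below uses tiny 512-bit parameters only so it finishes quickly (keep 2048 for production):

```shell
# Generate small demo parameters (use 2048 bits in production) and verify them
openssl dhparam -out /tmp/dhparam-demo.pem 512
openssl dhparam -in /tmp/dhparam-demo.pem -check -noout
```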

Set up the Nginx virtual host with SSL as usual, but using the Let’s Encrypt issued certificate:

...
ssl on;
ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
...

Check the new configuration and restart Nginx:

sudo nginx -t
sudo service nginx restart

Renew the issued certificate automatically; the cron entries below run every Monday at 02:30, reloading Nginx five minutes later:

sudo tee /etc/cron.d/letsencrypt > /dev/null << EOF
30 2 * * 1 root /usr/local/sbin/certbot-auto renew >> /var/log/le-renew.log
35 2 * * 1 root /etc/init.d/nginx reload
EOF

Paramiko example

I have the pleasure of presenting a tip from the past. Today, an oldie: Paramiko. The snippet below reads a file over SFTP and writes its contents back to the same remote path.

import os
import paramiko

hostname = "vps.doc.com"
username = "admin"
password = "password"
port = 22
remotepath = "/tmp/test"

# Connect, trusting only hosts already present in ~/.ssh/known_hosts
ssh = paramiko.SSHClient()
ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts")))
ssh.connect(hostname, port=port, username=username, password=password)
sftp = ssh.open_sftp()

# Read the remote file (pipelined reads improve SFTP throughput)
remote_file = sftp.file(remotepath, "r")
remote_file.set_pipelined(True)
file_lines = remote_file.read()
remote_file.close()

# Write the contents back to the remote path
sftp.open(remotepath, "w").write(file_lines)
sftp.close()
ssh.close()

My custom settings for Zabbix Agents

For some time now, I have been working with ZABBIX systems – server, agent, proxy and so on. I am not going to explain all the fantastic features of this software in this post; I will only enumerate my preferred ZABBIX Agent settings and my best value for each one.

  • Server=zproxy01.mon.mng.amz.coorp.com

    List of comma delimited IP addresses (or hostnames) of ZABBIX servers.
    No spaces allowed. First entry is used for sending active checks.
    Note that hostnames must resolve hostname->IP address and
    IP address->hostname.

  • ServerPort=10051

    Server port for sending active checks

  • Hostname=php611.int.amz.coorp.com

    Unique hostname. Required for active checks.
    This hostname must correspond with the name set on ZABBIX Server for this
    host

  • ListenPort=10050

    Listen port. Default is 10050.

  • ListenIP=8.8.8.8

    IP address to bind agent
    If missing, bind to all available IPs

  • StartAgents=5

    Number of pre-forked instances of zabbix_agentd.
    Default value is 5.
    This parameter must be between 1 and 16

  • RefreshActiveChecks=90

    How often to refresh the list of active checks, in seconds (2 minutes by default).
    This check list is sent by the ZABBIX server/proxy.
    See zabbix.com to learn more about the Zabbix Agent protocol.
    This value can’t be lower than 60 seconds.

  • DisableActive=0

    Disable active checks (left enabled here). When set to 1, the agent works in passive mode, only listening for the server.

  • EnableRemoteCommands=1

    Enable remote commands for the ZABBIX agent. Remote commands are disabled by default.

  • DebugLevel=3

    Specifies debug level:

    • 0 – debug is not created
    • 1 – critical information
    • 2 – error information
    • 3 – warnings
    • 4 – information (default)
    • 5 – for debugging (produces lots of information)
  • PidFile=/var/run/zabbix-agent/zabbix_agentd.pid

    Name of PID file

  • LogFile=/var/log/zabbix-agent/zabbix_agentd.log

    Name of log file.
    If not set, syslog will be used

  • LogFileSize=5

    Maximum size of log file in MB. Set to 0 to disable automatic log rotation.

  • Timeout=15

    Spend no more than Timeout seconds on processing
    Must be between 1 and 30

  • UserParameter=http.basicaction,wget -t 2 -T 10 -q -O - "http://server:80/StatusInfo.php" | grep -e "App:"

    Format: UserParameter=key,shell_command
    Note that the shell command must not return an empty string or EOL only
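As a quick illustration of how the agent evaluates a UserParameter – it runs the shell command and takes its stdout as the item value – the pipeline can be tested by hand. Here the HTTP response is faked with a local string instead of wget, so the sketch is self-contained (server and StatusInfo.php above are placeholders anyway):

```shell
# Simulated StatusInfo.php response; the agent would fetch this via wget
response='App: OK
Load: 0.42'
# The item value is whatever the command prints: here, the "App:" line
printf '%s\n' "$response" | grep -e "App:"
```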


Flushing ARP table entries for one specific IP

In some cases, network elements use caching strategies in order to improve network throughput. In such environments it is common to use subsystems like LVS to balance one service IP between a couple of hosts. Here we can run into problems with the HA setup, because some network switches do not release the old ARP entry for the service IP (due to the caching effect). To force the release of the old ARP entry, we can use arping.

  arping -U -c 1 -I eth0 -s 192.40.0.200 192.40.0.200

Note that two arping implementations with incompatible options exist; the command above uses the iputils syntax, whose usage is:
  Usage: arping [-fqbDUAV] [-c count] [-w timeout] [-I device] [-s source] destination
   -f : quit on first reply
   -q : be quiet
   -b : keep broadcasting, don't go unicast
   -D : duplicate address detection mode
   -U : Unsolicited ARP mode, update your neighbours
   -A : ARP answer mode, update your neighbours
   -V : print version and exit
   -c count : how many packets to send
   -w timeout : how long to wait for a reply
   -I device : which ethernet device to use (eth0)
   -s source : source ip address
   destination : ask for what ip address

For example, we can use this tip in the /etc/network/interfaces configuration file:

auto eth0
iface eth0 inet static
  address 10.240.97.99
  netmask 255.255.255.0
  post-up arping -U -c 1 -I eth0 -s 192.40.0.99 192.40.0.99

The best documentation in the blackboard, … always

IPTables

This diagram presides over my work office. It has been drawn on a little blackboard so that it is easily visible to all members of the team. Many times, looking it up is more useful than a big effort of the mind.

The diagram shows the different tables that IPTables is composed of, the logical flow of IP packets through those tables, and the relation between IPTables and the system’s routing tables.
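From memory, the blackboard flow looks roughly like this (chain traversal order as in the standard netfilter documentation):

```
            incoming packet
                  │
        PREROUTING (raw → mangle → nat)
                  │
          routing decision
            ┌─────┴─────┐
          INPUT       FORWARD
     (mangle→filter) (mangle→filter)
            │             │
      local process       │
            │             │
          OUTPUT          │
  (raw→mangle→nat→filter) │
            └─────┬───────┘
       POSTROUTING (mangle → nat)
                  │
           outgoing packet
```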

ClusterSSH: A GTK parallel SSH tool

For some time now, I have been using an amazing admin tool named ClusterSSH (aka cssh).

With this tool (packages are available at least for Debian-like GNU/Linux distributions), we can interact simultaneously with a whole cluster of servers. This is very useful when you are performing occasional tasks on similar servers (for example, Tomcat cluster nodes) and you want to execute the same instructions on all of them.

cssh

My config file for cssh (~/.csshrc) looks much like the default settings; the notable change is extra_cluster_file:

auto_quit=yes
command=
comms=ssh
console_position=
extra_cluster_file=~/.clusters
history_height=10
history_width=40
key_addhost=Control-Shift-plus
key_clientname=Alt-n
key_history=Alt-h
key_paste=Control-v
key_quit=Control-q
key_retilehosts=Alt-r
max_host_menu_items=30
method=ssh
mouse_paste=Button-2
rsh_args=
screen_reserve_bottom=60
screen_reserve_left=0
screen_reserve_right=0
screen_reserve_top=0
show_history=0
ssh=/usr/bin/ssh
ssh_args= -x -o ConnectTimeout=10
telnet_args=
terminal=/usr/bin/xterm
terminal_allow_send_events=-xrm '*.VT100.allowSendEvents:true'
terminal_args=
# terminal_bg_style=dark
terminal_colorize=1
terminal_decoration_height=10
terminal_decoration_width=8
terminal_font=6x13
terminal_reserve_bottom=0
terminal_reserve_left=5
terminal_reserve_right=0
terminal_reserve_top=5
terminal_size=80x24
terminal_title_opt=-T
title=CSSH
unmap_on_redraw=no
use_hotkeys=yes
window_tiling=yes
window_tiling_direction=right

The ~/.clusters file is where the actual clusters are defined (see the man page):

# home cluster
c-home tor@192.168.1.10 pablo@192.168.1.11

# promox-10.40.140
promox-10.40.140 10.40.140.17 10.40.140.18 10.40.140.19 10.40.140.33

# kvm-10.41.120
kvm-10.41.120 10.41.120.17 10.41.120.18

When I want to work with the c-home cluster, I execute cssh as follows:

# cssh c-home

In addition, I have written a tiny Python script that automates the generation of cluster lines. The script is based on ICMP queries executed in parallel. This is handy when your servers are deployed in a big VLAN or when there are many of them; in these cases we can run the script to discover the servers.

# ./cssh-clusterline-generator.py -L 10 -H 150 -d mot -n 10.40.140 >> ~/.clusters

# mot-10.40.140-range-10-150
mot-10.40.140-range-10-150 10.40.140.17 10.40.140.19 10.40.140.32 10.40.140.37

Finally, … the script:

import os
from threading import Thread
from optparse import OptionParser

class PingThread(Thread):
    def __init__(self, ip):
        Thread.__init__(self)
        self.ip = ip
        self.status = -1

    def run(self):
        # status is 0 when the host answered the ICMP echo request
        self.status = os.system("ping -c 1 %s > /dev/null 2>&1" % self.ip)

threads_ = []
ips = ""

parser = OptionParser()
parser.add_option("-n", "--net", dest="network", default="10.121.55",
                  help="Class C network", metavar="NETWORK")
parser.add_option("-L", "--lowrange", dest="lowrange", default="1",
                  help="Low range", metavar="LOW")
parser.add_option("-H", "--highrange", dest="highrange", default="254",
                  help="High range", metavar="HIGH")
parser.add_option("-d", "--deploy", dest="deploy", default="Net",
                  help="Deploy name", metavar="DEPLOY")
parser.add_option("-v", "--verbose", dest="verbose",
                  default=False, action="store_true",
                  help="Verbose mode")

(options, args) = parser.parse_args()

low_range = int(options.lowrange)
high_range = int(options.highrange)
net = options.network
deploy_id = options.deploy
verbose = options.verbose

# Ping every address of the range in parallel
for i in range(low_range, high_range + 1):
    h = PingThread(net + "." + str(i))
    threads_.append(h)
    h.start()

# Collect the results and build the cluster line
count = 0
for h in threads_:
    h.join()
    if h.status == 0:
        count = count + 1
        if verbose:
            print("Looking for host %s ... FOUND" % h.ip)
        ips += h.ip + " "
    elif verbose:
        print("Looking for host %s ... not found" % h.ip)

if verbose:
    print("Finished. %s hosts found" % count)

print("")
print("# " + deploy_id + "-" + net + "-range-" + str(low_range) + "-" + str(high_range))
print(deploy_id + "-" + net + "-range-" + str(low_range) + "-" + str(high_range) + " " + ips)