Defining function args as a tuple of arguments in Python


Python offers a way to collect function arguments into a tuple. The syntax is similar to C's varargs: we use *args in the function signature to refer to the tuple of arguments passed in the function invocation.

def test_args(*args):
    for arg in args:
        print("another arg:", arg)

test_args(1, "two", 3)


another arg: 1
another arg: two
another arg: 3
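As a further sketch, *args can be combined with regular positional parameters (greet is a hypothetical example function, not from the original post):

```python
def greet(greeting, *names):
    # 'names' collects any extra positional arguments into a tuple
    return ["%s, %s!" % (greeting, n) for n in names]

print(greet("Hello", "Ana", "Luis"))  # ['Hello, Ana!', 'Hello, Luis!']
```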

Using *args when calling a function

This special syntax can be used not only in function definitions but also when calling a function.

def test_args_call(arg1, arg2, arg3):
    print("arg1:", arg1)
    print("arg2:", arg2)
    print("arg3:", arg3)

args = ("two", 3)
test_args_call(1, *args)


arg1: 1
arg2: two
arg3: 3
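Note that the * unpacking works with any iterable, not just tuples; a minimal sketch (add is a hypothetical function):

```python
def add(a, b, c):
    return a + b + c

# range(1, 4) yields 1, 2, 3, which are unpacked into a, b, c
print(add(*range(1, 4)))  # 6
```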

It's not the same …

Often we overlook small but relevant differences in the code we review. For example, the following two classes are apparently equivalent:

  class A:
      l = []

  class B:
      def __init__(self):
          self.l = []


But these two classes actually differ in their behavior:

  >>> a = A()
  >>> a.l.append(1)
  >>> a2 = A()
  >>> a2.l.append(2)
  >>> print(a.l)
  [1, 2]
  >>> b = B()
  >>> b.l.append(1)
  >>> b2 = B()
  >>> b2.l.append(2)
  >>> print(b.l)
  [1]

In class A, because the l var is defined in the class body, it is shared between all instantiated A objects; in class B, each instance gets its own l in __init__.
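A quick way to check the sharing is to compare object identities (a minimal sketch of the idea above):

```python
class A:
    l = []  # class attribute: a single list object shared by every instance

class B:
    def __init__(self):
        self.l = []  # instance attribute: a fresh list per object

a1, a2 = A(), A()
a1.l.append(1)

b1, b2 = B(), B()
b1.l.append(1)

print(a1.l is a2.l)  # True: both names refer to the very same list
print(a2.l)          # [1]
print(b2.l)          # []
```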

Chardet: encoding autodetection for Python

Last week I have been fighting against the hordes of character encodings. My new task in the job-tasks-pool is to develop a friendly web app to manage configuration files of cluster apps. Oh, great idea! Why didn't I think of it before? (irony). I only need four things: one parser for each conffile syntax, a stable and secure way to get/save remote files, a way to work with an uncontrolled mix of different character encodings, … and a fancy GUI. As you can guess, the task is not just a small app, but this post only refers to a very useful Python lib (chardet) discovered a few days ago. This lib can auto-detect the encoding of a file very reliably. I suggest you visit the chardet homesite to see some clear examples.

Using it in a couple of lines of code:

import chardet

# chardet works on raw bytes, so open the file in binary mode
with open('channels_info', 'rb') as f:
    raw =

# For example: encoding == 'ISO-8859-15'
encoding = chardet.detect(raw)['encoding']
text = raw.decode(encoding, errors='replace')
print(text)

Object factory using the Python import mechanism

Today, something related to Python programming … These snippets were rescued from a forgotten repository on my old Inspiron 6000.

To put it plainly, this post is about an app (its purpose is not relevant) whose internal handler classes can be set at configuration time. This idea is handy when you want to minimize hard-wired hookups in your app. For example, it is an easy way for a third-party developer, in a hypothetical future, to extend your app with a new handler: that developer only has to build the handler as an external module and put it on the Python classpath.

As a first step, I build my factory class. These factory objects create DataSource objects:

class DataSourceFactory(object):

    def create(self, handlerClassname, object_uid=None, key_value=None,
               handler_options=None):
        modulename, classname = handlerClassname.rsplit('.', 1)
        module = __import__(modulename, {}, {}, [classname])
        handler_class = getattr(module, classname)
        ds_handler = handler_class()
        for k, v in (handler_options or {}).items():
            setattr(ds_handler, k, eval(v))

        ds = DataSource(ds_handler)
        ds.key_value = key_value
        ds.object_uid = object_uid

        return ds

Note two things in the previous code:

  • The create function receives a string with the class name of the handler
  • I use getattr and __import__ (object reflection) to instantiate the handler class named by that parameter

The class name, in my app, is set in the app configuration file. This file is a standard Python config file:
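The original config listing was lost; a minimal INI sketch consistent with the loader code (the section and key names 'origin' and 'opt:origin_handler' come from that code, while the option names and values are hypothetical) could look like:

```ini
[origin]
datasource_handler = 'syncldap.datasources.LdapDataSourceHandler'

[opt:origin_handler]
; values are eval()'d by the factory, so they are Python literals
uri = "'ldap://localhost:389'"
base_dn = "'dc=example,dc=com'"
```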


These confs are loaded into the app using RawConfigParser:

def create_Synchronizer(self, config_filename):
    # RawConfigParser does not interpolate attribute values
    cfg = ConfigParser.RawConfigParser()
    cfg.read(config_filename)

    # DataSourceFactory
    data_source_factory = datasources.DataSourceFactory()
    # Load the class name of the origin handler
    origin_data_source_handler_classname = \
        eval(cfg.get('origin', 'datasource_handler'))
    # For example: 'syncldap.datasources.LdapDataSourceHandler'

    # Load origin options (the 'key_value' and 'object_uid' key names
    # are assumed; the original listing was truncated here)
    origin_handler_options = dict(cfg.items('opt:origin_handler'))
    origin_key_value = eval(cfg.get('origin', 'key_value'))
    origin_object_uid = eval(cfg.get('origin', 'object_uid'))

    # Creating the origin source
    origin_source = \
        data_source_factory.create(origin_data_source_handler_classname,
                                   origin_object_uid, origin_key_value,
                                   origin_handler_options)
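The reflection trick above can be sketched in modern Python with importlib (a minimal sketch; create_handler is a hypothetical name, and here a stdlib class stands in for a real handler):

```python
import importlib

def create_handler(handler_classname):
    # Split 'package.module.ClassName' into module path and class name
    modulename, classname = handler_classname.rsplit('.', 1)
    module = importlib.import_module(modulename)
    return getattr(module, classname)()

# Instantiate a stdlib class through the same reflection mechanism
d = create_handler('collections.OrderedDict')
print(type(d).__name__)  # OrderedDict
```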

ClusterSSH: A GTK parallel SSH tool

For some time now, I have been using an amazing admin tool named clusterSSH (aka cssh).

With this tool (packages are available at least for Debian-like GNU/Linux distributions), we can interact simultaneously with a cluster of servers. This is very useful when you are performing occasional tasks on similar servers (for example, Tomcat cluster nodes) and you want to execute the same instructions on all of them.


My config file for cssh (~/.csshrc) looks much like the default settings:

extra_cluster_file=~/.clusters
ssh_args= -x -o ConnectTimeout=10
terminal_allow_send_events=-xrm '*.VT100.allowSendEvents:true'
# terminal_bg_style=dark

The ~/.clusters file is the one that defines the actual clusters (see the man page):

# home cluster
c-home tor@ pablo@

# promox-10.40.140

# kvm-10.41.120

When I want to work with the c-home cluster, I execute cssh as follows:

# cssh c-home

In addition, I have written a tiny Python script that automates the generation of the cluster lines. This script is based on ICMP queries executed in parallel. That is handy when your servers are deployed in a big VLAN or there are many of them. In these cases, we can run the script to find the servers.

# ./ -L 200 -H 250 -d mot -n 10.40.140 >> ~/.clusters

# mot-10.40.140-range-10-150

Finally, … the script:

import os
from threading import Thread
from optparse import OptionParser

class Thread_(Thread):
    def __init__(self, ip):
        Thread.__init__(self)
        self.ip = ip
        self.status = -1

    def run(self):
        # One ICMP echo request per host; status 0 means the host answered
        res = os.system("ping -c 1 %s > /dev/null" % self.ip)
        self.status = res

ips = ""

parser = OptionParser()
parser.add_option("-n", "--net", dest="network", default="10.121.55",
                  help="Class C Network", metavar="NETWORK")
parser.add_option("-L", "--lowrange", dest="lowrange", default="1",
                  help="Low range", metavar="LOW")
parser.add_option("-H", "--highrange", dest="highrange", default="254",
                  help="High range", metavar="HIGH")
parser.add_option("-d", "--deploy", dest="deploy", default="Net",
                  help="Deploy name", metavar="DEPLOY")
parser.add_option("-v", "--verbose", dest="verbose",
                  default=False, action="store_true",
                  help="Verbose mode")

(options, args) = parser.parse_args()

net = options.network
verbose = options.verbose
low_range = int(options.lowrange)
high_range = int(options.highrange)
deploy_id = options.deploy

threads_ = []
for i in range(low_range, high_range + 1):
    ip = net + "." + str(i)
    h = Thread_(ip)
    h.start()
    threads_.append(h)

count = 0
for h in threads_:
    h.join()
    res_str = "not found"
    if h.status == 0:
        count = count + 1
        res_str = "FOUND"
        ips += h.ip + " "
    if verbose:
        print("Looking for host %s ... %s" % (h.ip, res_str))

if verbose:
    print("Finished. %s hosts found" % count)

print("")
print("# " + deploy_id + "-" + net + "-range-" + str(low_range) + "-" + str(high_range))
line = deploy_id + "-" + net + "-range-" + str(low_range) + "-" + str(high_range) + " " + ips
print(line)

Working with Git

One of my pending goals is to try working with git (SCM). Normally, when I am developing, I work with SVN or BZR. For some time I have used git only as a way to get source tarballs and nothing else, but now I think I can try git on some minor project to form an opinion from real experience.
In the next lines, I am going to write some little notes about using git with GitHub.
I think these notes are useful primarily for me, but they can be useful for other git beginners too.

Global setup:

Download and install Git
git config --global "Pablo Saavedra"
git config --global
Add your public key at GitHub

Next steps, creating a new repository:

mkdir awstats-update
cd awstats-update
git init
touch README
git add README
git commit -m 'first commit'
git remote add origin
git push origin master

Another way, for an existing Git repo:

cd existing_git_repo
git remote add origin
git push origin master

The last way, importing from Subversion:

git-svn can be used to import as well. Note that there may be issues if you have branches or tags (they won’t be imported over). If you only have a trunk, like many svn repositories, this method should work for you without issue.
First, be sure to create your repository on GitHub
$ git svn clone -s SVN_REPO_URL LOCAL_DIR
$ git remote add origin
$ git push origin master
Note that the -s switch implies that your svn repo is set up with the standard branches/tags/trunk structure.
git-svn adds a note to every commit message it copies over from svn. To get rid of that note, add --no-metadata to the command.
You can pull in later commits by running git-svn rebase from within your repo. If you ever lose your local copy, just run the import again with the same settings and you’ll get another working directory with all the necessary git-svn settings saved.



Finally, I have managed to push my first git repository. This repo is a set of silly (but useful) tools: awstats-update.

Basically, these tools are a couple of scripts that can be executed as cronjobs. Periodically, they check whether new sites have been enabled in the Apache/Nginx configuration and build the corresponding awstats configuration file for them.

The repository can be accessed from: