May 10, 2012

Michael P. Soulier
But I Digress
» Post-transaction events in Django

So, at work I’m using Django quite a bit, and I ran into a problem where I need the database transaction to be committed, and then I need to trigger additional server-side code that will act on that data.

Putting all of this into the view function with a manually managed transaction sucks, far too much code. There’s transaction middleware, but by then your view function has returned. What to do?

Simple. I added my own middleware, and I tag a new property onto the HttpResponse object in the view. Python is flexible enough to allow this hack.


So in my MslEventMiddleware, I look for a new property in the response, and if it’s present, I execute the requested command, which will happen after the TransactionMiddleware has called commit.

    def process_response(self, request, response):
        if hasattr(response, 'mslevent'):
            self.execute(response.mslevent)  # run the requested command
        return response

Simple enough. Although a real post-processing API in Django would be helpful.
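Pulled out of Django, the whole middleware is only a few lines; this sketch assumes the view tags mslevent on as an argv list, and the subprocess runner is a guess at the execution step:

```python
import subprocess

class MslEventMiddleware(object):
    def process_response(self, request, response):
        # Listed above TransactionMiddleware in MIDDLEWARE_CLASSES,
        # this runs after the commit on the way out of the stack.
        if hasattr(response, 'mslevent'):
            # mslevent is an argv list tagged onto the response by
            # the view; run it now that the data is committed.
            subprocess.call(response.mslevent)
        return response
```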

May 7, 2012

Michael P. Soulier
But I Digress
» Eclipse is still mean to me

So in working on Android programming I’ve been working through O’Reilly’s book on the subject, or one of them at least, and I’m at the stage of including the jtwitter library to talk to twitter. So, I add it as a library in eclipse, and everything builds, and then when the app runs I get a spectacular backtrace with a NoClassDefFoundError exception on the very library that I just included, winterwell.jtwitter.Twitter. Umm, ok. Is it added or not?

Trying a variety of configuration changes, I’m unable to get it working, but I finally came across this post on stack overflow about moving the jar to the top of the classpath order. So under Build Path -> Order and Export, I move the jtwitter.jar to the top of the list.

Voila. It’s working now. Seems rather braindead to me; shouldn’t Java simply search the entire classpath until it finds the requested import? I don’t understand why this works, and that isn’t good, ’cause it’ll just happen again. If someone has an explanation I’d appreciate it.


May 4, 2012

Michael P. Soulier
But I Digress
» Blogging in the mobile age

What a strange world where I am writing a blog post on my phone. Kinda slow going, even using this cool Swype app to speed things up. Mind you, I suck at it…

Perhaps a Bluetooth keyboard will be needed.

Anyone else phone-blogging?

February 16, 2012

Michael P. Soulier
But I Digress
» Bash, sometimes you suck

So, I’m trying to pass a ! character to a shell command. Should be simple right?

Not so much.

twit --post 'OMG...I'm out of stoppage!'
bash: !': event not found

Huh? Oh yeah, in the fine tradition of crappy shells like csh, bash uses ! to re-run commands from the command history. But you’d think that in a single-quoted string it wouldn’t interpret that, it would just pass it into the command I’m running. Nope.

Can I escape it with a backslash? You bet.

$ echo "foobar\!"

K, still not what I wanted.

Simple solution, don’t use bash.

sh twit --post 'OMG...I'm out of stoppage!'

Works fine in a POSIX-mode shell.

Am I missing something? Is there a simpler way to get what I want? This blows the 80/20 rule totally.

July 23, 2010

Michael P. Soulier
But I Digress
» I really love *nix

So I’ve recently been playing with Ditz, a ruby-based distributed issue tracker, to go along with my distributed workflow in Git. It’s a good start, but not quite polished yet. I added the issue-claiming plugin, played with it for a while, and then realized that I don’t need it since I’m the only developer on the projects that I want to use it for.

Then I removed the plugin, but it left behind sections in the ditz yaml files that caused it to now spew warnings.

msoulier@egor:...ier/work/mbg-bugs$ ditz todo
warning: unknown field "claimer" in YAML for,2008-03-06:issue; ignoring
warning: unknown field "claimer" in YAML for,2008-03-06:issue; ignoring
warning: unknown field "claimer" in YAML for,2008-03-06:issue; ignoring
warning: unknown field "claimer" in YAML for,2008-03-06:issue; ignoring

Well that’s unacceptable. So now I need to remove this claimer line from each file. Well, this is *nix so I’m not doing it by hand. I could use a perl one-liner but I’m a tad more familiar with ex commands, editing in Vim all day as I do.

So, I make an exscript file containing this:


And then run it on the files like so

for file in $(find bugs -name "issue*.yaml"); do
    ex - "$file" < exscript
done

Presto. Fixed. So happy.
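The same cleanup is easy in Python too, for anyone less comfortable with ex; a sketch, assuming the leftover lines all start with a claimer field:

```python
import os

def strip_claimer(path):
    """Remove ditz 'claimer' fields left behind by the plugin."""
    with open(path) as f:
        lines = [line for line in f
                 if not line.lstrip().startswith("claimer:")]
    # Write to a temp file and rename so a crash can't truncate the issue.
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.writelines(lines)
    os.rename(tmp, path)
```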

June 11, 2010

Michael P. Soulier
But I Digress
» Twisted Python and Chunked Encoding

When I was first writing a little web service in Twisted Python that would return JSON-encoded data, I had some issues loading it up using Javascript, so I used Wireshark to trace the whole thing, and was surprised at how the response looked.

There were delimiters around the data, and the response headers included a reference to “Chunked Transfer-Encoding”. I had to look it up to find out what it was, and I had no idea how to turn it off so I posted on the Twisted Python mailing list, and got a prompt reply.

Chunked encoding has nothing to do with the content type. It is used if
you do not set a content-length header.

So, figure out your response’s length (in bytes), and set the
content-length header to that.

Aha! So this in my http.Request handler fixed it:

log.debug("sending response")
# Set the content length so that we don't respond with chunked
# encoding.
size = len(content)
log.debug("content length is %d bytes" % size)
self.setHeader('Content-Length', size)

Well, not a fix really as there was no bug, but I wanted to rule out the chunked encoding as the source of a problem that I was seeing.
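For reference, the delimiters themselves are the chunk framing: each chunk is a hex length, CRLF, the data, CRLF, terminated by a zero-length chunk. Roughly:

```python
def chunk_body(body, size=1024):
    """Frame a byte string the way chunked transfer-encoding does."""
    out = b""
    for i in range(0, len(body), size):
        piece = body[i:i + size]
        # Hex length of the chunk, CRLF, the chunk data, CRLF.
        out += b"%x\r\n" % len(piece) + piece + b"\r\n"
    # A zero-length chunk marks the end of the body.
    return out + b"0\r\n\r\n"
```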

» Humour in manpages

I just discovered surfraw in the results of an apt-cache search (love that command) and I had to laugh at the manpage:

       Surfraw provides a fast unix command line interface to a variety
       of popular WWW search engines and other artifacts
       of power.  It reclaims google, altavista, dejanews, freshmeat,
       research index, slashdot and many others  from  the
       false‐prophet,  pox‐infested  heathen  lands  of html‐forms, placing
       these wonders where they belong, deep in unix
       heartland, as god loving extensions to the shell.

I know, I’m a geek, but to me it’s funny.

June 5, 2010

Michael P. Soulier
But I Digress
» Cross-Origin Requests in Twisted

I’ve just been learning about Cross-Origin Resource Sharing, to permit javascript downloaded from one domain to make Ajax requests out to another domain. I started learning this because I was writing a Google Maps client to test some back-end code and it wasn’t working for some reason. Thanks to the help of someone on the Prototype mailing list, and a packet trace, the problem was quickly found.

When I loaded my static page off of the disk, the browser assigned it an origin of null. I was then accessing a service running on my desktop, so its origin was localhost. As the origins differ, when I tried to make an Ajax request my browser automagically made an OPTIONS request to the server, requesting permission.

Let me show an example, captured via tcpdump:

sudo tcpdump -i lo -nn -s0 -w out.pcap tcp port 8000

When I load up this pcap file in wireshark and follow TCP stream, I see:

OPTIONS /route/?start=sta-9998&end=sta-9999&starttime=1274469161 HTTP/1.1
Host: localhost:8000
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv: Gecko/20091206 Gentoo Firefox/3.5.4
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Origin: null
Access-Control-Request-Method: GET
Access-Control-Request-Headers: x-prototype-version,x-requested-with

This is the OPTIONS request to the server, asking if it is permitted for this client to make a cross-origin request to that server. Specifically, it is asking permission to make a GET request from an Origin of “null”. If the server doesn’t respond with the right access-control headers, the browser will not permit the GET request to take place.

I had to modify my server, written in Twisted Python, to respond with:

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET
Access-Control-Allow-Headers: x-prototype-version,x-requested-with
Content-Length: 0
Access-Control-Max-Age: 2520

So here I’m saying, yes, it is permitted from any origin (hence the *) to make a GET request, and the client can cache this permission for 2520 seconds (42 minutes). This won’t be my response when I deploy; I will tightly control the domains that this service permits, and lower the max-age to more like 10 minutes.

Now, this initial response is not enough, be aware. These headers must be supplied in every response, not just the response to the OPTIONS request. So when the GET finally takes place it looks like:

GET /route/?start=sta-9998&end=sta-9999&starttime=1274469161 HTTP/1.1
Host: localhost:8000
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv: Gecko/20091206 Gentoo Firefox/3.5.4
Accept: application/json
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
X-Requested-With: XMLHttpRequest
X-Prototype-Version: 1.6.1
Origin: null

And the server now responds with:

HTTP/1.1 200 OK
Content-Length: 76
Access-Control-Allow-Headers: x-prototype-version,x-requested-with
Access-Control-Max-Age: 2520
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET
Content-Type: application/json

{
    "reason": "No workers ready, try again soon",
    "status": "defer"
}

This is just an example while the server is loading a rather large data set, and cannot respond yet. Note the Access-Control headers in the response, just like the initial OPTIONS response.

Doing this in Twisted is simple enough. Inside of a http.Request handler, you can set response headers with self.setHeader(header_name, header_value), like so:

            self.setHeader('Access-Control-Allow-Origin', '*')
            self.setHeader('Access-Control-Allow-Methods', 'GET')
            self.setHeader('Access-Control-Max-Age', 2520)
            self.setHeader('Content-type', 'application/json')

My next steps are to tighten this granting of access, probably via configuration file, but I’m sure you get the idea.
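That tightened grant might look something like this helper (the whitelist contents here are invented):

```python
def cors_headers(origin, allowed=("http://maps.example.com",)):
    """Return the access-control headers for a request Origin,
    or an empty dict if the origin isn't permitted."""
    if origin not in allowed:
        return {}
    return {
        'Access-Control-Allow-Origin': origin,
        'Access-Control-Allow-Methods': 'GET',
        'Access-Control-Allow-Headers': 'x-prototype-version,x-requested-with',
        'Access-Control-Max-Age': '600',  # ten minutes
    }
```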

December 12, 2009

Michael P. Soulier
But I Digress
» Ruby silliness

Ok, this is just dumb.

msoulier@kanga:~$ gem list torrent --remote


Well that’s wrong, I know there’s a RubyTorrent gem.

msoulier@kanga:~$ gem list tftp --remote


tftpplus (0.4)

It finds my tftp library just fine with a substring.

msoulier@kanga:~$ gem list RubyTorrent --remote


rubytorrent (0.3)

So why do I have to be so specific?

I shouldn’t need a web interface to find code in a repository people! Learn from apt-cache.

November 20, 2009

Michael P. Soulier
But I Digress
» Java time capsule

I’ve been involved in some discussions regarding Java recently, and I’ve repeatedly said that I mostly find it a solution that is still looking for a problem.

Looking back at this post by Paul Graham on “Java’s Cover” I find it interesting how many of his points still ring true, 8 years later.

My favorite quote:

It could be that in Java’s case I’m mistaken. It could be that a language promoted by one big company to undermine another, designed by a committee for a “mainstream” audience, hyped to the skies, and beloved of the DoD, happens nonetheless to be a clean, beautiful, powerful language that I would love programming in. It could be, but it seems very unlikely.

My problem with it is simple, and it’s why I dislike ClearCase, and many other technologies; it makes easy things hard. I’m busy. I’d use it if forced to, and then I’d try desperately to like it. Until then, I have better things to do.

November 18, 2009

Michael P. Soulier
But I Digress
» Java sucks

Well yes, we know this already from the way that it makes easy things hard, and hard things nearly impossible, but it’s rarely been captured with the eloquence that I find in these wonderful quotes on the topic.

And yet, like cockroaches to humans, this technology/language/marketing campaign continues unabated, until having java on one’s resume is a requirement to find a job through some ignorant HR department that has no idea what it is. Like the job posting I saw a few years ago for a java programmer with 10 years’ experience, when java was only 7 years old. Good luck with that.

My favourite quote:

If Java had true garbage collection, most programs would delete themselves upon execution. — Robert Sewell

My boss asks me occasionally why I don’t use Java, and I tell him that I have many tools in my toolbox, some good and some bad that I bought on impulse due to good marketing or simply because they were new. Java is like my trendy flip-grip pliers from Crappy Tire that try to be both clippers and needle-nose pliers, but suck at both jobs. I don’t hate the tool, but I certainly don’t reach for it often, and I’m thinking of throwing it out.

August 2, 2009

Michael P. Soulier
But I Digress
» Dijkstra’s Shunting-Yard Algorithm in Python

For work purposes I had the need to implement a query parser for a simple query grammar on a product that I work on. I wanted the query to be provided in infix notation, something like

foo and not bar

To compute the answer, the simplest thing to do seemed to be to convert it to postfix notation, also known as reverse Polish notation. The simplest algorithm for this that I could find was Dijkstra’s “Shunting-Yard Algorithm”.

The basics were simple enough to implement in Python when all I care about is simple text tokens, parens and the AND, OR and NOT operators.

import shlex

class ParseError(Exception):
    pass

class Infix2Postfix(object):
    """This class implements a parser for a query in infix
    notation, and it uses the Shunting-yard algorithm to
    parse and reorder the query into postfix notation for
    evaluation."""
    # 'not' binds more tightly than 'and' and 'or'.
    precedence = {'not': 2, 'and': 1, 'or': 1}

    def __init__(self):
        self.stack = []
        self.tokens = []
        self.postfix = []

    def tokenize(self, input):
        self.tokens = shlex.split(input)
        # Split parens off of the tokens they're attached to.
        newtokens = []
        for token in self.tokens:
            while token.startswith("("):
                newtokens.append("(")
                token = token[1:]
            rightparens = 0
            while token.endswith(")"):
                rightparens += 1
                token = token[:-1]
            if token:
                newtokens.append(token)
            newtokens.extend([")"] * rightparens)
        self.tokens = newtokens

    def is_operator(self, token):
        return token in ("and", "or", "not")

    def manage_precedence(self, token):
        # 'not' is unary and right-associative, so it pops nothing;
        # 'and' and 'or' first pop any operators of equal or higher
        # precedence to the output.
        if token != 'not':
            while (len(self.stack) > 0
                    and self.is_operator(self.stack[-1])
                    and self.precedence[self.stack[-1]] >= self.precedence[token]):
                self.postfix.append(self.stack.pop())
        self.stack.append(token)

    def right_paren(self):
        # Pop operators to the output until the matching left paren.
        found_left = False
        while len(self.stack) > 0:
            top_op = self.stack.pop()
            if top_op == "(":
                found_left = True
                break
            self.postfix.append(top_op)
        if not found_left:
            raise ParseError("Parse error: Mismatched parens")

    def parse(self, input):
        self.tokenize(input)
        for token in self.tokens:
            if token == "(":
                self.stack.append(token)
            elif token == ")":
                self.right_paren()
            elif self.is_operator(token):
                self.manage_precedence(token)
            else:
                self.postfix.append(token)
        while len(self.stack) > 0:
            operator = self.stack.pop()
            if operator == "(" or operator == ")":
                raise ParseError("Parse error: mismatched parens")
            self.postfix.append(operator)
        return self.postfix

So using it is simple enough…

parser = Infix2Postfix()
pf = parser.parse("(foo and not bar) or bash")

There’s a lot more to the final solution, including a calculator class that processes the postfix as a list of token objects with some Django knowledge, but that’s the basic idea. It works pretty well, and it should be simple enough to add new operators over time if I need to.
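The calculator isn’t shown here, but evaluating the postfix form is just a stack walk; a minimal boolean sketch, treating each plain token as a set-membership test:

```python
def eval_postfix(postfix, true_tokens):
    """Evaluate a postfix boolean query; a plain token is true if
    it appears in the true_tokens set."""
    stack = []
    for token in postfix:
        if token == "not":
            stack.append(not stack.pop())
        elif token in ("and", "or"):
            b, a = stack.pop(), stack.pop()
            stack.append((a and b) if token == "and" else (a or b))
        else:
            stack.append(token in true_tokens)
    return stack.pop()
```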

May 24, 2009

Michael P. Soulier
But I Digress
» WordPress updated

I just upgraded this blog from a very old version of WordPress to the latest, and I must say that I’m very impressed with how simple the upgrade was. All software should be at least this simple to upgrade.

Nicely done people.

April 13, 2009

Michael P. Soulier
But I Digress
» Big Gentoo upgrade today

Checking emerge for available updates for my Gentoo workstation, I was surprised to see a big jump in many packages. The reason was Gentoo pushing Xorg 1.5 as the new stable version as opposed to 1.3.

I know it works, as I’m already running it on my laptop in Ubuntu 8.10, but there it’s configured to use evdev and HAL, and I have HAL disabled right now in Gentoo to try to keep things light and fast, so I wasn’t sure what I’d run into, even after reading the upgrade guide. Nice that a --pretend emerge run pointed me at the news. I like prompts like that, they’re very helpful to me.

It took hours on my little AMD Athlon, and when I restarted X I hit this problem, which is technically my fault for not reading the notes at the end of the upgrade and rebuilding my mouse support. Thankfully someone posted to the gentoo mailing list about it, and I captured his note before restarting X, so all is well now.

An update in Qt seems to have broken qbittorrent, and the next version isn’t stable yet, so I’ve switched to rtorrent for now. Some change is good as long as my whole box isn’t useless to me for long periods of time.

I’m still debating dumping Gentoo as too much work, but it’s forcing me to keep up-to-date with some changes in the community, so it’s not really a bad thing. We’ll see. Running bleeding edge is hard with older hardware that suddenly finds itself unsupported. Which is funny since older hardware is one of the best reasons to run Gentoo and keep the builds lighter than the prebuilt binaries from most distros.

Maybe I should bite the bullet and just build with HAL support. Obviously HAL isn’t going away.

April 9, 2009

Michael P. Soulier
But I Digress
» Tftpy state machine overhaul

I just posted this news item to SourceForge.

I’ve decided that the state handling in tftpy is too difficult to maintain, and I’m ditching it. I’ve started that work in a private branch in Git.

First though, I’m going to merge all of the contributed patches into an experimental branch and push that to github. I’ll then rebase my state-machine branch on that and keep going.

It’s a big rewrite, so expect breakage in the short term. Contributed unit tests are welcome, I really need to flesh those out.


A merging I will go, a merging I will go…

April 8, 2009

Michael P. Soulier
But I Digress
» More building rpms from Git

A fellow #oclug friend Michael Richardson (mcr) responded to my post on building rpms from Git in his own blog.

I find the need to wrap up content into a tar.gz so that I can build it really dumb. It wasn’t always like this… with the DEBIAN version of the rpm command, I can actually just do:

rpmbuild -vv --define="_topdir $(RPMTOPDIR)" --define="Version $(VERSION)" -bb pt-vnc-connector.spec

and produce an RPM directly from my source directory… This no longer works with RHEL4/FC8/Centos5 versions of RPM, which I find funny.

No, that should still work if you build out of a proper rpm repository. I see you’re setting topdir, so presumably you’ve got an rpms/{SPECS,SOURCES,SRPMS,RPMS,BUILD} directory structure there. I build out of $HOME/rpms at work all the time. The -ta trick is just handy for building out of source control systems if you can dump a tarball, which you’ve just made easier for me:

git-archive --format=tar --prefix=$fullname/ . | gzip >$HOME/rpms/SOURCES/${fullname}.tar.gz

Mental note. Learn more Git plumbing commands.

We’re working on CentOS 4 and 5 at work, and I just do

cd $HOME/rpms/SPECS && rpmbuild -ba <my-specfile>.spec

I also predefine my _topdir.

soulierm@espresso:~$ cat $HOME/.rpmmacros
%packager       Michael P. Soulier
%distribution   Mitel Networks
%vendor         Mitel Networks
%_topdir        /home/speech/soulierm/rpms

April 7, 2009

Michael P. Soulier
But I Digress
» Querying the db schema from SQLite

I’ve been trying to put together a migration strategy for SQLite that is not simply a bunch of versioned SQL fragments that are extremely difficult to backport.

The information is there but it’s not obvious. You can fetch the list of tables in an SQLite database with this little snippet.

    SELECT name
    FROM sqlite_master
    WHERE type='table'
    AND NOT name='sqlite_sequence'
    ORDER BY name;

From there, you can simply loop on the table names and pull out all of the table columns and their types via

PRAGMA table_info($table_name)

I have some rudimentary code now in Perl that queries this and builds up a multi-level hash of all of the tables, their columns and metadata. Pass that to each migration fragment and we don’t need a schema version anymore. Each migration fragment has enough information to conditionally do migration.
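Here’s the same lookup from Python’s sqlite3 module, for comparison (the clients table is invented):

```python
import sqlite3

def schema_info(conn):
    """Map each table to a dict of column name -> declared type."""
    cursor = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' "
        "AND NOT name='sqlite_sequence' ORDER BY name")
    info = {}
    for (table,) in cursor.fetchall():
        info[table] = dict(
            (row[1], row[2])  # rows are (cid, name, type, notnull, dflt, pk)
            for row in conn.execute("PRAGMA table_info(%s)" % table))
    return info

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name VARCHAR(64))")
```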

At least, I think it’ll work.

April 2, 2009

Michael P. Soulier
But I Digress
» Building rpms out of Git

Having gotten heavily into using Git lately for a lot of my work, when I’m prototyping something and I want to make an rpm out of it, I find this script, kept at the root of my working files, quite helpful.

msoulier@kostya:...itel-msl-webproxy$ cat git2rpm
#!/bin/sh
# specfile, fullname and dest definitions are assumed here; they
# match the usage described below.
specfile=mitel-msl-webproxy
version=$(grep Version ${specfile}.spec \
                 | head -1 | awk -F: '{print $2}' \
                 | cut -b2-)
fullname=${specfile}-${version}
dest=$HOME/rpms/SOURCES/${fullname}.tar.gz
git clone . /var/tmp/$fullname || exit 1
rm -rf /var/tmp/$fullname/.git*
rm -f $dest
tar -C /var/tmp -zcvf $dest $fullname && rm -rf /var/tmp/$fullname
rpmbuild -ta $dest

Here I’m building an rpm called mitel-msl-webproxy. All I do is clone the local repository to /var/tmp/$fullname, build a tarball out of that into my $HOME/rpms/SOURCES directory, and then run rpmbuild -ta on it, which works on tarballs that have specfiles at their root.

Dead simple. It should go without saying that even building rpms out of Git is a lot faster than building them out of ClearCase.

March 29, 2009

Michael P. Soulier
But I Digress
» Where migration frameworks fail

At work I am using Django for a web-based management interface, using PostgreSQL as the database back-end.

Django does not yet come with a migration framework to evolve the database schema, so I wrote a really simple one, based on Rails migrations.

The system is simple. You put fragments of SQL code in separate text files at a predetermined location on disk, and prefix each one of them with a number, representing the version of the database schema. Your migration code also maintains a schema table in the database to keep track of the current versions, and whether the last migration attempt was successful.

So, on upgrade, if the schema version is 30 and the fragments on disk go up to 43, then you run 31-43, wrapping each in a transaction, and rolling back at the first failure. So, if 41 fails, you’ll get as far as version 40, store an error, and you’re done.

Sounds ok, right? Well, if this is for a single database instance, sure. Unfortunately for this scheme, the product that I work on has roughly 6000 instances in the field in servers all over the world, with less than half running the most recent release. So, stream management becomes an issue. And stream management is something that I find that most of these modern frameworks overlook.

What if a bugfix that I just made on the HEAD has an associated schema change, and I want to backport that fix to the previous maintenance release? If the last schema version of the previous release is, say, 16, and I just added migration fragment 62, then we have a problem. And this is it.

Every migration fragment is dependent on the success of the previous one.

So, I can’t just backport fragment 62, I’d have to backport 17 - 62. Yikes.

The solution is actually simple, and it’s something that the SMEServer’s native databases do already. Each migration fragment is not raw SQL, it’s code, in this case Perl, but it could be anything. So, instead of blindly executing the migration fragments in order, you blindly execute the migration code fragments and let each and every one determine whether they need to do their particular job.

Need to make a varchar(512) a varchar(1024)? No problem, just check its size now, and if it’s 1024 then you don’t need to do anything. Now each fragment doesn’t depend on the one that came before it, and you can safely backport only what you wanted to backport.
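A fragment in that style might look like this sketch; the db handle and the shape of the schema dict are illustrative, not from any real framework:

```python
def migrate(db, schema):
    """Widen clients.name to varchar(1024), but only if it still
    needs it; safe to run any number of times."""
    name_col = schema["clients"]["name"]
    if name_col["character_maximum_length"] < 1024:
        db.execute(
            "ALTER TABLE clients ALTER COLUMN name TYPE varchar(1024)")
```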

So how do we know what the database, in this case a relational database like PostgreSQL, looks like now? As it turns out, the standard does have some support for that, and it’s in the information_schema.

This would fetch all of the column names from a table called “clients”:

select column_name from information_schema.columns
     where table_name = 'clients';

And this would determine the current length of the character field in that table called “name”:

select character_maximum_length
    from information_schema.columns
    where table_name = 'clients'
    and column_name = 'name';

So, an ideal migration framework would provide this information to the migration fragments, to keep their job simple. Migration isn’t done constantly so we’re not that concerned with performance. Keep the code simple and bug-free.

Anyone feel like writing it? I suspect I may have to.

December 8, 2008

Michael P. Soulier
But I Digress
» Java dependency hell

So, while I’m not Java’s biggest fan, I went ahead and built the sun-jdk package on my Gentoo system so I could have java applet support in Firefox, and occasionally write a little code in it (rare, but it happens at times).

It built quickly enough, with only 3 dependencies. So, I follow that up with a check of what it will take to build Ant, Sun’s horrible XML-based answer to Make (if you are being encouraged to hand-write XML, you’re using the wrong solution).

And an emerge -p dev-java/ant run turns up this!

Calculating dependencies... done!
[ebuild  N    ] virtual/jdk-1.6.0  0 kB
[ebuild  N    ] dev-python/pyxml-0.8.4-r1  USE="-doc -examples" 718 kB
[ebuild  N    ] virtual/jre-1.6.0  0 kB
[ebuild  N    ] dev-java/javatoolkit-0.3.0-r2  17 kB
[ebuild  N    ] dev-java/antlr-2.7.7  USE="cxx python -debug -doc -examples -java -mono -script -source" 1,774 kB
[ebuild  N    ] dev-java/ant-core-1.7.0-r1  USE="-doc -source" 6,683 kB
[ebuild  N    ] dev-java/libreadline-java-0.8.0-r2  USE="-doc -source" 76 kB
[ebuild  N    ] dev-java/javacup-0.10k-r1  USE="-source" 187 kB
[ebuild  N    ] dev-java/jakarta-oro-2.0.8-r2  USE="-doc -examples -source" 338 kB
[ebuild  N    ] dev-java/ant-nodeps-1.7.0  0 kB
[ebuild  N    ] dev-java/xml-commons-external-1.3.04  USE="-doc -source" 645 kB
[ebuild  N    ] dev-java/xml-commons-resolver-1.2  USE="-doc -source" 257 kB
[ebuild  N    ] dev-java/bcel-5.2  USE="-doc -source" 256 kB
[ebuild  N    ] dev-java/sun-jaf-1.1.1  USE="-doc -source" 123 kB
[ebuild  N    ] dev-java/commons-logging-1.1.1  USE="-avalon-framework -avalon-logkit -doc -log4j -servletapi -source -test" 187 kB
[ebuild  N    ] dev-java/ant-swing-1.7.0  0 kB
[ebuild  N    ] dev-java/jzlib-1.0.7-r1  USE="-doc -source" 50 kB
[ebuild  N    ] dev-java/junit-3.8.2-r1  USE="-doc -source" 451 kB
[ebuild  N    ] dev-java/ant-antlr-1.7.0  0 kB
[ebuild  N    ] dev-java/log4j-1.2.15-r1  USE="-doc -javamail -jms -jmx -source" 2,051 kB
[ebuild  N    ] dev-java/jakarta-regexp-1.4-r1  USE="-doc -source" 135 kB
[ebuild  N    ] dev-java/xjavac-20041208-r5  2 kB
[ebuild  N    ] dev-java/jdepend-2.9-r4  USE="-doc -source" 296 kB
[ebuild  N    ] dev-java/ant-junit-1.7.0  0 kB
[ebuild  N    ] dev-java/xalan-serializer-2.7.1  USE="-doc -source" 6,138 kB
[ebuild  N    ] dev-java/commons-net-1.4.1-r1  USE="-doc -examples -source" 224 kB
[ebuild  N    ] dev-java/ant-apache-resolver-1.7.0  0 kB
[ebuild  N    ] dev-java/jsch-0.1.37-r1  USE="zlib -doc -examples -source" 263 kB
[ebuild  N    ] dev-java/ant-apache-bcel-1.7.0  0 kB
[ebuild  N    ] dev-java/sun-javamail-1.4.1  USE="-doc -source" 387 kB
[ebuild  N    ] dev-java/ant-apache-oro-1.7.0  0 kB
[ebuild  N    ] dev-java/ant-apache-log4j-1.7.0  0 kB
[ebuild  N    ] dev-java/ant-apache-regexp-1.7.0  0 kB
[ebuild  N    ] dev-java/jython-2.1-r11  USE="readline -doc -source" 1,272 kB
[ebuild  N    ] dev-java/ant-commons-logging-1.7.0  0 kB
[ebuild  N    ] dev-java/ant-jdepend-1.7.0  0 kB
[ebuild  N    ] dev-java/ant-commons-net-1.7.0  0 kB
[ebuild  N    ] dev-java/ant-jsch-1.7.0-r1  0 kB
[ebuild  N    ] dev-java/ant-javamail-1.7.0  0 kB
[ebuild  N    ] dev-java/xerces-2.9.1  USE="-doc -examples -source" 1,672 kB
[ebuild  N    ] dev-java/xalan-2.7.1  USE="-doc -source" 0 kB
[ebuild  N    ] dev-java/bsf-2.4.0-r1  USE="python -doc -examples -javascript -source -tcl" 293 kB
[ebuild  N    ] dev-java/ant-trax-1.7.0  0 kB
[ebuild  N    ] dev-java/ant-apache-bsf-1.7.0-r1  0 kB
[ebuild  N    ] dev-java/ant-tasks-1.7.0-r4  USE="X antlr bcel bsf commonslogging commonsnet javamail jdepend jsch log4j oro regexp resolver -jai -jmf" 0 kB
[ebuild  N    ] dev-java/ant-1.7.0  0 kB

45 dependent packages! OMG…I’ll just grab the Ant tarball from upstream and be done with it, thanks. Better yet, I’ll just use Make.

Java - 1
Simplicity - 0