December 21, 2012

Michael P. Soulier
But I Digress
» Signatures in OAuth

Hi again. I already went into the basics of OAuth in a previous post, and alluded to the signatures being the hard part of the implementation. I'm using a python-oauth module for the client, but for the server, I decided to implement my own in Perl and Mojolicious, since I had problems with Net::OAuth that I deemed a protocol violation. Besides, I learn more this way.

There are three signature methods supported in OAuth 1.0:

  1. PLAINTEXT
  2. HMAC-SHA1
  3. RSA-SHA1

Unsurprisingly, PLAINTEXT is the simplest. In this case, the signature is a simple combination of the consumer secret and the token secret, if there is any yet, with a slight twist. To ensure that the characters are treated properly, they must first be "normalized". This means decoding them as you would any percent-encoded string, and then...

  • convert them to UTF-8

  • escape them with percent-encoding from RFC 3986 like so

    • these characters are not encoded: alphanumerics, "-", ".", "_", "~"
    • all other characters must be encoded
    • any hexadecimal used in encoding must be upper-case

I found that hard to read, but I did find this bit of Perl to solve the problem in Net::OAuth, which I converted to a Mojolicious helper.

use URI::Escape qw(uri_escape_utf8 uri_unescape);

helper encode => sub {
    my $self = shift;
    my $str = shift;
    $str = "" unless defined $str;
    if ($str =~ /[\x80-\xFF]/ and !utf8::is_utf8($str)) {
        warn("your OAuth message appears to contain some "
            . "multi-byte characters that need to be decoded "
            . "via Encode.pm or a PerlIO layer first. "
            . "This may result in an incorrect signature.");
    }
    return uri_escape_utf8($str, '^\w.~-');
};

Decoding is even simpler.

helper decode => sub {
    my $self = shift;
    my $str = shift;
    return uri_unescape($str);
};

In Python, the same code looks like so.

import urllib

def escape(s):
    """Escape a URL including any /."""
    return urllib.quote(s, safe='~')

def _utf8_str(s):
    """Convert unicode to utf-8."""
    if isinstance(s, unicode):
        return s.encode("utf-8")
    else:
        return str(s)

escaped = escape(_utf8_str(s))

# Unquoting is just
unquoted = urllib.unquote(s)

So, this needs to be done to each of the parameters, in this case the consumer secret and the token secret, if any, and then they are combined with an ampersand. So, if your consumer secret is, say, "myconsumersecret", and you don't have a token yet, then the initial PLAINTEXT signature is myconsumersecret&.
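The same idea as a small Python 3 sketch (urllib.parse replaces the Python 2 urllib shown above; the function names here are mine):

```python
from urllib.parse import quote


def normalize(s):
    """Percent-encode per RFC 3986: only alphanumerics, '-', '.', '_', '~' pass through."""
    return quote(s, safe="~")


def plaintext_signature(consumer_secret, token_secret=""):
    # The PLAINTEXT signature is the two normalized secrets joined by an ampersand.
    return normalize(consumer_secret) + "&" + normalize(token_secret)
```

With no token yet, `plaintext_signature("myconsumersecret")` yields `myconsumersecret&`.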

Now, this isn't too bad, but once you get into HMAC-SHA1 signatures, it gets a lot worse. The signature from the PLAINTEXT method becomes the key for the signature, and you'll already have the code for that now, but the raw input to the HMAC-SHA1 algorithm is a base string that is rather difficult to construct. The input is the HTTP method and the request URI, both normalized like above and concatenated with an ampersand. Then, this is concatenated with an ampersand to all of the input parameters in the request, constructed in a particular way.

  1. Take all input parameter names and values from all sources, and normalize them like above. (but skip the oauth_signature parameter)
  2. Sort all parameters by the normalized parameter name.
  3. Pair the names and values, concatenated with an "=".
  4. Concatenate all pairs with ampersands in the sorted order.
  5. Escape the entire string using the method above.
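The five steps above can be sketched in Python 3 (assuming `params` is a dict of request parameters that already excludes oauth_signature; function names are mine):

```python
from urllib.parse import quote


def normalize(s):
    """Percent-encode per RFC 3986: alphanumerics, '-', '.', '_', '~' untouched."""
    return quote(str(s), safe="~")


def signature_base_string(method, uri, params):
    # 1-2: normalize names and values, then sort by the normalized name
    pairs = sorted((normalize(k), normalize(v)) for k, v in params.items())
    # 3-4: pair names and values with "=", join the pairs with ampersands
    param_str = "&".join(k + "=" + v for k, v in pairs)
    # 5: escape the whole parameter string, then join method, URI, and params
    return "&".join([method.upper(), normalize(uri), normalize(param_str)])
```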

This is the base string to the HMAC-SHA1 algorithm, along with the key we mentioned. The final signature should match what the client generated. Oh, and if you're running your service on a nonstandard port (anything other than 80 for http or 443 for https), then you must include the port in the URI.


A call to http://localhost/initiate on port 80 or 443, a GET request, with the following params:

{'oauth_nonce': '21823552', 'oauth_timestamp': 1356129798,
 'oauth_consumer_key': 'Mitel test', 'oauth_signature_method': 'HMAC-SHA1',
 'oauth_version': '1.0', 'oauth_signature': 'pevzNqSnJ8QtqFUDWVlYhVRp8D0=',
 'oauth_callback': 'oob'}

The base string would look like this:


with a key of:


and a final signature of:


Oddly, if I used the b64digest method in Digest::HMAC_SHA1, a final "=" sign is missing on the final result, so I had to pull in MIME::Base64 and do this instead:

my $sig = encode_base64($hmac->digest, "");  # "" suppresses the trailing newline

The equivalent Python in the oauth library does this:

import binascii
import hmac

# HMAC object.
try:
    import hashlib # 2.5
    hashed =, raw, hashlib.sha1)
except ImportError:
    import sha # Deprecated
    hashed =, raw, sha)

# Calculate the digest base 64.
return binascii.b2a_base64(hashed.digest())[:-1]

That just leaves RSA-SHA1, but that requires a pre-existing SSL relationship with the server, using SSL certificates. As such, I'm not worrying about it just yet. I don't think it'll be used much.

I'll need to do some interop testing with a few different clients; I'm hoping that they're not all snowflakes. The point of the rigid nature of the base string construction is that the final product is supposed to be reproducible.

The base string construction is definitely the hardest part, and I've read that the signatures were dropped in OAuth 2.0 because they were too hard to do. I'd rather not drop the added security, and while they're a pain, there are sample implementations to follow. I think that OAuth 1.0 is a better choice. And it's, like, finished.

November 15, 2011

» Linux Patching and Oracle: Detect RPM conflicts before they happen

A common scenario in the life of a DBA on a linux server looks something like this:

From: Sue-the-sysadmin
To: the-dba
Date: 2011-10-28 17:54:34

Dear DBA,

We will be patching the linux systems this Friday night, October 28th at 21:00. The list of patches is attached. Let us know if there are any conflicts with Oracle. BTW, we need to know by 14:00 tomorrow.

Sue

Yikes! The SysAdmin wants to apply these patches this week, and needs to know if there are any conflicts this afternoon.

So you open up the list of patches. When you see the lengthy list of patches you suddenly realize the long lunch you had planned with your sweetheart is going to be rescheduled.

I recently received several lists of patches that were to be applied to a number of different linux servers that were all running oracle, which led to this blog entry.

Matching Oracle shared lib dependencies to RPM files

Being faced with the task of determining which, if any, components of Oracle would possibly be affected by the list of RPMs to be upgraded across several servers is a daunting task.  One quick method to deal with this is to look for glibc in the list of RPMs.  The list I received contained the following:

  • glibc-2.5-58.el5_6.4.i686
  • glibc-2.5-58.el5_6.4.x86_64
  • glibc-common-2.5-58.el5_6.4.x86_64
  • glibc-devel-2.5-58.el5_6.4.i386
  • glibc-devel-2.5-58.el5_6.4.x86_64
  • glibc-headers-2.5-58.el5_6.4.x86_64

Since nearly everything will be affected by an update to glibc, it would be a good idea to test these new RPMs on a test server before deploying them in production.

What if however you need a more exacting list of which Oracle components are affected by which packages?

This information is a little harder to come by.  While Oracle does supply a list of RPMs that must be installed on a Linux system, that information does not help you in determining which Oracle files are dependent on the RPMs in question.

If the list of RPMs to be updated is small, it may even be that few, if any, of the Oracle files on a server are affected.

Here’s how that can be determined.  This may not be complete, but I think it’s a pretty good start.

Find the dependency files

Stepping through this manually, we can check an oracle executable file for the list of other files that it is dependent on with ldd, which will display shared library dependencies.

[oracle@oravm01 patch_check]$ ldd $ORACLE_HOME/bin/oracle
         => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54b700000)
         => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54b802000)
         => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54b958000)
         => /lib64/ (0x0000003ed8c00000)
         => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54bb2c000)
         => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54bef4000)
         => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54c00d000)
         => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54c130000)
         => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54c66e000)
         => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54c771000)
         => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54c90b000)
         => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54caa5000)
         => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54cbb1000)
         => /usr/lib64/ (0x00002ab54cd33000)
         => /lib64/ (0x0000003ed8000000)
         => /lib64/ (0x0000003ed7c00000)
         => /lib64/ (0x0000003ed8400000)
         => /lib64/ (0x0000003edb400000)
         => /lib64/ (0x0000003ed7800000)
        /lib64/ (0x0000003ed7400000)

Here we can see that the oracle binary is dependent on several library files.  Many of them are owned by oracle, but some of them are files that are part of the Linux system.

Find the RPM

The next step is to find which RPM the files are part of.  We can use the rpm command for this.

The -q option is for ‘query’, while the ‘-f’ causes rpm to report which package a file is associated with.

Choosing one of the linux owned files as an example:

[oracle@oravm01 patch_check]$ rpm -qf /lib64/

What happens if rpm -qf is run against one of the oracle owned dependency shared library files?

[oracle@oravm01 patch_check]$ rpm -qf /u01/app/oracle/product/11.2.0/vmdb01/lib/
file /u01/app/oracle/product/11.2.0/vmdb01/lib/ is not owned by any package

As you might expect, rpm has no knowledge of oracle owned files, and reports that fact.

Does the patch affect Oracle?

The final step is to compare the current package with the patch list.  In this case there is a match – glibc-2.5-49 is to be replaced with glibc-2.5-58.

Now all that you need to do is repeat this process for all oracle binary files.  This obviously does not work well as a manual process.
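The comparison step itself is easy to script. Here is a rough Python sketch (the label-splitting rule assumes the usual name-version-release.arch layout, and all function names are my own, not the article's Perl code):

```python
def split_rpm(label):
    """Split an RPM label like 'glibc-2.5-58.el5_6.4.x86_64' into (name, version, release)."""
    base, _arch = label.rsplit(".", 1)          # strip the architecture
    name_version, release = base.rsplit("-", 1) # release is after the last dash
    name, version = name_version.rsplit("-", 1) # version is after the next-to-last dash
    return name, version, release


def find_conflicts(installed, patch_list):
    """Report packages Oracle depends on that the patch list will change.

    installed: dict of package name -> (version, release), e.g. gathered via rpm -qf
    patch_list: iterable of RPM labels from the sysadmin's list
    """
    conflicts = []
    for label in patch_list:
        name, version, release = split_rpm(label)
        if name in installed and installed[name] != (version, release):
            conflicts.append((name, installed[name], (version, release)))
    return conflicts
```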

Fortunately this is not too difficult a task to automate.   Those readers already familiar with me will not be surprised to learn that this automation involves the use of Perl.

The Patch Check Script

Or rather, scripts.  The Perl script is complemented by the shell script  The reason for using a driver shell script is that there may be more than one Oracle home directory on the server that you need to check.

One of the few limitations of using Perl with Oracle involves the LD_LIBRARY_PATH variable.   This variable cannot be set from within a Perl script, as the Perl compiler loads in the value of LD_LIBRARY_PATH before running your script, so subsequent attempts to change it are futile.  There are some hacks that will work on some platforms that allow LD_LIBRARY_PATH to be changed at runtime, but I would rather not go that route.  The limitations of using LD_LIBRARY_PATH are pretty well known, so it is best just to work around that particular issue.

The RPM patch list

The list of RPMs should be in a file, one RPM per line. The file names should not contain the .rpm extension. If the extension is included, you will need to remove it, or else change the script to work with the .rpm extension.

Why does the list of RPMs not include the .rpm extension?  Simple, that’s how I received the list.

Here is a sample of files from a list of RPMs to be updated:

[oracle@oravm01 patch_check]$ head -5 patch_list.txt

Which Files are searched?

Not all files in an Oracle home are processed.  The find command is used with the -perm  /u+x option to find all files that have the executable bit set for the file owner.

These files are then processed to determine RPM dependencies.

The Perl script

Let’s invoke to see what it does when run against a single Oracle home with the standard setting for LD_LIBRARY_PATH and other oracle environment variables.

Here’s the current environment:

[oracle@oravm01 patch_check]$ set | grep -E 'ORA|LD_LIB'

Now run the script. Notice that STDERR has been redirected to a file – more on that later.

[oracle@oravm01 patch_check]$ ./ -verbosity 0 -oracle_home $ORACLE_HOME -linux_patch_list patch_list.txt 2> pc.err

#### Possible Conflicts Found #####
--- gdbm-1.8.0-26.2.1.el5_6.1.x86_64
old: 1.8.0 26.2
new: 1.8.0 26.2.1.el5_6.1
--- glibc-2.5-58.el5_6.4.x86_64
old: 2.5 49
new: 2.5 58.el5_6.4
--- libgcc-4.1.2-50.el5.x86_64
old: 4.1.2 48
new: 4.1.2 50.el5
--- libstdc++-4.1.2-50.el5.x86_64
old: 4.1.2 48
new: 4.1.2 50.el5 has found 4 RPM packages that are scheduled to be updated and that Oracle is dependent on.
This new information may cause the patch schedule to be pushed back until it is determined that these updates will have no adverse effects on Oracle.

Now let’s consider the file pc.err, to which all error messages were redirected.

We see there are 336 lines in the error file.

[oracle@oravm01 patch_check]$ wc -l pc.err
336 pc.err

The majority of the errors are from rpm -qf  – as seen earlier the RPM database knows nothing of files owned by Oracle.   As such, these are errors we are not really concerned with.  There may however be errors hiding in the file that we are concerned about.

[oracle@oravm01 patch_check]$ head -7 pc.err
working on /u01/app/oracle/product/11.2.0/vmdb01/jdk/jre/lib/amd64/ dependencies => not found
working on /u01/app/oracle/product/11.2.0/vmdb01/jdk/jre/lib/amd64/ dependencies => not found

Following is another error  – this one is not an rpm error, so it may be something to be concerned about.

##### ldd error ############
error encountered with /usr/bin/ldd /u01/app/oracle/product/11.2.0/vmdb01/jdk/jre/lib/amd64/
run the command manually to see the error

Running the command manually results in a segmentation fault.

[oracle@oravm01 patch_check]$ ldd /u01/app/oracle/product/11.2.0/vmdb01/jdk/jre/lib/amd64/
/usr/bin/ldd: line 116:  5216 Segmentation fault      LD_TRACE_LOADED_OBJECTS=1 LD_WARN= LD_BIND_NOW= LD_LIBRARY_VERSION=$verify_out LD_VERBOSE= "$@"

A quick google search on ‘ldd segmentation fault’ suggests this is a linux bug. We will not be investigating this further, as we aren’t too concerned with that at this time.

With the large number of errors that we do not really care about being reported by the rpm -qf command, how can we reduce the number of errors?

The shell script

The script serves two purposes.  It reads the file oracle_homes.txt and runs for each ORACLE_HOME.  The script also sets the LD_LIBRARY_PATH variable to eliminate many of the false positives seen in pc.err previously. Following is the entire script.  You may want to adjust the values used for LD_LIBRARY_PATH to suit your environment.


#!/bin/sh
# set the environment to call (patch check)

OHOMES_FILE=oracle_homes.txt
PATCH_LIST_FILE=patch_list.txt

[ -r "$OHOMES_FILE" ] || {
    echo The file $OHOMES_FILE is missing
    exit 1
}

while read ohome
do
    sid=$(basename $ohome)
    LOGFILE=patch_check_${sid}.log
    ERRFILE=patch_check_${sid}.err

    echo $ohome

    # you may have to experiment a bit to get the needed paths in LD_LIBRARY_PATH
    # in 11g there will probably still be a few files that need to be checked manually
    export LD_LIBRARY_PATH=$ohome/lib:$ohome/jdk/jre/lib/amd64:$ohome/jdk/jre/lib/amd64/server

    ./ -verbosity 0 -oracle_home $ohome -linux_patch_list $PATCH_LIST_FILE >$LOGFILE 2>$ERRFILE

done < $OHOMES_FILE


Let’s run the script and then check the error logs.

[oracle@oravm01 patch_check]$ ./

Checking on the output files shows that the error logs are much smaller than what was seen previously.

[oracle@oravm01 patch_check]$ ls -ltar *.log *.err
-rw-r--r-- 1 oracle oinstall   455 Nov 14 13:18 patch_check_grid.err
-rw-r--r-- 1 oracle oinstall 34384 Nov 14 13:18 patch_check_grid.log
-rw-r--r-- 1 oracle oinstall   296 Nov 14 13:18 patch_check_vmdb01.err
-rw-r--r-- 1 oracle oinstall 34384 Nov 14 13:18 patch_check_vmdb01.log

[oracle@oravm01 patch_check]$ wc -l patch*.err
9 patch_check_grid.err
6 patch_check_vmdb01.err
15 total

By including some more library directories in LD_LIBRARY_PATH, rpm -qf was able to resolve many more dependencies, greatly reducing the false positives that were included in the error files.   Here are the total contents of the error files:

[oracle@oravm01 patch_check]$ cat *.err
Use of uninitialized value in hash element at ./ line 393.
Use of uninitialized value in hash element at ./ line 393.
working on /u01/app/oracle/product/11.2.0/grid/jdk/jre/lib/amd64/ dependencies => not found
working on /u01/app/oracle/product/11.2.0/grid/jdk/jre/lib/amd64/ dependencies => not found
rpmq: no arguments given for query
working on /u01/app/oracle/product/11.2.0/vmdb01/jdk/jre/lib/amd64/ dependencies => not found
working on /u01/app/oracle/product/11.2.0/vmdb01/jdk/jre/lib/amd64/ dependencies => not found

The ‘uninitialized value’ errors are from Perl – this is likely a bug in the script. Looking at the script I can see that while it is indeed a bug, it is not causing any incorrect reporting.

[oracle@oravm01 patch_check]$ grep -n . | grep -3 393
389:                            ($dummy1,$dummy2,$file) = split(/\s+/,$line);
390:                    }
392:                    #print "      ###### FILE: $file\n";
393:                    push @{$self->{olibDeps}->{$file}{dependents}}, $exe;
394:            }
395:    }
397:    # all library dependencies now in %olibDepend

Line 393 should be executed only if there is a value defined for $file.

Changing it as follows fixed the bug:

push @{$self->{olibDeps}->{$file}{dependents}}, $exe if $file;

The small number of lines now found in the .err files makes it simple to determine if these errors are an issue to be concerned with.

Download the scripts

The scripts can be downloaded here – Patch Check Distribution

The zip file contains a tarball patch_check_dist.tgz.  Use tar or your favorite unarchiving utility to extract the scripts.

December 9, 2010

Dave O'Neill
» Fun With Perl, Dynamic Linking, and C++ Exceptions

I'm hacking on some tools that use the Search::Xapian module to build up search indexes. It's an excellent Perl interface to Xapian, but unfortunately it seemed to be too slow for our purposes. Tracing our code showed that much of the slowness was in passing data back and forth between Perl and the C++ library for every call.

I decided to write my own XS module to speed things up. Instead of using Search::Xapian, I'd bundle everything up into a Perl datastructure, and pass it down to libxapian through my own module, once, and do all the indexing work in C++. This worked great -- until I started trying to do some exception handling.


June 18, 2010

» Keeping Up

I found I never published this post as it was sitting in my drafts for a few months now — it was written on 13th February, 2010. I’m publishing it without any changes. I learn therefore I am! I’ve just wrote few bits about learning a new technology and after skimming through my Google Reader, I noticed [...]

June 8, 2010

» Wherever I May Roam

Roamer, wanderer Nomad, vagabond Call me what you will $ENV{LC_ALL} = "anywhere"; my $time = localtime; say {$anywhere} my $mind; local *anywhere = sub { ... }; Anywhere I roam Where I 'git ghclone environment' is $HOME # 'grep may_roam($_) => @everywhere', # with apologies to Metallica Laziness and a severe addiction to yak shaving [...]

May 22, 2010

Dave O'Neill
» Migrating issues from Google Code to Github

I'd much rather use Git over Subversion, but I still have one project (leaguerunner) using Subversion on Google Code. Migrating the code itself to git is fairly simple, but being unable to migrate the issues still needs some tool development.


April 21, 2010

» DBD::Oracle and Windows 64bit

I have successfully compiled and installed DBD::Oracle on Windows 2008 Server 64bit operating system today.

I used the latest version of DBD::Oracle 1.24, version for 64bit Windows of Oracle’s
Instant Client Package – Basic along with the Instant Client Package – SQL*Plus and finally the Instant Client Package – SDK.

To get it to make and compile correctly I had to download Microsoft’s Visual Studio Ultimate, which should contain all the files you need. It is rather portly at 2+ GB, so you might want to grab lunch while you are downloading it.

After all the above downloading DBD::Oracle installed right out of the box.

All one has to do is select ‘Start Menu->All Programs->Microsoft Visual Studio 2010->Visual Studio Tools->Visual Studio x64 Win64 Command Prompt (2010)’
which will open a good old ‘dos’ window.

At this point CD to the directory where you downloaded DBD::Oracle


then set your ORACLE_HOME to the Instant Client directory

     c:\DBD-Oracle>set ORACLE_HOME=c:\IC_11

you should also set your NLS like this

     c:\DBD-Oracle>set NLS_LANG=.WE8ISO8859P15

Once the above settings are done do a

     c:\DBD-Oracle>perl Makefile.PL

and then a

     c:\DBD-Oracle>nmake

Which will produce a whole lot of warnings (these you can ignore, as they do not seem to affect DBD::Oracle at all) and near the end it should output something like this;

     Generating code
     Finished generating code
     if exist blib\arch\auto\DBD\Oracle\Oracle.dll.manifest mt -nologo -manifest blib\arch\auto\DBD\Oracle\Oracle.dll.manifest -outputresource:blib\arch\auto
     if exist blib\arch\auto\DBD\Oracle\Oracle.dll.manifest del blib\arch\auto\DBD\Oracle\Oracle.dll.manifest
     C:\Perl64\bin\perl.exe -MExtUtils::Command -e "chmod" -- 755 blib\arch\auto\DBD\Oracle\Oracle.dll
     C:\Perl64\bin\perl.exe -MExtUtils::Command -e "cp" -- blib\arch\auto\DBD\Oracle\
     C:\Perl64\bin\perl.exe -MExtUtils::Command -e "chmod" -- 644 blib\arch\auto\DBD\Oracle\
     C:\Perl64\bin\perl.exe "-Iblib\arch" "-Iblib\lib" ora_explain.PL ora_explain
Extracted ora_explain from ora_explain.PL with variable substitutions.
     C:\Perl64\bin\perl.exe -MExtUtils::Command -e "cp" -- ora_explain blib\script\ora_explain
        pl2bat.bat blib\script\ora_explain

At this point you are all done.

Well almost.

It is important that you test your code before you install but you will have to set a few things up first to get it to fully test correctly.

You will need a TNSNAMES.ORA file that points to a valid DB in the Instant Client Directory

Next you will need to set the ORACLE_USER_ID to a valid user

     c:\DBD-Oracle>set ORACLE_USER_ID=system/system@XE

You will have to set up TNS_ADMIN to point to the Instant Client Directory

     c:\DBD-Oracle>set TNS_ADMIN=c:\IC_11

Most importantly you will have to add the Instant Client directory to your path like this

     c:\DBD-Oracle>path c:\IC_11;%path%

If you do not do this step you will run into the dreaded

Can’t load ‘C:/Perl/lib/auto/DBD/Oracle/Oracle.dll’ for module DBD::Oracle: load_file:%1 is not a valid Win32 application at C:/Perl/lib/ line 202.

Error later on after the compile when you try to use DBD::Oracle.

What is actually going on is that Perl cannot find oci.dll (or one of the other .dlls it needs to run); the complaint about

C:/Perl/lib/auto/DBD/Oracle/Oracle.dll and the DynaLoader error

is just a false trail, as perl is very limited in which Windows errors it can report on. For more complete info on this sort of error check out this page;

Oracle Troubleshooter HOWTO

by Alexander Foken. It is rather dated but the facts of why perl did not find a dll are still valid.

Now you can do this

     c:\DBD-Oracle>nmake test

and all the tests should run and pass.

Finally simply do a

     c:\DBD-Oracle>nmake install

and you are all set.

That is about it.

At this point you might want to add the Instant Client directory permanently to your path so you will not run into the Dynaloader error again.

As well you do not need to keep Visual Studio around to use DBD::Oracle so you can uninstall that as well.

March 15, 2010

» dpanneur – your friendly DarkPAN/CPAN proxy corner store


There were two things I have wanted to do for some time now. The first was to come up with a way to quickly and easily set up a DarkPAN mirror so that we would have more control over our dependency chain at work. The second was to make a portable CPAN proxy service, so that I can always have access to my favorite modules, even if the machine I’m working on has no Internet access. Last week, I finally had a few ‘round tuits’ to spend on this type of background itch, and the result is dpanneur (for dépanneur, French Canadian for convenience store).


As it stands, dpanneur is a very thin Catalyst application gluing together the goodness of CPAN::Cache and MyCPAN::App::DPAN, and throwing in Git as the archive manager.

To get it running, first fetch it from Github

$ git clone git://

then check that you have all the dependencies

$ perl Makefile.PL

and run the script that will create the module repository

$ ./script/create_repo

For now, the module repository is hard-coded to be in the subdirectory cpan of dpanneur. A branch called proxy is created and checked out. Eventually, I’ll use GitStore to push newly fetched modules to the repository, but for the time being if dpanneur is to be used as a proxy, that branch must remain as the one being checked out.

All that is left is to fire up the server in whichever mode you prefer (single-thread test server would do nicely for now)

$ ./script/

and there you are, running your first dpanneur. Congrats! :-)

Using it as a caching proxy

You can use the server as a caching proxy, either for its own sake, or to seed the DarkPAN branches. To do that, you just have to configure your CPAN client to use http://yourmachine:3000/proxy:

$ cpan
cpan[1]> o conf urllist = http://localhost:3000/proxy
cpan[2]> reload index
cpan[3]> install Acme::EyeDrops
Running install for module 'Acme::EyeDrops'
Running make for A/AS/ASAVIGE/Acme-EyeDrops-1.55.tar.gz
Fetching with LWP:

As the modules are downloaded, they are also saved and committed within the repo

[dpanneur]$ cd cpan

[cpan (proxy)]$ git log -n 3
commit d065ad152f2204295334c5475104a3da517b6ae1
Author: Yanick Champoux <>
Date:   Wed Mar 10 20:32:52 2010 -0500


commit e8d2e83d1b16e2e0713d125f9a4bd2742681f859
Author: Yanick Champoux <>

Date:   Wed Mar 10 20:31:42 2010 -0500


commit 7e0b4b600bac8424c519199ee96dc56ffbb177eb
Author: Yanick Champoux <>
Date:   Wed Mar 10 20:30:47 2010 -0500


Using it as a DarkPAN server

There is not much more involved to enabling DarkPAN repos. All we have to do is to create a branch with the modules we want and have the ‘dpan’ utility bundled with MyCPAN::App::DPAN generate the right files for us.

To continue with the example of the previous section, let’s say that we want a DarkPAN branch containing Acme::EyeDrops, but not Acme::Bleach. Then we’d do

                        # only necessary if you are running
                        # the server while you work on the branch
[dpanneur]$ git clone cpan cpan-work   

[dpanneur]$ cd cpan-work

                        # branch just before we imported Acme::Bleach
[cpan-work (proxy)]$ git branch pictoral 7e0b4b600bac8424c519199ee96dc56ffbb177eb

[cpan-work (proxy)]$ git checkout pictoral
Switched to branch 'pictoral'

                        # cherry-pick the Acme::EyeDrops commit
[cpan-work (pictoral)]$ git cherry-pick d065ad152f2204295334c5475104a3da517b6ae1

                        # rebuild the module list
[cpan-work (pictoral)]$ dpan

                        # commit the new 02packages.details.txt.gz
[cpan-work (pictoral)]$ git add .
[cpan-work (pictoral)]$ git commit -m "dpan processing"

                        # push back to the mothership
[cpan-work (pictoral)]$ git push origin pictoral

And that’s it. Now point the cpan client to http://yourmachine:3000/pictoral, and you’ll get the limited mirror.

cpan[1]> o conf urllist http://localhost:3000/pictoral
cpan[2]> reload index

cpan[3]> i Acme::EyeDrops
Strange distribution name [Acme::EyeDrops]
Module id = Acme::EyeDrops
    CPAN_USERID  ASAVIGE (Andrew J. Savige <>)
    CPAN_FILE    A/AS/ASAVIGE/Acme-EyeDrops-1.55.tar.gz
    UPLOAD_DATE  2008-12-02
    MANPAGE      Acme::EyeDrops - Visual Programming in Perl
    INST_FILE    /usr/local/share/perl/5.10.0/Acme/

cpan[4]> i Acme::Bleach
Strange distribution name [Acme::Bleach]
No objects found of any type for argument Acme::Bleach

March 11, 2010

Dave O'Neill
» WIND mobile towers in Ottawa

You may have heard that WIND Mobile is going to be offering service in Ottawa shortly. I got curious as to what their coverage might be and started looking for coverage maps. I didn't find any, but I ended up graphing their wireless backbone instead.


February 25, 2010

Dave O'Neill
» Perl module for detecting virtualization

A while back, I posted about detecting virtualization on Linux from a shell, using various tricks. Around that time, I also implemented those tricks as a perl script for internal use at work with the intent of eventually cleaning it up for public consumption. I finally got around to doing it, and the result is Sys::Detect::Virtualization, available from CPAN, or from GitHub.

Currently, it can only handle Linux, and is only tested on a small number of virtualization hosts. Patches are welcome.

February 19, 2010

» cpanvote: a Perl Mini-Project

The Itch

For many, CPAN is a Canadian Prairies-sized field of modules where it’s darn hard to separate the wheat from the chaff.

While the CPAN Ratings service is the principal and official way CPAN tries to rank its distributions, for me at least, it doesn’t quite scratch the itch because . . . 

  1. not all distributions have reviews.
  2. even when there are reviews, they generally don’t answer the next question: what should I use instead?

The Dream

Consequently, for a while now I’ve been playing with ideas on how the rating could be improved. What I came up with so far is a very minimal system going straight for the goods, where a rating would consist of:

  1. The rating proper, which can be one of three values: “thumb-up”, “thumb-down”, or “neutral”.
  2. If you give the distribution a thumb-down (or for that matter, even if you give it a thumb up), you can recommend another distribution to be used instead.
  3. An accompanying short comment (140 characters or less so that it’s Twitter-ready. Longer, proper reviews can be done via CPAN Ratings).

Aaaand . . .  that’s it. Not exactly mind-blowing, but it’s so simple it could actually work.

JFDI, you say?

And now, since I’ve had a three-day week, I decided to give the idea a try and implement a prototype. Because I had only so many hours to devote to the project (hey, it was Valentine’s Day, after all), I’ve built it as a REST service. That way I didn’t have to spend any time on prettiness and, if the idea does catch on, it can easily be grafted to a web site, IRC/IM bot, phone service (well, I can dream big, can’t I?), and so on.

The cpanvote code is on Github. It’s all rather untidy, but it’s (roughly) functional. Let’s have a little tour of the application via the example REST client included in the repo, shall we?

First, we need an account, which can be created via the client:

$ --register --user max --password min

(And yes, this way of creating users is rather brain-dead, but this is only a rough prototype, so it’ll do for now.)

Once an account is created, reviews are as simple as:

$ --user max --password min XML-XPathScript --yeah


$ --user yanick --password foo Games::Perlwar --meh --comment "could use a little Catalyst love"


$ --user yanick --password foo Dist-Release --neah --instead Dist-Zilla --comment "nice try, but RJS is just better at it"

For the time being, I have implemented only very simple per-distribution results, which can be queried via any browser:

$ lynx -dump http://localhost:3000/dist/Dist-Release/summary
    - nice try, but RJS is just better at it
    - cute
    - Dist-Zilla
    meh: ~
    neah: 1
    yeah: 1
$ lynx -dump http://localhost:3000/dist/Dist-Release/detailed
    comment: nice try, but RJS is just better at it
    instead: Dist-Zilla
    vote: -1
    who: yanick
    comment: cute
    vote: +1
    who: max

Test Server

For the curious, I have an instance of the application running at ( –host …). It’s running a single-thread Catalyst with debug information, using a SQLite back-end on my home machine, which has rather pathetic bandwidth throughput, so please be gentle and don’t be too too surprised if it goes down.

Whaddya think?

This is the moment where I turn to the audience and prod them (that is, you) to see if I might be on to something, or if I’d be better off stopping now. In other words, what are your thoughts on this: --yeah, --neah or --meh?

Further considerations

Random thoughts for the next steps (assuming that there will be a next step).

  • Review accounts could potentially be PAUSE-based.
  • Give peeps the opportunity to submit tags for the module alongside their review, and let the taxonomy sort itself à la
  • We could go meta all the way and vote on reviewers as well, which could give their opinion more weight.

February 3, 2010

» Contributing to CPAN: PAUSE Id, Bug Tracking, and Code Repositories

CPAN Wants You!

CPAN Wants You!

Want to contribute to your favorite CPAN module, or maybe create your own, but don’t have the foggiest idea how to do it? Here are a few notes, tips, tricks, and links that might help you get started.


While bringing awesome street cred, having a PAUSE id is strictly necessary only if you want to maintain or co-maintain a module. If you just want to contribute code, you’ll be perfectly able to do without, as contributions are usually submitted as patches via a bug tracking system, a code repository, or good ol’ email.

Becoming a co-maintainer

Becoming the co-maintainer of a module gives you the power to upload authorized releases of the module on CPAN. To become one, the maintainer of the module simply has to promote you as such on PAUSE.

Creating a new module

You want to create your own module? Excellent! But before you do it, make sure that you . . . 

Once this is done, go ahead: log on to PAUSE and register the module.

Adopting an abandoned module

There’s a module apparently abandoned by its owner that you’d love to take over? The procedure for that is given in the CPAN FAQ, and it can be boiled down to:

  • Try to get in contact with the current maintainer.
  • If you reach him or her, make puppy eyes and ask if you can take over.
  • If they are agreeable, they will flip over the maintenance of the module to you on PAUSE, and that’s all there is to it.
  • If you tried their email addresses, poked around in mailing lists, forums and other places without any luck, wait a little bit (a couple of weeks) and contact the PAUSE admins.

Module created! How do I upload releases, now?

You upload new versions of a module via the PAUSE interface or via ftp.

Alternatively, and more conveniently, you can also use the cpan-upload script provided by CPAN::Uploader.

Bug Tracking

Contributing to a module means fixing bugs or implementing new enhancements. To find those, who are you gonna call? The Bug Tracker!

By default, every module on CPAN has a bug tracking queue on For example, the one for Git::CPAN::Patch is at

Bug reporting can be done via the web interface, or by sending an email to (e.g.,

…or somewhere else

Sometimes, the module’s maintainer uses a different bug tracking system. You’ll typically find it mentioned in the POD, or in the META.yml:

$ curl -L | \
  perl -MYAML -0 -E'say Load(<>)->{resources}{bugtracker}'

If you don’t feel inclined to dig into META.yml for information, there is a Greasemonkey script that will do it for you, and automatically add a link to the bugtracker (along with some other goodies) on the module’s CPAN page.

…or here, there and everywhere

What if the bugs are kept on, and on the local bug tracking system of Github, and a few other places? You can either follow them manually, or you can get funky and experiment with SD, a peer-to-peer ticket tracking system that can sync with and merge the information of several online bug tracking systems. SD, it goes without saying, can be found on CPAN as App::SD.

Code Repository

Most, but not all, modules have a public code repository somewhere out there. Just like for the bug tracking system, look for it in the POD, or in the META.yml. It will be displayed on the module’s CPAN page as well.

$ curl -L | \
    perl -MYAML -0 -E'say Load(<>)->{resources}{repository}'


<singing name="Adam West">Na na na na, Na na na na, GitPAN!</singing>

Thanks to Schwern, all CPAN distributions now have a GitHub repository tracking their releases. They can all be found under the GitHub gitpan account, and follow the pattern


Speaking of Git::CPAN::Patch, you can use its set of scripts to ease the creation of CPAN-related git repositories. It also includes scripts to send patches directly from your local repository to an RT bug queue.

January 11, 2010

» Local POD browsing: using Pod::POM::Web via the CLI

Half the time I want to peek at the doc of a module, I hit perldoc.  The rest of the time I type cpan Some::Module[1] in Firefox and read the POD straight out of CPAN.  And while it’s pretty and handy, it also feels kinda silly to go on a remote server to read documentation that is also sitting on my computer. Surely, I tell myself, there must be a better way.

Cue in the several Perl modules that act as local POD web servers.  After giving a few of them a quick test-run, I decided to give Pod::POM::Web a try. Being a CLI jockey, I wanted to be able to open the POD of a module from the command line.  Not a problem, I just had to create the script ‘pod’:



#!/bin/bash
# POD_PORT is assumed to be set somewhere; default to an arbitrary free port
POD_PORT=${POD_PORT:-8787}

perl -MPod::POM::Web -e"Pod::POM::Web->server($POD_PORT)" 2> /dev/null &

PAGE=`perl -e's(::)(/)g for @ARGV; print @ARGV' $1`


kfmclient openURL "http://${HOSTNAME}:$POD_PORT/$PAGE";

There is not even a need to fire up the Pod::POM::Web server beforehand: the script will do it for us (if the server is already running, subsequent calls to pod will harmlessly try to start a new server on the same port and fail).  It should be noted that ‘kfmclient’ is KDE-specific — for any other desktop environment, you might want to change that to a direct call to firefox.

It’s already not too shabby, but wouldn’t it be even better with a little bit of auto-completion magic?  To do that, we need a short script, pod_complete:


#!/usr/bin/perl

use strict;
use warnings;
use 5.010;

use List::MoreUtils qw/ uniq /;

my ( $sofar ) = reverse split ' ', $ENV{COMP_LINE};

$sofar =~ s(::)(/)g;

my ( $path, $file ) = $sofar =~ m!^(.*/)?(.*)?$!;

my @dirs = map { $_.'/'.$path } @INC;

my @candidates;

for ( @dirs ) {
    opendir my $dir, $_ or next;
    push @candidates, grep { /^\Q$file/ } grep { !/^\.\.?$/ } readdir $dir;
}

if ( $path ) {
    $_ = $path.'/'.$_ for @candidates;
}

s/\.pm$// for @candidates;
s(/+)(/)g for @candidates;

say for uniq @candidates;

All that is left is to add . . . 

complete -C pod_complete pod

 . . . to our bashrc, and it should all work (with the caveat that the modules must be entered as Some/Module instead of Some::Module).

$ pod XML/XPath
XML/XPath        XML/XPathScript

[1] If you don’t already know the trick: create a bookmark with keyword ‘cpan’ and location

December 31, 2009

» More Fun with Perl and Inline IFs

My old arch-nemesis, the in-line if ($q = $q == $a ? $b : $c) reared its ugly little head again.

This time, it was in the context of a web page that displayed some form values, something like this:

CGI::textfield({name  => 'price_dollars',
                width => '5',
                value => ($mode eq 'Edit' ? $line_item[$count]->price_dollars() : '0') });
CGI::textfield({name  => 'price_cents',
                value => ($mode eq 'Edit' ? $line_item[$count]->price_cents() : '00') });

This carried on for 15 other fields on the form. So, we have 17 if else statements all checking to see if the form is in ‘Edit’ mode. If there were, say, ten line items on the form . . .  well no need to go any further, other than to say that is a whole lot of if statements.

While this does not take up much space, this multiplicity of ifs is not really necessary or even good, since to the compiler, an inline if and a bracketed one are the same. The inline is only a shorthand to make our code more readable.

We could of course just declare 17 scalars at each iteration, and then use a single if statement to set these scalars, like this . . . 

my $price_dollars = '0';
my $price_cents = '00';
if ($mode eq 'Edit') {
    $price_dollars = $line_item[$count]->price_dollars();
    $price_cents = $line_item[$count]->price_cents();
}

 . . . and then use them later on:

CGI::textfield({name  => 'price_dollars',
                width => '5',
                value => $price_dollars}).
CGI::textfield({name  => 'price_cents',
                value => $price_cents });

Well at least this cuts down on the if statements, but now I have a whole bunch of single-use scalars hanging about–not very neat.

I could go for an Array or a Hash for my values, but as you might have already guessed, $line_item[$count] is an object of some form. So why not just use that?

So. At each iteration I check to see if I have an invalid line item . . . 

if (!$line_item[$count]) {
    $line_item[$count] = Foo::Bar::LineItem->new();
}

 . . . and if I do, I simply create a new empty one, and then simply do this on my form:

CGI::textfield({name  => 'price_dollars',
                width => '5',
                value => $line_item[$count]->price_dollars() });
CGI::textfield({name  => 'price_cents',
                value => $line_item[$count]->price_cents() });

Much simpler, and easier to read. Cheers!

December 7, 2009

» Today’s Perl Lesson

One thing I find fascinating in Perl is that I am always seeing new ways to perform the same mundane task.

Today I had to output some tabular data, so I thought it would be nice if I alternated colours for each row. Easy enough in Perl—just create a hash with your colours as the value and then the swapping variable as the key, like this:

my %colours = ( 1 => 'red', 0 => 'green' );
my $swap = 1;

foreach my $item (@stuff) {
    my $colour = $colours{$swap};
    # ... emit the row, then flip $swap for the next one
}

Then I thought about how to flip the $swap value. I could simply use the tried and true—and like me—Luddite style:

 if ($swap == 1) {
    $swap = 0;
 }
 else {
    $swap = 1;
 }

Well, it’s easy to read at least. Alternatively, I could reach back to my roots and do something C-ish like this:

$swap = $swap == 1 ? 0 : 1;

This is a little more compact but also a little harder to read, and it doesn’t save any code steps. And anyway, this is Perl, so my colleague Yannick suggested this . . . 


 . . . which I had never seen before. This is what I went with as it saves steps and is easy to read.

No doubt there are other approaches out there?

November 30, 2009

» Perl CGI::param Overloaded Method?

This is a little story of a little bug. This gremlin suddenly appeared in a CGI.pm web application I work with. To make a long story short, an email was coming out something like this . . . 

Subject:Update to Report #389939 by B. bloggins Description:389939 #389939: TPDD Now Deploying to monitoring for the MySQL servers.

 . . . when it should have been some thing like this:

Subject: TPDD Update to Report #389939 by B. bloggins

TPDD Now Deploying to monitoring for the MySQL servers.

After about an hour tracking things back, my team and I narrowed it down to this line of code:


We scratched our respective heads on this for a while, because for user type ‘A’, it worked fine; but for user type ‘B’, it did not. And they were both using the same web page to send the email.

But before we went and reinstalled our mail server, I remembered that this is how CGI::param is intended to work:

@values = $query->param('foo');
# or
$value = $query->param('foo');

Arghhh! A wretched, two-faced overloaded method! So, depending on how you call the method, the returned results may differ.

Looking deeper, I discovered that the page was in fact receiving three values for CGI::param("rep_no") when the user was of type ‘B’, and thus sending that as an array to the send_TXT_email method, and as a result, buggering up the email content.

With even more digging, I found out the root cause was a web page three pages back from the one sending the email. It seems someone made a change to fix one bug and, as a result, caused this one.

The quick fix for this (as we could not change the root page) was

my $rep_no = CGI::param("rep_no");

Lesson learned: never trust a wretched, two-faced overloaded method.

» Ottawa Perl Mongers Presents: FormFu Assassin


Once more, the Ottawa Perl Mongers assemble!

When: Thursday December 3rd, 2009, at 7:00pm.

Where: at Pythian headquarters.

What: I’ll be presenting on how I’m implementing AJAX forms in a Catalyst application, using the deadly magic of Mason, Prototype, and FormFu.

Bonus: Pizza will be graciously provided by Pythian. So if you plan on coming, please let me know so that I can be a good little ninja and make the number of slices match the number of attendees.

See you there!

November 24, 2009

» Perl and Server Management Objects (SMO)?

No, I do not have squiggly worms in my head, and no, I haven’t gone over to the dark side. It’s just that I had an opportunity over the past few days to attempt to use Microsoft’s Server Management Objects (SMO) with Perl to manage a SQL Server 2005 DB.

To make a long story into a short post, I blundered into the Win32::CLR module on CPAN, a little gem from Toshiyuki Yamato.

Here is all you need to get started.

  1. Perl 5.10 (ActiveState and Strawberry Perl both work fine.)
  2. The Win32::CLR module.
  3. MS Visual C++ Express 2005 or 2008 (it is free and you can get it here:
  4. Open a DOS prompt from Visual C++, run perl Makefile.PL, and then nmake install to build Win32::CLR.

Once you have a working version of Win32::CLR in your Perl installation, you should check that you have the SMO objects installed on your computer. These are normally stored in c:\Program Files\Microsoft SQL Server\100\SDK\Assemblies. The DLL we are interested in is called Microsoft.SqlServer.Smo.dll. If it is not there, you will most likely have to get the latest Service Pack from Microsoft.

Finally, so that Perl can find it, you will have to make sure that you put the directory of the DLL in the path.

Once you have jumped these little hurdles, you can then get down to coding. As good Perl programmers, we start with:

use strict;

and then use our CLR mod:

use Win32::CLR;
use utf8;
binmode STDOUT, ":encoding(Shift_JIS)"; # japanese character set
binmode STDERR, ":encoding(sjis)"; # if japanese windows

(I am not sure why we have to use the binmode here, so if you want an answer you will have to ask Toshi.)

We then load the SMO DLL . . . 

my $smo= Win32::CLR->load_from("Microsoft.SqlServer.Smo.dll");

 . . . and create a server object that we can play with.

my $svr = Win32::CLR->create_instance("Microsoft.SqlServer.Management.Smo.Server");

my $conn=$svr->get_property("ConnectionContext");
$conn->set_property("Login","xxx") ;

The above will set up the connection for us and then . . . 

print "Edition=".$svr->get_property("Edition")."\n";

 . . . will connect to the server and print out the Edition details. So if you run it you will see:

Edition=Enterprise Evaluation Edition (64-bit)

Now for the big question: Why?

For one thing, ‘SQL-DMO’, which I have been using with Win32::OLE, is on the way out for any SQL Server edition after 2005.

Second, I thought I would give PowerShell a try as a replacement for my Perl scripts. But, after a few hours trying to get my head around it, I found it was just going to be easier to use Perl for what I had to do.

I think that anyone using PowerShell might find that the above Perl solution will pay off in the long run, as any program you write in Perl can be fully modular or even object oriented in design, whereas PowerShell’s OO is more limited.

As a bonus, I found that the monitoring footprint (disk-space, memory usage, CPU usage) on my server was smaller with Perl than with PowerShell, and very much smaller than a similar C# .Net app.

Of course there are some limitations in using Win32::CLR and SMO, but nothing insurmountable. You will have to write a number of little helper subroutines because Win32::CLR exposes only the Base Class Library of objects, and not the many wrappers that one is more familiar with.

But that is another story—one that I’ll tell in an upcoming post.

October 14, 2009

» Perl Module Dependencies: how to require the latest, and nothing less

Recently, hanekomu was contemplating how to make subsequent installs of a Task::BeLike module upgrade its dependencies to their latest version.

The idea is intriguing. It’s not something you want to do for a typical module, but it makes sense in the context of Task::BeLike. If you care enough about a module to put it in your Task::BeLike, you probably care enough to want to upgrade when there’s a new version out there.

Alas, I think hanekomu’s proposed way of doing it is flawed (mind you, the debate is still going on as of the writing of this entry, and I can very well still be proven wrong). But after some ponderous chin scratching, I might have come up with a cunning alternative to it.

Let’s say that in your Build.PL (the logic would be the same for a Makefile.PL) you have your dependencies stashed in %dependencies. Something akin to:

%dependencies = (
    XML::LibXML      => 0,          # any version will do
    XML::XPathScript => '1.42',     # 1.42 or higher
    Moose            => 'latest',   # nothing but the shiniest!
);

All we want to do, really, is to switch the latest for, well, the latest version available. Surprisingly, that’s something that is almost as easy to do as to say:

for my $mod ( keys %dependencies ) {
    next unless $dependencies{$mod} eq 'latest';

    require CPANPLUS::Backend;
    state $cb = CPANPLUS::Backend->new;

    $dependencies{$mod} = $cb->module_tree( $mod )->package_version;
}

Yes, that’s really all there is to it. A little further hacking later, I have incorporated the functionality into my own Task::BeLike::YANICK module. The way I implemented it, installing the module the usual way will yield no surprise (i.e., dependencies already present are not going to be updated). But if the environment variable TASK_UPGRADE is set to true, like so:

TASK_UPGRADE=1 cpan -f Task::BeLike::YANICK

 . . . then the magic is going to be unleashed (the -f is to force the re-install, if the Task has already been installed before).

Alternatively, just to know which dependencies are out-of-date, one can also extract the distribution and do a

perl ./Build.PL --upgrade
./Build prereq_report

May 17, 2009

Dave O'Neill
» Converting vimblog to ikiwiki

Until today, this blog ran on vimblog, a hand-rolled minimal script for displaying blog entries. In the last year or so, though, I've become a convert to git, and so what I really want to do is edit my posts on any system, commit them to a git repository, push to a remote and have them end up as blog entries on my server. Fixing vimblog to do this would have been more work than I want to deal with, but thankfully ikiwiki exists, and can do most of what I need.

So, as of now, this blog is in ikiwiki. To get there, I followed (more or less, since I'm documenting after-the-fact) these steps: