June 23, 2011

Ian Ward
excess - News
» Recording Both Sides of a Call

I set up a VM to present software to a client remotely, but I needed a way to record both the audio in and out so that I could capture both my presentation and the client's questions. In the past I've used some ALSA configuration magic for audio things advanced enough that they don't have a friendly GUI, but since PulseAudio is the shiny new thing I decided to go that route.

It turns out to be fairly simple. I create a new null sink (think: fake sound card for output) and attach a loopback from the audio out monitor of the "real" sound card and another from the audio in of the "real" sound card:

pactl load-module module-null-sink sink_name=bothsides
pactl load-module module-loopback latency_msec=5 sink=bothsides \
    source=alsa_output...
pactl load-module module-loopback latency_msec=5 sink=bothsides

The alsa_output... source comes from running pactl list and copying the device name. The second loopback automatically uses the only alsa_input... source device. Then I can record from the monitor of this null sink with a command like:

pacat --record -d 2 | sox -t raw -r 44100 -s -L -b 16 -c2 - "recording.wav"

The -d 2 option selects the new null sink monitor device I created (the index may be different in your case). Last, you may want to use the pavucontrol program to adjust the levels for the input and output so you don't end up with one sounding much louder than the other in the combined recording.
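If you have to hunt for the right index, the source list can be filtered mechanically. A sketch (the device names below are made up for illustration; real ones come from your own `pactl list short sources` output, where column 1 is the index pacat wants and column 2 is the device name):

```shell
# Hypothetical `pactl list short sources` output, saved to a file for
# this demo; in practice you would pipe pactl's output straight to awk.
printf '1\talsa_output.pci-0000_00_1b.0.analog-stereo.monitor\n2\tbothsides.monitor\n3\talsa_input.pci-0000_00_1b.0.analog-stereo\n' > sources.txt

# print the index of the null sink's monitor source
awk -F'\t' '$2 == "bothsides.monitor" { print $1 }' sources.txt
```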

June 14, 2011

Ian Ward
excess - News
» Python 3 Argument Sketch Slides

Here are the slides from my Python talk at OLS this afternoon.

May 20, 2011

Ian Ward
excess - News
» Python 2 and 3 Slides

Catching up on some more old business: here are the slides from the Python 2 and Python 3 talk I gave at last month's OCLUG meeting.

I am also preparing some Python tutorials for the upcoming 2011 Linux Symposium in Ottawa June 13-15. Hope you can make it.

May 18, 2011

Bart Trojanowski
Bart's Blog
» how to manually create a 6in4 tunnel

I'm doing some IPv6 coding for a client and needed to set up a bunch of 6in4 tunnels.

There are many ways to do this through distribution init scripts (Debian, Fedora), but I wanted something less permanent and more dynamic for testing.

The procedure can be summarized in these steps:

  • create a tunnel mytun between local and remote

    ip tunnel add mytun mode sit local <local-v4> \
                    remote <remote-v4> ttl 64 dev eth0
  • give the local end an address

    ip addr add dev mytun f8c0::
  • bring up the tunnel

    ip link set dev mytun up
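Since the point was something less permanent, the teardown is just as quick (assuming the tunnel was created as mytun above):

```shell
# remove the tunnel again when testing is done
ip link set dev mytun down
ip tunnel del mytun
```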

[Read More]

May 16, 2011

Rick Leir
» Linux Symposium

Have you registered yet for the Linux Symposium? Featured speakers include Jon “Maddog” Hall and Jon C. Masters.

The goal of the Linux Symposium is to bring together Linux developers, enthusiasts, and systems administrators to improve communication, strengthen the personal connections within the Linux Community and to promote the open and free dissemination of new ideas. We see our community as the most diverse group ever to collaborate on a single project and we are very proud to have played our part for the last 13 years.

May 5, 2011

Rick Leir
» Practical Guide to Ubuntu

A Practical Guide to Ubuntu Linux, Mark G. Sobell (2011)

You will find this useful if you are using any Linux Distro, not just Ubuntu. 1200 pages, with a DVD.

April 4, 2011

Ian Ward
excess - News
» Python Talk at OCLUG on Tuesday

I will be giving a talk on Python 2 and Python 3 at the Ottawa Canada Linux Users Group meeting on Tuesday. Hope to see you there.

April 3, 2011

Rob Echlin
Talk Software
» Learning about Git


Git was invented by Linus Torvalds to use with the Linux kernel. There were performance issues, and political (licensing) issues, with the previous version control system that the kernel used.

Selected Features, from Wikipedia

  • Strong support for non-linear development
  • Distributed development
  • Efficient handling of large projects
  • Pluggable merge strategies
  • Toolkit-based design


Git is a distributed system, modeled on a file system with built in versioning. Linus wrote it for speed in the use-cases he cared about with the kernel:

  • Patches supplied by email or grabbed from other users’ repositories
  • Pick and choose among patches provided
  • Merging many patches per day
  • All users comfortable with the command line

I have used it for about a year, and found it very useful, with a few caveats. I use it at home a lot. It allows me to propagate data between systems very quickly, so I use it as backup for my own work at home. I started with a single repository containing several small unrelated projects and non-software files, such as my career-tools folder which contains resumes, personal cards, graphics and sample letters. I learned the hard way that putting it on a USB key requires keeping free space on the stick greater than the total size of the repository. I haven’t gotten around to breaking it up into a couple dozen separate repositories, but I will. Also, I learned that a 600MB repo takes a long boring while to update on really slow media such as a USB stick.

When using Git, you grab a copy of the whole repository on your local machine. That’s what makes it a “distributed” version control system. You “commit” your changes to your copy of the repository. To share them with others, you then “push” them to a central repository when you are ready, or ask others to “pull” from your repository. This basically means committing all changes twice: locally, then centrally.
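The two-step cycle can be seen end to end with a throwaway bare repository standing in for the central one (all the paths and names here are examples for the demo, nothing more):

```shell
# Commit locally, then push to the shared "central" repository.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/central.git"
git clone -q "$tmp/central.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email demo@example.com
git config user.name "Demo User"
echo hello > notes.txt
git add notes.txt
git commit -q -m "first local commit"   # step 1: recorded only in your copy
git push -q origin HEAD                 # step 2: now others can pull it
git --git-dir="$tmp/central.git" log --oneline --all
```

The final log, run against the central repository, shows the commit only because of the push; before it, the commit existed solely in the local clone.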

At work I used Git with a small team of three people. I learned that, like using a stick-shift transmission, Git requires a commitment to learning that is higher than on an automatic transmission.

Some people use manual transmissions inefficiently and in ways that I cringe at, shifting too often or not often enough, not matching revs, and having no idea how or when to double clutch. Modern car transmissions are designed to survive this treatment. Keep those people away from trucks.

Git is like a manual transmission. You should commit to learning more about it than you think you need to know. There is always more to learn. Also, you can use “work flows” with it that are difficult to do with Subversion or most other VCS’s.

The Linux kernel team has thousands of developers who contribute to it, hundreds who contribute frequently, from sites around the world. Their work flow might not work in a centralized repository. A simplified version of their work flow (assuming the change is perfect) goes like this. You grab a copy of the latest “blessed” kernel code from the canonical git repository on kernel.org, make changes, test them, and then tell the person in charge of that part of the kernel. This person grabs the code from your public copy of the kernel repository, or receives it from you by email, examines it, tests it, merges it with other changes in the same part of the code and submits it upstream to one of Linus’s main “lieutenants”. This gatekeeper will similarly examine it, merge it with other changes, test it, and promote it to Linus, who may choose to accept it.

This work flow is very different from a “small” environment where, of maybe a hundred developers on a project, 5 or 20 are working on one part of a project and have that part of the code to themselves. Git works with these teams as well. It is fast and supports complex merges.

Git also supports different work flows that you may not be used to. For instance, I knew a team that used git internally, and then submitted changes centrally to the corporate Subversion repository. This had several advantages. The team leader could filter submissions from new employees. Developers could do local commits to their own repository frequently, say at 5 minute intervals on a temporary branch, share with their team mates on a common branch when a small change was ready, and not be delayed by the slow submit time to the Subversion server in England. Changes could be “summarized” in the corporate Subversion repository, so that the Continuous Integration server never saw, and never failed on, all those unfinished 5-minute commits.

Git also supports using a central repository that people can push their changes to. This is a “bare” repository: it has no working copy, just the repository database.

“Tags” and “branches” are a local idea in git. This means you have to push your tags manually so that other people can see them. Also, if you don’t explicitly push your branch to the server, git will happily push your changes, but you will not be able to find them easily because they are not on a branch in the central repository.

How to lose your changes in git: Create a new branch locally, make changes, push them to the server that does not have that branch, and delete that branch locally, forgetting to merge to a main branch. That’s a bit like trying to shift down from third to second on an uphill, taking too long and then stalling because the truck is going too slow for second gear when you finally complete the shift. In this case, my co-worker recovered because he found the commit ID on a scrollback log in the terminal window where he did the commit.
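Scrollback is not the only way out, for what it's worth: git keeps deleted commits reachable through the reflog for a while. A sketch of the recovery in a throwaway repository (branch names are examples):

```shell
# Delete a branch, "lose" its commit, then recover it from the reflog.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.email demo@example.com
git config user.name "Demo User"
git checkout -q -b main
git commit -q --allow-empty -m "base"
git checkout -q -b feature
git commit -q --allow-empty -m "work on feature"
git checkout -q main
git branch -D feature                 # oops: the commit is now unreferenced
sha=$(git reflog | grep -m1 'work on feature' | cut -d' ' -f1)
git branch rescue "$sha"              # point a new branch at the lost commit
git log --oneline rescue
```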

Git works best when someone on your team is an expert and everyone takes time to learn more than the minimum.

Main Git web pages


GUI tools for Git

There are now GUI tools for Windows, and plugins for many IDE’s.



Tagged: Linux, software, team tools

February 28, 2011

Rob Echlin
Talk Software
» Yet Another Dave, A?

I appreciate Dave’s sense of rumour, his up-to-wait technical knowledge, and  his assistance.

His wiki has some (ok lotsa) stuff I don’t have.

February 26, 2011

Rick Leir
» CompTIA Linux+

CompTIA Linux+ Complete Study Guide: Exams LX0-101 and LX0-102, Roderick W. Smith (2010)

If you want to get certified, this will help.

February 3, 2011

Rick Leir
» Linux bible : boot up to Ubuntu, Fedora

Linux Bible: boot up to Ubuntu, Fedora, KNOPPIX, Debian, SUSE, and 13 other distributions, Christopher Negus, Wiley, 2011

Here is an updated version of the user and admin manual for Linux. You can enjoy many hours of poking through and trying out all the tools and packages discussed in this book. It includes a DVD and a CD with various distros. At 800 pages, this is a fat book.

December 8, 2010

Rick Leir
» Linux Programming

The Linux Programming Interface – A Linux and UNIX System Programming Handbook, Michael Kerrisk, No Starch, 2010

This book is for programmers of C and C++ applications on Linux or Unix. Written in a precise style, it explains all the system calls in detail. There is much knowledge contained in this book, and it is the best reference I know of for this field. Hardcover, and at 1500 pages, you will be wanting a table to support it! Contents:

1   History and Standards
2   Fundamental Concepts
3   System Programming Concepts
4   File I/O: The Universal I/O Model
5   File I/O: Further Details
6   Processes
7   Memory Allocation
8   Users and Groups
9   Process Credentials
10   Times and Dates
11   System Limits and Options
12   Retrieving System and Process Information
13   File I/O Buffering
14   File Systems
15   File Attributes
16   Extended Attributes
17   Access Control Lists
18   Directories and Links
19   Monitoring File Events with inotify
20   Signals: Fundamental Concepts
21   Signals: Signal Handlers
22   Signals: Advanced Features
23   Timers and Sleeping
24   Process Creation
25   Process Termination
26   Monitoring Child Processes
27   Program Execution
28   Process Creation and Program Execution in More Detail
29   Threads: Introduction
30   Threads: Thread Synchronization
31   Threads: Thread Safety and Per-thread Storage
32   Threads: Thread Cancellation
33   Threads: Further Details
34   Process Groups, Sessions, and Job Control
35   Process Priorities and Scheduling
36   Process Resources
37   Daemons
38   Writing Secure Privileged Programs
39   Capabilities
40   Login Accounting
41   Fundamentals of Shared Libraries
42   Advanced Features of Shared Libraries
43   Interprocess Communication Overview
44   Pipes and FIFOs
45   Introduction to System V IPC
46   System V Message Queues
47   System V Semaphores
48   System V Shared Memory
49   Memory Mappings
50   Virtual Memory Operations
51   Introduction to POSIX IPC
52   POSIX Message Queues
53   POSIX Semaphores
54   POSIX Shared Memory
55   File Locking
56   Sockets: Introduction
57   Sockets: Unix Domain
58   Sockets: Fundamentals of TCP/IP Networks
59   Sockets: Internet Domains
60   Sockets: Server Design
61   Sockets: Advanced Topics
62   Terminals
63   Alternative I/O Models
64   Pseudoterminals
A   Tracing System Calls
B   Parsing Command-Line Options
C   Casting the NULL Pointer
D   Kernel Configuration
E   Further Sources of Information
F   Solutions to Selected Exercises

November 29, 2010

Ian Ward
excess - News
» WD HDD lying about 4K sectors

Many hard drives available today have 4K physical sectors instead of the old standard 512-byte sectors. The larger sectors reduce the space required for error correction, which saves the manufacturers money and, in turn, gets us cheaper hard drives. Which is great, except that a drive with 4K sectors must report them to the operating system, or performance may suffer.
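The main way performance suffers is misalignment: since logical sectors stay 512 bytes, a partition sits on a 4K boundary only when its start sector divides evenly by 8. A quick arithmetic check (2048 is just an example value):

```shell
# Logical sectors are 512 bytes, so a partition start is 4K-aligned
# exactly when the start sector number is divisible by 8.
start=2048                       # example value, as reported by fdisk -l
if [ $((start % 8)) -eq 0 ]; then
    echo "sector $start: 4K-aligned"
else
    echo "sector $start: misaligned"
fi
```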

I recently purchased two WD HDDs: one 1.5TB and one 2TB, both "EARS" models. The 1.5TB drive happily reports that it has 4K physical sectors:

fdisk -l /dev/sdc
Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

But the 2TB drive claimed to have 512-byte sectors:

fdisk -l /dev/sdd
Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

"That's strange", I thought. So I ran a quick test to see if the 2TB drive was lying.

November 11, 2010

Michael P. Soulier
But I Digress
» DBUS, an interface only Java programmers could love

I decided to move away from Gnome on my netbook desktop, as recent Ubuntu updates caused more problems than they solved, and now I’m back to running my beloved Fluxbox. Of course, doing so means that I lose some conveniences in the Gnome desktop.

For example, when I close my laptop lid, the netbook no longer suspends. I’ve assigned a hotkey to this but maybe I want it to be automatic. This got me thinking about how to make this work. Well, it’s done through DBUS of course.

DBUS has become ubiquitous on Linux systems, providing a pub/sub model for events, which is a good thing given that the POSIX spec provides very little to make this kind of thing happen. I’m more of a fan of files, directories and symlinks myself, but I can understand why some programmers would want something a little more sophisticated, even if I don’t.

Being a Python hacker I want to use the python dbus module, so I install python-dbus and I’m off and running, albeit with little to no documentation. Google allows me to find enough clues that I’m led to qdbus to query the system bus, and I find org.freedesktop.DeviceKit.Power, which exposes a LidIsClosed property and a Changed event.

So, looking at some examples I manage to get this far…


import dbus, gobject
from dbus.mainloop.glib import DBusGMainLoop

power_iface = None

def main():
    global power_iface

    # make the glib main loop the default so signal callbacks can fire
    DBusGMainLoop(set_as_default=True)

    bus = dbus.SystemBus()

    proxy = bus.get_object('org.freedesktop.DeviceKit.Power',
                           '/org/freedesktop/DeviceKit/Power')

if __name__ == '__main__':
    main()

As the object that I want is not available in my process, I must instantiate a proxy to it via dbus. To get this proxy I call get_object on the bus object, passing the name of the object and then, somewhat redundantly, a path that seems to represent a namespace of properties, signals and methods.

Ok, so now that I have the object I can just check the LidIsClosed property and register for that signal, right? Wrong. For no apparent reason, now that I have the object, to call methods on it I need an interface object, representing one of the interfaces that this object implements. That’s just so…Java, that it rubs me the wrong way, but I forge on.

I want the interface that allows me to query properties, and apparently all objects in DBUS implement some common interfaces, because this is how I get the interface I need:

power_iface = dbus.Interface(proxy, 'org.freedesktop.DBus.Properties')

My take on this is that I am asking for the standard freedesktop DBus Properties interface on the DeviceKit.Power object, so that I can query its properties. Overly complex when simple would do? Absolutely. Does it work? Sure, but why would anyone choose to work this hard deliberately? Oh, Java programmers. I forgot.

Now I want to register a listener for any changes from the Power object, which includes the lid being closed. You do this like so, via a callback:

    bus.add_signal_receiver(handle_lidclose,
                            signal_name='Changed',
                            dbus_interface='org.freedesktop.DeviceKit.Power')
Why you don’t use the existing proxy object to DeviceKit.Power is beyond me. Maybe you can and I simply don’t understand the API. I would have envisioned something like…

    proxy.connect('Changed', handle_lidclose)
…but that’s me. I guess some people really like typing.

Now, we finish this off by writing our handler…

def handle_lidclose(*args):
    closed = power_iface.Get('org.freedesktop.DeviceKit.Power', 'LidIsClosed')
    if closed:
        print "lid is closed"
    else:
        print "lid is open"

…and starting our main loop via…

    loop = gobject.MainLoop()
    loop.run()

All that’s left is to write the call in my handler to suspend the laptop if
the lid is closed, but honestly, my fingers are tired. I particularly love the
way that I need to pass the full interface specification in the Get() method
even though I’m calling it via…an interface object. Can you say redundant?
No wonder Java programmers need IDEs.

I hope this situation improves. Unix used to stand for power through simplicity. Less code is less to break, and simple APIs leave developers free to worry more about bugs in their own code than about comprehending a needlessly complex API. Maybe I just don’t get it. These are first impressions, after all, so perhaps I’m being overly judgemental. Perhaps there’s beauty and simplicity and elegance to be found in this overly wordy API requiring redundant information.

For now though, I just don’t see it.

November 5, 2010

Rob Echlin
Talk Software
» Xubuntu in

Xubuntu in.

XP usage down.

I made the Kitchen Computer dual boot, so the kids don’t have to use Windows, and deal with all those bugs and the malware.

And so I don’t have to re-install it, yet again.

I am using Ubuntu 8.04 on the other computer, and KDE at work.
I like KDE, so I thought Kubuntu 10.10 would be fun at home.

Not so nice. It swaps on 512MB with Firefox running.

So I tried Xubuntu.

  • Added Java and Flash for the kids.
  • Added the fix for IPv6 that I blogged about before.

Now everyone is OK using Linux in the kitchen.

Youtube in full screen mode is choppy. Why?

  • Driver for Intel MB?
  • GNU’s Gnash instead of Flash?

Dunno. Not worried about it. Xubuntu rocks our house.

Tagged: Linux

October 21, 2010

Rob Echlin
Talk Software
» IPv6 lookup pain in Firefox

According to Google, Firefox has been slow for some people for several years, if they don’t set “network.dns.disableIPv6” to “true” in “about:config”.

Yup, this issue is still a problem for Firefox users. Mozilla has not decided to “check once” for an IPv6 DNS server and then use only IPv4 for a while, say an hour, before checking again.
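For what it’s worth, the same preference can be set from a file instead of clicking through about:config, via a user.js in the profile directory (the profile path below is a placeholder for your own):

```shell
# <profile> is a placeholder for your actual Firefox profile directory
echo 'user_pref("network.dns.disableIPv6", true);' >> ~/.mozilla/firefox/<profile>/user.js
```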

I may also have this problem on my other machine, which is running Ubuntu 9.04. It is the Kitchen computer, and when my wife said it was way too slow in Linux and she was going to boot back into Windows, I had to find out why.

So when we get IPv6 in our house, I will have to switch back.
Aww, who am I kidding – that will be years from now and my 512MB RAM machine will be long gone.

Tagged: frustration, Linux

October 20, 2010

Bart Trojanowski
Bart's Blog
» growing a live LVM volume

I have an LVM volume, with xfs on it, that is almost full:

$ df /scratch -h
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/vg-scratch    180G  175G  5.4G  98% /scratch

$ sudo lvdisplay /dev/mapper/vg-scratch
  LV Size                180.00 GB

But I have some more space in the physical volume. Let's grow the logical volume.
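The post continues behind the link, but the usual sequence (sketched here with an example size; check the free extents with vgdisplay first) is to extend the logical volume and then grow xfs, which can be done while mounted:

```shell
# Example: add 20 GB to the LV, then grow the filesystem into it.
# Note xfs_growfs takes the mount point, not the device.
sudo lvextend -L +20G /dev/mapper/vg-scratch
sudo xfs_growfs /scratch
```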

[Read More]

Ian Ward
excess - News
» OCLUG Web Site to Become a Wiki

The Ottawa Canada Linux Users Group board of directors has decided to retire their Django meeting announcement web site and replace it with a plain wiki.

But it's been a great run for close to 4 years:

  • 69 meetings posted, always up-to-date
  • 65 speakers
  • 34 local Linux jobs posted
  • hundreds of members' blog posts aggregated
  • zero maintenance time (removal of old meetings & jobs is automatic)
  • zero reported defects
  • zero down time

October 17, 2010

Rob Echlin
Talk Software
» Firefox sync << Xmarks, so far

I have used Xmarks for a couple of years. I track the same tech stuff at work as I do for my blog and personal research, so synchronizing is useful. With Xmarks about to disappear (not so likely now), I quickly installed Firefox Sync. It seemed to work, at first, but then one of my machines got into a state where all its bookmarks were gone, and it couldn’t get them back from the Firefox sync server.

I eventually fixed it, by configuring a different Firefox instance to push its bookmarks everywhere, then I was able to get the info onto the losing machine.

Xmarks is not perfect either. On my machines with 512MB RAM and one CPU, Xmarks ties down the machine completely while it syncs. Can’t do nuthin!

Tagged: frustration, Linux

October 8, 2010

Ian Ward
excess - News
» Good Linux Games

An article listing some good Linux games was just published (the comments are possibly even better than the article). I'm posting this here so I can find it again when I have the time to waste.