February 27, 2012

Ottawa Android
» Android and Ubuntu on One Device?

I really do hope that we shall see a device with Ubuntu and Android that switches when docked. I know I would want one. Wouldn’t you?

February 23, 2012

Rob Echlin
Talk Software
» delete .gconfd to fix missing panels

I have had to fix missing panels before in previous versions of Ubuntu and Xubuntu.

Previously, you deleted .gconf from your home directory.

Now you have to delete .gconfd.

“Now” means Xubuntu 10.10.
I am a little out of date on the family computer in the kitchen.
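A cautious way to apply the fix (an assumption on my part: renaming the directory aside works as well as deleting it, and is reversible):

```shell
# Move the stale gconf cache aside instead of deleting it outright.
mv ~/.gconfd ~/.gconfd.bak        # Xubuntu 10.10 and newer
# On earlier releases the culprit was the settings directory itself:
# mv ~/.gconf ~/.gconf.bak
# Log out and back in; the panels are rebuilt with default settings.
```

If the panels come back, delete the .bak copy; if not, renaming it back restores the old state.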

Tagged: Linux, ubuntu

June 22, 2010

» OVSAGE June Meeting

On June 17th, Pythian hosted the June meeting of OVSAGE. This month, there was a presentation by Bill Stuart, CEO, and Karoly Molnar, VP Engineering, of Eseri. Eseri is a local Ottawa company that has integrated a full organization IT solution from the best of the world’s open source, from hosted Intranet to [...]

May 23, 2010

Bart Trojanowski
Bart's Blog
» console=ttyS0 with grub2

Just a quick note so I don't forget how to enable console logging on systems running grub2 (like Ubuntu 10.04, Lucid).

  • edit /etc/default/grub
    • set GRUB_CMDLINE_LINUX to "console=ttyS0,115200n8 console=tty0"
  • run update-grub
  • reboot
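Spelled out, the steps above amount to this (the serial port and baud rate are the ones from the bullet list):

```shell
# In /etc/default/grub, set the kernel command line:
#   GRUB_CMDLINE_LINUX="console=ttyS0,115200n8 console=tty0"
# Then regenerate /boot/grub/grub.cfg and reboot:
sudo update-grub
sudo reboot
```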


May 3, 2010

» Blogrotate #25: The Weekly Roundup of News for System Administrators

Good evening and welcome to this week’s edition of Blogrotate. It’s a bit later than usual this week due to client concerns, but I could not let this week go by without something. This week, after all, is the release of Ubuntu 10.04 LTS (Lucid Lynx), so I get to leverage my supreme blogging power to promote the product since I use it pretty much everywhere now.

Operating Systems

So as I was saying, the release of Lucid Lynx has the world abuzz. We had a mini install fest here in the SA cluster at Pythian and 2/3 of it went well. It seems that video is the main source of install pain for us in this new version. My own install went well at home except for the proprietary NVidia drivers, and the fglrx (ATI) driver was an issue for a colleague in the office. Luckily we have the know-how to get around these issues here at Pythian, but I would be concerned for new users trying to upgrade. Despite that, I think it’s a bloody good package and well worth trying.

Here’s a short list of some sources of information on the new Ubuntu.

In other news, let’s all shake our heads in disbelief at Unix copyrights: SCO want a new ruling.

Priya Ganapati at Wired writes about this week’s Palm purchase by HP. It’s long been known that HP had scrubbed the iPaq because they just could not nail the OS, but now they own WebOS, so watch for the iPaq to make a comeback (minus the silly name (and resulting lawsuit from Apple) of course). But I digress, check out HP Buys Palm for $1.2 Billion.

If you are running Windows 7 you’ll want to beware of a recently discovered problem. See the Microsoft Answers Forum topic Windows 7 deletes all system restore points on reboot.


Joel Wineland is Senior Product Developer at Rackspace Managed Hosting. He writes about things to consider when evaluating cloud services. See Creating a Successful Cloud Environment.

Amazon Web Services (AWS) adds a Singapore Data Center so users can run their cloud computing infrastructure in the Asia Pacific region.

Have you considered the security risks of your impending cloud investment? Take a look at 10 Cloud Security Threats by Anil Chopra. My advice is to never trust a hosted cloud service with production, proprietary or sensitive data.


Media darling and bon vivant Steve Jobs was at it again in a tirade against the evils of Flash. You can get the short (and long) story at Engadget in Steve Jobs publishes some ‘thoughts on Flash’… many, many thoughts on Flash by Paul Miller. When you are done with that head on over to Ars Technica for a rebuttal of sorts in Pot, meet kettle: a response to Steve Jobs’ letter on Flash.

That’s all I’ll have time for this week. As always your comments and stories are welcomed.

Try Lucid Lynx. The power of blog compels you.

April 23, 2010

» Blogrotate #24: The Weekly Roundup of News for System Administrators

Good afternoon and welcome to another edition of Blogrotate. Though I have been contributing to Blogrotate since its inception, this is the first time I have had the honour of posting it myself. Go me!

Operating Systems

Red Hat has announced the availability of a public beta for Red Hat Enterprise Linux 6 (RHEL 6). There are a number of changes, for which Dave Courbanou at The VAR Guy does a pretty good job of providing an overview. Of note are that Red Hat has completed its migration from Xen to KVM as the supported virtualization technology (which began with RHEL 5.4), and that ext4 is now the default filesystem.

There have been a couple of tidbits of news in the Ubuntu world. The first being a bug with memory leakage affecting beta 2 of Ubuntu 10.04. The discussion on Slashdot became a debate on the merits of time vs scope-based release schedules. Per the bug report, a fix has since been committed, which is good because — and this leads into the second bit of news — Ubuntu has announced the availability of the release candidate for 10.04. Things are moving fast as we approach its release next Thursday.

And for something that’s not release announcement related, M. Tim Jones has an interesting article over at IBM’s developerWorks about Kernel Shared Memory in the Linux 2.6.32 kernel. Without going into a lot of detail (I’ll let him do that), it’s basically the implementation of a daemon to handle de-duplication of memory pages. This has obvious implications in a virtualization environment as there is the potential to run more virtual machines on a host without increasing the memory footprint.
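For the curious, KSM on 2.6.32+ kernels is driven through sysfs (paths per the kernel's Documentation/vm/ksm.txt; this is just a sketch, and note the daemon only merges pages that applications have registered via madvise(MADV_MERGEABLE)):

```shell
# Run as root. Guarded so it is safe on kernels without KSM compiled in.
if [ -d /sys/kernel/mm/ksm ]; then
    echo 1 > /sys/kernel/mm/ksm/run      # start the ksmd daemon
    cat /sys/kernel/mm/ksm/pages_sharing # pages currently deduplicated
else
    echo "KSM not available on this kernel"
fi
```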


The big news on this front was that McAfee pushed out a virus definition update that falsely identified svchost.exe as a threat, resulting in Windows automatically rebooting. Peter Bright from Ars Technica has some good coverage of this, and linked to McAfee’s official solution. Meanwhile, Dave Courbanou over at The VAR Guy has a follow up on the situation with some additional detail, and Barry McPherson from McAfee has posted an official response stating that a ’small percentage’ of enterprise accounts were affected. And finally, Ben Grubb of ZDNet Australia reports that Coles had 10 percent of its point-of-sales terminals affected and shut down stores in WA and South Australia as a result.


Oracle has decided to charge for an ODF plugin for MS Office which allows users to import/export documents in Open Document Format. Matt Asay, COO at Canonical, provides some commentary on this, stating that “$9,000 is the new ‘free’ for Oracle”.

Jono Bacon, Canonical’s Community Manager, wrote that Canonical has made the single sign-on component of Launchpad available as open source under the AGPL3 license. There is some coverage from The H on this as well. Launchpad itself was released under the AGPL3 license about a year ago.


On a final (interesting) note, ‘Cyber Cynic’ Steven J. Vaughan-Nichols writes, in HP and Likewise to release Linux-based storage line, about HP and Likewise partnering on a line of StorageWorks products that will make use of the Likewise CIFS stack to support Active Directory authentication.

Well, that’s all I have time for this week. Will Brad be back at the helm next week, or will I continue my reign? You’ll just have to wait and see…

April 15, 2010

» Installing TOra with Oracle Support on Ubuntu 10.04 (Lucid Lynx)

Once again into the breach. The release of Ubuntu 10.04 is at hand. I’ve been playing with “Lucid” for a couple of months now but since we’re in beta2 with the release candidate soon to follow, I thought I would really sit down and get my normal app stack working including TOra. All in all the instructions are mostly the same as last time around, with a couple of new improvements, caveats and quid pro quo.


This one is being written using the 32 bit version of everything. I tend to use my laptop as a testbed and I have not upgraded any 64 bit machines as of yet. The instructions will be the same, you just need to make some environmental changes to get it working with 64 bit systems. I’ll try to update the blog when I do it myself next month, or check my previous guides and extrapolate as necessary.

Get the packages

First off we’ll create new directories for the packages and get the sources. Ubuntu 10.04 is using TOra 2.1.1. If you want the full list of changes in this version, you are out of luck: the NEWS file on the TOra svn site states only “lots of notes missing for 2.x series”.

mkdir -p /path/to/deb/source/
cd /path/to/deb/source/
apt-get source tora

Now get the Oracle packages. Get them from the Oracle site. Since there’s a new version of the Oracle client, I am using the latest and greatest.

  • oracle-instantclient11.2-basiclite-
  • oracle-instantclient11.2-devel-
  • oracle-instantclient11.2-sqlplus-

Install the prerequisites and development libraries

Next we’ll want to install the build dependencies via apt. To do this, run the following simple command.

sudo apt-get build-dep tora

Now we’ll get all the other build tools and libraries that we’ll need for this to work. This list is exactly the same as last time.

sudo apt-get install libqt3-mt-dev libqt3-compat-headers libqscintilla-dev build-essential g++ gcc autoconf automake flex zlib1g-dev docbook-xsl debhelper alien libaio1 dpatch fakeroot xsltproc texi2html texinfo libqt3-mt-psql libqt3-mt-odbc config-package-dev cmake qt4-dev-tools

Next install the Oracle clients. In the directory where you downloaded them run the following to convert and install the packages in one fell swoop.

cd /path/to/oracle/rpms
sudo alien -i *386.rpm

Environment Variables

Now that we have all the bits we need, we set up the build environment. First we set up the oracle home environment and library path.

export ORACLE_HOME="/usr/lib/oracle/11.2/client"

As before you’ll want to add these into your system wide profile or .bashrc in order to use TOra. Just to change things up a bit, this time around we’ll add it to the default profile. Oddly this was not working for me directly with sudo, so you will need to get a root shell going to make it happen.

sudo -s
[sudo] password for $you:
echo export ORACLE_HOME="/usr/lib/oracle/11.2/client" >> /etc/profile
echo export LD_LIBRARY_PATH="${ORACLE_HOME}/lib" >> /etc/profile
echo export TNS_ADMIN="${ORACLE_HOME}" >> /etc/profile
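One caveat with those echo lines: ${ORACLE_HOME} is expanded by the root shell at the moment you run echo (and the double quotes are eaten by the shell), so the variable must already be set in that root shell or you will append an empty path. A quoted heredoc defers expansion until /etc/profile is actually sourced at login; a sketch:

```shell
sudo -s
cat >> /etc/profile <<'EOF'
export ORACLE_HOME="/usr/lib/oracle/11.2/client"
export LD_LIBRARY_PATH="${ORACLE_HOME}/lib"
export TNS_ADMIN="${ORACLE_HOME}"
EOF
exit
```

The quoted 'EOF' delimiter is what prevents the shell from expanding ${ORACLE_HOME} while writing the file.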

And now we have a new player. The last time around I had you create a symlink to the includes for Oracle; this time it’s much easier to just use the CMAKE environment variables to point to the include files we need.

export CMAKE_INCLUDE_PATH=/usr/include/oracle/11.2/client

Now on to the main event.

Building and installing TOra

Go to your build directory and you’ll see there is a tora-2.1.1 directory. Change to this directory.

cd /path/to/tora/tora-2.1.1/

Run the script to build the package.

fakeroot debian/rules binary

Depending on your system speed, take a break while the compile runs. Once done, proceed to install it like so.

sudo dpkg -i ../tora_2.1.1-1_i386.deb

The Update Issue

This time around I am going to break the method of stopping updates out into a few alternatives.

  • Method 1: Using dpkg

    Supports: apt-get, Synaptic

    This is the way I have been doing it through all the blogs in this series. The problem I found with this method is that some GUI package managers do not seem to respect the hold the way I think they should. So I can hold it all I want, but the second I let KPackageKit do an upgrade I am sunk. It also will not survive a dist-upgrade, not even using apt.

    Luckily for me I almost always do it on the command line using apt, so for those inclined as I am you simply need to run this command.

    echo "tora hold" | sudo dpkg --set-selections

    To ensure the change took, look at the package status like so. The second line is the response from dpkg.

    $ dpkg-query --status tora|grep Status
    Status: hold ok installed

    When you are done you should see this if you try to run an upgrade via apt.

    $ sudo apt-get upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages have been kept back:

    If it does not hold the package back, then check your status as above.

  • Method 2: Using aptitude

    Supports: aptitude, KPackageKit

    This method is similar to the above, but has the annoying habit of removing packages when you do it. Sure it’s only removing packages that you no longer need (considered unused by aptitude like old kernels). I personally like to control when packages are removed and adding a hold to a package is not the right time to uninstall things.

    If, after all that, you still want to try it then here is the command you need.

    sudo aptitude hold tora

    I did some searching and I see no method to display this hold status using aptitude. If someone can add a method for this in comments it would be helpful.

  • Method 3: Using $GUI front end

    I know there are quite a number of different package managers out there and most probably support methods 1 and/or 2. If not then you’ll need to figure that one out on your own. Feel free to add the package manager and method in comments.

  • Method 4: Super Guerilla Tactics(tm)

    Supports: Everything

    This method is easy but is not really for the faint of heart. It has the advantage that you never have to worry about upgrades again, at least until a much newer version comes out. It involves changing the package version number before compiling, to create a package with a version number higher than anything available in the repository. Do this before you compile; if you have already compiled, you can go back and compile again afterwards.

    First, add an entry to the debian changelog file.

    cd /path/to/tora/tora-2.1.1
    vi debian/changelog

    You can see that this file has a series of change log entries with dates and version numbers. Add a new entry to the top of the file as in the example below. Be cautious of the spacing and blank lines between entries, as the compiler will barf if they are even one space off. Pay particular attention to the 2 spaces between the email address and the date, which cannot be replicated correctly on a web page due to html parsing (multiple spaces are not allowed, not even after proper punctuation :O)

    tora (2.1.1-10) unstable; urgency=low

      * Version incremented to avoid upgrades.

     -- Brad Hudson <>  Thu, 15 Apr 2010 14:00:00 +0500

    tora (2.1.1-1) unstable; urgency=low

      * New upstream version.

     -- Michael Meskes <>  Thu, 19 Nov 2009 15:18:19 +0100

    Save and exit the file. As you can see, all I did was change the packaged version to -10, which is higher than -1 through -9, meaning there would have to be 9 more official releases before it would overwrite your custom package. I did not update the actual code version (everything before the -), so if a version 2.1.2-1 comes out you’ll need to watch for it. These numbers can be changed to anything you want, however (as long as it’s higher), so if you wanted to make sure it was never upgraded, even for major version changes, you could set the version to this.

    tora (5.1.1-1) unstable; urgency=low

    The program code will not change through any of this, just the package name.

    Now that you’ve modified the changelog, go back to the build step and recompile everything. When the build is done you can install the shiny new version like so.

    sudo dpkg -i ../tora_2.1.1-10_i386.deb
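A few companion commands for the methods above (hedged: confirm against your dpkg and aptitude versions before relying on them):

```shell
# Method 1: release the dpkg hold when you do want tora upgraded again.
echo "tora install" | sudo dpkg --set-selections

# Method 2: aptitude's search patterns can list packages currently on hold.
aptitude search '~ahold'

# Method 4: confirm the bumped version really sorts higher than the original.
dpkg --compare-versions 2.1.1-10 gt 2.1.1-1 && echo "custom package wins"
```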

End Game Re-Redux

Don’t forget your tnsnames.ora. We set up the environment to use TNS_ADMIN=/usr/lib/oracle/11.2/client which means that tora will look for tnsnames.ora there. The easiest way I found was to get the production tnsnames.ora file from the Oracle server itself, and place it in the $TNS_ADMIN directory. Once you have done so, start TOra and enjoy. Remember to start it from the xterm session that has the environment variables set if you have not yet logged out/in.

Once again the build was pretty smooth. I have replicated these instructions 3 times over the course of writing this so there should be very few problems for you if you follow step by step.

As I stated before I could write these up for other distros if anyone is interested. Leave comments if you are.


The TOra homepage
Installing TOra with Oracle support on Ubuntu 8.04LTS (Hardy Heron)
Installing TOra with Oracle support on Ubuntu 9.04 (Jaunty Jackalope)
Installing TOra with Oracle Support on Ubuntu 9.10 (Karmic Koala)
Kubuntu linux

April 12, 2010

Bart Trojanowski
Bart's Blog
» vmlinux on Ubuntu

If you're trying to do post-mortem analysis on a crashed kernel, or trying to find kernel-level bottlenecks with oprofile, you need the decompressed kernel with debug symbols. This comes in the form of a vmlinux file. Some distributions ship debuginfo packages, namely RHEL. On Ubuntu this seems lacking.

[Read More]

February 12, 2010

» Blogrotate #16: The Weekly Roundup of News for System Administrators

Welcome to another edition of Blogrotate. This has been an interesting week in the IT world, with Microsoft security issues being the major focus of attention.


Once again, security flaws in Microsoft Operating Systems caused major problems for system administrators this past week. It began with Microsoft’s Security Response Center’s posting of February’s security bulletin.

Microsoft’s attempts to fix a 17-year-old bug resulted in a large number of computers having problems restarting. More information can be found in Restart issues after installing MS10-015 and Security patch results in BSOD, stops Windows from booting. It appears that this issue may have been caused by machines being previously infected by a rootkit.

Another patch from Microsoft, the reliability update for Windows 7 and Windows Server 2008 R2, turned out to be not so… reliable.

But what was of most concern to many system administrators was Microsoft’s security advisory concerning a vulnerability in the TLS and SSL protocols, since this affects not only the Microsoft Windows operating system but, as TLS/SSL are Internet standards, multiple vendors. Emil Protalinski at Ars Technica provides full coverage of the TLS/SSL flaw in Windows.

Just to prove that Microsoft is not the only one with security problems, Ryan Paul at Ars Technica has an interesting article about a hack announced at Black Hat where a researcher was able to circumvent a Trusted Platform Module (TPM) component. Although it requires physical access, it does prove that even hardware-based protection mechanisms considered “unhackable” are indeed still vulnerable. Here are a second and third link for further reading: Supergeek pulls off ‘near impossible’ crypto chip hack; and Researcher Cracks Security Of Widely Used Computer Chip.


Rumours that Microsoft was interested in purchasing RIM caused a stir this week.


The big news on the training front was that Novell and Canonical are joining forces to bolster Linux Certification and training efforts to compete with Red Hat.

Operating Systems

More from Ubuntu, with Canonical’s new COO Matt Asay speculating that the Apple iPad is attempting to bring about a new paradigm where the operating system is largely invisible to the user and the applications themselves are the operating system.


Computerworld’s Eric Lai had an interesting article discussing the announcement of Ksplice Uptrack. It provides an overview of what the service is and raises concerns about security compliance, support from major vendors, and funding.

Facebook’s previously undocumented chat protocol now supports Jabber/XMPP, so a user may now communicate with contacts via third-party IM clients such as AIM, Pidgin, and so on. Facebook 24/7 anyone?

This wraps up another episode of Blogrotate. See you next week, same Blogrotate channel, same Blogrotate time.

October 30, 2009

» Blogrotate #4: The Weekly Roundup of News for System Administrators

Welcome to the all hallowed eve eve edition of Blogrotate. It was a relatively quiet week, but the 2 standouts are from the OS department, with more reviews of the just-released Windows 7 and the release of Ubuntu 9.10. Here are some of the stories that we took note of this week.

Operating Systems

Ubuntu 9.10 is released. Anyone who reads my blogs knows by now that I am a Kubuntu user and I think that it’s the best desktop Linux available right now. They’ve put a lot of work into this one and it’s the best version of Ubuntu yet, easy to install and use with all the features you could ask for. Ryan Paul at Ars Technica has his own review called Ubuntu 9.10 brings web sync, faster bootup, GNOME 2.28, check it out.

Here’s a short list of some types of Ubuntu you can get, and their niche.

  • Ubuntu – The standard desktop featuring Gnome.
  • Ubuntu Server Edition – Just how it sounds.
  • Ubuntu Netbook Remix – A version of Ubuntu designed to work on your netbook.
  • Kubuntu – The KDE desktop version of Ubuntu. With KDE it’s an easier conversion for Windows users in my opinion.
  • Edubuntu – Edubuntu is an educational operating system that is designed for kids, parents, teachers and schools. I have not tried this one yet, but my 3.5 year old is ready for it.
  • Mythbuntu – A replacement for Windows Media Center featuring MythTV. I use this for a PVR at home, easy install and great interface.
  • Xubuntu – A version of Ubuntu using the Xfce desktop, designed for older or less powerful machines that have trouble with the Gnome or KDE desktops.

Windows 7 is still fresh in the minds of many. If you want an exhaustive review of all the pros and cons of Windows 7, how about trying to get through a 15 page review by Peter Bright. For the impatient, he sums it up at the end saying “…Windows 7 is, overall, a fantastic OS. It builds on a solid platform, and just makes it even better”. Read the full review in Hasta la Vista, baby: Ars reviews Windows 7.

PC Pro has an interesting article up called The Crapware Con. This article has some interesting information on what sort of extra software each of the major manufacturers are adding to your laptop, and what sort of effect this has on your performance. If you have an Acer, Sony or HP laptop they are apparently the worst offenders.


Dan Goodin has an interesting article about a free Microsoft product that can identify and harden applications against common avenues of attack without even needing access to the source code itself. Read the scoop in Free Microsoft security tool locks down buggy apps.

Dan Goodin reports on a new Mozilla site that will check the plugins in your Firefox for old versions which may have security issues and allow you to update them easily. Mozilla service detects insecure Firefox plugins has the full story, and the plugin check page is here.


Paul Lorimer, Group Manager for Microsoft Office Interoperability, writes in his blog that “In order to facilitate interoperability and enable customers and vendors to access the data in .pst files on a variety of platforms, we will be releasing documentation for the .pst file format”. This will open up the specifications for the pst file, used by MS Outlook to store email, making it easier for other software vendors to tap into the file format. See more in Roadmap for Outlook Personal Folders (.pst) Documentation.


The Internet celebrated its second 40th birthday on Thursday marking the date that the first word, “Lo”, was sent between 2 machines at UCLA on October 29, 1969. Get more of the story in Internet pops champagne on (second) 40th birthday. On an unrelated note, this happened 40 years after the 1929 stock market crash.


Neil McAllister at InfoWorld has an interesting article on the rise of the ARM processor as a competitor to Intel’s Atom for mobile devices. Read on in ARM vs. Atom: The battle for the next digital frontier.

Computerworld has an article about the recent Intel release and recall of its SSD firmware update due to issues with data corruption. Intel pulls firmware for SSDs just a day after release has more details. Ars Technica also covered the story in Intel’s SSD firmware brings speed boost, mass death (again).

That’s all we have time for this week folks. Be sure to tune in again next week. Same bat time. Same bat channel.

October 9, 2009

» First Look at Kubuntu 9.10, “Karmic Koala”

I just installed a copy of the titular distro last night and have been playing with it a bit. So far it’s been less trouble than I would have expected from a first beta, and runs well. Get Kubuntu 9.10 Karmic Koala beta 1 here. A word to the wise, this is beta software and not yet ready for prime time.

I installed it on my laptop; I use it for a lot of things, but the data is always expendable. I had installed it in a VM a few days previously, but that was not as satisfying as trying it in the real world as opposed to the idealized world of the VM.

Test Specs

  • Compaq EVO n610c
  • Mobile Intel(R) Pentium(R) 4 – M CPU 2.40GHz
  • TEAC DW-224E-A 24x/24x writer cd/rw
  • Fujitsu MHT2040A 5400RPM drive
  • 512MB DDR
  • D-Link DWL-G630 802.11g Atheros AR2413
  • ATI Radeon Mobility M7 LW [Radeon Mobility 7500]

The Install

From the get-go there was trouble. The install CD would not boot past the initial menu where I had selected the install option. It just stopped and presented no error messages. I tried the alternate CD with the same result. Thanks to the alternate CD I was able to boot with the ‘noapic’ option, which fixed the issue.

Install after that went smoothly. The install has been slick for the last several releases, and this one is no different. The partition tool has been updated and now supports creating LVM and software RAID in the menu (it may have been there in previous versions, but I never noticed it). I decided to use ext4, which is the default now, as I had not trusted it in 9.04 and wanted to give it a spin. I also said yes to using ecryptfs for my home directory (more on that in another post).

The alternate CD is a sloooow way to install. It’s all text based but seems to take much longer when the file copies are going. Maybe I was too excited and impatient. A new distro is always a bit like Christmas morning.

Desktop Problems

The first boot happened as planned. It seemed to be a fast boot but this laptop is not the best test for speed. I logged in and bam!, I hit my first roadblock. My desktop came up fine, but my panel was not rendered correctly. Not only that, the menu came up unreadable and the window dressings on each of my windows were similarly garbled.

I tried to connect to my wireless, but with network-manager running, configuring it using command line tools was not working. I hooked up a wire and updated the system. An ‘apt-get update’ followed by an ‘apt-get upgrade’ installed a massive 302MB of updates. Some were held back, so I also had to do an ‘apt-get dist-upgrade’ to get another 52MB of updates. This did not fix my display problem as I had hoped.

I did some digging through /var/log/Xorg.0.log and everything looked right: it was correctly detecting my video card and using the open source ati driver (this poor sad little card is not supported by fglrx). Playing with the KDE system settings did not yield any results. I was able to get a working desktop using the vesa driver, but that was not satisfying.

The fix was to force xorg to use a different acceleration method. Here’s how you do it. Create a custom xorg.conf file. Kubuntu 9.10 uses xorg 1.6.3 which does not need a config file by default. The xorg.conf can be used to override some settings.

Edit the file using sudo, as the /etc/X11 is owned by root.

sudo vi /etc/X11/xorg.conf

Add this into the file.

Section "Device"
        Identifier      "vga"
        Driver          "ati"
        Option          "AccelMethod"   "EXA"
EndSection

Section "Monitor"
        Identifier      "Configured Monitor"
EndSection

Section "Screen"
        Identifier      "Default Screen"
        Device          "vga"
        Monitor         "Configured Monitor"
EndSection
Log out and restart your X server, or reboot if you prefer.

The key here is the option specifying the AccelMethod as EXA. For some reason xorg was trying to use XAA by default, which is an older method which was replaced by EXA in 2005. Doing the above tells xorg to use EXA by default instead. Once this was done I had a nicely working desktop.

The desktop runs faster than I expected in the low power, low memory conditions. I have managed to set up most of my apps to my liking and got the essentials installed.

Essential Apps

While on the subject, here are a few of my immediate installs after a build. All apps below are referenced by package name, so just do a sudo apt-get install [packagename] to install them.

  • vim-full – ubuntu comes installed with an annoyingly pared down version of vim. Get this one to do stuff like navigating with arrow keys while in insert mode. If you know what I mean, then this is for you.
  • vpnc – A vpn client for linux which is great for connecting to Cisco VPNs.
  • Firefox – I have no idea why this is not installed by default, likely to do with licensing. The K->Applications->Internet menu has an item called “Firefox Installer”, which does not seem to work.
  • mplayer – The most incredibly awesome movie player for linux.
  • pidgin – Multiprotocol chat client. kubuntu installs kopete as the default, but I always prefer pidgin.
  • konversation – A great little IRC program for KDE. While Pidgin does IRC I find the old ways are best for IRC.
  • pan – The only news reader for Linux you’ll ever need.
  • Thunderbird – Another that should be installed by default, but a great mail client. Kontact/KMail is very good too and is included by default.
  • Gimp – The Gimp. Not quite Photoshop still, but excellent in its own right.
  • build-essential – This is a meta package, meaning that it installs a bunch of other packages. These are what you need if you ever want to compile anything. A must have if you compile from source like I do quite often.

I’ll post more as I have more time to play with Karmic. If I find anything of interest, I’ll let you know.

October 8, 2009

» Testing Thunderbird 3: What to do if it ’shreds’ your threads

I use Mozilla Thunderbird at work for reading my email and, since Mozilla Messaging is approaching the release of Thunderbird 3, I decided to give the latest beta a try. I’m an Ubuntu user (8.04 “Hardy Heron” on my workstation) so I sought out a PPA for development versions of Thunderbird, and came across ubuntu-mozilla-daily. I added the repository to my apt config and you can too, here’s how:

Add the following lines to /etc/apt/sources.list using your favourite editor.

# Thunderbird 3.0 beta builds from ubuntu-mozilla-daily
# sudo apt-key adv --recv-keys --keyserver 247510BE
deb jaunty main
deb-src jaunty main

Note: Pardon any wrapping that may occur. It’s really four lines, two of which start with a comment ('#'), one with deb and one with deb-src. Also, be sure to replace the text jaunty with the version of Ubuntu you are using. The ubuntu-mozilla-daily PPA linked above can produce these lines for you if you are unsure. Just click the link that says ‘Not using Ubuntu 9.10 (karmic)?’

When you have added the repository to apt, you will need to run the apt-key command listed in the comment above in order to add the signing key to the apt keyring. If you don’t, you’ll receive warnings that the package cannot be verified. Also note that adding the repository this way may cause apt to report that there are updates available for other installed packages. I haven’t tested that particular detail as I was doing this within the confines of a virtual machine.

Once this is done you can install Thunderbird 3 using the following commands:

sudo apt-get update
sudo apt-get install thunderbird-3.0

There will now be an entry in Applications -> Internet named Shredder 3 Mail/News. Upon running it and going through setup of my account, one of the first things I discovered is that Thunderbird 3 was not threading by subject as I had been used to. I spent some time researching why this had changed and if the behaviour was configurable. Eventually I came across a page on the Mozilla wiki that explained Thunderbird’s threading implementation in detail.

It turns out that the default value for two of the settings controlling thread behaviour changed between versions. By default, Thunderbird 3 uses strict threading which means that threading by subject is disabled. There are two settings that control this behaviour though: mail.strict_threading and mail.thread_without_re. The first setting enables/disables threading by subject while the second allows subject-threading even if “Re:” isn’t present. According to the wiki page there is also a setting new to the 3.x branch called mail.correct_threading which threads correctly regardless of the order messages are added to a folder.

These settings are important to me, as the ticketing system Pythian uses sends email notifications when requests are updated/modified, and it cannot use the References and In-Reply-To message headers. Also, the subject of these messages does not include the text “Re:”.

All of this means I needed to toggle the three thread-related settings from their default value in order to get the behaviour I expect. To do so I went into Edit -> Preferences -> Advanced and clicked Config Editor. After promising to be careful I filtered for “thread” and observed the following settings and their values:

Preference Name          Value
mail.correct_threading   true
mail.strict_threading    true
mail.thread_without_re   false

I double-clicked each of these to change their value and closed the dialog. I refreshed the view of my inbox and . . . still no threading! It occurred to me that I would need to rebuild the index, so I went into Edit -> Folder Properties and clicked the Rebuild Index button. I have a rather large inbox, so after going to get a coffee and checking in a while later I found that Thunderbird 3 was now displaying messages in my inbox in exactly the same way as had the previous version.
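If you prefer not to click through the Config Editor, the same three preferences can also be dropped into a user.js file in the Thunderbird profile directory. The sketch below uses a placeholder profile path (on Ubuntu the real one usually looks like ~/.mozilla-thunderbird/xxxxxxxx.default), and the values shown are the toggled ones described above:

```shell
# Append the three threading preferences to user.js.
# PROFILE is a placeholder path; point it at your real profile directory.
PROFILE="${PROFILE:-/tmp/thunderbird-profile}"
mkdir -p "$PROFILE"
cat >> "$PROFILE/user.js" <<'EOF'
user_pref("mail.correct_threading", false);
user_pref("mail.strict_threading", false);
user_pref("mail.thread_without_re", true);
EOF
```

Thunderbird reads user.js at startup, so the index may still need rebuilding afterwards, just as with the Config Editor route.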

Now that that was out of the way, I could continue exploring the rest of the features the new version of Thunderbird has to offer, including new search functionality with advanced filtering, and user interface improvements such as a tabbed interface and a redesigned toolbar.

June 18, 2009

» Ubuntu 9.04 (Jaunty Jackalope), vpnc, and resolvconf

The environment

  • Ubuntu 9.04 Jaunty Jackalope
  • vpnc 0.5.3
  • resolvconf 1.43

The problem

Connecting to a Cisco VPN device with vpnc on Jaunty. If you use vpnc and vpnc-disconnect to bring the connection up and down, all works fine. If you leave the connection idle too long and are disconnected from the other end, however, resolv.conf is not always updated. This is a problem because DNS lookups in a browser will then experience delays: the DNS servers from your VPN connection are no longer available.

The easiest way to check this is to login to your vpn and check the contents of /etc/resolv.conf. For example, before you log in, your resolv.conf may look something like this (only the IPs have been changed to protect the innocent).

# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
nameserver 192.0.2.1

After connecting, you’ll see a different resolv.conf.

# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
nameserver 198.51.100.10
nameserver 198.51.100.11
nameserver 192.0.2.1
search corp.example.com

It would be easier to see with real IPs, but the vpnc daemon adds two more servers, and sometimes changes or adds the search domain. This is great—the first DNS servers you will look up against on your vpn connection are those for the vpn, which makes it easier to resolve IPs on the corporate network.

The trouble begins when the connection times out. vpnc is very good about cleaning up its routing tables, but for some reason it does not always fix the resolv.conf as it should. This is because vpnc is not telling the resolvconf package to remove the config for the tunnel device.

Interlude: resolvconf

resolvconf is a package used primarily by the system to manage the name server information in /etc/resolv.conf dynamically. It replaces the old static resolv.conf file. Before moving to jaunty, I was using 8.04 Hardy Heron, and still do at work. The addition of resolvconf seems to coincide with the rise of network-manager for managing network interfaces in Linux. They work great when they work, but when problems arose, the old methods were much less confusing.

Networking utilities wishing to make use of resolvconf will drop a file into the /etc/resolvconf/run/interface directory. resolvconf will then combine this with other base files (located in /etc/resolvconf/resolv.conf.d) to create /etc/resolvconf/run/resolv.conf. This file is symbolically linked to /etc/resolv.conf.

So to make things clear, resolvconf will:

  • Take the base config files from /etc/resolvconf/resolv.conf.d:
    # ls -al
    total 16
    drwxr-xr-x 2 root root 4096 Apr 26 23:18 .
    drwxr-xr-x 6 root root 4096 Apr 26 23:18 ..
    -rw-r--r-- 1 root root    0 Aug  9  2006 base
    -rw-r--r-- 1 root root  151 Aug  9  2006 head
    -rw-r--r-- 1 root root  116 Apr 26 22:06 original
    -rw-r--r-- 1 root root    0 Apr 26 23:18 tail
  • Combine them with the information for each interface in /etc/resolvconf/run/interface;
    # ls -al
    total 16
    drwxr-xr-x 2 root root 4096 Jun 15 22:10 .
    drwxr-xr-x 3 root root 4096 Jun 15 22:48 ..
    -rw-r--r-- 1 root root   87 Jun 10 23:04 NetworkManager
    -rw-r--r-- 1 root root   91 May 23 21:41 eth0
  • Output one happy DNS configuration in /etc/resolvconf/run . . . 
    # cat resolv.conf
    # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
    nameserver 192.0.2.1
  •  . . . which is a symbolic link to /etc/resolv.conf
    # ls -al /etc/resolv.conf
    lrwxrwxrwx 1 root root 31 Apr 29 16:06 /etc/resolv.conf -> /etc/resolvconf/run/resolv.conf

There you go—clear as mud.
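The combining step is easy to simulate. The toy script below is not the real resolvconf implementation, just a sketch of the concatenation it performs, using /tmp stand-ins for the real directories and RFC 5737 placeholder addresses:

```shell
# Simulate resolvconf's assembly of resolv.conf (demo only).
demo=/tmp/resolvconf-demo
mkdir -p "$demo/resolv.conf.d" "$demo/interface"

# Base files (stand-ins for /etc/resolvconf/resolv.conf.d/{head,tail}).
echo '# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)' \
    > "$demo/resolv.conf.d/head"
: > "$demo/resolv.conf.d/tail"

# Per-interface files (stand-ins for /etc/resolvconf/run/interface/*).
echo 'nameserver 192.0.2.1' > "$demo/interface/eth0"
echo 'nameserver 198.51.100.53' > "$demo/interface/tun0"

# Combine into one resolv.conf, as resolvconf does.
cat "$demo/resolv.conf.d/head" "$demo/interface/"* "$demo/resolv.conf.d/tail" \
    > "$demo/resolv.conf"
cat "$demo/resolv.conf"
```

Leave a stale per-interface file behind, as vpnc does, and the stale nameservers stay in the combined output—which is exactly the bug described below.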

End Interlude

So you have been disconnected from the vpn. If you look in /etc/resolvconf/run/interface, you will see a file left around from the session. For example, if your vpn connection is interface tun0 (which mine always is), there will be this file.

ls -al /etc/resolvconf/run/interface
-rw-r--r-- 1 root root   91 Jun 15 21:41 tun0

All this information just to get to . . . 

The workaround

The workaround for this is simple. resolvconf can be used from the command line to add, remove, or update this information on the fly. In this case, we want to remove an interface. You’ll need to know what the interface for your vpn tunnel is. tun0 is the most common with vpnc, but if you are not sure, you can consult the /etc/resolvconf/run/interface directory as shown above and check the file name. Once you have that, the solution is simple.

# sudo /sbin/resolvconf -d tun0

Replace tun0 with your interface if it’s different.

Scheduled workaround

It occurred to me that if I need to do this, it’s annoying to do it by hand every time. Since vpnc is not cleaning up after itself, it makes sense to do the cleanup automatically. We can do this using a cron job. For ease of use, I will add this to the /etc/crontab file as root, because the vpnc scripts need to be run as root to work.

sudo vi /etc/crontab

Note: As we all know I prefer vi from the command line, but you can use any old editor that you want, providing you are running it with root credentials so that you can write to the crontab file.

Now you need to add this line at the bottom of the file (allowances must be made here for paths; this works on my Ubuntu system). For the sake of argument, we’ll run this every 10 minutes.

*/10 * * * * root if [ -e /etc/resolvconf/run/interface/tun0 -a "`pidof vpnc`" = "" ] ; then /sbin/resolvconf -d tun0; fi

What this does is check whether the tun0 file exists while vpnc is no longer running; if so, it runs the command to remove it, which regenerates resolv.conf and removes the stale DNS information.
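The same logic can be pulled out of the crontab into a small POSIX-sh function, which is easier to read and to test. The paths and command defaults below are the ones from this post; the function name itself is my own invention for illustration:

```shell
#!/bin/sh
# Remove a stale resolvconf interface entry once vpnc has exited.
# Defaults match the paths discussed in this post; override the
# arguments to point at other locations.
cleanup_vpn_dns() {
    iface="${1:-tun0}"
    rundir="${2:-/etc/resolvconf/run/interface}"
    resolvconf_cmd="${3:-/sbin/resolvconf}"

    # Only act when a stale interface file exists and vpnc is not running.
    if [ -e "$rundir/$iface" ] && [ -z "$(pidof vpnc 2>/dev/null)" ]; then
        "$resolvconf_cmd" -d "$iface"
    fi
}
```

Note the single `=` inside the test: /etc/crontab jobs run under /bin/sh, and dash rejects the bash-only `==` operator.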


I know this was a lot of ’splaining for a simple one-line fix, but having worked through this from scratch, I thought it might interest someone to see the process.

There is an open bug on this issue, and you can find it here: “vpnc does not always call resolvconf -d on termination.” This bug has been around for a couple of versions now. The vpnc project home page also states in its known bug list, “vpnc looses [sic] connection with some targets, even before the rekey-timer expires most probably due bugs with keepalive, dead-peer-detection or something else,” which may be the cause of this issue, because if the session does not die cleanly, it may also not clean up properly.

I have downloaded the source and straced my last session, so I may try my hand at fixing it myself. An initial look at it yielded no results, but I have not worked with C in many, many years, so it will take time. If you would like to help fix this bug check the bug report or contact the maintainer.

Till next time.

June 9, 2009

» How to Recover Data from a Dead MacBook

This post might seem outside of our focus, but life brings all kinds of challenges. A friend of mine bought a MacBook when she was on vacation in the USA. For obvious reasons, Macs are more common on the other side of the Atlantic. In Europe it’s still rare to see a person using a Mac as a personal computer (no flame intended, just stating a fact).

Her Mac completely broke down. The service guys told her she’d need to replace the motherboard, which would cost almost the same as a new computer. The problem was that her Mac wouldn’t even start, and all the data she had on the hard drive was stuck in the neat white box without any signs of life.

Sure, I said. I’m a computer guy; I can recover it, can’t I?

I had never worked with a Mac before, so I started with some initial research to find out what options I had with the hardware available in my home computer den.

I came to know that Macs use a filesystem called HFS+, which can’t be read from 32-bit Windows. Great, I thought, I had two options—find someone else with a Mac or get it mounted on Linux.

Fortunately, I have a Linux box at home, so it should be easy. I unscrewed the MacBook, and behind the battery there was a 2.5″ SATA drive. To connect it, I needed an interface between the 2.5″ SATA drive and USB. For this purpose I use a QCP converter cable, which allows you to connect internal 2.5″/3.5″ ATA/SATA drives directly to a USB port. I really like this piece of hardware—it’s exactly the kind of gadget you want to have around for saving notebook drives.

After connecting the disk, I found that my OEL5.1 wouldn’t be friends with it. I simply couldn’t find the right hfsplus module for this distribution. Fortunately, there were many references about mounting hfsplus disks on Ubuntu Linux, which is my second system.

I downloaded the required package and dependency libraries for Ubuntu from here:

The package installation is straightforward:

root@silverbox:~# dpkg -i libhfsp0_1.0.4-10ubuntu1_i386.deb libc6_2.3.6-0ubuntu20_i386.deb hfsplus_1.0.4-10ubuntu1_i386.deb

After that, I needed to load the hfsplus module:

root@silverbox:~# modprobe hfsplus
root@silverbox:~# cat /proc/filesystems | grep hfs

Next, I had to check which partition is the one I need to mount. For this purpose, I used parted:

root@silverbox:~# parted /dev/sdd
GNU Parted 1.7.1
Using /dev/sdd
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print

Disk /dev/sdd: 160GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End    Size   File system  Name                  Flags
 1      20.5kB  210MB  210MB  fat32        EFI System Partition  boot
 2      210MB   160GB  160GB  hfs+         Untitled

Knowing the partition containing the data was /dev/sdd2, I could mount it.

root@silverbox:~# mount -t hfsplus /dev/sdd2 /mnt/macosx

The next problem I faced was privileges. The directories I needed to save were owned by a non-existent user, and so I wasn’t able to access that path.

To work around this, I created a new user and assigned it the directory owner’s UID.

root@silverbox:~# useradd macuser
root@silverbox:~# usermod -u 501 macuser

This allowed me to access the directory I needed to recover, and copy the files to an NTFS disk, which would be readable by a regular Windows machine.

May 5, 2009

» First Impressions of Kubuntu 9.04

I said I would follow up. Who knew I actually would?

I love my new PC. It’s been a few years since I did a build for myself, so I took my time lovingly feeling every piece for the tactile joy of it, and completely ignoring any printed material that came with the parts. Well, I did read the bit about the front panel connectors, that one is kind of a must when it’s not printed on the board.

For the record it consists of an ASUS M3A78-EM with an AMD Athlon 64 X2 7750 Black Box. I was on a budget and could not go for a quad core yet, so I made sure I got a mobo that would stand some upgrades when the price point drops. Check out the ports on the mobo, it has everything. Check out the cache on the CPU (1MB L2, 2MB L3). I am sticking with the on-board video for now; I prefer NVidia to ATI, but for the moment it will do. It fit the price.

All of that has nothing to do with Kubuntu. Since I got the parts together late, I did not have as much time to play as I would have liked, but I do know that it boots very quickly. I will time it this weekend, but it was around 15 seconds from GRUB to KDM. I did some installs of apps that were not shipped with the default desktop, such as Firefox, mplayer, fglrx, and a few other choice bits I like (which I will mention by name in a follow-up). I was fairly impressed so far.

Now for the bad news. From the get-go, KDE 4.2 let me down. When 4.0 arrived with Kubuntu 8.10, I tried it for a day or two and was very unimpressed. This time I thought it must have had some improvements; it’s now two minor revisions beyond the dreaded .0 version. While it is slightly more stable, within minutes I had had my first crash on the panel, and I had several more, none hardware-related. Now these could be the fault of the applet developers and not KDE itself, but it certainly soured my first look. I will probably nuke this install and reinstall with the KDE3 remix over the weekend.

Once I got fglrx running, I just had to install Nexuiz. I did buy faster hardware and lots more RAM; how else was I to see it in action? I gotta say it ran smoothly. So what if I can’t hit anything or bunny-hop my way out of danger.

Till next time, keep your clock multiplier high and your temperature low.

April 24, 2009

» End of School with Linux; Ubuntu 9.04 Released

Hi folks. I am back for the second in what will eventually be a long line of infrequent updates. Did you miss me?

End of School with Linux

OCLUG (The Ottawa Canada Linux Users Group) is putting on an event called—you guessed it—End of School with Linux. This is happening on April 28, 2009 starting at 11am at the University of Ottawa in the SITE building, room C0136. The purpose of the event is to help people with their Linux systems, install Linux, fix issues, and just generally help out in the community. Your humble blogger will be there, manning the booth from 1200-1600, so come on down. And tell a friend, too.

More details can be found at the CSSA’s page. Click here for a map of the campus.

Also check out the OCLUG home page.

Ubuntu 9.04 Released

Ubuntu 9.04 was released today. I have not yet looked at any reviews of the release candidates, but I am on my way out to pick up some new hardware on which to install it tonight.

If you were not on the sites downloading it early today you may have to wait, as most mirrors are very busy. If you have a newsgroup account, the ISOs for 32-bit and 64-bit are there in both desktop and server flavours. For some reason, the Kubuntu and Edubuntu releases are not in the newsgroups, but you can always install the meta-packages after installing from the main Ubuntu dist using apt-get install kubuntu-desktop.

Over the next few days I will be doing an install of the 64-bit version on my hefty(ish) new system, and a 32-bit install on my sorely-outdated laptop. I will post results.

Don’t forget to check out the Ubuntu web site.

One final thought

Did I mention I am going to get new hardware tonight? It’s been years! I am blogging and doing work, but all I hear in my head is “EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE”! It’s a wonderful kind of feeling.

Till next time.

June 30, 2008

Bart Trojanowski
Bart's Blog
» Authenticating Linux against OSX LDAP directory

I was recently asked by a colleague, and now also a client, to look over the LDAP configuration on his Ubuntu boxen. He was having issues with the root account. The problem turned out to be that the Ubuntu box was trying to get root authentication from LDAP. It successfully found an LDAP account on the OSX LDAP server, but was unable to log in since that account is disabled. The solution was to filter out the root account from the LDAP reply using the pam_filter directive in /etc/ldap.conf. Jay was also kind enough to document his setup for others trying to accomplish a similar task.
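For reference, the pam_filter line in question looks something like the fragment below. This is an illustrative sketch, not Jay's exact configuration; the filter that works for you will depend on your directory's schema and on how libpam-ldap combines pam_filter with its login-attribute lookup.

```
# /etc/ldap.conf (libpam-ldap): exclude the root account from LDAP auth.
# Illustrative only; attribute names depend on the OSX directory schema.
pam_filter !(uid=root)
```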

side note: Jay briefly showed me his OSX/Linux integration... looks pretty cool. Particularly the LDAP directory and automount of OSX exported volumes for users. OSX seems to make certain things really easy.

May 13, 2008

» Debian OpenSSL Package Introduces Vulnerability

The highlight today of probably every Linux-related mailing list and IRC channel was the announcement of CVE-2008-0166, affecting OpenSSL libraries on Debian-based Linux distributions, including the popular Ubuntu.

According to the Debian Security Advisory, a change made to Debian’s OpenSSL package makes its random number generator predictable. Obviously this is less than desirable in a random number generator used for things like, say, all of your SSH keys.

The vulnerability has been present since September of 2006, and Debian strongly suggests throwing your old keys out completely:

It is strongly recommended that all cryptographic key material which has been generated by OpenSSL versions starting with 0.9.8c-1 on Debian systems is recreated from scratch.

Debian has now disabled public key authentication on their project servers until further notice, and are generating new keys for those servers and new certificates for affected services.

So all you Debian and Ubuntu folks out there will probably want to do the same for your own keys and certificates. Note that this patch was never used by the upstream OpenSSL team nor by other distros like Fedora or RHEL (or CentOS), so they are not affected.
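Regenerating key material is mostly mechanical. As an illustration (the file paths and subject below are placeholders, not anything from the advisory), a fresh RSA key and self-signed certificate can be produced with the openssl command line:

```shell
# Generate a new 2048-bit RSA key, then a self-signed certificate from it.
# The /tmp paths and CN are placeholders; substitute your real locations.
openssl genrsa -out /tmp/server.key 2048
openssl req -new -x509 -key /tmp/server.key -out /tmp/server.crt \
    -days 365 -subj "/CN=vpn.example.com"
```

SSH keys are regenerated the same way with ssh-keygen; the important part, per the advisory, is that anything generated on an affected Debian or Ubuntu box since September 2006 gets replaced outright.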

March 2, 2008

Bart Trojanowski
Bart's Blog
» fixing X for GeodeLX

Recently I have been doing a bit of contract work for Symbio Technologies. They have had me do various little projects part-time. Most recently I got a chance to work on video drivers for the Geode family.

Here is the progress...

[Read More]