October 15, 2012

» Adding Networks to Exadata: Fun with policy routing

I’ve noticed that Exadata servers are now configured to use Linux policy routing. Peeking at My Oracle Support, note 1306154.1 goes into a bit more detail about this configuration. It’s apparently delivered by default with factory build images and later. The note goes on to explain that this configuration was implemented because of asymmetric routing problems associated with the management network:

Database servers are deployed with 3 logical network interfaces configured: management network (typically eth0), client access network (typically bond1 or bondeth0), and private network (typically bond0 or bondib0). The default route for the system uses the client access network and the gateway for that network. All outbound traffic that is not destined for an IP address on the management or private networks is sent out via the client access network. This poses a problem for some connections to the management network in some customer environments.

It goes on to mention a bug where this was reported:


The bug is not public, but the title does show the type of error messages that would appear when a packet with a non-local source address is sent out.

This configuration is implemented using Oracle Linux’s Red Hat-style /etc/sysconfig/network-scripts files, with matched rule- and route- files for each interface.

A sample configuration, where the management network is in the 10.10.10/24 subnet, is:

[root@exa1db01 network-scripts]# cat rule-eth0
from table 220
to table 220
[root@exa1db01 network-scripts]# cat route-eth0
 dev eth0 table 220
default via dev eth0 table 220

This configuration tells traffic originating from the IP (which is the management interface IP on this particular machine), and also traffic destined to this address, to be directed away from the regular system routing table to a special routing table, 220. route-eth0 configures table 220 with two routes: one for the local network, and a default route through a router on the network.

This contrasts with the default gateway of the machine itself:

[root@exa1db01 network-scripts]# grep GATEWAY /etc/sysconfig/network

The difference between this type of policy routing and regular routing is that traffic with the _source_ address of the management interface will automatically go through the table-220 default gateway, regardless of the destination. (The bible for Linux routing configuration is the Linux Advanced Routing and Traffic Control HOWTO, for those looking for more details.)

I ran into an issue with this configuration when adding a second external network on the bondeth1 interface. I set up the additional interface configuration for a network,

[root@exa1db01 network-scripts]# cat ifcfg-bondeth1
BONDING_OPTS="mode=active-backup miimon=100 downdelay=5000 updelay=5000 num_grat_arp=100"

I also added rule and route entries:

[root@exa1db01 network-scripts]# cat rule-bondeth1
from table 211
to table 211
[root@exa1db01 network-scripts]# cat route-bondeth1
 dev bondeth1 table 211
 via dev bondeth1 table 211
default via dev bondeth1 table 211

This was a dedicated data guard network to a remote server, IP

The problem with this configuration was: it didn’t work. Using tcpdump, I could see incoming requests arrive on the bondeth1 interface, but the replies went out the system default route on bondeth0, never reaching their destination. After some digging, I found the cause of the problem: in order to determine the packet source IP, the kernel was looking up the destination in the default routing table (table 255), and the route for the network was in the non-default table 211. So the packet followed the default route instead, got a source address on the client-access network, and never matched any of the routing rules for the data guard network.

The solution ended up being rather simple: taking out the “table 211” qualifier for the data guard network route, effectively putting it in the default routing table:

[root@exa1db01 network-scripts]# cat route-bondeth1
 dev bondeth1 table 211
default via dev bondeth1 table 211
 via dev bondeth1

And then we ran into a second issue: the main interface IP could now be reached, but not the virtual IP (VIP). This is because the rule configuration, taken from the samples, doesn’t list the VIP address at all. To avoid this issue, and to handle VIP addresses migrating from other cluster nodes, we used a whole-subnet netmask in the rule file, making all addresses in the data guard network use this particular routing rule:

[root@exa1db01 network-scripts]# cat rule-bondeth1
from table 211
to table 211

So to sum up, when setting up interfaces in a policy-routed Exadata system remember to:

  • Set up the interface itself and any bonds using ifcfg- files
  • Create a rule- file for the interface, encompassing every possible address the interface could have (I added the entire IP subnet); add “from” and “to” lines with a unique routing table number
  • Create a route- file for the interface, listing a local network route and a default route with the default router of the subnet, all using the table number defined in the previous step
  • Add to the route- file any static routes required on this interface, but don’t add a table qualifier

The final configuration:

[root@exa1db01 network-scripts]# cat ifcfg-eth8
[root@exa1db01 network-scripts]# cat ifcfg-eth12
[root@exa1db01 network-scripts]# cat ifcfg-bondeth1
BONDING_OPTS="mode=active-backup miimon=100 downdelay=5000 updelay=5000 num_grat_arp=100"
[root@exa1db01 network-scripts]# cat rule-bondeth1
from table 211
to table 211
[root@exa1db01 network-scripts]# cat route-bondeth1
 dev bondeth1 table 211
default via dev bondeth1 table 211
 via dev bondeth1

September 20, 2012

» Troubleshooting ORA-27090 async I/O errors with systemtap

Last week I ran into an issue on a high-volume Oracle database, whereby sessions were periodically failing with ORA-27090 errors. Job queue processes were also seeing this error, and showing messages like this in the database alert log:

Tue Sep 11 20:56:15 2012
Errors in file /orahome/oracle/base/diag/rdbms/dbm/dbm1/trace/dbm1_j001_126103.trc:
ORA-27090: Unable to reserve kernel resources for asynchronous disk I/O
Linux-x86_64 Error: 11: Resource temporarily unavailable
Additional information: 3
Additional information: 128
ORA-27090: Unable to reserve kernel resources for asynchronous disk I/O
Linux-x86_64 Error: 11: Resource temporarily unavailable
Additional information: 3
Additional information: 128

The tracefile just showed the same ORA-27090 messages, so nothing particularly useful there. oerr is of no help:

$ oerr ora 27090
27090, 00000, "Unable to reserve kernel resources for asynchronous disk I/O"
// *Cause: The system call to reserve kernel resources for asynchronous I/O
// has failed.
// *Action: Check errno

There’s a known bug, 7306820 “ORA-7445 [krhahw] / ORA-27090 during file header read. Instance may crash”, but this bug is fixed in, and this database is running

And on top of that, it’s an Exadata system, so I/O to storage servers goes over the InfiniBand network rather than using async I/O (AIO) calls.

A web search turned up an entry from DBHK’s blog, pointing to the value of aio-max-nr being set too low. However, aio-max-nr was actually set at the recommended value, matching Exadata defaults as well:

# cat /proc/sys/fs/aio-max-nr

The Linux kernel documentation has a brief but meaty description of this parameter:

aio-nr & aio-max-nr:

aio-nr is the running total of the number of events specified on the io_setup system call for all currently active aio contexts. If aio-nr reaches aio-max-nr then io_setup will fail with EAGAIN. Note that raising aio-max-nr does not result in the pre-allocation or re-sizing of any kernel data structures.
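The accounting the documentation describes can be modeled in a few lines. This is a toy sketch, not the kernel’s actual code: each io_setup adds its requested events to the global aio-nr, and fails with EAGAIN once the total would pass aio-max-nr:

```python
import errno

class AioAccounting:
    """Toy model of the kernel's aio-nr / aio-max-nr bookkeeping."""
    def __init__(self, aio_max_nr):
        self.aio_max_nr = aio_max_nr
        self.aio_nr = 0            # running total across all contexts
        self._contexts = {}
        self._next_id = 0

    def io_setup(self, nr_events):
        # The failure mode behind ORA-27090 / "Resource temporarily unavailable"
        if self.aio_nr + nr_events > self.aio_max_nr:
            raise OSError(errno.EAGAIN, "Resource temporarily unavailable")
        self.aio_nr += nr_events
        self._next_id += 1
        self._contexts[self._next_id] = nr_events
        return self._next_id

    def io_destroy(self, ctx_id):
        self.aio_nr -= self._contexts.pop(ctx_id)

kernel = AioAccounting(aio_max_nr=1024)
ctx = kernel.io_setup(1000)       # fine: 1000 <= 1024
try:
    kernel.io_setup(128)          # would push aio-nr to 1128: EAGAIN
except OSError as e:
    print("io_setup failed:", e.strerror)
kernel.io_destroy(ctx)            # destroying the context releases its events
kernel.io_setup(128)              # now succeeds
```

Note that the check is on the running total, not per process, which is why one session hog can starve everyone else.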

Having a peek at aio-nr:

# cat /proc/sys/fs/aio-nr

We’re within 2 of the absolute limit, so it looks highly likely that this limit is indeed the problem. But the question remains: who is using these AIO events? This DB is a huge session hog (8000+ concurrent sessions), but even so, 3m is a pretty high limit. And at this point we can’t even be sure that it’s database processes using up the AIO events.

The only AIO-related information in /proc (or /sys for that matter) is the two files in /proc/sys/fs. To go into more detail requires some more tools.

Solaris admins will no doubt be familiar with DTrace, a kernel tracing framework that can expose all kinds of information from inside the OS kernel. Oracle has ported DTrace to Linux, but it requires the latest-and-greatest UEK2 kernel, which is not yet supported on Exadata.

I came across another tool that also allows kernel inspection, and _is_ available in Oracle Linux 5: systemtap. systemtap hooks into the call stack, allowing function calls to be traced, arguments captured, and, if you’re really brave, even modified.

With dependencies, I ended up needing to add four packages. As this machine doesn’t (yet) have a working yum repository, I used to obtain the following:


The avahi package is a tool for plug-and-play networking that I don’t exactly want running on a server, but the systemtap binary is linked to it for remote compilation capability. Avahi configures itself to auto-start on next boot, so I disabled that:

# chkconfig avahi-daemon off
# chkconfig avahi-dnsconfd off

The systemtap packages complained about missing kernel package dependencies, since this system is running’s UEK kernel, naming the kernel package kernel-u2k instead. I ended up installing them with the --nodeps option to skip dependency checking.

I couldn’t find any pre-made scripts to monitor AIO, but a 2008 presentation from Oracle Linux engineering does have a bullet point on it:

• Tracking resources tuned via aio_nr and aio_max_nr

So based on some of the many example scripts I set out to build a script to monitor AIO calls. Here is the end result:

stap -ve '
global allocated, allocatedctx, freed

probe syscall.io_setup {
  allocatedctx[pid()] += maxevents; allocated[pid()]++;
  printf("%d AIO events requested by PID %d (%s)\n",
  	maxevents, pid(), cmdline_str());
}

probe syscall.io_destroy {freed[pid()]++}

probe kprocess.exit {
  if (allocated[pid()]) {
     printf("PID %d exited\n", pid());
     delete allocated[pid()];
     delete allocatedctx[pid()];
     delete freed[pid()];
  }
}

probe end {
  foreach (pid in allocated) {
    printf("PID %d allocated=%d allocatedevents=%d freed=%d\n",
      pid, allocated[pid], allocatedctx[pid], freed[pid]);
  }
}'
Sample output (using systemtap’s -v verbose option to see compilation details):

Pass 1: parsed user script and 76 library script(s) using 147908virt/22876res/2992shr kb, in 130usr/10sys/146real ms.
Pass 2: analyzed script: 4 probe(s), 10 function(s), 3 embed(s), 4 global(s) using 283072virt/49864res/4052shr kb, in 450usr/140sys/586real ms.
Pass 3: using cached /root/.systemtap/cache/11/stap_111c870f2747cede20e6a0e2f0a1b1ae_6256.c
Pass 4: using cached /root/.systemtap/cache/11/stap_111c870f2747cede20e6a0e2f0a1b1ae_6256.ko
Pass 5: starting run.
128 AIO events requested by PID 32885 (oracledbm1 (LOCAL=NO))
4096 AIO events requested by PID 32885 (oracledbm1 (LOCAL=NO))
128 AIO events requested by PID 69099 (oracledbm1 (LOCAL=NO))
4096 AIO events requested by PID 69099 (oracledbm1 (LOCAL=NO))
128 AIO events requested by PID 69142 (oracledbm1 (LOCAL=NO))
4096 AIO events requested by PID 69142 (oracledbm1 (LOCAL=NO))
128 AIO events requested by PID 69099 (oracledbm1 (LOCAL=NO))
128 AIO events requested by PID 69142 (oracledbm1 (LOCAL=NO))
128 AIO events requested by PID 32885 (oracledbm1 (LOCAL=NO))
4096 AIO events requested by PID 69142 (oracledbm1 (LOCAL=NO))
4096 AIO events requested by PID 69099 (oracledbm1 (LOCAL=NO))
128 AIO events requested by PID 69142 (oracledbm1 (LOCAL=NO))
128 AIO events requested by PID 69099 (oracledbm1 (LOCAL=NO))
(and when control-C is pressed):

PID 99043 allocated=6 allocatedevents=12672 freed=3
PID 37074 allocated=12 allocatedevents=25344 freed=6
PID 99039 allocated=18 allocatedevents=38016 freed=9
PID 69142 allocated=24 allocatedevents=50688 freed=12
PID 32885 allocated=36 allocatedevents=76032 freed=18
PID 69099 allocated=6 allocatedevents=12672 freed=3
Pass 5: run completed in 0usr/50sys/9139real ms.

It’s quite obvious here that the AIO allocations are all happening from oracle database processes.

From the summary output we can see that each process seems to run io_setup twice as often as io_destroy; kernel gurus may have an answer to this, but I suspect it has more to do with the data gathering than a massive leak in AIO events.

But the more interesting result is the frequent allocation of 4096 AIO events at a time. On a database with 8000 connections, that would be over 10 times the current limit.
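To put numbers on that claim (assuming, since the exact value is redacted above, the common Exadata aio-max-nr default of 3145728, i.e. the “3m” mentioned earlier):

```python
# Assumed value: 3145728 is a common Exadata default for aio-max-nr;
# the actual (redacted) limit above may differ.
aio_max_nr = 3145728
sessions = 8000
events_per_setup = 4096          # the allocation size seen in the stap output

demand = sessions * events_per_setup
print(demand)                          # 32768000 events wanted
print(round(demand / aio_max_nr, 1))   # 10.4 -- "over 10 times" the limit
```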

The only major downside of increasing this limit seems to be the risk of exhausting kernel memory. From a 2009 post to the linux-kernel mailing list:

Each time io_setup is called, a ring buffer is allocated that can hold nr_events I/O completions. That ring buffer is then mapped into the process’ address space, and the pages are pinned in memory. So, the reason for this upper limit (I believe) is to keep a malicious user from pinning all of kernel memory.
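That pinning cost is easy to bound for the 50-million limit discussed next: a struct io_event is 32 bytes on x86-64 (four 64-bit fields), so even with every event allocated, the pinned ring buffers come to roughly 1.6 GB. This is a back-of-envelope upper bound that ignores per-context page rounding:

```python
# sizeof(struct io_event) is 32 bytes on x86-64 (4 x 64-bit fields);
# a rough upper bound, ignoring per-context page rounding overhead.
AIO_MAX_NR = 50_000_000
IO_EVENT_BYTES = 32

worst_case_pinned = AIO_MAX_NR * IO_EVENT_BYTES
print(worst_case_pinned / 1e9)   # 1.6 (GB), if every event were allocated
```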

In consultation with Oracle support, we set aio-max-nr to 50 million, enough to accommodate three databases with 16k connections all allocating 4096 AIO events. Or in other words, way more than we ever expect to use.

# sysctl -w fs.aio-max-nr=50000000

And since this change, the ORA-27090 errors have gone away.

March 27, 2012

» Using Oracle VM with Amazon EC2

If you’re planning on running Oracle VM with Amazon EC2, there are some important limitations you should know about.

As part of my work getting the Oracle Linux Unbreakable Enterprise Kernel 2 working (yeah that’s a mouthful) I tried using the Oracle-supplied Oracle Linux 6 AMI images that are listed as community AMIs by Amazon:

[marc@shakybox2 ~]$ ec2-describe-images -a --filter "name=*Oracle Linux 6*"
IMAGE   ami-87a076ee    936851542197/Oracle Linux 6.1 x86_64 - OVM      936851542197    available       public          x86_64  machine aki-4e7d9527    ebs     paravirtual     ovm
BLOCKDEVICEMAPPING      /dev/sda                snap-ea95e18e   20

The “OVM” part here is interesting. Amazon EC2 is typically set up on their own version of Xen, but this image uses a different, though similarly Xen-based, hypervisor: Oracle VM. This appears to be a result of the Oracle-Amazon partnership so touted by Larry Ellison in his OpenWorld 2010 keynote.

The AMI itself launched as expected (once I had set the instance type to large; it seems we need the 7.5G of RAM the large instance provides!). But after the initial install, I was completely unable to use any kernel other than the 2.6.32-100.34.1.el6uek.x86_64 kernel originally supplied.

Every time, the instance would be listed as running but unreachable, with the console log showing a variation of:

"main" "root=/dev/sda1" "ro" "4"
vbd 2049 is hd0
******************* BLKFRONT for device/vbd/2049 **********

backend at /local/domain/0/backend/vbd/313/2049
Failed to read /local/domain/0/backend/vbd/313/2049/feature-barrier.
Failed to read /local/domain/0/backend/vbd/313/2049/feature-flush-cache.
41943040 sectors of 512 bytes

Since it’s not possible to specify a --kernel alternate kernel option with Oracle VM hosts, I can’t explicitly attach a different kernel. And the ec2-register command used to create AMI images does not have an option to create an image with the Oracle VM hypervisor at all. This Amazon FAQ document gives a clue:

Q: Can customers import Oracle Virtual Machine templates into Amazon EC2?

Oracle will distribute templates of Oracle Applications that have been produced by Oracle and approved by AWS. Customers will be able to customize these environments and save them as derivative images which also run on the Oracle Virtual Machine.

So it’s possible to customize pre-made templates, but the ability to create new templates is conspicuously not mentioned. And indeed, it’s possible to create a derivative image using ec2-create-image that still has an OVM hypervisor, but there’s no mechanism to use a custom kernel with this command.

So the take away here seems to be:

  • It’s possible to create Oracle VM-based Amazon EC2 instances, but only from Oracle-created AMIs
  • There’s no mechanism to use kernels other than those bundled with the image, so there’s no prospect of applying security updates etc unless Oracle releases a new AMI
  • Fortunately, I was able to get the Oracle Linux UEK2 kernel working with Amazon’s ordinary Xen hypervisor, though there’s no pre-built AMI for it. I made a separate blog post about this.

For you readers: have any of you managed to get a successful Oracle VM-based Amazon EC2 instance working? How do you deal with the issue of kernel updates?

» Testing out Oracle’s Unbreakable Enterprise Kernel 2

As announced a few days ago, Oracle’s core database product is now supported on Oracle Linux 6. Coming a full 13 months after Oracle Linux 6’s launch, and 16 months after Red Hat Enterprise Linux 6, it’s a much anticipated announcement.

Update 28-Mar-12: the official certification information has come out on My Oracle Support. So far it’s only certified for Oracle Linux 6 Unbreakable Enterprise Kernel version 1, and only for version under Linux x86-64. It also means that a certified combination should be possible using the Oracle-supplied OEL 6 EC2 AMI, though it’s frozen at the original 6.2 release kernel. Unfortunately Oracle is not currently available on OTN but rather requires an active license to download from My Oracle Support.

Oracle’s UEK2 (and Red Hat Enterprise Linux 6) add some very interesting features like:

  • btrfs, a big break in filesystem organization from traditional ext2/3, with the promise of lightweight copy-on-write snapshot support
  • transparent hugepages, dynamically allocating hugepages as needed and performing background memory defragmentation to attempt to free up contiguous space
  • transmit packet steering, allowing multiple CPUs to load-balance outgoing network traffic

Although the press release states it’s available “immediately”, I couldn’t find any notes on Oracle’s My Oracle Support portal relating to this certification; in fact, it still lists Oracle Enterprise Linux 6 as being uncertified as of this writing. So I’m not sure how it will pass the pre-installation operating system checks.

No matter, I’m going to test this out, and an obvious way to do this would be using Amazon EC2, providing high-capacity instances on demand.

After some blind alleys getting the Oracle Linux UEK2 kernel working with Amazon EC2 and Oracle VM, I found that I could make it work without Oracle VM, with Amazon’s default Xen hypervisor. Here are the steps I used:

- Sign up for an Amazon EC2 account and set up the EC2 API tools on your client machine. There are lots of tutorials on how to do this online

- Create a new EC2 instance using a “builder” AMI; I chose a 64-bit CentOS 6 image “CentOS 6 PVGrub EBS AMI”, AMI ID ami-1f66b276

[marc@shakybox2 tmp]$ ec2-run-instances -k marc-aws -n 1 -t m1.medium -z us-east-1d ami-1f66b276
RESERVATION     r-d18f28b2      462281317311    default
INSTANCE        i-22d8f846      ami-1f66b276                    pending marc-aws        0             m1.medium        2012-03-24T21:25:11+0000        us-east-1d      aki-427d952b                    monitoring-disabled                                    ebs                                     paravirtual   xen              sg-5fc61437     default

- Assign a name to the instance

[marc@shakybox2 tmp]$ ec2-create-tags i-22d8f846  --tag 'Name=Instance Builder'
TAG     instance        i-22d8f846      Name    Instance Builder

- Authorize incoming SSH

[marc@shakybox2 ~]$ ec2-authorize default -p 22 -s $(ip addr list dev eth0 | awk '/inet / {print $2}')

- Create a new 20G EBS volume; this will be the “golden image” root disk. Attach it to the builder instance

[marc@shakybox2 tmp]$ ec2-create-volume -s 20 -z us-east-1d
VOLUME  vol-d7340cbb    20              us-east-1d      creating        2012-03-24T21:31:39+0000

[marc@shakybox2 tmp]$ ec2-attach-volume -i i-22d8f846 -d /dev/sdd vol-d7340cbb
ATTACHMENT      vol-d7340cbb    i-22d8f846      /dev/sdd        attaching       2012-03-24T21:33:26+0000

- Get the IP address to connect to (substituting the correct image ID and hostname):

[marc@shakybox2 tmp]$ ec2-describe-instances  i-22d8f846
RESERVATION     r-d18f28b2      462281317311    default
INSTANCE        i-22d8f846      ami-1f66b276 ip-10-116-237-78.ec2.internal  running marc-aws        0               m1.medium       2012-03-24T21:25:11+0000      us-east-1d       aki-427d952b                    monitoring-disabled ebs                                      paravirtual     xen             sg-5fc61437     default
BLOCKDEVICE     /dev/sda1       vol-39310955    2012-03-24T21:25:28.000Z        true
TAG     instance        i-22d8f846      Name    Instance Builder

[marc@shakybox2 tmp]$ ssh -i marc-aws.pem

- Finding the volume inside our AMI, which just got hotplugged:

[root@ip-10-116-237-78 ~]# dmesg | tail -2
blkfront: xvdh: barriers disabled
 xvdh: unknown partition table

- Creating a filesystem and mounting. Note I’m not creating a partition table, but rather a raw filesystem. This will make things much easier if the volume ever needs to be resized.

[root@ip-10-116-237-78 ~]# mke2fs -j -L / /dev/xvdh
mke2fs 1.41.12 (17-May-2010)
Filesystem label=/
OS type: Linux
This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@ip-10-116-237-78 ~]# mkdir /mnt/ec2-fs
[root@ip-10-116-237-78 ~]# mount /dev/xvdh /mnt/ec2-fs

- Creating the base directories, as per Jonathan Hui’s excellent blog post

mkdir /mnt/ec2-fs/dev
/sbin/MAKEDEV -d /mnt/ec2-fs/dev/ -x console
/sbin/MAKEDEV -d /mnt/ec2-fs/dev/ -x null
/sbin/MAKEDEV -d /mnt/ec2-fs/dev/ -x zero
mkdir /mnt/ec2-fs/etc
mkdir /mnt/ec2-fs/proc

- Creating /etc/fstab

cat > /mnt/ec2-fs/etc/fstab <<EOF
LABEL=/ /       ext3    defaults 1 1
none    /proc   proc    defaults 0 0
none    /sys    sysfs   defaults 0 0

- Mounting /proc

mount -t proc none /mnt/ec2-fs/proc

- Grabbing the OEL 6 yum config file

[root@ip-10-116-237-78 ~]# cd /root
[root@ip-10-116-237-78 ~]# wget
--2012-03-24 22:42:54--
Connecting to||:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1461 (1.4K)
Saving to: ‘public-yum-ol6.repo’

100%[======================================>] 1,461       --.-K/s   in 0s

2012-03-24 22:42:55 (106 MB/s) - ‘public-yum-ol6.repo’

cat <<-EOF >> public-yum-ol6.repo

- Installing the base OS

[root@ip-10-116-237-78 ~]# yum -c /root/public-yum-ol6.repo --installroot=/mnt/ec2-fs -y groupinstall Core
ol6_latest                                               | 1.1 kB     00:00

- Installing the latest UEK2 kernel

yum -c /root/public-yum-ol6.repo --enablerepo=ol6_UEK_latest --installroot=/mnt/ec2-fs -y install kernel-uek

(not small: 200m+ for the kernel alone)

- Setting up base networking scripts

cat > /mnt/ec2-fs/etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
echo "NETWORKING=yes" > /mnt/ec2-fs/etc/sysconfig/network
echo "nameserver" > /mnt/ec2-fs/etc/resolv.conf
echo "UseDNS  no" >> /mnt/ec2-fs/etc/ssh/sshd_config
echo "PermitRootLogin without-password" >> /mnt/ec2-fs/etc/ssh/sshd_config
echo "hwcap 0 nosegneg" > /mnt/ec2-fs/etc/

- Script download of SSH private key on startup in case it’s missing (though with EBS-backed storage this shouldn’t be necessary)

cat > /mnt/ec2-fs/usr/local/sbin/ <<EOF
if [ ! -d /root/.ssh ] ; then
  mkdir -p /root/.ssh
  chmod 700 /root/.ssh
# Fetch public key using HTTP
/usr/bin/curl -f > /tmp/my-key
if [ \$? -eq 0 ] ; then
  cat /tmp/my-key >> /root/.ssh/authorized_keys
  chmod 600 /root/.ssh/authorized_keys
  rm -f /tmp/my-key
chmod u+x /mnt/ec2-fs/usr/local/sbin/
echo "/usr/local/sbin/" >> /mnt/ec2-fs/etc/rc.d/rc.local

- Clean up temporary files from the installs (mostly the RPM packages)

yum -c /root/public-yum-ol6.repo --installroot=/mnt/ec2-fs clean all

- Set up GRUB boot files (note: the kernel version here should match the one installed in the previous step)

chroot /mnt/ec2-fs
cd /boot/grub
cat > grub.conf <<EOF

title Oracle Linux UEK
        root (hd0)
        kernel $(echo /boot/vmlinuz*uek.x86_64) ro root=LABEL=/ console=hvc0 crashkernel=auto LANG=en_US.UTF8 selinux=0
        initrd $(echo /boot/initramfs*uek.x86_64.img)
title Oracle Linux Compatibility Kernel
        root (hd0)
        kernel $(echo /boot/vmlinuz*el6.x86_64) ro root=LABEL=/ console=hvc0 crashkernel=auto LANG=en_US.UTF8 selinux=0
        initrd $(echo /boot/initramfs*el6.x86_64.img)
EOF
ln -s grub.conf menu.lst

- Set up swap. There’s no need to put this on expensive EBS storage when ephemeral storage will do.

cat > /mnt/ec2-fs/usr/local/sbin/ <<EOF
dd if=/dev/zero of=\$VOL bs=\${SIZE}k count=1 && mkswap \$VOL \$SIZE && swapon \$VOL
chmod +x /mnt/ec2-fs/usr/local/sbin/
echo "/usr/local/sbin/ /dev/xvdb 2097152" >> /mnt/ec2-fs/etc/rc.d/rc.local

- Unmount our root disk

umount /mnt/ec2-fs/proc
umount /mnt/ec2-fs

- Log out of the builder instance; our work there is done


- Create a snapshot of the root volume. Use the volume ID originally used to create the volume

[marc@shakybox2 tmp]$ ec2-create-snapshot -d "UEK2 AMI creation point" vol-d7340cbb
SNAPSHOT        snap-b94519c3   vol-d7340cbb    pending 2012-03-25T02:05:43+0000                462281317311   20      UEK2 AMI creation point

- Check when it’s completed

[marc@shakybox2 tmp]$ ec2-describe-snapshots snap-b94519c3
SNAPSHOT        snap-b94519c3   vol-d7340cbb    completed       2012-03-25T02:05:43+0000        100%  462281317311     20      UEK2 AMI creation point

- Register the snapshot, effectively creating an AMI image. This is a long command and cannot be changed once created, so it deserves some explanation

-n: user-provided unique name
-a: architecture, which must match the 64-bit kernel
-d: description, a text description
--root-device-name: this maps to the “root” parameter given to the PVGRUB bootloader
-b: block mapping. There are two here: one pointing to the root volume snapshot we just created, and one on non-permanent storage we’ll use for swap
--kernel: This kernel is actually a stub kernel running PVGRUB, a bootloader that loads the UEK2 kernel from the root drive. This particular kernel is for a 64-bit unpartitioned image in the us-east region.

The kernel ID is a generic 64-bit Amazon PVGRUB kernel for the US-East region

[marc@shakybox2 tmp]$ ec2-register -n UEK2-AMI -a x86_64 -d "AMI using the Oracle Linux UEK2 kernel" --root-device-name /dev/sda -b /dev/sda=snap-b94519c3 -b /dev/sdb=ephemeral0 --kernel aki-427d952b
IMAGE   ami-c39f41aa

- Now for the moment of truth: launch a VM based on the newly-created AMI

[marc@shakybox2 tmp]$ ec2-run-instances -k marc-aws -n 1 -t m1.medium  ami-c39f41aa
RESERVATION     r-19b0167a      462281317311    default
INSTANCE        i-5688ab32      ami-c39f41aa                    pending marc-aws        0             m1.medium        2012-03-25T00:08:10+0000        us-east-1d      aki-427d952b                    monitoring-disabled                                    ebs                                     paravirtual   xen              sg-5fc61437     default
[marc@shakybox2 tmp]$ ec2-describe-instances i-5688ab32
RESERVATION     r-19b0167a      462281317311    default
INSTANCE        i-5688ab32      ami-c39f41aa       ip-10-62-98-125.ec2.internal   running marc-aws        0               m1.medium       2012-03-25T02:08:10+0000       us-east-1d      aki-427d952b                    monitoring-disabled                   ebs                                     paravirtual     xen             sg-5fc61437    default
BLOCKDEVICE     /dev/sda        vol-d59aa2b9    2012-03-25T00:08:28.000Z        true
[marc@shakybox2 tmp]$ ec2-create-tags --tag="Name=UEK2 Test Instance" i-5688ab32
TAG     instance        i-5688ab32      Name    UEK2 Test Instance

ssh’ing into the machine, we can confirm it’s running the UEK:

[root@ip-10-62-98-125 ~]# uname -a
Linux ip-10-62-98-125 2.6.39-100.5.1.el6uek.x86_64 #1 SMP Tue Mar 6 20:26:00 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

Adding swap

Oracle’s pre-installation steps require swap space. Since Amazon EBS charges for storage by the GB, it makes little sense to pay for persistent storage for swap. The alternative is to use transient storage. Since at boot time we can’t be guaranteed of its state, it’s safest to zero it out and create swap at that point. We set aside some space on /dev/sdb (which maps to /dev/xvdb, since the Oracle UEK kernel doesn’t do the drive mapping that the Amazon kernel does).

We’ll create a startup script to run in rc.local, the last point in the startup. It will take a while to run, but since sshd and other system services will already be running, it shouldn’t slow down the startup of any other processes.

cat > /usr/local/sbin/ <<EOF
dd if=/dev/zero of=\$VOL bs=\${SIZE}k count=1 && mkswap \$VOL \$SIZE && swapon \$VOL
chmod +x /usr/local/sbin/
echo "/usr/local/sbin/ /dev/xvdb 2097152" >> /etc/rc.d/rc.local


There aren’t a whole lot of troubleshooting tools in EC2, especially compared to something like an Oracle database. There is one invaluable tool for debugging AMI builds, though: the console output. It usually takes several minutes to appear, but it can help determine what went wrong when an instance is inaccessible.

[marc@shakybox2 tmp]$ ec2-get-console-output i-76634012

It was particularly frustrating to get GRUB to find the root devices; when it can’t find them it just displays a grubdom> prompt with no error message. The official documentation recommends a --root-device-name of /dev/sda1 (rather than /dev/sda) and the hd00 kernel aki-4e7d9527, but I couldn’t get these to work. It might be because there is no partition table on the root disk, but without access to the interactive GRUB interface or more diagnostic output I can’t know for sure.


Amazon docs for user-specified kernels
Jonathan Hui’s blog post about creating CentOS images from scratch
Amazon docs on how to launch images from snapshots
Wim Coekaerts’ blog post on using

Coming up: the actual Oracle database software install

November 15, 2011

» Linux Patching and Oracle: Detect RPM conflicts before they happen

A common scenario in the life of a DBA on a Linux server looks something like this:

From: Sue-the-sysadmin
To: the-dba
Date: 2011-10-28 17:54:34

Dear DBA,

We will be patching the linux systems this Friday night, October 28th at 21:00. The list of patches is attached. Let us know if there are any conflicts with Oracle. BTW, we need to know by 14:00 tomorrow.

Sue

Yikes! The SysAdmin wants to apply these patches this week, and needs to know if there are any conflicts this afternoon.

So you open up the list of patches. When you see the lengthy list of patches you suddenly realize the long lunch you had planned with your sweetheart is going to be rescheduled.

I recently received several lists of patches that were to be applied to a number of different Linux servers, all running Oracle, which led to this blog entry.

Matching Oracle shared lib dependencies to RPM files

Being faced with the task of determining which, if any, components of Oracle would possibly be affected by the list of RPMs to be upgraded across several servers is a daunting task.  One quick method to deal with this is to look for glibc in the list of RPMs.  The list I received contained the following:

  • glibc-2.5-58.el5_6.4.i686
  • glibc-2.5-58.el5_6.4.x86_64
  • glibc-common-2.5-58.el5_6.4.x86_64
  • glibc-devel-2.5-58.el5_6.4.i386
  • glibc-devel-2.5-58.el5_6.4.x86_64
  • glibc-headers-2.5-58.el5_6.4.x86_64

Since nearly everything will be affected by an update to glibc, it would be a good idea to test these new RPMs on a test server before deploying them in production.

What if, however, you need a more exact list of which Oracle components are affected by which packages?

This information is a little harder to come by.  While Oracle does supply a list of RPMs that must be installed on a Linux system, that information does not help you in determining which Oracle files are dependent on the RPMs in question.

If the list of RPMs to be updated is small, it may even be that few, if any, of the Oracle files on a server are affected.

Here’s how that can be determined. This may not be complete, but I think it’s a pretty good start.

Find the dependency files

Stepping through this manually, we can check an Oracle executable for the list of other files it is dependent on with ldd, which displays shared-library dependencies.

[oracle@oravm01 patch_check]$ ldd $ORACLE_HOME/bin/oracle
        => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54b700000)
        => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54b802000)
        => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54b958000)
        => /lib64/ (0x0000003ed8c00000)
        => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54bb2c000)
        => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54bef4000)
        => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54c00d000)
        => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54c130000)
        => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54c66e000)
        => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54c771000)
        => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54c90b000)
        => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54caa5000)
        => /u01/app/oracle/product/11.2.0/vmdb01/lib/ (0x00002ab54cbb1000)
        => /usr/lib64/ (0x00002ab54cd33000)
        => /lib64/ (0x0000003ed8000000)
        => /lib64/ (0x0000003ed7c00000)
        => /lib64/ (0x0000003ed8400000)
        => /lib64/ (0x0000003edb400000)
        => /lib64/ (0x0000003ed7800000)
        /lib64/ (0x0000003ed7400000)

Here we can see that the oracle binary is dependent on several library files. Many of them are owned by Oracle, but some are files that are part of the Linux system.
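That filtering step can be sketched as a small shell helper. This is an illustration rather than anything from the original post, and the library names in the canned sample are made up (extraction stripped the real names from the ldd output shown above):

```shell
# Pull the resolved library paths out of ldd-style "name => /path (addr)"
# output, keeping only paths outside the Oracle home (system-owned libraries).
system_libs() {
  # $1 = Oracle home prefix to exclude
  awk '/=>/ && $3 ~ /^\// { print $3 }' | grep -v "^$1"
}

# Canned sample standing in for: ldd $ORACLE_HOME/bin/oracle
printf '%s\n' \
  'libskgxp11.so => /u01/app/oracle/product/11.2.0/vmdb01/lib/libskgxp11.so (0x00002ab54b700000)' \
  'libm.so.6 => /lib64/libm.so.6 (0x0000003ed8c00000)' \
  'libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003ed8000000)' |
  system_libs /u01/app/oracle
# prints only the two /lib64 paths
```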

Find the RPM

The next step is to find which RPM the files are part of. We can use the rpm command for this.

The -q option is for ‘query’, while the ‘-f’ causes rpm to report which package a file is associated with.

Choosing one of the linux owned files as an example:

[oracle@oravm01 patch_check]$ rpm -qf /lib64/

What happens if rpm -qf is run against one of the oracle owned dependency shared library files?

[oracle@oravm01 patch_check]$ rpm -qf /u01/app/oracle/product/11.2.0/vmdb01/lib/
file /u01/app/oracle/product/11.2.0/vmdb01/lib/ is not owned by any package

As you might expect, rpm has no knowledge of Oracle-owned files, and reports that fact.

Does the patch affect Oracle?

The final step is to compare the current package with the patch list. In this case there is a match: glibc-2.5-49 is to be replaced with glibc-2.5-58.

Now all you need to do is repeat this process for every Oracle binary file. This obviously does not work well as a manual process.

Fortunately this is not too difficult a task to automate.   Those readers already familiar with me will not be surprised to learn that this automation involves the use of Perl.
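As an aside, the core of that automation can also be sketched in plain shell. This is a hypothetical illustration, not the Perl script discussed here, and it assumes GNU find (for -perm /u+x, which the post uses later), ldd, and rpm are on the PATH:

```shell
# Sketch: for every executable under an Oracle home, resolve its shared-library
# dependencies with ldd, then ask rpm which package owns each system library.
# Emits "package<TAB>library<TAB>binary" lines, deduplicated.
scan_ohome() {
  ohome="$1"
  find "$ohome" -type f -perm /u+x |
  while read -r exe; do
    ldd "$exe" 2>/dev/null |
    awk '/=>/ && $3 ~ /^\// { print $3 }' |
    grep -v "^$ohome" |
    while read -r lib; do
      # rpm knows nothing of Oracle-owned files, so suppress those errors
      pkg=$(rpm -qf "$lib" 2>/dev/null) &&
        printf '%s\t%s\t%s\n' "$pkg" "$lib" "$exe"
    done
  done | sort -u
}
```

The package column of the output can then be compared against the patch list, which is what the Perl script automates more thoroughly.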

The Patch Check Script

Or rather, scripts. The Perl script is complemented by a driver shell script. The reason for using a driver script is that there may be more than one Oracle home directory on the server that you need to check.

One of the few limitations of using Perl with Oracle involves the LD_LIBRARY_PATH variable. This variable cannot usefully be set from within a Perl script: its value is read before your script runs, so subsequent attempts to change it are futile. There are some hacks that will work on some platforms to allow LD_LIBRARY_PATH to be changed at runtime, but I would rather not go that route. The limitations of using LD_LIBRARY_PATH are pretty well known, so it is best just to work around that particular issue.
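The workaround is visible in miniature below: a value exported by a parent shell is inherited by any child process at startup, which is why the driver shell script sets LD_LIBRARY_PATH before invoking perl. The Oracle home path here is a hypothetical example:

```shell
# LD_LIBRARY_PATH is consulted by the dynamic linker when a process starts,
# so it must already be in the environment before perl is launched.
export LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/vmdb01/lib  # hypothetical path
sh -c 'echo "$LD_LIBRARY_PATH"'  # the child process inherits the exported value
```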

The RPM patch list

The list of RPMs should be in a file, one RPM per line. The file names should not contain the .rpm extension. If the extension is included, you will need to remove it; either that, or change the script to work with the .rpm extension.

Why does the list of RPMs not include the .rpm extension?  Simple, that’s how I received the list.
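If your list does arrive with the .rpm extension, stripping it is a one-liner; a sketch:

```shell
# Strip a trailing .rpm extension from each line of a patch list.
strip_rpm_ext() {
  sed 's/\.rpm$//'
}

printf 'glibc-2.5-58.el5_6.4.x86_64.rpm\n' | strip_rpm_ext
# prints: glibc-2.5-58.el5_6.4.x86_64
```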

Here is a sample of files from a list of RPMs to be updated:

[oracle@oravm01 patch_check]$ head -5 patch_list.txt

Which Files are searched?

Not all files in an Oracle home are processed.  The find command is used with the -perm  /u+x option to find all files that have the executable bit set for the file owner.

These files are then processed to determine RPM dependencies.

The Perl script

Let’s invoke the Perl script to see what it does when run against a single Oracle home, with the standard settings for LD_LIBRARY_PATH and the other Oracle environment variables.

Here’s the current environment:

[oracle@oravm01 patch_check]$ set | grep -E 'ORA|LD_LIB'

Now run the script. Notice that STDERR has been redirected to a file – more on that later

[oracle@oravm01 patch_check]$ ./ -verbosity 0 -oracle_home $ORACLE_HOME -linux_patch_list patch_list.txt 2> pc.err

#### Possible Conflicts Found #####
--- gdbm-1.8.0-26.2.1.el5_6.1.x86_64
old: 1.8.0 26.2
new: 1.8.0 26.2.1.el5_6.1
--- glibc-2.5-58.el5_6.4.x86_64
old: 2.5 49
new: 2.5 58.el5_6.4
--- libgcc-4.1.2-50.el5.x86_64
old: 4.1.2 48
new: 4.1.2 50.el5
--- libstdc++-4.1.2-50.el5.x86_64
old: 4.1.2 48
new: 4.1.2 50.el5

The script has found 4 RPM packages that are scheduled to be updated and on which Oracle depends.
This new information may cause the patch schedule to be pushed back until it is determined that these updates will have no adverse effects on Oracle.

Now let’s consider the file pc.err, to which all the error messages were redirected.

We see there are 336 lines in the error file.

[oracle@oravm01 patch_check]$ wc -l pc.err
336 pc.err

The majority of the errors are from rpm -qf; as seen earlier, the RPM database knows nothing of files owned by Oracle. As such, these are errors we are not really concerned with. There may, however, be errors hiding in the file that we do care about.

[oracle@oravm01 patch_check]$ head -7 pc.err
working on /u01/app/oracle/product/11.2.0/vmdb01/jdk/jre/lib/amd64/ dependencies => not found
working on /u01/app/oracle/product/11.2.0/vmdb01/jdk/jre/lib/amd64/ dependencies => not found

Following is another error  – this one is not an rpm error, so it may be something to be concerned about.

##### ldd error ############
error encountered with /usr/bin/ldd /u01/app/oracle/product/11.2.0/vmdb01/jdk/jre/lib/amd64/
run the command manually to see the error

Running the command manually results in a segmentation fault.

[oracle@oravm01 patch_check]$ ldd /u01/app/oracle/product/11.2.0/vmdb01/jdk/jre/lib/amd64/
/usr/bin/ldd: line 116:  5216 Segmentation fault      LD_TRACE_LOADED_OBJECTS=1 LD_WARN= LD_BIND_NOW= LD_LIBRARY_VERSION=$verify_out LD_VERBOSE= "$@"

A quick Google search on ‘ldd segmentation fault’ suggests this is a Linux bug. We will not be investigating it further, as it is not a concern for the task at hand.

With the large number of errors that we do not really care about being reported by the rpm -qf command, how can we reduce the number of errors?

The shell script

The script serves two purposes. It reads the file oracle_homes.txt and runs the Perl script for each ORACLE_HOME. It also sets the LD_LIBRARY_PATH variable, eliminating many of the false positives seen in pc.err previously. Following is the entire script. You may want to adjust the values used for LD_LIBRARY_PATH to suit your environment.


#!/bin/sh

# set the environment to call the Perl script (patch check)
# file names reconstructed from elsewhere in this post
OHOMES_FILE=oracle_homes.txt
PATCH_LIST_FILE=patch_list.txt

[ -r "$OHOMES_FILE" ] || {
    echo The file $OHOMES_FILE is missing
    exit 1
}

while read ohome
do
    sid=$(basename $ohome)
    LOGFILE=patch_check_${sid}.log
    ERRFILE=patch_check_${sid}.err

    echo $ohome

    # you may have to experiment a bit to get the needed paths in LD_LIBRARY_PATH
    # in 11g there will probably still be a few files that need to be checked manually
    export LD_LIBRARY_PATH=$ohome/lib:$ohome/jdk/jre/lib/amd64:$ohome/jdk/jre/lib/amd64/server

    ./ -verbosity 0 -oracle_home $ohome -linux_patch_list $PATCH_LIST_FILE >$LOGFILE 2>$ERRFILE
done < $OHOMES_FILE

Let’s run the script and then check the error logs.

[oracle@oravm01 patch_check]$ ./

Checking on the output files shows that the error logs are much smaller than what was seen previously.

[oracle@oravm01 patch_check]$ ls -ltar *.log *.err
-rw-r--r-- 1 oracle oinstall   455 Nov 14 13:18 patch_check_grid.err
-rw-r--r-- 1 oracle oinstall 34384 Nov 14 13:18 patch_check_grid.log
-rw-r--r-- 1 oracle oinstall   296 Nov 14 13:18 patch_check_vmdb01.err
-rw-r--r-- 1 oracle oinstall 34384 Nov 14 13:18 patch_check_vmdb01.log

[oracle@oravm01 patch_check]$ wc -l patch*.err
9 patch_check_grid.err
6 patch_check_vmdb01.err
15 total

By including some more library directories in LD_LIBRARY_PATH, rpm -qf was able to resolve many more dependencies, greatly reducing the false positives that were included in the error files.   Here are the total contents of the error files:

[oracle@oravm01 patch_check]$ cat *.err
Use of uninitialized value in hash element at ./ line 393.
Use of uninitialized value in hash element at ./ line 393.
working on /u01/app/oracle/product/11.2.0/grid/jdk/jre/lib/amd64/ dependencies => not found
working on /u01/app/oracle/product/11.2.0/grid/jdk/jre/lib/amd64/ dependencies => not found
rpmq: no arguments given for query
working on /u01/app/oracle/product/11.2.0/vmdb01/jdk/jre/lib/amd64/ dependencies => not found
working on /u01/app/oracle/product/11.2.0/vmdb01/jdk/jre/lib/amd64/ dependencies => not found

The ‘uninitialized value’ errors are from Perl; this is likely a bug in the script. Looking at the script, I can see that while there is indeed a bug, it is not causing any incorrect reporting.

[oracle@oravm01 patch_check]$ grep -n . | grep -3 393
389:                            ($dummy1,$dummy2,$file) = split(/\s+/,$line);
390:                    }
392:                    #print "      ###### FILE: $file\n";
393:                    push @{$self->{olibDeps}->{$file}{dependents}}, $exe;
394:            }
395:    }
397:    # all library dependencies now in %olibDepend

Line 393 should be executed only if there is a value defined for $file.

Changing it as follows fixed the bug:

push @{$self->{olibDeps}->{$file}{dependents}}, $exe if $file;

The small number of lines now found in the .err files makes it simple to determine if these errors are an issue to be concerned with.

Download the scripts

The scripts can be downloaded here – Patch Check Distribution

The zip file contains a tarball patch_check_dist.tgz.  Use tar or your favorite unarchiving utility to extract the scripts.

May 20, 2011

» Using KateSQL to connect to an Oracle database in Kate

Among the features announced with the release of version 4.6 of the KDE Software Compilation is KateSQL, a SQL Query plugin for the Kate text editor providing the basic functionality of a SQL client. It leverages the Qt SQL module, allowing you to make a connection to most types of databases. Out of the box it includes support for MySQL and SQLite, and with this post I’ll show you how to get it to connect to Oracle databases as well.

Since I am an Ubuntu user (Note: you’ll need to be on 11.04 and either be running Kubuntu or have the ‘kate’ package installed) these instructions will be specific to that operating system, though they will likely also apply to other Debian-based distributions with minimal modification. However, if you are using an RPM-based distribution such as openSUSE or Fedora, you’ll need to adapt the instructions somewhat to make sure you have the necessary development packages installed.

The first thing we’ll need to do in order to get this to work is to obtain an Oracle client. This can be downloaded from the Oracle Technology Network and requires that you have a valid Single Sign-On account. For the purposes of this article we’re going to use the Linux x86 Oracle Instant Client and the files we’re going to download are:


The first package is the client itself, providing the files required to run OCI applications. The second is the Instant Client SDK which includes header files for developing Oracle applications with Instant Client: in our case an OCI library for the Qt SQL module.

Users of RPM-based distributions can just go ahead and install these packages while others will need to convert them to .deb format first. The best way to do this that I am aware of is with the tool called Alien. Let’s install it, convert the Oracle Instant Client packages, and install them too.

cd /path/to/oracle/instant/client/files
sudo apt-get install alien
sudo alien oracle-instantclient11.2-basic-
sudo alien oracle-instantclient11.2-devel-
sudo dpkg -i oracle-instantclient11.2-basic_11. oracle-instantclient11.2-devel_11.

The next step is to install all of the dependencies necessary to build the Qt SQL OCI library.

sudo apt-get build-dep qt4-x11
sudo apt-get install qt4-qmake libqt4-dev libaio1

Then we will need the actual source code for the qt4-x11 package. This can be obtained with the following command:

apt-get source qt4-x11

To recap, at this point we’ve downloaded and installed the Oracle Instant Client and all of the dependencies required to build the qt4-x11 package, and we’ve downloaded the qt4-x11 source code. Next we’re going to build the Qt SQL OCI library itself.

sudo ln -s /path/to/qt4-x11-4.7.0/src /usr/include/src
sudo cp -r /path/to/qt4-x11-4.7.0/include/QtSql/private /usr/include/qt4/QtSql/
cd /usr/include/src/plugins/sqldrivers/oci/
qmake "INCLUDEPATH+=/usr/include/oracle/11.2/client64" "LIBS+=-L/usr/lib/oracle/11.2/client64/lib -Wl,-rpath,/usr/lib/oracle/11.2/client64/lib -lclntsh -lnnz11"
sudo make install

Now that the module has been built and installed, Kate should be able to connect to an Oracle database. Launch Kate (or close and re-open it if it was already running), click Settings -> Configure Kate, expand the Application tree if necessary, click Plugins, check ‘SQL Plugin‘ and click OK. Then click SQL -> Add connection to launch the connection wizard. In addition to the MySQL and SQLite database drivers, you should now also have drivers named QOCI and QOCI8 available. Choose the appropriate one, enter relevant connection details, and complete the connection wizard. You can now execute queries against the database and display result data.

June 18, 2010

» Keeping Up

I found I never published this post; it had been sitting in my drafts for a few months now. It was written on 13th February, 2010, and I’m publishing it without any changes. I learn, therefore I am! I had just written a few bits about learning a new technology, and after skimming through my Google Reader, I noticed [...]

April 21, 2010

» DBD::Oracle and Windows 64bit

I have successfully compiled and installed DBD::Oracle on Windows 2008 Server 64bit operating system today.

I used the latest version of DBD::Oracle 1.24, version for 64bit Windows of Oracle’s
Instant Client Package – Basic along with the Instant Client Package – SQL*Plus and finally the Instant Client Package – SDK.

To get it to make and compile correctly, I had to download Microsoft’s Visual Studio Ultimate, which should contain all the files you need. It is rather portly at 2+ GB, so you might want to grab lunch while you are downloading it.

After all the above downloading DBD::Oracle installed right out of the box.

All one has to do is select ‘Start Menu->All Programs->Microsoft Visual Studio 2010->Visual Studio Tools->Visual Studio x64 Win64 Command Prompt (2010)’
which will open a good old ‘dos’ window.

At this point CD to the directory where you downloaded DBD::Oracle


then set your ORACLE_HOME to the Instant Client directory

     c:\DBD-Oracle> set ORACLE_HOME=c:\IC_11

you should also set your NLS like this

     c:\DBD-Oracle> set NLS_LANG=.WE8ISO8859P15

Once the above settings are done, do a

     c:\DBD-Oracle> perl Makefile.PL

and then a

     c:\DBD-Oracle> nmake install

This will produce a whole lot of warnings (these you can ignore, as they do not seem to affect DBD::Oracle at all), and near the end it should output something like this:

     Generating code
     Finished generating code
     if exist blib\arch\auto\DBD\Oracle\Oracle.dll.manifest mt -nologo -manifest blib\arch\auto\DBD\Oracle\Oracle.dll.manifest -outputresource:blib\arch\auto
     if exist blib\arch\auto\DBD\Oracle\Oracle.dll.manifest del blib\arch\auto\DBD\Oracle\Oracle.dll.manifest
     C:\Perl64\bin\perl.exe -MExtUtils::Command -e "chmod" -- 755 blib\arch\auto\DBD\Oracle\Oracle.dll
     C:\Perl64\bin\perl.exe -MExtUtils::Command -e "cp" -- blib\arch\auto\DBD\Oracle\
     C:\Perl64\bin\perl.exe -MExtUtils::Command -e "chmod" -- 644 blib\arch\auto\DBD\Oracle\
     C:\Perl64\bin\perl.exe "-Iblib\arch" "-Iblib\lib" ora_explain.PL ora_explain
Extracted ora_explain from ora_explain.PL with variable substitutions.
     C:\Perl64\bin\perl.exe -MExtUtils::Command -e "cp" -- ora_explain blib\script\ora_explain
        pl2bat.bat blib\script\ora_explain

At this point you are all done.

Well almost.

It is important that you test your code before you install but you will have to set a few things up first to get it to fully test correctly.

You will need a TNSNAMES.ORA file that points to a valid DB in the Instant Client Directory

Next you will need to set the ORACLE_USER_ID to a valid user

     c:\DBD-Oracle> set ORACLE_USER_ID=system/system@XE

You will have to set up TNS_ADMIN to point to the Instant Client Directory

     c:\DBD-Oracle> set TNS_ADMIN=c:\IC_11

Most importantly you will have to add the Instant Client directory to your path like this

     c:\DBD-Oracle> path = c:\IC_11;%path%

If you do not do this step you will run into the dreaded

Can’t load ‘C:/Perl/lib/auto/DBD/Oracle/Oracle.dll’ for module DBD::Oracle: load_file:%1 is not a valid Win32 application at C:/Perl/lib/ line 202.

error later on, after the compile, when you try to use DBD::Oracle.

What is actually going on is that Perl cannot find oci.dll (or one of the other .dlls it needs to run). The complaint about C:/Perl/lib/auto/DBD/Oracle/Oracle.dll and the DynaLoader error are just a false trail, as Perl is very limited in which Windows errors it can report on. For more complete info on this sort of error, check out this page:

Oracle Troubleshooter HOWTO

by Alexander Foken. It is rather dated but the facts of why perl did not find a dll are still valid.

Now you can do this:

     c:\DBD-Oracle> nmake test

and all the tests should run and pass.

Finally simply do a

     c:\DBD-Oracle> nmake install

and you are all set.

That is about it.

At this point you might want to add the Instant Client directory permanently to your path so you will not run into the Dynaloader error again.

As well you do not need to keep Visual Studio around to use DBD::Oracle so you can uninstall that as well.

April 7, 2010

» Nobody Killed OpenSolaris — Stop the FUD!

I’m tired of reading it all over the internet — Oracle taking back OpenSolaris, Open Solaris May Die, Solaris Is Dead, Save Open Solaris, Oracle taking back OpenSolaris.

I’m so sick of it!

I see that some don’t even know the difference between OpenSolaris and the commercial Oracle Solaris (formerly Sun Solaris 10)!

Wake up people! Oracle did make commercial Solaris 10… eh… commercial, that is. They (well, Sun but Oracle paid big $$ for it) have invested lots into Solaris IP and they have full rights to actually charge money for it and they probably should. Struggling Sun made commercial Solaris free to use in desperation to maintain their rapidly shrinking market share. Oracle doesn’t need that – they are not desperate. You’ve made the right decision Oracle – keep Solaris commercial and use these funds to continue developing this great operating system (or whatever makes business sense).

Having said all this, what does it have to do with OpenSolaris? Nothing!

OpenSolaris was and is free. I have just quickly skimmed through the licensing (Binary License and CDDL) and there are no caveats that I can see like 90 days limitation or whatsoever. All the OpenSolaris goodies are still available to everyone for free.

Whining has started that Oracle will not contribute to OpenSolaris anymore. Come on, people! Couldn’t you just appreciate what’s been done already, and what a great product OpenSolaris is? If you forgot what open source is about: it’s about community contributions, not about a single vendor giving away its IP so that everyone around can scream about how great the open-source movement is and what great products it produces. If one vendor pulls out and the community can’t sustain product development, then the product cannot live its normal open-source life.

Get over it! Want high-quality software with great support and without any fuss? Pay $$. Want high-quality free open-source software? Make it happen!

August 19, 2009

» VNC Viewer Free Edition 4.1.2 for X: “Connection reset by peer (104)”

Today I tested OracleVM (OVM) templates on Oracle’s own distribution, Oracle Enterprise Linux (OEL) 5, with the bundled VNC Viewer Free Edition 4.1.2 for X. All went fine, but the VNC connection to the virtual machine was not as painless as I expected.

[root@oram ~]# rpm -qa | grep -E "vnc|xen"

On a brand new OEL5 system with virtualization support, I have created a new virtual machine with a fresh OEL4 (plain OS), and set appropriate memory for the virtual domain using the Xen management user interface commands. I also checked if the VNC port was allocated with the command virsh dumpxml:

[root@oram ~]# xm create /OVS/running_pool/OVM_EL4U6_X86_PVM_4GB/vm.cfg
Using config file "/OVS/running_pool/OVM_EL4U6_X86_PVM_4GB/vm.cfg".
Started domain OVM_EL4U6_X86_PVM_4GB
[root@oram ~]# xm mem-set 1 1024
[root@oram ~]# xm list
Name                     ID Mem(MiB) VCPUs State   Time(s)
Domain-0                 0      997     2 r-----   284.5
OVM_EL4U6_X86_PVM_4GB    1     1023     1 -b----    73.6
[root@oram ~]# virsh dumpxml OVM_EL4U6_X86_PVM_4GB |grep port
<graphics type='vnc' port='5900' listen=''/>
[root@oram ~]# netstat -l |grep 5900
tcp        0      0 *:5900               *:*                  LISTEN

Now I expected that VNC Viewer would simply connect to the console of VM, but it displayed only the very first frame and then died after a few moments with a Connection reset by peer (104) error:

[root@oram ~]# vncviewer localhost:0
VNC Viewer Free Edition 4.1.2 for X - built Jan 21 2009 14:35:26
Copyright (C) 2002-2005 RealVNC Ltd.
See for information on VNC.
Fri Aug 14 15:59:24 2009
CConn:       connected to host localhost port 5900
CConnection: Server supports RFB protocol version 3.8
CConnection: Using RFB protocol version 3.8
TXImage:     Using default colormap and visual, PseudoColor, depth 8.
Fri Aug 14 15:59:25 2009
CConn:       Using pixel format depth 6 (8bpp) rgb222
CConn:       Using ZRLE encoding
Fri Aug 14 15:59:29 2009
CConn:       Throughput 20156 kbit/s - changing to hextile encoding
CConn:       Throughput 20156 kbit/s - changing to full colour
CConn:       Using pixel format depth 8 (8bpp) colour-map
CConn:       Using hextile encoding
main:        read: Connection reset by peer (104)

As I was able to see the VM console for a while, my first guess was that the problem was related to the virtual system itself, so I decided to create another virtual domain to see if it was replicable. And yes, it was! (Well, in addition to the 104 error I got another one, Broken pipe (32), which was very likely related to domain auto-restarts, and easily traceable by the xm list command showing the current VM domain ID changing.)

And the solution for Connection reset by peer (104)? I noticed that VNC Viewer’s default is auto-encoding selection, which means it first uses ZRLE and then switches to hextile if possible; that is what creates the problem here. When I set a fixed encoding (either ZRLE or hextile), it started to work well:

[root@oram ~]# vncviewer localhost:0 PreferredEncoding=Hextile
VNC Viewer Free Edition 4.1.2 for X - built Jan 21 2009 14:35:26
Copyright (C) 2002-2005 RealVNC Ltd.
See for information on VNC.
Fri Aug 14 18:10:39 2009
CConn:       connected to host localhost port 5900
CConnection: Server supports RFB protocol version 3.8
CConnection: Using RFB protocol version 3.8
TXImage:     Using default colormap and visual, PseudoColor, depth 8.
Fri Aug 14 18:10:40 2009
CConn:       Using pixel format depth 6 (8bpp) rgb222
CConn:       Using hextile encoding

Do you have any related or similar issues to share?

June 25, 2009

» Scalable Internet Architectures

My old friend and collaborator Theo Schlossnagle at OmniTI posted his slides from his Scalable Internet Architectures talk at VelocityConf 2009.

The slides are brilliant even without seeing Theo talk and I highly recommend the time it takes to flip through them, for anyone who is interested in systems performance. If anyone took an mp3 of this talk I’m dying to hear it, please let me know.

For those of you unfamiliar with OmniTI, Theo is the CEO of this rather remarkable company specializing in Internet-scale architecture consulting. They generalize on Internet-scale architecture, not on one specific dimension the way Pythian specializes on the database tier. This allows them to see Internet-scale workloads from a unique systemic, multidisciplinary point of view; from the user experience all the way up the stack, through the load balancer (or not), the front-end cache, the application server, the database server, the operating system, the storage, and so on. This approach lets them build Internet architectures and solve scalability problems in a unique and powerful, holistic way.

Pythian first collaborated with OmniTI in 2001, and they deserve all of their success and profile that they’ve built since then. Trivia: both Pythian and OmniTI were founded in September 1997 and both companies continue to be majority-owned and controlled by founders (in Pythian’s case, yours truly).

Here’s the slide deck. Let me know your thoughts.

March 17, 2009

» How to Have a Good Presentation

In about 15 minutes, Giuseppe Maxia will begin a webinar in which the main focus is a presentation on “How to have a good presentation”. Talk about meta!

Giuseppe posted how to join the free webinar.

The slides can be found at

December 19, 2008

» Log Buffer #128: a Carnival of the Vanities for DBAs

Welcome to the 128th edition of Log Buffer, the weekly review of database blogs.

Let’s begin with some PostgreSQL blogs. Jignesh Shah shares his recipe for making a PostgreSQL 8.3 appliance based on OpenSolaris using VirtualBox. While we’re on appliances, Dave Page shows off PostgreSQL management on the iPhone with an application he himself wrote. Stealth DBA for the bus-ride home.

On Database Soup, Josh Berkus has been finding useless indexes. He begins, “I’d say, in general, that you can’t have really well-chosen indexes without the help of a performance expert. However, anyone can improve their use of indexes in PostgreSQL fairly easily using a few system tools … and a little improvement is all that a lot of users need.” And it’s what Josh offers.

Sometimes a DBA is asked to make something real purty-like, contrary to his or her nature though that may be. On the Postgres OnLine Journal, Leo Hsu and Regina Obe offer some help with the first of a series on Fusion charts and PostgreSQL. (”Fusion Charts . . . is a flash-based charting product that makes beautiful flash charts.”)

And now—hey what’s MySQL maven Baron Schwartz doing with a Postgres post on xaprb? He’s asking, what are your favorite PostgreSQL performance resources?

Maybe he’s considering crossing the floor out of weariness with all the contention in the MySQL world? Can’t say I blame him. Lately, the conversation in the MySQL ’sphere has been dominated by non-technical talk of the pluses and minuses of 5.1, of forking, community vs. enterprise, and so on. This week was no exception.

The week began with Jay Pipes’s advice to MySQL: “Drop the current roadmap . . . Forget Windows for now . . . Clean up the abysmal messiness of the code base . . .” It’s strong stuff and worth a read.

Lukas Kahwe Smith followed with his advice to the database division at Sun, purveyor or patron now of MySQL, Drizzle, and PostgreSQL.

Jeremy Zawodny surveyed this new MySQL landscape, full as it now is of patches, forks, and Drizzle, and liked what he saw.

Speaking of which, the MySQL Performance Blog announced the Percona XtraDB Storage Engine: a drop-in replacement for standard InnoDB.

Ronald Bradford got some unexpected results while looking into the size of memory tables. Can you help Ronald out?

On High Availability MySQL, Mark Callaghan showed us how to make MySQL faster in one hour. Nice stuff. And real purty charts, too.

Let’s see what happened in SQL Server now. Kalen Delaney opined that there is no such thing as a SQL Server, and she’s not the only one with an opinion on this (one would think) straight-forward matter.

Linchi Shea asserted that there is a limit to set-based solutions: “After all, some procedural solutions are not so bad! . . . Now, it’s time for me to dodge the set-based solution crowd.”

Lots of thoughtful comment on that one, and a blog response from Ward Pond, who says that Linchi Shea makes an interesting point about hints, vis-a-vis the set-based and procedural paradigms.

The Data Management Journal looked into extracting numbers with SQL Server: “We all have perfectly normalized tables, with perfectly scrubbed data, right? I wish! Sometimes we are stuck with dirty data in legacy applications. What’s worse is that we are sometimes expected to do interesting things with dirty data. In this blog, I will show you how to extract a number from a varchar column that contains letters and numbers.”

TJay Belt published his reflection, Cloud Computing and me. Like many DBAs, TJay has some thoughts on how the advent of “The Cloud” is going to affect databases and database administration.

Moving into things Oracle, Chen Shapira was thinking about a musical analogy for the DBA. They’re not “rockstar programmers” or “jazz programmers”, says Chen. But I won’t give away her conclusion—click on.

Chet Justice, the Oracle Nerd, was pursuing ontology too in the second part of his Application Developers vs. Database Developers. (I wonder if it’s generally true that apps developers have such terrible manners.)

Gary Myers responded, “Oraclenerd has opened that can of worms about OO, ORMs and Databases,” in his item, The Certainty Bottleneck (and ORMs).

On An Expert’s Guide to Database Solutions, James Koopman suggested, maybe it’s time to extend the DBA’s realm of influence, using tools like Spotlight on Oracle.

Or perhaps with other tools, such as TOra? Here on the Pythian Blog, Brad Hudson posted his howto, Installing TOra with Oracle support on Ubuntu 8.04LTS (Hardy Heron).

One last important bit of news from this week—Thomas LaRock, AKA SQLBatman, is America’s Most Exciting DBA.

Until next time, Happy Holidays to all our readers!

December 12, 2008

» Installing TOra with Oracle support on Ubuntu 8.04LTS (Hardy Heron)

The situation is this. I am a system administrator working in a world of DBAs. This is not a bad thing, but sometimes tasks crossover and I need to connect to Oracle. In light of this, I have been using the Oracle SQL Developer GUI in KDE for some time now.

While the functionality of the product is quite slick I find it painful to work with. The major source of my pain is the slow interface due to its dependence on Java. I mean seriously, the screen refreshes are abysmal. I found that the more I used it, the more I needed a lighter, faster alternative.

One day a while back, I noticed that the boss was running TOra. He’s more of a DBA than I am, and I tend to trust his opinion on such things, so I asked him about it. He told me it was quite good, and so I decided I should try it out. I installed it on my Kubuntu desktop, fired it up, and much to my chagrin, found that it had no Oracle support. A bit of digging turned up that Oracle support was not included in the Debian package.

I had to do something about it. A bit more research brought me to some useful sites, a couple of which I will reference later, but after going through the process, I found that none of them had all the bits together in one nifty package. This is an attempt to change that.

After bouncing back and forth on some of these steps, fixing environment issues, decoding compiler errors, and generally fiddling, I went back to my notes and terminal histories and figured out what I feel is the best order for the process. This is the streamlined version of maybe two hours of fiddling. I hope it will help someone avoid the pitfalls (there are 8-bit crocodiles below!).

And so I humbly present to you . . . 

Installing TOra with Oracle support on Ubuntu 8.04LTS (Hardy Heron)


Based on a 32-bit install of Ubuntu 8.04 LTS using the Oracle 11 clients. AMD 3200+ processor with 1GB of RAM and a bizarre assortment of franken-hardware.

I could have done this on 8.10 but I prefer to stick to the long term support releases. I may try it on my 8.10 laptop at a later date and update the guide.


  • I use sudo for everything because logging in to root shells is just bad practice.
  • I plug vi whenever possible, mostly to annoy emacs people.
  • I did this all in a KDE desktop, which means that some things I say may sound like I use KDE.
  • Commands issued are in pre-formatted text without any prompt gunk in front of them, so cut and paste to your heart’s content.
  • Output is also in pre-formatted text and I use it sparingly where relevant. No one needs to see the list of packages that apt-get wants to autoremove, nor how many updates I should be installing.

Get the packages

Find an acceptable build location in your filesystem, cd there, and then get the tora source package.

sudo apt-get source tora

Get the Oracle client RPMs from the Oracle 11 client download page. All the packages you need are on one page, but you need a log-in to get them. Accounts are free. The files we want are:

  • oracle-instantclient11.1-basiclite-
  • oracle-instantclient11.1-devel-
  • oracle-instantclient11.1-sqlplus-

Install the prerequisites and development libraries

Install alien and all the dependencies that this build will need to work. Alien is a cool little tool for converting .rpm to .deb files and more.

sudo apt-get install libqt3-mt-dev libqt3-compat-headers libqscintilla-dev build-essential g++ gcc autoconf automake flex zlib1g-dev docbook-xsl debhelper alien libaio1 dpatch

This will install a lot of other dependencies, just go with it.

If you want to build with KDE support then do the following (having never done this without KDE support, I’m not really sure what the differences are; perhaps someone can enlighten me in a comment).

sudo apt-get install kdebase-dev

This will probably install even more dependencies than above, but hey, you’ve come this far. Now that we have all the prerequisites in place, let’s get Oracle installed.

Change directory to where you downloaded the Oracle RPMs.

cd oracle
ls -al
total 18912
drwxr-xr-x 2 root   root       4096 2008-12-10 00:35 .
drwxr-xr-x 4 domito domito     4096 2008-12-10 07:58 ..
-rw-r--r-- 1 root   root   17958287 2008-12-10 00:28 oracle-instantclient11.1-basiclite-
-rw-r--r-- 1 root   root     578817 2008-12-10 00:27 oracle-instantclient11.1-devel-
-rw-r--r-- 1 root   root     782162 2008-12-10 00:27 oracle-instantclient11.1-sqlplus-

Now we are going to convert them to .deb and install them all in one shot.

sudo alien -i *.rpm

The output will look like this.

        dpkg --no-force-overwrite -i oracle-instantclient11.1-basiclite_11.
Selecting previously deselected package oracle-instantclient11.1-basiclite.
(Reading database ... 172382 files and directories currently installed.)
Unpacking oracle-instantclient11.1-basiclite (from oracle-instantclient11.1-basiclite_11. ...
Setting up oracle-instantclient11.1-basiclite ( ...

Processing triggers for libc6 ...
ldconfig deferred processing now taking place
        dpkg --no-force-overwrite -i oracle-instantclient11.1-devel_11.
Selecting previously deselected package oracle-instantclient11.1-devel.
(Reading database ... 172399 files and directories currently installed.)
Unpacking oracle-instantclient11.1-devel (from oracle-instantclient11.1-devel_11. ...
Setting up oracle-instantclient11.1-devel ( ...
        dpkg --no-force-overwrite -i oracle-instantclient11.1-sqlplus_11.
Selecting previously deselected package oracle-instantclient11.1-sqlplus.
(Reading database ... 172446 files and directories currently installed.)
Unpacking oracle-instantclient11.1-sqlplus (from oracle-instantclient11.1-sqlplus_11. ...
Setting up oracle-instantclient11.1-sqlplus ( ...

Environment setup

Now that we have Oracle and the development libraries all in place, we need to get the system to use the libraries and know where Oracle lives.

Now we need to tell the dynamic linker where the Oracle libraries live. Create a new config file for Oracle in /etc/ld.so.conf.d/ called oracle.conf and add the Oracle library path. Note that sudo does not apply to a shell redirection, so pipe through tee instead of redirecting an echo:

echo /usr/lib/oracle/11.1/client/lib | sudo tee /etc/ld.so.conf.d/oracle.conf

Now rebuild your library cache. No output is good output for ldconfig.

sudo ldconfig

Now set your environment so tora can find Oracle. We’ll also add one that will tell tora where to find the tnsnames.ora file.

export ORACLE_HOME=/usr/lib/oracle/11.1/client
export LD_LIBRARY_PATH=/usr/lib/oracle/11.1/client/lib
export TNS_ADMIN=/usr/lib/oracle/11.1/client

For good measure I would add this to your login scripts so that tora will not bomb when you run it. Depending on your shell, you might need to add this to ~/.profile instead. I use bash, because it rules. And yes, I know there are prettier ways to get this done, but this works.

echo "export ORACLE_HOME=/usr/lib/oracle/11.1/client" >> ~/.bashrc
echo "export LD_LIBRARY_PATH=/usr/lib/oracle/11.1/client/lib" >> ~/.bashrc
echo "export TNS_ADMIN=/usr/lib/oracle/11.1/client" >> ~/.bashrc
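Speaking of prettier ways, here is one small improvement (a sketch of my own, not from the original howto): guard the append so that re-running the setup does not pile up duplicate lines in your .bashrc.

```shell
# Only append the Oracle environment to ~/.bashrc if it is not already there
# (my own sketch; paths match the exports used above).
BASHRC="$HOME/.bashrc"
if ! grep -q 'ORACLE_HOME=/usr/lib/oracle/11.1/client' "$BASHRC" 2>/dev/null; then
  {
    echo 'export ORACLE_HOME=/usr/lib/oracle/11.1/client'
    echo 'export LD_LIBRARY_PATH=/usr/lib/oracle/11.1/client/lib'
    echo 'export TNS_ADMIN=/usr/lib/oracle/11.1/client'
  } >> "$BASHRC"
fi
```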

Now we should be ready to build.

A word about the environment

If you are using tora then you are in X. Likely you did all this in a terminal window using xterm or konsole. We created these environment variables in the terminal, which means they are local to that terminal window; if you try to run TOra from the menu, it will fail. That is why we added them to the ~/.bashrc file. The .bashrc changes will not take effect for your X desktop until you log out and back in again; there’s no way around that. Until then, just run TOra from the command line in your terminal window. Bringing up a new xterm will source your .bashrc and pick up the environment variables too.
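One hypothetical workaround until you log out and back in: a tiny launcher script that sets the environment itself, which you can point the desktop menu entry at. The path and name below are my own invention, not part of the original howto.

```shell
# Create a launcher wrapper that sets the Oracle environment before
# starting tora (path ~/bin/tora-wrapper is an arbitrary choice).
mkdir -p "$HOME/bin"
cat > "$HOME/bin/tora-wrapper" <<'EOF'
#!/bin/sh
export ORACLE_HOME=/usr/lib/oracle/11.1/client
export LD_LIBRARY_PATH=/usr/lib/oracle/11.1/client/lib
export TNS_ADMIN=/usr/lib/oracle/11.1/client
exec tora "$@"
EOF
chmod +x "$HOME/bin/tora-wrapper"
```

Point your menu entry at ~/bin/tora-wrapper instead of tora and the environment comes along for free.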

Building and installing TOra

Unpack tora:

tar -zxvf tora_1.3.22.orig.tar.gz

cd to the tora source dir:

cd tora-1.3.22/

Change the configure command arguments in debian/rules using your favorite editor (which surely must be vi). Find line 20, which says:

./configure --prefix=/usr --without-oracle --without-rpath --disable-new-check --with-kde --enable-libsuffix=

Then press shift-D to delete to the end of the line, then i to insert the following (non-vi users: delete the line and insert this one).

./configure --prefix=/usr --with-instantclient --with-oracle-includes=/usr/include/oracle/11.1/client --without-rpath --disable-new-check --with-kde --enable-libsuffix=

If you avoided the minute discomfort of installing the KDE development libraries, or just plain do not want KDE support, use this one (I would also suggest this if you run into some bizarre KDE related error when compiling. It’s been known to happen. I think there’s a support group.):

./configure --prefix=/usr --with-instantclient --with-oracle-includes=/usr/include/oracle/11.1/client --without-rpath --disable-new-check --without-kde --enable-libsuffix=
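If you would rather script the edit than fire up vi, the same substitution can be done with sed. This is a sketch of my own, demonstrated on a throwaway copy of the configure line; in real use you would point sed at debian/rules in the source tree.

```shell
# Demo copy of line 20 from debian/rules (as quoted above):
mkdir -p /tmp/tora-demo/debian
echo './configure --prefix=/usr --without-oracle --without-rpath --disable-new-check --with-kde --enable-libsuffix=' \
  > /tmp/tora-demo/debian/rules
# Swap --without-oracle for the instant-client flags in one shot:
sed -i 's|--without-oracle|--with-instantclient --with-oracle-includes=/usr/include/oracle/11.1/client|' \
  /tmp/tora-demo/debian/rules
cat /tmp/tora-demo/debian/rules
```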

Now it’s time for the main event: we’re going to compile it. How, you ask? Thusly! From right where you are in the root of the tora source tree, run this stunningly complex command:

debian/rules binary

This will take a bit of waiting, but there’s nothing for it. On my out-of-date Athlon XP 3200+, the whole process took 12m27.674s, just to give you a ballpark.

Once the build is complete you are ready to install tora. Do it like so:

sudo dpkg -i tora_1.3.22-5_i386.deb

Selecting previously deselected package tora.
(Reading database ... 172036 files and directories currently installed.)
Unpacking tora (from tora_1.3.22-5_i386.deb) ...
Setting up tora (1.3.22-5) ...

But Mr Science, won’t apt just clobber my custom package when it updates?

I’m glad you asked that Billy. Get me some coffee and a cigarette and I’ll explain combustion. Ahem.

Normally, yes. In fact, when I was testing this I found that apt-get preferred a repository package to my own even though they were the same version. To fix this we need to add an entry to the apt preferences configuration:

man apt_preferences  # :)

Edit /etc/apt/preferences (it will likely be a new file) and add the following lines:

Package: tora
Pin: version *
Pin-Priority: 50

What this says is that for the package tora, any version is assigned priority 50. A priority below 100 means a version will only be installed if no version of the package is already installed, so the repository package can never replace the one already on your system. What that means for you is that auto-updates will not destroy all your hard work.

End Game

Don’t forget your tnsnames.ora. We set up the environment to use TNS_ADMIN=/usr/lib/oracle/11.1/client, which means that tora will look for it there. The easiest way I found was to get the production tnsnames.ora file from the Oracle server itself and place it in $TNS_ADMIN. Once that is done, start tora and enjoy. Remember to start it from the xterm session that has the environment variables set if you have not yet logged out/in.
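For reference, a tnsnames.ora entry looks roughly like this. The alias, host, port, and service name below are all made-up placeholders; use the real file from your server instead.

```shell
# Write a placeholder tnsnames.ora (demo only; every value below is invented):
cat > /tmp/tnsnames.ora <<'EOF'
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl.example.com))
  )
EOF
# In real use you would place it where TNS_ADMIN points:
#   sudo cp /tmp/tnsnames.ora /usr/lib/oracle/11.1/client/
```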

This is where I leave off. The running and connecting and such I leave in the hands of you and the TOra documentation.

All in all I was pleased with how the whole process turned out. I have been using TOra and like it so far. I have not yet been able to find stored procedures in it, but that will come. I also had a crash when I brought up the help, but I am not sure if that is a bug and will need to look into it. It compiled cleanly, so I do not blame it on a compilation error.

Coming up

I’m not great at keeping a regular blogging schedule. I am trying to mend my ways though. That being said, I have one in the works dealing with a statement I read on another blog, that Ubuntu was becoming slower with each release. I captured some performance data before I did a fresh install of 8.10, so I will share the results if they are interesting. I also have a nifty little one about VMware in a hosted ISP environment. Stay tuned.


The TOra homepage

Installing TOra with Oracle Support on Ubuntu Community Forums


August 1, 2008

» Please join us! Pythian Europe Launch Event in Prague on Wednesday

Invitation - Pythian Europe Launch Party

I’m pleased to announce that there will be the formal launch of Pythian Europe at the premises of the Canadian Embassy in Prague on Wednesday the 6th of August from 17:00 to 18:30. This historic event will be announced by Mrs. Sameena Qureshi, Trade Counsellor, Embassy of Canada; and Paul Vallée, President and Founder, The Pythian Group. Present will be various members from the press (IT and Business), as well as representatives from Oracle and Sun Microsystems, the Canadian Chamber of Commerce in Prague, and many more. We will prepare some unusual and very tasty snacks and refreshments.

We would love for readers of this blog to join us, so please consider this your special, personal invitation from me. Please come if you’re in Prague on Wednesday. If you plan to attend, please contact Dan at

» Next week, meet me in Frankfurt, Stuttgart, Munich or Prague!

I am traveling to Europe next week to brief major prospects in Germany (Daimler, MAN) as well as to attend to administrative matters at Pythian Europe in Prague and would love to meet any readers of this blog during this trip!

I’m especially interested in meeting:

  • DBAs, Applications Administrators and Systems Administrators,
  • Potential customers (IT Directors, DBA Managers, Supply Managers for IT), and
  • Potential partners (IT product or service companies that could partner with Pythian to delight our mutual customers)

Here is my itinerary:

  • Sunday, August 3: Frankfurt,
  • Monday, August 4: Stuttgart,
  • Tuesday, August 5: Munich, and
  • Wednesday, August 6 through Saturday, August 9: Prague, Czech Republic.

Please reach out to me using if you would like to meet!

January 8, 2008

» How to advocate for good backups! Or, how NOT to advocate for good backups!

I try to do a decent job of advocating for caring about good backups and business continuity strategies in my 7 Deadly Habits article.

But this one beats them all:

Grave Warning

Just too funny and great not to share. Found via this reddit article, where there is a lively discussion underway.

December 21, 2007

» Where is Storage QoS?

In the era of consolidation, storage has not been left out. Different systems are made to share the same storage boxes, fiber-channel switches and networks. Inside a typical storage box, we have front-end and back-end controllers, cache, physical spindles shared amongst different applications, databases, backup destinations, and so on.

The impact of backup on normal database activity . . . batch processing in one database impacting transactional processing — these are two real life examples of the consequences of storage consolidation known to almost every DBA. Of course, it’s easy to suggest separating databases to different physical disks, but what about SAN box controllers and shared cache? And don’t forget about the cost factor and ubiquitous consolidation that forces storage administrators to pack as much data as possible into a single SAN or NAS storage device.

Some of our customers use hosting services — they outsource hardware hosting just like they outsource DBA work to Pythian. In such scenarios, hosting service providers usually have storage hardware shared amongst different customers to provide higher utilization and on-demand storage capacity at a lower cost.


December 19, 2007

» Pythian Goodies: The Answer to Free Memory, Swap, Oracle, and Everything

I gave this talk at the UKOUG, and I have received a few requests to post the slides online. Instead of just posting the PowerPoint I took some time to give the presentation again (internally here at Pythian) and this time we recorded the session and we’re posting it here in a variety of formats. This is a bit of a departure from the typical Pythian Goodies, in that it is scripted, and there is a lot of content here in the whitepaper, but there hasn’t been a Goodie in a while so why not!

I’d love to hear from you, so please feel free to ask any follow-up questions to this post in the comments.


Do I have enough memory? Why is my free memory so low? Am I swapping to disk? Can I increase my SGA (db cache) size? Can I add another instance to this server? Are my system resources used optimally? These are all questions that often haunt DBAs. This presentation is The Answer. It covers in detail the different types of memory, how to monitor memory, and how to optimally use it with Oracle. Multiple examples in the presentation demonstrate how certain actions on the database side cause different memory areas to be allocated and used on the OS side. Key underlying differences in operating systems’ approaches to managing memory will be highlighted, with special attention given to Linux, Solaris, and Windows. Using Linux as an example throughout, this presentation explains how to effectively use tools such as “top”, “vmstat” and “/proc/meminfo” to look into a system’s allocation and use of memory.

Below you should see a flash video with me giving the session.

Download this presentation!
iPod video (right-click and Save As . . .)
MP3 audio only

And below you will find the complete contents of the whitepaper. This is intended to be a good overall reference resource for how memory works in Oracle, using Linux as an example.



November 21, 2007

» How to find out the machine ID on various UNIXes

It recently came up that it would be helpful if we had a cheat sheet for finding out the machine ID on any given UNIX. I knew these off the top of my head, but it would be great if people added more in the comments.

HP/HP-UX: /bin/uname -i
IBM/AIX: /bin/uname -m
SGI/IRIX: /sbin/sysinfo -s
Sun/Solaris: /usr/ucb/hostid
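The list above can be rolled into a small dispatcher on the OS name. This is a sketch; the Linux and fallback branches are my own additions, not part of the original cheat sheet.

```shell
# Pick the machine-ID command based on the OS reported by uname -s
# (paths for HP-UX, AIX, IRIX, and Solaris are taken from the list above;
# hostid on the Linux branch is my own, nearest-analogue addition).
case "$(uname -s)" in
  HP-UX)  /bin/uname -i ;;
  AIX)    /bin/uname -m ;;
  IRIX*)  /sbin/sysinfo -s ;;
  SunOS)  /usr/ucb/hostid ;;
  Linux)  hostid ;;
  *)      echo "no machine-id command known for $(uname -s)" >&2 ;;
esac
```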
