[DRBD-user] Poor performance of drbd/heartbeat/oracle cluster

Hahn, Klaus klaushahn at siemens.com
Tue Dec 29 11:58:51 CET 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi list,
I've just set up a 2 node cluster:
Hardware: 1 CPU (current Xeon), 12 GB RAM, 1 x 100 Mbit and 1 x 1 Gbit Ethernet
              Storage: RAID 1 (300 GB), partitions for OS, swap and the DRBD device (160 GB).
Software: SLES 11, DRBD 8.2.7, Heartbeat 2.99, Oracle 10.2.0.4

Current drbd settings:

global {
   dialog-refresh       1;
   minor-count  5;
}
common {
   syncer {
      rate      50M;
   }
   net {
      after-sb-1pri     discard-secondary;
      after-sb-2pri     disconnect;
      after-sb-0pri     discard-zero-changes;
   }
   startup {
      degr-wfc-timeout  120;
      wfc-timeout       0;
   }
   handlers {
      split-brain       "/usr/lib/drbd/notify-split-brain.sh root";
   }
}
resource oradata {
   protocol     C;
   disk {
      on-io-error       pass_on;
   }
   on elsserverbeta {
      device    /dev/drbd0;
      address   192.168.5.3:7788;
      meta-disk internal;
      disk      /dev/sda3;
   }
   on elsserveralpha {
      device    /dev/drbd0;
      address   192.168.5.2:7788;
      meta-disk internal;
      disk      /dev/sda3;
   }
}

DRBD uses the Gbit-Ethernet connection for replication.
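The address lines in the resource section above (192.168.5.x) are what bind the replication traffic to that Gbit network. I have also been wondering whether the net section should get larger buffers for a Gbit link; the snippet below is only a sketch with placeholder values I found in list archives, not something I have tested:

   net {
      sndbuf-size     512k;   # TCP send buffer for the replication socket
      max-buffers     8000;   # buffers for incoming data on the peer
      max-epoch-size  8000;   # write requests allowed per epoch/barrier
   }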

The DRBD-device holds the Oracle data (data and redolog files).

The application that runs on top of the HA stack performs a lot of small transactions.
Oracle logs the following warning: "log write time 760 ms, size 8 KB".

If the DRBD stack on the second node (elsserverbeta) is stopped, the application is "fast" (2 to 3 times faster).
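My understanding is that with protocol C every redo-log write has to reach stable storage on both nodes before Oracle's commit returns, so each 8 KB log write pays the network round trip plus the peer's disk latency; that would also explain why things speed up when the peer is down. One thing I was considering trying (only as an experiment, the value below is a guess taken from list archives, not a tested recommendation) is enlarging the activity log so DRBD has to update its metadata less often:

   syncer {
      al-extents   3389;   # larger activity log -> fewer meta-data updates
   }

Switching to protocol B would trade some of the synchronous guarantee for lower latency, but I would rather stay on protocol C if possible.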

Any ideas?

Regards, Klaus

