[DRBD-user] Significant write performance degradation LVM vs DRBD

Mark Watts m.watts at eris.qinetiq.com
Fri Jan 23 17:01:35 CET 2009


We've got a pair of HP ProLiant DL380 G5 servers with 6x146GB 2.5" SAS disks.
We've configured the drives, on a P400 raid controller, as a 6-disk RAID-10 (A 
stripe across three 2-disk mirrors).
A dedicated X-Over cable connects eth1 on each server, for DRBD mirroring.
Note: The second node is not configured at this time.

CentOS 5.2 is installed on the boxes and the array is configured as follows:
(Filesystems are ext3)

	/dev/cciss/c0d0p1	/boot	100MB
	/dev/cciss/c0d0p2	LVM
		/dev/mids/root	6GB
		/dev/mids/swap	1GB
		/dev/mids/shared	400GB

/dev/mids/shared then has DRBD configured ontop of it:

#### drbd.conf ####
common {
    syncer {
        rate             100M;
    }
}

resource r0 {
    protocol               C;
    on server1 {
        device           /dev/drbd0;
        disk             /dev/mids/shared;
        meta-disk        internal;
    }
    on server2 {
        device           /dev/drbd0;
        disk             /dev/mids/shared;
        meta-disk        internal;
    }
    net {
        cram-hmac-alg    sha1;
        shared-secret    REMOVED;
        after-sb-0pri    disconnect;
        after-sb-1pri    disconnect;
        after-sb-2pri    disconnect;
        rr-conflict      disconnect;
    }
    disk {
        on-io-error      detach;
    }
    syncer {
        al-extents       257;
    }
    startup {
        wfc-timeout        0;
        degr-wfc-timeout 120;
    }
}

The following test was used to obtain a write average:

# for loop in `seq 1 10`; do echo "Writing to /empty, loop $loop"; \
      dd if=/dev/zero of=/empty bs=$(( 1024 * 1024 * 1024 )) count=4; done
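(As an aside, a variant of that loop which forces each write out to stable storage before dd reports its rate may give numbers less influenced by the page cache. This is only a sketch, not the test actually run; the target path, block size, and count below are placeholders.)

```shell
# Hypothetical cache-independent variant of the loop above.
# conv=fdatasync makes dd flush to stable storage before printing its
# throughput, so RAM caching does not inflate the figure.
TARGET=/tmp/ddtest.bin            # stand-in for /empty or a file on /shared
for loop in $(seq 1 3); do
    echo "Writing to $TARGET, loop $loop"
    dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
done
rm -f "$TARGET"
```

(oflag=direct would be an alternative, bypassing the cache entirely, but it requires the target filesystem to support direct I/O.)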

When writing to the / volume the average is 277 MB/s

When writing to the /shared volume the average is 65.1 MB/s

When writing to the /shared volume with no-disk-flushes and no-md-flushes 
enabled the average is 67.4 MB/s
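(For reference, those two options would sit in the disk section of drbd.conf; this is a sketch of where they go, using the DRBD 8.x option names, not a copy of the config actually tested.)

```
    disk {
        on-io-error      detach;
        no-disk-flushes;
        no-md-flushes;
    }
```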

Can anyone suggest why DRBD causes such a drastic slow-down when the data is 
ultimately written to the same set of spindles? The only difference is the 
addition of the DRBD layer.


Mark Watts BSc RHCE MBCS
Senior Systems Engineer
QinetiQ Applied Technologies
GPG Key: http://www.linux-corner.info/mwatts.gpg
