[DRBD-user] write performance

Anselm Strauss anselm.strauss at id.unibe.ch
Wed Aug 23 17:41:31 CEST 2006



Thanks to all for the replies.

It seems I have identified my problem. One disk on the first server
and both disks on the second one are extremely slow under certain
conditions. According to my benchmarks on the one good disk and to
others I've read, the transfer rate for sequential writes should be
somewhere around 45-50 MB/s. One of the bad disks manages only about
6 MB/s :-( I think it has something to do with nforce4 + SATA. The
whole thing is very confusing: hdparm -t reports speeds of up to
60 MB/s, whereas dd reports only 10 MB/s. I definitely have to
investigate this problem further before I get on with drbd and lustre ...
So, to answer myself: I was wrong, the disk seems to be the bottleneck.
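
For reference, these are the kinds of commands behind those numbers
(device and file names are placeholders, not my exact invocations).
Note that hdparm -t measures buffered sequential reads, while a dd test
like the one below measures sequential writes, so the two figures are
not directly comparable anyway:

  # buffered sequential read speed of the raw device
  hdparm -t /dev/sda

  # sequential write speed; conv=fdatasync makes dd flush to disk
  # before reporting, so the page cache doesn't inflate the number
  dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=1024 conv=fdatasync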

Anselm Strauss


On Aug 23, 2006, at 11:20 AM, Anselm Strauss wrote:

> Hi.
>
> On my drbd system the write performance is far below what I
> expected. I have two Opteron servers connected with gigabit Ethernet,
> and each one has two 500 GB SATA disks. drbd runs on a dedicated
> cross-connect gigabit link. I'm pretty sure that neither the CPU,
> network, RAM nor the disks are the bottleneck.
> For a start I created 2 drbd devices that are primary on one server,
> each device on a separate disk. I exported the devices with lustre to
> another host. The second drbd host just receives the mirrored data
> over the dedicated link and doesn't export anything.
> I get about 20 MB/s for a single drbd device and about 28 MB/s when I
> stripe over both. I also tried multiple drbd devices per disk. I can
> reach 35 MB/s with 4 drbd devices, 2 devices on each disk, but that's
> it. I tried to modify buffers and the protocol, but that seems to be
> the limit.
> When I disable mirroring in drbd ("drbdadm down all" on the second
> server; rough commands are sketched below) I can reach rates of
> 40 MB/s, 85 MB/s and 88 MB/s for the cases above.
> Is this the performance impact I have to expect from drbd? Where
> could the bottleneck be?
> Has anybody reached higher write performance with drbd?
>
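> For reference, the with/without mirroring comparison was done roughly
> like this (the mount point and the dd invocation are placeholders,
> not the exact commands):
>
>   # on the second server: stop replication for all resources
>   drbdadm down all
>
>   # on the first server: /proc/drbd should no longer show cs:Connected
>   cat /proc/drbd
>
>   # sequential write test against a file on a filesystem on top of the
>   # drbd device (don't dd to /dev/drbd0 itself, that destroys its data)
>   dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=2048 conv=fdatasync
>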
> All resource sections in my drbd.conf look like this:
>
> resource lustre1 {
>   protocol C;
>   disk {
>     on-io-error detach;
>   }
>   net {
>     max-buffers 8192;
>     sndbuf-size 512k;
>     max-epoch-size 8192;
>   }
>   syncer {
>     rate 20M;
>     al-extents 1024;
>     group 1;
>   }
>   on tnode1 {
>     device /dev/drbd0;
>     disk /dev/sda9;
>     address 10.0.0.5:7788;
>     meta-disk internal;
>   }
>   on tnode2 {
>     device /dev/drbd0;
>     disk /dev/sda9;
>     address 10.0.0.6:7788;
>     meta-disk internal;
>   }
> }
>
> The documentation says sndbuf-size should not be set above 1M. Is it
> a problem if the sndbuf-size values from multiple resource sections
> add up to more than 1M?
>
> Cheers,
> Anselm Strauss
>



