[DRBD-user] DRBD on top of LVM - 50% performance drop

Dan Frincu dfrincu at streamwide.ro
Fri Sep 10 13:23:48 CEST 2010



Hi,

You have to realize how many abstraction layers you have going on there. 
It goes something like this: disk, software RAID on top of the disk, an 
LVM physical volume on top of the software RAID, a volume group on top 
of the physical volume, a logical volume on top of the volume group, 
and DRBD on top of the logical volume. Does that sound like a lot? It 
is, and that's just one node. Now bring the secondary node into play, 
with a network connection to carry your traffic there, and then go back 
down the chain to the disk for every write. I'm surprised you got up to 
80MB/s, but then again, the CPU is compensating for a lot here.
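As an aside, when benchmarking with dd it's worth making sure the numbers reflect the storage stack and not the page cache. A quick sketch of the difference, using a scratch file in /tmp as a stand-in for the real block device (the path and sizes here are made up for illustration):

```shell
# Scratch file standing in for the benchmark target (hypothetical path).
TARGET=/tmp/dd-cache-demo

# Buffered write: the reported rate can be inflated, because the data
# may still be sitting in the page cache when dd exits.
dd if=/dev/zero of="$TARGET" bs=1M count=64 2>/dev/null

# conv=fdatasync forces the data out to stable storage before dd
# reports, so the rate reflects the real device stack, not RAM.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>/dev/null

SIZE=$(stat -c %s "$TARGET")   # 64 MiB should have been written
rm -f "$TARGET"
```

On a raw block device you'd use oflag=direct or conv=fdatasync the same way; without one of them, dd is partly measuring RAM.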

On point: what protocol are you using for the setup? If you're using 
protocol C, for instance, the write operation isn't committed until a 
lot of things have happened; it is the safest protocol, but also the 
slowest, from what I know. Changing the protocol _might_ help, but the 
main thing is the layer overload you have going on there. And to add to 
that, what happens when a process requires CPU time? What about 100 
processes?
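For reference, the protocol is chosen per resource in drbd.conf. A minimal sketch of such a resource; the resource name, host names, addresses and disk path below are made-up examples, not taken from your setup (see man drbd.conf for your version):

```
resource r0 {
  protocol A;    # A = async (fastest), B = semi-sync, C = fully sync (safest)

  on alpha {
    device    /dev/drbd0;
    disk      /dev/mapper/obrazy2-pokus;
    address   192.168.0.1:7788;
    meta-disk internal;
  }
  on beta {
    device    /dev/drbd0;
    disk      /dev/mapper/obrazy2-pokus;
    address   192.168.0.2:7788;
    meta-disk internal;
  }
}
```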

Regards,

Dan

trekker.dk at abclinuxu.cz wrote:
> Hello,
>
> I'm trying to use DRBD as storage for virtualised guests. DRBD is 
> created on top of an LVM partition, and all LVM partitions reside on 
> a single software RAID1 array (2 disks).
>
> In this setup DRBD is supposed to operate in standalone mode most of 
> the time; network connections only come into play when migrating a 
> guest to another host. (That's why I can't use RAID - DRBD - LVM: 
> there can be more than 2 hosts, and guests need to be able to migrate 
> anywhere.)
>
> So I did a very simple benchmark: I created an LVM partition, wrote 
> into it, and then read the data back:
>
> # dd if=/dev/zero of=/dev/mapper/obrazy2-pokus \
>  bs=$((1024**2)) count=16384
> # dd if=/dev/mapper/obrazy2-pokus of=/dev/null \
>  bs=$((1024**2)) count=16384
>
> Both tests yielded about 80MB/s throughput.
>
> Then I created a DRBD on top of that LVM and retried the test:
>
> # /sbin/drbdmeta 248 v08 /dev/mapper/obrazy2-pokus internal create-md
> # drbdsetup 0 disk \
>  /dev/mapper/obrazy2-pokus /dev/mapper/obrazy2-pokus \
>  internal --set-defaults --create-device
> # drbdsetup 0 primary -o
>
> Read performance was the same, but writes dropped to about 35MB/s; 
> with flushes disabled (drbdsetup ... -a -i -D -m), about 45MB/s.
>
> I'd understand that if the device were connected over the network, 
> but in standalone mode I was expecting DRBD to have roughly the same 
> performance as the underlying storage.
>
> My question: is that drop in write throughput normal, or could there 
> be some error in my setup that is causing it?
>
> System setup is: Debian, kernel 2.6.34 from kernel.org, drbd-utils 
> 8.3.7. Also tested on kernel 2.6.35.4 from kernel.org.
>
> Regards,
> J.B.
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user

-- 
Dan FRINCU
Systems Engineer
CCNA, RHCE
Streamwide Romania


