<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Hi,<br>
<br>
You have to realize how many abstraction layers you have going on
there. It goes something like this: disk, software RAID on top of the
disks, LVM physical volume on top of the software RAID, volume group
on top of the physical volume, logical volume on top of the volume
group, and DRBD on top of the logical volume. Does that sound like a
lot? It is, and that's just one node. Once the secondary node comes
into play, every write also has to cross the network and then travel
back down the same chain to the peer's disk. I'm surprised you got up
to 80MB/s, but then again the CPU is doing a lot of work to compensate
here.<br>
<br>
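If you want to see all of those layers at a glance, something like the
following walks the stack from the bottom up (just a sketch, run as
root):<br>
<pre>
# cat /proc/mdstat       # software RAID arrays and their member disks
# pvs; vgs; lvs          # LVM physical volumes, volume groups, logical volumes
# dmsetup ls --tree      # device-mapper stacking under /dev/mapper
# cat /proc/drbd         # DRBD minors and their current state
</pre>
<br>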
More to the point, what replication protocol are you using for the
setup? If you're using protocol C, for instance, a write isn't
considered complete until it has reached stable storage on both nodes,
so it has to wait on a lot of things; it is the safest protocol, but
also the slowest, from what I know. Changing the protocol <u>might</u>
help, but the main thing is the layer overload you have going on
there. And to add to that, what happens when a process requires CPU
time? What about 100 processes?<br>
<br>
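For reference, the replication protocol is set per resource in
drbd.conf (8.3 syntax); the resource name, hostnames and addresses
below are only placeholders, the backing disk is the LV from your
mail:<br>
<pre>
resource r0 {
  protocol C;   # C = waits for the peer's disk, B = for the peer's
                # memory, A = only for the local disk and send buffer
  on nodeA {
    device    /dev/drbd0;
    disk      /dev/mapper/obrazy2-pokus;
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on nodeB {
    device    /dev/drbd0;
    disk      /dev/mapper/obrazy2-pokus;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
</pre>
How far a write has to travel before it is acknowledged is exactly
where the safety-versus-speed trade-off comes from.<br>
<br>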
Regards,<br>
<br>
Dan<br>
<br>
<a class="moz-txt-link-abbreviated" href="mailto:trekker.dk@abclinuxu.cz">trekker.dk@abclinuxu.cz</a> wrote:
<blockquote cite="mid:4C8A0CBF.9010609@abclinuxu.cz" type="cite">Hello,
<br>
<br>
I'm trying to use DRBD as storage for virtualised guests. DRBD is
created on top of an LVM partition, and all LVM partitions reside on a
single software RAID1 array (2 disks).
<br>
<br>
In this setup the DRBD is supposed to operate in standalone mode most
of the time; network connections will only come into play when
migrating a guest to another host. (That's why I can't use RAID - DRBD
- LVM: there can be more than 2 hosts and guests need to be able to
migrate to any of them.)
<br>
<br>
So I did a very simple benchmark: I created an LVM partition, wrote
into it and then read the data back:
<br>
<br>
# dd if=/dev/zero of=/dev/mapper/obrazy2-pokus \
<br>
bs=$((1024**2)) count=16384
<br>
# dd if=/dev/mapper/obrazy2-pokus of=/dev/null \
<br>
bs=$((1024**2)) count=16384
<br>
<br>
Both tests yielded about 80MB/s throughput.
<br>
<br>
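(Side note: dd against the raw LV can report cache-inflated numbers; a
variant along these lines - assuming a coreutils dd that understands
conv=fdatasync and oflag=direct - forces the data to disk before the
rate is printed and drops the page cache before the read test:)
<br>
<pre>
# dd if=/dev/zero of=/dev/mapper/obrazy2-pokus \
    bs=$((1024**2)) count=16384 conv=fdatasync   # fsync before reporting
# dd if=/dev/zero of=/dev/mapper/obrazy2-pokus \
    bs=$((1024**2)) count=16384 oflag=direct     # or bypass the cache entirely
# echo 3 > /proc/sys/vm/drop_caches              # so the read test hits the disks
# dd if=/dev/mapper/obrazy2-pokus of=/dev/null \
    bs=$((1024**2)) count=16384
</pre>
<br>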
Then I created a DRBD device on top of that LVM volume and retried the
test:
<br>
<br>
# /sbin/drbdmeta 248 v08 /dev/mapper/obrazy2-pokus internal create-md
<br>
# drbdsetup 0 disk \
<br>
/dev/mapper/obrazy2-pokus /dev/mapper/obrazy2-pokus \
<br>
internal --set-defaults --create-device
<br>
# drbdsetup 0 primary -o
<br>
<br>
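(The retried test was the same dd write/read pair as above, only
pointed at the DRBD device - assuming minor 0 shows up as /dev/drbd0 -
with /proc/drbd confirming the standalone primary state:)
<br>
<pre>
# cat /proc/drbd      # should show something like cs:StandAlone ro:Primary/Unknown
# dd if=/dev/zero of=/dev/drbd0 \
    bs=$((1024**2)) count=16384
# dd if=/dev/drbd0 of=/dev/null \
    bs=$((1024**2)) count=16384
</pre>
<br>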
Reading performance was the same, but writes dropped to about 35MB/s,
with flushes disabled (drbdsetup ... -a -i -D -m) about 45MB/s.
<br>
<br>
I'd understand that if the device were connected over the network, but
in standalone mode I was expecting DRBD to have roughly the same
performance as the underlying storage.
<br>
<br>
My question: is that drop in write throughput normal, or could there
be some error in my setup that is causing it?
<br>
<br>
System setup is: Debian, kernel 2.6.34 from kernel.org, drbd-utils
8.3.7. Also tested on kernel 2.6.35.4 from kernel.org.
<br>
<br>
Regards,
<br>
J.B.
<br>
_______________________________________________
<br>
drbd-user mailing list
<br>
<a class="moz-txt-link-abbreviated" href="mailto:drbd-user@lists.linbit.com">drbd-user@lists.linbit.com</a>
<br>
<a class="moz-txt-link-freetext" href="http://lists.linbit.com/mailman/listinfo/drbd-user">http://lists.linbit.com/mailman/listinfo/drbd-user</a>
<br>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Dan FRINCU
Systems Engineer
CCNA, RHCE
Streamwide Romania
</pre>
</body>
</html>