[DRBD-user] drbdmanage pure client write speed
Brice CHAPPE
bricechappe at gmail.com
Mon Sep 24 16:36:51 CEST 2018
Hi list!
I have a three-node drbdmanage cluster.
Two nodes act as storage backends (S1/S2).
One node is a satellite pure client (diskless, for future Nova usage).
The storage backends and the satellite pure client node are connected over a 20GB/s LACP network.
So when I benchmark DRBD locally on a storage node, with both nodes connected:
dd if=/dev/zero of=/dev/drbd104 bs=1M count=512 oflag=direct
I get almost 680 MB/s => that is OK for me.
Then I assign the resource to the satellite node and run the same test there:
dd if=/dev/zero of=/dev/drbd104 bs=1M count=512 oflag=direct
I get 420 MB/s => why?
If I run the same test on the satellite node with the resource disconnected on one storage backend:
dd if=/dev/zero of=/dev/drbd104 bs=1M count=512 oflag=direct
I get 650 MB/s => that is OK for me.
The 20GB network can carry those two flows between the storage nodes and the satellite.
I don't understand where the bottleneck or misconfiguration is.
(Reads are balanced at 800 MB/s, but I haven't tried other settings to push that higher.)
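A back-of-the-envelope check of the network load (a sketch; it assumes that with Protocol C a diskless client sends each write over the network to both backends in parallel, and the 420 MB/s figure is the dd result above):

```shell
# Writes from a diskless client cross the network twice: once to each
# storage backend, and Protocol C waits for both acknowledgements.
device_mb_s=420                                    # observed dd throughput on the satellite
client_tx_mb_s=$(( 2 * device_mb_s ))              # data leaving the client's NICs
client_tx_gbit_s=$(( client_tx_mb_s * 8 / 1000 ))  # rough MB/s -> Gbit/s (truncated)
echo "client TX: ${client_tx_mb_s} MB/s (~${client_tx_gbit_s} Gbit/s)"
```

One thing worth checking before blaming DRBD: LACP hashes each TCP connection onto a single member link, so the two replication streams may end up sharing one physical link depending on the hash policy.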
Schema:

            -------------
            - Satellite -
            -------------
                  ||
         ----------------------
         -       Switch       -
         ----------------------
           ||              ||
         ------          ------
         - S1 -          - S2 -
         ------          ------
Cfg:
Protocol C
al-extents 6007;
md-flushes no;
disk-barrier no;
disk-flushes no;
All other settings are defaults (I got good results with these).
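For reference, the non-default settings listed above would sit in the resource file roughly like this (a sketch; the resource name r104 and the section layout are my assumptions, not taken from the post, and drbdmanage generates the node/volume sections itself):

```
resource r104 {
    net {
        protocol C;
    }
    disk {
        al-extents   6007;
        md-flushes   no;
        disk-barrier no;
        disk-flushes no;
    }
    # "on <host>" / volume sections omitted; drbdmanage manages them
}
```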
If you have any suggestions I will be very happy.
Thanks!