Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi Digimer,
Below are my qperf results between the two nodes. The TCP_BW numbers look
good enough, but something is odd with the UDP_BW results, especially the
recv_bw on eth1. What do you think? 192.168.107.0/24 is on eth0 and
10.10.130.0/24 is on eth1; DRBD is currently using eth1.
[root@db-2 ~]# qperf -t 60 --use_bits_per_sec 192.168.107.13 tcp_bw
tcp_bw:
bw = 8.81 Gb/sec
[root@db-2 ~]# qperf -t 60 --use_bits_per_sec 192.168.107.13 udp_bw
udp_bw:
send_bw = 13.6 Gb/sec
recv_bw = 5.82 Gb/sec
[root@db-2 ~]# qperf -t 60 --use_bits_per_sec 10.10.130.9 tcp_bw
tcp_bw:
bw = 8.47 Gb/sec
[root@db-2 ~]# qperf -t 60 --use_bits_per_sec 10.10.130.9 udp_bw
udp_bw:
send_bw = 13.2 Gb/sec
recv_bw = 79.6 Mb/sec
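
One way I am thinking of narrowing this down (assuming qperf's -m/--msg_size
option and that ethtool is available in the guests) is to rerun the UDP test
with datagrams that fit in a single 1500-byte frame, and to compare the MTU
and offload settings on the two interfaces:

[root@db-2 ~]# qperf -t 60 -m 1400 --use_bits_per_sec 10.10.130.9 udp_bw
[root@db-2 ~]# ip link show eth1                    # check the MTU on the DRBD link
[root@db-2 ~]# ethtool -k eth1 | grep -E 'offload|segmentation'

If recv_bw on eth1 recovers with the smaller messages, the drop is probably
fragmentation or an offload setting on the virtio-net/bond path rather than
anything DRBD itself is doing.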
Best regards,
On Tue, Feb 7, 2017 at 2:47 PM, Digimer <lists@alteeve.ca> wrote:
> On 06/02/17 02:08 PM, Lazuardi Nasution wrote:
> > Hi,
> >
> > I'm new to DRBD. I'm trying to set up dual-primary nodes (VMs with
> > virtio-net on top of a bond of dual 10GbE links) with the following
> > resource config.
> >
> > resource db {
> >     on db-1 {
> >         volume 0 {
> >             device /dev/drbd0 minor 0;
> >             disk /dev/vdc1;
> >             meta-disk internal;
> >         }
> >         address ipv4 10.10.130.9:7788;
> >     }
> >     on db-2 {
> >         volume 0 {
> >             device /dev/drbd0 minor 0;
> >             disk /dev/vdc1;
> >             meta-disk internal;
> >         }
> >         address ipv4 10.10.130.10:7788;
> >     }
> >     options {
> >         on-no-data-accessible io-error;
> >     }
> >     net {
> >         protocol C;
> >         allow-two-primaries yes;
> >         after-sb-0pri discard-zero-changes;
> >         after-sb-1pri discard-secondary;
> >         after-sb-2pri disconnect;
> >         sndbuf-size 1M;
> >         rcvbuf-size 2M;
> >         max-buffers 131072;
> >         max-epoch-size 20000;
> >         cram-hmac-alg sha1;
> >         shared-secret db;
> >     }
> >     disk {
> >         on-io-error detach;
> >         disk-flushes no;
> >         disk-barrier no;
> >         resync-rate 1G;
> >         al-extents 257;
> >         c-plan-ahead 8;
> >         c-fill-target 25M;
> >         c-max-rate 1G;
> >         c-min-rate 100M;
> >     }
> >     startup {
> >         wfc-timeout 30;
> >         outdated-wfc-timeout 20;
> >         degr-wfc-timeout 30;
> >         become-primary-on both;
> >     }
> > }
> >
> >
> > I have tried changing some of the variables, but no matter what I do,
> > the performance stays around 70 MB/s, as in the dd result below.
> >
> > [root@db-1 ~]# dd if=/dev/zero of=/dev/drbd0 bs=4194304 count=1000
> > 1000+0 records in
> > 1000+0 records out
> > 4194304000 bytes (4.2 GB) copied, 61.7918 s, 67.9 MB/s
> >
> >
> > The same test against the backing storage gives around 700 MB/s.
> > What should I do in this case?
> >
> > Best regards,
>
> There are a few things here:
>
> 1. DRBD has sensible defaults. Start by dramatically simplifying your
> config to only the specifics you need. Tune later, after you have a
> baseline. I suspect you'll find your tuning doesn't change much, or
> often actually hurts performance.
>
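
Understood on simplifying first. For the record, this is roughly the
stripped-down resource I plan to retest with; it keeps only the hosts,
devices, addresses and shared secret from the config above and leaves
everything else at the DRBD defaults (a sketch, not a verified config):

resource db {
    net {
        protocol C;
        allow-two-primaries yes;   # still needed for dual-primary
        cram-hmac-alg sha1;
        shared-secret db;
    }
    on db-1 {
        volume 0 {
            device /dev/drbd0 minor 0;
            disk /dev/vdc1;
            meta-disk internal;
        }
        address ipv4 10.10.130.9:7788;
    }
    on db-2 {
        volume 0 {
            device /dev/drbd0 minor 0;
            disk /dev/vdc1;
            meta-disk internal;
        }
        address ipv4 10.10.130.10:7788;
    }
}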
> 2. Your test is flawed because dd goes through the page cache unless you
> specify dsync (oflag=dsync). Set that, and also use a larger test write
> (I'd recommend a minimum of 2x RAM).
>
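
On the dd flags: for the retest I will take the page cache out of the
picture and write at least twice the VM's RAM, something like the line
below (the count assumes 16 GB of RAM in the guest, so about 32 GiB
written, and assumes /dev/drbd0 is large enough; adjust to the real sizes):

[root@db-1 ~]# dd if=/dev/zero of=/dev/drbd0 bs=4M count=8192 oflag=dsync

If per-write syncing with oflag=dsync turns out to be overly pessimistic,
oflag=direct (or plain buffered writes plus conv=fsync) would be the
alternative to compare against.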
> 3. You also need to test the network connection between the two nodes.
> Use iperf or similar to confirm you are actually getting the link speed
> you expect.
>
>
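
On the network side, the qperf numbers at the top of this mail are my first
attempt at exactly that. If iperf is preferable I can redo it along these
lines (assuming iperf3 is installed on both nodes; server on db-1, client
on db-2, as with the qperf runs above):

[root@db-1 ~]# iperf3 -s                               # listen on db-1
[root@db-2 ~]# iperf3 -c 10.10.130.9 -t 60             # TCP over eth1
[root@db-2 ~]# iperf3 -c 10.10.130.9 -t 60 -u -b 10G   # UDP at a 10 Gbit/s target rate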
> --
> Digimer
> Papers and Projects: https://alteeve.com/w/
> "I am, somehow, less interested in the weight and convolutions of
> Einstein’s brain than in the near certainty that people of equal talent
> have lived and died in cotton fields and sweatshops." - Stephen Jay Gould
>