[DRBD-user] Performance problem on drbd8 2-node cluster

Marco Marino marino.mrc at gmail.com
Mon May 6 10:44:14 CEST 2019

Hello, I'm using DRBD 8.4.11 on a two-node cluster on top of CentOS 7. Both
servers have the same hardware configuration: same CPU, RAM, disks, etc. More
precisely, each has a MegaRAID LSI SAS 9361-8i controller with a RAID5 volume.
CacheCade is enabled on both controllers, backed by a RAID0 volume of
4x256GB SSD disks.
I'm running the same test with fio on both nodes:

fio --filename=/dev/mapper/vg1-vol2 --direct=1 --rw=randrw --refill_buffers
--norandommap --randrepeat=0 --ioengine=libaio --bs=16k --rwmixread=30
--iodepth=32 --numjobs=32 --runtime=60 --group_reporting --name=16k7030test

On node 1 I get:

Run status group 0 (all jobs):
   READ: bw=259MiB/s (272MB/s), 259MiB/s-259MiB/s (272MB/s-272MB/s),
io=15.2GiB (16.3GB), run=60021-60021msec
  WRITE: bw=605MiB/s (635MB/s), 605MiB/s-605MiB/s (635MB/s-635MB/s),
io=35.5GiB (38.1GB), run=60021-60021msec

I'm running the test after putting node 2 in standby with:
pcs cluster standby  --> on node 2. So no writes go through the
replication network and I can measure the effective speed of the disk.

If I run the same test on node 2, performance is degraded:
Run status group 0 (all jobs):
   READ: bw=101MiB/s (105MB/s), 101MiB/s-101MiB/s (105MB/s-105MB/s),
io=6039MiB (6332MB), run=60068-60068msec
  WRITE: bw=234MiB/s (245MB/s), 234MiB/s-234MiB/s (245MB/s-245MB/s),
io=13.7GiB (14.7GB), run=60068-60068msec

Can someone give me advice? Why does this happen? Again, the
configuration is the same on both servers. I can check any parameter and
give more details if needed.
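Since both controllers should be identical, one first step might be to dump the MegaRAID settings on each node and diff them, so a mismatched cache policy (write-back vs. write-through, read-ahead, CacheCade association) stands out. This is only a sketch: it assumes the LSI storcli64 utility is installed and the controller is enumerated as /c0; the output file names are placeholders.

```shell
# Hypothetical sketch: dump controller and virtual-drive properties on each
# node, then diff the two dumps. Assumes storcli64 is in PATH and the
# MegaRAID controller is /c0 (check with: storcli64 show).
storcli64 /c0 show all       > /tmp/ctrl-$(hostname).txt   # controller-level settings
storcli64 /c0/vall show all >> /tmp/ctrl-$(hostname).txt   # per-VD cache policies

# Copy the dump from one node to the other, then compare:
#   scp node2:/tmp/ctrl-node2.txt /tmp/
#   diff /tmp/ctrl-node1.txt /tmp/ctrl-node2.txt
```

Differences in the virtual drive's cache policy (e.g. WriteBack on one node but WriteThrough on the other, perhaps due to a failed or missing BBU) would be consistent with the roughly 2.5x gap in the fio numbers above.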

Thank you