[DRBD-user] Low drbd performance, ~50% of raw disk

bd at bc-bd.org
Tue Jan 29 12:59:05 CET 2013



Hello,

I am trying to set up a two-node DRBD/KVM cluster to host a couple of virtual
machines.  However, the IO performance I get is quite low, both inside the
guest and on the host when writing directly to the DRBD device, compared to
what I get on the disk directly.

While performance inside the guest may be out of scope for this list, I am
hoping that improving performance on the DRBD side will also improve it
inside the guest.

All machines in question run Debian squeeze with some backports (kernel, kvm,
drbd8-utils). On the hosts I also tried a vanilla 3.7.4 kernel, which made no
difference.

All numbers measured with: dd if=/dev/zero of=... bs=100M oflag=direct

	direct: 377 MB/s
	drbd:   177 MB/s
	guest:   67 MB/s
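
A bounded variant of that command (the device path varies per test; count is
added here only to make runs comparable, it is not part of the original
measurements) would look something like:

	dd if=/dev/zero of=/dev/drbd2 bs=100M count=100 oflag=direct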

Besides the low in-guest numbers, I am also seeing an increased load on the
host running the guest, sometimes as high as 60, with nearly 100% iowait.

When I invalidate the data on one node, DRBD resyncs at ~220 MB/s:

	Began resync as SyncTarget (will sync 10485404 KB [2621351 bits set]).
	Resync done (total 47 sec; paused 0 sec; 223092 K/sec)

but it won't reach the configured 250 MB/s.
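
The resync progress and rate can be followed live via /proc/drbd:

	watch -n1 cat /proc/drbd

and, if I read the 8.3 docs right, the rate can also be bumped temporarily at
runtime without touching the config file (value here is just an example):

	drbdsetup /dev/drbd1 syncer -r 300M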

I tried changing

	* al-extents
	* max-buffers
	* max-epoch-size
	* no-tcp-cork
	* sndbuf-size
	* cpu-mask
	* unplug-watermark

but none of them brought a measurable improvement.
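
Changes of this kind can be re-applied to the running resource without
downtime, roughly like this (resource name as in the attached config):

	# after editing the resource file, on both nodes:
	drbdadm adjust debian5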

Hardware:

	Primergy RX300 S7
	Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz
	24GB RAM
	02:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit
	SFI/SFP+ Network Connection (rev 01)
	RAID5 over four 10K SAS drives.
	Controller: D3116

OS:

	Host: 3.2.0-0.bpo.4-amd64
	Guest: 3.2.0-0.bpo.4-686-pae
	DRBD: 8.3.11

NICs:

	iperf -c 192.168.0.2 -t 60
	------------------------------------------------------------
	Client connecting to 192.168.0.2, TCP port 5001
	TCP window size: 23.5 KByte (default)
	------------------------------------------------------------
	[  3] local 192.168.0.4 port 56985 connected with 192.168.0.2 port 5001
	[ ID] Interval       Transfer     Bandwidth
	[  3]  0.0-60.0 sec  62.4 GBytes  8.94 Gbits/sec
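
As far as I understand, DRBD pushes the bulk data over a single TCP
connection, so the single-stream number above should be the relevant one;
testing with a larger TCP window may still be worth a try, e.g.:

	iperf -c 192.168.0.2 -t 60 -w 1M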

Writing locally to the RAID5:

	dd if=/dev/zero of=/dev/data/dd bs=100M oflag=direct
	dd: writing `/dev/data/dd': No space left on device
	103+0 records in
	102+0 records out
	10737418240 bytes (11 GB) copied, 28.4926 s, 377 MB/s

Writing to a DRBD device from the host:

	dd if=/dev/zero of=/dev/drbd2 bs=100M oflag=direct
	dd: writing `/dev/drbd2': No space left on device
	123+0 records in
	122+0 records out
	12884471808 bytes (13 GB) copied, 75.5634 s, 171 MB/s
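
To separate local overhead from replication overhead, it may be worth
repeating this with the peer disconnected, so that no data crosses the
network (the resource name for drbd2 is not shown here; the attached config
covers drbd1, so substitute accordingly):

	drbdadm disconnect <resource>   # on the primary
	dd if=/dev/zero of=/dev/drbd2 bs=100M oflag=direct
	drbdadm connect <resource>      # resync catches the peer up afterwards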

Writing to disk from inside the guest, no DRBD:

	dd if=/dev/zero of=/dev/sys/dd bs=100M oflag=direct
	dd: writing `/dev/sys/dd': No space left on device
	41+0 records in
	40+0 records out
	4294967296 bytes (4.3 GB) copied, 30.8856 s, 139 MB/s

Writing to disk from inside the guest, located on DRBD:

	dd if=/dev/zero of=/dev/sys/dd bs=100M oflag=direct
	dd: writing `/dev/sys/dd': No space left on device
	41+0 records in
	40+0 records out
	4294967296 bytes (4.3 GB) copied, 67.7795 s, 63.4 MB/s
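
dd with oflag=direct keeps only one request in flight, so these numbers are
as much about latency as about bandwidth. A quick cross-check with several
outstanding requests, e.g. with fio (parameters are only a starting point,
and it overwrites the device just like dd does), might look like:

	fio --name=seqwrite --filename=/dev/drbd2 --rw=write --bs=1M \
		--direct=1 --ioengine=libaio --iodepth=32 \
		--runtime=30 --time_based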

-- 
Q:	What is purple and conquered the world?
A:	Alexander the Grape.
-------------- next part --------------
resource debian5 {
  net {
    sndbuf-size 10M;
    max-buffers 32K;
    max-epoch-size 20000;
    unplug-watermark 32K;

    allow-two-primaries;

    after-sb-0pri discard-zero-changes;
    after-sb-1pri consensus;
    after-sb-2pri violently-as0p;

    rr-conflict violently;
  }
  syncer {
    rate 250M;
    al-extents 257;
  }

  device    /dev/drbd1;
  disk      /dev/data/debian5;
  meta-disk internal;

  on debian2 {
    address   192.168.0.2:7791;
  }
  on debian4 {
    address   192.168.0.4:7791;
  }
}
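
Whether these values are actually in effect on the running device can be
checked against the live state, e.g.:

	drbdsetup /dev/drbd1 show
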
-------------- next part --------------
[Attachment scrubbed by the archive: debian5.xml, application/xml, 1988 bytes]
URL: <http://lists.linbit.com/pipermail/drbd-user/attachments/20130129/bb7e9cc6/attachment.xml>

