[DRBD-user] Performance with DRBD + iSCSI

Greg Freemyer greg.freemyer at gmail.com
Wed Feb 21 17:29:22 CET 2007

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On 2/21/07, Ross S. W. Walker <rwalker at medallion.com> wrote:
>
> There has been a lot of back and forth on the lists about the
> performance of iSCSI and DRBD together.
>
> Let me post some information that I have learned through my use of both.
>
> If you are performing disk I/O across a network, then the performance of
> your disk I/O is directly related to the performance of your network.
>
> Discover the round-trip latency for one block-sized packet of data to
> travel from one host to another, say a 4k packet:
>
> 24 packets transmitted, 24 received, 0% packet loss, time 23022ms
> rtt min/avg/max/mdev = 0.164/0.226/0.290/0.038 ms, pipe 2
>
> OK, so a 4096-byte packet takes 0.226 ms round trip on average. Looks
> like my network needs some help here.
>
> Now if I take 1000 ms and divide it by 0.226 I get about 4425, which is
> the maximum number of IOPS that my network setup can perform at this
> block size. At 4096 bytes per operation, that equates to a maximum
> throughput of 18,123,894 bytes/s, or roughly 17.2 MB/s.
>
> This is because each 4k I/O operation, the standard size for file system
> operations, needs to travel across the network. The lower you can get
> those ping times, the higher your throughput will be, up to what your
> disk system can sustain at that block size.
>
> Now this is true for DRBD by itself or iSCSI by itself, but if you
> combine the two, take whatever numbers you calculated above and divide
> them by 2, as each I/O will now have to travel across the network twice:
> once from initiator to target, then again from target to replica.
>
> Remember that with DRBD Protocol C (synchronous I/O), an I/O operation
> will not return (and thus the next operation will not execute) until the
> write has been committed to disk on both sides, which means that
> performance will be network bound.

Ross's analysis above totally ignores the TCP sliding window of
unacknowledged packets. Surely an analysis of iSCSI cannot ignore that.

I don't know about DRBD Protocol C, but if the user-space writes are
several times the size of a packet, it would seem DRBD could send the
entire user-space write to the secondary without having to wait for each
individual packet to be acked.

If that is the case, the above argues that calls into DRBD Protocol C
should use as large a write as possible.
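As a rough illustration of why write size matters, here is a toy model only:
it assumes each submitted write costs one RTT plus its transfer time on an
assumed gigabit link, and it ignores TCP windowing, protocol overhead and
disk latency entirely:

# Toy model of synchronous replication throughput vs. write size.
# Assumption: each submitted write pays one full RTT plus its transfer
# time on a ~1 Gbit/s link; everything else is ignored.

rtt_s = 0.226e-3            # round-trip time in seconds (from the ping above)
wire_bytes_per_s = 125e6    # ~1 Gbit/s of usable bandwidth (assumed)

for write_kib in (4, 64, 256, 1024):
    size = write_kib * 1024
    per_write = rtt_s + size / wire_bytes_per_s   # latency + transfer time
    print(f"{write_kib:5d} KiB writes -> ~{size / per_write / 2**20:6.1f} MB/s")

With those assumptions a 4 KiB write tops out around 15 MB/s, while a 1 MiB
write gets close to wire speed, which is the amortization effect I am
describing.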

Greg
-- 
Greg Freemyer
The Norcross Group
Forensics for the 21st Century


