[Drbd-dev] Huge latency issue with 8.2.6

Graham, Simon Simon.Graham at stratus.com
Tue Aug 12 18:31:42 CEST 2008


We've been benchmarking DRBD 8.2.6 and have found that certain benchmarks (SQL) show absolutely terrible performance -- a factor of 100 worse than the non-DRBD case (30 transactions per second versus 3000). The problem goes away when we power off the secondary system, so it seems likely to be related to the network component. After analyzing network traces, we found the following:

1. When we are doing 30 TPS, we're also doing about 30 1K writes/s - the conclusion is
   that one transaction corresponds to one 1K (two-block) write. This means we are seeing
   a write-to-write time of around 33ms. To hit the 3000 TPS mark, we'd need to be handling
   3000 1K writes/s, which means a total write-to-write time of about 333us.

2. When we do a tcpdump on the node running the benchmark, we consistently see the following
   DRBD protocol exchange:
   . Node issues barrier + 1K write + unplug remote in a single packet
   . Receives barrier ack on the meta-data connection 30-130us later
   . Receives data ack on the meta-data connection ~250us after the original request was issued
   . Receives the TCP-level ACK on the data connection 35-40ms later
   . The next write is not sent on the wire for 35-40ms

3. tcpdump on the other node shows that the time between sending the barrier ack and sending
   the data ack is around 120us -- this is basically the disk write time.

Conclusion 1 -- network latency has nothing to do with the horrendous performance we are seeing. What's more, we are adding (250 - write_time)us to the overall time to write the block; since the disk write time is on the order of 120us, we are adding around 130us to the total write time. That alone should still allow a maximum possible TPS of around 4000 (1s / 250us per write)...

Conclusion 2 -- the problem here has to do with the time it takes the secondary to send the TCP ACK.

I should also note that we are running this system with GSO disabled -- which in turn means that the zero-copy writes done by DRBD actually turn into non-zero-copy writes inside TCP (if you look at tcp_sendpage() in net/ipv4/tcp.c, you will see that zero-copy is disabled unless scatter-gather is enabled, and scatter-gather is disabled if you don't have checksum offload enabled).
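For reference, the gate in 2.6-era tcp_sendpage() looks roughly like the excerpt below (paraphrased from memory, so check your exact tree): it falls back to the copying sock_no_sendpage() path unless the route supports both scatter-gather and some form of checksum offload.

    ssize_t tcp_sendpage(struct socket *sock, struct page *page, int offset,
                         size_t size, int flags)
    {
            struct sock *sk = sock->sk;

            /* No zero-copy unless the device can do scatter-gather AND
             * some form of checksum offload; otherwise copy into skbs. */
            if (!(sk->sk_route_caps & NETIF_F_SG) ||
                !(sk->sk_route_caps & NETIF_F_ALL_CSUM))
                    return sock_no_sendpage(sock, page, offset, size, flags);

            /* ... zero-copy sendpage path ... */
    }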

So, my theory is that the first write completes very quickly (around 300us), then the app issues the second write, which is passed to TCP -- BUT TCP does not send it because it is waiting for send buffers to be freed, which doesn't happen until the TCP-level ACK is received on the data connection. In addition, the data connection is basically unidirectional -- all the other DRBD protocol messages flow on the meta-data connection.

So, what we are running into here is the TCP delayed-ACK timer on the secondary node -- since we never send any data from the secondary to the primary on this connection, there is nothing for the ACK to piggyback on, and we have to wait for the timer to expire before the data is acknowledged.
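The 35-40ms gap in the traces lines up with the lower bound of the delayed-ACK timer; in 2.6-era kernels include/net/tcp.h defines (quoted from memory, so check your tree):

    #define TCP_DELACK_MAX  ((unsigned)(HZ/5))     /* maximal delay before sending an ACK: 200ms */
    #if HZ >= 100
    #define TCP_DELACK_MIN  ((unsigned)(HZ/25))    /* minimal delay before sending an ACK: 40ms  */
    #else
    #define TCP_DELACK_MIN  4U
    #endif

With no reverse traffic to piggyback on and only a single small segment outstanding, the secondary sits on the ACK for roughly TCP_DELACK_MIN -- i.e. the ~40ms we measure.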

I have prototyped a change that uses the TCP_QUICKACK socket option to force ACKs to be sent when the UnplugRemote message is received, and this has made a huge difference -- in our original tests (which, in the interests of full disclosure, were run against DRBD 8.0) we saw the performance go from 30 TPS to 1600+ TPS.
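I won't reproduce the attached patch inline, but a minimal sketch of the idea -- assuming an in-kernel struct socket for the data connection and the generic kernel_setsockopt() helper rather than whatever wrapper DRBD actually uses -- would be called from the receiver once the UnplugRemote packet has been consumed:

    #include <linux/net.h>
    #include <linux/socket.h>
    #include <linux/tcp.h>

    /* Sketch only, not the attached patch: tell TCP to ACK the data socket
     * now instead of waiting for the delayed-ACK timer to fire. */
    static void force_quick_ack(struct socket *sock)
    {
            int val = 1;

            /* TCP_QUICKACK is not sticky -- the stack drops back into
             * delayed-ACK mode after a while, so this has to be repeated
             * for every UnplugRemote (i.e. once per write burst). */
            (void) kernel_setsockopt(sock, SOL_TCP, TCP_QUICKACK,
                                     (char *)&val, sizeof(val));
    }

The non-sticky behaviour of TCP_QUICKACK is why tying it to UnplugRemote matters, rather than setting it once at connect time.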

Forcing the ACK to be sent when the UnplugRemote is received seems like the right fix to me, but please comment... a candidate git patch against the HEAD of 8.2 is attached.

Simon

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0001-Ensure-data-is-ACK-d-at-TCP-level-in-a-timely-fashio.patch
Type: application/octet-stream
Size: 1413 bytes
Desc: 0001-Ensure-data-is-ACK-d-at-TCP-level-in-a-timely-fashio.patch
Url : http://lists.linbit.com/pipermail/drbd-dev/attachments/20080812/9a675990/0001-Ensure-data-is-ACK-d-at-TCP-level-in-a-timely-fashio.obj

