[DRBD-user] DRBD and TRIM -- Slow! -- RESOLVED

Lars Ellenberg lars.ellenberg at linbit.com
Mon Aug 7 13:16:22 CEST 2017

On Thu, Aug 03, 2017 at 05:08:47PM +0000, Eric Robinson wrote:
> For anyone else who has this problem, we have reduced the time
> required to trim a 1.3TB volume from 3 days to 1.5 minutes.
> 
> Initially, we used mdraid to build a raid0 array with a 32K chunk
> size. We initialized it as a drbd disk, synced it, built an lvm
> logical volume on it, and created an ext4 filesystem on the volume.
> Creating the filesystem and trimming it took 3 days (each time, every
> time, across multiple tests).
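
For context, a rough sketch of the stack described above; device and
resource names here are made up:

  # raid0 with a 32K chunk, used as the DRBD backing disk
  mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=32 /dev/sdb /dev/sdc
  drbdadm create-md r0 && drbdadm up r0   # r0 configured with /dev/md0 as its disk
  drbdadm primary --force r0              # kick off the initial sync
  # LVM and ext4 on top of the replicated device
  pvcreate /dev/drbd0
  vgcreate vg_data /dev/drbd0
  lvcreate -l 100%FREE -n data vg_data
  mkfs.ext4 /dev/vg_data/data             # mkfs discards by default; this plus
                                          # fstrim was the three-day step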
> 
> When running lsblk -D, we noticed that the DISC-MAX value for the
> array was only 32K, compared to 4GB for the SSD drive itself. We also
> noticed that the number matched the chunk size. We theorized that the
> small DISC-MAX value was responsible for the slow trim rate across the
> DRBD link. We deleted the array and built a new one with a 4MB chunk
> size. The DISC-MAX value changed to 4MB, which is the max selectable
> chunk size (but still way below the other DISC-MAX values shown in
> lsblk -D). We realized that, when using mdadm, the DISC-MAX value ends
> up matching the array chunk size.
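
For reference, that check is just lsblk -D against the raw SSD and the
md device; something like (hypothetical names):

  lsblk -D /dev/sdb /dev/md0
  # per the numbers above: DISC-MAX ~4G for the raw SSD, but only 32K
  # for the md raid0 -- it tracks the array chunk size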
> 
> Instead of using mdadm to build the array, we used LVM to create a
> striped logical volume and made that the backing device for drbd. Then
> lsblk -D showed a DISC-MAX size of 128MB. Creating an ext4 filesystem
> on it and trimming it took only 1.5 minutes (across multiple tests).
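
A rough sketch of that layout, with LVM doing the striping below DRBD
(names and stripe count are made up):

  pvcreate /dev/sdb /dev/sdc
  vgcreate vg_fast /dev/sdb /dev/sdc
  lvcreate --type striped -i 2 -l 100%FREE -n drbd_backing vg_fast
  lsblk -D /dev/vg_fast/drbd_backing   # DISC-MAX now shows 128MB
  # then point the drbd resource's "disk" option at /dev/vg_fast/drbd_backing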
> 
> Somebody knowledgeable may be able to explain how DISC-MAX affects the
> trim speed, and why the DISC-MAX value is different when creating the
> array with mdadm versus lvm.


Usually, there will be only a single discard / trim / unmap
request in flight.

Earlier, you said an rtt of ~1 ms.
Assuming for a moment that that was the only source of latency,
and ignoring any other possible overhead:

io-depth of 1, 1000 requests per second, 32k per request:
you will max out at ~32 MByte per second.

io-depth of 1, 1000 requests per second, 128 MB per request:
you will max out at ~128 GB per second,
which means you hit some other bottleneck much earlier
(the discard bandwidth of the backing storage...).
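
Spelled out, assuming the ~1 ms rtt above (so roughly 1000 requests per
second at io-depth 1):

  echo "$(( 1000 * 32 )) KB/s"    # 32 KB per discard  -> ~32 MB/s ceiling
  echo "$(( 1000 * 128 )) MB/s"   # 128 MB per discard -> ~128 GB/s, so
                                  # something else becomes the limit first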

Note that DRBD 9.0.8 still has a problem with discards larger than 4 MB,
though (it will hit a protocol error, disconnect, and reconnect).
That is already fixed in git; 9.0.9rc1 contains the fix.

(8.4.10 also works fine there)


-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT
__
please don't Cc me, but send to list -- I'm subscribed


