[DRBD-user] Poor fsync performance

Andy Dills andy at xecu.net
Fri Jan 11 06:49:04 CET 2013

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Thu, 10 Jan 2013, Andy Dills wrote:

> I'm new to drbd and still reading, so I'm confused about this page:
> 
> http://www.drbd.org/users-guide-8.3/s-throughput-overhead-expectations.html
> 
> If what that page says is true, well then of course we're being 
> constrained by the bandwidth of our bonded connections.
> 
> But what's not clear is whether the constraint created by the network 
> bandwidth is negated by doing this:
> 
>   disk {
>     no-disk-barrier;    
>     no-disk-flushes;
>     no-md-flushes;
>   }
> 
> I would think so? But I don't yet fully understand what disabling those 
> safeguards implies; all I know is that since my RAIDs are BBU-protected, 
> I should be using these options.

Sorry to follow up my own post, but I have some additional data points to 
make my point clearer.

I've done some tuning on the network layer, and I now have these results:

Connected:
# dd if=/dev/zero of=/mnt/file.tmp bs=512M count=1 oflag=direct
...
536870912 bytes (537 MB) copied, 2.54603 s, 211 MB/s

Ok, excellent, I'll take 211MB/s; that's around a 50MB/s increase from 
raising my MTU to support jumbo frames and changing the net section to include:
    sndbuf-size 0;
    unplug-watermark 16;
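
(For reference, roughly how those pieces fit together; bond0 and the exact 
placement inside the resource definition are illustrative here, not copied 
from my actual config:)

    net {
        sndbuf-size 0;        # 0 lets the kernel auto-tune the TCP send buffer
        unplug-watermark 16;  # start processing the backing device's queue
                              # once this many requests are pending
    }

and, on both nodes, jumbo frames on the replication link:

# ip link set dev bond0 mtu 9000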

But then disconnected:

536870912 bytes (537 MB) copied, 1.61062 s, 333 MB/s

So, it seems to me that even with no-disk-barrier and no-disk-flushes, 
when I am connected my write speed is limited to the speed of the 
network connection.

Can somebody confirm or deny that assertion? 

I'm really struggling to understand why fsync performance is so abysmal 
in connected mode vs. unconnected, and whether I need to get some 
10Gbps adapters or add a third GigE link to the bond.
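
For what it's worth, the dd numbers above were taken with oflag=direct; an 
explicitly fsync-based variant of the same test would be something along 
these lines (same test file, conv=fsync forces a final fsync before dd exits):

# dd if=/dev/zero of=/mnt/file.tmp bs=512M count=1 conv=fsync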

Andy

---
Andy Dills
Xecunet, Inc.
www.xecu.net
301-682-9972
---


