[DRBD-user] DRBD module causes severe disk performance decrease

Lars Ellenberg lars.ellenberg at linbit.com
Wed Dec 24 13:11:09 CET 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Tue, Dec 23, 2008 at 04:16:22PM -0700, Sam Howard wrote:
> Lars,
> 
> > first, _why_ did you have to run
> > a recovery on your software raid, at all?
> 
> The re-sync was needed because I had saved a copy of the data "disk" during
> the DRBD and kernel upgrades; once everything looked good, I needed to sync
> up my data disks again.
> 
> > second, _any_ io on the devices under md recovery
> > will cause "degradation" of the sync.
> > especially random io, or io causing it to seek often.
> 
> My main concern with the explanation that the re-sync is slow because *any*
> disk I/O slows it down is that it doesn't explain why the host with the
> Primary roles was able to complete the exact same re-sync in 1/3 of the time.
> 
> The DRBD devices were in sync when these mdadm re-syncs were running, so any
> traffic to the DRBD devices should have had the same disruptive effect on
> *both* hosts.
> 
> Is there something different about the host that has DRBD devices in Secondary
> roles vs Primary roles?  Protocol C, in case that helps.

I don't think so.
double check that the io scheduler settings are the same on both hosts.
if you feel like it, you can experiment with swapped DRBD roles,
and see if the faster md recovery moves with the DRBD role,
or sticks on the host.
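
for example (sd* and r0 below are placeholders, use your actual devices
and resource name):

  # compare the io schedulers on both hosts
  cat /sys/block/sd*/queue/scheduler

  # swap roles: on the current Primary, after stopping whatever uses the device
  drbdadm secondary r0
  # and on the peer
  drbdadm primary r0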

but I think the most effective way is to just tune
the md sync_speed_min to something sensible.
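
e.g. (md0 is just an example, pick your actual array):

  # guaranteed minimum rebuild speed for that array, in KB/s
  echo 50000 > /sys/block/md0/md/sync_speed_min
  # or system wide
  echo 50000 > /proc/sys/dev/raid/speed_limit_min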

> >> and a pretty disturbing issue with resizing an LV/DRBD
> >> device/Filesystem with flexible-meta-disk: internal,
> 
> > care to give details?
> 
> I posted to the drbd-users list on Tue Nov 11 08:14:37 CET 2008, but got no
> response.  I ended up having to try to salvage the data manually (almost 170GB
> worth) and recreate the DRBD devices.  Quite a bummer.  I was hoping the
> internal metadata would make Xen live migration easier, but if it causes the
> resize to fail, that's not so good.  I've switched the new DRBD device back to
> an external metadata device and will test with it.

ok.
well, I can just say "works for me".
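
for reference, the sequence I would expect to work with internal meta data
(all names below are examples only, adjust to your setup):

  # grow the backing LV on both nodes
  lvextend -L +50G /dev/vg0/lv_r0
  # let DRBD pick up the new size of the backing devices
  # (resource connected, run on the Primary)
  drbdadm resize r0
  # then grow the file system, on the Primary
  resize2fs /dev/drbd0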

> For a visual of the basic layout of our setup, see
> http://skylab.org/~mush/Xen_Server_Architecture.png

I see.
you should definitely also put your swap on an md raid1.
if you have something in swap and one of your disks fails, you are
screwed anyway, even though you have your other file systems on raid1.
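
something like (disk partitions and md number are examples only):

  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
  mkswap /dev/md3
  swapon /dev/md3
  # and point the swap entry in /etc/fstab at /dev/md3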

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


