Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi All,

Here is a problem I have encountered but haven't figured out the answer to yet. In my testing environment, I have ten DRBD devices configured as primary/primary. I do have cluster management software, but it is irrelevant to the topic. My client is running IOMeter with the queue depth set to 16 for each target. If I reboot one of the servers, all IOs fail over to its partner and no error is reported. However, during DRBD resync, it seems to me that DRBD dedicates all of its resources to resync traffic, which leaves new IOs from the client pending so long that they eventually time out.

Does DRBD have a mechanism to tune the aggressiveness of the resync operation? In other words, how can I tell DRBD to favor application-pending (ap) IO over resync IO, so that QoS is guaranteed from the client's point of view?

resource drbd10 {
    on FA33 {
        device    /dev/drbd10;
        disk      /dev/disk/by-id/scsi-360030480003ae2e0159207cc2a2ac9d4;
        address   192.168.251.1:7799;
        meta-disk internal;
    }
    on FA34 {
        device    /dev/drbd10;
        disk      /dev/disk/by-id/scsi-360030480003ae32015920a821ca7f075;
        address   192.168.251.2:7799;
        meta-disk internal;
    }
    net {
        allow-two-primaries;
        after-sb-0pri    discard-younger-primary;
        after-sb-1pri    discard-secondary;
        after-sb-2pri    violently-as0p;
        rr-conflict      violently;
        max-buffers      8000;
        max-epoch-size   8000;
        unplug-watermark 16;
        sndbuf-size      0;
    }
    syncer {
        rate       300M;
        verify-alg crc32c;
        al-extents 3800;
    }
    startup {
        become-primary-on both;
    }
    handlers {
        before-resync-target "/sbin/before_resync_target.sh";
        after-resync-target  "/sbin/after_resync_target.sh";
    }
}

Any help is appreciated.

Thanks
Ben

Commit yourself to constant self-improvement
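[For reference: DRBD 8.3.9 and later include a dynamic resync-rate controller, configured in the syncer section, that addresses exactly this trade-off. In particular, c-min-rate throttles resync down to the given floor whenever application IO is detected on the sync source, giving ap IO priority over resync IO. A minimal sketch with the controller enabled follows; the numeric values are illustrative assumptions, not tuned recommendations, so check drbd.conf(5) for your version:

    syncer {
        rate          300M;   # static rate; largely superseded once the controller is active
        c-plan-ahead  20;     # > 0 enables the dynamic controller (units of 0.1 s)
        c-fill-target 2M;     # amount of in-flight resync data the controller aims for
        c-max-rate    300M;   # upper bound on resync throughput
        c-min-rate    10M;    # resync is throttled to this floor while application IO is pending
        verify-alg    crc32c;
        al-extents    3800;
    }

With c-min-rate set well below the link capacity, an application load such as the IOMeter run above should keep most of the bandwidth during resync rather than timing out.]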