[DRBD-user] umount /drbdpart takes >50 seconds
Harald Dunkel
harald.dunkel at aixigo.de
Wed Dec 12 10:16:09 CET 2018
Hi folks,
using DRBD, unmounting /data1 takes more than 50 seconds, even though the
file system (ext4, noatime, otherwise default mount options) hadn't been
accessed for more than 2 hours. umount ran at 100% CPU load:
# sync
# time umount /data1
real 0m52.772s
user 0m0.000s
sys 0m52.740s
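Since nearly all of the elapsed time is system time, I guess sampling the
kernel stack of the running umount would show where it spins. A rough
sketch (assuming no other umount is running, so pidof picks the right
process, and that the kernel exposes /proc/<pid>/stack):

# umount /data1 &
# sleep 10
# cat /proc/$(pidof umount)/stack

Repeating the cat a few times during the 52 seconds gives a crude profile
of where the time goes.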
Either way, 52 seconds appears to be a pretty long time. I am concerned
that there is some data sitting in a buffer that gets flushed only at
umount time.
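If buffered data were the cause, the dirty page counters should be
non-zero right before the umount. A quick check, using only the counters
in /proc/meminfo:

# sync
# grep -E '^(Dirty|Writeback):' /proc/meminfo
# time umount /data1

Note that 52 seconds of pure sys time at 100% CPU looks more like
CPU-bound kernel work than waiting for writeback; blocked I/O would
normally show up as elapsed (real) time rather than sys time.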
Kernel is version 4.18.0-0.bpo.1-amd64 on Debian Stretch; drbd-utils
is 8.9.10-2. drbd.conf is attached. The bond2 interface used for
drbd synchronization is based upon 2 * 10 Gbit/sec NICs.
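For completeness, the resource state at umount time can be captured as
well; with the in-tree 8.4 kernel module the status lives in /proc/drbd
(under DRBD 9 it would be "drbdadm status" instead):

# cat /proc/drbd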
Any insightful comment is highly appreciated.
Regards
Harri
-------------- next part --------------
#
# see http://www.drbd.org/users-guide-8.4/re-drbdconf.html
# https://www.linbit.com/en/drbd-sync-rate-controller-2/
#
# /etc/hosts
# ~~~~~~~~~~~
# br0 ("external" network interface)
# 192.168.96.184 srvl060a.example.com srvl060a
# 192.168.96.185 srvl060b.example.com srvl060b
#
# bond2 ("internal" network interface, used for drbd synchronization)
# 10.0.0.2 srvl060a.internal
# 10.0.0.3 srvl060b.internal
#
common {
    disk {
        # on-io-error detach; # continue in diskless mode (default)
        fencing resource-only; # use fence-peer handler
        resync-rate 512M; # synchronization rate over a dedicated line (for each disk!)
        c-plan-ahead 0;
        al-extents 1237; # activity log extents
    }
    net {
        protocol C;
        max-buffers 32k;
        max-epoch-size 20000;
        sndbuf-size 256k;
        rcvbuf-size 512k;
        # allow-two-primaries;
        cram-hmac-alg sha256;
        # generated using "openssl rand -base64 30":
        shared-secret "YTbwx3QGiBf2xDrWN+9VppvsrOTXFKRy67x3FAu0";
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
    startup {
        wfc-timeout 300;
        degr-wfc-timeout 30;
        outdated-wfc-timeout 30;
    }
    handlers {
        pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
        pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
        fence-peer "/usr/lib/drbd/outdate-peer.sh on srvl060a.example.com 192.168.96.185 10.0.0.3 on srvl060b.example.com 192.168.96.184 10.0.0.2";
        local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
        initial-split-brain "/usr/lib/drbd/notify-split-brain.sh";
        split-brain "/usr/lib/drbd/notify-split-brain.sh";
    }
}
resource data1 {
    device /dev/drbd1;
    disk /dev/disk/by-partlabel/data1;
    meta-disk internal;
    on srvl060a.example.com {
        address 10.0.0.2:7788;
    }
    on srvl060b.example.com {
        address 10.0.0.3:7788;
    }
    startup {
        become-primary-on srvl060a.example.com;
    }
}
resource data2 {
    device /dev/drbd2;
    disk /dev/disk/by-partlabel/data2;
    meta-disk internal;
    on srvl060a.example.com {
        address 10.0.0.2:7789;
    }
    on srvl060b.example.com {
        address 10.0.0.3:7789;
    }
    startup {
        become-primary-on srvl060b.example.com;
    }
}