Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
I'm trying to track down some poor performance on my Proxmox cluster. This
is with DRBD 8.3.13.
We have two servers, each with an Adaptec 6405 with the solid-state flash
module, and four 2TB drives set up in a RAID 10.
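
If the logical device details are useful, they're available from the
controller as well:

# arcconf GETCONFIG 1 LD

That should list the RAID 10 array's stripe size along with its read/write
cache settings; I can paste the full output if it would help.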
I'm seeing:
# pvscan
  PV /dev/md1     VG pve    lvm2 [232.38 GiB / 16.00 GiB free]
  PV /dev/drbd0   VG data   lvm2 [3.63 TiB / 3.48 TiB free]
# lvcreate -n test -L 10G data
# mkfs.ext3 /dev/data/test
# mount /dev/data/test /mnt
# pveperf /mnt
CPU BOGOMIPS:      95995.32
REGEX/SECOND:      838290
HD SIZE:           9.84 GB (/dev/mapper/data-test)
BUFFERED READS:    477.27 MB/sec
AVERAGE SEEK TIME: 5.87 ms
FSYNCS/SECOND:     631.64
DNS EXT:           43.10 ms
DNS INT:           1.18 ms
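
As a rough cross-check of the fsync numbers outside of pveperf, I can time
a run of small synchronous writes to a scratch file on the same mount (the
file name and block count here are arbitrary):

# dd if=/dev/zero of=/mnt/ddtest bs=4k count=1000 oflag=dsync

At ~630 fsyncs/second that should take around 1.6 seconds; if the
controller's write-back cache were actually absorbing the flushes I'd
expect it to finish in well under a second.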
The fsyncs/second should be tremendously higher.
I've confirmed write-back cache is enabled:
# arcconf GETCONFIG 1 | grep cache
   Read-cache mode                  : Enabled
   Write-cache mode                 : Enabled (write-back)
   Write-cache setting              : Enabled (write-back)
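
From memory, the state of the flash backup module itself should show up in
the adapter section of the config dump:

# arcconf GETCONFIG 1 AD

I haven't pasted that here, and the exact field names vary by firmware, but
if the ZMCP/supercap were reporting a problem I'd expect the controller to
fall back to write-through, which the output above says it hasn't.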
After doing some research to track it down, I've since added this to my
DRBD config:
disk {
        no-disk-barrier;
        no-disk-flushes;
        no-md-flushes;
}
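
(In 8.3 those are simple flags inside the disk section, so the syntax above
should be all that's needed; to confirm the parser is picking them up,
something like

# drbdadm dump all | grep -A4 'disk {'

should echo them back.)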
However, after running drbdadm adjust (and a full cluster reboot), I'm
still seeing the same performance.
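
One thing I'm not certain of is whether the new flags actually took effect
at runtime rather than just in the config files; my understanding is that

# drbdsetup /dev/drbd0 show

should list no-disk-barrier, no-disk-flushes, and no-md-flushes in its disk
section if they did.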
Any suggestions for tracking down the cause of this?
Thanks,
Andy
---
Andy Dills
Xecunet, Inc.
www.xecu.net
301-682-9972
---