Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
----- "Lars Ellenberg" <lars.ellenberg at linbit.com> schrieb:
> On Tue, Dec 16, 2008 at 08:23:39PM +0000, Rudolph Bott wrote:
> > Hi List,
> >
> > I was wondering if anyone might be able to share some performance
> > information about their DRBD setup. Ours runs on the following
> > hardware:
> >
> > Hardware: Xeon QuadCore CPU, 2GB RAM, Intel mainboard with 2 onboard
> > e1000 NICs and one additional one plugged into a regular PCI slot,
> > 3ware 9650SE (PCI-Express) with 4 S-ATA disks in a RAID-10 array
> >
> > Software: Ubuntu Hardy LTS with DRBD 8.0.11 (from the Ubuntu
> > repository), kernel 2.6.24
> >
> > One NIC acts as the "management interface", one as the DRBD link,
> > and one as the heartbeat interface. On top of DRBD runs LVM to allow
> > the creation of volumes (which are in turn exported via iSCSI).
> > Everything seems to run smoothly - but I'm not quite satisfied with
> > the write speed available on the DRBD device (locally; I don't care
> > about the iSCSI part yet).
> >
> > All tests were done with dd (either copying from /dev/zero or to
> > /dev/null with 1, 2 or 4GB sized files). Reading gives me speeds of
> > around 390MB/sec, which is way more than enough - but writing does
> > not exceed 39MB/sec. Direct writes to the RAID controller (without
> > DRBD) are at around 95MB/sec, which is still below the limit of
> > Gigabit Ethernet. I spent the whole day tweaking various aspects
> > (block-device tuning, TCP offload settings, DRBD net settings, etc.)
> > and managed to raise the write speed from initially 25MB/sec to
> > 39MB/sec that way.
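For concreteness, "block-device tuning" and "TCP offload settings"
usually refer to knobs like the ones below. This is a generic sketch:
eth1 and sda are placeholder names, none of the values come from this
thread, and each change is best benchmarked in isolation:

    ethtool -k eth1                                 # show current offload settings
    ethtool -K eth1 tso off                         # e.g. toggle TCP segmentation offload
    cat /sys/block/sda/queue/scheduler              # show the active I/O scheduler
    echo deadline > /sys/block/sda/queue/scheduler  # switch the scheduler
    echo 512 > /sys/block/sda/queue/nr_requests     # deepen the request queue
    blockdev --setra 4096 /dev/sda                  # read-ahead, in 512-byte sectors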
> >
> > Any suggestions as to what happens to the missing ~50-60MB/sec that
> > the 3ware controller is able to handle? Do you think the PCI bus is
> > "overtasked"? Would it be enough to simply replace the onboard NICs
> > with an additional PCI-Express card, or do you think the limit is
> > elsewhere (DRBD settings, options set in the default distro kernel,
> > etc.)?
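For reference, the bus arithmetic behind the "overtasked PCI" question
(standard figures, not measurements from this machine): conventional
32-bit/33MHz PCI tops out at roughly 133MB/sec theoretical, shared by
every device on that bus, with 90-110MB/sec achievable in practice;
Gigabit Ethernet at wire speed moves about 117MB/sec; a single
PCI-Express lane carries about 250MB/sec per direction. A GigE NIC in a
legacy PCI slot can therefore saturate the shared bus on its own. Which
bus each NIC actually sits on can be checked with lspci:

    lspci | grep -i ethernet   # list NICs with their bus addresses
    lspci -vv -s 02:00.0       # 02:00.0 is a placeholder address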
>
> drbdadm dump all
common {
    syncer {
        rate 100M;
    }
}

resource storage {
    protocol C;
    on nas03 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   172.16.15.3:7788;
        meta-disk internal;
    }
    on nas04 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   172.16.15.4:7788;
        meta-disk internal;
    }
    net {
        unplug-watermark 1024;
        after-sb-0pri    disconnect;
        after-sb-1pri    disconnect;
        after-sb-2pri    disconnect;
        rr-conflict      disconnect;
    }
    disk {
        on-io-error detach;
    }
    syncer {
        rate       100M;
        al-extents 257;
    }
    startup {
        wfc-timeout      20;
        degr-wfc-timeout 120;
    }
    handlers {
        pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
        pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
        local-io-error    "echo o > /proc/sysrq-trigger ; halt -f";
    }
}
> drbdsetup /dev/drbd0 show
disk {
    size             0s _is_default; # bytes
    on-io-error      detach;
    fencing          dont-care _is_default;
}
net {
    timeout          60 _is_default; # 1/10 seconds
    max-epoch-size   2048 _is_default;
    max-buffers      2048 _is_default;
    unplug-watermark 1024;
    connect-int      10 _is_default; # seconds
    ping-int         10 _is_default; # seconds
    sndbuf-size      131070 _is_default; # bytes
    ko-count         0 _is_default;
    after-sb-0pri    disconnect _is_default;
    after-sb-1pri    disconnect _is_default;
    after-sb-2pri    disconnect _is_default;
    rr-conflict      disconnect _is_default;
    ping-timeout     5 _is_default; # 1/10 seconds
}
syncer {
    rate             102400k; # bytes/second
    after            -1 _is_default;
    al-extents       257;
}
protocol C;
_this_host {
    device           "/dev/drbd0";
    disk             "/dev/sda3";
    meta-disk        internal;
    address          172.16.15.3:7788;
}
_remote_host {
    address          172.16.15.4:7788;
}
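Worth noting in the output above: max-buffers, max-epoch-size and
sndbuf-size are all still marked _is_default, and al-extents sits at a
modest 257. A common first round of DRBD 8.0 write tuning raises these
values; the numbers below are typical starting points from tuning
guides, not values tested on this setup:

    net {
        max-buffers    8000;
        max-epoch-size 8000;
        sndbuf-size    524288;  # bytes
    }
    syncer {
        al-extents     3389;    # fewer metadata updates under scattered writes
    }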
>
> what exactly does your micro benchmark look like?
dd if=/dev/zero of=/mnt/testfile bs=1M count=2048
dd if=/mnt/testfile of=/dev/null
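A caveat on this micro-benchmark: both runs go through the page cache,
so the read figure largely measures RAM and the write figure depends on
when dirty pages are flushed. A variant that ties the numbers to the
actual device (GNU dd options; same paths as above):

    # include the final flush in the timing (or use oflag=direct):
    dd if=/dev/zero of=/mnt/testfile bs=1M count=2048 conv=fdatasync
    # drop the cache so the read really hits the disk:
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/testfile of=/dev/null bs=1M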
>
> how do "StandAlone" and "Connected" drbd compare?
Standalone:
root@nas03:/mnt# dd if=/dev/zero of=/mnt/testfile bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2,1 GB) copied, 54,1473 s, 39,7 MB/s
Connected:
root@nas03:/mnt# dd if=/dev/zero of=/mnt/testfile bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2,1 GB) copied, 60,1652 s, 35,7 MB/s
>
> what throughput does the drbd resync achieve?
~ 63MB/sec
Hmm... taking the information above into account, I would say maybe LVM is the bottleneck? The speed comparison to local writes (achieving ~95MB/sec) was done on the root fs, which sits directly on the sda device, not on top of LVM.
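One way to test that hypothesis is to benchmark each layer of the stack
separately. A sketch with made-up VG/LV names (vg0, testlv); note that
only read tests are safe against /dev/sda3 - writing to the backing
partition directly would clobber DRBD's internal metadata:

    dd if=/dev/sda3 of=/dev/null bs=1M count=2048 iflag=direct        # raw partition (reads only!)
    dd if=/dev/drbd0 of=/dev/null bs=1M count=2048 iflag=direct       # DRBD layer
    dd if=/dev/vg0/testlv of=/dev/null bs=1M count=2048 iflag=direct  # LVM on top of DRBD
    dd if=/dev/zero of=/dev/vg0/testlv bs=1M count=2048 oflag=direct  # write test, scratch LV only

If the numbers drop sharply between /dev/drbd0 and the LV, LVM (or its
extent alignment) is the likely culprit.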
>
> --
> : Lars Ellenberg
> : LINBIT | Your Way to High Availability
> : DRBD/HA support and consulting http://www.linbit.com
>
> DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
> __
> please don't Cc me, but send to list -- I'm subscribed