Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
I have two new computers mirrored with DRBD.
Primary:
  1 Opteron 265 HE
  2 GB RAM (PC3200)
  4 Seagate 750 GB SATA drives
  PCIe Areca 1220 card (RAID 6)
  built-in Broadcom gigabit Ethernet with MTU of 9000
  64-bit kernel 2.6.18-gentoo-r1 with the Areca driver, I/O scheduler set to deadline

Secondary:
  1 Opteron 240 EE
  1 GB RAM (PC2700)
  3 Seagate 750 GB SATA drives
  PCIe Areca 1210 card (RAID 5)
  built-in Broadcom gigabit Ethernet with MTU of 9000
  64-bit kernel 2.6.18-gentoo-r1 with the Areca driver, I/O scheduler set to deadline
The computers are connected via a Netgear GS724, with its jumbo
frames setting turned on.
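To rule out an MTU mismatch somewhere along the path (switch
included), one quick check is a maximum-size unfragmented ping
between the two boxes; this is a sketch, with 10.0.0.2 standing in
for the peer's address:

# 9000-byte MTU minus 28 bytes of IP/ICMP headers = 8972-byte payload;
# -M do forbids fragmentation, so this fails if any hop has a smaller MTU
ping -M do -s 8972 -c 3 10.0.0.2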
The relevant sections of drbd.conf, identical on both computers, are:
net {
    timeout        60;
    connect-int    10;
    ping-int       10;
    ko-count       4;
    max-buffers    32768;
    max-epoch-size 2048;
    sndbuf-size    1M;
}
syncer {
    rate       100M;
    group      2;
    al-extents 257;
}
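For reference, resync progress and the effective rate show up in
/proc/drbd, and with the 0.7-style tools (this config uses the 0.7
"group" syntax) the rate can be changed on a running device:

# Watch resync progress and throughput once per second
watch -n1 cat /proc/drbd
# Temporarily override the resync rate for this device
drbdsetup /dev/drbd2 syncer -r 100M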
On a fresh sync of /dev/drbd2, which is 1.4 TB, I'm seeing an average
of 20.3 MB per second. When I test with ttcp, I routinely see about
64 MB per second. Write tests with dd if=/dev/zero
of=/maurice/bonnie/hello bs=400M count=10 show a write speed on the
secondary of about 160 MB per second, so the secondary has no obvious
I/O bottleneck.
So shouldn't DRBD be much faster than it actually is?
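At 20.3 MB per second, syncing 1.4 TB takes roughly 19 hours; at the
64 MB per second ttcp reports, it would be closer to 6. To be sure
the 160 MB figure isn't inflated by the page cache, I could rerun the
same test with a flush included, e.g.:

# Time the write plus a final sync, so cached data is
# forced to disk before the clock stops
time sh -c "dd if=/dev/zero of=/maurice/bonnie/hello bs=400M count=10 && sync"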
I also notice that the secondary is often unresponsive at the command
line, and I see messages like this in the log of the primary:
drbd2: [drbd2_worker/9750] sock_sendmsg time expired, ko = 3
If I read the settings right, timeout 60 is 6 seconds (the value is
in tenths of a second), and with ko-count 4 the primary will drop the
connection after four such expirations, so this counter is already
counting down.
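To see whether the secondary is actually saturated when this happens,
I could watch it with something like the following (iostat comes from
the sysstat package):

# One-second samples: CPU, I/O wait, and per-device utilization
vmstat 1
iostat -x 1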
--
Maurice Volaski, mvolaski at aecom.yu.edu
Computing Support, Rose F. Kennedy Center
Albert Einstein College of Medicine of Yeshiva University