Hey guys,
I'm running into performance issues with DRBD 8.3.2-6 installed on
CentOS 5.4 running kernel 2.6.18-164.11.1. I'm trying to sync a disk
across a datacenter link with an average latency of 28.8 ms and a maximum
throughput of 135 Mb/s (measured by transferring a 2 GB file over SFTP).
Right now the resync averages only 2 MB/s:
###############################################
version: 8.3.2 (api:88/proto:86-90)
GIT-hash: dd7985327f146f33b86d4bff5ca8c94234ce840e build by
mockbuild at v20z-x86-64.home.local, 2009-08-29 14:07:55
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----
ns:11050876 nr:0 dw:0 dr:11059127 al:0 bm:674 lo:1 pe:4 ua:253 ap:0
ep:1 wo:b oos:2388363296
[>....................] sync'ed: 0.5% (2332384/2343176)M
finish: 362:48:56 speed: 1,824 (1,908) K/sec
###############################################
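As an aside, with a 28.8 ms round trip the amount of data that has to be in flight to fill a 135 Mb/s pipe is not trivial. A quick bandwidth-delay-product calculation with the numbers above (done in awk purely for illustration):

```shell
# Bandwidth-delay product for the link described above:
# 135 Mb/s * 28.8 ms RTT, expressed in bytes of in-flight data.
awk 'BEGIN {
    bw_bits_per_s = 135 * 1000 * 1000   # link bandwidth, bits/s
    rtt_s         = 0.0288              # round-trip time, seconds
    bdp_bytes     = bw_bits_per_s * rtt_s / 8
    printf "BDP: %.0f KB\n", bdp_bytes / 1024
}'
# prints "BDP: 475 KB"
```

If the TCP buffers (or DRBD's sndbuf-size) are smaller than that, the network alone will cap throughput well below the link rate, independent of disk speed.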
I didn't build these systems, but I'm contemplating rebuilding them so
I can change the RAID configuration to RAID 10; right now each is a
six-disk RAID 5.
The RAID card information from 'lspci':
###############################################
03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078
(rev 04)
###############################################
I've read several threads and tried numerous things, but I haven't had
any success or seen any change in speed or performance; it seems like
I'm pinned at 2 MB/s:
- tried sync protocols A, B and C
- set the I/O scheduler to deadline via /sys/block/sdb/queue/scheduler
- tried an external meta-disk on a RAM disk
- changing the syncer rate doesn't affect the speed at all
- tried no-disk-flushes and no-md-flushes
- tried several net settings, changing the buffer sizes
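For reference, these are the kinds of DRBD 8.3 knobs I've been varying; the values here are illustrative only, not recommendations that worked for me:

```
# drbd.conf fragment, DRBD 8.3-era tuning options mentioned above.
# All values are illustrative, not tested recommendations.
net {
    sndbuf-size      512k;  # TCP send buffer; matters on high-latency links
    max-buffers      8000;  # receive-side buffer pages
    max-epoch-size   8000;  # write requests per reorder domain
    unplug-watermark 16;    # how often to kick the lower-level device
}
syncer {
    rate       25M;
    al-extents 257;         # activity-log extents; affects metadata write load
}
```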
One interesting thing I did notice: I was initially configuring the
machines to back up each other's drives, e.g. host1:/dev/sdb1 ->
host2:/dev/sdb2 and host2:/dev/sdb1 -> host1:/dev/sdb2. While both DRBD
resyncs were running, each transferred at 2 MB/s, for a total of 4 MB/s;
when I disabled one of them, the other still ran at only 2 MB/s.
Here is my configuration.
drbd.conf
###############################################
global { usage-count no; }

resource r0 {
    protocol C;
    startup {
        wfc-timeout      0;
        degr-wfc-timeout 120;
    }
    disk {
        no-disk-flushes;
        no-md-flushes;
    }
    net {
        cram-hmac-alg "sha1";
        shared-secret "XXXXXX";
    }
    syncer { rate 25M; }
    on backup01.domain.com {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.70.21:7788;
        meta-disk /dev/ram0[0];
    }
    on backup02.domain.com {
        device    /dev/drbd0;
        disk      /dev/sdb2;
        address   192.168.70.22:7788;
        meta-disk /dev/ram[0];
    }
}
###############################################
I think it's I/O related: it looks like it's bottlenecking on I/O, and
I'm seriously considering rebuilding these systems with RAID 10. Do you
think that will solve my problem, or is something else wrong? Any input
you guys and gals can give me would be great. Here is the iostat output
from both machines.
Backup01 iostat -m
###############################################
avg-cpu: %user %nice %system %iowait %steal %idle
0.04 0.00 0.12 0.05 0.00 99.78
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 5.31 0.20 0.03 1425 182
sda1 0.70 0.05 0.00 375 0
sda2 0.93 0.07 0.00 496 4
sda3 1.29 0.05 0.00 365 25
sda4 0.00 0.00 0.00 0 0
sda5 0.26 0.00 0.01 1 59
sda6 1.07 0.01 0.01 90 41
sda7 0.75 0.01 0.01 92 51
sda8 0.01 0.00 0.00 0 0
sdb 46.61 1.84 0.00 12972 0
sdb1 45.23 1.83 0.00 12901 0
sdb2 1.29 0.01 0.00 70 0
drbd0 0.05 0.00 0.00 0 0
###############################################
Backup02
###############################################
avg-cpu: %user %nice %system %iowait %steal %idle
0.04 0.00 0.18 0.13 0.00 99.65
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 6.09 0.05 0.78 321 5530
sda1 0.02 0.00 0.00 0 0
sda2 0.31 0.01 0.00 41 3
sda3 0.79 0.02 0.00 119 32
sda4 0.00 0.00 0.00 0 0
sda5 0.35 0.00 0.01 1 62
sda6 1.01 0.02 0.00 110 30
sda7 3.60 0.01 0.76 46 5401
sda8 0.01 0.00 0.00 0 0
sdb 44.03 0.00 1.80 1 12794
sdb1 0.01 0.00 0.00 1 0
sdb2 44.02 0.00 1.80 0 12794
###############################################
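To separate DRBD from the backing store, it might be worth measuring the raw array speeds directly with dd; something like the following (the scratch path is a placeholder, adjust it before running, and don't write to the DRBD device while it's in use):

```shell
# Raw sequential write speed of the RAID 5 array, bypassing the page
# cache. /path/to/scratch is a placeholder for a scratch file on the
# array's filesystem.
dd if=/dev/zero of=/path/to/scratch bs=1M count=1024 oflag=direct

# Raw sequential read speed of the source's backing partition:
dd if=/dev/sdb1 of=/dev/null bs=1M count=1024 iflag=direct
```

If the write test on the RAID 5 target comes in far below the link speed, that would point at the array (and the RAID 5 write penalty) rather than DRBD or the network.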
Thanks in advance for your help.
Best Regards,
Dan Lavu