[DRBD-user] Slow Sync Performance

Roof, Morey R. MRoof at admin.nmt.edu
Mon May 12 20:43:26 CEST 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


My 34MB/s number is directly to the drive itself, i.e. /dev/sdb.  It is
a bit lower than I would expect, but the PERC 3 controllers are a bit
slow anyway.  I will look into it more, but I believe DRBD should be
able to do quite a bit better than the 3MB/s I am seeing.
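
For reference, the raw-write test I am talking about is something along
these lines (the block size and count are just examples, and of course
this overwrites whatever is on the device):

    # write 1GB straight to the array, bypassing the page cache
    dd if=/dev/zero of=/dev/sdb bs=1M count=1024 oflag=direct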


I pulled out the bonding driver and just used a crossover cable between
the machines and got the exact same speed numbers.  I verified with
ethtool that both of the links were up at 1000Mb/s full duplex, so I
don't see anything wrong there.  The network performance test was right
where it should be, so I am just not seeing a network problem.
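
The checks were along these lines (interface name and peer address are
only examples):

    ethtool eth0            # look for "Speed: 1000Mb/s" and "Duplex: Full" on both hosts
    iperf -s                # on one node
    iperf -c 192.168.1.2    # on the other node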


I am thinking about playing with live CDs of some different distros to
see what kinds of numbers I can get straight to the disk, but this
3MB/s is really odd and I can't think of any other reason why it is
doing this.
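
For reference, the DRBD knobs I mentioned trying (al-extents,
sndbuf-size, max-buffers, max-epoch-size, unplug-watermark) sit in
drbd.conf roughly like this; the values below are just placeholders,
not a recommendation:

    resource r0 {
      net {
        sndbuf-size      512k;
        max-buffers      2048;
        max-epoch-size   2048;
        unplug-watermark 128;
      }
      syncer {
        al-extents 257;
      }
    }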

-Morey 

-----Original Message-----
From: Tom Brown [mailto:tbrown at baremetal.com] 
Sent: Monday, May 12, 2008 12:05 PM
To: Roof, Morey R.
Subject: Re: [DRBD-user] Slow Sync Performance


(not on the list as I don't have any answers, just guesses...)

On Mon, 12 May 2008, Roof, Morey R. wrote:

> I have been working on a DRBD setup but I am getting really slow sync 
> performance and after digging around in the lists for a while I still 
> don't have any idea what is causing the problem so I was hoping 
> someone can get me some help.
>
> My setup consists of two Dell PowerEdge 2650 servers that are identical.
> I have a RAID1 with two 300GB 10K disks that I will be replicating 
> with DRBD to both machines.  I am using the built in PERC 3/di 
> (aacraid) to provide the RAID1.  The link is made from the two onboard
> gige NICs between the servers and it is using the bonding driver in
> active-failover.
>
> I have run iperf on the link between the servers and can get 75 - 98
> MBytes/sec without problems.
>
> Running a straight dd on the drives I am getting about 34MBytes/sec in write.

?? that's a funny number. Any modern cheapo desktop drive should be
able to sustain about 50 MByte/s, and a 10K RPM drive should be able to
run substantially faster than that, so where is your hardware losing
out? Or is this NOT "on the drives", but "through DRBD"?

> When I setup DRBD the initial syncs runs at an average of 3MBytes/sec.
> I have verified that DRBD is using the correct interfaces between the 
> machines and have tried changing the al-extents, sndbuf-size, 
> max-buffers, max-epoch-size, unplug-watermark and such as others have 
> mentioned on the list but it always runs at about 3MBytes/sec.

I'd try pulling out the "using the bonding driver in active-failover",
and just use one NIC on each end... not so much because it's desirable,
but because I like to debug the simplest configuration that works. I'd
also be checking ethtool -S eth0 on both ends, as well as the switch
statistics if there are any available... looking for any issues that
might be slowing down the network. I doubt you'll find any, but it's
easiest to check the simplest things first.
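
Something like this is what I have in mind (interface name is just an
example):

    # error/drop counters should stay at zero on both ends
    ethtool -S eth0 | grep -i -E 'err|drop|fifo'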

> The OS is CentOS 5.1 and I am starting to wonder if there might be 
> something odd in it.

ME TOO. But my problem seems to be almost the opposite: I have slow
write performance but can get relatively fast syncs, and if I take the
ext2fs layer out and write directly to /dev/drbdX I get wildly varying
results, some of them getting near what I'd hope for (80+ MByte/s) but
more normally in the 30-40 range.
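
By "write directly" I mean roughly this (device name, block size, and
count are just examples, and it clobbers whatever is on the DRBD
device):

    dd if=/dev/zero of=/dev/drbd0 bs=1M count=1024 oflag=direct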

I'd throw out the CentOS 5.1 kernel, but I want to have a
vendor-patched kernel in use since I'm using Xen, and the stock Xen
dom0 patches from XenSource absolutely suck, as they are against an
archaic kernel. (Then again, I might get away with using a homebuilt
kernel for dom0 and the vendor kernels for the guests...)

-Tom



