[DRBD-user] MySQL-over-DRBD Performance

Carlos Xavier cbastos at connection.com.br
Wed Jan 23 20:00:31 CET 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


I'm sorry for the long delay in answering; I was on vacation.

I have two clusters running: one on Dell PowerEdge SC 1435 servers with the
BCM5785 controller, and another on Dell PowerEdge 1900 servers with the
SAS1068 controller. Both systems use WDC WD2500JS SATA disks.
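
One thing worth checking on SATA disks like these (just a sketch; /dev/sda
is an example device name): whether the drive write cache is enabled, since
oflag=dsync results depend heavily on it.

hdparm -W /dev/sda    # show the write-cache state (1 = on, 0 = off)

With the cache off, every synchronous 4k write waits for the platter, which
by itself can hold throughput to very low MB/s figures.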


----- Original Message ----- 
From: "Art Age Software" <artagesw at gmail.com>
To: <drbd-user at linbit.com>
Sent: Friday, December 21, 2007 6:05 PM
Subject: Re: [DRBD-user] MySQL-over-DRBD Performance


> Well, at least you are getting much better performance than I am getting.
>
> I don't understand why even my local write performance is so much
> worse than yours. What sort of disk subsystem are you using?
>
> On Dec 21, 2007 11:52 AM, Carlos Xavier <cbastos at connection.com.br> wrote:
>> Hi,
>> I have been following this thread since I want to do a very similar
>> configuration.
>>
>> The system is running on Dell 1435SC servers, each with two dual-core
>> AMD Opterons and 4 GB of RAM.
>> The network cards are:
>> 01:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express (rev 21)
>> 02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express (rev 21)
>> 06:00.0 Ethernet controller: Intel Corporation 82572EI Gigabit Ethernet Controller (Copper) (rev 06)
>>
>> Right now it is running OCFS2 over DRBD; we don't have the MySQL database
>> on it yet. I ran the commands below to measure write throughput to the
>> disk. As you can see, when DRBD is up and connected the throughput falls
>> to a little less than half of what we get with it disconnected.
>>
>> DRBD and OCFS2 cluster connected
>>
>> root at apolo1:~# dd if=/dev/zero bs=4096 count=10000 of=/clusterdisk/testfile oflag=dsync
>> 10000+0 records in
>> 10000+0 records out
>> 40960000 bytes (41 MB) copied, 3.89017 s, 10.5 MB/s
>>
>>
>> DRBD connected and OCFS2 remote disconnected
>> root at apolo1:~# dd if=/dev/zero bs=4096 count=10000 of=/clusterdisk/testfile oflag=dsync
>> 10000+0 records in
>> 10000+0 records out
>> 40960000 bytes (41 MB) copied, 3.65195 s, 11.2 MB/s
>>
>> DRBD remote stopped and OCFS2 local mounted
>> root at apolo1:~# dd if=/dev/zero bs=4096 count=10000 of=/clusterdisk/testfile oflag=dsync
>> 10000+0 records in
>> 10000+0 records out
>> 40960000 bytes (41 MB) copied, 1.50187 s, 27.3 MB/s
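>>
>> For comparison, a streaming variant of the same test (just a sketch; it
>> writes the same test file but syncs only once at the end instead of after
>> every 4k block) would be:
>>
>> root at apolo1:~# dd if=/dev/zero bs=1M count=100 of=/clusterdisk/testfile conv=fsync
>>
>> The oflag=dsync runs above pay the full replication latency on every
>> single 4k write, so they will always sit well below the streaming rate.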
>>
>> Regards,
>> Carlos.
>>
>>
>>
>> ----- Original Message -----
>> From: "Art Age Software" <artagesw at gmail.com>
>> To: <drbd-user at linbit.com>
>> Sent: Thursday, December 20, 2007 7:35 PM
>> Subject: Re: [DRBD-user] MySQL-over-DRBD Performance
>>
>>
>> > On Dec 20, 2007 1:01 PM, Lars Ellenberg <lars.ellenberg at linbit.com> 
>> > wrote:
>> >>
>> >> On Thu, Dec 20, 2007 at 11:08:56AM -0800, Art Age Software wrote:
>> >> > On Dec 20, 2007 3:05 AM, Lars Ellenberg <lars.ellenberg at linbit.com>
>> >> > wrote:
>> >> > > On Wed, Dec 19, 2007 at 04:41:37PM -0800, Art Age Software wrote:
>> >> > > > I have run some additional tests:
>> >> > > >
>> >> > > > 1) Disabled bonding on the network interfaces (both nodes). No
>> >> > > > significant change.
>> >> > > >
>> >> > > > 2) Changed the DRBD communication interface. Was using a direct
>> >> > > > crossover connection between the on-board NICs of the servers. I
>> >> > > > switched to Intel Gigabit NIC cards in both machines, connecting
>> >> > > > through a Gigabit switch. No significant change.
>> >> > > >
>> >> > > > 3) Ran a file copy from node1 to node2 via scp. Even with the
>> >> > > > additional overhead of scp, I get a solid 65 MB/sec. throughput.
>> >> > >
>> >> > > this is streaming.
>> >> > > completely different from what we measured below.
>> >> > >
>> >> > > > So, at this stage I have seemingly ruled out:
>> >> > > >
>> >> > > > 1) Slow IO subsystem (measured on both machines; both check out fine).
>> >> > > >
>> >> > > > 2) Bonding driver (additional latency)
>> >> > > >
>> >> > > > 3) On-board NICs (hardware/firmware problem)
>> >> > > >
>> >> > > > 4) Network copy speed.
>> >> > > >
>> >> > > > What's left? I'm stumped as to why DRBD can only do about 3.5
>> >> > > > MB/sec. on this very fast hardware.
>> >> > >
>> >> > > doing one-by-one synchronous 4k writes, which are latency bound.
>> >> > > if you do streaming writes, it probably gets up to your 65 MB/sec
>> >> > > again.
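>> >> > >
>> >> > > a rough sketch of the arithmetic: 3.5 MB/s in 4 KiB synchronous
>> >> > > writes is roughly 850-900 writes per second, i.e. a bit over 1 ms
>> >> > > per write, and that millisecond has to cover the network round
>> >> > > trip plus the write on both disks. streaming writes hide this
>> >> > > latency by keeping many writes in flight at once.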
>> >> >
>> >> > Ok, but we have tested that with and without DRBD using the dd command,
>> >> > right? So at this point, by all tests performed so far, it looks 
>> >> > like
>> >> > DRBD is the bottleneck. What other tests can I perform that can say
>> >> > otherwise?
>> >>
>> >> sure.
>> >> but comparing 3.5 (with drbd) against 13.5 (without drbd) is bad 
>> >> enough,
>> >> no need to now compare it with some streaming number (65) to make it
>> >> look _really_ bad ;-)
>> >
>> > Sorry, my intent was not to make DRBD look bad. I think DRBD is
>> > **fantastic** and I just want to get it working properly. My point in
>> > trying the streaming test was simply to make sure that there was
>> > nothing totally broken on the network side. I suppose I should also
>> > try a streaming test to the DRBD device and compare that to the raw
>> > streaming number. And, back to my last question: What other tests can
>> > I perform at this point to narrow down the source of the (latency?)
>> > problem?
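>> >
>> > Two tests I can think of (a sketch; the resource name, peer IP and test
>> > path are placeholders):
>> >
>> > # raw round-trip latency on the replication link (flood ping needs root)
>> > ping -f -c 10000 <peer-ip>
>> >
>> > # repeat the dsync dd with DRBD disconnected, so that only local disk
>> > # latency remains
>> > drbdadm disconnect <resource>
>> > dd if=/dev/zero bs=4096 count=10000 of=/path/on/drbd/testfile oflag=dsync
>> >
>> > If the disconnected number jumps back up and the ping round trip is a
>> > large fraction of a millisecond, the latency is on the network path.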
>>