[DRBD-user] MySQL-over-DRBD Performance
matteo at rmnet.it
Thu Dec 20 04:21:16 CET 2007
Ok, can you show the output of iostat -x -m 1 during your DRBD test case
and, if possible, of your raw RAID subsystem?
Look at svctm, await and %util; they will all help you investigate further.
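For example, something like this run during the write test narrows the output to the devices of interest (a sketch; the device names sdb and drbd0 are placeholders, adjust them to your setup):

```shell
# Extended stats, megabytes, sampled every second; keep the header
# lines plus only the backing device and the DRBD device.
iostat -x -m 1 | awk 'NR<=3 || /sdb|drbd0/'
# Columns to watch:
#   await - average ms a request spends queued plus being serviced
#   svctm - average ms of pure service time per request at the device
#   %util - fraction of time the device was busy; near 100% = saturated
```

High await with low %util usually points at the replication link rather than the disks.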
On 20-12-2007 4:09, "Art Age Software" <artagesw at gmail.com> wrote:
> Hardware RAID-10. There is no problem with the disks. We have measured
> raw I/O performance through the RAID on both nodes.
> On Dec 19, 2007 6:16 PM, Matteo Tescione <matteo at rmnet.it> wrote:
>> Sorry if already asked, but are you using hardware RAID or software RAID?
>> If so, is it RAID 5/6? I found a huge performance hole, much like the one
>> you report, with that kind of setup. Search the list archives for previous
>> posts where similar performance problems were solved.
>> #Matteo Tescione
>> #RMnet srl
>> On 20-12-2007 1:41, "Art Age Software" <artagesw at gmail.com> wrote:
>>> I have run some additional tests:
>>> 1) Disabled bonding on the network interfaces (both nodes). No
>>> significant change.
>>> 2) Changed the DRBD communication interface. Was using a direct
>>> crossover connection between the on-board NICs of the servers. I
>>> switched to Intel Gigabit NIC cards in both machines, connecting
>>> through a Gigabit switch. No significant change.
>>> 3) Ran a file copy from node1 to node2 via scp. Even with the
>>> additional overhead of scp, I get a solid 65 MB/sec. throughput.
>>> So, at this stage I have seemingly ruled out:
>>> 1) Slow IO subsystem (both machines measured and check out fine).
>>> 2) Bonding driver (additional latency)
>>> 3) On-board NICs (hardware/firmware problem)
>>> 4) Network copy speed.
>>> What's left? I'm stumped as to why DRBD can only do about 3.5 MB/sec.
>>> on this very fast hardware.
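A quick way to see whether replication itself is the bottleneck is to compare direct writes to the backing device against writes through the DRBD device (a sketch; the device names are placeholders, and writing to a raw device destroys its contents, so only run this against a scratch partition):

```shell
# WARNING: overwrites the target devices -- scratch partitions only.
# oflag=direct bypasses the page cache so dd reports real device throughput.
dd if=/dev/zero of=/dev/sdb1  bs=1M count=1024 oflag=direct  # raw backing store
dd if=/dev/zero of=/dev/drbd0 bs=1M count=1024 oflag=direct  # through DRBD
```

If the first number matches your measured raw RAID speed but the second collapses to a few MB/sec, the problem sits in the DRBD path (protocol, flushes, or the replication link) rather than the disks.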
>>> drbd-user mailing list
>>> drbd-user at lists.linbit.com