Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hardware RAID-10. There is no problem with the disks. We have measured
raw I/O performance through the RAID on both nodes.

On Dec 19, 2007 6:16 PM, Matteo Tescione <matteo at rmnet.it> wrote:
> Sorry if already asked, but are you using hardware RAID or software
> RAID? If so, is it RAID 5/6? I discovered a huge hole in performance,
> like the one you report, when using that kind of setup. Search the
> list for previous posts on how that performance problem was solved.
>
> Regards,
>
> --
> #Matteo Tescione
> #RMnet srl
>
> On 20-12-2007 1:41, "Art Age Software" <artagesw at gmail.com> wrote:
>
> > I have run some additional tests:
> >
> > 1) Disabled bonding on the network interfaces (both nodes). No
> >    significant change.
> >
> > 2) Changed the DRBD communication interface. Was using a direct
> >    crossover connection between the on-board NICs of the servers. I
> >    switched to Intel Gigabit NIC cards in both machines, connecting
> >    through a Gigabit switch. No significant change.
> >
> > 3) Ran a file copy from node1 to node2 via scp. Even with the
> >    additional overhead of scp, I get a solid 65 MB/sec. throughput.
> >
> > So, at this stage I have seemingly ruled out:
> >
> > 1) Slow I/O subsystem (both machines measured and check out fine).
> >
> > 2) Bonding driver (additional latency).
> >
> > 3) On-board NICs (hardware/firmware problem).
> >
> > 4) Network copy speed.
> >
> > What's left? I'm stumped as to why DRBD can only do about 3.5 MB/sec.
> > on this very fast hardware.
> >
> > Sam
>
> > _______________________________________________
> > drbd-user mailing list
> > drbd-user at lists.linbit.com
> > http://lists.linbit.com/mailman/listinfo/drbd-user
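
[For reference: the baseline measurements described above (raw RAID
throughput, scp copy speed between the nodes) are typically taken with
commands along the following lines. This is only a sketch; the device
names, hostnames, and sizes are placeholders, not values from this
thread.]

    # Raw sequential write/read against the backing device, bypassing
    # the page cache. WARNING: the write test overwrites the device;
    # only run it on a scratch device or partition.
    dd if=/dev/zero of=/dev/sdX bs=1M count=2048 oflag=direct
    dd if=/dev/sdX of=/dev/null bs=1M count=2048 iflag=direct

    # Network throughput between the nodes, as in test 3 above.
    dd if=/dev/zero of=/tmp/testfile bs=1M count=1024
    scp /tmp/testfile node2:/tmp/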
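
[Likewise, the replication speed DRBD is actually achieving, and the
parameters the device is running with, can be inspected roughly as
shown below. This assumes DRBD 8.0-era tools (current when this thread
was written); /dev/drbd0 is a placeholder for the device in question.]

    # Connection state, sync progress, and current throughput.
    cat /proc/drbd

    # Parameters the running device was configured with.
    drbdsetup /dev/drbd0 show

    # Configuration drbdadm would apply (protocol, syncer rate, etc.).
    drbdadm dump all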