On 09.04.2007 22:51, Lars Ellenberg wrote:
> the drbd network protocol is "clean",
> in the sense of well defined for endianness and data type sizes.
> it is no problem mixing different architectures on the nodes.

I upgraded one of the two boxes in the cluster from x86 to EM64T, keeping
all of the configuration. While disconnected performance is quite similar,
as soon as I connect the two, performance is horrible. Horrible means I
cannot get even 10% of what I had before. DRBD overhead used to be
negligible (under 5%); now it is unacceptable.

Both machines show almost no CPU usage and low disk I/O. What are they
waiting for? How can I debug that?

The communication path between the two seems as good as always:

iperf: [ 3]  0.0-10.0 sec  2.29 GBytes  1.97 Gbits/sec  (2 bonded Intel Gbit NICs)

Thanks in advance.

--
Regards,
H.D.
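For reference, here is a rough sketch of the checks I have been running on each node (the peer hostname and exact tool availability are placeholders; /proc/drbd is the standard DRBD status interface, and the commands are guarded so they degrade gracefully where a tool is missing):

```shell
#!/bin/sh
# Sketch of per-node diagnostics; adjust hostnames/intervals to taste.

# 1. DRBD connection state, sync progress, and send/receive counters:
cat /proc/drbd 2>/dev/null || echo "/proc/drbd not available on this host"

# 2. Raw network throughput between the nodes
#    (run 'iperf -s' on the peer first; 'peer-host' is a placeholder):
# iperf -c peer-host -t 10

# 3. Is the box CPU-bound, disk-bound, or just waiting?
command -v vmstat >/dev/null 2>&1 && vmstat 1 2 || echo "vmstat not installed"
```

Watching /proc/drbd while writing to the device should at least show whether the counters move slowly (network/protocol stall) or in bursts (disk or barrier stalls).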