[DRBD-user] The effects of duplex mismatch on a cluster running drbd

Lars Ellenberg Lars.Ellenberg at linbit.com
Fri Jul 15 10:29:13 CEST 2005

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


/ 2005-07-15 02:44:14 -0400
\ Maurice Volaski:
> I have a pair of servers running drbd to do mirroring. The primary
> is using dual bonded gigabit. The secondary didn't have gigabit yet,
> and the 100 Mbit link it was running was incorrectly set to half duplex.
> 
> Heartbeat dropped packets. That seems expected.
> Drbd was continually dropping the connection. That seems expected.
> But here's something interesting: it affected services on the primary.
> Both netatalk and smb clients were dropping connections frequently.
> I'm assuming this is due to the way protocol C works. IO isn't flagged
> as done until the secondary acknowledges it. And a duplex mismatch 
> majorly impacts network operation.
> 
> This in effect means that anything that impacts IO on the secondary
> impacts the primary's functionality. They're like Siamese computers
> :-)

not in all detail: if io on the secondary is completely broken, and you
have a ko-count configured, the primary will go into "StandAlone" mode
once it has failed to deliver data for that many drbd timeout/ping cycles.
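
for illustration, a minimal sketch of the relevant drbd.conf pieces,
including the protocol C setting discussed above (the resource name and
all values here are made-up examples, not recommendations -- check the
drbd.conf man page for your version):

  resource r0 {
    protocol C;        # a write only completes once the peer acks it
    net {
      timeout   60;    # unit is 0.1 seconds, i.e. 6 seconds
      ko-count   4;    # after 4 missed timeout cycles in a row,
                       # drop the peer and go StandAlone
    }
  }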

sure, there is still room for improvement in this area: more
fine-grained configuration options for deciding why and when the
secondary should be considered broken.

but hey, you should have some monitoring set up, and if throughput
is really bad and load on the primary goes through the roof, you can
always disconnect by hand, and then fix your secondary...
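
for example (assuming the resource is named r0 and the mis-negotiated
nic on the secondary is eth0 -- both are just placeholders):

  # on the primary: cut the replication link, keep serving io locally
  drbdadm disconnect r0

  # on the secondary: force the nic to 100 Mbit full duplex
  ethtool -s eth0 speed 100 duplex full autoneg off

  # on the primary: reconnect; drbd will resync the changed blocks
  drbdadm connect r0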

-- 
: Lars Ellenberg                                  Tel +43-1-8178292-0  :
: LINBIT Information Technologies GmbH            Fax +43-1-8178292-82 :
: Schoenbrunner Str. 244, A-1120 Vienna/Europe   http://www.linbit.com :
__
please use the "List-Reply" function of your email client.


