[DRBD-user] How does protocol A work across a WAN?

Chris de Vidal chris at devidal.tv
Tue Jul 3 21:19:14 CEST 2007

Thanks for the rapid reply!

--- Lars Ellenberg <lars.ellenberg at linbit.com> wrote:
> > Suppose then the primary node goes down before all of the recent changes could be copied to
> > the secondary node.
> > 
> > Would the only losses to the secondary node be those recent few minutes?  Or might there be
> > other, out-of-step changes that are lost as well?
> > 
> > Another way to ask the question: are changes run through a strictly first-in first-out (FIFO)
> > process or could they be sent across the wire in random order for optimization?  Is the order
> > of changes strictly in the order that they happened on the primary node?  Or is there some
> > reordering for performance or some other reason?
> 
> the image on the secondary is always consistent, regardless of when the
> connection is lost or the primary crashes (looks the same from this
> perspective: no more requests from Primary).
> (however, it is inconsistent during resynch, so keep that in mind.)
> 
> the only reordering taking place is the reordering in the local io stacks
> below drbd. this is restricted to happen within reorder domains which are
> bounded by "drbd barriers". drbd barriers are issued whenever the
> primary relays an io-completion event to the upper layers (file system).
> 
> rationale: if the "user" (file system) submits several write requests
> while some other write request is not yet completed, these are obviously
> independent and may be reordered. any write request submitted after some
> other request has been completed, however, may well be dependent on that
> very completion. it follows: reordering may take place, but only between
> completion events.
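If I'm reading that right, the rule could be modeled something like this (a toy Python sketch, not DRBD code; the event names and functions are all mine): writes submitted between two completion events share a "reorder domain" and may be shuffled among themselves, but no write is ever moved across a barrier, so the domains reach the secondary in FIFO order.

```python
import random

# Toy event stream as seen below DRBD: writes interleaved with barriers.
# A barrier stands for a completion event relayed up to the file system.
events = ["w1", "w2", "BARRIER", "w3", "BARRIER", "w4", "w5", "w6"]

def reorder_domains(stream):
    """Split the stream into reorder domains bounded by barriers."""
    domains, current = [], []
    for e in stream:
        if e == "BARRIER":
            domains.append(current)
            current = []
        else:
            current.append(e)
    domains.append(current)
    return domains

def legal_replay(stream):
    """One legal on-disk order: shuffle within each domain,
    but never move a write across a barrier."""
    out = []
    for domain in reorder_domains(stream):
        shuffled = domain[:]
        random.shuffle(shuffled)  # allowed: these writes are independent
        out.extend(shuffled)      # domains themselves stay in FIFO order
    return out

print(legal_replay(events))
```

So "w1" and "w2" may land in either order, but "w3" always lands after both of them and before "w4"/"w5"/"w6".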

I'm not sure I understand.  Forgive me, I've not done as much low-level/network programming as you
have; I'm just an admin.

Here's what I have in mind: database replication over slow, inexpensive WAN links.  For example,
T1s or even cable modem/DSL.

I've read lots of warnings recommending that I only use protocol C for databases, because if you
run protocol A you risk completing a transaction locally that doesn't complete on the remote copy.
If you have a local crash and bring up the remote copy, the transaction that was supposed to have
completed is actually rolled back.

I understand why that would be a Bad Thing (tm).

However, I don't see how this loss is any more significant than running snapshot backups on a
database.  If you can accept that kind of risk, then DRBD/protocol A is sufficient.


Say, for example, that a snapshot backup happens in the middle of a database transaction.  The
backup completes.  THEN the database transaction completes.

What happens if the database server catches fire?  You buy a new server, install the OS and the
database software.  Then you restore the snapshot backup you made.  Then, and this is the
important part, ***you start up the database service, which rolls back the most recent
transaction.  Oops, you lost your most recent transaction; it actually did complete, but you've
lost the record of it.***

What happens if the primary database server catches fire with DRBD/protocol A?  ***You start up
the database service, which rolls back the most recent transaction.  Oops, you lost your most
recent transaction; it actually did complete, but you've lost the record of it.***

See what I mean?  It seems to me that DRBD/protocol A carries exactly the same risk as with
snapshot backups.

With one significant advantage!  You save all the time spent buying the hardware, reinstalling the
OS and the database software, restoring the data, etc.  You just log in to the second server and
flip on the service.

It seems both snapshots and DRBD/protocol A run the same level of risk with databases.  With a
database you ideally want a cluster, but if you cannot afford one you must depend solely upon
snapshot backups.

The same goes for WAN replication: if you need a remote hot site and can't afford a high-speed
connection, you run DRBD/protocol A.

Unless of course DRBD with protocol A doesn't use some sort of FIFO order, which gets back to my
question.

It seems you're saying it does.  You said, "the only reordering taking place is the reordering in
the local io stacks below drbd", which would seem to indicate a first-in, first-out write buffer.
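To make sure I'm picturing it correctly, here's a toy model of protocol A as I understand it (plain Python, not DRBD code; the class and its methods are entirely my invention): local writes complete as soon as they're queued, the wire delivers the queue strictly in order, and a crash can only cut off the tail of the stream, never scramble it.

```python
from collections import deque

class ProtocolA:
    """Toy model: asynchronous replication with a FIFO send queue.
    A crash drops the tail of the queue; it never reorders it."""

    def __init__(self):
        self.send_queue = deque()  # writes waiting for the WAN
        self.remote_disk = []      # what the secondary has applied

    def write(self, block):
        # Protocol A: completion is signalled as soon as the write
        # is on the local disk and in the send queue.
        self.send_queue.append(block)
        return "completed locally"

    def drain(self, n):
        # The WAN delivers n queued writes, strictly in order (TCP).
        for _ in range(min(n, len(self.send_queue))):
            self.remote_disk.append(self.send_queue.popleft())

    def crash(self):
        # Primary dies: everything still queued is lost, but the
        # secondary's image is a clean prefix of the write history.
        lost = list(self.send_queue)
        self.send_queue.clear()
        return lost

p = ProtocolA()
for b in ["w1", "w2", "w3", "w4"]:
    p.write(b)
p.drain(2)
lost = p.crash()
print(p.remote_disk, lost)  # ['w1', 'w2'] ['w3', 'w4']
```

If that picture is right, the secondary can be behind, but never out of step: it simply looks like the primary did a few minutes ago.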


All of this assumes, of course, that we're not doing something mission-critical like banking.
Rolling back an already-completed transaction could be very, very bad for a bank.  Nonetheless
I've gotta wonder how they deal with this; surely they must do tape snapshot backups in addition
to clustering.  Surely even they have to accept the possibility that they could lose the most
recent transactions!

Anyway, the kind of databases I have in mind are for small/medium businesses where, although
rolling back an already-completed transaction wouldn't be ideal, it wouldn't be the end of the
business, particularly when you consider that your primary data center just got swallowed up by an
earthquake.  I'll gladly accept just a few transactions lost!!  Like everything, there's a
tradeoff of risk against cost/performance, and it seems that for small businesses this is an
acceptable risk, particularly if those same small businesses can also accept the risk of snapshot
backups.

Many, many small businesses (such as ours) accept the risk of using periodic snapshot backups of
databases, and it seems to me it's the same level of risk for DRBD/protocol A, unless there's
reordering going on somewhere.


So... does protocol A send writes across the wire on a first-in/first-out basis?

CD

You're a good person?  Yeah, right!  ;-)
Prove it: TenThousandDollarOffer.com


