[DRBD-user] Opteron/Xeon - Kernel 2.4/2.6 setup
Lars.Ellenberg at linbit.com
Mon Jun 14 09:51:40 CEST 2004
/ 2004-06-14 07:51:35 +0200
\ Daniel Khan:
> I am thinking of using DRBD in a rather complex environment:
> It's a 3 node cluster for a webhosting environment:
> node1, node2: Dual AMD Opteron, Kernel 2.6.5 SMP (Gentoo)
> node3: Dual Xeon, Kernel 2.4.22 SMP (Fedora)
> node1 and node2 mount their home directories over NFS from node3.
> additionally node1 and node2 each keep a mirror of their (NFS) data
> on their local disk.
> The whole thing is to be managed over heartbeat.
> And I want to use DRBD for the mirroring.
> My questions:
> I saw that the 0.7 branch should work on kernel 2.6.
> But would it be crazy to use this in a production environment?
currently that would be asking for trouble.
we have indications that under some circumstances drbd 0.7 may still
cause data inconsistencies between its mirrors [read: data corruption] :(
we are working on that, but don't use pre-releases on a production
server. we will announce an official release of 0.7 when we think it is
ready, and even then you will want to test it yourself first, not go
directly into production.
> And - will DRBD work on a 64bit system?
> If the setup above is too risky I'll use rsync based replication for now.
> Help and suggestions are highly appreciated.
> And .. please don't hesitate to tell me if you think the whole
> idea/setup is crap and why.
ok, I think you can do it that way ...
but consider this:
node1 mounts node3:drbd0 via nfs
writes go via network (nfs) to node3, then via network (drbd) back
again to node1 itself; the block-write acks go via network (drbd) back
to node3, then the fs-write acks go via network (nfs) back to node1.
even reads go via network.
how about (normal operation):
node1 mounts node1:drbd0 directly.
writes go via network (drbd) to node3, and to the local disk.
reads go directly to the local disk.
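for normal operation, that second layout could be expressed in a
drbd.conf along these lines (a sketch only, using the drbd 0.7 config
format -- the resource name, disk partitions, and IP addresses are
placeholders, not taken from your setup):

```
# sketch: node1 is primary and mounts /dev/drbd0 locally,
# node3 holds the mirror.  partitions and addresses are made up.
resource home {
  protocol C;                # ack only after the write reached node3's disk

  on node1 {
    device    /dev/drbd0;    # mounted directly on node1 -> local reads
    disk      /dev/sda5;     # hypothetical local partition
    address   10.0.0.1:7788;
    meta-disk internal;
  }

  on node3 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   10.0.0.3:7788;
    meta-disk internal;
  }
}
```

with that, only writes cross the network once (node1 -> node3), and
reads never do.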
now, what do you think may be better wrt performance and latency?