[DRBD-user] Multiple Clusters

Dominique Chabord dominique.chabord at bluedjinn.com
Fri Jun 4 10:54:31 CEST 2004

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Jason Gray wrote:
> I'm looking at creating a multi-clustered array server network for our
> production environment.  Is it possible to have 5,6,7..n servers clustered
> together (kind of like a token ring) to act as redundant arrays for each
> other?
drbd is peer-to-peer.
As long as each primary has one secondary assigned to it, drbd manages 
the different volumes independently, so you can build a kind of ring 
where each server backs up the previous one.
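To make the ring concrete, here is a minimal sketch of what three such 
resources could look like in /etc/drbd.conf. The host names, disks, 
addresses and resource names are placeholders and the exact syntax 
depends on your drbd version, so take it as an illustration only:

  # each resource has exactly two hosts: the server that serves it
  # (primary) and the next server in the ring (its secondary)
  resource vol-a {    # primary on server-1, backed up by server-2
    protocol C;
    on server-1 { device /dev/drbd0; disk /dev/sda5; address 10.0.0.1:7788; meta-disk internal; }
    on server-2 { device /dev/drbd0; disk /dev/sda5; address 10.0.0.2:7788; meta-disk internal; }
  }
  resource vol-b {    # primary on server-2, backed up by server-3
    protocol C;
    on server-2 { device /dev/drbd1; disk /dev/sda6; address 10.0.0.2:7789; meta-disk internal; }
    on server-3 { device /dev/drbd1; disk /dev/sda6; address 10.0.0.3:7789; meta-disk internal; }
  }
  resource vol-c {    # primary on server-3, backed up by server-1
    protocol C;
    on server-3 { device /dev/drbd2; disk /dev/sda7; address 10.0.0.3:7790; meta-disk internal; }
    on server-1 { device /dev/drbd2; disk /dev/sda7; address 10.0.0.1:7790; meta-disk internal; }
  }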
> 
> So, instead of having a Primary array I would have 5,6 or 7 "Primary" arrays
> that mirror each other across an isolated network (10.0.0.0 say).

Each primary volume is mirrored only once. It cannot "broadcast" updates 
to several peers; it only sends them to its secondary.
I do see a major advantage in your proposal: your primary volumes will 
be small and will resync faster, and a computer failure will affect a 
smaller amount of data and fewer users.

> Each server would provide a work space for 10-20 users.  If the server they
> are working on goes down they are routed to a different server.  The whole
> server network acts like a 5, 6 or 7 server RAID cluster.

This is failover. It has to be done by another piece of software. 
Heartbeat is an off-the-shelf solution for two servers.
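For a pair of servers, the Heartbeat side can stay as simple as one 
line per volume in haresources. The node name, address, mount point and 
resource name below are made up, and the drbddisk helper script may be 
named differently depending on your drbd release, so this is only a 
sketch:

  # /etc/ha.d/haresources, identical on both nodes: on failover,
  # Heartbeat promotes the drbd resource, mounts the file system and
  # takes over the service address
  node-a drbddisk::vol-a Filesystem::/dev/drbd0::/export/vol-a::ext3 IPaddr::10.0.0.101/24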

I manage the Shaman-X project, which provides a kit for N servers and P 
drbd volumes. It does what I understand you are asking for.

The current version is here:
http://downloads.shaman-x.org/recovery_kits/shx-kit-17-may-04.tar
Download it on a workstation and follow the README instructions.

I take this opportunity to provide a project status update below.

The kit is an interactive program (called cdx in Shaman-X terminology) 
which prompts you for your parameters and then generates a custom kit 
to be deployed on your N servers. Running the custom kit then requires 
wdx, sendarp, drbd, php and apache on every server. Sendarp comes with 
the .tar above. WDX can be downloaded at
http://downloads.shaman-x.org/wdx-0.4.2.src.tar
Management of the whole is done through a web page called hdx, so you 
know at any time which node is primary and which is secondary.

Good points are:
- It manages N servers and P drbd volumes.
- Both primaries and secondaries are failed over: if a secondary fails, 
a new secondary is chosen and resyncs.
- You can assign primary/secondary roles manually for every drbd volume 
from the hdx web page (a sketch of what such a switch amounts to 
follows this list).
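Underneath the web page, a manual switch of one volume boils down to 
roughly the following at the drbd level (drbdadm on recent releases, 
drbdsetup on older ones; the resource name is a placeholder, not what 
cdx generates):

  # planned switchover of vol-a: demote the current primary first
  drbdadm secondary vol-a   # on the old primary, after unmounting the volume
  drbdadm primary vol-a     # on the new primary, then mount and take over the service IP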

The Shaman-X team is all volunteers, so we lack the time to finalize 
things and the following limitations apply for now:
- The cdx custom-kit generator is not tested enough and there are 
probably still a few bugs in it (shell script).
- The hdx web page is due to be re-engineered (php script): bug fixes, 
display improvements, password protection for commands, and 
documentation. Today, for instance, hdx requires you to declare two 
separate computer rooms for disaster protection, when this should be 
an option.
- The custom kit should be improved: today, if the secondary is not 
fully synchronized, it still accepts a switch to primary status, which 
is bad.
- Network failover is not implemented yet; it should be handled by the 
custom kit too (relocate primary and secondary according to network 
status).
- The custom kit's use of drbd needs to be reviewed by a drbd 
specialist, and intensive testing is necessary before going to 
production.

I hope I'm not discouraging you; it is closer to a good result than 
the list above suggests. Testing is certainly the key activity to plan.


> 
> The issue is that the secondary systems are essentially offline.  Can the
> systems write to each other while being online?  I need each node to be
> active and sending data to each other.  Is this possible?
No, it is not. The secondary volume is off-line. A server X can be 
primary for volumes 3, 4 and 5 and secondary for 1 and 7. It will serve 
volumes 3, 4 and 5 and support the IP3, IP4 and IP5 addresses used to 
access the data.
Volumes 1 and 2 can be served by server Y, which is secondary for 
volumes 3, 4 and 6. Y will support IP1 and IP2.
Volumes 6 and 7 are served by server Z, which is secondary for volumes 
5 and 2. Z will support IP6 and IP7.
When a server fails:
     for all of its primary volumes, move the primary role to the 
corresponding secondary;
     for all of its secondary volumes, choose a new secondary and 
resync it from the primary.
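A rough sketch of those two steps at the command level, assuming the 
failed server was primary for volume 1 and secondary for volume 3. The 
resource names, devices and addresses are illustrative, and re-pointing 
a resource at a new secondary really means regenerating the 
configuration, so do not take the commands literally:

  # 1. volume 1, where the failed server was primary: promote the
  #    surviving secondary, take over the service address, announce
  #    it and mount the volume
  drbdadm primary vol-1
  ifconfig eth0:1 10.0.0.101 netmask 255.255.255.0 up
  sendarp ...                   # gratuitous ARP for IP1; exact syntax depends on the tool
  mount /dev/drbd1 /export/vol-1

  # 2. volume 3, where the failed server was secondary: point the
  #    resource at a new secondary in the configuration, then force a
  #    full resync from the still-running primary
  drbdadm adjust vol-3          # on both remaining nodes, after the config change
  drbdadm invalidate vol-3      # on the new secondary only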
> 
> Cheers,
Hope this helps
Regards
Dominique
> 
> Jason
> 


