[DRBD-user] drbd 8.3 - 6 nodes

Felix Frank ff at mpexnet.de
Mon Mar 5 12:44:32 CET 2012



Hi,

sorry, I forgot to CC the list (again), so let me bring this back to
everyone's attention.

On 03/05/2012 11:01 AM, Umarzuki Mochlis wrote:
> On 5 March 2012 at 5:21 PM, Felix Frank <ff at mpexnet.de> wrote:
>> Hi,
>>
>> this is unfortunately not very clear at all.
>>
>>
>> So A1 and A2 are failover partners with DRBD? And A3 mounts a replicated
>> volume via remote block storage (iSCSI)?
>>
>> This would be a rather standard setup requiring a DRBD resource and a
>> floating IP address shared by A1 and A2. A3 uses services provided by
>> the node owning the IP address.
>>
>> I suspect you're aiming for something more complex, so please specify :-)
>>
>> Regards,
>> Felix
> 
> Well sir, this setup is for a Zimbra cluster with rgmanager + cman + drbd 8.3.
> 
> A1, A2 & A3 are a Zimbra cluster group (currently running). What I
> understand is that with external metadata, I could simply use the A1 &
> A2 disks as DRBD backing devices without having to reformat (mkfs) them.

That's true, and works for internal metadata as well, if you've got the
space at the end of your filesystem.
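For reference, a minimal sketch of an 8.3-style resource using external metadata, so the existing filesystem is left untouched. Hostnames, devices and addresses here are invented placeholders, and the metadata device would still need to be initialized with "drbdadm create-md" before first use:

    resource r0 {
      protocol C;
      on a1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;         # existing filesystem, not reformatted
        address   10.0.0.1:7788;
        meta-disk /dev/sdc1[0];      # external metadata on a separate device
      }
      on a2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk /dev/sdc1[0];
      }
    }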

I don't really know what a zimbra cluster group comprises.

> But what I did not understand/know: would I be able to make A3 mount the
> disk/LUN of A1 or A2, so that A3 can resume A1's or A2's zimbra-cluster
> service? Without DRBD, A3 would automatically mount A1's LUN & run as
> A1, resuming A1's role via rgmanager.

Typically, A2 will assume A1's role in case of failure, using the DRBD
device.
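In rough strokes, the takeover that rgmanager would drive looks like the following on A2. This is only a sketch; the resource name, mount point and floating IP are placeholders, not your actual configuration:

    # Sketch only - rgmanager performs these steps automatically on failover.
    drbdadm primary r0                    # promote the local DRBD replica
    mount /dev/drbd0 /opt/zimbra          # mount the replicated filesystem
    ip addr add 10.0.0.100/24 dev eth0    # take over the floating service IP
    # ...then start the Zimbra services on this node.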

I'm still not sure how your 3rd node comes into play. For mere High
Availability, 2 nodes generally suffice. Adding a 3rd one makes things a
bit harder.
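If the 3rd node really needs its own replica, DRBD 8.3 can do three nodes by stacking a second resource on top of the first. A hedged sketch, with all names and addresses invented; the stacked resource follows whichever of A1/A2 currently holds the floating IP:

    resource r0-U {
      stacked-on-top-of r0 {
        device    /dev/drbd10;
        address   10.0.0.100:7789;   # the floating IP shared by A1/A2
      }
      on a3 {
        device    /dev/drbd10;
        disk      /dev/sdb1;
        address   10.0.0.3:7789;
        meta-disk internal;
      }
    }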

Cheers,
Felix
