[DRBD-user] GFS2 freezes

Zohair Raza engineerzuhairraza at gmail.com
Wed Oct 31 11:31:53 CET 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Wed, Oct 31, 2012 at 1:33 PM, Maurits van de Lande <M.vandeLande at vdl-fittings.com> wrote:

> Hello Zohair,
>
> > So if I don't have a real fencing device, I can't get a cluster?
>
> It depends. I have some clusters without fencing; they are mainly for
> virtualization. The cluster is configured not to relocate the services in
> case of a failure; I do this manually (all services are redundant). The
> virtual machine configuration files are stored on a GFS2 file system. I
> configured this file system to be always available (also when quorum is
> lost). Because I do all the recovery manually this does not matter, and
> no data is written to the GFS2 file system.
>
> So, you can have a cluster without fencing if a quorum loss does not
> corrupt your data. You should also set the cluster timeouts in such a way
> that even over a WAN connection the cluster does not lose quorum during
> normal operation. (I have not tried this)
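
For reference, a rough sketch of the two-node/timeout part of such a
setup (untested; the cluster name, node names and the 60-second token
value below are made up, and the exact syntax depends on the
cman/corosync version):

    <!-- excerpt from a hypothetical /etc/cluster/cluster.conf -->
    <cluster name="samba-cluster" config_version="1">
      <!-- two-node mode: the cluster stays quorate on a single vote -->
      <cman two_node="1" expected_votes="1"/>
      <!-- raise the totem token timeout (in ms) so that short WAN
           hiccups do not cost you quorum -->
      <totem token="60000"/>
      <clusternodes>
        <clusternode name="node1.example.com" nodeid="1"/>
        <clusternode name="node2.example.com" nodeid="2"/>
      </clusternodes>
    </cluster>
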
>
> For SAMBA you might need “clustered SAMBA”
> http://ctdb.samba.org/samba.html
>

Clustered Samba usually comes later, on top of a clustered file system.
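
A rough sketch of what that looks like with CTDB running on top of the
shared GFS2 mount (the paths, addresses and interface name below are
made up):

    # /etc/ctdb/nodes -- one private cluster IP per node, identical on both nodes
    10.0.0.1
    10.0.0.2

    # /etc/ctdb/public_addresses -- floating IPs that CTDB moves between nodes
    192.168.1.100/24 eth0

    # /etc/sysconfig/ctdb (or /etc/default/ctdb on Debian) -- excerpt
    # the recovery lock file must live on the shared cluster file system
    CTDB_RECOVERY_LOCK="/mnt/gfs2/ctdb/.recovery_lock"
    CTDB_NODES=/etc/ctdb/nodes
    CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
    CTDB_MANAGES_SAMBA=yes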

>
> Did you have a look at glusterfs? http://www.gluster.org/ It supports
> synchronization (I do not know if it also works over a WAN connection), but
> it has nothing to do with drbd. The drbd 8.3 branch is very mature; I do
> not know if the same is true for glusterfs.
>

I checked that earlier, but wanted to dig deeper into why I am unable to
achieve this with GFS.

Now considering glusterfs, lsync and csync2
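
For the glusterfs route, the basic replicated setup would look roughly
like this (hostnames and paths are made up, and I have not verified how
well the replication behaves over a WAN link):

    # run once, from server1, to form the trusted storage pool
    gluster peer probe server2

    # create a two-way replicated volume from one brick on each server
    gluster volume create samba-vol replica 2 \
        server1:/export/brick1 server2:/export/brick1
    gluster volume start samba-vol

    # mount the volume where Samba will share it (on each node)
    mount -t glusterfs server1:/samba-vol /srv/samba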

Thanks a lot for your help so far, guys.

>
> Best regards,
>
> Maurits
>
> From: drbd-user-bounces at lists.linbit.com [mailto:drbd-user-bounces at lists.linbit.com] On behalf of Zohair Raza
> Sent: Wednesday, 31 October 2012 9:51
> To: Felix Frank
> CC: drbd-user at lists.linbit.com
> Subject: Re: [DRBD-user] GFS2 freezes
>
> So if I don't have a real fencing device, I can't get a cluster?
>
> My requirement is to synchronize two Samba boxes between remote
> locations. I can't use rsync because of the bandwidth consumption and
> the system load: every time it runs it has to go through each file and
> check whether it is in sync or not.
>
>
> GFS seemed to be the right option, but as the two servers are distant
> from each other I cannot have a fencing device, since either site may
> experience power outages or network failures quite often.
>
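
For context, GFS2 on top of DRBD needs dual-primary mode; in the 8.3
series the relevant pieces look roughly like this (the resource, device
and node names are made up, and dual-primary is exactly the setup where
fencing matters most):

    resource r0 {
      protocol C;                    # dual-primary requires synchronous replication
      startup {
        become-primary-on both;      # both nodes primary, as GFS2 needs
      }
      net {
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
      }
      on node1.example.com {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on node2.example.com {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

    # the file system is then created once, with one journal per node, e.g.:
    # mkfs.gfs2 -p lock_dlm -t samba-cluster:data -j 2 /dev/drbd0
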
>
> What do you guys suggest in such a scenario?
>
>
> Regards,
> Zohair Raza
>
> On Wed, Oct 31, 2012 at 12:30 PM, Felix Frank <ff at mpexnet.de> wrote:
>
> On 10/31/2012 12:02 AM, Lars Ellenberg wrote:
> >>> Manual fencing is not in any way supported. You must be able to call
> >>> 'fence_node <peer>' and have the remote node reset. If this doesn't
> >>> happen, your fencing is not sufficient.
> >> fence_node <peer> doesn't work for me
> >>
> >> fence_node node2 says
> >>
> >> fence node2 failed
> > Which is why you need a *real* fencing device
> > for automatic fencing.
>
> ...which is bound to sound more than a little cryptic to the
> uninitiated, I assume.
>
> An example of a "classical" fencing method is a power distribution unit
> with network access. The surviving node accesses the PDU and cuts the
> power to its peer.
> This is just one example. Similar results can be achieved using
> IPMI/ILOM technologies etc.
>
> HTH,
> Felix
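
To make the example concrete, this is roughly how an IPMI-based fencing
device ends up in cluster.conf (the node name, device name, address and
credentials below are made up):

    <!-- excerpt from a hypothetical /etc/cluster/cluster.conf -->
    <clusternode name="node1.example.com" nodeid="1">
      <fence>
        <method name="ipmi">
          <!-- refers to the fence device defined below -->
          <device name="ipmi-node1"/>
        </method>
      </fence>
    </clusternode>

    <fencedevices>
      <!-- fence_ipmilan power-cycles the peer through its IPMI/BMC interface -->
      <fencedevice agent="fence_ipmilan" name="ipmi-node1"
                   ipaddr="10.0.0.11" login="admin" passwd="secret" lanplus="1"/>
    </fencedevices>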