Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi Lars, I will have a look at your suggestions. Thanks.

On Wed, Oct 31, 2012 at 1:43 PM, Lars Ellenberg <lars.ellenberg at linbit.com> wrote:

> On Wed, Oct 31, 2012 at 09:33:02AM +0000, Maurits van de Lande wrote:
> > Hello Zohair,
> >
> > > So if I don't have a real fencing device, I can't get a cluster?
> >
> > It depends. I have some clusters without fencing; they are mainly for
> > virtualization. The cluster is configured not to relocate the services
> > in case of a failure; I do that manually (all services are redundant).
> > The virtual machine configuration files are stored on a GFS2 file
> > system, and I configured that file system to be always available (also
> > when quorum is lost). Because I do all of the recovery manually this
> > does not matter; also, no data is written to the GFS2 file system.
> >
> > So, you can have a cluster without fencing if a quorum loss does not
> > corrupt your data. You should also set the cluster timeouts in such a
> > way that even over a WAN connection the cluster does not lose quorum
> > during normal operation. (I have not tried this.)
> >
> > For Samba you might need "clustered Samba": http://ctdb.samba.org/samba.html
>
> Clustered Samba on GFS2 via WAN to remote locations with bandwidth
> constraints, not to speak of latency and possible flakiness of the link:
> a very, very, VERY BAD idea.
>
> A cluster file system across a WAN: already a bad idea.
>
> A cluster file system without fencing is a sure subscription to data loss.
>
> Cluster file systems are very sensitive to latency, both storage latency
> and network latency (for the DLM component). So even if you managed to
> get it working, its performance would suck big time (on a WAN with
> bandwidth and latency constraints ...).
>
> In a word: don't.
>
> Maybe look again at why you think rsync does not work for you.
> Use csync2, maybe coupled with inotify, or look into lsyncd (which does
> the inotify part and can be coupled with either rsync or csync2 or any
> other similar tool), ...
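For reference, a minimal sketch of the "rsync coupled with inotify" idea Lars
mentions, assuming inotify-tools and rsync are installed; the paths and host
name are placeholders, and in practice lsyncd or csync2 handle the batching
and retries that this loop does not:

    #!/bin/sh
    # Watch the share recursively and push changes to the peer as they happen.
    # SRC, DST and the event list are illustrative only.
    SRC=/srv/samba/share/
    DST=backup.example.com:/srv/samba/share/

    while inotifywait -r -e modify,create,delete,move "$SRC"; do
        # Only changed files are transferred; -z compresses, which helps
        # on a bandwidth-constrained WAN link.
        rsync -az --delete "$SRC" "$DST"
    done

Changes that land while an rsync pass is still running are only picked up on
the next triggered run; closing gaps like that is one of the things lsyncd
already does for you.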
> > Did you have a look at GlusterFS? http://www.gluster.org/ It supports
> > synchronization (I do not know whether it also works over a WAN
> > connection), but it has nothing to do with DRBD. The DRBD 8.3 branch
> > is very mature; I do not know if the same holds for GlusterFS.
> >
> > Best regards,
> >
> > Maurits
> >
> > From: drbd-user-bounces at lists.linbit.com [mailto:drbd-user-bounces at lists.linbit.com] On behalf of Zohair Raza
> > Sent: Wednesday, October 31, 2012 9:51
> > To: Felix Frank
> > CC: drbd-user at lists.linbit.com
> > Subject: Re: [DRBD-user] GFS2 freezes
> >
> > So if I don't have a real fencing device, I can't get a cluster?
> >
> > My requirement is to synchronize two Samba boxes between remote
> > locations. I can't use rsync because of bandwidth consumption and
> > system load: every time it runs it has to walk every file to see
> > whether it is in sync or not.
> >
> > While GFS seemed to be the right option, the two servers are far
> > apart, so I cannot have a fencing device, and they may experience
> > power outages or network failures quite often.
> >
> > What do you guys suggest in such a scenario?
> >
> > Regards,
> > Zohair Raza
> >
> > On Wed, Oct 31, 2012 at 12:30 PM, Felix Frank <ff at mpexnet.de> wrote:
> > On 10/31/2012 12:02 AM, Lars Ellenberg wrote:
> > >>> Manual fencing is not in any way supported. You must be able to call
> > >>> 'fence_node <peer>' and have the remote node reset. If this doesn't
> > >>> happen, your fencing is not sufficient.
> > >> fence_node <peer> doesn't work for me
> > >>
> > >> fence_node node2 says
> > >>
> > >> fence node2 failed
> > > Which is why you need a *real* fencing device
> > > for automatic fencing.
> >
> > ... which is bound to sound more than a little cryptic to the
> > uninitiated, I assume.
> >
> > An example of a "classical" fencing method is a power distribution
> > unit with network access: the surviving node accesses the PDU and cuts
> > the power to its peer. This is just one example; similar results can
> > be achieved using IPMI/ILOM technologies etc.
> >
> > HTH,
> > Felix
>
> --
> : Lars Ellenberg
> : LINBIT | Your Way to High Availability
> : DRBD/HA support and consulting http://www.linbit.com
>
> DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
> __
> please don't Cc me, but send to list -- I'm subscribed
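To make Felix's PDU/IPMI example a bit more concrete, here is a hedged sketch
of how an IPMI-style fence device is typically verified by hand before a
cluster is allowed to depend on it; the address, credentials, and node name
below are invented for illustration:

    # Ask the peer's management controller for its power status, then
    # actually power-cycle it. 192.168.10.12 / admin / secret are placeholders.
    fence_ipmilan -a 192.168.10.12 -l admin -p secret -o status
    fence_ipmilan -a 192.168.10.12 -l admin -p secret -o reboot

    # Only once the agent works on its own should fencing through the
    # cluster (which looks the device up in cluster.conf) be expected to work:
    fence_node node2

If the first command cannot even reach the BMC, that is the problem to fix
before touching the cluster configuration at all.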