[DRBD-user] fence-peer

Kaloyan Kovachev kkovachev at varna.net
Thu Jul 4 12:37:42 CEST 2013



Hi,
On 2013-07-04 00:24, cesar wrote:
> I forgot to say this:
> 
> 1- The topic/thread is this one:
> http://drbd.10923.n7.nabble.com/fence-peer-td3298.html
> 2- That topic/thread contains a patch, "rhcm-fence-peer.sh", by Mr.
> Kaloyan Kovachev, for DRBD
> 
> Correction to my question:
> My Proxmox node kvm6 has:
> IP for the switch LAN communication = 10.0.0.6
> IP for the DRBD communication (NIC to NIC) = 10.10.10.6
> 
> And some more questions:
> - Which DRBD version (8.3.x, 8.4.x, 9.x) or other programs does the
> patch "rhcm-fence-peer.sh" require?

It was made for Red Hat Cluster Manager, to fence the peer and to check
whether it was fenced previously, with cman. I am not familiar with
Proxmox, so I am not sure whether this script is what you need in your
case, or the crm-fence-peer.sh one, which is included with DRBD.
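For reference, this is roughly how either script gets wired into a
resource (DRBD 8.3/8.4 syntax; the paths below are the usual install
locations shipped with DRBD, but check where your packages put them):

  resource r0 {
    disk {
      # call the fence-peer handler when the connection is lost
      # while this node is Primary
      fencing resource-only;
    }
    handlers {
      # for a Pacemaker-based cluster, the script shipped with DRBD:
      fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
      after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
  }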

> - Can the patch "rhcm-fence-peer.sh" work with several resources, or
> with several volumes of a resource, if it is deployed on several nodes?

Yes, it works with several resources, but to use it with several nodes
you need different outdate handlers, because the script expects only
two peer names - see the sketch below.
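A minimal sketch of what I mean, with hypothetical per-resource
wrapper scripts (names and paths are made up; each wrapper would call
the fence script set up for that particular pair of peers):

  # resource r0 runs between kvm5 and kvm6
  resource r0 {
    handlers {
      fence-peer "/usr/local/sbin/fence-peer-kvm5-kvm6.sh";
    }
  }

  # resource r1 runs between kvm6 and a hypothetical third node kvm7
  resource r1 {
    handlers {
      fence-peer "/usr/local/sbin/fence-peer-kvm6-kvm7.sh";
    }
  }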

> 
> Hoping you can dispel my doubts, see you soon
> 
> Best regards
> Cesar
> 
> ----- Original Message -----
> FROM: [hidden email]
> TO: [hidden email]
> SENT: Wednesday, July 03, 2013 4:34 PM
> SUBJECT: Re: fence-peer
> 
> Hi guys and Mr. Kaloyan Kovachev
> 
> @ To anyone who can help me with this problem
> 
> @ Kaloyan Kovachev:
> Many thanks for your kind attention and your time,
> but I don't understand you very well.
> I am no expert in cluster configurations, and I do not speak English
> well (but I am eager to learn),
> so if you can help me, please explain it in a way that is easy for me
> to follow.
> 
> I will be very grateful if you can help me.
> 
> Please read all of this message, and if you can help me, please let
> me know.
> 

If the recommendations in my previous email mean nothing to you, then
I am afraid I may not be of much help, and you will need to look for
expert/paid support to configure these things for you.

> If you need more details about anything, please let me know (for
> example my cluster configuration file).
> 
> Let me tell you what I want:
> As I have DRBD on two nodes, I want to set up fencing handlers that
> freeze I/O during the disconnected period, reconnect automatically
> and quickly, and replay pending buffers, without any reset; and if
> this is possible, it must work without using any additional cluster
> software, i.e. without RHCS, RHCM, Pacemaker, OpenAIS, Heartbeat,
> etc.
> 
> MY SCENARIO:
> - I have 2 Proxmox nodes in the same LAN; the Proxmox VE HA Cluster
> is based on proven Linux HA technologies.
> Please see: http://pve.proxmox.com/wiki/High_Availability_Cluster
> - I have DRBD on each Proxmox node for my KVM VMs (used as virtual
> hard disks)
> - I have configured "manual_fence" for my Proxmox nodes, i.e. in case
> of failures I must cut the power brutally and afterwards apply my
> "manual_fence" from the CLI to get HA. This technique works well for
> me, so I don't need extra Linux HA cluster configuration for DRBD.
> - All the Proxmox nodes have ssh communication without needing to
> enter a password
> 

ssh is required for this script, but it is not enough, and
manual_fence is just not fencing at all - you will need real fencing,
and to read much more about how a cluster works and why fencing is so
important.
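To give you an idea of what real fencing looks like: on a cman-based
cluster (which is what Proxmox VE uses underneath), each node gets a
fence device in cluster.conf, for example IPMI. A sketch only - the
fence_ipmilan agent is real, but the addresses and credentials below
are made-up placeholders:

  <clusternode name="kvm5" nodeid="1" votes="1">
    <fence>
      <method name="1">
        <device name="ipmi-kvm5"/>
      </method>
    </fence>
  </clusternode>
  ...
  <fencedevices>
    <fencedevice agent="fence_ipmilan" name="ipmi-kvm5"
                 ipaddr="10.0.0.105" login="admin" passwd="secret"/>
  </fencedevices>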

DRBD is a 'storage service' of the cluster, so it is even more
important to have proper fencing, or you will end up with corrupted
data and no services at all.

DRBD/cluster without fencing - NO! NO! NO! ( 
http://www.youtube.com/watch?v=oKI-tD0L18A )
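By the way, the "freeze IO during the disconnected period ... without
any reset" behaviour you describe is what DRBD's resource-and-stonith
fencing policy provides: DRBD suspends I/O when the peer is lost and
resumes it once the fence-peer handler reports success. Roughly:

  disk {
    # freeze all I/O on connection loss until the fence-peer
    # handler confirms the peer has been fenced
    fencing resource-and-stonith;
  }

But note the handler must actually be able to fence the peer through
the cluster stack, otherwise I/O stays frozen - which is exactly why
the fencing software is not optional.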

> MY QUESTIONS:
> 1- Can I enable automatic reconnection of DRBD resources on my
> Proxmox nodes in case the DRBD resources get disconnected?
> 2- Is it necessary to configure only the DRBD configuration files, or
> do I need to make other cluster configurations (for example in Linux
> HA)?
> 3- If you can help me, please reply with an example of each
> configuration file (because I don't understand many theoretical
> explanations). For this purpose, my nodes are called "kvm5" and
> "kvm6" respectively.
> kvm5 has:
> -----------
> IP for the switch LAN communication = 10.0.0.5
> IP for the DRBD communication (NIC to NIC) = 10.10.10.5
> 
> kvm6 has:
> -----------
> IP for the switch LAN communication = 10.0.0.6
> IP for the DRBD communication (NIC to NIC) = 10.10.10.6
> 
> Best regards
> Cesar
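
Since you asked for concrete files: a minimal sketch of what a DRBD
resource between these two nodes could look like (device, backing disk
and port are placeholders - adjust them to your setup):

  resource r0 {
    protocol C;
    disk {
      fencing resource-and-stonith;
    }
    handlers {
      # or rhcm-fence-peer.sh / your own handler, as discussed above
      fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
      after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
    on kvm5 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   10.10.10.5:7788;
      meta-disk internal;
    }
    on kvm6 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   10.10.10.6:7788;
      meta-disk internal;
    }
  }

But this is only the DRBD side; the fencing itself still has to come
from the cluster stack.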


