[DRBD-user] fence-peer

cesar brain at click.com.py
Wed Jul 3 23:24:44 CEST 2013

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


I forgot to say this:

1- The topic/thread is this: http://drbd.10923.n7.nabble.com/fence-peer-td3298.html
2- This topic/thread has a patch, "rhcm-fence-peer.sh", by Mr. Kaloyan Kovachev for DRBD

A correction to my question:
My Proxmox node kvm6 has:
IP for the Switch LAN communication = 10.0.0.6 
IP for the DRBD communication (NIC to NIC) = 10.10.10.6 


And some more questions:
- Which DRBD versions (8.3.x, 8.4.x, 9.x) or other programs does the patch "rhcm-fence-peer.sh" require? 
- Can the patch "rhcm-fence-peer.sh" work with several resources, or with several volumes per resource, if it is deployed on several nodes?
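For reference, a fence-peer script such as "rhcm-fence-peer.sh" is normally wired into DRBD through the handlers section of the configuration. A minimal sketch only (DRBD 8.3/8.4 syntax; the install path, and the use of the stock crm-unfence-peer.sh for un-fencing, are assumptions and not something this thread confirms):

```
# /etc/drbd.d/global_common.conf -- sketch only, not a tested configuration
common {
  disk {
    # "resource-only": call the fence-peer handler on disconnect,
    # without requiring node-level STONITH
    fencing resource-only;
  }
  handlers {
    # install path is an assumption; point this at the patched script
    fence-peer "/usr/lib/drbd/rhcm-fence-peer.sh";
    # stock DRBD script to lift the fencing constraint after resync;
    # whether the patch expects its own un-fence step should be confirmed
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
}
```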

I hope you can clear up my doubts. See you soon.

Best regards
Cesar

  ----- Original Message ----- 
  From: cesar [via DRBD] 
  To: cesar 
  Sent: Wednesday, July 03, 2013 4:34 PM
  Subject: Re: fence-peer


  Hi guys and Mr. Kaloyan Kovachev 

  @ To anyone who can help me with this problem 

  @ Kaloyan Kovachev: 
  Many thanks for your kind attention and your time, 
  but I don't understand you very well. 
  I'm no expert in cluster configurations, and I don't speak English well (but I am eager to learn), 
  so if you can help me, please explain in simple terms for me. 

  I'll be very grateful if you can help me 

  Please read this whole message, and if you can help me, please let me know. 

  If you need more details about anything, please let me know (for example, my cluster configuration file). 

  Let me tell you what I want: 
  As I have DRBD on two nodes, I want to build fencing handlers that freeze I/O during the disconnected period, reconnect automatically and quickly, and replay pending buffers, without any reset. If possible, this must work without using any additional cluster software, i.e. without RHCS, RHCM, Pacemaker, OpenAIS, Heartbeat, etc. 

  My scenario:
  - I have 2 Proxmox nodes in the same LAN; the Proxmox VE HA Cluster is based on proven Linux HA technologies. 
  Please see:  http://pve.proxmox.com/wiki/High_Availability_Cluster
  - I have DRBD on each Proxmox node for my KVM VMs (used as virtual hard disks). 
  - I have configured "manual_fence" for my Proxmox nodes, i.e. in case of failure I must cut the power brutally and afterwards apply my "manual_fence" from the CLI to get HA. This technique works well for me, so I don't need extra Linux HA cluster configuration for DRBD. 
  - All the Proxmox nodes have SSH communication without needing to enter a password. 

  My questions:
  1- Can I enable automatic reconnection of DRBD resources on my Proxmox nodes in case the DRBD resources become disconnected? 
  2- Is it necessary to configure only the DRBD configuration files, or do I need to make other cluster configurations (for example in Linux HA)? 
  3- If you can help me, please reply with an example of each configuration file (because I don't understand many theoretical explanations). For this purpose, my nodes are called "kvm5" and "kvm6" respectively. 
  kvm5 has: 
  ----------- 
  IP for the Switch LAN communication = 10.0.0.5 
  IP for the DRBD communication (NIC to NIC) = 10.10.10.5 

  kvm6 has: 
  ----------- 
  IP for the Switch LAN communication = 10.0.0.6 
  IP for the DRBD communication (NIC to NIC) = 10.10.10.6 
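With those addresses, a two-node DRBD resource definition would look roughly like this (a sketch only: the resource name r0, the backing disk /dev/sdb1, and port 7788 are assumptions; note that DRBD itself already retries a lost connection automatically while the resource sits in WFConnection):

```
# /etc/drbd.d/r0.res -- sketch; adjust device, disk and port to the real setup
resource r0 {
  on kvm5 {
    device    /dev/drbd0;
    disk      /dev/sdb1;          # assumed backing device
    address   10.10.10.5:7788;    # dedicated NIC-to-NIC link
    meta-disk internal;
  }
  on kvm6 {
    device    /dev/drbd0;
    disk      /dev/sdb1;          # assumed backing device
    address   10.10.10.6:7788;    # dedicated NIC-to-NIC link
    meta-disk internal;
  }
}
```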

  Best regards 
  Cesar 



------------------------------------------------------------------------------

  If you reply to this email, your message will be added to the discussion below:
  http://drbd.10923.n7.nabble.com/fence-peer-tp3298p17995.html 


--
View this message in context: http://drbd.10923.n7.nabble.com/fence-peer-tp3298p17997.html
Sent from the DRBD - User mailing list archive at Nabble.com.