[DRBD-user] Constant replication problems with DRBD 8.3.10

cesar brain@click.com.py
Fri Jun 28 21:00:14 CEST 2013

*Many thanks, Lars Ellenberg, for your answers.*
Your suggestions are very important to me...  :-)

I feel like a novice beside you, and I would be very grateful if you can help.

*My answers and questions:*

1- About: "So what is it now? urgent, or just "curious"? ;-) " :
A= At first I was in a hurry to finish and deliver; afterwards I spoke with
the boss and he accepted that fixing it will take some time.

2- For KVM VMs in HA, I am using Proxmox VE 2.3 installed from its ISO (its
kernel is the 2.6.32 kernel from RHEL6), plus LVM2 and DRBD. These two PCs
are on the same LAN, and both are ASUS P8H77-M PRO boards (I don't use RAID).
Here you can see the specs for this mainboard:
http://dlcdnet.asus.com/pub/ASUS/mb/LGA1155/P8H77-M_PRO/E7508_P8H77_M_PRO.pdf

3- I will soon be replacing the Realtek network cards with Intel server
network cards (to rule out problems with the network cards).

4- About: "Or just don't do dual-primary."
A= As I need two primaries on DRBD for use with KVM VMs + LVM2 + DRBD + HA
for VMs, I can't change this setup; I must keep it.

5- About my manual fence: I use the "fence_manual" agent in my cluster, so
if a computer crashes, I cut the power brutally on the PC that crashed, and
afterwards I apply the fence manually from the CLI to start the VM on the
other node (see the sketch below). I have already verified that it works
this way (PDUs are very expensive if I want real fencing, in my country only
the APC brand is available, and for only one VM as a mail server the expense
is not justified).
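
The manual step on the surviving node looks roughly like this ("kvm5" is
just the example name of the crashed node, and the exact fence_ack_manual
syntax may differ between cman versions):

  # 1. Physically cut the power on the crashed node (here: kvm5), then
  # 2. acknowledge the fence so the cluster can proceed with recovery:
  fence_ack_manual kvm5
  # 3. afterwards, start the VM on the surviving node via the HA manager.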

6- About: "With special purpose built fencing handlers, we may be able to
fix your setup so it will freeze IO during the disconnected period,
reconnect, and replay pending buffers, without any reset."
*Q= My questions:*
6.1 To accomplish this, what should I do?
6.2 Do I need to add software or hardware to get fencing with DRBD?
6.3 Or do I simply edit the "global_common.conf" file and enable the line
that says: fence-peer "/usr/lib/drbd/crm-fence-peer.sh"?
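
Reading the drbd.conf man page, my guess is that enabling fencing would look
roughly like this (only a sketch, please correct me; as far as I understand,
crm-fence-peer.sh assumes a Pacemaker cluster, so with Proxmox/rgmanager I
would need a different fence-peer handler):

        disk {
                # Freeze IO and call the fence-peer handler while the
                # peer is unreachable:
                fencing resource-and-stonith;
        }

        handlers {
                fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # Lift the fencing constraint again after a successful resync:
                after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
        }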

*THIS IS MY CURRENT CONFIGURATION WITH DRBD VERSION 8.4.2:
global_common.conf file:*

global {
        usage-count yes;
        # minor-count dialog-refresh disable-ip-verification
}

common {
        protocol C;

        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                split-brain "/usr/lib/drbd/notify-split-brain.sh some-user@my-domain.com";
                out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh some-user@my-domain.com";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
                wfc-timeout 30; degr-wfc-timeout 20; outdated-wfc-timeout 15;
        }

        options {
                # cpu-mask on-no-data-accessible
                cpu-mask 0;
        }

        disk {
                # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
                # disk-drain md-flushes resync-rate resync-after al-extents
                # c-plan-ahead c-delay-target c-fill-target c-max-rate
                # c-min-rate disk-timeout
                on-io-error detach; al-extents 3389; resync-rate 75M;
        }

        net {
                # protocol timeout max-epoch-size max-buffers unplug-watermark
                # connect-int ping-int sndbuf-size rcvbuf-size ko-count
                # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
                # after-sb-1pri after-sb-2pri always-asbp rr-conflict
                # ping-timeout data-integrity-alg tcp-cork on-congestion
                # congestion-fill congestion-extents csums-alg verify-alg
                # use-rle
                sndbuf-size 0; no-tcp-cork; unplug-watermark 16; max-buffers 8000; max-epoch-size 8000;
                # Online check of replicated data integrity:
                data-integrity-alg md5;
                # Online verification of on-disk data:
                verify-alg sha1;
        }
}
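
Because verify-alg is set, I also run the online verification by hand from
time to time, for example:

  # Start an online verify of resource r0 (on one node only);
  # progress appears in /proc/drbd, differences in the kernel log:
  drbdadm verify r0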

*r0.res file:*
resource r0 {

  protocol C;

  startup {
    #wfc-timeout  15;
    #degr-wfc-timeout 60;
    become-primary-on both;
  }

  net {
    #cram-hmac-alg sha1;
    #shared-secret "my-secret";
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }

  on kvm5 {
    device /dev/drbd0;
    disk /dev/sda3;
    address 10.2.2.50:7788;
    meta-disk internal;
  }

  on kvm6 {
    device /dev/drbd0;
    disk /dev/sda3;
    address 10.2.2.51:7788;
    meta-disk internal;
  }
}

*r1.res file:*
resource r1 {
  protocol C;
  startup {
    #wfc-timeout  15;
    #degr-wfc-timeout 60;
    become-primary-on both;
  }
  net {
    #cram-hmac-alg sha1;
    #shared-secret "my-secret";
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  on kvm5 {
    device /dev/drbd1;
    disk /dev/sdb3;
    address 10.2.2.50:7789;
    meta-disk internal;
  }
  on kvm6 {
    device /dev/drbd1;
    disk /dev/sdb3;
    address 10.2.2.51:7789;
    meta-disk internal;
  }
}
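
For completeness, when the replication drops, I check the state of the
resources with the standard commands:

  # Overall DRBD status on each node:
  cat /proc/drbd
  # Connection and disk state of one resource:
  drbdadm cstate r0
  drbdadm dstate r0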

Awaiting your prompt reply, see you soon.

Best regards
Cesar


