[DRBD-user] Reply: Re: Stop master mount access during a slave network failure in C protocol?

Robert.Koeppl at knapp.com Robert.Koeppl at knapp.com
Tue Sep 1 09:45:40 CEST 2015

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Or you can use a variable sync rate and avoid the performance implications
of a fixed rate, something like:

  syncer {
    c-plan-ahead 20;
    c-min-rate 1M;
    c-max-rate 20M;
    c-fill-target 5M;
    al-extents 3389;
    verify-alg md5;
  }
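
Assuming the resource is already up, re-running drbdadm adjust should pick
up the changed syncer settings without restarting anything (resource name
r0 taken from the config further down in this thread):

  drbdadm adjust r0

The effective resync speed can then be watched in /proc/drbd.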

Mit freundlichen Grüßen / Best Regards

Robert Köppl

Customer Support & Projects
Teamleader IT Support

KNAPP Systemintegration GmbH
Waltenbachstraße 9
8700 Leoben, Austria
Phone: +43 5 04953 6322
Fax: +43 5 04953 6500
robert.koeppl at knapp.com
www.KNAPP.com

Commercial register number: FN 138870x
Commercial register court: Leoben




From:	Digimer <lists at alteeve.ca>
To:	Mayk Eskilla <meskilla at outlook.com>,
            "drbd-user at lists.linbit.com" <drbd-user at lists.linbit.com>
Date:	31.08.2015 18:23
Subject:	Re: [DRBD-user] Stop master mount access during a slave network
            failure in C protocol?
Sent by:	drbd-user-bounces at lists.linbit.com



On 31/08/15 11:27 AM, Mayk Eskilla wrote:
> Hi list
>
> I'm testing the drbd C protocol with two ext4 partitions on two Banana Pis
> and I noticed that the C protocol does not stop a copy process into the
> master mount while I disconnect the slave's network cable: there is a short
> delay in copying, but the process then continues, with /proc/drbd reporting
> "Network failure" and "Waiting for connection" and the slave being in an
> inconsistent state. Re-plugging the slave's network cable triggers a
> successful sync from master to slave and both are up-to-date again.

Do you have two drbd resources, each with ext4 and mounted on one node
at a time? If you're mounting an ext4 partition on both nodes, you will
corrupt the file system very quickly.
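
For illustration only (hypothetical second partition, and the ports and
paths are just placeholders), two independent resources would each get
their own drbd device and backing disk, roughly:

  resource r0 { device /dev/drbd0; disk /dev/sda1; ... }
  resource r1 { device /dev/drbd1; disk /dev/sda2; ... }

with /dev/drbd0 mounted on one node and /dev/drbd1 on the other, and never
the same filesystem mounted on both nodes at once.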

Also note that without fencing, it's very possible to get a split-brain.

> My question is simply this: how come the C protocol does not block master
> mount write access when data cannot safely be written to the slave? Is
> this considered a heartbeat task, so drbd does not react itself? Or can I
> modify drbd.conf so that at least disk writes into the master are stopped
> when the slave is disconnected?

If the Primary node loses connection to the Secondary, it starts marking
the changed blocks in a "dirty blocks" bitmap. Later, when the Secondary
reconnects, those dirty blocks are sync'ed over to the peer at the rate
set in 'syncer { rate xM; }'.

Note that replication (replicating new data to both nodes when
connected) always goes as fast as possible. When dirty blocks need to be
sync'ed, the bandwidth given to that resync is taken away from the
replication rate, which makes your writes feel slow.
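
As a rough illustration of the math (assuming the link sustains about
30 MB/s): with 'rate 20M', a running resync will try to claim ~20 MB/s,
leaving only ~10 MB/s for ongoing replication, so writes on the Primary
feel slow until the resync completes.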

> Attached is my minimalistic drbd.conf
>
> cat /etc/drbd.conf
> global { usage-count no; }
> common { syncer { rate 100M; } }

This is way too high. Try 20M on an rpi.
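
Something along these lines (same syntax as your config, just a lower
fixed rate) would be more realistic for that hardware:

  common { syncer { rate 20M; } }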

> resource r0 {
>         protocol C;
>         startup {
>                 wfc-timeout  15;
>                 degr-wfc-timeout 60;
>         }
>         net {
>                 cram-hmac-alg sha256;
>                 shared-secret "secret";
>         }
>         on Pi1 {
>                 device /dev/drbd0;
>                 disk /dev/sda1;
>                 address 192.168.1.11:7789;
>                 meta-disk internal;
>         }
>         on Pi2 {
>                 device /dev/drbd0;
>                 disk /dev/sda1;
>                 address 192.168.1.12:7789;
>                 meta-disk internal;
>         }
> }
>
> There is no heartbeat service involved as of now, so I'm assigning roles
> myself with drbdadm.

When the time comes, use corosync + pacemaker. Heartbeat has long been
deprecated. When you do set up pacemaker, figure out what device you can
use for fencing and configure/test stonith before anything else. Once
that is working, set drbd's fencing policy to 'resource-and-stonith' and
configure the 'crm-{un,}fence-peer.sh' handlers.
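
A minimal sketch of that part of the configuration (assuming the stock
crm-fence-peer.sh scripts shipped with drbd 8.x; paths and section
placement may differ per version and distribution):

  resource r0 {
    disk { fencing resource-and-stonith; }
    handlers {
      fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
      after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
    ...
  }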

This will prevent split-brains and make your cluster much safer and more
stable.
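
Until the cluster manager is in place, a manual failover with drbdadm
looks roughly like this (sketch only; /mnt/r0 is a hypothetical mount
point):

  # on the current Primary
  umount /mnt/r0
  drbdadm secondary r0

  # on the node that should take over
  drbdadm primary r0
  mount /dev/drbd0 /mnt/r0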

> Regards
>
> Mayk
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
>


--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
_______________________________________________
drbd-user mailing list
drbd-user at lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user




