[DRBD-user] Metadevice question

Alvaro Pietrobono a.pietrobono at mail2.list.it
Wed Apr 13 15:18:51 CEST 2005

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


>> cmd /sbin/drbdsetup /dev/drbd3 disk /dev/hda10 /dev/hda16 3 --on-io-error=detach
>> failed!
>>
>> where: hda10 is 10 GB and hda16 is 360 MB, and the drbd version is 0.7.10
>> for kernel 2.4.27-2-386 (Debian)
>
> 128 MB is needed per meta-data index, and index 3 is the fourth slot.
> So 4 slots (0,1,2,3) times 128 MB makes 512 MB, which is more than
> 360 MB. See also the manpage:
>
>       meta-disk device [index]
>              internal means, that the last 128 MB of  the  lower  device  are
>              used  to  store  the  meta-data.  You  must not use [index] with
>              internal.
>
>              You can use a single block device to store meta-data of multiple
>              DRBD  devices.   E.g.  use meta-disk /dev/hde6[0]; and meta-disk
>              /dev/hde6[1]; for two different  resources.  In  this  case  the
>              meta-disk would need to be at least 256 MB in size.
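>
> A quick worked calculation, reading the manpage literally (my
> assumption here: index N occupies the Nth 128 MB slot, so the meta
> device must be at least (N+1) * 128 MB):
>
>       [0] -> needs >= 128 MB      [1] -> needs >= 256 MB
>       [2] -> needs >= 384 MB      [3] -> needs >= 512 MB
>
> A 360 MB partition therefore only has room for indices [0] and [1].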
>
> Kind regards
> Manfred Ackermann
>
> T-Systems
> Systems Integration
> System Administrator SSC NITS
> Industry Business Unit Telco
> T-Systems International GmbH
> Utbremer Straße 90, D-28217 Bremen
> Phone: +49 421 3799-815
> Fax: +49 421 3799-169
> E-Mail: prsm-eznord at t-systems.com
> Internet: http://www.t-systems.com

All right. But I already use a separate meta-device for each DRBD device.
My configuration:

DEVICE              META-DEVICE
hda8    2 GB        hda13  360 MB
hda9   20 GB        hda14  360 MB
hda10  10 GB        hda15  360 MB
hda11  10 GB        hda16  360 MB
hda12  13 GB        hda17  337 MB
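
(To double-check the real partition sizes; a minimal sketch, assuming
the partition names in the table above. /proc/partitions reports sizes
in 1 KiB blocks.)

    # show the meta-device partitions with their size in 1 KiB blocks
    grep -E 'hda1[3-7]$' /proc/partitions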




My drbd.conf disk configuration follows.
Thanks for your help; it is appreciated.

A.Pietrobono

______________________________________


resource home {

  # transfer protocol to use.
  # C: write IO is reported as completed, if we know it has
  #    reached _both_ local and remote DISK.
  #    * for critical transactional data.
  # B: write IO is reported as completed, if it has reached
  #    local DISK and remote buffer cache.
  #    * for most cases.
  # A: write IO is reported as completed, if it has reached
  #    local DISK and local tcp send buffer. (see also sndbuf-size)
  #    * for high latency networks
  #
  #**********
  # uhm, benchmarks have shown that C is actually better than B.
  # this note shall disappear, when we are convinced that B is
  # the right choice "for most cases".
  # Until then, always use C unless you have a reason not to.
  #     --lge
  #**********
  #
  protocol C;

  # what should be done in case the cluster starts up in
  # degraded mode, but knows it has inconsistent data.

  #  incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";
  incon-degr-cmd "exit 1";


  startup {
    # Wait for connection timeout.
    # The init script blocks the boot process until the resources
    # are connected. This is so when the cluster manager starts later,
    # it does not see a resource with internal split-brain.
    # In case you want to limit the wait time, do it here.
    # Default is 0, which means unlimited. Unit is seconds.
    #
     wfc-timeout  30;

    # Wait for connection timeout if this node was a degraded cluster.
    # In case a degraded cluster (= cluster with only one node left)
    # is rebooted, this timeout value is used.
    #
    degr-wfc-timeout 15;    # 15 seconds.
  }

  disk {
    # if the lower level device reports io-error you have the choice of
    #  "pass_on"  ->  Report the io-error to the upper layers.
    #                 Primary   -> report it to the mounted file system.
    #                 Secondary -> ignore it.
    #  "panic"    ->  The node leaves the cluster by doing a kernel panic.
    #  "detach"   ->  The node drops its backing storage device, and
    #                 continues in disk less mode.
    #
    on-io-error   detach;
  }

  net {
    # this is the size of the tcp socket send buffer
    # increase it _carefully_ if you want to use protocol A over a
    # high latency network with reasonable write throughput.
    # defaults to 2*65535; you might try even 1M, but if your kernel or
    # network driver chokes on that, you have been warned.
    # sndbuf-size 512k;

    # timeout       60;    #  6 seconds  (unit = 0.1 seconds)
    # connect-int   10;    # 10 seconds  (unit = 1 second)
    # ping-int      10;    # 10 seconds  (unit = 1 second)

    # Maximal number of requests (4K) to be allocated by DRBD.
    # The minimum is hardcoded to 32 (=128 kb).
    # For high-performance installations it might help if you
    # increase that number. These buffers are used to hold
    # data blocks while they are written to disk.
    #
     max-buffers     2048;

    # The highest number of data blocks between two write barriers.
    # If you set this < 10 you might decrease your performance.
     max-epoch-size  2048;

    # if some block send times out this many times, the peer is
    # considered dead, even if it still answers ping requests.
    # ko-count 4;

    # if the connection to the peer is lost you have the choice of
    #  "reconnect"   -> Try to reconnect (AKA WFConnection state)
    #  "stand_alone" -> Do not reconnect (AKA StandAlone state)
    #  "freeze_io"   -> Try to reconnect but freeze all IO until
    #                   the connection is established again.
     on-disconnect reconnect;

  }

  syncer {
    # Limit the bandwidth used by the resynchronisation process.
    # default unit is KB/sec; optional suffixes K,M,G are allowed
    #
    rate 500M;

    # All devices in one group are resynchronized in parallel.
    # Resynchronisation of groups is serialized in ascending order.
    # Put DRBD resources which are on different physical disks in one group.
    # Put DRBD resources on one physical disk in different groups.
    #
    group 1;
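
    # (A sketch of the rule above as it would apply here: hda8..hda12
    # are all partitions of the same physical disk, so each resource
    # would get its own group, e.g.
    #   home -> group 1;  mail -> group 2;  spool -> group 3;
    #   www  -> group 4;  opt  -> group 5;
    # This only illustrates the guideline, it is not a tested setup.)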

    # Configures the size of the active set. Each extent is 4M,
    # 257 Extents ~> 1GB active set size. In case your syncer
    # runs @ 10MB/sec, all resync after a primary's crash will last
    # 1GB / ( 10MB/sec ) ~ 102 seconds ~ One Minute and 42 Seconds.
    # BTW, the hash algorithm works best if the number of al-extents
    # is prime. (To test the worst-case performance use a power of 2.)
    al-extents 257;
  }

  on snoopy2n1 {
    device     /dev/drbd0;
    disk       /dev/hda11;
    address    10.71.71.1:7788;
    meta-disk  /dev/hda13[0];

    # meta-disk is either 'internal' or '/dev/ice/name [idx]'
    #
    # You can use a single block device to store meta-data
    # of multiple DRBD's.
    # E.g. use meta-disk /dev/hde6[0]; and meta-disk /dev/hde6[1];
    # for two different resources. In this case the meta-disk
    # would need to be at least 256 MB in size.
    #
    # 'internal' means, that the last 128 MB of the lower device
    # are used to store the meta-data.
    # You must not give an index with 'internal'.
  }


  on snoopy2n2 {
    device     /dev/drbd0;
    disk       /dev/hda11;
    address   10.71.71.2:7788;
    meta-disk /dev/hda13[0];
  }
}

resource "mail" {
  protocol C;
  #  incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";
  incon-degr-cmd "exit 1";
  startup {
    wfc-timeout       30;  ## 30 seconds (0 would mean unlimited).
    degr-wfc-timeout  15;  ## 15 seconds.
  }
  disk {
    on-io-error detach;
  }
  net {
    # timeout           60;
    # connect-int       10;
    # ping-int          10;
    # max-buffers     2048;
    # max-epoch-size  2048;
  }
  syncer {
    rate   500M;
    group   1; # sync concurrently with r0
  }

  on snoopy2n1 {
    device      /dev/drbd1;
    disk        /dev/hda9;
    address     10.71.71.1:7789;
    meta-disk   /dev/hda14[1];
  }

  on snoopy2n2 {
    device     /dev/drbd1;
    disk       /dev/hda9;
    address    10.71.71.2:7789;
    meta-disk  /dev/hda14[1];
  }
}

resource "spool" {
  protocol C;
  #  incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";
  incon-degr-cmd "exit 1";
  startup {
    wfc-timeout       30;  ## 30 seconds (0 would mean unlimited).
    degr-wfc-timeout  15;  ## 15 seconds.
  }
  disk {
    on-io-error detach;
  }
  net {
    # timeout           60;
    # connect-int       10;
    # ping-int          10;
    # max-buffers     2048;
    # max-epoch-size  2048;
  }
  syncer {
    rate   500M;
    group   1; # sync concurrently with r0
  }

  on snoopy2n1 {
    device      /dev/drbd2;
    disk        /dev/hda8;
    address     10.71.71.1:7790;
    meta-disk   /dev/hda15[2];
  }

  on snoopy2n2 {
    device     /dev/drbd2;
    disk       /dev/hda8;
    address    10.71.71.2:7790;
    meta-disk  /dev/hda15[2];
  }
}

resource "www" {
  protocol C;
  #  incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";
  incon-degr-cmd "exit 1";
  startup {
    wfc-timeout       30;  ## 30 seconds (0 would mean unlimited).
    degr-wfc-timeout  15;  ## 15 seconds.
  }
  disk {
    on-io-error detach;
  }
  net {
    # timeout           60;
    # connect-int       10;
    # ping-int          10;
    # max-buffers     2048;
    # max-epoch-size  2048;
  }
  syncer {
    rate   500M;
    group   2; # sync after home, mail and spool
  }

  on snoopy2n1 {
    device      /dev/drbd3;
    disk        /dev/hda10;
    address     10.71.71.1:7791;
    meta-disk   /dev/hda16[3];
  }

  on snoopy2n2 {
    device     /dev/drbd3;
    disk       /dev/hda10;
    address    10.71.71.2:7791;
    meta-disk  /dev/hda16[3];
  }
}

resource "opt" {
  protocol C;
  #  incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";
  incon-degr-cmd "exit 1";
  startup {
    wfc-timeout       30;  ## 30 seconds (0 would mean unlimited).
    degr-wfc-timeout  15;  ## 15 seconds.
  }
  disk {
    on-io-error detach;
  }
  net {
    # timeout           60;
    # connect-int       10;
    # ping-int          10;
    # max-buffers     2048;
    # max-epoch-size  2048;
  }
  syncer {
    rate   500M;
    group   3; # last in sync
  }

  on snoopy2n1 {
    device      /dev/drbd4;
    disk        /dev/hda12;
    address     10.71.71.1:7792;
    meta-disk   /dev/hda17[4];
  }

  on snoopy2n2 {
    device     /dev/drbd4;
    disk       /dev/hda12;
    address    10.71.71.2:7792;
    meta-disk  /dev/hda17[4];
  }
}
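
Should each dedicated meta-device instead be addressed at index 0?
A minimal sketch of what I mean, for the "www" resource (assuming the
128-MB-slot-per-index reading of the manpage quoted above):

  on snoopy2n1 {
    device     /dev/drbd3;
    disk       /dev/hda10;
    address    10.71.71.1:7791;
    meta-disk  /dev/hda16[0];  # dedicated meta device, so index 0 should suffice
  }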




__________________________________________
>
>
> -----Original Message-----
> From: drbd-user-bounces at lists.linbit.com
> [mailto:drbd-user-bounces at lists.linbit.com] On behalf of Alvaro
> Pietrobono
> Sent: Wednesday, 13 April 2005 12:58
> To: drbd-user at linbit.com
> Subject: [DRBD-user] Metadevice question
>
>
> Hi,
>
> on http://wiki.linux-ha.org/DRBD_2fQuickStart07 I found:
> "Currently DRBD meta-data reserves 128MB, regardless of actual physical data
> storage. This allows for a maximum storage size of a single DRBD device of
> approximately 4TB."
>
> I need 5 DRBD devices. The first and second come up without problems.
> For the other devices I get:
>
> Starting DRBD resources:    [ d0 d1 d2 ioctl(,SET_DISK_CONFIG,) failed: Invalid argument
> Meta device too small.
>
> cmd /sbin/drbdsetup /dev/drbd3 disk /dev/hda10 /dev/hda16 3 --on-io-error=detach
> failed!
>
> where: hda10 is 10 GB and hda16 is 360 MB, and the drbd version is 0.7.10
> for kernel 2.4.27-2-386 (Debian)
>
> Where is the problem?
> Thanks in advance.
>
> A. Pietrobono
>


