[DRBD-user] c-min-rate priority

A.Rubio aurusa at etsii.upv.es
Tue May 26 15:06:57 CEST 2015

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Have you tested these values?

https://drbd.linbit.com/users-guide/s-throughput-tuning.html
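
For reference, the options that guide covers look roughly like this. A
minimal sketch, assuming DRBD 8.4 option names; the values are
illustrative, not tuned recommendations:

resource <resource> {
  net {
    max-buffers     8000;  # more in-flight buffers between the peers
    max-epoch-size  8000;  # more write requests allowed between two barriers
    sndbuf-size     0;     # 0 = let the kernel auto-tune the TCP send buffer
  }
  disk {
    al-extents      3389;  # larger activity log, fewer metadata updates
  }
}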


On 26/05/15 at 13:16, Ben RUBSON wrote:
> RAID controller is OK yes.
>
> Here is a 4-step example of the issue:
>
>
>
> ### 1 - initial state:
>
> Source :
> - sdb read MB/s      : 0
> - sdb write MB/s     : 0
> - eth1 incoming MB/s : 0
> - eth1 outgoing MB/s : 0
> Target :
> - sdb read MB/s      : 0
> - sdb write MB/s     : 0
> - eth1 incoming MB/s : 0
> - eth1 outgoing MB/s : 0
>
>
>
> ### 2 - dd if=/dev/zero of=bigfile:
>
> Source :
> - sdb read MB/s      : 0
> - sdb write MB/s     : 670
> - eth1 incoming MB/s : 1
> - eth1 outgoing MB/s : 670
> Target :
> - sdb read MB/s      : 0
> - sdb write MB/s     : 670
> - eth1 incoming MB/s : 670
> - eth1 outgoing MB/s : 1
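>
> (Worth noting from these numbers: eth1 outgoing on the source equals
> sdb write on the target (670 MB/s), i.e. every application write is
> replicated in full and in real time; the resource runs protocol C, as
> the status output in step 4 shows.)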
>
>
>
> ### 3 - disable the link between the 2 nodes:
>
> Source :
> - sdb read MB/s      : 0
> - sdb write MB/s     : 670
> - eth1 incoming MB/s : 0
> - eth1 outgoing MB/s : 0
> Target :
> - sdb read MB/s      : 0
> - sdb write MB/s     : 0
> - eth1 incoming MB/s : 0
> - eth1 outgoing MB/s : 0
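>
> (The mail does not say how the link was cut; as a sketch, one common
> way, assuming eth1 is the dedicated replication interface:
>
>     ip link set eth1 down   # step 3: cut the replication link
>     ip link set eth1 up     # step 4: restore it
>
> Any method that interrupts TCP between the two nodes gives the same
> picture.)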
>
>
>
> ### 4 - re-enable the link between the 2 nodes:
>
> Source :
> - sdb read MB/s      : ~20
> - sdb write MB/s     : ~670
> - eth1 incoming MB/s : 1
> - eth1 outgoing MB/s : 670
> Target :
> - sdb read MB/s      : 0
> - sdb write MB/s     : 670
> - eth1 incoming MB/s : 670
> - eth1 outgoing MB/s : 1
> DRBD resource :
>  1: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-----
>     ns:62950732 nr:1143320132 dw:1206271712 dr:1379744185 al:9869 bm:6499 lo:2 pe:681 ua:1 ap:0 ep:1 wo:d oos:11883000
>     [>...................] sync'ed:  6.9% (11604/12448)M
>     finish: 0:34:22 speed: 5,756 (6,568) want: 696,320 K/sec
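> (Decoding the last line: "want: 696,320 K/sec" is exactly the
> configured resync-rate of 680M, since 680 * 1024 = 696,320 KiB/s,
> while the achieved speed is only 5,756 K/sec, i.e. about 6 MB/s.)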
>
>
>
> ### values I would have expected in step 4:
>
> Source :
> - sdb read MB/s      : ~400 (because of c-min-rate 400M)
> - sdb write MB/s     : ~370
> - eth1 incoming MB/s : 1
> - eth1 outgoing MB/s : 670
> Target :
> - sdb read MB/s      : 0
> - sdb write MB/s     : 670
> - eth1 incoming MB/s : 670
> - eth1 outgoing MB/s : 1
>
> Why is the resync totally ignored, while the application (dd here in
> the example) still consumes all available IOs / bandwidth?
>
>
>
> Thank you,
>
> Ben
>
>
>
> 2015-05-25 16:50 GMT+02:00 A.Rubio <aurusa at etsii.upv.es>:
>
>     Are the cache and I/O settings on the RAID controller optimal?
>     Write-back, write-through, cache enabled, direct I/O, ...
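>
>     (Not in the original mail: for disks behind a plain HBA, the
>     drive write cache can be checked with "hdparm -W /dev/sdb";
>     hardware RAID controller caches need the vendor's own CLI
>     instead.)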
>
>     On 25/05/15 at 11:50, Ben RUBSON wrote:
>
>         The link between nodes is a 10Gb/s link.
>         The DRBD resource is a RAID-10 array which is able to resync
>         at up to 800M (as you can see I have lowered it to 680M in my
>         configuration file).
>
>         The "issue" here seems to be a prioritization "issue" between
>         application IOs and resync IOs.
>         Perhaps I miss-configured something ?
>         The goal is to have a resync rate of up to 680M, with a
>         minimum of 400M, even if there are application IOs.
>         This is in order to be sure the resync completes even if
>         there are a lot of write IOs from the application.
>
>         With my simple test below, this is not the case: dd still
>         writes at its best throughput, lowering the resync rate,
>         which cannot reach 400M at all.
>
>         Thank you!
>
>             On 25 May 2015 at 11:18, A.Rubio <aurusa at etsii.upv.es>
>             wrote:
>
>             Is the link between the nodes 1Gb/s? 10Gb/s? ...
>
>             Are the hard disks SATA 7200rpm? 10000rpm? SAS?
>             SSD? ...
>
>             400M to 680M is OK with a 10Gb/s link and SAS 15,000 rpm
>             disks, but not with less...
>
>                 On 12 Apr 2014 at 17:23, Ben RUBSON
>                 <ben.rubson at gmail.com> wrote:
>
>                 Hello,
>
>                 Let's assume the following configuration:
>                 disk {
>                     c-plan-ahead 0;
>                     resync-rate 680M;
>                     c-min-rate 400M;
>                 }
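>
>                 For comparison, a sketch of the same section with the
>                 dynamic resync controller enabled instead of the
>                 fixed rate (option names are from DRBD 8.4, values
>                 are illustrative only):
>                 disk {
>                     c-plan-ahead   20;   # >0 enables the dynamic controller
>                     c-max-rate    680M;  # upper bound for resync traffic
>                     c-min-rate    400M;  # floor when application IO is detected
>                     c-fill-target 1M;    # target amount of in-flight resync data
>                 }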
>
>                 Both nodes are UpToDate, and on the primary I have a
>                 test IO burst running, using dd.
>
>                 I then cut the replication link for a few minutes so
>                 that the secondary node falls several GB behind the
>                 primary node.
>
>                 I then re-enable the replication link.
>                 What I expect here, according to the configuration,
>                 is that the secondary node will fetch the missing GB
>                 at a throughput of at least 400 MB/s.
>                 DRBD should then prefer resync IOs over application
>                 (dd here) IOs.
>
>                 However, it does not seem to work.
>                 dd still writes at its best throughput, while reads
>                 are made from the primary disk at between 30 and 60
>                 MB/s to complete the resync.
>                 This is of course not the expected behaviour.
>
>                 Did I miss something?
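>
>                 (Side note, not in the original: the resync progress
>                 can be followed with e.g. "watch cat /proc/drbd".)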
>
>
>
>


