[DRBD-user] Reducing loadavg / iowait?

Leroy van Logchem Leroy.vanLogchem at wldelft.nl
Tue Aug 2 16:48:42 CEST 2005

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Dear mailing list,

I'm using DRBD on three heartbeat-enabled clusters serving NFS and Samba
only. Performance is quite okay, but we are seeing high load averages and
iowait on the primary node. Ideas to improve the setup are most welcome.

Primary side:
-----------------
cat /proc/loadavg
11.65 11.66 12.64 1/284 11563
( The load doesn't drop below 10 and peaks around 50 daily )

iostat -xk  output shows:
avg-cpu:  %user   %nice    %sys %iowait   %idle
           1.49    0.25    8.21   57.46   32.59
Device:    rrqm/s  wrqm/s    r/s    w/s   rsec/s  wsec/s     rkB/s    wkB/s avgrq-sz avgqu-sz  await  svctm  %util
sdc       4228.71  354.46 465.35 450.50 37592.08 6396.04  18796.04  3198.02    48.03    10.13  11.48   1.08  98.81
The sdc device is an external SCSI U160 RAID5 cabinet (64K stripe size),
about 1.4 TB net capacity, made available through DRBD.

Secondary side:
-----------------
cat /proc/loadavg
0.11 0.15 0.10 1/104 9797

iostat -xk  output shows:
avg-cpu:  %user   %nice    %sys %iowait   %idle
           2.24    0.00    2.74    0.00   95.01
Device:    rrqm/s  wrqm/s    r/s    w/s   rsec/s    wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz  await  svctm  %util
sdc          0.00 1625.74   0.00 555.45     0.00  17449.50     0.00  8724.75    31.42     0.52   0.94   0.40  22.48
(The two snapshots are a bit out of sync, but they give an idea of the
difference in load and iowait between the nodes.)

The nodes are connected through a dedicated eth1 (Intel gigabit, cross-cabled)
with the MTU set to 9000 via ifconfig; a rough sketch of the interface setup
follows.
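
On each node the interface is brought up more or less like this (the address
is the one from the DRBD config below; the netmask is an assumption on my
part):

  # dedicated replication link with jumbo frames (filera1; .4 on filera2)
  ifconfig eth1 192.168.0.3 netmask 255.255.255.0 mtu 9000 up
  # kept across reboots on RHEL with MTU=9000 in
  # /etc/sysconfig/network-scripts/ifcfg-eth1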

Filesystem:
-----------------
# tune2fs -l /dev/drbd0
tune2fs 1.35 (28-Feb-2004)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          428f38ae-7877-49bc-b5cd-0f94c45d4b74
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal resize_inode filetype needs_recovery sparse_super large_file
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              29108032
Block count:              363470617
Reserved block count:     7269412
Free blocks:              82397776
Free inodes:              28987021
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1024
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         2624
Inode blocks per group:   82
Filesystem created:       Mon Apr 25 09:24:51 2005
Last mount time:          Fri Jul 29 09:06:42 2005
Last write time:          Fri Jul 29 09:06:42 2005
Mount count:              60
Maximum mount count:      28
Last checked:             Mon Apr 25 09:24:51 2005
Check interval:           15552000 (6 months)
Next check after:         Sat Oct 22 09:24:51 2005
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      b5c73717-8faf-4394-a824-8d401cef9df4
Journal backup:           inode blocks
(The filesystem was created with a reduced number of inodes via the
mkfs.ext3 -i 51200 option; a sketch follows.)
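
In other words, the filesystem was made roughly like this (device name as used
elsewhere in this mail; all other mkfs options left at their defaults):

  # one inode per 51200 bytes of space instead of the ext3 default,
  # which cuts the inode count and metadata overhead considerably
  mkfs.ext3 -i 51200 /dev/drbd0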

Blockdev tuned:
-----------------
blockdev --report
RO    RA   SSZ   BSZ   StartSec         Size   Device
rw  8192   512  4096          0  -1367793664   /dev/sdc
rw  8192   512  4096         63  -1387202359   /dev/sdc1
rw  8192   512  4096  -1387202296    19406520   /dev/sdc2
(The negative values are just blockdev's 32-bit sector counters wrapping on a
volume this large.)
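
The tuning here is the readahead (the RA column), raised from the default 256
sectors to 8192 with something like:

  # readahead value is in 512-byte sectors
  blockdev --setra 8192 /dev/sdc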

DRBD details:
-----------------
resource drbd0 {
    protocol               C;
    incon-degr-cmd       "logger '!DRBD! pri on incon-degr'";
    on filera2.wldelft.nl {
        device           /dev/drbd0;
        disk             /dev/sdc1;
        address          192.168.0.4:7988;
        meta-disk        /dev/sdc2 [0];
    }
    on filera1.wldelft.nl {
        device           /dev/drbd0;
        disk             /dev/sdc1;
        address          192.168.0.3:7988;
        meta-disk        /dev/sdc2 [0];
    }
    disk {
        on-io-error      panic;
    }
    syncer {
        rate             99M;
        al-extents       521;
    }
    startup {
        degr-wfc-timeout   0;
    }
}
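
After editing drbd.conf, changed settings can be pushed to the running
resource with something like this (0.7 syntax, from memory, so double-check
the man page):

  # compare drbd.conf with the running configuration and apply the differences
  drbdadm adjust drbd0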

Kernel tunables:
-----------------
vm.dirty_expire_centisecs = 300
vm.dirty_writeback_centisecs = 250
vm.dirty_ratio = 20
vm.vfs_cache_pressure = 10000
vm.min_free_kbytes = 51200
kernel.panic = 1
kernel.panic_on_oops = 1
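
These normally live in /etc/sysctl.conf and get loaded with sysctl -p; a
single value can also be changed on the fly while experimenting, e.g.:

  # example: change the dirty-page expiry without editing sysctl.conf
  sysctl -w vm.dirty_expire_centisecs=300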

Kernel:
-----------------
Red Hat Enterprise Linux ES 4 Update 1, kernel 2.6.9-11.ELsmp, with the
drbd 0.7.11 module

/proc/drbd:
-----------------
 0: cs:Connected st:Primary/Secondary ld:Consistent
    ns:587626784 nr:146624628 dw:732214976 dr:155019805 al:619991 bm:2765 lo:129 pe:47 ua:0 ap:129

Thanks,
Leroy



