Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hello DRBD users,

We were running DRBD 8.2.6 (internal metadata) and noticed a disk performance slowdown on the DRBD partitions after upgrading to DRBD 8.3.1 or 8.3.6 (external metadata on the same disk). We do not understand why DRBD performance has declined between the older and the newer release, nor why there is such a gap between the raw partition and the DRBD partition! We tried increasing the activity log size, without any improvement. We also tried disabling the cache, which gave somewhat better performance... If someone has an idea, please read on.

Regards

Resources and programs used:
- tstDrbd: a small program that writes lines to a file (a rough sketch is given just below).
- hdparm
- iostat
- iftop
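For reference, here is a minimal sketch of what such a line-writing benchmark could look like. This is an illustrative reconstruction, not the actual tstDrbd source: only the -f/-n options are taken from the invocations shown below; the payload, the buffered stdio writes and the absence of fsync() are assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(int argc, char **argv)
{
    const char *path = NULL;
    long nlines = 0;

    /* Very small parser for the "-f <file> -n <lines>" options seen in the tests. */
    for (int i = 1; i + 1 < argc; i++) {
        if (strcmp(argv[i], "-f") == 0)
            path = argv[++i];
        else if (strcmp(argv[i], "-n") == 0)
            nlines = atol(argv[++i]);
    }
    if (path == NULL || nlines <= 0) {
        fprintf(stderr, "usage: %s -f <file> -n <lines>\n", argv[0]);
        return 1;
    }

    FILE *fp = fopen(path, "w");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    time_t start = time(NULL);
    for (long i = 0; i < nlines; i++)
        fprintf(fp, "line %ld: benchmark payload\n", i);   /* buffered writes, no fsync() */
    fclose(fp);
    long elapsed = (long)(time(NULL) - start);

    printf("Time to write %ld lines (hh:mm:ss) : %02ld:%02ld:%02ld.\n",
           nlines, elapsed / 3600, (elapsed / 60) % 60, elapsed % 60);
    return 0;
}

If tstDrbd really works this way, the measured time mostly reflects how quickly dirty pages can be flushed to the underlying block device.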
We ran the following tests:

DRBD 8.2.6 (protocol C), kernel 2.6.17, CPU: VIA Nehemiah / 1002.462 MHz / 2006.94 bogomips
Network interface used by DRBD: 10/100 Mb/s

> hdparm -ctT /dev/hda
/dev/hda:
 IO_support   =  1 (32-bit)
 Timing cached reads:   258 MB in 2.01 seconds = 128.19 MB/sec
 Timing buffered disk reads:   72 MB in 3.07 seconds = 23.46 MB/sec

-- TEST with 5000000 lines --
/root partition: tstDrbd -f /root/tstDrbd.txt -n 5000000
  Time to write 5000000 lines (hh:mm:ss): 00:00:18.  <=============\
/drbd partition: tstDrbd -f /drbd/tstDrbd.txt -n 5000000
  Time to write 5000000 lines (hh:mm:ss): 00:00:20.  <=============/  ~ same performance

-- TEST with 7000000 lines --
/root partition: tstDrbd -f /root/tstDrbd.txt -n 7000000
  Time to write 7000000 lines (hh:mm:ss): 00:00:26.  <=============\
/drbd partition: tstDrbd -f /drbd/tstDrbd.txt -n 7000000
  Time to write 7000000 lines (hh:mm:ss): 00:00:40.  <=============/  ~ DRBD slowness

If we ran the tests on both partitions at the same time, the write time of the /root test increased as well!

While the write program was running, we used iostat to analyze the I/O load. Disk I/O stats:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          25,87    0,00   74,13    0,00    0,00    0,00

Device:  rrqm/s  wrqm/s    r/s     w/s  rsec/s    wsec/s  avgrq-sz  avgqu-sz    await   svctm  %util
hda        0,00  548,26   0,00   58,71    0,00  39761,19    677,29      3,05    54,27   11,05  64,88
drbd0      0,00    0,00   0,00  605,47    0,00  38276,62     63,22   1827,88  1649,08    1,64  99,50

The drbd0 device is at ~100% utilization while the hard disk is only at ~65%.
During this test, the network bandwidth used to replicate the data was about 90% of the 100 Mb/s link for the whole run (iftop on the DRBD interface).

DRBD setup:

resource drbd {
  protocol C;
  net {
    shared-secret "NUMLOG";
    ....
  }
  syncer {
    rate       40M;
    al-extents 257;
  }
  disk {
    on-io-error call-local-io-error;
  }
  on sv1 {
    device    /dev/drbd0;
    disk      /dev/hda8;
    address   192.168.20.1:7789;
    meta-disk internal;
  }
  on sv2 {
    device    /dev/drbd0;
    disk      /dev/hda7;
    address   192.168.20.2:7789;
    meta-disk internal;
  }
}

---------------------------------------------------------------------------------------

DRBD 8.3.6 or 8.3.1 (protocol B), kernel 2.6.29.1, CPU: Intel(R) Atom(TM) CPU 230 @ 1.60 GHz / 3192.12 bogomips
Network interface used by DRBD: 10/100 Mb/s

> hdparm -ctT /dev/hda
/dev/hda:
 IO_support   =  0 (default 16-bit)
 Timing cached reads:   1264 MB in 2.00 seconds = 632.16 MB/sec
 Timing buffered disk reads:   84 MB in 3.01 seconds = 27.89 MB/sec

-- TEST with 5000000 lines --
/root partition: tstDrbd -f /root/tstDrbd.txt -n 5000000
  Time to write 5000000 lines (hh:mm:ss): 00:00:19.
/drbd partition: tstDrbd -f /drbd/tstDrbd.txt -n 5000000
  Time to write 5000000 lines (hh:mm:ss): 00:00:33.  <==============  ~ DRBD slowness

-- TEST with 7000000 lines --
/root partition: tstDrbd -f /root/tstDrbd.txt -n 7000000
  Time to write 7000000 lines (hh:mm:ss): 00:00:25.
/drbd partition: tstDrbd -f /drbd/tstDrbd.txt -n 7000000
  Time to write 7000000 lines (hh:mm:ss): 00:01:07.  <==============  ~ DRBD slowness

While the write program was running, we used iostat to analyze the I/O load. Disk I/O stats:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.49    0.00    2.99   47.26    0.00   48.26

Device:  rrqm/s  wrqm/s    r/s     w/s  rsec/s   wsec/s  avgrq-sz  avgqu-sz   await   svctm   %util
hda7       0.00  123.50   0.00   68.50    0.00  8428.00    123.04      0.49    6.57    2.80   19.20
hda9       0.00    0.00   0.00    1.00    0.00     2.00      2.00      0.15  152.00  152.00   15.20   /dev/hda9  Metadata
drbd0      0.00    0.00   0.00  192.50    0.00  8656.00     44.97     23.10  108.75    5.21  100.20   /dev/drbd0

The drbd0 device is at ~100% utilization while the hard disk is only at ~20%.
During this test, the network bandwidth used to replicate the data was only 20 to 40% of the 100 Mb/s link (iftop on the DRBD interface).

DRBD setup:

resource drbd {
  protocol B;
  net {
    shared-secret "NUMLOG";
    ....
  }
  syncer {
    rate       40M;
    al-extents 257;
  }
  disk {
    on-io-error call-local-io-error;
    no-disk-flushes;
  }
  on sv1 {
    device    /dev/drbd0;
    disk      /dev/hda7;
    address   192.168.20.1:7789;
    meta-disk /dev/hda9[0];
  }
  on sv2 {
    device    /dev/drbd0;
    disk      /dev/hda7;
    address   192.168.20.2:7789;
    meta-disk /dev/hda9[0];
  }
}

--
Fabrice LE CREURER
Développement / Support technique EDTI FT-MASTER
Developer engineer / Helpdesk FT-MASTER product
NUMLOG - Internet : http://www.numlog.fr
Tel : (+33) 1 30 79 16 16 - Fax : (+33) 1 30 81 92 86