Hi,

I am using a PCIe flash card and a 10Gb InfiniBand card to test DRBD 9. The test tool is fio. I ran 4k randread, 4k read, 512k read, 4k randwrite, 4k write, and 512k write against the raw flash device, the DRBD device connected (in sync), and the DRBD device in StandAlone state.

1. read (bandwidth/IOPS)

                  /dev/dfa     /dev/drbd0   /dev/drbd0 standalone
    4k randread   630m/158k    445m/111k    428m/107k
    4k read       804m/201k    441m/110k    423m/105k
    512k read     2282m        2457m        2279m

2. write (bandwidth/IOPS)

                  /dev/dfa     /dev/drbd0   /dev/drbd0 standalone
    4k randwrite  1424m/364k   93m/23k      328m/82k
    4k write      1455m/372k   286m/71k     369m/92k
    512k write    1785m        860m         1799m

Write performance seems to have dropped a lot: bandwidth for 4k randwrite is only 93M. I read the user guide and tried the recommended parameters, but it did not help. Can anyone give some advice?

My fio command:

    fio --filename=/dev/drbd0 --direct=1 --rw=randwrite --randrepeat=0 \
        --ioengine=libaio --group_reporting --bs=4k --iodepth=4 \
        --numjobs=16 --runtime=20 --name=fiotest1 --size=20g

My r0.res config:

    [root@app2 drbd.d]# vi r0.res
    resource r0 {
        disk {
            on-io-error detach;
            disk-flushes no;
            disk-barrier no;
            c-plan-ahead 40;
            c-fill-target 24M;
            c-min-rate 10M;
            c-max-rate 1000M;
            al-extents 6007;
        }
        net {
            protocol A;
            max-buffers 20000;
            max-epoch-size 20000;
            sndbuf-size 0;
            rcvbuf-size 0;
        }
        device /dev/drbd0;
        disk /dev/dfa;
        meta-disk internal;
        on app2 {
            address 10.1.10.104:7789;
        }
        on app3 {
            address 10.1.10.105:7789;
        }
    }

Thanks!
Richard
ligong_qiu@sina.com
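For anyone wanting to reproduce the table above, here is a small dry-run sketch that prints one fio command per test case. It only varies --rw and --bs across the six patterns from the tables; all other flags are copied from the fio command in the post, and the device path is whichever you pass in (pipe the output to sh to actually run the tests):

```shell
#!/bin/sh
# Print the fio command for each (rw, bs) pair in the benchmark matrix.
# Usage: gen_fio_cmds <device>   e.g. gen_fio_cmds /dev/drbd0
gen_fio_cmds() {
    dev=$1
    # rw:bs pairs matching the read and write tables above
    for p in randread:4k read:4k read:512k randwrite:4k write:4k write:512k; do
        rw=${p%:*}
        bs=${p#*:}
        printf 'fio --filename=%s --direct=1 --rw=%s --randrepeat=0 --ioengine=libaio --group_reporting --bs=%s --iodepth=4 --numjobs=16 --runtime=20 --name=fiotest1 --size=20g\n' \
            "$dev" "$rw" "$bs"
    done
}

gen_fio_cmds /dev/drbd0
```

Running the same matrix against /dev/dfa and against /dev/drbd0 in each connection state gives all three columns of the tables.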