Can you post the contents of your /proc/drbd? You might also want to add
"no-disk-drain", and see if that helps.

Gordan

Andrew (Anything) wrote:
> Hi All
>
> I've recently started trying to use DRBD + OCFS2 in a dual-node setup and
> have had issues with very slow write performance.
>
> Adding no-disk-barrier, no-disk-flushes and no-md-flushes seems to work for
> hundreds of different Google results, and it sounds like that's why my DRBD
> disk is so slow.
>
> So I fired up a test on 2 virtual machines (on physically different
> machines) to see the sort of change this might make. These two virtual
> machines are far from speedy, and are only on 100 Mbit (direct
> interconnect), but I expected to see at least some sort of improvement.
>
> All I've done is add the no-flush lines to the disk section of r0 on both
> servers:
> no-disk-barrier;
> no-disk-flushes;
> no-md-flushes;
>
> They're currently running v8.2.7, but I checked v8.3.0 as well as v8.0.14
> (on an older kernel) just in case, all with pretty much the same results.
>
> I'm hoping someone can see clear as day what I've missed; I've included
> some of my benchmark results.
>
> Thanks in advance.
> Andy..
> ##### raw backing disk:
> # dd if=/dev/zero of=/dev/sdb bs=512k count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 524288000 bytes (524 MB) copied, 15.0304 seconds, 34.9 MB/s
>
> # dd if=/dev/zero of=/dev/sdb bs=512 count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 512000 bytes (512 kB) copied, 0.351227 seconds, 1.5 MB/s
>
> ##### without flushes:
> # single node
> # dd if=/dev/zero of=/dev/drbd0 bs=512k count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 524288000 bytes (524 MB) copied, 15.0428 seconds, 34.9 MB/s
>
> # dd if=/dev/zero of=/dev/drbd0 bs=512 count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 512000 bytes (512 kB) copied, 0.367788 seconds, 1.4 MB/s
>
> # dual node
> # dd if=/dev/zero of=/dev/drbd0 bs=512k count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 524288000 bytes (524 MB) copied, 51.0372 seconds, 10.3 MB/s
>
> # dd if=/dev/zero of=/dev/drbd0 bs=512 count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 512000 bytes (512 kB) copied, 2.03025 seconds, 252 kB/s
>
> ##### with flushes:
> # single node
> # dd if=/dev/zero of=/dev/drbd0 bs=512k count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 524288000 bytes (524 MB) copied, 17.5014 seconds, 30.0 MB/s
>
> # dd if=/dev/zero of=/dev/drbd0 bs=512 count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 512000 bytes (512 kB) copied, 0.420332 seconds, 1.2 MB/s
>
> # dual node
> # dd if=/dev/zero of=/dev/drbd0 bs=512k count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 524288000 bytes (524 MB) copied, 49.6752 seconds, 10.6 MB/s
>
> # dd if=/dev/zero of=/dev/drbd0 bs=512 count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 512000 bytes (512 kB) copied, 1.99413 seconds, 257 kB/s
>
> # drbdsetup /dev/drbd0 show
> disk {
>     size            0s _is_default; # bytes
>     on-io-error     detach;
>     fencing         dont-care _is_default;
>     no-disk-barrier ;
>     no-disk-flushes ;
>     no-md-flushes   ;
>     max-bio-bvecs   0 _is_default;
> }
> ..
> al-extents 1201;
> ..
> sndbuf-size 0; # bytes (larger buffers had slower results for
> small files test)
>
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
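For anyone following along: the "no-disk-drain" option Gordan suggests goes in the same disk section as the three options Andy already set. A minimal sketch of what that disk section could look like in a DRBD 8.2/8.3-era /etc/drbd.conf (the resource name r0 is taken from the thread; the exact surrounding resource stanza, device paths, etc. are left out and would of course match your own config):

```
# /etc/drbd.conf (fragment) -- resource r0, disk section only
resource r0 {
    disk {
        on-io-error     detach;
        no-disk-barrier;   # already set in the thread
        no-disk-flushes;   # already set in the thread
        no-md-flushes;     # already set in the thread
        no-disk-drain;     # Gordan's suggestion: also disable the drain
                           # write-ordering method
    }
}
```

After editing the file on both nodes, "drbdadm adjust r0" applies the change to the running resource, and "cat /proc/drbd" shows the connection state and counters Gordan asked for.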