Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On Thu, Jul 2, 2009 at 3:49 PM, Florian Haas <florian.haas at linbit.com> wrote:

>>> Hi Florian,
>>> I did the tests again using just LVM:
>
> What do you mean, you changed back to locking_type = 1?

I was too lazy for that :-) I stopped cman, and I used the -cn option
with vgcreate. This should be enough.

>> just for the record, i do not think that testing disk speed with a
>> ~100MB file when your controller has 128MB cache is a good indicator.
>
> Nonetheless 50 MB/s write speed per disk is a reasonable expectation on
> current hardware, and 14.2 MB/s would be way too slow. Still, it's
> probably wise to rerun the test writing one big chunk that fits into
> memory (RAM), but not into the on-controller cache. Something like
> bs=1G count=1 (assuming you have more than 2G or so RAM).

I'm not testing the disk/controller speed as an absolute value, and I
don't want to compare against other disks/controllers. I'm trying to
measure the performance at each level: disk, DRBD, LVM, and filesystem.
In that kind of comparison the controller cache has no influence, since
it doesn't change between tests.

PS. Just for the record, the cache is split 50% for writing and 50% for
reading, so in my test only 64MB of cache were involved.

>> *maybe* this is the issue. drbd waits for an ack from the disk subsystem
>> requesting the controller to write the data to the disks, whereas (c)lvm
>> is happy as soon as the data hits the controller cache.
>>
>> of course, i might be wrong ;)
>
> You are, because the original intention was to use DRBD together with
> CLVM, which requires running DRBD in dual-Primary mode, which in turn
> necessitates protocol C.

...and I would also add that the test over DRBD was quite good (I'm
sure it can be tuned to achieve even better performance):

# sync && dd if=/dev/zero of=/dev/drbd1 bs=1M count=100 oflag=direct
104857600 bytes (105 MB) copied, 1,7703 seconds, 59,2 MB/s

--
Federico.
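For anyone wanting to reproduce the non-clustered VG setup described
above, a minimal sketch; the PV path and VG name here are made-up
examples:

# service cman stop
# pvcreate /dev/sdb1
# vgcreate -cn vg_test /dev/sdb1

The -cn flag marks the volume group as non-clustered, so it can be
activated without cluster locking (no clvmd/cman required).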
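The rerun Florian suggests would look something like this, on a scratch
device (the device name is an example; note that bs=1G makes dd allocate
a 1GB buffer, hence the RAM caveat):

# sync && dd if=/dev/zero of=/dev/sdb bs=1G count=1 oflag=direct

Since 1GB is far larger than the 128MB controller cache, the reported
rate reflects sustained throughput rather than cache absorption.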
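To illustrate Florian's point about dual-Primary and protocol C, a rough
sketch of the relevant drbd.conf pieces (DRBD 8.x syntax; the hostnames,
devices, and addresses are invented):

resource r0 {
  protocol C;              # synchronous replication; required for dual-Primary
  net {
    allow-two-primaries;   # lets both nodes be Primary, as CLVM on DRBD needs
  }
  on node-a {
    device    /dev/drbd1;
    disk      /dev/sdb1;
    address   192.168.1.1:7789;
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd1;
    disk      /dev/sdb1;
    address   192.168.1.2:7789;
    meta-disk internal;
  }
}

Protocol C only acknowledges a write once it has reached stable storage
on both nodes, which is exactly the behaviour the earlier quote guessed
might explain the DRBD-vs-LVM difference.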