I didn't set this cluster up, but the configuration was documented by the
previous admin here:
http://crunchtools.com/kvm-cluster-with-drbd-gfs2/
It has gotten to the point where provisioning space for a new VM is a
day-long process, with a load level that brings down the server. I ran
these tests this morning. In case the formatting gets mangled in transit:
the non-DRBD portion of the RAID shows 214.79 MB/s writes, while the DRBD
portion starts at 4.65 MB/s and drops off to around 1.5 MB/s average.
Iostat test on the non-DRBD portion of the RAID:

Device:      tps      MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
cciss/c0d0   622.61   0.03       214.79     0        427
drbd0        329.65   0.03       1.26       0        2
Iostat test on the DRBD portion of the RAID:

Device:      tps      MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
cciss/c0d0   52.26    0.02       4.65       0        9
drbd0        1182.41  0.02       4.65       0        9

Device:      tps      MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
cciss/c0d0   134.83   0.01       2.15       0        4
drbd0        449.75   0.01       1.75       0        3

Device:      tps      MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
cciss/c0d0   105.00   0.00       1.60       0        3
drbd0        407.00   0.00       1.59       0        3

Device:      tps      MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
cciss/c0d0   90.95    0.00       1.54       0        3
drbd0        394.97   0.00       1.54       0        3

Device:      tps      MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
cciss/c0d0   114.93   0.00       1.68       0        3
drbd0        430.35   0.00       1.68       0        3
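For anyone who wants to reproduce the numbers above, here is a minimal
sketch of the kind of streaming-write test I mean (the file path and size
are assumptions; point TESTFILE at a directory on the volume you want to
measure, e.g. a GFS2 mount backed by drbd0 versus a mount on the plain
cciss device):

```shell
# File path and size are assumptions -- adjust to your mounts.
TESTFILE=/tmp/drbd_write_test.img

# In another terminal, sample throughput in MB/s every 2 seconds:
#   iostat -m 2 cciss/c0d0 drbd0

# Stream 64 MB of zeros; conv=fsync forces the data to disk before dd
# exits, so the throughput dd reports is real writes, not page-cache fills.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync

ls -l "$TESTFILE"
```

Comparing dd's own throughput figure against iostat's MB_wrtn/s for drbd0
and the underlying cciss device makes the replication overhead visible;
remember to delete the test file afterwards.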
Clustering was broken after an upgrade before I got here. I upgraded
both systems to the latest RHEL 5 about a month ago. DRBD was compiled
locally and is version drbd-8.2.6.
Thank You
Ken