As the thread "Hardware-recommendation needed" drifted slightly away from my problem (although it is an interesting discussion as well), I'll ask for input again in this thread.

Setup: 2x Xeon E3-1260L, 16 GB memory, 1 SSD as boot device, 1 SATA drive, direct connection via a 1 GBit e1000 interface, KVM installed.

As space is limited, we only have room for that single SATA drive - so RAID 5 or RAID 10 is not an option.

My drbd.conf looks like this:

  global {
    usage-count no;
  }

  common {
    protocol C;
    syncer {
      rate 120M;
      al-extents 3389;
    }
    startup {
      wfc-timeout 15;
      degr-wfc-timeout 60;
      become-primary-on both;
    }
    net {
      cram-hmac-alg sha1;
      shared-secret "secret";
      allow-two-primaries;
      after-sb-0pri discard-zero-changes;
      after-sb-1pri discard-secondary;
      after-sb-2pri disconnect;
      sndbuf-size 512k;
    }
  }

  resource r0 {
    on vm01 {
      device    /dev/drbd0;
      disk      /dev/sdb3;
      address   10.254.1.101:7780;
      meta-disk /dev/sda3[0];
    }
    on vm02 {
      device    /dev/drbd0;
      disk      /dev/sdb3;
      address   10.254.1.102:7780;
      meta-disk /dev/sda3[0];
    }
  }

  resource r1 {
    on vm01 {
      device    /dev/drbd1;
      disk      /dev/sdb1;
      address   10.254.1.101:7781;
      meta-disk internal;
    }
    on vm02 {
      device    /dev/drbd1;
      disk      /dev/sdb1;
      address   10.254.1.102:7781;
      meta-disk internal;
    }
  }

So: one DRBD device with internal metadata and one with external metadata on the SSD.

I did some benchmarking as described in http://www.drbd.org/users-guide/ch-benchmark.html

Throughput barely changes: 85 MB/s on the raw device vs. 83 MB/s on the DRBD device (average of 5 runs of "dd if=/dev/zero of=/dev/drbd1 bs=512M count=1 oflag=direct").

But latency suffers badly: writing the 1000 512-byte blocks took 0.05397 s on the raw device and 12.757 s on the DRBD device (average of 5 runs of "dd if=/dev/zero of=/dev/drbd1 bs=512 count=1000 oflag=direct").

There were no significant differences between internal metadata on the SATA disk and external metadata on the boot SSD.
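For anyone who wants to repeat the two dd tests, here is a minimal sketch of them as a small shell helper. `run_dd_bench` is a hypothetical name of my own, and the optional size/count overrides are my addition (the defaults match the commands quoted above); run it once against /dev/drbd1 and once against the raw backing device to compare. Note that it overwrites the start of whatever you point it at.

```shell
#!/bin/sh
# Sketch of the throughput and latency tests from the DRBD users' guide
# benchmark chapter. WARNING: destructive - overwrites the target device.
run_dd_bench() {
    dev=$1
    if [ -z "$dev" ]; then
        echo "usage: run_dd_bench <device> [seq_bs] [lat_count]" >&2
        return 1
    fi
    seq_bs=${2:-512M}     # size of the single large sequential write
    lat_count=${3:-1000}  # number of 512 B synchronous writes

    # Throughput: one large sequential write, bypassing the page cache.
    dd if=/dev/zero of="$dev" bs="$seq_bs" count=1 oflag=direct

    # Latency: many tiny synchronous writes. With protocol C, every
    # 512 B block waits for the peer's ACK, so network round-trip time
    # dominates - which is why this test diverges so much from raw disk.
    dd if=/dev/zero of="$dev" bs=512 count="$lat_count" oflag=direct
}
```

Example: "run_dd_bench /dev/drbd1" reproduces the numbers above; "run_dd_bench /dev/sdb1" gives the raw-device baseline.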
Then I created a KVM guest (Debian Squeeze) on one of the machines and attached three 32 GB virtual disks via virtio: one on local LVM, one on LVM on DRBD with external metadata, and one on LVM on DRBD with internal metadata. Each virtual disk was partitioned with one big partition and formatted with ext3. Formatting took 19 s for local LVM, 95 s for DRBD with external metadata and 133 s for DRBD with internal metadata...

So - is this as expected with this setup? Where do the experts think the bottleneck is: network or disk?

regards
Lukas

--
--------------------------
software security networks
Lukas Gradl <proxmox#ssn.at>
Eduard-Bodem-Gasse 6
A - 6020 Innsbruck
Tel: +43-512-214040-0
Fax: +43-512-214040-21
--------------------------