In preparation to begin testing DRBD, I've found my superiors have new requirements. The desired setup is a 3-node cluster, replicating an LVM volume between all 3 nodes. The volume will contain QCOW2 images for KVM, and to allow concurrent access I've been looking at using GFS2.

The potential complication is that the 3 servers are not identical. Two are identical Dell 2950s; the other is a new Dell R510. The 2950s run 6 x 7200 RPM SATA drives in RAID 6, and the R510 has its system on SAS RAID 1 plus 6 x 10000 RPM SAS drives in RAID 6. Is it correct that, with DRBD, the mismatched disk I/O performance would be a problem? How much more difficult is a 3-node cluster than a 2-node one?

Also, if I'm able to get an iSCSI device, what role would DRBD play in a model of 3 servers with shared storage? I assume that to allow concurrent access to the same iSCSI space (for live migration), I would still have to use a cluster file system. Would DRBD then be used to replicate the system partitions, or with KVM is it only useful for replicating the VM store when not using shared storage?

With or without a shared storage device (most likely without), how would failover work for the virtual servers? Is that where Pacemaker comes in? Basically, a way to trigger the still-alive servers to bring up the VMs that were running on the failed server.

Apologies for the barrage of questions. This is all still very new to me, and I want to make sure I have a good idea of what I'm getting into before I make promises to my superiors.

Thanks,
- Trey
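From what I've read so far, a DRBD 8.x resource connects exactly two nodes, so a third node would be added by stacking a second resource on top of the first. This is only a sketch of what I think the configuration would look like; the hostnames, addresses, and the /dev/vg0/vmstore volume are placeholders, not my actual setup:

```
# Sketch, assuming DRBD 8.x stacked resources for a 3rd node.
# All names and addresses below are placeholders.
resource vmstore-lower {
    on node-a {
        device    /dev/drbd0;
        disk      /dev/vg0/vmstore;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on node-b {
        device    /dev/drbd0;
        disk      /dev/vg0/vmstore;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
resource vmstore-upper {
    # The upper resource runs on whichever lower node is currently primary,
    # reached via a floating IP, and replicates to the third node.
    stacked-on-top-of vmstore-lower {
        device    /dev/drbd10;
        address   10.0.0.9:7789;
    }
    on node-c {
        device    /dev/drbd10;
        disk      /dev/vg0/vmstore;
        address   10.0.0.3:7789;
        meta-disk internal;
    }
}
```

If that's roughly right, it would also explain part of my 3-node question: the third node only ever replicates from the current primary of the lower pair, rather than all three nodes being symmetric peers.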
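For the failover question, the Pacemaker examples I've been reading look roughly like the following crm shell sketch: DRBD as a master/slave resource, the VM as a VirtualDomain resource, and constraints tying the VM to wherever DRBD is primary. Resource names and the libvirt XML path are placeholders I made up for illustration:

```
# Sketch only; names and paths are placeholders.
# DRBD resource managed by Pacemaker as master/slave.
primitive p_drbd_vmstore ocf:linbit:drbd \
        params drbd_resource="vmstore" \
        op monitor interval="29s" role="Master" \
        op monitor interval="31s" role="Slave"
ms ms_drbd_vmstore p_drbd_vmstore \
        meta master-max="1" master-node-max="1" \
        clone-max="2" clone-node-max="1" notify="true"
# The VM as a cluster resource; on node failure, Pacemaker
# restarts it on a surviving node.
primitive p_vm_web ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/web.xml" \
        op monitor interval="30s"
# Run the VM only where DRBD is primary, and only after promotion.
colocation c_vm_on_drbd inf: p_vm_web ms_drbd_vmstore:Master
order o_drbd_before_vm inf: ms_drbd_vmstore:promote p_vm_web:start
```

Is that the general shape of it, i.e. Pacemaker (not DRBD itself) is what detects the dead node and brings the VMs up elsewhere?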