Note: "permalinks" may not be as permanent as we would like;
direct links to old sources may well be a few messages off.
On Fri, Nov 18, 2011 at 1:55 AM, Florian Haas <florian at hastexo.com> wrote:
> Hi Trey,
>
> On 11/18/11 07:22, Trey Dockendorf wrote:
> > In preparation to begin testing DRBD, I've found my superiors have new
> > requirements. The desired setup is 3 nodes in the cluster, replicating
> > an LVM volume between all 3. The volume will contain QCOW2 images for
> > KVM, and to have concurrent access I've been looking at using GFS2.
>
> OK, so if it's 3 nodes that a hypervisor is meant to run on, then that
> rules out putting DRBD-based storage and the hypervisors on the same
> boxes. (It certainly doesn't rule out the use of DRBD altogether; see
> below.)
>
> > The potential complication is that all 3 servers are not identical. Two
> > are identical Dell 2950s; the other is a new Dell R510. The 2950s run
> > 6 x SATA 7200 RPM disks in RAID 6, and the R510 has its system on
> > RAID 1 SAS and 6 x SAS 10000 RPM disks in RAID 6. Is it correct that
> > with DRBD, the mismatched disk I/O performance would be a problem? How
> > much more difficult is a 3-node cluster than a 2-node one?
>
> For multiple-node write access, it's impossible. What DRBD currently
> supports is adding a third node in a stacked configuration, but that is
> only useful for backup and DR purposes. You can't really think of the
> third node as a fully-fledged member of the virtualization cluster.
>
> > Also, if I'm able to get an iSCSI device, what role would DRBD play in
> > the model of 3 servers with shared storage? I assume that to allow
> > concurrent access to the same iSCSI space, I would still have to use a
> > cluster file system (for live migration). Would DRBD then be used to
> > replicate the system partitions, or with KVM is it only useful to
> > replicate the VM store when not using shared storage?
>
> You can put your iSCSI on DRBD-backed storage, and then use that iSCSI
> target as centralized storage for your hypervisors. You may not even
> need to set up cLVM.
> That's what this talk explains:
>
> http://www.hastexo.com/content/roll-your-own-cloud
>
> (Free-of-charge registration required, or just use your Google Profile
> or WordPress account, or anything else that supports OpenID, to log in.)
>
> You can also take a look at this Tech Guide which I wrote while working
> at Linbit, which is still hosted on their site:
>
> http://www.linbit.com/en/education/tech-guides/highly-available-virtualization-with-kvm-iscsi-pacemaker/
>
> > With or without a shared storage device (most likely without), how
> > would failover work for the virtual servers? Is that where Pacemaker
> > comes in? Basically a way to trigger the still-alive servers to bring
> > up the VMs that were running on the failed server.
>
> Yes, watch the talk. :)
>
> Cheers,
> Florian
>
> --
> Need help with High Availability?
> http://www.hastexo.com/now
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user

After reading the documents linked previously, most of the DRBD user guide, and documents related to Pacemaker, I still have a few conceptual questions.

First, I've seen lots of mention of the requirement to sync the XML data for each VM. Is this necessary, or will KVM or Pacemaker be handling it? Right now I typically store all VM disks at /vmstore, which has the same security context as /var/lib/libvirt/images. Do I also need to replicate the directory containing the domain XML files?

The other question is: do Pacemaker and the other clustering services live on the replicated servers or on external systems? As a follow-up, if they live on the replicated servers, do I have to take measures to ensure those services stay in sync?

Right now I have a good idea of how to structure all this; there are just a few low-level concepts I have yet to fully understand.
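[Editor's note: regarding the domain-XML question above, one common pattern is to keep the XML on the replicated volume so every node can define the domain, or to let Pacemaker's VirtualDomain resource agent read it from there directly. A sketch, not an authoritative recipe; the VM name, paths, and the choice of the crm shell are illustrative.]

```shell
# Export the domain definition onto the replicated volume so every node
# can define and start it (hypothetical VM name and paths):
virsh dumpxml vm1 > /vmstore/xml/vm1.xml

# On any other node, (re)define the domain from the shared copy:
virsh define /vmstore/xml/vm1.xml

# Alternatively, have Pacemaker manage the VM; ocf:heartbeat:VirtualDomain
# reads the XML itself via its 'config' parameter, so no virsh define is
# needed on the peers:
crm configure primitive vm1 ocf:heartbeat:VirtualDomain \
    params config="/vmstore/xml/vm1.xml" migration_transport=ssh \
    meta allow-migrate=true
```

With allow-migrate=true, Pacemaker attempts a live migration instead of a stop/start when moving the resource, which is what makes the shared (or replicated) VM store worthwhile.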
So far I think I can achieve the necessary failover and live migration for the VMs using 2 servers with /vmstore replicated with DRBD. /vmstore will live on top of GFS2 to allow active/active operation.

Thanks for all the help,

- Trey
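[Editor's note: the two-node dual-primary /vmstore setup described above might look roughly like the following DRBD resource file. This is a sketch in DRBD 8.4 syntax; the hostnames, devices, and addresses are illustrative, and dual-primary additionally requires working fencing/STONITH plus a cluster file system such as GFS2 on top.]

```
# /etc/drbd.d/vmstore.res -- dual-primary sketch (illustrative values)
resource vmstore {
  net {
    protocol C;                  # synchronous replication, required for dual-primary
    allow-two-primaries yes;     # both nodes may be Primary at once (GFS2 on top)
    fencing resource-and-stonith;
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.10.1:7789;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.10.2:7789;
    meta-disk internal;
  }
}
```

After initializing and bringing the resource up on both nodes (drbdadm create-md / drbdadm up), both can be promoted to Primary and GFS2 created on /dev/drbd0; Pacemaker with a DLM/GFS2 control stack then keeps the active/active mount safe.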