<br><br><div class="gmail_quote">On Fri, Nov 18, 2011 at 1:55 AM, Florian Haas <span dir="ltr"><<a href="mailto:florian@hastexo.com">florian@hastexo.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
Hi Trey,<br>
<div class="im"><br>
On 11/18/11 07:22, Trey Dockendorf wrote:<br>
> In preparation to begin testing DRBD I've found my superiors have new<br>
> requirements. The desired setup is 3 nodes in the cluster, replicating<br>
> a LVM volume between all 3. The volume will contain QCOW2 images for<br>
> KVM, and to have concurrent access I've been looking at using GFS2.<br>
<br>
</div>OK, so if it's 3 nodes that a hypervisor is meant to run on, then that<br>
rules out putting DRBD-based storage and the hypervisors on the same<br>
boxes. (It certainly doesn't rule out the use of DRBD altogether; see<br>
below).<br>
<div class="im"><br>
> The potential complication is all 3 servers are not identical. Two are<br>
> identical, Dell 2950s. The other is a new Dell R510. The 2950s run 6 x<br>
> SATA 7200RPM in RAID 6, and the R510 has its system on RAID1 SAS and 6<br>
> x SAS 10000RPM in RAID 6. Is it correct that with DRBD, the combination<br>
> of mismatched performance of the disk I/O would be a problem? How much<br>
> more difficult is a 3 node cluster over 2 node?<br>
<br>
</div>For concurrent write access across all three nodes, it's impossible. What DRBD currently<br>
supports is adding a third node in a stacked configuration, but that is<br>
only useful for backup and DR purposes. You can't really think of the<br>
third node as a fully-fledged member of the virtualization cluster.<br>
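To give you an idea of what a stacked setup looks like in drbd.conf (hostnames, IPs, and devices below are made up, and this assumes DRBD 8.3 -- adjust to your environment):<br>

```text
# Lower-level resource, replicated synchronously between the two main nodes
resource r0 {
  protocol C;
  on alpha {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on bravo {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}

# Stacked resource layering the third (backup/DR) node on top of r0
resource r0-U {
  protocol A;   # asynchronous replication is common for the DR leg
  stacked-on-top-of r0 {
    device    /dev/drbd10;
    address   192.168.42.1:7789;
  }
  on charlie {
    device    /dev/drbd10;
    disk      /dev/sdb1;
    address   192.168.42.2:7789;
    meta-disk internal;
  }
}
```

Note that the stacked device (/dev/drbd10 here) only ever writes on whichever node is currently Primary for r0, which is exactly why the third node stays a backup rather than a full cluster member.<br>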
<div class="im"><br>
> Also, if I'm able to get an iSCSI, what role would DRBD play in the model<br>
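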
> of 3 servers w/ shared storage? I assume to allow concurrent access to<br>
> the same iSCSI space, that I would have to still use a cluster file<br>
> system (live migration). Would DRBD then be used to replicate the<br>
> system partitions or with KVM is it only useful to replicate the VM<br>
> store when not using shared storage?<br>
<br>
</div>You can put your iSCSI on DRBD-backed storage, and then use that iSCSI<br>
target as centralized storage for your hypervisors. You may not even<br>
need to set up cLVM. That's what this talk explains:<br>
<br>
<a href="http://www.hastexo.com/content/roll-your-own-cloud" target="_blank">http://www.hastexo.com/content/roll-your-own-cloud</a><br>
<br>
(Free-of-charge registration required, or just use your Google Profile<br>
or WordPress account, or anything else that supports OpenID, to log in.)<br>
<br>
You can also take a look at this Tech Guide which I wrote while working<br>
at Linbit, which is still hosted on their site:<br>
<br>
<a href="http://www.linbit.com/en/education/tech-guides/highly-available-virtualization-with-kvm-iscsi-pacemaker/" target="_blank">http://www.linbit.com/en/education/tech-guides/highly-available-virtualization-with-kvm-iscsi-pacemaker/</a><br>
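To sketch the general shape of that setup in the crm shell (the resource names, IQN, IP, and device path here are placeholders; the Tech Guide goes into the details): you make the DRBD resource a master/slave resource, group the iSCSI target, LUN, and service IP together, and tie that group to wherever DRBD is Master.<br>

```text
# crm configure sketch: iSCSI target on DRBD, failing over as one unit
primitive p_drbd ocf:linbit:drbd \
        params drbd_resource="iscsi" \
        op monitor interval="29s" role="Master" \
        op monitor interval="31s" role="Slave"
ms ms_drbd p_drbd \
        meta master-max="1" clone-max="2" notify="true"
primitive p_target ocf:heartbeat:iSCSITarget \
        params iqn="iqn.2011-11.com.example:storage"
primitive p_lu ocf:heartbeat:iSCSILogicalUnit \
        params target_iqn="iqn.2011-11.com.example:storage" \
               lun="1" path="/dev/drbd0"
primitive p_ip ocf:heartbeat:IPaddr2 \
        params ip="192.168.0.100"
group g_iscsi p_target p_lu p_ip
colocation c_iscsi_on_drbd inf: g_iscsi ms_drbd:Master
order o_drbd_before_iscsi inf: ms_drbd:promote g_iscsi:start
```

The hypervisors then just log in to the service IP as iSCSI initiators; they never need to know which storage node currently holds the target.<br>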
<div class="im"><br>
> With or without a shared storage device (most likely without), how would<br>
> failover work for the virtual servers? Is that where Pacemaker comes<br>
> in? Basically a way to trigger the still-alive servers to bring up the<br>
> VMs that were running on the failed server.<br>
<br>
</div>Yes, watch the talk. :)<br>
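In a nutshell, before you do: each VM becomes a Pacemaker resource managed by the ocf:heartbeat:VirtualDomain agent, and Pacemaker restarts it on a surviving node when its host fails (domain name and config path below are invented):<br>

```text
# One VM as a Pacemaker resource; live migration enabled where possible
primitive p_vm_web ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/web.xml" \
               hypervisor="qemu:///system" \
               migration_transport="ssh" \
        meta allow-migrate="true" \
        op monitor interval="30s" timeout="30s"
```

With allow-migrate="true", planned moves (node standby, rebalancing) use libvirt live migration; an actual node failure of course means a cold restart of the VM elsewhere.<br>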
<br>
Cheers,<br>
Florian<br>
<span class="HOEnZb"><font color="#888888"><br>
--<br>
Need help with High Availability?<br>
<a href="http://www.hastexo.com/now" target="_blank">http://www.hastexo.com/now</a><br>
_______________________________________________<br>
drbd-user mailing list<br>
<a href="mailto:drbd-user@lists.linbit.com">drbd-user@lists.linbit.com</a><br>
<a href="http://lists.linbit.com/mailman/listinfo/drbd-user" target="_blank">http://lists.linbit.com/mailman/listinfo/drbd-user</a><br>
</font></span></blockquote></div><br><div>Thanks for the info! This will keep me busy with research and testing, and hopefully allow me to create a proper infrastructure plan.</div><div><br></div><div>Thanks again,</div><div>
- Trey</div>