Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
> not quite true:
> stor1 fails -> VM keeps using stor2 -> power outage (all machines are
> down) -> stor1 boots faster and VM starts using it ... oops

That is assuming the VM can come up automatically with only one storage.
The hypervisor by default should not allow the VM to power on without
manual intervention due to the missing disk. However, that may depend on
the hypervisor...

> - If you have a large data-set you risk data loss because of the
>   extended rebuild time

With the standard Linux md RAID, rebuilds are done in the background (and
rarely trigger a full sync), so things may be slower during a rebuild, but
I don't see the chance of data-set risk. However, if you have too many VMs
configured this way, it would probably really thrash your storage servers
and/or network if they were all rebuilding at the same time (but the
min/max sync rate is tunable)...

Although the entire disk isn't rebuilt, one dirty sector generally means a
whole area (i.e. 64 MB, more or less, depending on the size of the disk)
has to be synced, so for an active system that could mean a lot to resync
after being out of sync for even a short period of time.

> > At this point I am not sure I would recommend/use it over DRBD or any
> > of the various cluster filesystems, etc., just that it tested out well
> > enough that I am at least considering it, given that for most of my
> > servers I don't need redundant network storage (maybe 3%) beyond
> > what's built into the boxes, as the majority of our servers are
> > active/active with redundant failover load balancers in front of them,
> > or active/passive with synced configs, or are simply not critical
> > 24/7/365.
>
> It has some use cases, but it's generally not recommended. I would
> probably use such a setup for load-balanced (mostly) read-only data, a
> small partition with a clustered (session cache) fs for a high-load web
> cloud, or a really (really) large storage pool with small chunks on each
> server combined in LVM on top of the software RAID.
>
> For 2 nodes, always use DRBD.
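
To make the stale-mirror scenario above concrete: md keeps a per-member
event counter in the superblock, so before anything assembles the array
you can check which half of the mirror is current. A minimal sketch,
assuming the two network LUNs show up as /dev/mapper/stor1 and
/dev/mapper/stor2 (hypothetical names):

  # The member with the lower event count is the stale one.
  mdadm --examine /dev/mapper/stor1 | grep Events
  mdadm --examine /dev/mapper/stor2 | grep Events

  # Assemble degraded from the member you trust; mdadm will not silently
  # re-add the stale half without a resync.
  mdadm --assemble --run /dev/md0 /dev/mapper/stor2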
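
The sync-rate tuning mentioned above is the standard md knob; the numbers
below (KiB/s) are only example values:

  # System-wide rebuild throttling, in KiB/s: raise the floor to finish
  # rebuilds sooner, lower the ceiling to spare the storage servers and
  # the network when several arrays rebuild at once.
  sysctl dev.raid.speed_limit_min=5000
  sysctl dev.raid.speed_limit_max=50000

  # Or per array:
  echo 50000 > /sys/block/md0/md/sync_speed_max

  # Watch the rebuild:
  cat /proc/mdstat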
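
The "whole area per dirty sector" behaviour is the write-intent bitmap:
each bit covers one chunk of the device, and after a short outage only the
chunks dirtied in the meantime are resynced. A sketch, again with a
hypothetical /dev/md0:

  # Add an internal write-intent bitmap with a 64 MiB chunk size; a
  # single write inside a chunk marks the whole chunk for resync.
  mdadm --grow --bitmap=internal --bitmap-chunk=64M /dev/md0

  # See the bitmap state (dirty pages show up on the bitmap: line):
  cat /proc/mdstat
  mdadm --detail /dev/md0 | grep -i bitmap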
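
And for the two-node recommendation the thread closes on, a minimal DRBD
resource looks roughly like the following; hostnames, addresses, and the
backing disk are placeholders:

  # /etc/drbd.d/r0.res -- two-node, single-primary sketch
  resource r0 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      meta-disk internal;
      on nodeA { address 192.168.1.1:7788; }
      on nodeB { address 192.168.1.2:7788; }
  }

  # On both nodes:
  drbdadm create-md r0
  drbdadm up r0

  # On whichever node should be primary first:
  drbdadm primary --force r0

Unlike the md-over-two-LUNs setup, DRBD tracks data generations on both
nodes, so the power-outage scenario above surfaces as a detectable split
brain rather than a silent resync from the stale side.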