Note: "permalinks" may not be as permanent as we would like;
direct links to old sources may well be a few messages off.
DRBD/LVM, LVM/DRBD, LVM/DRBD/LVM, and LVM/DRBD/DRBD/LVM (3 nodes) should all work. With DRBD over LVM, you gain the ability to take an automatic snapshot of the last consistent state of a resource before synchronization starts.

Make sure you use the latest DRBD and a recent kernel. If you can't, do read all the release notes carefully to work around known critical bugs.

Forget about barriers. They don't even exist anymore in recent kernels, and DRBD was updated recently to match. Barriers, flushes, syncs, etc. only mitigate the fact that your caches are unsafe. Do yourself a favor: use safe caches (BBU, PSU) and live a simple life.

We've been running LVM over DRBD over LVM over HW RAID+WiMax in production for 2 years and it works: one building was destroyed in a fire, and there was no data loss, no server downtime, and the fsck we forced found no errors.

Our cluster is not fast. We don't know whether that's because of DRBD, because our stack is complicated. It's still usually fast enough for our use.

Lionel Sausin

On 08/11/2012 15:33, Denis Cardon wrote:
>> it does work :-)
>>
>> I'm not at all sure what's better performance-wise.
>>
>> If your backing device is an LV, you do incur a write penalty for all
>> drbd interactions on both nodes.
>> Still, that doesn't mean that the setup will be slower than the other
>> way around.
>>
>> It's usually a good idea to run your own tests, ideally with a
>> representative workload.
>
> I've been digging into this issue recently. There is some drbd
> documentation covering this scenario
> (http://www.drbd.org/users-guide/s-lvm-lv-as-drbd-backing-dev.html),
> and I have done such a setup in the past. It does work, but I was
> wondering whether it is safe.
>
> DRBD by default makes use of barriers (unless no-disk-barrier is set
> with a BBU RAID backend), and such barriers were not correctly
> implemented in LVM before kernel 2.6.33. So I guess there are some
> cases where there are problems, e.g. with software MD RAID.
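As an aside, the automatic "snapshot of the last consistent state before resync" mentioned above is wired up through DRBD's resync handlers; the helper scripts ship with DRBD, though the install path may differ per distribution. A minimal sketch, assuming a hypothetical resource named r0 whose backing device is an LV with free space left in its volume group:

```
resource r0 {
  handlers {
    # Take an LVM snapshot of the backing LV just before a resync
    # makes this node's data temporarily inconsistent...
    before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh";
    # ...and drop the snapshot again once the resync has completed.
    after-resync-target "/usr/lib/drbd/unsnapshot-resync-target-lvm.sh";
  }
}
```

This is exactly why the DRBD-over-LVM layering buys you something here: the snapshot target has to be an LV, so with a plain partition as backing device these handlers have nothing to snapshot.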
>
> Cheers,
>
> Denis
>
>> Cheers,
>> Felix
>> _______________________________________________
>> drbd-user mailing list
>> drbd-user at lists.linbit.com
>> http://lists.linbit.com/mailman/listinfo/drbd-user