[DRBD-user] Persistent Device Names

Arnold Krille arnold at arnoldarts.de
Wed Jan 25 01:04:39 CET 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Tuesday 24 January 2012 16:50:09 Ted Young wrote:
> > Additionally, when you mirror disks directly with drbd, your
> > drbd-resources are fixed to the size of the disk-pairs. When you mirror
> > lvm volumes, they can have the size they need to have to fulfill their
> > task. You can have drbd-resources of only some MB but also resources of
> > the size of two or three of your disks together.
> 
> Thank you Arnold for your reply.  I think you may have hit the impedance
> mismatch on the head!  You are recommending putting DRBD on top of a large
> LVM volume.  In fact, I was planning on putting LVM on top of a bunch of
> mirrored (DRBD) physical drives.  When one puts DRBD on top of LVM one risks
> losing the entire logical volume if a single drive fails.  In such a case,
> DRBD would have to re-sync the entire logical volume.  By putting LVM on
> top of DRBD, DRBD would only have to re-sync the failed hard drive.
> 
> That being said, I just discovered today that DRBD volumes are a relatively
> new feature.  Prior to version 8.4 one would have to create a separate DRBD
> resource for every synchronized block device.  Obviously, this would be
> really annoying and so using DRBD on top of LVM makes sense.  However, I was
> planning on defining each physical hard drive as a DRBD volume within one
> resource then using LVM to stripe/aggregate them.  So perhaps the reason I
> have found little on the subject is that most people have traditionally put
> DRBD on top of LVM instead of the other way around.

I see your reasoning. You are planning on providing just a few "services" in 
active-passive failover?
With smaller drbd resources, you could provide atomic disks for the services 
and balance the load, so that normally not all services run on one node just 
because it's all one drbd resource that is primary on that node.
If you have just one big drbd resource (regardless of how many drbd-volumes 
are in there), all the services need to run on the node where that resource 
is in primary state. Possible, but the second node will then just waste 
electricity and produce heat. And the first node might also produce a lot of 
heat when it reaches its capacity.
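
To make that concrete: in 8.4 syntax such a single multi-volume resource 
would look roughly like this (resource name, hosts, disks and addresses are 
of course just placeholders):

    resource r_all {
      # both disk-pairs live in one resource and therefore fail over together
      volume 0 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        meta-disk internal;
      }
      volume 1 {
        device    /dev/drbd1;
        disk      /dev/sdb3;
        meta-disk internal;
      }
      on node1 {
        address   192.168.10.1:7788;
      }
      on node2 {
        address   192.168.10.2:7788;
      }
    }

Both volumes become primary and secondary together, so everything built on 
top of them has to run on the same node.
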
But when you have two or more drbd-resources, one can be primary on node1 
with its associated services running there, while the second drbd resource 
is primary on node2 and has its services there. When all is well, the load 
is roughly evenly balanced, processing stays low enough for some 
power-saving, and energy (and the electricity bill) is saved. Of course the 
systems still have to be powerful enough to take over all services on a 
single node in case the other node dies or is shut down. But as that 
shouldn't be the normal state of operation, the 
load-balancing-by-distributing-services approach allows for a certain 
over-commitment that only affects performance during the short (un-)planned 
downtimes of one of the nodes.
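
How the resources then get pinned to "their" nodes depends on your cluster 
manager. Assuming Pacemaker with the crm shell and master/slave sets called 
ms_drbd_r0 and ms_drbd_r1 (invented names), two location preferences would 
do it:

    # assumes Pacemaker/crm; resource ids and node names are made up
    location l_r0_on_node1 ms_drbd_r0 rule $role="Master" 100: #uname eq node1
    location l_r1_on_node2 ms_drbd_r1 rule $role="Master" 100: #uname eq node2

With moderate scores like these, each resource and its services prefer their 
own node in normal operation, but everything can still move to the surviving 
node when the other one dies.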

I am not yet convinced of 8.4.X; if you plan to put this system into 
production in the next few weeks, it might be better to stay with the latest 
8.3.X.
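
For what it's worth, with 8.3 the layout from above becomes one resource per 
disk-pair instead of one resource with several volumes, each with its own 
minor and its own TCP port, roughly like this (again, names are made up):

    resource disk0 {
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.10.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.10.2:7788;
        meta-disk internal;
      }
    }
    # ...plus a second resource "disk1" for the next disk-pair on
    # /dev/drbd1 and its own port, and so on.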

My advocating drbd-on-lvm doesn't speak against lvm-on-drbd. At work I have 
set up several drbd-on-lvm stacks: some of those drbd-resources are used for 
gfs2, some for new volume groups, and some serve as disks for virtual 
machines and got a partition table with a PV on top.
The fact that drbd results in "just" a block device makes it very versatile.
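
If you do go the lvm-on-drbd route, the setup on the node that is currently 
primary is just the usual LVM commands on the drbd devices (volume group and 
LV names, sizes and stripe settings below are only examples):

    # run on the current primary; names and sizes are placeholders
    pvcreate /dev/drbd0 /dev/drbd1
    vgcreate vg_mirrored /dev/drbd0 /dev/drbd1
    # stripe a logical volume across both mirrored disk-pairs
    lvcreate -n lv_service1 -L 20G -i 2 -I 64 vg_mirrored

And remember to adjust the filter in /etc/lvm/lvm.conf on both nodes so that 
LVM scans the drbd devices and not the backing disks, otherwise the volume 
group can also get activated on the raw disks behind drbd's back:

    filter = [ "a|/dev/drbd.*|", "r|.*|" ]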

Have fun,

Arnold