[DRBD-user] Error after update to 9.0.8+linbit-1

Roland Kammerer roland.kammerer at linbit.com
Mon Aug 14 17:05:26 CEST 2017



On Mon, Aug 14, 2017 at 04:40:24PM +0200, Frank Rust wrote:
> 
> 
> > Am 14.08.2017 um 16:12 schrieb Roland Kammerer <roland.kammerer at linbit.com>:
> > 
> > On Mon, Aug 14, 2017 at 02:41:56PM +0200, Frank Rust wrote:
> > (…)
> >>> drbdadm status
> >> root at virt5:~# drbdadm status
> >> .drbdctrl role:Secondary
> >>  volume:0 disk:UpToDate
> >>  volume:1 disk:UpToDate
> >>  fs1 role:Secondary
> >>    volume:0 peer-disk:UpToDate
> >>    volume:1 peer-disk:UpToDate
> >>  fs2 role:Secondary
> >>    volume:0 peer-disk:UpToDate
> >>    volume:1 peer-disk:UpToDate
> >>  virt1 role:Secondary
> >>    volume:0 peer-disk:UpToDate
> >>    volume:1 peer-disk:UpToDate
> >>  virt2 role:Primary
> >>    volume:0 peer-disk:UpToDate
> >>    volume:1 peer-disk:UpToDate
> >>  virt3 role:Secondary
> >>    volume:0 peer-disk:UpToDate
> >>    volume:1 peer-disk:UpToDate
> >>  virt4 role:Secondary
> >>    volume:0 peer-disk:UpToDate
> >>    volume:1 peer-disk:UpToDate
> > 
> > Maybe a bit unrelated, but why would you span the control volume over
> > that many nodes? Especially virtual machines. That does not make sense
> > to me. I guess these should be satellite nodes.
> > 
> 

Removed some details, because this reply was sent only to me, but it is
relevant for other users and was not flagged with [OFFLIST] or any other
hint:

> These virt1-5 are not virtual machines, but the hosts for the virtual
> machines. Each of them adds some XYZGB of storage to the system which
> would otherwise be lost. But indeed most of the storage is delivered
> by fs1, fs2 and virt5 (~ ABCTB each)

No, that is not true. "Satellite" means that the node receives its
cluster DB via TCP/IP. Our best practice is to keep the control volume
on 3 nodes and use the rest as satellites (--satellite on add-node).
Their storage would only be lost if you gave them the "--no-storage"
flag; only then are such nodes not considered for deployment of the
actual data. These are two very different things.

As a side note, and not something wanted in this particular setup: even
nodes that are --satellite + --no-storage can be hypervisors. They are
then DRBD clients and read/write their data on the storage nodes over
the network. This is the typical setup: 3 big storage nodes, and N
hypervisor nodes that are --satellite --no-storage.
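A minimal sketch of how such a cluster might be assembled with
drbdmanage, assuming the node names from this thread and made-up IP
addresses (adjust both to your environment):

```shell
# Control volume stays on 3 nodes: add them without extra flags.
drbdmanage add-node fs1   10.0.0.1
drbdmanage add-node fs2   10.0.0.2
drbdmanage add-node virt5 10.0.0.5

# Remaining nodes become satellites (cluster DB over TCP/IP) but still
# contribute their local storage to the pool:
drbdmanage add-node --satellite virt1 10.0.0.11

# Only with --no-storage is a node excluded from data deployment;
# such a node can still run VMs as a diskless DRBD client:
drbdmanage add-node --satellite --no-storage virt2 10.0.0.12
```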

Regards, rck.
