[DRBD-user] Queries on 'pure client' and external node.

Sreekumar S sreesiv@gmail.com
Fri Jul 1 18:17:13 CEST 2016



Hello,

I am a newbie to DRBD, and I am very interested in trying this out for
OpenStack setup.

I have a few general queries on 'pure client' and external nodes...
a) I am assuming that both a pure client and an external node will be able to
remotely mount a volume created on a 'control+storage' node, similar to an
NFS mount, and that for this DRBD uses drbd.ko and its dependent kernel
modules to connect the nodes, with no traffic switching between user land
and kernel space. Is this assumption correct?
b) If the answer to (a) is YES, then I assume the 'DRBD transport' mentioned
in the docs, and in the DRBD driver code submitted for Nova to connect
directly, bypassing iSCSI, is using this protocol?
c) If (b) is YES, then I suppose it is roughly equivalent to iSCSI and can
potentially avoid iSCSI hops in a multi-host hypervisor setup (as in
OpenStack).

Keeping these assumptions...
I downloaded the source tarballs from your HTTP site, compiled, and
make-installed everything, painstakingly soft-linking between /usr/local/*
and the normal installation locations :-).
I was able to get all the nodes set up, via both the old-school config style
and the drbdmanage way. Assuming I'll be able to do what is mentioned in
(a)...
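For reference, the legacy-style resource file I tried on the storage nodes
was along these lines (a minimal sketch; the resource name, backing disk,
port, and addresses below are placeholders, not my exact values):

```
resource r0 {
    device    /dev/drbd0;
    disk      /dev/sdb1;      # backing block device (placeholder)
    meta-disk internal;

    on dev0 {
        address 203.0.113.4:7789;
    }
    on dev1 {
        address 203.0.113.5:7789;
    }
}
```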

a) When I add an external node...
sudo drbdmanage add-node --external dev2 203.0.113.6
Currently not implemented
is the message I get. When will this be implemented? And is there any other
source I can try out, even if it's half-baked?

b) When I add a pure client node...
sudo drbdmanage add-node --satellite --no-storage --control-node dev1 dev2
203.0.113.6
devstack@dev0:~$ sudo drbdmanage list-nodes
+-----------------------------------------------------------+
| Name | Pool Size | Pool Free |                      State |
|-----------------------------------------------------------|
| dev0 |      8188 |      8084 |                         ok |
| dev1 |      8188 |      8084 |                         ok |
| dev2 |         0 |         0 | satellite node, no storage |
+-----------------------------------------------------------+

But when I run list-nodes on the satellite node itself, I get...
devstack@dev2:~$ sudo drbdmanage list-nodes
+-----------------------------------------------------------+
| Name | Pool Size | Pool Free |                      State |
|-----------------------------------------------------------|
| dev0 |      8188 |      8084 |                    OFFLINE |
| dev1 |      8188 |      8084 |                    OFFLINE |
| dev2 |         0 |         0 | satellite node, no storage |
+-----------------------------------------------------------+

Maybe because of this OFFLINE state, or maybe because of something I am
doing wrong, I am unable to get a /dev/drbd* device created for mounting.
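In case it helps, this is roughly how I tried to inspect the state on the
satellite (a sketch, assuming the DRBD 9 userland; as I understand it,
'.drbdctrl' is drbdmanage's internal control resource):

```
# On dev2 (the satellite): kernel-level resource/connection state
sudo drbdadm status

# drbdmanage's view of which resources are assigned to which nodes
sudo drbdmanage list-assignments
```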

So, is this the right way to go about it? I am trying to create an OpenStack
environment with no iSCSI hops and no tgtadm daemons running; everything
should work from a local disk device with DRBD replication underneath.
Instead of multiple primaries, I want external/pure-client nodes remotely
mounting from the storage nodes. I am planning to create multiple
independent clusters, so that a pure-client/external node in one cluster
can actually be a storage/backup node for another.

Thanks and awaiting advice from experts on the list.

Thanks,
Sreekumar


