<div dir="ltr"><div><div><div><div>Hi Igor,<br></div>because installing openstack-cinder-volume on the storage node doesn't solve the failover problem in an efficient way. More precisely, openstack-cinder-volume uses LVM with the default (and more cheaper) configuration. But LVM has a lot of problems when used in clusters. HA-LVM has problems with lvmetad service (used by openstack-cinder-volume) and CLVMd doesn't manage snapshots, a feature really useful in openstack-cinder.<br></div>Drbdmanage is a good idea beacuse it manages automatically resources in drbd as logical volumes and in conjunction with the cinder-volume plugin handles also the "transport" part to the compute nodes (without pass over the cinder-volume node). The problem is that there is no documentation and more important, there are no use case guides. In a datacenter environment is important to split storage network and replication network and services on various nodes.<br></div>Thank you for your support.<br></div>Marco<br></div><div class="gmail_extra"><br><div class="gmail_quote">2017-11-26 23:05 GMT+01:00 Igor Cicimov <span dir="ltr"><<a href="mailto:igorc@encompasscorporation.com" target="_blank">igorc@encompasscorporation.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto">Hi Marco,<span class=""><br><div class="gmail_extra" dir="auto"><br><div class="gmail_quote">On 23 Nov 2017 7:05 am, "Marco Marino" <<a href="mailto:marino.mrc@gmail.com" target="_blank">marino.mrc@gmail.com</a>> wrote:<br type="attribution"><blockquote class="m_5366914458434733459quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div><div><div><div><div><div><div><div><div>Hi, I'm trying to configure drbd9 with openstack-cinder.<br></div>Actually my (simplified) infrastructure is composed by:<br></div>- 2 drbd9 nodes with 2 NICs on each node, one for the "replication" network (without using a switch) and one for the "storage" network.<br></div>- 1 compute node with a dedicated NIC connected to the storage network<br></div>- 1 controller with openstack-cinder-volume installed on it.<br><br></div>Please, review the configuration at <a href="https://www.draw.io/#G1P2uJ9LoXc0bJdNRS9m2fN5cuik73t7_T" target="_blank">https://www.draw.io/#G1P2uJ9Lo<wbr>Xc0bJdNRS9m2fN5cuik73t7_T</a><br><br></div>My questions are:<br></div>1) Using the DRBD Transport, can I connect directly compute nodes to the drbd storage without passing through the cinder-volume node?<br></div>2) if yes, what I have to install on each node? I suppose: <br>drbd9 kernel module + utils + drbdmanage on DRBD1 and DRBD2<br></div>drbd9 kernel module + utils on COMPUTE nodes<br></div>drbdmanage on openstack-cinder-volume (???)<br></div><br></div>3) How should I configure openstack-cinder-volume in drbdmanage??? (External node or whatelse?) Please note that replication network and storage network are 2 different subnets! IPs I've used in drbdmanage when I created the drbd cluster belongs to the replication network. 
Thank you for your support.
Marco

2017-11-26 23:05 GMT+01:00 Igor Cicimov <igorc@encompasscorporation.com>:
> Hi Marco,
>
> On 23 Nov 2017 7:05 am, "Marco Marino" <marino.mrc@gmail.com> wrote:
>> Hi, I'm trying to configure drbd9 with openstack-cinder.
>> Currently my (simplified) infrastructure is composed of:
>> - 2 drbd9 nodes with 2 NICs each, one for the "replication" network (directly connected, without a switch) and one for the "storage" network
>> - 1 compute node with a dedicated NIC connected to the storage network
>> - 1 controller with openstack-cinder-volume installed on it
>>
>> Please review the configuration at https://www.draw.io/#G1P2uJ9LoXc0bJdNRS9m2fN5cuik73t7_T
>>
>> My questions are:
>> 1) Using the DRBD transport, can I connect the compute nodes directly to the DRBD storage without passing through the cinder-volume node?
>> 2) If yes, what do I have to install on each node? I suppose:
>> drbd9 kernel module + utils + drbdmanage on DRBD1 and DRBD2
>> drbd9 kernel module + utils on COMPUTE nodes
>> drbdmanage on openstack-cinder-volume (???)
>>
>> 3) How should I configure openstack-cinder-volume in drbdmanage? (External node or what else?) Please note that the replication network and the storage network are two different subnets! The IPs I used in drbdmanage when I created the DRBD cluster belong to the replication network. Is this correct?
>>
>> 192.168.10.0/24 = replication network
>> 192.168.20.0/24 = storage network
>>
>> I'm sorry, but I'm a bit confused about this configuration.
>
> Wouldn't you just install Cinder together with drbd on the drbd nodes? Then you just give cinder the drbd device for its LVM to serve block devices from via iSCSI as per usual.