<div dir="ltr"><div><div>Roland, Thank you for your answer. <br></div>Can you provide an example when there are 3 storage nodes with a drbdreplication network and one storagenetwork? Please, note that compute nodes are connected only to the storagenetwork. So I have some doubts related to the satellite / DRDB client configuration when clients are outside the replication network.</div><div>I opened another thread here -> <a href="http://lists.linbit.com/pipermail/drbd-user/2017-November/023850.html">http://lists.linbit.com/pipermail/drbd-user/2017-November/023850.html</a><br></div><div><br></div>Thank you<br></div><div class="gmail_extra"><br><div class="gmail_quote">2017-11-28 11:19 GMT+01:00 Roland Kammerer <span dir="ltr"><<a href="mailto:roland.kammerer@linbit.com" target="_blank">roland.kammerer@linbit.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Tue, Nov 14, 2017 at 05:50:08PM +0100, Marco Marino wrote:<br>
> > Hi,
> > I'm trying to understand if it is possible to deploy a 2 node solution with
> > drbd9/drbdmanage compatible with openstack-cinder-volume. Should I use 2 or
> > 3 nodes with drbdmanage? It seems that, in a 2 node configuration, if one
> > node goes down, drbdmanage becomes unstable (please see
> > https://lists.gt.net/drbd/users/28672 )
>
> In a two node cluster both have to be up, otherwise you really have to
> force drbdmanage to do operations (that is intentional). In a 3 node
> setup one can fail, the others still have quorum. That is something you
> have to decide.
>
> A common setup for openstack is 3 storage nodes and $N hypervisors that
> act as drbdmanage satellites/DRBD clients (without local storage, they
> read/write data via the network).
>
> HTH, rck
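
To illustrate the layout I mean, here is a rough sketch of the kind of DRBD 9 resource configuration I imagine for the 3 storage nodes plus one diskless compute node. All hostnames, LV paths, addresses (10.0.1.x = replication network, 10.0.2.x = storage network) and the port are made up by me, and I am not sure this is what drbdmanage would actually generate:

resource r0 {
    device    /dev/drbd100;
    disk      /dev/vg_drbd/r0;
    meta-disk internal;

    on storage1 { node-id 0; address 10.0.1.1:7000; }
    on storage2 { node-id 1; address 10.0.1.2:7000; }
    on storage3 { node-id 2; address 10.0.1.3:7000; }
    # diskless DRBD client, only connected to the storage network
    on compute1 { node-id 3; disk none; address 10.0.2.11:7000; }

    # the storage nodes replicate among themselves over the replication
    # network, using the addresses from their "on" sections
    connection { host storage1; host storage2; }
    connection { host storage1; host storage3; }
    connection { host storage2; host storage3; }

    # the client cannot reach 10.0.1.x, so its connections override the
    # storage nodes' addresses with their storage-network addresses
    connection { host compute1; host storage1 address 10.0.2.1:7000; }
    connection { host compute1; host storage2 address 10.0.2.2:7000; }
    connection { host compute1; host storage3 address 10.0.2.3:7000; }
}

Is this roughly what the configuration on the satellite / client side should look like, or does drbdmanage handle the two networks differently?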
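
On the cinder side, if I understood your answer correctly, the cinder-volume host would itself be a drbdmanage satellite and cinder.conf would point at the drbdmanage driver with the DRBD transport, roughly like this (option names taken from my notes on the Cinder documentation, so please correct me if they are wrong):

[DEFAULT]
enabled_backends = drbd-1

[drbd-1]
volume_backend_name = drbd-1
# DRBD transport: consumers attach volumes as diskless DRBD clients
volume_driver = cinder.volume.drivers.drbdmanagedrv.DrbdManageDrbdDriver
# how many storage nodes should hold a replica of each volume
drbdmanage_redundancy = 3

Does the cinder-volume node then also need a leg on the replication network, or is the storage network enough for a pure satellite?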