<html dir="ltr"><head></head><body style="text-align:left; direction:ltr;"><div>Yes, we've tested this approach as well. LVM can join DRBD devices into a single volume, but that layout makes it harder to bypass DRBD if anything goes wrong with it. A few weeks ago we grew our volume beyond the allowed limit and DRBD stopped working. We could not shrink it back, because the DRBD error only appeared about a month later, after a reboot, and by then we had already started populating the extended space. With RAID0 -> LVM -> DRBD -> XFS we simply changed the mount point from the DRBD device to the LVM volume, with minimal impact on operation time. With RAID0 -> DRBD -> LVM -> XFS, I guess we would have had to recover the LVM volume by replacing the DRBD devices with the physical disks in the LVM metadata.</div><div><br></div><div>-----Original Message-----</div><div><b>From</b>: Adam Goryachev <<a href="mailto:Adam%20Goryachev%20%3cmailinglists@websitemanagers.com.au%3e">mailinglists@websitemanagers.com.au</a>></div><div><b>To</b>: <a href="mailto:drbd-user@lists.linbit.com">drbd-user@lists.linbit.com</a></div><div><b>Subject</b>: Re: [DRBD-user] PingAck did not arrive in time.</div><div><b>Date</b>: Tue, 25 Jun 2019 09:12:17 +1000</div><div><br></div><pre>Is it possible to use Linux MD on top of DRBD devices? ie, use </pre><pre>/dev/drbd[0-3] to create a RAID0 array?</pre><pre><br></pre><pre>Or I guess using them as PV's and then creating a single LV across all </pre><pre>of them?</pre></body></html>