[DRBD-user] Questions regarding offsite disaster/recovery (from stacked drbd device)

Schmidt, Torsten torsten.schmidt at tecdoc.net
Wed Oct 21 13:13:29 CEST 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi list,

I've successfully implemented a three-node setup with an offsite node via a DRBD stacked device, as explained in the user's guide
( http://www.drbd.org/users-guide/s-pacemaker-stacked-resources.html ).
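
For context, my configuration schematically follows the stacked-resource example from that page; with placeholder host names, devices and addresses it looks roughly like this (the 192.168.42.1 address on the stacked resource is the floating IP managed by Pacemaker):

    resource r0 {
      protocol C;
      on alice {
        device     /dev/drbd0;
        disk       /dev/sda6;
        address    10.0.0.1:7788;
        meta-disk  internal;
      }
      on bob {
        device     /dev/drbd0;
        disk       /dev/sda6;
        address    10.0.0.2:7788;
        meta-disk  internal;
      }
    }

    resource r0-U {
      protocol A;
      stacked-on-top-of r0 {
        device     /dev/drbd10;
        address    192.168.42.1:7788;   # floating IP on the cluster site
      }
      on charlie {                      # the offsite node
        device     /dev/drbd10;
        disk       /dev/sdb1;
        address    192.168.42.2:7788;
        meta-disk  internal;
      }
    }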

The offsite DRBD device is always in the Secondary role, because it has to stay in sync with the Primary (stacked) device managed by the Pacemaker CRM.
AFAIK, Secondary devices cannot be mounted or accessed.

So, how can I access the offsite data, assuming all nodes in my HA cluster have failed?
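
What I have in mind (assuming the stacked resource is called r0-U and its device is /dev/drbd10, as in the sketch above) is something along these lines on the offsite node, once the cluster site is confirmed dead; please tell me if this is wrong:

    cat /proc/drbd                        # check cs:/ro:/ds: first
    drbdadm disconnect r0-U               # stop waiting for the dead peer
    drbdadm primary r0-U                  # should succeed if ds: is UpToDate
    mount -o ro /dev/drbd10 /mnt/recovery

(On the offsite node itself r0-U is configured as a plain, non-stacked resource, so I assume no --stacked option is needed there.)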

How should I handle the different states the offsite device can be in, and how would I recover from each of them? I can think of these (my rough understanding follows after the list):
  -  cs:WFConnection   ro:Secondary/Unknown ds:UpToDate/DUnknown
  -  cs:WFConnection/Unknown   ro:Secondary/Primary ds:Outdated/DUnknown
  -  cs:WFConnection/Unknown   ro:Secondary/Primary ds:Consistent/DUnknown
  -  cs:WFConnection/Unknown   ro:Secondary/Primary ds:Inconsistent/DUnknown
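
My guess at how to treat each of these, which I'd like confirmed (again assuming resource r0-U and DRBD 8.3 syntax):

    # ds:UpToDate/DUnknown
    #   data is current up to the last replicated write -> plain promotion:
    drbdadm primary r0-U

    # ds:Outdated/DUnknown or ds:Consistent/DUnknown
    #   data is usable but may be behind the last Primary -> promotion
    #   has to be forced, accepting the possible loss of recent writes:
    drbdadm -- --overwrite-data-of-peer primary r0-U

    # ds:Inconsistent/DUnknown
    #   a resync was interrupted part-way -> the data is not usable
    #   for recovery at all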


Is there a way to make the data on the offsite node permanently available (read-only)?
  E.g. so that I can always 'clone' my production data to my test environment while the device is in the state:
    cs:Connected   ro:Secondary/Primary ds:UpToDate/UpToDate
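
If DRBD itself cannot expose a Secondary read-only, would periodically snapshotting the lower-level device on the offsite node be a reasonable workaround? Roughly (assuming the backing device is an LVM volume /dev/vg0/r0u with enough free space in the VG for a snapshot):

    drbdadm disconnect r0-U                       # briefly pause replication
    lvcreate -s -L 5G -n r0u_snap /dev/vg0/r0u    # snapshot the backing device
    drbdadm connect r0-U                          # let DRBD resync the gap
    mount -o ro /dev/vg0/r0u_snap /mnt/clone      # read the frozen copy

With internal metadata the snapshot also contains the DRBD metadata at the end of the device, but the filesystem at the start should still be mountable; it will of course only be crash-consistent.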


Mit freundlichen Grüßen / with kind regards

Torsten Schmidt


