[DRBD-user] Manual split brain recovery

Michael Schwartzkopff misch at multinet.de
Mon Apr 19 14:41:39 CEST 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Monday, 19 April 2010 at 12:35:43, Zemke, Kai wrote:
> Hello Forum,
>
> I'm currently struggling with a DRBD-related problem. We have a two-node
> cluster running. These two nodes share their data via DRBD. Last week we had
> a power failure in our datacenter. One of the nodes crashed and rebooted.
> From this point on, cat /proc/drbd shows the following output:
>
> NODE1:
> ----------------------------------------------------------
> version: 8.3.0 (api:88/proto:86-89)
> GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root at xen0, 2009-06-04 09:17:59
>  0: cs:StandAlone ro:Secondary/Unknown ds:UpToDate/DUnknown   r---
>     ns:0 nr:0 dw:0 dr:0 al:0 bm:93 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:6607464
>  1: cs:StandAlone ro:Secondary/Unknown ds:UpToDate/DUnknown   r---
>     ns:0 nr:0 dw:0 dr:0 al:0 bm:95 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:2392872
>
> NODE2:
> -----------------------------------------------------------
> version: 8.3.0 (api:88/proto:86-89)
> GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root at xen0, 2009-06-04 09:17:59
>  0: cs:WFConnection ro:Primary/Unknown ds:UpToDate/Inconsistent C r---
>     ns:0 nr:0 dw:2039644193 dr:270667732 al:2559052 bm:2150165 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:6647004
>  1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/Inconsistent C r---
>     ns:0 nr:0 dw:613150072 dr:392361024 al:111384 bm:4395 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:2394624
>
> Node 2 is running several virtual machines hosted by Xen.
> The data on node 2 is up to date and I want to keep it.
> Now I am trying to sync all data from node2 to node1.
>
> I followed the "Manual split brain recovery" procedure from drbd.org and did
> the following:
>
> On Node1:
>
> drbdadm secondary all
> drbdadm -- --descard-my-data connect all

/descard/discard/
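
For reference, the corrected command on the node whose data is to be discarded
(keeping the "all" shorthand from your message) would be:

  drbdadm -- --discard-my-data connect all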

> Node1 was now waiting for a connection.
>
> On Node2:
>
> drbdadm connect all
>
> Nothing happened. But node1 went from waiting for connection back to
> StandAlone. Am I missing something? I thought that the cluster would
> connect now and all data would be synced from node2 to node1.
>
> What am I doing wrong here?

Make the DRBD resource secondary on the node whose data is to be discarded.
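
For reference, a sketch of the full sequence from the "Manual split brain
recovery" section of the DRBD User's Guide, written here with the "all"
shorthand from your message (substitute the actual resource names if you
prefer):

On the node whose data is to be discarded (node1 in your case; it is already
Secondary and StandAlone, so no disconnect should be needed first):

  drbdadm secondary all
  drbdadm -- --discard-my-data connect all

On the node whose data is to be kept (node2):

  drbdadm connect all

The connect on the surviving node is only needed if that node is also in
StandAlone; if it is still in WFConnection it should accept the incoming
connection on its own. Once the nodes reconnect, /proc/drbd should show node2
as SyncSource and node1 as SyncTarget while the resync runs.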

-- 
Dr. Michael Schwartzkopff
MultiNET Services GmbH
Address: Bretonischer Ring 7; 85630 Grasbrunn; Germany
Tel: +49 - 89 - 45 69 11 0
Fax: +49 - 89 - 45 69 11 21
mob: +49 - 174 - 343 28 75

mail: misch at multinet.de
web: www.multinet.de

Registered office: 85630 Grasbrunn
Commercial register: Amtsgericht München HRB 114375
Managing directors: Günter Jurgeneit, Hubert Martens

---

PGP Fingerprint: F919 3919 FF12 ED5A 2801 DEA6 AA77 57A4 EDD8 979B
Skype: misch42


