[DRBD-user] drbd and secondary while primary down

Tim Hibbard hibbard at research.ohiou.edu
Mon Jul 18 14:50:05 CEST 2005



First of all, thanks to everyone who is part of this project.  I really like 
what drbd is doing and where it is going.

I've been running test scenarios and have found a problem.  While both servers 
are running, NodeA and NodeB stay fully in sync.  When I shut down NodeA (the 
primary), NodeB becomes active as it should: drbd brings the device up in 
primary mode and heartbeat mounts the filesystems.

While NodeB was active I deleted a directory on the block device, expecting the 
change to also be reflected when NodeA became available again.  Instead, when 
NodeA comes back up, drbd seems to sync from NodeA to NodeB, and once heartbeat 
does its recovery the directory and all its contents are found on both block 
devices again.  As a note, while NodeB is primary, /proc/drbd does show 
Primary/UnKnown and mount shows /dev/drbd3 mounted on /usr/local/leo.
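For what it's worth, that role field can be pulled out of /proc/drbd mechanically.  A small sketch that parses a drbd 0.7 style status line (the sample line below is illustrative, not taken from my actual cluster; on a live node you would feed it `cat /proc/drbd` instead):

```shell
# Extract the st: (state/role) field from a drbd 0.7 /proc/drbd line.
# The sample line is hard-coded for illustration only.
line=" 3: cs:WFConnection st:Primary/Unknown ld:Consistent"
state=$(printf '%s\n' "$line" | sed -n 's/.*st:\([^ ]*\).*/\1/p')
echo "$state"   # Primary/Unknown: local node is primary, peer unreachable
```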

Am I missing something in my ha.cf or drbd.conf to tell NodeA to sync from 
NodeB's block device after a reboot? 
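In case it helps frame the question: my understanding is that in drbd 0.7 the sync direction after a rejoin is decided by the generation counters in the metadata, and when that goes the wrong way the stale node's copy has to be discarded by hand.  A sketch of what I believe the manual recovery would look like, run on NodeA, using the `leo_services` resource name from my attached config (shown as a dry run; drop the leading `echo` to actually execute):

```shell
# Manual recovery sketch (dry run): run on NodeA, the rejoining node
# whose stale data should be discarded.  "drbdadm invalidate" marks the
# local disk inconsistent, so the resync pulls FROM the peer (NodeB)
# instead of pushing to it.
RES=leo_services            # resource name taken from my drbd.conf

echo drbdadm down "$RES"        # take the resource down first
echo drbdadm invalidate "$RES"  # discard local copy, become sync target
echo drbdadm up "$RES"          # reconnect; full sync runs NodeB -> NodeA
```

That seems heavy-handed for something that should happen automatically on every reboot, though, which is why I suspect I'm missing a config option.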

Any help or ideas are appreciated.  Attached are my config files.  I'm running 
2.6.11 kernel, drbd-0.7.11, and heartbeat-1.2.3.

Thanks in advance

Tim Hibbard
 
-------------- next part --------------
#THIS IS A 10G SPACE ON /dev/sdb1
resource oracle {
  protocol C;
  disk {
    on-io-error   detach;
  }
  net {
    on-disconnect reconnect;
  }
  syncer {
    rate 100M;
    al-extents 257;
    group 1;

  }

  on leo1 {
    device    /dev/drbd1;
    disk      /dev/sdb1;
    address   xxx.xxx.xxx.111:7788;
    meta-disk internal;
  }

  on leo2 {
    device     /dev/drbd1;
    disk       /dev/sdb1;
    address    xxx.xxx.xxx.112:7788;
    meta-disk internal;
  }
}

#THIS IS THE DRBD FOR POSTFIX
#THIS IS A 10G SPACE ON /dev/sdb2
resource postfix {
  protocol C;
  disk {
    on-io-error   detach;
  }
  net {
    on-disconnect reconnect;
  }
  syncer {
    rate 100M;
    al-extents 257;
    group 3;
  }

  on leo1 {
    device    /dev/drbd2;
    disk      /dev/sdb2;
    address   xxx.xxx.xxx.111:7789;
    meta-disk internal;
  }

  on leo2 {
    device     /dev/drbd2;
    disk       /dev/sdb2;
    address    xxx.xxx.xxx.112:7789;
    meta-disk internal;
  }
}

#THIS IS THE DRBD FOR LEO-CLUSTER AND ARIES
#THIS IS THE REMAINING SPACE ON /dev/sdb3
resource leo_services {
  protocol C;
  disk {
    on-io-error   detach;
  }
  net {
    on-disconnect reconnect;
  }
  syncer {
    rate 100M;
    al-extents 257;
    group 2;
  }

  on leo1 {
    device    /dev/drbd3;
    disk      /dev/sdb3;
    address   xxx.xxx.xxx.111:7790;
    meta-disk internal;
  }

  on leo2 {
    device     /dev/drbd3;
    disk       /dev/sdb3;
    address    xxx.xxx.xxx.112:7790;
    meta-disk internal;
  }
}
-------------- next part --------------
leo1 xxx.xxx.xxx.113 drbd drbddisk::oracle drbddisk::leo_services Filesystem::/dev/drbd3::/usr/local/leo::ext3 Filesystem::/dev/drbd1::/oracle::ext3 leo1-cluster
leo2 xxx.xxx.xxx.114 drbd drbddisk::postfix Filesystem::/dev/drbd2::/home::ext3 leo2-cluster
-------------- next part --------------
logfacility syslog
watchdog /dev/watchdog
logfile /var/log/heartbeat
node leo1 leo2
keepalive 2
deadtime 20
bcast eth0
ping xxx.xxx.xxx.254
auto_failback yes
respawn hacluster /usr/lib/heartbeat/ipfail
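One thing I wonder about in the ha.cf above: with `auto_failback yes`, heartbeat moves resources back to NodeA as soon as it rejoins, which could race with the drbd resync.  If failback before the resync completes turns out to be part of the problem, a more conservative setting (illustrative, not a confirmed fix) would be:

```
auto_failback off
```

With failback off, resources would stay on NodeB until moved back by hand (e.g. with hb_standby), giving the resync time to finish.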
