[DRBD-user] drbd newbie

Tim Hibbard hibbard at ohio.edu
Fri May 6 22:20:18 CEST 2005



I am in the beginning stages of planning a high-availability environment. I have 
some general questions about deployment. Any suggestions or comments
are welcome.

I have two servers (server1 and server2) connected to a shared storage device.
We also have a 'slave' machine on the network to which I would like to have all 
data duplicated in case of an 'emergency'.

My main concern is that the 'slave' remains a secondary at all times until I 
manually promote it to primary in the event of an 'emergency'.  Do I need
to change anything in my drbd.conf to make this happen?  What about reboots?  
Or should I make a startup file which executes "drbdadm secondary all" 
on the 'slave' machine?  
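For illustration, here is a minimal boot-time sketch of what I have in mind for 
the 'slave' (the /proc/drbd sample line and paths below are made up, not from my 
actual setup; my understanding is that DRBD 0.7 reports the local/peer roles in 
the "st:" field):

```shell
# Sketch of a boot-time guard for the 'slave': demote everything, then
# verify the local role by parsing the "st:<local>/<peer>" field of a
# /proc/drbd-style status line.  The sample line is illustrative only.
local_role() {
    printf '%s\n' "$1" | sed -n 's/.*st:\([A-Za-z]*\)\/.*/\1/p'
}

# drbdadm secondary all   # run this for real on the 'slave' at boot
local_role " 0: cs:Connected st:Secondary/Primary ld:Consistent"   # prints "Secondary"
```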

I also have a question about how heartbeat/drbd handles the exchange of the 
drbd primary role on the heartbeat cluster.  The scenario: 'server1' has 
/dev/drbd0 mounted and is the primary.  'server1' has problems and heartbeat 
transfers its resources to 'server2' via haresources.  (We have a STONITH 
network power switch to take care of dual mounts.)

haresources:
==========
server1 datadisk::drbd0 Filesystem::/dev/drbd0::/home Filesystem::/dev/sda1::/usr/local/apache 192.168.0.1 apache
server2 Filesystem::/dev/sda3::/usr/local/mysql 192.168.0.2 mysql

Does 'server2' automatically become primary?  Will the 'slave' still remain 
secondary the whole time?  Is the same true for a heartbeat auto_failback 
event?  Or should I put a "drbdadm primary all" in my apache init.d file?
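My current understanding (please correct me if this is wrong) is that the 
datadisk script in the haresources line is what issues the promotion, so on a 
takeover heartbeat would effectively run something like the following on 
server2.  The script paths assume a stock heartbeat install and may differ:

```shell
# Rough sketch of the takeover sequence implied by the haresources line;
# paths and arguments are my guesses at the stock heartbeat layout.
/etc/ha.d/resource.d/datadisk drbd0 start              # promotes drbd0 to primary
/etc/ha.d/resource.d/Filesystem /dev/drbd0 /home start
/etc/ha.d/resource.d/Filesystem /dev/sda1 /usr/local/apache start
/etc/ha.d/resource.d/IPaddr 192.168.0.1 start
/etc/init.d/apache start
```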

The MySQL database already has built-in replication running to the slave 
machine, and the data is also saved once an hour with mysqldump.  Is it 
possible to replicate the data under /usr/local/mysql with drbd as well?  
Could this cause corruption/unreliability in the database files located on 
the slave node?
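For reference, the hourly dump is just a plain cron job, roughly like this 
(paths, user, and options here are placeholders, not my exact entry):

```shell
# /etc/crontab-style entry; paths, user, and options are placeholders
0 * * * * root /usr/local/mysql/bin/mysqldump --all-databases > /var/backups/mysql-hourly.sql
```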


Here are my config files.  Any feedback is greatly appreciated.

drbd.conf
==========================================================
resource drbd0 {
  protocol B;
  startup {
    degr-wfc-timeout 120;    # 2 minutes.
  }

  disk {
    on-io-error   detach;
  }
  net {
    on-disconnect reconnect;
  }
  syncer {
    rate 100M;
  }

  on cluster {
    device     /dev/drbd0;
    disk       /dev/sda4;
    address    192.168.0.1:7788;
    meta-disk internal;
  }

  on slave {
    device    /dev/drbd0;
    disk      /dev/hdc1;
    address   192.168.1.200:7788;
    meta-disk internal; 
  }
}
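Whenever I change drbd.conf I assume I can apply it to the running resources 
and check the result with something like the following ("drbdadm adjust" is my 
understanding of the right command; corrections welcome):

```shell
# Apply the edited drbd.conf to the running resources, then inspect state
drbdadm adjust all
cat /proc/drbd     # shows cs: (connection state) and st: (local/peer roles)
```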


ha.cf
====================================
logfacility daemon         
node server1 server2 
keepalive 1
deadtime 10    
bcast eth0
ping 192.168.1.254
auto_failback yes
respawn hacluster /usr/lib/heartbeat/ipfail
stonith apcmaster /etc/ha.d/apcmaster.conf
stonith_host *  apcmaster xxx.xxx.xxx.xxx username password





Tim Hibbard
Software Engineer For 
Vice President Of Research
Ohio University
Athens, Ohio 45701