Bryce,

You need to configure heartbeat so that when you bring the nodes up, heartbeat decides which node is DRBD primary for each of the devices. It does this by running the drbddisk init script located in /etc/ha.d/resource.d. You tell heartbeat who is primary in the file /etc/ha.d/haresources. Mine looks like this:

-----------Begin of file-----------
sauron.ibb.gatech.edu 188.8.131.52 drbddisk::export Filesystem::/dev/drbd0::/export::ext3::rw,usrquota,grpquota,acl,user_xattr saslauthd cyrus-imapd clamd mimedefang sendmail
saruman.ibb.gatech.edu 184.108.40.206 drbddisk::web Filesystem::/dev/drbd1::/web::ext3::rw,usrquota,grpquota,acl,user_xattr httpd mysqld
saruman.ibb.gatech.edu 220.127.116.11
saruman.ibb.gatech.edu 18.104.22.168
-----------end of file-----------

There are no line wraps above; each line starts with a fully qualified host name.

If you are not yet using heartbeat and you reboot both machines simultaneously, DRBD has no way to find out which node should become primary. If you use heartbeat, you will always have one node acting as primary by mounting the DRBD devices and starting some services. For example, I am testing (almost ready for production) on two machines: node 1 is primary for sendmail and imap on resource "export", which is /dev/drbd0, while node 2 is primary for web and MySQL on resource "web", which is /dev/drbd1. This setup lets me have two machines doing something useful, instead of running all services on one node while another $7500 computer sits there just waiting for a takeover.

Also, depending on how you compiled DRBD, you might need to load the drbd module manually on boot from /etc/rc.local (or whichever place better fits Gentoo), maybe /etc/modules.d/SOMETHING (I cannot remember it off the top of my head).

I recently upgraded from 0.7.10 to 0.7.11 and have been running it just fine for 33 days.
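Until heartbeat is in place, the manual bring-up sequence looks roughly like this. This is only a sketch: it assumes a resource named "export" mounted at /export on /dev/drbd0, as in the haresources file above; substitute the resource names and mount points from your own drbd.conf.

```shell
# Load the drbd module if it was not loaded at boot
modprobe drbd
# Attach the backing device and connect to the peer
drbdadm up export
# On the node that should own the data, promote it to primary
drbdadm primary export
# Then mount the device and start the services that live on it
mount /dev/drbd0 /export
```

Heartbeat's drbddisk and Filesystem resource scripts do essentially the promote-and-mount part for you on whichever node it elects.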
HTH,
Diego

Quoting Bryce Porter <bryce at oicgroup.net>:

> Hello all,
>
> I have three DRBD resources recently configured (and working well, I
> might add), but when I reboot either the slave or the master node in
> my failover cluster (just for testing purposes), it seems DRBD doesn't
> come back up correctly.
>
> First, I have to manually run drbdadm up for each of the three
> resources. Then (if operating on the master node), I have to manually
> run drbdadm primary for each resource (and no, "all" works in
> neither of these cases).
>
> Moreover, I have to manually re-create the block device files in /dev
> for my resources; it seems that when the module is removed before the
> box reboots, it nukes them.
>
> Once I get all of the manual steps done, everything seems to work
> smoothly on both boxes again.
>
> Is this a known issue? Also, I know 0.7.11 is out, but it is masked on
> Gentoo, meaning that the Gentoo developers have not thoroughly tested
> it. Would it fix these issues, and is it stable enough for a production
> environment?
>
> If anyone needs any more details, please let me know. For what
> it's worth, both boxes are completely identical Athlon64s.
>
> Thank you in advance.
>
> Cheers,
> Bryce Porter
>
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user

----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.
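P.S. For the nuked /dev entries: a possible stopgap until the module-loading is sorted out is to recreate the nodes by hand. This is a sketch, not a verified fix; it relies on DRBD's registered block-device major number, 147, with the minor number equal to the device index, and assumes three devices drbd0 through drbd2 as in your setup.

```shell
# Recreate /dev/drbd0../dev/drbd2 (block major 147, minor = index) if missing
for i in 0 1 2; do
    [ -b /dev/drbd$i ] || mknod /dev/drbd$i b 147 $i
done
```

Run it as root before the drbdadm up step; it leaves any nodes that already exist untouched.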