[DRBD-user] Broken mysql replication

Jordi Espasa Clofent jespasac at minibofh.org
Mon Mar 15 11:30:14 CET 2010


Hi all,

I have a master MySQL cluster composed of node0 (the active) and node1 
(the passive), which share the MySQL data using DRBD. I get the high 
availability with Heartbeat, of course. No problem here, everything 
works fine: if I shut down the active one (node0), DRBD and Heartbeat 
ensure the MySQL high availability.

node2 is a simple mysql slave; it uses the virtual IP shared between 
node0 and node1 for replication.
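The slave on node2 is pointed at the virtual IP in the usual way, roughly like this (the IP, user, password and binlog coordinates below are placeholders, not my real values):

```sql
-- Run on node2 (the slave); all values here are illustrative
CHANGE MASTER TO
  MASTER_HOST='192.168.1.100',      -- the virtual IP shared by node0/node1
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
```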

The detected problem is:

- I launch a massive-INSERTs (databaseX.tableN) script against the virtual IP.
- I shut down node0; node1 takes control. During the downtime, the 
script hangs, obviously.
- I check tableN on node1 and the same table on node2... and they have 
a different number of records. The result is that the MySQL replication 
is broken.

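For what it's worth, the load script is essentially a loop of single-row INSERTs fed to the virtual IP, something like this sketch (the VIP, credentials, row count and schema are placeholders, not the real script):

```shell
#!/bin/sh
# Sketch of the bulk-INSERT load script; VIP, user and table are placeholders.
VIP=192.168.1.100
N=1000

# Generate one INSERT statement per row.
i=1
while [ "$i" -le "$N" ]; do
  echo "INSERT INTO databaseX.tableN (id) VALUES ($i);"
  i=$((i + 1))
done > /tmp/bulk_inserts.sql

# Feed the statements to the master through the virtual IP:
# mysql -h "$VIP" -u someuser -p databaseX < /tmp/bulk_inserts.sql

wc -l < /tmp/bulk_inserts.sql
```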
Why? Maybe I need some DRBD tuning?

Here is my drbd.conf in node0/node1:

cat /etc/drbd.conf
resource r0 {
   protocol C;
   startup { degr-wfc-timeout 120; }
   disk { on-io-error detach; }
   net  { timeout 60; connect-int 10; ping-int 10;
          max-buffers 2048; max-epoch-size 2048; }
   syncer { rate 10M; }

   on kvm-node0.srv.cat {
     disk      /dev/hda1;
     device    /dev/drbd0;
     meta-disk "internal";
   }

   on kvm-node1.srv.cat {
     disk      /dev/hda1;
     device    /dev/drbd0;
     meta-disk "internal";
   }
}

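If it helps to diagnose, the DRBD state before and after the failover can be read from /proc/drbd; with Protocol C both nodes should show UpToDate before the switch. Sample output from a DRBD 8.3-era kernel, not my actual counters:

```
# cat /proc/drbd
version: 8.3.x (api:88/proto:86-91)
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
    ns:12345 nr:0 dw:12345 dr:678 al:9 bm:3 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
```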