Hello,
any comments? =)
The verify test results seem very strange to us.
---
Best regards,
Eugene Istomin
Hello,
we ran some tests with DRBD9 9.0.1.
Some preliminary results:
#### 1. Verify depends on host position in "connection-mesh" ####
connection-mesh {
    hosts 6787-dblapro-edss 6788-dblapro-edss 6789-dblapro-edss;
}
If the first node is switched off, verify on the second node fails:
6788-dblapro-edss# drbdadm verify all
storage: State change failed: (-15) Need a connection to start verify or resync
Command 'drbdsetup verify storage 0 0' terminated with exit code 11
storage role:Secondary
disk:UpToDate
6787-dblapro-edss connection:Connecting
6789-dblapro-edss role:Secondary
peer-disk:UpToDate
After "drbdadm up all" on the first node (6787-dblapro-edss) and "drbdadm down all" on the second (6788-dblapro-edss), verify seems to work correctly (apart from the expected unavailability of node 2):
storage role:Secondary
disk:UpToDate
6787-dblapro-edss role:Secondary
replication:VerifyS peer-disk:UpToDate done:100.00
6788-dblapro-edss connection:Connecting
So, the host position in "connection-mesh" matters (and maybe not just for verify).
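If the host order in connection-mesh really is the culprit, one possible (untested on 9.0.1) workaround would be to list the pairwise connections by hand, using the explicit connection syntax from the DRBD 9 drbd.conf(5) man page, so that no single host is "first" in a mesh:

```
resource storage {
    # Explicit pairwise connections instead of connection-mesh;
    # each pair stands on its own, so there is no mesh ordering.
    connection {
        host 6787-dblapro-edss;
        host 6788-dblapro-edss;
    }
    connection {
        host 6787-dblapro-edss;
        host 6789-dblapro-edss;
    }
    connection {
        host 6788-dblapro-edss;
        host 6789-dblapro-edss;
    }
}
```

(The host names and resource name are just the ones from our setup; per-connection addresses/ports omitted for brevity.)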
#### 2. Strange verify behaviour ####
The same three nodes, starting from a fully OK status on all nodes:
storage role:Primary
disk:UpToDate
6788-dblapro-edss role:Secondary
peer-disk:UpToDate
6789-dblapro-edss role:Secondary
peer-disk:UpToDate
Then, the magic:
1. On the first (Primary role) node:
# mount /dev/drbd/by-res/storage/0 /media/storage
# dd if=/dev/urandom of=/media/storage/1 bs=1M count=100
-rw-r--r-- 1 root root 104857600 Feb 17 18:17 /media/storage/1
2. Down the second node and mount its disk as an ordinary disk (we use external metadata).
3. On the second node, run verify and wait till the sync is done:
storage role:Secondary
disk:UpToDate
6787-dblapro-edss role:Primary
replication:VerifyS peer-disk:UpToDate done:100.00
6789-dblapro-edss role:Secondary
replication:VerifyS peer-disk:UpToDate done:100.00
4. Mount the Secondary node's disk (expecting to see only the "1" file):
# drbdadm down all
# mount /dev/disk/by-label/storage /media/storage/
# ls -l /media/storage/
-rw-r--r-- 1 root root 104857600 Feb 17 18:21 1
-rw-r--r-- 1 root root 104857600 Feb 17 18:21 2
What does verify really do?
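Our current reading of the drbdadm documentation (worth double-checking for 9.0.1) is that online verify only compares block checksums between the peers and *marks* mismatching blocks as out-of-sync; it does not copy any data by itself, and a disconnect/connect cycle is needed to actually repair the marked blocks. If that model is right, it could explain the unexpected second file above. A sketch of that workflow:

```
# Online verify: compares block digests with the peer(s);
# mismatching blocks are only marked out-of-sync, not rewritten.
drbdadm verify storage

# After verify finishes, the kernel log reports any mismatches.
dmesg | grep -i 'out of sync'

# Cycling the connection triggers a resync of the marked blocks.
drbdadm disconnect storage
drbdadm connect storage
```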
#### 3. No information about speed, timing & bandwidth ####
How long will replication last?
What is the current bandwidth?
These questions apply to initial sync, resync and verify.
BTW, the current progress display (for ex. done:80.94) is sometimes faulty.
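Until speed/ETA reporting is built in, a rough figure can be computed by sampling a transfer counter twice. A minimal shell sketch; the `received:` field and the grep pattern are assumptions about the `drbdsetup status --statistics` output on a given build, so adjust as needed:

```shell
#!/bin/sh
# Average throughput in KiB/s from two counter samples taken
# SECONDS apart: kibps BEFORE AFTER SECONDS
kibps() {
    echo $(( ($2 - $1) / $3 ))
}

# In practice the two samples would come from something like
# (field name assumed, check your drbdsetup output):
#   drbdsetup status storage --statistics | grep -oP 'received:\K[0-9]+'
kibps 1000 61000 10    # prints 6000
```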
Thanks for the DRBD9 mesh topology! =)
---
Best regards,
Eugene Istomin