I forgot to mention: after making the good node Primary, I stopped the drbd process on the bad node. This is how it looks now:

cat /proc/drbd
version: 8.4.2 (api:1/proto:86-101)
GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by dag@Build64R6, 2012-09-06 08:16:10
 1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-----
    ns:0 nr:0 dw:1075884 dr:11456741 al:420 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1004148
 2: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-----
    ns:0 nr:0 dw:911340 dr:3110809 al:275 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:864788
 3: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-----
    ns:0 nr:0 dw:911260 dr:3415117 al:258 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:7626800
 4: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-----
    ns:0 nr:0 dw:911968 dr:3293909 al:231 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:720268

On Sun, Feb 7, 2016 at 10:16 AM, AALISHE <aalishe at gmail.com> wrote:
> Hello everyone,
>
> I am not so familiar with drbd, but I will try to be informative so
> that you are able to help me, I hope.
>
> I have a 2-node cluster. Node 1 had a disk failure; the disk happens
> to be one of the drbd backing disks and holds its data. After seeing
> the Diskless status in cat /proc/drbd, I migrated to the other node
> and made it Primary.
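For the record, the failover steps described above (promote the surviving node, stop drbd on the failed one) would look roughly like this with the standard drbd 8.4 tools; this is a sketch, assuming lws1h1 is the failed node per the kernel log below and lws1h2 the survivor:

```shell
# On the surviving node (lws1h2): promote resource r0 to Primary.
# In drbd 8.4, "drbdadm primary r0" promotes all volumes of r0 together.
drbdadm primary r0

# On the failed node (lws1h1): stop the DRBD service entirely
# (SysV init on CentOS 6). This leaves the peer waiting for a
# connection, which is why /proc/drbd above shows cs:WFConnection
# and ro:Primary/Unknown on every volume.
service drbd stop
```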
> I also have a backup of the data.
>
> The software is:
>
> - CentOS 6.5
>
> - drbd 8.4
> rpm -qa | grep drbd
> kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64
> drbd84-utils-8.4.2-1.el6.elrepo.x86_64
>
> - drbd config
>
> cat /etc/drbd.d/r0.res
> resource r0 {
>
>     volume 0 {
>         meta-disk internal;
>         device /dev/drbd1;
>         disk /dev/sda1;
>     }
>
>     volume 1 {
>         meta-disk internal;
>         device /dev/drbd2;
>         disk /dev/sdb1;
>     }
>
>     volume 2 {
>         meta-disk internal;
>         device /dev/drbd3;
>         disk /dev/sdc1;
>     }
>
>     volume 3 {
>         meta-disk internal;
>         device /dev/drbd4;
>         disk /dev/sdd1;
>     }
>
>     on lws1h1 {
>         address 10.100.0.223:7789;
>     }
>
>     on lws1h2 {
>         address 10.100.0.224:7789;
>     }
> }
>
> - Disk to replace
>
> Feb  5 04:45:33 lws1h1 kernel: sd 4:0:0:0: [sdc] Unhandled error code
> Feb  5 04:49:20 lws1h1 kernel: block drbd3: IO ERROR: neither local nor
> remote disk
>
> The disk is not part of LVM or RAID; it is ext4 / JBOD.
>
> Could you please guide me through replacing the disk properly, so that
> both ends show UpToDate/UpToDate again.
>
> Thanks!
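The replacement procedure being asked about follows the usual drbd 8.4 pattern for a failed backing device with internal metadata. A minimal sketch, assuming the new disk is partitioned as /dev/sdc1 again and the resource/volume naming from the r0.res above (verify against the DRBD 8.4 User's Guide before running, since create-md destroys any data on the target):

```shell
# --- On the failed node (lws1h1), after physically replacing the disk ---

# Re-create the partition on the new disk so /dev/sdc1 exists again,
# then initialize fresh DRBD metadata on volume 2 of r0 only
# (8.4 drbdadm accepts resource/volume addressing):
drbdadm create-md r0/2

# Bring DRBD back up; if the service is already running, attach the
# replaced volume and let drbdadm reconcile the running config:
service drbd start
drbdadm attach r0/2
drbdadm adjust r0

# The node should come up Secondary and reconnect. The replaced volume
# gets a full sync from the Primary; the untouched volumes only resync
# the blocks changed while this node was down. Watch progress until all
# volumes report ds:UpToDate/UpToDate:
cat /proc/drbd
```

Once the sync completes on all four volumes, the cluster is back to Connected/UpToDate and services can be migrated back if desired.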