> NOTE: the SDB devices on nfs2 are numbered differently.

Yet in your config they match up. May be a case of putting 2.0TB into a 1.5TB bag. -Ross

----- Original Message -----
From: drbd-user-bounces at lists.linbit.com
To: drbd-user at lists.linbit.com
Sent: Wed Mar 05 20:52:31 2008
Subject: [DRBD-user] DRBD LVM2 Trouble

Hey Guys,

I have a pretty bad situation on my hands. We had a node configured running DRBD 8.0.6. The goal was to keep it running in standalone mode until we provisioned a matching machine. We purchased the matching machine and finally had it fully configured today. I kicked off the initial sync, hoping to have both machines in sync within a day or two. Unfortunately, that was not the case. As soon as the sync started, our application began throwing errors because the primary node became read-only. I quickly shut off DRBD on the secondary node and attempted to restore the original configuration on the primary server. Sadly, no amount of backpedaling has helped us. We are currently dead in the water.

DRBD was configured on the primary node with LVM. We have/had three resources configured: the first two are 2TB in size and the third is 1.4-1.5TB. Since stopping the initial sync I have not been able to mount the LVM volume group that sits on top of the three resources.

NOTE: the SDB devices on nfs2 are numbered differently.
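[Editorial aside, not part of the original thread: Ross's "2.0TB into a 1.5TB bag" remark points at mismatched backing-partition sizes between the peers. One way to catch that before an initial sync is to compare the exact byte size of each backing device on both nodes; `check_sizes` below is a hypothetical helper written for illustration.]

```shell
# Hypothetical helper (not from the thread): print the exact byte size of
# each backing device so the output from nfs1 and nfs2 can be diffed before
# attaching DRBD. If a partition is smaller on one node, data written past
# that size on the other node cannot be replicated.
check_sizes() {
    for dev in "$@"; do
        # blockdev handles block devices; stat covers plain files (useful for testing)
        printf '%s %s\n' "$dev" "$(blockdev --getsize64 "$dev" 2>/dev/null || stat -c %s "$dev")"
    done
}

# Run on both nodes with the partitions from the config below, e.g.:
# check_sizes /dev/sdb1 /dev/sdb3 /dev/sdb5
```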
/var/log/messages was giving the following messages:

Mar  5 14:38:35 nfs2 kernel: drbd2: rw=0, want=3434534208, limit=3421310910
Mar  5 14:38:35 nfs2 kernel: attempt to access beyond end of device
Mar  5 14:38:35 nfs2 kernel: drbd2: rw=0, want=3434534216, limit=3421310910
Mar  5 14:38:35 nfs2 kernel: attempt to access beyond end of device
Mar  5 14:38:35 nfs2 kernel: drbd2: rw=0, want=3434534224, limit=3421310910
Mar  5 14:38:35 nfs2 kernel: attempt to access beyond end of device

Here is my conf:

global {
    usage-count no;
}

common {
    syncer { rate 20M; }
    handlers {
        local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
        outdate-peer "/sbin/drbd-peer-outdater";
    }
    startup {
        wfc-timeout 120;       # s
        degr-wfc-timeout 120;  # 2 minutes.
    }
    disk {
        on-io-error detach;
    }
    net {
        sndbuf-size 512k;
        max-buffers 2048;
        unplug-watermark 128;
        cram-hmac-alg "xxx";
        shared-secret "xxx";
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
}

resource drbd_sdb1 {
    protocol C;
    syncer { al-extents 1031; }
    on nfs2.imprev.net {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 216.145.23.104:7788;
        flexible-meta-disk /dev/sdb2;
    }
    on nfs1.imprev.net {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 216.145.23.103:7788;
        flexible-meta-disk /dev/sdb2;
    }
}

resource drbd_sdb3 {
    protocol C;
    syncer { al-extents 1031; }
    on nfs2.imprev.net {
        device /dev/drbd1;
        disk /dev/sdb3;
        address 216.145.23.104:7789;
        flexible-meta-disk /dev/sdb4;
    }
    on nfs1.imprev.net {
        device /dev/drbd1;
        disk /dev/sdb3;
        address 216.145.23.103:7789;
        flexible-meta-disk /dev/sdb4;
    }
}

resource drbd_sdb5 {
    protocol C;
    syncer { al-extents 1031; }
    on nfs2.imprev.net {
        device /dev/drbd2;
        disk /dev/sdb5;
        address 216.145.23.104:7790;
        flexible-meta-disk /dev/sdb6;
    }
    on nfs1.imprev.net {
        device /dev/drbd2;
        disk /dev/sdb5;
        address 216.145.23.103:7790;
        flexible-meta-disk /dev/sdb6;
    }
}

_______________________________________________
drbd-user mailing list
drbd-user at lists.linbit.com
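[Editorial aside: the kernel's want/limit values in the log above are in the standard 512-byte sectors, so the overshoot can be quantified directly. A quick sketch of the arithmetic, using the first logged request:]

```shell
# Values taken from the first "attempt to access beyond end of device" log pair.
want=3434534208   # sector the kernel tried to read
limit=3421310910  # size of /dev/drbd2 in sectors

echo "sectors beyond end: $(( want - limit ))"            # 13223298
echo "bytes beyond end:   $(( (want - limit) * 512 ))"
echo "GiB beyond end:     $(( (want - limit) * 512 / 1024 / 1024 / 1024 ))"
```

In other words, the device ended up roughly 6 GiB smaller than the data that had already been written above it, which is consistent with the LVM volume group no longer fitting on the shrunken DRBD device.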
http://lists.linbit.com/mailman/listinfo/drbd-user