On Thu, Jul 15, 2010 at 09:16:43AM +1000, adamn at eyemedia.com.au wrote:
> Ok, I didn't realise this was that kind of an issue; I just assumed DRBD
> wasn't happy with my setup due to the second node not being online.
> The partition /dev/sdb1 is only set to about 40GB while I'm testing the
> setup; once I've nutted everything out, the machine will be rebuilt to
> have its full capacity (around 7TB). It's an LVM partition.
>
> I created a single Logical Volume of around 20GB and have it exported via
> iscsi-scst.
>
> A single VMware ESXi server (again, this is all testing at this phase) is
> set up with iSCSI to connect to the SAN (testing MPIO). As soon as I
> "scan" from the storage adapter, I start getting
> [28609.797320] block drbd1: al_complete_io() called on inactive extent 623
>
> etc.
>
> see http://www.genis-x.com/files/syslog

I never encountered that particular message, not in testing, not in production. I cannot remember any report of it, either.

It indicates a serious imbalance in the reference counts of "active extents", so "something" is "special" about your setup. I suspect some sort of build inconsistency, so please double-check that you have a clean build of the DRBD module exactly matching the running kernel.

Try to narrow it down. Try to reproduce without going through the iSCSI layer first, accessing DRBD locally. Try to reproduce with a "vanilla" kernel, and with a different iSCSI target.

--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

__
please don't Cc me, but send to list -- I'm subscribed
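[List-archive note] The build-consistency check suggested above can be sketched as a quick shell test: compare the kernel version the installed drbd module was built against (its vermagic string) with the running kernel. This is a generic sketch, not from the original mail; it assumes `modinfo` is available and that the module is installed under the standard name "drbd".

```shell
# Sketch: check whether the installed drbd module was built for the
# running kernel. Adjust the module name/path for your distribution.
running="$(uname -r)"
if modinfo drbd >/dev/null 2>&1; then
    # vermagic looks like "2.6.32-5-amd64 SMP mod_unload ...";
    # the first field is the kernel version the module was built for.
    built_for="$(modinfo -F vermagic drbd | awk '{print $1}')"
    if [ "$built_for" = "$running" ]; then
        status=match
        echo "drbd module matches running kernel ($running)"
    else
        status=mismatch
        echo "MISMATCH: module built for $built_for, kernel is $running"
    fi
else
    status=missing
    echo "no drbd module found (running kernel: $running)"
fi
```

If the vermagic does not match, rebuild the module against the running kernel's headers before testing further. To take the iSCSI layer out of the picture, as suggested, exercising the /dev/drbdX device directly on the primary node (e.g. with dd or a local filesystem) is a reasonable first step.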