[DRBD-user] Re: [Scst-devel] vSphere MPIO with scst/drbd in dual primary mode. WAS: Re: R: Re: virt_dev->usn randomly generated!?

Brian Jared bjared at ethosprime.com
Mon Mar 8 17:21:50 CET 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Since we've been using ESX3.5 and now vSphere4 with drbd in active/active for 3 years, 
I thought I'd share some things that we do. 

1. We have multiple iSCSI targets carved out of the SANs, and we concluded that as long as each 
vmware host uses a given target via the same SAN, you prevent split-brain scenarios. But 
you don't have to leave your second SAN sitting idle: your next iSCSI target can be served 
via the second SAN, as long as all vmware hosts have it "active" on the same 
SAN. 
2. Every once in a while, do an audit to make sure that each iSCSI target is active on the same SAN 
from every vmware host. 
3. If possible, have a script output the condition of the disks in your RAID and e-mail you a 'diff' 
if it changes (e.g. "tw_cli /c0 show > /etc/disk-status0-7.txt", or whatever the LSI 
equivalent is); see the sketch after this list. 
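
Here is a minimal sketch of what such a script could look like as a cron job, assuming 3ware's tw_cli; the state-file path and admin address are made up, so adjust for your controller and site: 

#!/bin/sh 
# Hypothetical RAID-status watchdog (run from cron): snapshot the controller 
# state and mail a diff when it changes. 
STATE=/var/lib/raid-status/c0.txt 
ADMIN=admin@example.com 
NEW=`mktemp` || exit 1 
tw_cli /c0 show > "$NEW"    # or your MegaCli/LSI equivalent 
if [ -f "$STATE" ] && ! diff -u "$STATE" "$NEW" > /tmp/raid-diff.$$ 2>&1; then 
    # Output changed since the last run: mail the diff. 
    mail -s "RAID status changed on `hostname`" "$ADMIN" < /tmp/raid-diff.$$ 
fi 
mv "$NEW" "$STATE" 
rm -f /tmp/raid-diff.$$ 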

We have had great success with two linux SAN servers running DRBD in active/active in three 
separate locations. It's great to hear someone having success with scst. I'll definitely be trying 
that if we install a fourth set of SANs. 

Thanks for your patience with me, and I'm sorry to hear that MY confusion propagated to others. :) 

--Brian 

----- Original Message ----- 
From: "Zhen Xu" <zhenxu_zj at yahoo.com> 
To: "Matteo Tescione" <matteo at rmnet.it> 
Cc: "Vladislav Bolkhovitin" <vst at vlnb.net>, scst-devel at lists.sourceforge.net, "drbd-user" <drbd-user at lists.linbit.com>, "Matteo Tescione" <matteo at rmnet.it>, "Brian Jared" <bjared at ethosprime.com> 
Sent: Saturday, March 6, 2010 6:23:20 PM GMT -05:00 US/Canada Eastern 
Subject: Re: [Scst-devel] vSphere MPIO with scst/drbd in dual primary mode. WAS: Re: R: Re: virt_dev->usn randomly generated!? 



I wonder how you can make NV_CACHE battery backed. The NV_CACHE in SCST is just the system memory/page cache; it is not the cache on the RAID card. You could have a UPS hooked to the server running SCST. However, if the system fails (memory, hardware, or just a seg fault) and you have to reboot without a proper shutdown, the page cache will be lost. It is a very interesting setup that you want to do. I am interested to hear whether you have much success. 

Zhen 




From: Matteo Tescione <matteo at rmnet.it> 
To: Zhen Xu <zhenxu_zj at yahoo.com> 
Cc: Vladislav Bolkhovitin <vst at vlnb.net>; scst-devel at lists.sourceforge.net; drbd-user <drbd-user at lists.linbit.com>; Matteo Tescione <matteo at rmnet.it>; Brian Jared <bjared at ethosprime.com> 
Sent: Sat, March 6, 2010 5:55:26 PM 
Subject: Re: [Scst-devel] vSphere MPIO with scst/drbd in dual primary mode. WAS: Re: R: Re: virt_dev->usn randomly generated!? 

As you read in my previous post, the targets are running virtual, so I didn't care about the NV_CACHE flag for the moment. 
I have been running your kind of setup for almost 2 years now without any major issue (w/ NV_CACHE battery backed). 

The goal of this setup is to use the secondary node as an active node, trying to improve at least read operations. 
As for writes, I'm aware of the additional latency, but I'm thinking that drbd can cope well with some concurrent writes. Maybe someone from the drbd lists can clarify this. 
Split-brain cases can very well be avoided by the adoption of a serial cable and dopd. I recently had frequent kernel panics caused by large commands not being handled well in scst (fixed a few days ago), but no data loss occurred in drbd primary/secondary configuration, even with NV_CACHE exposed. 
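For reference, the dopd wiring mentioned above looks roughly like this with DRBD 8.3 plus Heartbeat (a sketch along the lines of the DRBD docs; the resource name is made up and the helper path varies by distribution): 

resource r0 { 
  disk     { fencing resource-only; }   # let dopd mark the peer Outdated 
  handlers { fence-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5"; } 
} 

plus, in Heartbeat's ha.cf: 

respawn hacluster /usr/lib/heartbeat/dopd 
apiauth dopd gid=haclient uid=hacluster 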
Obviously I'm going to test with BLOCKIO or FILEIO to see some real numbers. 
There are still many things left to clarify/test/stress before starting to think about moving to production. 
Thanks 
-- 
matteo 


----- Original Message ----- 
From: "Zhen Xu" <zhenxu_zj at yahoo.com> 
To: "Matteo Tescione" <matteo at rmnet.it>, "Brian Jared" <bjared at ethosprime.com> 
Cc: "Vladislav Bolkhovitin" <vst at vlnb.net>, scst-devel at lists.sourceforge.net, "drbd-user" <drbd-user at lists.linbit.com> 
Sent: Saturday, March 6, 2010 23:27:54 (GMT+0100) Europe/Berlin 
Subject: Re: [Scst-devel] vSphere MPIO with scst/drbd in dual primary mode. WAS: Re: R: Re: virt_dev->usn randomly generated!? 




Sounds like you are doing MPIO from the vSphere initiator to two different targets on different hosts, and those two targets are sync'd with DRBD. I do not think what you are doing here is safe. The two target hosts have page cache/NV_CACHE that is not sync'd; potentially, you could have a lot of in-flight IO in the page cache which has not yet made it to the drbd layer. Also, how do you deal with split-brain situations? With the kind of setup you have, you will probably have a hard time figuring out which copy to keep after a split brain. 

I was able to set up a redundant SCST cluster with DRBD and Pacemaker with MPIO. I had to pick one host as primary and run the second node as secondary. Actually, you create a master/slave relationship in Pacemaker and it will manage which node is primary. MPIO is done with multiple ethernet ports on both nodes; I had 2 ethernet ports on each node for iSCSI traffic. Again, this is managed through Pacemaker, and the IP addresses just float with the DRBD primary host. I was able to generate a lot of IO on the initiator and reboot the primary host, and the initiator side would just pause a few seconds and continue. I ran the SCST back-end in FILEIO mode with NV_CACHE. Running the SCST back-end in BLOCKIO mode will generate "concurrent write" errors with DRBD, since DRBD expects all IO to be serialized. The more I think about it, the more I think running FILEIO mode with NV_CACHE is not safe, as it can cause corruption due to lost in-flight IO in the page cache/NV_CACHE. 
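
Roughly, the Pacemaker side of that setup looks like the crm configuration sketch below. The resource names, the DRBD resource "r0", and the address are illustrative, not from my actual cluster, and only one floating IP is shown (the second port is configured the same way): 

primitive p_drbd ocf:linbit:drbd \ 
        params drbd_resource="r0" \ 
        op monitor interval="15s" 
ms ms_drbd p_drbd \ 
        meta master-max="1" master-node-max="1" \ 
        clone-max="2" clone-node-max="1" notify="true" 
primitive p_ip ocf:heartbeat:IPaddr2 \ 
        params ip="192.168.10.10" cidr_netmask="24" 
colocation ip_with_master inf: p_ip ms_drbd:Master 
order drbd_before_ip inf: ms_drbd:promote p_ip:start 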




From: Matteo Tescione <matteo at rmnet.it> 
To: Brian Jared <bjared at ethosprime.com> 
Cc: Vladislav Bolkhovitin <vst at vlnb.net>; scst-devel at lists.sourceforge.net; drbd-user <drbd-user at lists.linbit.com> 
Sent: Sat, March 6, 2010 1:43:35 PM 
Subject: [Scst-devel] vSphere MPIO with scst/drbd in dual primary mode. WAS: Re: R: Re: virt_dev->usn randomly generated!? 

Hi folks, 
after a bit of experimenting, I successfully created a drbd active/active cluster with scst/iscsi exported to a round-robin vSphere initiator. 

The configuration is 2 virtual machines with centos5-64, patched linux-2.6.33, and drbd 8.3.7. The initiators are the vSphere software iSCSI initiator. 
The relevant config in drbd is the net section, allow-two-primaries; a sketch follows. 
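A minimal sketch of that net section (the resource name and the after-sb policies are illustrative, not copied from my config): 

resource r0 { 
  net { 
    allow-two-primaries;                  # required for dual-primary 
    after-sb-0pri discard-zero-changes;   # example split-brain recovery policies 
    after-sb-1pri discard-secondary; 
    after-sb-2pri disconnect; 
  } 
} 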
Relevant config in scst.conf is: 

DEVICE rmnet-devel,/dev/drbd0,NV_CACHE,512 

[ASSIGNMENT Default_iqn.2010-02.com.scst:RMnet-devel] 
#DEVICE <device name>,<lun> 
DEVICE rmnet-devel,0 

Note that I'm using the same scst.conf and iscsi-scstd.conf in both targets. 
The vSphere initiators see 2 paths to 1 device, switched to round-robin. 
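(For reference, on vSphere 4 the path policy can also be set from the CLI with something along the lines of "esxcli nmp device setpolicy --device <naa.id> --psp VMW_PSP_RR"; that is quoted from memory, so verify against your release.) 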
Since the targets are running virtual with no real hardware, I have no idea at the moment what kind of performance increase it could bring. 
Vlad, what do you think about it? Is FILEIO the best choice here? 

Hope this helps, 
-- 
matteo 




----- Original Message ----- 
From: "Brian Jared" <bjared at ethosprime.com> 
To: "Vladislav Bolkhovitin" <vst at vlnb.net> 
Cc: scst-devel at lists.sourceforge.net 
Sent: Friday, March 5, 2010 19:54:51 (GMT+0100) Europe/Berlin 
Subject: Re: [Scst-devel] R: Re: virt_dev->usn randomly generated!? 

That was the intention...but the scst.conf file on the other machine looks 
a lot different. There's no [HANDLER vdisk] section, and only GROUP and 
ASSIGNMENT sections for the Default_iqn.x.x.x ... I'm guessing that's why 
they didn't work? Below is the config that is on the other host, but I 
can't believe it's so different from the other one... It's been a while, 
so I am not sure what all I tried back then, or why this is so drastically 
different. 

--Brian 

# Automatically generated by SCST Configurator v1.0.6. 

[HANDLER disk] 
#DEVICE <H:C:I:L> 

[HANDLER disk_perf] 
#DEVICE <H:C:I:L> 

[GROUP Default] 
#USER <user wwn> 

[GROUP Default_iqn.2000-01.com.ethosprime:nas2.drbd0.bucket01] 
#USER <user wwn> 

[GROUP Default_iqn.2000-01.com.ethosprime:nas2.drbd1.bucket02] 
#USER <user wwn> 

[GROUP Default_iqn.2000-01.com.ethosprime:nas2.drbd2.bucket03] 
#USER <user wwn> 

[GROUP Default_iqn.2000-01.com.ethosprime:nas2.drbd3.bucket04] 
#USER <user wwn> 

[ASSIGNMENT Default] 
#DEVICE <device name>,<lun> 

[ASSIGNMENT Default_iqn.2000-01.com.ethosprime:nas2.drbd0.bucket01] 
#DEVICE <device name>,<lun> 
DEVICE 0:2:1:0,0 

[ASSIGNMENT Default_iqn.2000-01.com.ethosprime:nas2.drbd1.bucket02] 
#DEVICE <device name>,<lun> 
DEVICE 0:2:2:0,0 

[ASSIGNMENT Default_iqn.2000-01.com.ethosprime:nas2.drbd2.bucket03] 
#DEVICE <device name>,<lun> 
DEVICE 1:2:0:0,0 

[ASSIGNMENT Default_iqn.2000-01.com.ethosprime:nas2.drbd3.bucket04] 
#DEVICE <device name>,<lun> 
DEVICE 1:2:1:0,0 

[TARGETS enable] 
#HOST <wwn identifier> 

[TARGETS disable] 
#HOST <wwn identifier> 

----- Original Message ----- 
From: "Vladislav Bolkhovitin" < vst at vlnb.net > 
To: "Brian Jared" < bjared at ethosprime.com > 
Cc: scst-devel at lists.sourceforge.net 
Sent: Friday, March 5, 2010 12:58:48 PM GMT -05:00 US/Canada Eastern 
Subject: Re: [Scst-devel] R: Re: virt_dev->usn randomly generated!? 

Brian Jared, on 03/05/2010 06:57 PM wrote: 
> I found my old config. For some reason I thought I deleted all of the old 
> scst stuff. 
> 
> This is what my config looked like before I gave up. Those DEVICE lines 
> appear just as you mentioned. So, what did I do wrong? 

Used the same file and scst_vdisk_ID on both hosts? 
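(scst_vdisk_ID is, as far as I recall, a module parameter, so matching it would look something like "modprobe scst_vdisk scst_vdisk_ID=1234" with the same value on both nodes; the value and exact invocation here are illustrative, check the SCST README.) 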

> --Brian 
> 
> # Automatically generated by SCST Configurator v1.0.6. 
> 
> [HANDLER vdisk] 
> #DEVICE <vdisk name>,<device path>,<options>,<block size> 
> DEVICE bucket01,/dev/drbd0,NV_CACHE,512 
> DEVICE bucket02,/dev/drbd1,NV_CACHE,512 
> DEVICE bucket03,/dev/drbd2,NV_CACHE,512 
> DEVICE bucket04,/dev/drbd3,NV_CACHE,512 
> 
> [HANDLER vcdrom] 
> #DEVICE <vdisk name>,<device path> 
> 
> [GROUP Bucket01] 
> #USER <user wwn> 
> 
> [GROUP Bucket01_iqn.2000-01.com.ethosprime:nas1.drbd0.bucket01] 
> #USER <user wwn> 
> 
> [GROUP Bucket02] 
> #USER <user wwn> 
> 
> [GROUP Bucket02_iqn.2000-01.com.ethosprime:nas1.drbd1.bucket02] 
> #USER <user wwn> 
> 
> [GROUP Bucket03] 
> #USER <user wwn> 
> 
> [GROUP Bucket03_iqn.2000-01.com.ethosprime:nas1.drbd2.bucket03] 
> #USER <user wwn> 
> 
> [GROUP Bucket04] 
> #USER <user wwn> 
> 
> [GROUP Bucket04_iqn.2000-01.com.ethosprime:nas1.drbd3.bucket04] 
> #USER <user wwn> 
> 
> [GROUP Default] 
> #USER <user wwn> 
> 
> [ASSIGNMENT Bucket01] 
> #DEVICE <device name>,<lun> 
> 
> [ASSIGNMENT Bucket01_iqn.2000-01.com.ethosprime:nas1.drbd0.bucket01] 
> #DEVICE <device name>,<lun> 
> DEVICE bucket01,0 
> 
> [ASSIGNMENT Bucket02] 
> #DEVICE <device name>,<lun> 
> 
> [ASSIGNMENT Bucket02_iqn.2000-01.com.ethosprime:nas1.drbd1.bucket02] 
> #DEVICE <device name>,<lun> 
> DEVICE bucket02,0 
> 
> [ASSIGNMENT Bucket03] 
> #DEVICE <device name>,<lun> 
> 
> [ASSIGNMENT Bucket03_iqn.2000-01.com.ethosprime:nas1.drbd2.bucket03] 
> #DEVICE <device name>,<lun> 
> DEVICE bucket03,0 
> 
> [ASSIGNMENT Bucket04] 
> #DEVICE <device name>,<lun> 
> 
> [ASSIGNMENT Bucket04_iqn.2000-01.com.ethosprime:nas1.drbd3.bucket04] 
> #DEVICE <device name>,<lun> 
> DEVICE bucket04,0 
> 
> [ASSIGNMENT Default] 
> #DEVICE <device name>,<lun> 
> 
> [TARGETS enable] 
> #HOST <wwn identifier> 
> 
> [TARGETS disable] 
> #HOST <wwn identifier> 
> 
> 
> 
> ----- Original Message ----- 
> From: "Vladislav Bolkhovitin" < vst at vlnb.net > 
> To: "Brian Jared" < bjared at ethosprime.com > 
> Cc: scst-devel at lists.sourceforge.net , "Matteo Tescione" < matteo at rmnet.it > 
> Sent: Friday, March 5, 2010 8:44:54 AM GMT -05:00 US/Canada Eastern 
> Subject: Re: [Scst-devel] R: Re: virt_dev->usn randomly generated!? 
> 
> Brian Jared, on 03/04/2010 04:41 AM wrote: 
>> Maybe you could give an example of a config that would generate 
>> the same SN from two targets that MUST be different. I'm very curious, 
>> as I couldn't find any documentation on this. 
>> 
>> It's my understanding that targets must all be uniquely named, and unique 
>> target names pushed through a static (randomly generated) array will also 
>> be unique, and thus vSphere4 won't see them as being the same LUN on 
>> different hosts. 
>> 
>> Again, the problem is that my only interface appears to be my naming of the 
>> iSCSI target LUN, which has to be unique...so...they'll always appear like 
>> 4 unique disks, and not two disks with two paths each. 
>> 
>> An example config might clear this up. I'm still wondering how this works. 
> 
> In SCST all devices have names. Matteo gave you an example: 
> 
> DEVICE devel-datastore,/dev/drbd0,NV_CACHE,512 
> 
> Here "devel-datastore" is the name. From this name and _only from it_ 
> the ID/USN is generated. 
> 
> Vlad 
> 


-- 
; Brian Jared <bjared at ethosprime.com> 
: Ethos Prime Engineer 
: http://www.ethosprime.com 
: Cell: 317.201.9036 






-- 
; Brian Jared <bjared at ethosprime.com> 
: Ethos Prime Engineer 
: http://www.ethosprime.com 
: Cell: 317.201.9036 

