[DRBD-user] drbd pacemaker scst/srp 2 node active/passive question

Sebastian Riemer sebastian.riemer at profitbricks.com
Fri Mar 1 17:52:15 CET 2013



On 01.03.2013 17:16, Adam Goryachev wrote:
>> ----- Original Message -----
>> From: "Dan Barker" <dbarker at visioncomm.net>
>> To: "drbd List (drbd-user at lists.linbit.com)" <drbd-user at lists.linbit.com>
>> Sent: Friday, March 1, 2013 4:59:40 AM
>> Subject: Re: [DRBD-user] drbd pacemaker scst/srp 2 node active/passive question
>>
>> That's easy, I've been doing it for years, going back to ESXi 4.1 at least, maybe even to 4.0. I run ESXi 5.1 now.
>>
>> Set up both servers in ESXi under Configuration, Storage Adapters. Use static discovery, because you can list the targets whether they exist or not. When the primary goes down, the secondary will come up (if it's available) in ESXi without intervention.
>>
>> In my setup, the .46 drbd is secondary, and invisible to ESXi. .47 is primary and visible to ESXi. I run the following targets (you can do this with the GUI, but I get lazy):
>>
>> vmkiscsi-tool -S -a "172.30.0.46 iqn.2012-05.com.visioncomm.DrbdR:Storage03" vmhba39
>> vmkiscsi-tool -S -a "172.30.0.46 iqn.2012-06.com.visioncomm.DrbdR:Storage02" vmhba39
>> vmkiscsi-tool -S -a "172.30.0.46 iqn.2012-08.com.visioncomm.DrbdR:Storage01" vmhba39
>> vmkiscsi-tool -S -a "172.30.0.46 iqn.2012-08.com.visioncomm.DrbdR:Storage00" vmhba39
>> vmkiscsi-tool -S -a "172.30.0.47 iqn.2012-05.com.visioncomm.DrbdR:Storage03" vmhba39
>> vmkiscsi-tool -S -a "172.30.0.47 iqn.2012-06.com.visioncomm.DrbdR:Storage02" vmhba39
>> vmkiscsi-tool -S -a "172.30.0.47 iqn.2012-08.com.visioncomm.DrbdR:Storage01" vmhba39
>> vmkiscsi-tool -S -a "172.30.0.47 iqn.2012-08.com.visioncomm.DrbdR:Storage00" vmhba39
>>
>> If both are primary, I see 4 targets, 8 paths. This "never<g>" happens. Usually, I see 4 targets, 4 paths.
>>
>> I always do the switchover manually, so you might see slightly different results. My steps are:
>>
>>  Go primary on the .46 server.
>>
>>  Start the target (iscsi-target) software on the .46 server.
>>
>>  Rescan on all ESXi.
>>
>>  Stop the target software on the .47 server (ESXi fails over to the other path seamlessly at this point).
>>
>>  Stop drbd on .47 and do whatever maintenance was necessary.
>>
>> To reverse:
>>
>>  The same steps, but you can skip the scan if the ESXi have "seen" both targets since boot.  One shows up as active and the other shows up as dead, but the VMs don't care.
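The manual switchover steps above could be sketched as a script. This is only an illustration: the DRBD resource name (`storage`), the `iscsi-target` init script path, and the dry-run wrapper are assumptions, not from the original post.

```shell
#!/bin/sh
# Sketch of the manual switchover described above, run on the .46 server.
# DRY_RUN=echo prints the commands instead of running them; set it to the
# empty string to actually execute. Resource/service names are assumptions.
DRY_RUN=echo

failover_to_46() {
    $DRY_RUN drbdadm primary storage           # 1. go primary on .46
    $DRY_RUN /etc/init.d/iscsi-target start    # 2. start the target software
    # 3. rescan on all ESXi hosts (vSphere GUI or vmkiscsi-tool)
    # Then, on the .47 server:
    #   /etc/init.d/iscsi-target stop          # 4. ESXi fails over here
    #   drbdadm secondary storage              # 5. stop drbd, do maintenance
}

failover_to_46
```

Reversing the direction is the same script with the roles of .46 and .47 swapped.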
> Question: Given the above, at some point you have dual primary, with
> iscsi-target running on both nodes for a short period of time. Is there
> actually a problem with running like this all the time? Regardless of
> which DRBD node is written to, DRBD should ensure the write is copied to
> the other node. Reads should also be irrelevant, since it doesn't matter
> which DRBD node the data comes from.
> 
> However, I'm not confident enough to actually try this, especially if it
> will break in some subtle and horrible way by corrupting the data slowly
> over a period of 6 months etc...

WTF? Why are you writing about iSCSI?

SRP is the transport. SRP has different addressing (InfiniBand GUIDs).
But yes, the unmaintained "srptools" shouldn't be used for discovery.
Instead the SRP connection strings should be echoed directly to sysfs.

Here is an example for SRP with IB:
$ SRP1="id_ext=0002c903004ed0b2,\
ioc_guid=0002c903004ed0b2,\
dgid=fe800000000000000002c903004ed0b3,\
pkey=ffff,service_id=0002c903004ed0b2"

$ echo "$SRP1" > /sys/class/infiniband_srp/srp-mlx4_0-1/add_target

The GUIDs never change, so the strings always apply. 0002c903004ed0b2 is
the IB HCA GUID and 0002c903004ed0b3 is the GUID of the first IB port.

The connection strings can be read from the SCST SRP target.

$ cat /sys/kernel/scst_tgt/targets/ib_srpt/ib_srpt_target_0/login_info
tid_ext=0002c903004ed0b2,ioc_guid=0002c903004ed0b2,pkey=ffff,dgid=fe800000000000000002c903004ed0b3,service_id=0002c903004ed0b2
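Note that the target-side login_info string starts with "tid_ext=" while
the initiator-side add_target string starts with "id_ext=", so the string
needs one rename before it can be used. A sketch of that transformation
(the sysfs paths are the ones from this thread, not universal):

```shell
#!/bin/sh
# Turn the SCST login_info line into an initiator-side SRP string.
# In practice login_info would be read from the target, e.g.:
#   login_info=$(cat /sys/kernel/scst_tgt/targets/ib_srpt/ib_srpt_target_0/login_info)
# Here it is hardcoded with the value from this thread for illustration.
login_info='tid_ext=0002c903004ed0b2,ioc_guid=0002c903004ed0b2,pkey=ffff,dgid=fe800000000000000002c903004ed0b3,service_id=0002c903004ed0b2'

# The only change needed: rename the leading tid_ext key to id_ext.
srp_target=$(printf '%s\n' "$login_info" | sed 's/^tid_ext=/id_ext=/')
printf '%s\n' "$srp_target"

# On the initiator this string would then be written to
#   /sys/class/infiniband_srp/srp-mlx4_0-1/add_target
```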

Cheers,
Sebastian
