I can't answer your questions with any confidence, so I'll say nothing about them. However, I never got my VMFS file systems to behave using fileio. I believe fileio flushes late, so the two targets end up serving different data. Using blockio, everything worked as expected: a change made on ESXi-A (via target A) was immediately visible on ESXi-B (via target B). With fileio, no amount of refreshing made the changes show up until a lot of other activity had occurred.

I'm using ESXi (4.0u1) on iSCSI-Target (1.4.20) on DRBD (8.3.7). Not very similar to your setup, but possibly helpful nonetheless.

-----Original Message-----
From: drbd-user-bounces at lists.linbit.com [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of steve
Sent: Monday, June 14, 2010 8:59 AM
To: drbd-user at lists.linbit.com
Subject: [DRBD-user] DRBD primary/primary SCST

Hello,

I'm new to DRBD and have set up the following (please see attachment).

System information

2 storage nodes
---------------
- CentOS 5.5 x86_64
- vanilla kernel 2.6.33-5 with the scst_exec_req_fifo patch
- DRBD 8.3.7 (in-kernel)
- InfiniBand stack (in-kernel, no OFED)
- InfiniBand HCA (dual-port MT26418)
- cluster setup (clvmd)
- RAID controller Adaptec 52445 (with BBU)

3 VMware hosts
--------------
- ESX4 vSphere
- VMware MPIO with "fixed path"
- 4 paths for every LUN/vdisk
- vdisks using VMFS3, no RDMs

DRBD setup
----------
- primary/primary setup
- replication via IPoIB
- both nodes' RAID controllers with BBU
- /dev/sdx (RAID) as backend
- PV (LVM2) on the DRBD resource

Replication and split-brain recovery are working fine. Write speed is about 550 MByte/s, read speed about 900 MByte/s (RAID 50, 12x SAS, simple sequential test with dd).

Now I'm trying to export my LUNs via the SCST target driver (ib_srpt). The SCST vdisks are configured with FILEIO and WRITE_THROUGH. On my VMware hosts I can see the LUNs exported by the two storage nodes.
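The fileio/blockio difference described in the reply above can be illustrated with an ietd.conf fragment for iSCSI Enterprise Target (the target IQN and backing device here are placeholders, not anyone's actual configuration). With Type=blockio, I/O goes straight to the block device and bypasses the target host's page cache, which is why a change becomes visible to the other node immediately:

```text
# /etc/ietd.conf -- hypothetical example; adjust IQN and device to your setup
Target iqn.2010-06.com.example:vmfs.lun0
    # blockio: I/O is submitted directly to the block device,
    # no page cache on the target host
    Lun 0 Path=/dev/drbd0,Type=blockio
    # fileio (the default) would instead route I/O through the page cache,
    # which can be flushed late:
    # Lun 0 Path=/dev/drbd0,Type=fileio
```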
On the SCST mailing list I was advised that it is not safe to use VMware MPIO with round-robin (VMware uses SPC-2 reservations) when running multiple instances of SCST on different machines:

- using multiple hosts with one SCST server is OK
- do not share the same LUNs across different SCST servers (storage A and storage B cannot share their reservation state for the LUNs)

My question: suppose the SCST instance dies after the data transfer, but before a write operation from the VMware host has been ACK'd by the SCST/IB layer. Is the data already replicated to the other primary at that point (and corrupted, because the write was never ACK'd), or will this be handled by the VMFS file system? When exactly does DRBD flush to the backend device?

Does anyone have a how-to for safely integrating DRBD and SCST?

Any comments on this setup are welcome!

Kind regards,
Steve
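For reference, a dual-primary DRBD 8.3 resource is typically declared along these lines (the hostnames, devices, and addresses below are placeholders, not the setup described above). With protocol C, DRBD completes a write toward the upper layer only after the data has reached stable storage on both nodes, which is the part of DRBD's behaviour that the ACK question touches on:

```text
# drbd.conf sketch -- hypothetical names and addresses
resource r0 {
  protocol C;               # write completes only after both nodes have it on disk
  net {
    allow-two-primaries;    # required for primary/primary operation
    # automatic split-brain recovery policies:
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  startup {
    become-primary-on both;
  }
  on nodeA {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on nodeB {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```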