Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On 12/01/2011 08:47 PM, Kushnir, Michael (NIH/NLM/LHC) [C] wrote:
> I got it now, I think...
>
> So, no matter what, multipathing two separate iSCSI targets is bad... This is just because of how iSCSI works.
>
> Is there another transport I could use to multipath safely to a dual-primary DRBD? CMAN with GNBD running on each DRBD node? Any other alternative?

Maybe there is - if the concept of multipathing is well understood (which is what we implicitly try to achieve here *g*).

Multipath I/O (let's call it "mpio" for convenience) is about *multiple* connections to *one* block device/storage system (let's call that a "disk" from now on). So if you have two DRBD nodes which together make up one "disk", mpio is about creating multiple connections from the active node to the server that wants to use that disk. The mpio driver takes care of picking the best connection between disk and server, on the premise of exclusive access to that disk.

So DRBD and mpio play their game on different fields. You can export a DRBD device via mpio, but you always have to ensure exclusive access to the disk.

The dual-primary and cluster-filesystem features *in combination* are about simultaneous access. It would be dangerous to use dual-primary disks like normal disks, but if you put a cluster filesystem on them, you can access files on the mounted disk safely, because the filesystem takes care of managing your writes.

Now it gets interesting (please someone correct me if I'm wrong): if I create an image file on that clustered filesystem (dd if=/dev/zero of=myfile.img), the cluster filesystem is supposed to handle simultaneous writes, so I can access the image on both nodes and the writes get serialized by the filesystem. If write access to that image file is then made exclusive, it could be exported via iSCSI and used with mpio (rough sketches below).

Even though this contradicts my understanding of iSCSI mpio, would this work? If it does, I guess performance won't be satisfactory.

--
Mark
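Purely as an illustration of the dual-primary side of this (untested sketch; resource name, hostnames, addresses and devices are made-up placeholders), a DRBD 8.3-style drbd.conf could look roughly like this:

    resource r0 {
        protocol C;                     # synchronous replication, needed for dual-primary
        startup {
            become-primary-on both;     # promote both nodes to Primary at startup
        }
        net {
            allow-two-primaries;            # permit simultaneous Primary role on both nodes
            after-sb-0pri discard-zero-changes;
            after-sb-1pri discard-secondary;
        }
        on nodea {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.10.1:7789;
            meta-disk internal;
        }
        on nodeb {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.10.2:7789;
            meta-disk internal;
        }
    }

A cluster filesystem (GFS2, OCFS2, ...) then goes on /dev/drbd0 and is mounted on both nodes; that is what makes the simultaneous access safe.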
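And the image-file-over-iSCSI idea, again only as a rough sketch: tgtd/tgtadm is just one possible target implementation, and the mount point, IQN and sizes are placeholders. The important constraint from above is that the image is exported from exactly one node at a time.

    # on either node, with the cluster filesystem mounted at /mnt/gfs
    dd if=/dev/zero of=/mnt/gfs/myfile.img bs=1M count=10240      # 10 GB backing file

    # on the exporting node only, publish the image as an iSCSI LUN
    tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2011-12.example:myfile
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /mnt/gfs/myfile.img
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL     # restrict initiators as needed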