[DRBD-user] SAN, drbd, ha, shared disks, ataoe, iscsi, gnbd

Maciej Bogucki macbogucki at gmail.com
Wed Aug 13 11:48:16 CEST 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Alex wrote:
>>> Yes, that's why I want to use DRBD, to work around the RAID1 limitations
>>> and group the computerX machines two by two. In this scenario only one
>>> question remains: how can I join the /dev/drbd* devices together, or, if
>>> that is not possible: how can I join all 8 volX together in order to have:
>>> - Fault tolerance: failure of a single drive (volX) or server (computerX)
>>> should not bring down the GFS!
>>>       
>> This isn't a GFS limitation. You can't use DRBD with iSCSI or GNBD
>> because DRBD supports only two nodes and you have eight.
>>     
>
> Yes, I have 8 nodes but only 4 /dev/drbdX devices, where X={1,2,3,4},
> each mirroring two nodes...
>   
So you can create 4 GFS filesystems on top of the DRBD devices[1].
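
For one such pair, a rough sketch of what I mean - the host names, backing
disks and addresses below are only placeholders, see [1] for the real
procedure:

    resource r1 {
      net {
        allow-two-primaries;        # both nodes may be Primary, needed for GFS
      }
      startup {
        become-primary-on both;
      }
      on computer1 {
        device    /dev/drbd1;
        disk      /dev/sdb1;        # placeholder backing disk
        address   192.168.0.1:7789;
        meta-disk internal;
      }
      on computer2 {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   192.168.0.2:7789;
        meta-disk internal;
      }
    }

    # then, on one node of the pair, create a GFS with two journals:
    gfs_mkfs -p lock_dlm -t mycluster:gfs1 -j 2 /dev/drbd1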
>   
>> GFS needs shared storage (SAN) - one raw device that is visible to
>> several nodes.
>>     
>
> Using iSCSI I can export /dev/drbdX, which will be visible to all M servers
> as raw devices, as you say. With LVM I can unify all the /dev/drbdX devices
> into one logical volume and run GFS on top. Why will that not work? Is it a
> DRBD limitation?
>
>   
Why do you want to build it that way? This is a complex configuration and it
won't work, because as far as I know an LVM mirror needs 3 devices (one for
the mirror log metadata)[2], and that is the problem with building a true HA
configuration.
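
Just to illustrate what [2] means in practice - a minimal sketch, the volume
group and physical volume names are made up:

    # an LVM2 mirror wants two legs plus a third PV for the mirror log,
    # so a single mirrored LV already consumes three devices:
    lvcreate -L 10G -m1 -n mirrorlv vg0 /dev/sdb1 /dev/sdc1 /dev/sdd1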

[1] - http://www.drbd.org/users-guide/ch-gfs.html
[2] - http://www.redhat.com/docs/manuals/csgfs/browse/4.6/Cluster_Logical_Volume_Manager/mirror_create.html

Best Regards
Maciej Bogucki
