[DRBD-user] SAN, drbd, ha, shared disks, ataoe, iscsi, gnbd

Maciej Bogucki macbogucki at gmail.com
Tue Aug 12 16:30:42 CEST 2008


Alex wrote:
> On Tuesday 12 August 2008 12:52, you wrote:
>> Alex wrote:
>>> Hello experts,
>>> I read that software RAID on Linux is not cluster-aware, so I'm trying
>>> to find a solution that joins several computers together into a shared
>>> file system and builds a SAN (correct me if I am wrong), avoiding the
>>> use of software RAID.
>>> Let say that I have:
>>> - N computers (N>8) sharing their volumes (volX, where X=1..N). Each
>>> volX is around 120GB.
>>> - M servers (M>3) - which are accessing a GFS volume (read/write)
>>> - Other regular computers which are available if required.
>>> Now, I want:
>>> - to build somehow a GFS on top of vol1, vol2, ... volN volumes with high
>>> data availability and without a single point of failure.
>> If You want to use GFS, You would need one volX exported via iSCSI or
>> GNBD to the M servers. 
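As a sketch of such an export (the lsscsi output below shows IET targets, so assuming the iSCSI Enterprise Target on the exporting computer and open-iscsi on the servers; the IQN and device paths are made up for illustration):

```
# Hypothetical /etc/ietd.conf on computer1 (iSCSI Enterprise Target):
Target iqn.2008-08.example.com:computer1.vol1
        Lun 0 Path=/dev/vol1,Type=fileio

# On each of the M servers (open-iscsi initiator):
#   iscsiadm -m discovery -t sendtargets -p computer1
#   iscsiadm -m node -T iqn.2008-08.example.com:computer1.vol1 -p computer1 -l
```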
> This exactly what i have now on one of M servers:
> [root at rhclm ~]# lsscsi
> [0:0:0:0]    disk    IET      VIRTUAL-DISK     0     /dev/sda
> [1:0:0:0]    disk    IET      VIRTUAL-DISK     0     /dev/sdb
> [root at rhclm ~]#
> here, sda and sdb are block devices imported via iscsi from computer1 and 
> computer2.
> Question: is it possible to group sda and sdb into a software RAID1 array 
> (/dev/md0), then create a logical volume on top of md0 and run GFS on it?
> AFAIK THIS DESIGN IS IMPOSSIBLE because software RAID on Linux is NOT 
> CLUSTER-AWARE. So just using iSCSI (or GNBD) to export volX is NOT ENOUGH, 
> because if I lose one computerX I lose all the data on GFS! Right?
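To make the question concrete, the layering Alex describes would be built roughly as follows on one of the M servers. This is only a sketch of the proposed design (the cluster name, journal count and device names are assumptions), not a recommendation:

```
# Sketch of the layering from the question (RHEL5-era commands). This is
# NOT cluster-safe: md keeps no cluster-wide metadata, so the array must
# never be assembled on more than one server at a time.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
pvcreate /dev/md0
vgcreate vg_gfs /dev/md0
lvcreate -n lv_gfs -l 100%FREE vg_gfs
# Cluster name "mycluster" and 3 journals (one per M server) are assumed:
gfs_mkfs -p lock_dlm -t mycluster:gfsvol -j 3 /dev/vg_gfs/lv_gfs
```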
Here is the answer:
>> For HA you could use DRBD - two volumes vol1, vol2 
>> created as /dev/drbd0 and exported to M servers.
> yes, that's why I want to use DRBD: to work around the RAID1 limitations and 
> group the computerX machines two by two. In this scenario just one question 
> remains: how can I join the /dev/drbd* devices together, or....if that is 
> not possible: how can I join all 8 volX together in order to have:
> - Fault tolerance: failure of a single drive (volX) or server (computerX) 
> should not bring down the GFS!
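As a back-of-the-envelope check of the two-by-two pairing idea (assuming only the figures stated above, N=8 volumes of ~120GB each), mirroring within each pair halves the raw capacity:

```shell
# Sketch: usable capacity when 8 x 120GB volumes are mirrored pairwise
# (DRBD acts like RAID1 within each pair, so half the raw space is usable).
N=8; VOL_GB=120
RAW=$((N * VOL_GB))        # 960 GB raw
USABLE=$((RAW / 2))        # 480 GB usable across 4 mirrored pairs
echo "raw=${RAW}GB usable=${USABLE}GB"   # -> raw=960GB usable=480GB
```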
This isn't a limitation of GFS. You can't combine DRBD with iSCSI or GNBD 
here because DRBD supports only two nodes and You have eight.
GFS needs shared storage (a SAN) - one raw device that is visible to 
all the nodes.
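For reference, a two-node DRBD resource looks roughly like this (a sketch in DRBD 8.x syntax; the hostnames, backing disks and addresses are assumptions, not from the thread):

```
# Hypothetical /etc/drbd.conf fragment pairing computer1 with computer2
resource r0 {
    protocol C;                  # synchronous replication
    on computer1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;     # the local ~120GB volume
        address   192.168.0.1:7788;
        meta-disk internal;
    }
    on computer2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.0.2:7788;
        meta-disk internal;
    }
}
```

Each such pair yields one replicated /dev/drbd0, but the replication is between exactly two hosts, which is why DRBD alone cannot tie all eight exporters together.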
>> Your solution isn't good for GFS fs.
> i am looking to find it, that's why i'm here...  can you suggest me one?
I can't help You here because I didn't test any of them ;(
>> You can also use other HA/Cluster fs like: hadoop, gluster, kosmos-fs,
>> mogile and much more.
> I read about Lustre (from Sun Microsystems). It seems that it is well 
> supported on Linux (CentOS 5/RHEL 5), has support for RAID/LVM/iSCSI, scales 
> well and is easy to extend. Is that correct? Using Lustre, can I join all 
> volX (exported via iSCSI) together into one bigger volume (using RAID/LVM) 
> and have 
I think You are right.

Best Regards
Maciej Bogucki
