[DRBD-user] SAN, drbd, ha, shared disks, ataoe, iscsi, gnbd

Maciej Bogucki macbogucki at gmail.com
Tue Aug 12 11:52:14 CEST 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Alex wrote:
> Hello experts,
>
> I read that software RAID on Linux is not cluster-aware, so I'm trying to find 
> a solution to join several computers together to form a shared file system and 
> build a SAN (correct me if I am wrong), avoiding the use of software RAID.
>
> Let say that I have:
> - N computers (N>8) sharing their volumes (volX, where X = 1..N). Each volX is 
> around 120GB.
> - M servers (M>3) which will access a GFS volume (read/write)
> - Other regular computers which are available if required.
>
> Now, I want:
> - to somehow build a GFS file system on top of the vol1, vol2, ... volN volumes, 
> with high data availability and without a single point of failure.
>   
If you want to use GFS, you would need a volX exported via iSCSI or 
GNBD to the M servers. For HA you could use DRBD - two volumes, vol1 and vol2, 
mirrored as /dev/drbd0 and then exported to the M servers.
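
A minimal DRBD 8.x resource definition for one such pair might look like 
this (hostnames, IP addresses and backing devices below are placeholders, 
not taken from your mail):

resource vol12 {
    protocol C;                  # synchronous replication
    on computer1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;     # 120GB backing volume (vol1)
        address   192.168.0.1:7788;
        meta-disk internal;
    }
    on computer2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;     # 120GB backing volume (vol2)
        address   192.168.0.2:7788;
        meta-disk internal;
    }
}

The node that is currently Primary would then export /dev/drbd0, e.g. with 
iSCSI Enterprise Target in /etc/ietd.conf (again, just a sketch):

Target iqn.2008-08.local:vol12
    Lun 0 Path=/dev/drbd0,Type=blockio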

> - resulted logical volume to be used on SERVER1, SERVER2 and SERVER3 
> (read/write access)
>   
With CLVM you can create clustered logical volumes to put the GFS filesystem on.
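
A sketch of the CLVM side, assuming the four exported devices appear on the 
servers as /dev/import0 .. /dev/import3 (and that clvmd is running with 
locking_type = 3 in lvm.conf on every node):

pvcreate /dev/import0 /dev/import1 /dev/import2 /dev/import3
vgcreate -c y myvg /dev/import0 /dev/import1 /dev/import2 /dev/import3  # -c y marks the VG clustered
lvcreate -n mylv -l 100%FREE myvg                                       # one ~480GB logical volume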
> So, how can I do that? Is it possible?
>
> Can somebody suggest a scenario for how to group the machines?
>
> My scenario (before starting):
>
> Step1. Group the computers together, two by two, using DRBD and create network RAID 1 
> mirrors for each pair, in order to produce:
>
> On computer1 and computer2:
> - /dev/drbd0 (120GB size, containing vol1 <-> vol2 mirrored)
> On computer3 and computer4:
> - /dev/drbd1 (120GB size, containing vol3 <-> vol4 mirrored)
> On computer5 and computer6:
> - /dev/drbd2 (120GB size, containing vol5 <-> vol6 mirrored)
> On computer7 and computer8:
> - /dev/drbd3 (120GB size, containing vol7 <-> vol8 mirrored)
>
> Does it mean that this will result in 4 CLUSTERS (4 different cluster.conf files)? Is 
> that correct? If yes, how can I join the resulting volumes /dev/drbd* together?
>
> Will it be OK to export the resulting /dev/drbd* volumes using ATAoE (iSCSI or GNBD) 
> to our SERVERS and, after that, on SERVER1 for example, to manipulate and join the 
> imported volumes (/dev/import0, /dev/import1, /dev/import2, /dev/import3) 
> using CLVM as in the next step below?
>
> Step2. On SERVER1, join the resulting volumes together using LVM and create a 
> logical volume (480GB):
> pvcreate /dev/import0 /dev/import1 /dev/import2 /dev/import3
> vgcreate myvg ...
> lvcreate mylvm ...
>
> Now, mylvm groups import0 through import3 into one logical volume, which is in 
> fact drbd0+drbd1+drbd2+drbd3.
>
> What happens if, at this stage, no fencing is available? I know that 
> GFS requires fencing in all circumstances.
>
> How can fencing be implemented? ATAoE and iSCSI do not provide any built-in 
> fencing mechanism, so maybe GNBD should be used to export /dev/drbd* (because 
> it has fencing built in). Is that correct? Is it possible? If yes, should I 
> create only one cluster grouping:
> - all N computers together
> or
> - all N computers plus our 3 SERVERS together?
>
> Step3. Format /dev/myvg/mylv using GFS:
> mkfs.gfs -p lock_dlm -t cluster:data -j 3 /dev/myvg/mylv
>
> Step4. Create another cluster containing only 3 nodes (SERVER1 through SERVER3) 
> which will mount and use the resulting shared GFS volume:
>
> mount /dev/myvg/mylv /var/www/data on all our servers.
>
> Any ideas?
>   
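Regarding fencing: GNBD ships with a fence agent (fence_gnbd), so one 
approach would be to put all the machines into a single cluster and export 
each DRBD device with GNBD. A minimal sketch, using placeholder names like 
the ones in your mail (please verify the flags and cluster.conf attributes 
against the GNBD documentation):

# on computer1 (the DRBD Primary of the pair)
gnbd_export -e vol12 -d /dev/drbd0

# on each of the 3 SERVERS
gnbd_import -i computer1

and the matching fence device entry in cluster.conf:

<fencedevices>
    <fencedevice name="gnbd" agent="fence_gnbd" servers="computer1 computer2"/>
</fencedevices>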

That said, your proposed layout isn't a good fit for GFS: among other things, 
each /dev/drbd* is still served by a single exporting node, so unless you also 
cluster the iSCSI/GNBD targets the stack stays complex without removing every 
single point of failure.
You could also look at other HA/cluster filesystems such as Hadoop (HDFS), 
GlusterFS, KosmosFS, MogileFS, and many more.

Best regards
Maciej Bogucki


