[DRBD-user] Building load-balancing SAN upon DRBD v0.8 and probably GFS or Lustre.

Milind Dumbare milind at linsyssoft.com
Fri Dec 1 07:38:22 CET 2006


On Friday 01 December 2006 10:33, Igor Yu. Zhbanov wrote:
> Hello!
> Is it possible to build a SAN with DRBD v0.8 and some (which one?) cluster
> file system?
> Suppose we have two client nodes (which will access our SAN) and two
> storage nodes. All nodes are connected by, say, gigabit Ethernet. On the
> storage nodes we will run DRBD v0.8 in Primary/Primary mode, so we can
> access our hard drives on both nodes simultaneously for load
> balancing (at least for reading).
> Next, I think, we need to set up some clustered file system on top of our
> DRBD device pair. It will probably be GFS or Lustre. (Which is best? Also,
> I don't know whether it is possible to set up Lustre in a Primary/Primary
> configuration.)
I don't know about Lustre, but yes, you can set up GFS or OCFS. I have tried
OCFS and it works fine. Is Lustre a shared-disk file system? If it is, it
should work as well.
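For reference, dual-Primary mode has to be enabled explicitly in DRBD 0.8. A
minimal drbd.conf sketch; host names, backing devices, and addresses below are
placeholders for illustration, not details from this thread:

```
resource r0 {
  protocol C;              # synchronous replication; required for Primary/Primary
  net {
    allow-two-primaries;   # permit both nodes to be Primary at once
  }
  syncer {
    rate 100M;             # resync rate; tune for the gigabit link
  }
  on store1 {              # hypothetical storage node 1
    device    /dev/drbd0;
    disk      /dev/sda5;   # placeholder backing device
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on store2 {              # hypothetical storage node 2
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
```

Keep in mind that a cluster file system like GFS or OCFS also needs its own
cluster stack (membership, locking, fencing) on top of the shared device.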
> So, we can mount the file system on both nodes. That's all fine. Both
> storage nodes can mount the file system and use it in parallel. But what
> about our two client nodes? Is it possible to mount GFS or Lustre or
> something else remotely? Or must I set up NFS (which nobody likes) on top
> of GFS?
Yes, that is feasible. Set up NFS on top of GFS (the cluster file system). It
should work.
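The export itself is then ordinary NFS configuration. A sketch, assuming the
GFS file system is mounted at /mnt/gfs on the storage node and the clients sit
on 192.168.1.0/24 (both are made-up values for illustration):

```
# /etc/exports on a storage node
/mnt/gfs  192.168.1.0/24(rw,sync,no_subtree_check)
```

After `exportfs -ra`, a client would mount it with something like
`mount -t nfs store1:/mnt/gfs /mnt/san`. Note that exporting the same GFS
file system from both storage nodes to different clients at the same time
needs care with NFS lock management; the file locking the clients see goes
through each NFS server, not through GFS directly.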
> Please tell me your suggestions: is it possible to build a load-balancing
> SAN with parallel access to each storage node and multiple client nodes?
Multiple clients? Note that DRBD can work with only two nodes, but exporting
the shared device to multiple clients should work.

> (And is it possible without exporting the shared block device to all nodes
> that want to access the shared file system? I think network file system
> traffic is much lower than network block device traffic.)
I didn't quite follow you here. Could you explain in more detail?
> Thanks!
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
