Hi,

I'm trying out a fairly complex configuration across 4 servers.

Each server has twelve 300GB partitions on a hardware RAID10 array.
Each server then has 6 primary drbd partitions on it. The secondary drbd partitions for those 6 are split across the other servers, 2 secondaries to each of the 3 remaining servers.

Below is a rough layout. The drbd partitions are numbered, and the p/s suffix indicates whether that copy is the primary or the secondary. I've also kept the primary and secondary of a given drbd partition on the same logical partition on each server (e.g. both copies of resource 1 on sda4) so that the partition sizes are identical.
         storage1    storage2    storage3    storage4
         10.1.1.11   10.1.1.12   10.1.1.13   10.1.1.14
sda4     1p          1s          11p         11s
sda5     2s          2p          12s         12p
sda6     9p          6p          6s          9s
sda7     5p          10p         5s          10s
sda8     3s          8s          3p          8p
sda9     4s          7s          7p          4p
sda10    13p         13s         23p         23s
sda11    14s         14p         24s         24p
sda12    21p         18p         18s         21s
sda13    17p         22p         17s         22s
sda14    15s         20s         15p         20p
sda15    16s         19s         19p         16p

The thinking behind this config is to allow for smaller partitions should a full resync be needed, and so that if a server fails, its load gets split across the other 3. The plan is to export the drbd partitions as NFS shares.
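To make that concrete, each numbered partition above corresponds to a drbd resource along these lines. This is only a minimal sketch of resource 1 (primary on storage1/sda4, secondary on storage2/sda4); the drbd device minor, the TCP port, and protocol C are illustrative assumptions, not settings taken from the running config:

  # resource 1 from the layout above: sda4 on storage1 (primary) and storage2 (secondary)
  # device minor, port and protocol are assumptions for illustration only
  resource r1 {
    protocol C;

    on storage1 {
      device    /dev/drbd1;
      disk      /dev/sda4;
      address   10.1.1.11:7789;
      meta-disk internal;
    }

    on storage2 {
      device    /dev/drbd1;
      disk      /dev/sda4;
      address   10.1.1.12:7789;
      meta-disk internal;
    }
  }

Each of the 24 resources would get its own device minor and port, and on whichever node currently holds the primary, the /dev/drbdN device gets a filesystem, a mount point, and an entry in /etc/exports for the NFS clients.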
I need space and reasonable load distribution in case of failure, but I have an extremely limited budget, so I'm hoping this will work. It's configured and running at the moment. Just beginning to do benchmarking and testing. Apart from the management complexities inherent in this configuration, does anyone have any comments or critiques about it?
Thanks
Guy

-- 
Don't just do something...sit there!