Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hello Gnubie,

> > that's certainly not LVM!
>
> What I mean here is that my LVM2 setup covers /dev/hda4 and /dev/hdb1
> as physical partitions. So all I want is to mirror all the contents
> of my LVM2 volumes, if that's the right term to use.

So you want to do 'device -> LVM2 -> LVM2 volumes -> drbd -> drbd devices';
I would call this drbd on top of LVM ;)
In other words, you already have a working LVM environment, you want to keep
it and just mirror the LVM volumes one by one.

> > meta-disk: Unless you try to convert from drbd-0.6 to 0.7, I would
> > usually choose 'internal'.
>
> So, if I go straight to installing DRBD-0.7.6 from scratch, as I did in
> my setup, I'll just use "internal", right?

In principle yes, just make absolutely sure that there really is no important
data on the partitions you want to set up drbd on. Drbd will overwrite 128MB
of the partition with its internal metadata, which is not so good for
important stuff... ;-)

> > LVM: So what do you want? LVM on top of DRBD or DRBD on top of LVM?
> > And what don't you understand?
>
> I really don't understand those phrases. Sorry about that. How will I
> know if the setup is "LVM on top of DRBD" or the other way around?
> Based on my current setup, what do you suggest? What are the best
> ways to configure and implement this kind of setup?

See above. LVM on top of drbd means: device -> drbd -> LVM2 -> LVM2 volumes.
That is, you have e.g. hda and you completely mirror /dev/hda with drbd. On
the resulting drbd device you put LVM, and in the end you mount the LVM
volumes; hence it is logically named LVM on top of drbd. With drbd on top of
LVM, in the end you mount the drbd devices. (There is a short command sketch
of the LVM-on-top-of-drbd stacking after the attached config below.)

> > Did you already read some of the howtos one can find on the web? E.g.:
> >
> > http://linuxha.trick.ca/DataRedundancyByDrbd
>
> Yeah, I read it. But as I said, I'm confused and don't understand
> that much. Sorry.
>
> Would you or anyone from this mailing list care to post their
> actual/working /etc/drbd.conf file for a setup similar to the one I want
> to implement, if that's not asking too much of you?

Well, we don't use LVM here, but if you put drbd on top of LVM, you can just
take the config below and replace all the /dev/sdc{X} lines with /dev/lvm/...
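For instance, the first resource adapted to an LVM2 logical volume as the
backing device would look roughly like this. This is only a sketch: the host
names node1/node2 and the volume path /dev/vg0/lv_data are made-up
placeholders (use the names your own vgdisplay/lvdisplay show); everything
else just mirrors the attached config:

    resource drbd0 {
        protocol A;

        startup {
            degr-wfc-timeout 120;        # 2 minutes.
        }

        disk {
            on-io-error detach;
        }

        syncer {
            rate 100M;
            group 1;
            al-extents 257;
        }

        on node1 {
            device    /dev/drbd0;
            disk      /dev/vg0/lv_data;  # LVM2 logical volume instead of a plain partition
            address   192.168.2.1:7789;
            meta-disk internal;
        }

        on node2 {
            device    /dev/drbd0;
            disk      /dev/vg0/lv_data;  # an equally sized LV on the peer
            address   192.168.2.2:7789;
            meta-disk internal;
        }
    }

You would write one such resource per logical volume you want to mirror, each
with its own port and its own syncer group, just like in the attached config.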
Hope it helps,
Bernd

--
Bernd Schubert
Physikalisch Chemisches Institut / Theoretische Chemie
Universität Heidelberg
INF 229
69120 Heidelberg
e-mail: bernd.schubert at pci.uni-heidelberg.de

-------------- next part --------------
global {
    minor_count 5;
}

resource drbd0 {
    protocol A;
#   incon-degr-cmd "halt -f";

    startup {
        degr-wfc-timeout 120;   # 2 minutes.
    }

    disk {
        on-io-error detach;
    }

    net {
        ko-count 10;            # if some block send times out this many times,
                                # the peer is considered dead, even if it still
                                # answers ping requests
        on-disconnect reconnect;
        sndbuf-size 524288;
        max-buffers 16384;
    }

    syncer {
        rate 100M;
        group 1;
        al-extents 257;
    }

    on hamilton1 {
        device    /dev/drbd0;
        disk      /dev/sdc5;
        address   192.168.2.1:7789;
        meta-disk internal;
    }

    on hamilton2 {
        device    /dev/drbd0;
        disk      /dev/hda5;
        address   192.168.2.2:7789;
        meta-disk internal;
    }
}

resource drbd1 {
    protocol A;
    incon-degr-cmd "halt -f";

    startup {
        degr-wfc-timeout 120;   # 2 minutes.
    }

    disk {
        on-io-error detach;
    }

    net {
        ko-count 10;            # if some block send times out this many times,
                                # the peer is considered dead, even if it still
                                # answers ping requests
        on-disconnect reconnect;
        sndbuf-size 524288;
        max-buffers 16384;
    }

    syncer {
        rate 100M;
        group 2;
        al-extents 257;
    }

    on hamilton1 {
        device    /dev/drbd1;
        disk      /dev/sdc6;
        address   192.168.2.1:7790;
        meta-disk internal;
    }

    on hamilton2 {
        device    /dev/drbd1;
        disk      /dev/hda6;
        address   192.168.2.2:7790;
        meta-disk internal;
    }
}

resource drbd2 {
    protocol A;
    incon-degr-cmd "halt -f";

    startup {
        degr-wfc-timeout 120;   # 2 minutes.
    }

    disk {
        on-io-error detach;
    }

    net {
        ko-count 10;            # if some block send times out this many times,
                                # the peer is considered dead, even if it still
                                # answers ping requests
        on-disconnect reconnect;
        sndbuf-size 524288;
        max-buffers 16384;
    }

    syncer {
        rate 100M;
        group 3;
        al-extents 257;
    }

    on hamilton1 {
        device    /dev/drbd2;
        disk      /dev/sdc7;
        address   192.168.2.1:7791;
        meta-disk internal;
    }

    on hamilton2 {
        device    /dev/drbd2;
        disk      /dev/hda7;
        address   192.168.2.2:7791;
        meta-disk internal;
    }
}

resource drbd3 {
    protocol A;
#   incon-degr-cmd "halt -f";

    startup {
        degr-wfc-timeout 120;   # 2 minutes.
    }

    disk {
        on-io-error detach;
    }

    net {
        ko-count 10;            # if some block send times out this many times,
                                # the peer is considered dead, even if it still
                                # answers ping requests
        on-disconnect reconnect;
        sndbuf-size 524288;
        max-buffers 16384;
    }

    syncer {
        rate 100M;
        group 4;
        al-extents 257;
    }

    on hamilton1 {
        device    /dev/drbd3;
        disk      /dev/md2;
        address   192.168.2.1:7792;
        meta-disk internal;
    }

    on hamilton2 {
        device    /dev/drbd3;
        disk      /dev/hdc5;
        address   192.168.2.2:7792;
        meta-disk internal;
    }
}

resource drbd4 {
    protocol A;
#   incon-degr-cmd "halt -f";

    startup {
        degr-wfc-timeout 120;   # 2 minutes.
    }

    disk {
        on-io-error detach;
    }

    net {
        ko-count 10;            # if some block send times out this many times,
                                # the peer is considered dead, even if it still
                                # answers ping requests
        on-disconnect reconnect;
        sndbuf-size 524288;
        max-buffers 16384;
    }

    syncer {
        rate 100M;
        group 5;
        al-extents 257;
    }

    on hamilton1 {
        device    /dev/drbd4;
        disk      /dev/md3;
        address   192.168.2.1:7793;
        meta-disk internal;
    }

    on hamilton2 {
        device    /dev/drbd4;
        disk      /dev/hdc6;
        address   192.168.2.2:7793;
        meta-disk internal;
    }
}
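And the command sketch promised above, just to make the difference concrete:
with the other stacking (LVM on top of drbd) you first bring up a drbd device
on plain partitions and only then build LVM on it. Very roughly, on the node
that is currently primary for drbd0 (volume group, LV name, size and mount
point are again only placeholders):

    pvcreate /dev/drbd0                  # the drbd device itself becomes the physical volume
    vgcreate vg_drbd /dev/drbd0
    lvcreate -L 10G -n data vg_drbd
    mkfs.ext3 /dev/vg_drbd/data
    mount /dev/vg_drbd/data /mnt/data

So there LVM lives inside the mirrored device, whereas with drbd on top of
LVM you keep your existing volume group on /dev/hda4 and /dev/hdb1 and mirror
the logical volumes individually.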