Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi, and thanks for the answer!
I got several PMs urging me NOT to use active/active and OCFS2.
A simpler active/passive setup without OCFS2 would be the best choice... Too
many things could go wrong with OCFS2 and active/active + MySQL.
But you fully understood my configuration, and thanks for your help.
My drbd.conf is almost identical to the one you sent me.
But in my case I must have another problem... it's not working.
One more question. I have 2 Ethernet ports; eth1 is used to link both boxes
together.
For DRBD + Heartbeat, should I use a different IP address and subnet on eth1
than on eth0, which is on the LAN?
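
For example, I was thinking of giving eth1 its own dedicated subnet, separate
from the LAN on eth0, matching the addresses in your drbd.conf below. This is
just a sketch of what I have in mind (the exact addresses are only an example):

# /etc/sysconfig/network-scripts/ifcfg-eth1 on node 1
# (node 2 would use IPADDR=192.168.2.72)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.2.71
NETMASK=255.255.255.0
ONBOOT=yes
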
Patrick
2011/4/22 Digimer <linux at alteeve.com>
> On 04/22/2011 01:36 PM, Patrick Egloff wrote:
> > Hi all,
> >
> > First of all, let me say that I'm a newbie with DRBD and not a
> > high-level Linux specialist...
>
> Few are. Fewer still who claim to be. :)
>
> > I want to have an HA setup for my intranet, which is using PHP + MySQL
> > (Joomla 1.6).
> >
> > For that, I have 2 DELL servers with a 5-HD RAID on which I installed
> > CentOS 5.5.
> >
> > I tried to install OCFS2, DRBD and Heartbeat as active/active. I'm at
> > the point where I can access my DRBD partition /dev/sda6, but I can't
> > make both boxes talk to each other.
> > I do have some errors while loading:
> > - mount.ocfs2 (device name specified was not found while opening device
> > /dev/drbd0)
> > - drbd is waiting for peer... and I have to enter "yes" to stop the
> > process
> >
> > After reading a lot, I'm not even sure anymore if my first project is
> > the right choice...
> >
> > Is the configuration I planned the best one for my usage, or should I
> > change my plans for another setup with the same result, that is, high
> > availability?
> >
> > If it makes sense to continue with DRBD, I will be back with some
> > questions about my problems...
> >
> >
> > Thanks,
>
> I can't speak to heartbeat or OCFS2, as I use RHCS and GFS2, but the
> concept should be similar. Besides, those are questions for the layers
> above DRBD anyway.
>
> First, your RAID 5 is done in hardware, so CentOS only sees /dev/sda,
> right? Second, Partition 6 is what you want to use as a backing device
> on either node for /dev/drbd0? If you want to run Active/Active, then
> you will also want Primary/Primary, right?
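>
> If you want to double-check the backing device before setting anything up,
> something like this (run on both nodes; the sda6 lines should match) should
> do it:
>
>   fdisk -l /dev/sda        # confirm /dev/sda6 exists and note its size
>   cat /proc/partitions     # sanity check; compare the sda6 entry on both nodes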
>
> Given those assumptions, you will need to have a drbd.conf similar to
> below. Note that the name in each 'on foo {}' section must match the
> hostname returned by `uname -n` on that node. Also, change the 'address' to
> match the IP address of the interface you want DRBD to communicate on.
> Lastly, make sure any firewall you have allows port 7789 on those
> interfaces.
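>
> On CentOS, assuming iptables is your firewall and eth1 is the replication
> link, a rule along these lines (untested, adjust to your setup) would open
> that port:
>
>   iptables -I INPUT -i eth1 -p tcp --dport 7789 -j ACCEPT
>   service iptables save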
>
> Finally, replace '/sbin/obliterate' with the path to a script that will
> kill (or mark Inconsistent) the other node in a split-brain situation.
> This is generally done using a fence device (aka: stonith).
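>
> If you don't have a fence script yet, a bare-bones handler might look
> something like the sketch below. It is purely illustrative: it assumes the
> nodes have IPMI/DRAC access and the fence-agents package installed, and the
> address and credentials are placeholders.
>
>   #!/bin/bash
>   # Called by DRBD when the peer needs to be fenced. Power-cycle the
>   # peer via IPMI, then tell DRBD the peer was stonithed (exit code 7).
>   PEER_IPMI="192.168.3.72"   # management interface of the *other* node (placeholder)
>   fence_ipmilan -a "$PEER_IPMI" -l root -p secret -o reboot || exit 1
>   exit 7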
>
> Line wrapping will likely make this ugly, sorry.
>
> ====
> #
> # please have a look at the example configuration file in
> # /usr/share/doc/drbd83/drbd.conf
> #
>
> # The 'global' directive covers values that apply to DRBD in general.
> global {
> # This tells Linbit that it's okay to count us as a DRBD user. If you
> # have privacy concerns, set this to 'no'.
> usage-count yes;
> }
>
> # The 'common' directive sets default values for all resources.
> common {
> # Protocol 'C' tells DRBD to not report a disk write as complete until
> # it has been confirmed written to both nodes. This is required for
> # Primary/Primary use.
> protocol C;
>
> # This sets the default sync rate to 15 MiB/sec. Be careful about
> # setting this too high! High-speed sync'ing can flog your drives and
> # push disk I/O times very high.
> syncer {
> rate 15M;
> }
>
> # This tells DRBD what policy to use when a fence is required.
> disk {
> # This tells DRBD to block I/O (resource) and then try to fence
> # the other node (stonith). The 'stonith' option requires that
> # we set a fence handler below. The name 'stonith' comes from
> # "Shoot The Other Node In The Head" and is a term used in
> # other clustering environments. It is synonymous with
> # 'fence'.
> fencing resource-and-stonith;
> }
>
> # We set 'stonith' above, so here we tell DRBD how to actually fence
> # the other node.
> handlers {
> # The term 'outdate-peer' comes from other scripts that flag
> # the other node's resource backing device as 'Inconsistent'.
> # In our case though, we're flat-out fencing the other node,
> # which has the same effective result.
> outdate-peer "/sbin/obliterate";
> }
>
> # Here we tell DRBD that we want to use Primary/Primary mode. It is
> # also where we define split-brain (sb) recovery policies. As we'll be
> # running all of our resources in Primary/Primary, only the
> # 'after-sb-2pri' really means anything to us.
> net {
> # Tell DRBD to allow dual-primary.
> allow-two-primaries;
>
> # Set the recovery policy for split-brain recovery when no device
> # in the resource was primary.
> after-sb-0pri discard-zero-changes;
>
> # Now if one device was primary.
> after-sb-1pri discard-secondary;
>
> # Finally, set the policy when both nodes were Primary. The
> # only viable option is 'disconnect', which tells DRBD to
> # simply tear-down the DRBD resource right away and wait for
> # the administrator to manually invalidate one side of the
> # resource.
> after-sb-2pri disconnect;
> }
>
> # This tells DRBD what to do when the resource starts.
> startup {
> # In our case, we're telling DRBD to promote both devices in
> # our resource to Primary on start.
> become-primary-on both;
> }
> }
>
> # The 'resource' directive defines a given resource and must be followed
> # by the resource's name.
> # This will be used as the GFS2 partition for shared files.
> resource r0 {
> # This is the /dev/ device to create to make available this DRBD
> # resource.
> device /dev/drbd0;
>
> # This tells DRBD where to store its internal state information. We
> # will use 'internal', which tells DRBD to store the information at the
> # end of the resource's space.
> meta-disk internal;
>
> # The next two 'on' directives set up each individual node's settings.
> # The value after the 'on' directive *MUST* match the output of
> # `uname -n` on each node.
> on an-node01.alteeve.com {
> # This is the network IP address on the network interface and
> # the TCP port to use for communication between the nodes. Note
> # that the IP address below is on our Storage Network. The TCP
> # port must be unique per resource, but the interface itself
> # can be shared.
> # IPv6 is usable with 'address ipv6 [address]:port'.
> address 192.168.2.71:7789;
>
> # This is the node's storage device that will back this
> # resource.
> disk /dev/sda6;
> }
>
> # Same as above, but altered to reflect the second node.
> on an-node02.alteeve.com {
> address 192.168.2.72:7789;
> disk /dev/sda6;
> }
> }
> ====
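>
> Once the config is in place on both nodes, bringing the resource up would
> look roughly like this (from memory, so double-check it against the DRBD
> 8.3 docs before running anything):
>
>   # on both nodes
>   drbdadm create-md r0
>   /etc/init.d/drbd start
>
>   # on ONE node only, to kick off the initial sync
>   drbdadm -- --overwrite-data-of-peer primary r0
>
>   # once /proc/drbd shows UpToDate/UpToDate, on the other node
>   drbdadm primary r0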
>
> --
> Digimer
> E-Mail: digimer at alteeve.com
> AN!Whitepapers: http://alteeve.com
> Node Assassin: http://nodeassassin.org
>
--
Patrick Egloff - TK5EP
email : pegloff at gmail.com