[DRBD-user] Question on local block error behavior

Jojy Varghese jojy.varghese at gmail.com
Mon Oct 17 01:13:08 CEST 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Thanks Florian. What we are observing is that on a block error on
Node1, the failing block is fetched from the peer (Node2), while all
other blocks are still read from the same node (Node1). So the node
with the error still appears to be functional (not completely
detached). We are using version 8.3.11.
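
For reference, a quick way to check whether the backing device was
actually detached after the error (a sketch; the resource name r0 is an
assumption, substitute your own):

  cat /proc/drbd       # the ds: field shows e.g. Diskless/UpToDate vs UpToDate/UpToDate
  drbdadm dstate r0    # prints the local/peer disk states directly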

thanks
Jojy

On Fri, Oct 14, 2011 at 7:44 AM,  <drbd-user-request at lists.linbit.com> wrote:
> Send drbd-user mailing list submissions to
>        drbd-user at lists.linbit.com
>
> To subscribe or unsubscribe via the World Wide Web, visit
>        http://lists.linbit.com/mailman/listinfo/drbd-user
> or, via email, send a message with subject or body 'help' to
>        drbd-user-request at lists.linbit.com
>
> You can reach the person managing the list at
>        drbd-user-owner at lists.linbit.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of drbd-user digest..."
>
>
> Today's Topics:
>
>   1. Re: Question on local block error behavior (Lars Ellenberg)
>   2. Re: Disk Corruption = DRBD Failure? (Charles Kozler)
>   3. Re: drbd-user Digest, Vol 87, Issue 21 (Jojy Varghese)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 14 Oct 2011 14:09:50 +0200
> From: Lars Ellenberg <lars.ellenberg at linbit.com>
> Subject: Re: [DRBD-user] Question on local block error behavior
> To: drbd-user at lists.linbit.com
> Message-ID: <20111014120950.GA8552 at barkeeper1-xen.linbit>
> Content-Type: text/plain; charset=us-ascii
>
> On Fri, Oct 14, 2011 at 09:02:47AM +0200, Florian Haas wrote:
>> On 2011-10-13 20:00, Jojy Varghese wrote:
>> > Hi
>> >    We are testing DRBD for our storage cluster and had a question
>> > about the behavior we are seeing. We have layered DRBD on top of a
>> > device mapper layer. When we simulate a block error using the dm
>> > layer, we see that the requests for those particular blocks are
>> > forwarded to the peer node. We are using the 8.3.x version of DRBD. The
>> > documentation says that the default behavior is to take out the
>> > defective node even if there is one block error.
>>
>> Er, no. It never was, and the documentation (at least the User's Guide)
>> never said so.
>>
>> - Prior to 8.4, the default behavior was to simply not do anything about
>> the I/O error and pass it right up to the calling layer, where the
>> latter was expected to handle it.
>>
>> http://www.drbd.org/users-guide-legacy/s-configure-io-error-behavior.html
>
>
> Uhm, yes, the config option is called pass-on, but it actually still
> tries to mask the I/O error if it can (remote node still reachable).
> Different versions of DRBD behave slightly differently there; more recent
> ones are supposed to "degrade" the failing disk's status to inconsistent,
> which is correct, after all.
>
> Still, explicitly configuring "on-io-error detach;" is highly
> recommended. The pass-on setting is not particularly useful.
> It is unfortunate that it still happens to be the default with 8.3.
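>
> For illustration, a minimal drbd.conf sketch of that recommendation
> (resource name r0 is an assumption; only the relevant disk section is
> shown, the rest of the resource definition is omitted):
>
>   resource r0 {
>     disk {
>       on-io-error detach;   # drop the failing backing device, keep serving I/O via the peer
>     }
>   }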
>
>> - Since 8.4, the default behavior is to transparently read from or write
>> to the affected block on the peer node, "detaching" from the local
>> (faulty) device and masking the I/O error.
>>
>> http://www.drbd.org/users-guide/s-configure-io-error-behavior.html
>>
>> Removal from the cluster in case of an I/O error (by way of a deliberate
>> kernel panic) was an option in 0.7, and can still be configured via a
>> local-io-error handler. If it was ever the default, then that would have
>> been prior to 0.7 -- i.e. long before I started working with DRBD, so I
>> wouldn't know.
>
> There used to be a "panic" setting, yes.
> But it never was the default.
>
> --
> : Lars Ellenberg
> : LINBIT | Your Way to High Availability
> : DRBD/HA support and consulting http://www.linbit.com
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 14 Oct 2011 08:36:39 -0400
> From: Charles Kozler <charles at fixflyer.com>
> Subject: Re: [DRBD-user] Disk Corruption = DRBD Failure?
> To: drbd-user at lists.linbit.com
> Message-ID: <4E982CD7.2070807 at fixflyer.com>
> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
>
> Hi Florian,
>
> Thanks again for all of your help.
>
> While the diagram makes the overall flow of the process clear, I am
> looking for something like a flow chart that depicts the order of
> operations. For instance, what is the flow of the data and the order of
> operations when a write occurs to /dev/drbd0 on the primary, and how is
> it applied on the other node? Something like: a write occurs to
> /dev/drbd0 on node1, it is written to the real block device on node1,
> then it is put on the socket to node2, node2 receives the data and
> applies the algorithm check (if configured), and the data is written to
> /dev/drbd0 on node2.
>
> The Fundamentals page also only gives a brief, high-level overview of
> how it works. I am looking to see what actually occurs under the hood,
> so perhaps I should start looking at the kernel docs that you pointed
> out earlier?
>
>
>
> Regards,
> Chuck Kozler
> Lead Infrastructure & Systems Administrator
> ---
> Office: 1-646-290-6267 | Mobile: 1-646-385-3684
> FIX Flyer
>
>
> On 10/14/2011 5:21 AM, Charles Kozler wrote:
>> Haven't read it yet though I will later today.
>>
>> Having not read any of the documentation of the underlying processes/workings, all of my understandings were purely based on assumptions from my basic use of DRBD - that said, thank you for all your insight and I will let you know my understanding later :)
>>
>>
>> Sent from my Verizon Wireless BlackBerry
>>
>> -----Original Message-----
>> From: Florian Haas<florian at hastexo.com>
>> Sender: drbd-user-bounces at lists.linbit.com
>> Date: Fri, 14 Oct 2011 09:08:17
>> To:<drbd-user at lists.linbit.com>
>> Subject: Re: [DRBD-user] Disk Corruption = DRBD Failure?
>>
>> On 2011-10-12 20:30, Charles Kozler wrote:
>>> I will re-read the DRBD Fundamentals- the way I understood it was
>>> basically if you were writing to node1 it wouldn't put the data through
>>> a TCP socket and would actually just write directly to the block device
>>> and that TCP was usually only used for the actual replicating and data
>>> integrity conversation between the hosts.  My understanding now is that
>>> for all hosts included in the resource definition it will put the data
>>> into that socket - including the host you're writing from (eg: if I
>>> wrote to /dev/drbd0 on host1 it will go through the socket to write the
>>> data still to write it to the underlying block device-
>> Er, no. It won't.
>>
>>> I had originally
>>> thought it would skip the TCP socket write and write directly to the
>>> block device).
>> For the _local_ write, of course it doesn't go through the TCP socket.
>> Why should it? That would be braindead. Also, given the documentation,
>> what makes you think so? I ask because I wrote it, and if there's
>> anything horribly unclear in there I'd be happy to fix it.
>>
>> Did you look at the illustration in the Fundamentals chapter?
>>
>> Florian
>>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 14 Oct 2011 07:44:24 -0700
> From: Jojy Varghese <jojy.varghese at gmail.com>
> Subject: Re: [DRBD-user] drbd-user Digest, Vol 87, Issue 21
> To: drbd-user at lists.linbit.com
> Message-ID:
>        <CAJD3hpVQcOhMby9UNUtdwgdvH+uQwSWcCC012wJ1f7+NHG-Cvg at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> Thanks Florian. So the behavior for 8.3, in case of an I/O error, is:
>
> - get the block from the peer node
> - pass the error on to the layer above
>
> If that's the case, why does it fetch the block from the peer at all? If
> it can fetch the block from the peer successfully, why even bother the
> layer above?
>
> thanks
> Jojy
>
> On Fri, Oct 14, 2011 at 3:00 AM,  <drbd-user-request at lists.linbit.com> wrote:
>>
>> Today's Topics:
>>
>>   1. Re: A few configuration questions specific to RHEL 5
>>      primary/primary GFS2 setup (Kaloyan Kovachev)
>>   2. Re: examples of dual primary DRBD (Bart Coninckx)
>>   3. Question on local block error behavior (Jojy Varghese)
>>   4. Re: Question on local block error behavior (Florian Haas)
>>   5. Re: Disk Corruption = DRBD Failure? (Florian Haas)
>>   6. Re: Disk Corruption = DRBD Failure? (Charles Kozler)
>>
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Thu, 13 Oct 2011 14:14:03 +0300
>> From: Kaloyan Kovachev <kkovachev at varna.net>
>> Subject: Re: [DRBD-user] A few configuration questions specific to
>>        RHEL 5 primary/primary GFS2 setup
>> To: <drbd-user at lists.linbit.com>
>> Message-ID: <414622476651d55d1e3e5a5960d304b7 at mx.varna.net>
>> Content-Type: text/plain; charset=UTF-8
>>
>> Hi,
>> a bit late posting because of the time zone, but ...
>>
>> On Wed, 12 Oct 2011 14:45:54 -0400, Digimer <linux at alteeve.com> wrote:
>>> On 10/12/2011 02:34 PM, Kushnir, Michael (NIH/NLM/LHC) [C] wrote:
>>>> Digimer,
>>>>
>>>> Thanks again for holding my hand on this. I've already started reading
>>>> your wiki posts. I wish Google gave your site a better ranking. I've been
>>>> doing research for months, and your articles (especially comments in the
>>>> config files) are very helpful.
>>>
>>> Happy it helps! Linking back to it might help. ;)
>>>
>>>>> Also note that I had either leg of the bond routed through different
>>>>> switches. I had tried stacking them (hence the ability to LAG) but ran
>>>>> into issue there as well. So now for HA networking I use two
>> independent
>>>>> switches, with a simple uplink between the switches, and mode=1. This
>>>>> configuration has tested very reliable for me.
>>>>
>>>> I am using a single M4900 switch due to project budget issues right now.
>>>> Once we go further toward production I intend to use two stacked M4900
>>>> switches. For now LACP hasn't been a problem. I will test with stacked
>>>> M4900s and get back to you with my results.
>>>
>>> Consider the possibility that you might one day want/need Red Hat
>>> support. In such a case, not using mode=1 will be a barrier. Obviously
>>> your build is to your spec, but do please carefully consider mode=1
>>> before going into production.
>>>
>>
>> I am using LACP (mode=4) with stacked switches without problems, but
>> Digimer is right about the support barrier
>>
>>>>> Fencing is handled entirely within the cluster (cluster.conf). I use
>>>>> Lon's "obliterate-peer.sh" script as the DRBD fence-handler. When DRBD
>>>>> sees a split-brain, it blocks (with 'resource-and-stonith') and calls
>>>>> 'fence_node<victim>' and waits for a successful return. The result is
>>>>> that, on fault, the node gets fenced twice (once from the DRBD call,
>>>>> once from the cluster itself) but it works just fine.
>>>>
>>>> Great explanation. Thanks!
>>>>
>>
>> 'resource-and-stonith' is the key here - multipath will retry the failed
>> requests on the surviving node _after_ it resumes IO
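>>
>> For reference, the drbd.conf side of that setup is just two lines (a
>> sketch; the handler path is an assumption, point it at wherever your
>> copy of Lon's obliterate-peer.sh lives):
>>
>>   resource r0 {
>>     disk     { fencing resource-and-stonith; }
>>     handlers { fence-peer "/usr/lib/drbd/obliterate-peer.sh"; }
>>   }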
>>
>>>>
>>>>>> 4. Replicated DRBD volume with GFS2 mounted over GNBD
>>>>
>>>>> No input here, sorry.
>>>>
>>>> See below.
>>>>
>>>>>> 5. Replicated DRBD volume with GFS2 mounted over iSCSI (IET)
>>>>
>>>>
>>>> So my setup looks like this:
>>>>
>>>> DRBD (pri/pri)->gfs2->gnbd->multipath->mount.gfs2
>>>>
>>
>> While setting up the cluster I also tried GNBD, but switched to iSCSI
>> (IET), because it allows importing the device locally too, which is not
>> possible with GNBD. With such a setup it is possible to use the same
>> (multipath) name for the device instead of drbdX on the local machine to
>> avoid deadlocks. The resulting setup is:
>>
>> LVM->DRBD (pri/pri)->iSCSI->multipath->gfs2
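>>
>> For example, exporting the DRBD device through IET is just a couple of
>> lines in ietd.conf (the target name is made up for illustration):
>>
>>   Target iqn.2011-10.net.example:storage.drbd0
>>           Lun 0 Path=/dev/drbd0,Type=blockio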
>>
>>>> I skipped clvmd because I do not need any of the features of LVM. My
>>>> RAID volume is 4.8TB. We will replace equipment in 3 years, and in most
>>>> aggressive estimates we will use 2.4TB at most within 3 years.
>>>>
>>
>> The use of LVM (not CLVM in my case) comes in handy for backups - you can
>> snapshot a volume and mount it with local locking, which is much faster
>> without the DLM overhead and without iSCSI/DRBD being involved.
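>>
>> A rough sketch of that backup path (volume group, LV and mount point
>> names are assumptions; lock_nolock mounts the snapshot with local
>> locking only, so keep it read-only):
>>
>>   lvcreate -s -L 10G -n backup_snap /dev/vg0/drbd_backing
>>   mount -t gfs2 -o ro,lockproto=lock_nolock /dev/vg0/backup_snap /mnt/backup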
>>
>> Now back to your original questions:
>>
>>> 1. In the case of a 2-primary split brain (switch hiccup, etc), I would
>>> like server #1 to always remain primary and server #2 to always shut down.
>>> I would like this behavior because server #2 can't become secondary because
>>> GNBD is not going to release it. What is the best way to accomplish this?
>>
>> Use 'resource-and-stonith', then modify your 'fence-peer' handler to sleep
>> on the second server. As a handler you may use obliterate-peer.sh or the
>> one I posted to this list a week ago.
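>>
>> A minimal sketch of such a wrapper handler (hostname, sleep time and the
>> path to obliterate-peer.sh are assumptions; the idea is just to give
>> server #1 a head start in the shoot-out):
>>
>>   #!/bin/bash
>>   # delay fencing when running on the server we prefer to lose
>>   [ "$(uname -n)" = "server2" ] && sleep 10
>>   exec /usr/lib/drbd/obliterate-peer.sh "$@"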
>>
>>> 2. I've tried the deadline queue manager as well as CFQ. I've noticed no
>>> difference. Can you please elaborate on why deadline is better, and how can
>>> I measure any performance difference between the two?
>>
>> Just something I have observed: if you start writing a multi-gigabyte file,
>> with CFQ the I/O for the entire GFS stops for a few seconds after some time,
>> and even small requests from other nodes are blocked, which does not happen
>> with deadline.
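>>
>> Switching the scheduler for a quick comparison is a one-liner per backing
>> disk (sdb is just an example device):
>>
>>   echo deadline > /sys/block/sdb/queue/scheduler
>>   cat /sys/block/sdb/queue/scheduler   # the active scheduler is shown in brackets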
>>
>>> 3. It seems that GNBD is the biggest source of latency in my system. It
>>> decreases IOPS by over ~50% (based on DD tests compared to the same DRBD
>>> based GFS2 mounted locally). I've also tried Enterprise iSCSI target as an
>>> alternative and the results were not much better. The latency on my LAN is
>>> ~0.22ms. Can you offer any tuning tips?
>>
>> Yes, even if iSCSI (in my case) is connected via the loopback interface
>> there is a performance impact. You may fine-tune your iSCSI client
>> (open-iscsi in my case) and multipath for your use case (check queue depth /
>> data segment size for iSCSI and rr_min_io for multipath), and you should
>> also use jumbo frames if possible, but it will still be slower than directly
>> attached disks.
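>>
>> The knobs mentioned above live roughly here; the values are only
>> placeholders to experiment with, not recommendations:
>>
>>   # /etc/iscsi/iscsid.conf (open-iscsi)
>>   node.session.cmds_max = 1024
>>   node.session.queue_depth = 128
>>   node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
>>
>>   # /etc/multipath.conf
>>   defaults {
>>           rr_min_io 100
>>   }
>>
>>   # jumbo frames on the iSCSI/replication interface
>>   ifconfig eth1 mtu 9000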
>>
>> A test case that involves the network latency is DRBD primary and
>> connected, but in diskless mode, so all reads and writes go to the
>> remote node - you will probably get nearly the same performance as when
>> using GNBD/iSCSI (your DD tests 4 and 5).
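>>
>> One way to run that diskless test on the primary (resource name and dd
>> target path are assumptions):
>>
>>   drbdadm detach r0    # primary goes diskless, all I/O is served by the peer
>>   dd if=/dev/zero of=/mnt/gfs2/testfile bs=1M count=1000 oflag=direct
>>   drbdadm attach r0    # reattach the local disk when done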
>>
>>
>>
>>
>>
>> ------------------------------
>>
>> Message: 2
>> Date: Thu, 13 Oct 2011 21:47:04 +0200
>> From: Bart Coninckx <bart.coninckx at telenet.be>
>> Subject: Re: [DRBD-user] examples of dual primary DRBD
>> To: drbd-user at lists.linbit.com
>> Message-ID: <4E974038.6040905 at telenet.be>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>
>> On 10/07/11 22:21, Bart Coninckx wrote:
>>> On 10/06/11 22:03, Florian Haas wrote:
>>>> On 2011-10-06 21:43, Bart Coninckx wrote:
>>>>> Hi all,
>>>>>
>>>>> would you mind sending me examples of your crm config for a dual primary
>>>>> DRBD resource?
>>>>>
>>>>> I used the one on
>>>>>
>>>>> http://www.drbd.org/users-guide/s-ocfs2-pacemaker.html
>>>>>
>>>>> and on
>>>>>
>>>>> http://www.clusterlabs.org/wiki/Dual_Primary_DRBD_%2B_OCFS2
>>>>>
>>>>> and they both result in split brain, except when I start drbd
>>>>> manually first.
>>>>
>>>> They clearly should not. Rather than soliciting other people's
>>>> configurations and then trying to adapt yours based on that, why don't you
>>>> upload _your_ CIB (not just a "crm configure dump", but a full "cibadmin
>>>> -Q") and your DRBD configuration to a pastebin/pastie/fpaste and let
>>>> people tell you where your problem is?
>>>
>>> OK, I posted the drbd.conf on http://pastebin.com/SQe9YxhY
>>>
>>> cibadmin -Q is on http://pastebin.com/gTZqsACq
>>>
>>> The split brain logging is on http://pastebin.com/7unKKkdi .
>>>
>>> Could this be some sort of timing issue? Manually things are fine, but
>>> there are a few seconds between the primary promotions.
>>>
>>> thx,
>>>
>>> B.
>>> _______________________________________________
>>> drbd-user mailing list
>>> drbd-user at lists.linbit.com
>>> http://lists.linbit.com/mailman/listinfo/drbd-user
>>
>>
>> Could this be something as trivial as adding an "interleave" meta
>> attribute to the master resource? I did this today and things seem fine
>> now?
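>>
>> For reference, a dual-primary master/slave definition with that meta
>> attribute might look roughly like this in the crm shell (resource IDs
>> are assumptions):
>>
>>   crm configure ms ms_drbd_r0 p_drbd_r0 \
>>           meta master-max="2" master-node-max="1" clone-max="2" \
>>                clone-node-max="1" notify="true" interleave="true"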
>>
>> thx,
>>
>> B.
>>
>>
>>
>>
>> ------------------------------
>>
>> Message: 3
>> Date: Thu, 13 Oct 2011 11:00:53 -0700
>> From: Jojy Varghese <jojy.varghese at gmail.com>
>> Subject: [DRBD-user] Question on local block error behavior
>> To: drbd-user at lists.linbit.com
>> Message-ID:
>>        <CAJD3hpV1pXhw61RWNh-hZT3R9=jDdFUUJP9SOwsF_aB3jDofmg at mail.gmail.com>
>> Content-Type: text/plain; charset=UTF-8
>>
>> Hi
>>   We are testing DRBD for our storage cluster and had a question
>> about the behavior we are seeing. We have layered DRBD on top of a
>> device mapper layer. When we simulate a block error using the dm
>> layer, we see that the requests for those particular blocks are
>> forwarded to the peer node. We are using the 8.3.x version of DRBD. The
>> documentation says that the default behavior is to take out the
>> defective node even if there is one block error. Just wanted to verify
>> what the correct behavior is.
>>
>> thanks in advance
>> Jojy
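>>
>> One way to simulate such a block error with device mapper is its error
>> target; the device name and sector numbers below are made up for
>> illustration, and the resulting "faulty" device is what DRBD would then
>> use as its backing device:
>>
>> dmsetup create faulty <<EOF
>> 0 1000000 linear /dev/sdb1 0
>> 1000000 8 error
>> 1000008 1000000 linear /dev/sdb1 1000008
>> EOF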
>>
>>
>> ------------------------------
>>
>> Message: 4
>> Date: Fri, 14 Oct 2011 09:02:47 +0200
>> From: Florian Haas <florian at hastexo.com>
>> Subject: Re: [DRBD-user] Question on local block error behavior
>> To: drbd-user at lists.linbit.com
>> Message-ID: <4E97DE97.2070205 at hastexo.com>
>> Content-Type: text/plain; charset=ISO-8859-1
>>
>> On 2011-10-13 20:00, Jojy Varghese wrote:
>>> Hi
>>>    We are testing DRBD for our storage cluster and had a question
>>> about the behavior we are seeing. We have layered DRBD on top of a
>>> device mapper layer. When we simulate a block error using the dm
>>> layer, we see that the requests for those particular blocks are
>>> forwarded to the peer node. We are using the 8.3.x version of DRBD. The
>>> documentation says that the default behavior is to take out the
>>> defective node even if there is one block error.
>>
>> Er, no. It never was, and the documentation (at least the User's Guide)
>> never said so.
>>
>> - Prior to 8.4, the default behavior was to simply not do anything about
>> the I/O error and pass it right up to the calling layer, where the
>> latter was expected to handle it.
>>
>> http://www.drbd.org/users-guide-legacy/s-configure-io-error-behavior.html
>>
>> - Since 8.4, the default behavior is to transparently read from or write
>> to the affected block on the peer node, "detaching" from the local
>> (faulty) device and masking the I/O error.
>>
>> http://www.drbd.org/users-guide/s-configure-io-error-behavior.html
>>
>> Removal from the cluster in case of an I/O error (by way of a deliberate
>> kernel panic) was an option in 0.7, and can still be configured via a
>> local-io-error handler. If it was ever the default, then that would have
>> been prior to 0.7 -- i.e. long before I started working with DRBD, so I
>> wouldn't know.
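>>
>> For completeness, a sketch of how such a handler is wired up in
>> drbd.conf (the handler command is only an example of a drastic
>> reaction, not a recommendation):
>>
>>   resource r0 {
>>     disk     { on-io-error call-local-io-error; }
>>     handlers { local-io-error "echo o > /proc/sysrq-trigger"; }
>>   }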
>>
>> Hope this helps.
>>
>> Cheers,
>> Florian
>>
>> --
>> Need help with DRBD?
>> http://www.hastexo.com/now
>>
>>
>> ------------------------------
>>
>> Message: 5
>> Date: Fri, 14 Oct 2011 09:08:17 +0200
>> From: Florian Haas <florian at hastexo.com>
>> Subject: Re: [DRBD-user] Disk Corruption = DRBD Failure?
>> To: drbd-user at lists.linbit.com
>> Message-ID: <4E97DFE1.3040604 at hastexo.com>
>> Content-Type: text/plain; charset=ISO-8859-1
>>
>> On 2011-10-12 20:30, Charles Kozler wrote:
>>> I will re-read the DRBD Fundamentals- the way I understood it was
>>> basically if you were writing to node1 it wouldn't put the data through
>>> a TCP socket and would actually just write directly to the block device
>>> and that TCP was usually only used for the actual replicating and data
>>> integrity conversation between the hosts. My understanding now is that
>>> for all hosts included in the resource definition it will put the data
>>> into that socket - including the host you're writing from (eg: if I
>>> wrote to /dev/drbd0 on host1 it will go through the socket to write the
>>> data still to write it to the underlying block device-
>>
>> Er, no. It won't.
>>
>>> I had originally
>>> thought it would skip the TCP socket write and write directly to the
>>> block device).
>>
>> For the _local_ write, of course it doesn't go through the TCP socket.
>> Why should it? That would be braindead. Also, given the documentation,
>> what makes you think so? I ask because I wrote it, and if there's
>> anything horribly unclear in there I'd be happy to fix it.
>>
>> Did you look at the illustration in the Fundamentals chapter?
>>
>> Florian
>>
>> --
>> Need help with High Availability?
>> http://www.hastexo.com/now
>>
>>
>> ------------------------------
>>
>> Message: 6
>> Date: Fri, 14 Oct 2011 09:21:10 +0000
>> From: "Charles Kozler" <charles at fixflyer.com>
>> Subject: Re: [DRBD-user] Disk Corruption = DRBD Failure?
>> To: "Florian Haas" <florian at hastexo.com>,
>>        drbd-user-bounces at lists.linbit.com, drbd-user at lists.linbit.com
>> Message-ID:
>>        <78571060-1318584071-cardhu_decombobulator_blackberry.rim.net-985701216- at b13.c9.bise6.blackberry>
>>
>> Content-Type: text/plain
>>
>> Haven't read it yet though I will later today.
>>
>> Having not read any of the documentation of the underlying processes/workings, all of my understandings were purely based on assumptions from my basic use of DRBD - that said, thank you for all your insight and I will let you know my understanding later :)
>>
>>
>> Sent from my Verizon Wireless BlackBerry
>>
>> -----Original Message-----
>> From: Florian Haas <florian at hastexo.com>
>> Sender: drbd-user-bounces at lists.linbit.com
>> Date: Fri, 14 Oct 2011 09:08:17
>> To: <drbd-user at lists.linbit.com>
>> Subject: Re: [DRBD-user] Disk Corruption = DRBD Failure?
>>
>> On 2011-10-12 20:30, Charles Kozler wrote:
>>> I will re-read the DRBD Fundamentals- the way I understood it was
>>> basically if you were writing to node1 it wouldn't put the data through
>>> a TCP socket and would actually just write directly to the block device
>>> and that TCP was usually only used for the actual replicating and data
>>> integrity conversation between the hosts. My understanding now is that
>>> for all hosts included in the resource definition it will put the data
>>> into that socket - including the host you're writing from (eg: if I
>>> wrote to /dev/drbd0 on host1 it will go through the socket to write the
>>> data still to write it to the underlying block device-
>>
>> Er, no. It won't.
>>
>>> I had originally
>>> thought it would skip the TCP socket write and write directly to the
>>> block device).
>>
>> For the _local_ write, of course it doesn't go through the TCP socket.
>> Why should it? That would be braindead. Also, given the documentation,
>> what makes you think so? I ask because I wrote it, and if there's
>> anything horribly unclear in there I'd be happy to fix it.
>>
>> Did you look at the illustration in the Fundamentals chapter?
>>
>> Florian
>>
>> --
>> Need help with High Availability?
>> http://www.hastexo.com/now
>> _______________________________________________
>> drbd-user mailing list
>> drbd-user at lists.linbit.com
>> http://lists.linbit.com/mailman/listinfo/drbd-user
>>
>> ------------------------------
>>
>> _______________________________________________
>> drbd-user mailing list
>> drbd-user at lists.linbit.com
>> http://lists.linbit.com/mailman/listinfo/drbd-user
>>
>>
>> End of drbd-user Digest, Vol 87, Issue 21
>> *****************************************
>>
>
>
> ------------------------------
>
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
>
>
> End of drbd-user Digest, Vol 87, Issue 22
> *****************************************
>


