[DRBD-user] testing drbd without real devices

Heiko rupertt at gmail.com
Mon Aug 10 15:42:48 CEST 2009



great information, I disabled s/g on one cluster and haven't had any crashes
since then.
Is it enough to disconnect the resources and change that setting, or do I
have to completely shut down
the DRBD device and the Xen VM?
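For reference, ethtool -K applies offload changes at runtime, so a reboot
should not be needed; whether an already-established DRBD connection picks
the change up cleanly is less certain, so cycling the connection is a
cautious middle ground between doing nothing and a full VM shutdown. A
dry-run sketch (eth0 and r0 are placeholder names, not taken from this
thread):

```shell
# Dry-run sketch, not a tested procedure: eth0 and r0 stand in for the
# replication NIC and the DRBD resource name.
IFACE=eth0
RES=r0

# Disabling scatter-gather normally disables TSO as well, since TSO
# depends on it. Cycling the DRBD connection afterwards re-establishes
# replication traffic with the new settings.
cmds="ethtool -K $IFACE sg off
drbdadm disconnect $RES
drbdadm connect $RES"

# Print the commands for review instead of executing them on a live cluster.
printf '%s\n' "$cmds"
```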

big thnx

.r

On Thu, Jul 30, 2009 at 11:29 PM, Maros Timko <timkom at gmail.com> wrote:

> No,
>
> I meant:
> http://www.gossamer-threads.com/lists/drbd/users/17212
> http://www.gossamer-threads.com/lists/drbd/users/17356
> http://www.gossamer-threads.com/lists/drbd/users/16962
>
> Tino
>
> 2009/7/30 Rupert <rupertt at gmail.com>
>
>> Maros Timko wrote:
>>
>>> Rupertt,
>>>  TOE - TCP Offloading Engine is HW support of network card to improve
>>> performance. It is usually enabled by default on your network card.
>>> Check "ethtool -k" option and search on this list.
>>> Tino
>>> 2009/7/30 Rupert <rupertt at gmail.com>
>>>
>>>    Maros Timko wrote:
>>>
>>>        Heiko,
>>>        which machine crashes? Primary only, or both?
>>>        I think it is the primary, but only when a Xen VM is running on
>>>        top of DRBD. You can prevent the crashes by disabling TOE. Or use
>>>        DRBD 8.3.2; AFAIK there should be a parameter that helps
>>>        in such setups.
>>>        Check older posts in this list.
>>>         Tino
>>>
>>> Hello tino,
>>
>> you mean this post?
>>
>> http://archives.free.net.ph/message/20071214.134353.11ba93b1.de.html
>>
>> this is what ethtool show me:
>> ethtool -k eth0
>> Offload parameters for eth0:
>> Cannot get device rx csum settings: Operation not supported
>> Cannot get device udp large send offload settings: Operation not supported
>> rx-checksumming: off
>> tx-checksumming: on
>> scatter-gather: on
>> tcp segmentation offload: on
>> udp fragmentation offload: off
>> generic segmentation offload: off
>>
>> will disabling tx have any effect on traffic over this device?
>> there are 3 other VMs that use that device, so I hesitate to just change
>> this setting.
>>
>>
>> cheers
>>
>>
>> .r
>>
>>     Hello Tino,
>>>
>>>    in one case I have a primary on each machine, meaning 2 DRBD devices.
>>>    In the other case we have only 1 DRBD device, and there only the
>>>    primary crashed.
>>>    What does TOE mean? I haven't found anything about it yet.
>>>    Someone on this list suggested that I should use protocol A, but
>>>    we don't want to lose
>>>    any data, so we can't use that.
>>>    I am thinking about updating, but I first have to test whether these
>>>    packages work with our system,
>>>    CentOS 5.x.
>>>
>>>    so long
>>>
>>>        2009/7/27 Heiko <rupertt at gmail.com>
>>>
>>>
>>>
>>>
>>>           On Mon, Jul 27, 2009 at 2:18 PM, Martin Gombac
>>>        <martin at isg.si> wrote:
>>>
>>>               In my humble opinion, drbd doesn't crash if you lose
>>>        network
>>>               connections. :-)
>>>               Would be a first in history.
>>>               Maybe heartbeat puts both resources to primary and when they
>>>               join you get split brain.
>>>               In this case you didn't set up heartbeat correctly.
>>>
>>>           Hello M.,
>>>
>>>           i had some people here who confirmed a bug in protocol C that
>>>           causes these crashes.
>>>           I also thought of heartbeat, but I now have 2 ucast devices
>>>        and we
>>>           still have crashes and
>>>           no entries in the logfile that say it reboots on purpose:
>>>
>>>           only these messages:
>>>
>>>           heartbeat[2880]: 2009/07/27_11:59:37 ERROR: glib: Unable to
>>>        send
>>>           [-1] ucast packet: No such device
>>>           heartbeat[2880]: 2009/07/27_11:59:37 ERROR: write_child: write
>>>           failure on ucast eth0.: No such device
>>>
>>>
>>>
>>>
>>>           my ha config looks like this
>>>
>>>           #use_logd on
>>>           logfile /var/log/ha-log
>>>           debugfile /var/log/ha-debug
>>>           logfacility local0
>>>           keepalive 2
>>>           deadtime 10
>>>           warntime 3
>>>           initdead 20
>>>           udpport 694
>>>           ucast eth0 172.17.8.201
>>>           ucast eth0 172.17.8.202
>>>           ucast eth1 172.31.0.1
>>>           ucast eth1 172.31.0.2
>>>           node xen-a1.fra1
>>>           node xen-b1.fra1
>>>           auto_failback on
>>>
>>>           haresources:
>>>
>>>           xen-a1.fra1 drbddisk::blrg  xen::blrg-vm1
>>>
>>>           thnx a lot
>>>
>>>
>>>           .r
>>>
>>>
>>>               Regards,
>>>               M.
>>>
>>>
>>>               On 27, Jul2009, at 1:47 PM, Heiko wrote:
>>>
>>>                   Hello,
>>>
>>>                   i have to convince my boss that the server crashes I
>>>                   reported on this list are due
>>>                   to a non-existing dedicated line! We have all our drbd
>>>                   traffic routed through switches
>>>                   and they often just crash.
>>>
>>>                   Now I have to create a test setup to show them that
>>>        when I
>>>                   pull the cord / shut down the network device
>>>                   the machines tend to crash.
>>>                   Since I don't have any spare machines I would like
>>>        to use
>>>                   loopback devices;
>>>                   are these supported by now? I found some list
>>>        entries that
>>>                   say this is not supported by drbd!
>>>                   Would this be enough to get the machines crashing?
>>>                   We use drbd 8.0 and 8.2 and have these crashes on both.
>>>
>>>
>>>                   cheers.
>>>                   _______________________________________________
>>>                   drbd-user mailing list
>>>                   drbd-user at lists.linbit.com
>>>                   http://lists.linbit.com/mailman/listinfo/drbd-user
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>
>
>