Matteo Tescione matteo at rmnet.it
Thu Oct 11 23:52:16 CEST 2007


Hi folks,

First of all, thank you for your great work! I have been using DRBD for a
while now for our HA iSCSI NAS.
Hardware is two quad-core Xeon 5345 processors with a 2 TB RAID 6 array on a
3ware 9650 with 256 MB of battery-backed cache.
Network is two gigabit Ethernet interfaces bonded in round-robin mode (alb
mode seems to be a little buggy).
Kernel is , DRBD is 8.2.0, RAM is 4 GB.

Since the last upgrade, DRBD on the secondary node runs out of memory on
every reboot (vmalloc fails) and then starts a full resync, even though it
was 100% in sync before the reboot.

drbd0: conn( WFConnection -> WFReportParams )
drbd0: Handshake successful: Agreed network protocol version 87
drbd0: Peer authenticated using 20 bytes of 'sha1' HMAC
drbd0: data-integrity-alg:
drbd0: drbd_bm_resize called with capacity == 3906037208
drbd0: bitmap: failed to vmalloc 61031836 bytes
drbd0: OUT OF MEMORY! Could not allocate bitmap! Set device size => 0
drbd0: size = 0 KB (0 KB)
drbd0: Becoming sync target due to disk states.
drbd0: peer( Unknown -> Primary ) conn( WFReportParams -> WFBitMapT ) pdsk( DUnknown -> UpToDate )
drbd0: Writing meta data super block now.
drbd0: receive_bitmap: (want != h->length) in
drbd0: error receiving ReportBitMap, l: 4088!
drbd0: peer( Primary -> Unknown ) conn( WFBitMapT -> ProtocolError ) pdsk( UpToDate -> DUnknown )
drbd0: asender terminated
drbd0: tl_clear()
drbd0: Connection closed
drbd0: Writing meta data super block now.
drbd0: conn( ProtocolError -> Unconnected )
drbd0: receiver terminated
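For what it's worth, the ~61 MB vmalloc request in the log is roughly what you would expect for the dirty-block bitmap of a device this size, assuming DRBD tracks one bit per 4 KiB block (the constants below are assumptions for illustration, not taken from the DRBD source):

```python
# Sketch: rough size of a one-bit-per-4KiB-block bitmap for the device
# capacity reported in the log. Constants are assumed, not from DRBD code.
SECTOR_BYTES = 512        # capacity in the log is in 512-byte sectors
BLOCK_BYTES = 4096        # assume one bitmap bit per 4 KiB block

capacity_sectors = 3906037208          # from: drbd_bm_resize called with capacity ==
device_bytes = capacity_sectors * SECTOR_BYTES   # ~2.0 TB
blocks = device_bytes // BLOCK_BYTES             # blocks to track
bitmap_bytes = blocks // 8                       # one bit per block

print(device_bytes, bitmap_bytes)
# bitmap_bytes comes out near the 61031836-byte vmalloc in the log
# (the small difference would be alignment/padding).
```

A single contiguous ~61 MB vmalloc can plausibly fail on a 32-bit kernel with 4 GB of RAM, where the vmalloc address space is small and fragments easily, which would explain the failure appearing only at boot-time resize.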

Incidentally, I experienced strange behaviour using the same kernel with
bonding and the e1000 driver compiled as a module, such as resyncing at a
few KB/sec. I'm going to upgrade to the latest 2.6.23, but I would like to
know what's going on.
Additionally, bonding two gigabit links does not seem to help get past the
1-gigabit wall (the syncer seems stuck at 110-115 MB/sec).
The disk subsystem delivers 300-500 MB/sec without problems.
I run two bonded e1000 interfaces; tests were done in both rr and alb mode,
with TSO enabled/disabled and flow control disabled. iperf shows:

[  5]  0.0-10.0 sec  1.30 GBytes  1.12 Gbits/sec
[  4]  0.0-10.0 sec   578 MBytes   484 Mbits/sec
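A quick unit conversion suggests 110-115 MB/sec is essentially single-gigabit line rate, i.e. the bond is not striping the sync traffic across both links (a single TCP connection typically stays on one slave, or is capped by reordering in rr mode):

```python
# 1 Gbit/s expressed in MB/s, before protocol overhead.
line_rate_mb = 1_000_000_000 / 8 / 1_000_000   # 125.0 MB/s raw
# Ethernet/IP/TCP framing typically costs a few percent,
# landing right around the observed 110-115 MB/s ceiling.
print(line_rate_mb)
```

The iperf numbers point the same way: 1.12 Gbits/sec total across two streams is barely above one link's capacity, and the second stream only getting 484 Mbits/sec is consistent with uneven slave utilisation rather than a clean 2x aggregate.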

Any thoughts?
Thanks in advance,
So long and thanks for all the fish
#Matteo Tescione
#RMnet srl
