[DRBD-user] VMware ESX 3 + iSCSI Enterprise Target/DRBD gone terribly wrong - help!

Lars Ellenberg lars.ellenberg at linbit.com
Fri Aug 3 17:08:59 CEST 2007

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Wed, Aug 01, 2007 at 04:52:30PM -0400, Ross S. W. Walker wrote:
> > -----Original Message-----
> > From: drbd-user-bounces at lists.linbit.com 
> > [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of Lars 
> > Ellenberg
> > Sent: Wednesday, August 01, 2007 4:18 PM
> > To: drbd-user at lists.linbit.com
> > Subject: Re: [DRBD-user] VMware ESX 3 + iSCSI Enterprise 
> > Target/DRBD gone terribly wrong - help!
> > 
> > On Tue, Jul 03, 2007 at 12:38:23AM -0700, Jakobsen wrote:
> > > I have some critical issues with three ESX 3.0.1 servers, 
> > > that access an iSCSI Enterprise Target. The iSCSI target is
> > > replicating with another server with DRBD, and everything works
> > > fine WITHOUT DRBD ENABLED.
> > > 
> > > When I enable DRBD, it starts a full sync that takes about 1 hour to
> > > complete, and everything seems fine. After the full sync, DRBD is
> > > not under heavy load anymore. Suddenly - without any errors on the
> > > DRBD servers - the VMware guests start throwing I/O errors at me,
> > > and everything goes read-only.
> > > 
> > > Have any of you guys got the same problem?
> > > I have no clue what the problem can be.
> > 
> > meanwhile...  they hired me for consulting.

...
[3] http://www.tuxyturvy.com/blog/index.php?/archives/31-VMware-ESX-and-ext3-journal-aborts.html
...

> > the solution is to change the guest linux kernel, or at least
> > patch its mptscsih driver module as explained in [3] and [1].
> 
> So would you say the core problem has to do with how well the mptscsi
> driver handles scsi timeouts? Sounds like it in the description.

exactly.
what you will see on the target side is loads of this:
14:48:50 kernel: execute_task_management(1212) 3dff73a 1 27f7df03
14:48:50 kernel: cmnd_abort(1143) 3dff727 1 0 42 4096 0 0
14:48:50 kernel: execute_task_management(1212) 3dff73b 2 ffffffff
14:48:50 kernel: execute_task_management(1212) 3dff73c 1 2af7df03
14:48:50 kernel: cmnd_abort(1143) 3dff72a 1 0 42 512 0 0
14:48:50 kernel: execute_task_management(1212) 3dff743 2 ffffffff
14:48:50 kernel: data_out_start(1032) unable to find scsi task 3dff73d 26ba9c0
14:48:50 kernel: cmnd_skip_pdu(454) 3dff73d 1e 0 4096
14:48:50 kernel: data_out_start(1032) unable to find scsi task 3dff73e 26ba9c1
14:48:50 kernel: cmnd_skip_pdu(454) 3dff73e 1e 0 8192
14:48:50 kernel: data_out_start(1032) unable to find scsi task 3dff73e 26ba9c1
14:48:50 kernel: cmnd_skip_pdu(454) 3dff73e 1e 0 8192
14:48:50 kernel: data_out_start(1032) unable to find scsi task 3dff73e 26ba9c1
14:48:50 kernel: cmnd_skip_pdu(454) 3dff73e 1e 0 4096
14:48:50 kernel: data_out_start(1032) unable to find scsi task 3dff73f 26ba9c2

which is no problem per se, just the target logging warnings about
unexpected actions received from the initiator...
the real problem is not even the ESX initiator, but that it reports
"SCSI BUSY" to the linux guest, and certain versions of the mptscsih
driver (which the linux guest uses to access the virtual ESX SCSI
drives) map this to "DID_BUS_BUSY ... | scsi_status",
which causes the linux scsi midlayer to give up early,
as explained in the above link (and the other links given in the
previous mail).
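
to make that mapping concrete, the completion handling looks roughly
like this (illustration only, not the actual mptscsih source; the exact
case labels and surrounding code differ between kernel versions):

/* illustration only -- shows the status mapping described above,
 * not the real mptscsih completion path. */
#include <linux/types.h>
#include <scsi/scsi.h>          /* DID_OK, DID_BUS_BUSY, SAM_STAT_BUSY */
#include <scsi/scsi_cmnd.h>     /* struct scsi_cmnd */

/* unpatched: a SCSI BUSY completion is turned into a transport error
 * ("bus busy").  the midlayer retries that only a handful of times and
 * then fails the request, so ext3 in the guest sees -EIO and aborts its
 * journal (filesystem goes read-only). */
static void busy_completion_unpatched(struct scsi_cmnd *sc, u8 scsi_status)
{
	sc->result = (DID_BUS_BUSY << 16) | scsi_status;
}

/* patched (per [3]/[1]): keep the host byte DID_OK and report only the
 * SCSI status; SAM_STAT_BUSY then makes the midlayer requeue and retry
 * the command instead of giving up early. */
static void busy_completion_patched(struct scsi_cmnd *sc, u8 scsi_status)
{
	sc->result = (DID_OK << 16) | scsi_status;
}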

> Also I have found that Ethernet flow-control can wreak havoc with
> iSCSI during heavy I/O, which will present itself as a series of
> timeouts.
> 
> Ethernet flow-control should be avoided in favor of the new TCP
> scaling in most new OSes where possible, and completely avoided when
> doing jumbo MTUs, as most switches these days just do not have the
> buffering and pipelining for these large frames.

thanks, it is good to have this for reference!
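
for anyone who wants to try it: turning pause-frame flow control off on
the target and initiator NICs is just "ethtool -A <if> autoneg off rx off
tx off"; the same thing done programmatically via the ETHTOOL_SPAUSEPARAM
ioctl looks roughly like this (untested sketch, error handling kept
minimal):

/* untested sketch: programmatic equivalent of
 * "ethtool -A <interface> autoneg off rx off tx off" */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/types.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(int argc, char **argv)
{
	struct ethtool_pauseparam pp;
	struct ifreq ifr;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <interface>\n", argv[0]);
		return 1;
	}

	/* any socket works as a handle for the ethtool ioctl */
	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&pp, 0, sizeof(pp));
	pp.cmd = ETHTOOL_SPAUSEPARAM;
	pp.autoneg = 0;		/* do not renegotiate pause frames */
	pp.rx_pause = 0;	/* ignore incoming pause frames */
	pp.tx_pause = 0;	/* never send pause frames */

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, argv[1], IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&pp;

	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
		perror("ETHTOOL_SPAUSEPARAM");
		return 1;
	}

	printf("flow control disabled on %s\n", argv[1]);
	close(fd);
	return 0;
}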

-- 
: Lars Ellenberg                            Tel +43-1-8178292-0  :
: LINBIT Information Technologies GmbH      Fax +43-1-8178292-82 :
: Vivenotgasse 48, A-1120 Vienna/Europe    http://www.linbit.com :
__
please use the "List-Reply" function of your email client.
