[DRBD-user] cyrus-imapd on DRBD partitions

Cheewai Lai CLai at gov.bw
Tue Nov 23 08:45:42 CET 2004

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi,

Has anyone successfully run cyrus-imapd on DRBD partitions?

In my case, I have set up two identical nodes running SUSE Linux
Enterprise Server 9 (2.6.5-7 kernel) with the bundled
drbd-0.7.0-59.22 and heartbeat-1.2.2-0.6 packages.
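
For reference, the DRBD resource definition follows the stock 0.7
layout, roughly like this (hostnames, devices and IP addresses are
placeholders, not my real values):

    resource r0 {
      protocol C;
      disk {
        on-io-error detach;   # consistent with the "Detaching..." messages below
      }
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sda5;          # LUN on the FC SAN
        address   192.168.1.1:7788;   # over the 1Gbps link
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   192.168.1.2:7788;
        meta-disk internal;
      }
    }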

Repeatedly copying and deleting several hundred MB of data on the
DRBD partitions (reiserfs) worked fine; the test was along the
lines of the loop below.
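
(A rough sketch of the test; source directory and mount point are
examples:)

    # hammer the DRBD-backed filesystem with repeated copy/delete cycles
    for i in 1 2 3 4 5; do
        cp -a /usr/share /mnt/drbd0/test
        sync
        rm -rf /mnt/drbd0/test
    done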

But as soon as I pointed cyrus-imapd's "configdirectory" and
"partition-default" at the DRBD partitions, things fell over. The
relevant imapd.conf entries look something like this (the mount
point is a placeholder):

master[4506]: process started
master[4507]: about to exec /usr/lib/cyrus/bin/ctl_cyrusdb
kernel: drbd0: Local IO failed. Detaching...
kernel: drbd0: Local IO failed. Detaching...
kernel: drbd0: Local IO failed. Detaching...
kernel: drbd0: Local IO failed. Detaching...
kernel: drbd0: local read failed, retrying remotely
kernel: drbd0: local read failed, retrying remotely
kernel: drbd0: local read failed, retrying remotely
kernel: drbd0: local read failed, retrying remotely
kernel: drbd0: /usr/src/packages/BUILD/kernel-smp-2.6.5/modules-2.6.5/drbd/drbd_actlog.c:649: Connected flags=0x5509
kernel: drbd0: /usr/src/packages/BUILD/kernel-smp-2.6.5/modules-2.6.5/drbd/drbd_actlog.c:649: Connected flags=0x5509
[this last message repeats non-stop]

The secondary node showed the partition as consistent, but
/proc/drbd on the primary node showed:
version: 0.7-pre8 (api:74/proto:72)
 
0: cs:DiskLessClient st:Primary/Secondary ld:Inconsistent
     ns:0 nr:0 dw:684600 dr:2008 al:360 bm:0 lo:0 pe:0 ua:0 ap:0


DRBD never recovered from this state, and the system would
eventually hang. When I mounted the SAN disks directly, bypassing
DRBD entirely (roughly as below), cyrus-imapd worked fine.
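
(Device and mount point are placeholders:)

    # mount the FC LUN directly, with no DRBD in the I/O path
    mount -t reiserfs /dev/sda5 /var/spool/imap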

Any idea how I could go about troubleshooting this?


Details of my setup:
- SUSE Linux Enterprise Server 9 with the 2.6.5-7.111-smp kernel
- DRBD partitions backed by fibre-channel-attached SAN disks
  (IBM Shark), via QLogic FC HBAs.
- Each node has two Ethernet connections (switched, not crossover),
  one 100Mbps and one 1Gbps; the 1Gbps link carries the heartbeat
  and DRBD traffic. The heartbeat resource chain is sketched below.
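
The heartbeat side is a plain 1.x haresources setup, roughly like
this (node name, resource name, mount point and init script name
are placeholders):

    # /etc/ha.d/haresources
    # on the active node: take over the DRBD device, mount it, start cyrus
    node1 drbddisk::r0 Filesystem::/dev/drbd0::/mnt/drbd0::reiserfs cyrus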


