[DRBD-user] Debian + DRBD + SATA RAID with problems

Mateus Longo mateus at insideracing.com.br
Wed Apr 18 15:44:24 CEST 2007

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi everybody,

I have 2 servers running Debian, each with 3 SATA HDs holding a RAID1
and a RAID5 partition. The RAID5 partition backs a DRBD device, but at
the moment only one machine is running because both are getting a
hardware upgrade. Sometimes the filesystem goes read-only with the
errors shown below.
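
For reference, the DRBD resource is set up roughly like this (a sketch
from memory; the peer hostname LS2 and the addresses are placeholders,
the real file has a few more options):

resource r0 {
  protocol C;
  on LS1 {
    device    /dev/drbd0;
    disk      /dev/md1;           # the RAID5 array
    address   192.168.0.1:7788;   # placeholder address
    meta-disk internal;
  }
  on LS2 {                        # placeholder peer hostname
    device    /dev/drbd0;
    disk      /dev/md1;
    address   192.168.0.2:7788;   # placeholder address
    meta-disk internal;
  }
}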
Relevant part of the syslog:
Apr 17 10:33:13 LS1 kernel: EXT3-fs error (device drbd0):
ext3_free_blocks_sb: bit already cleared for block 9590969
Apr 17 10:33:14 LS1 kernel: Aborting journal on device drbd0.
Apr 17 10:33:14 LS1 kernel: ext3_abort called.
Apr 17 10:33:14 LS1 kernel: EXT3-fs error (device drbd0):
ext3_journal_start_sb: Detected aborted journal
Apr 17 10:33:14 LS1 kernel: Remounting filesystem read-only
Apr 17 10:33:14 LS1 kernel: EXT3-fs error (device drbd0) in
ext3_free_blocks_sb: Journal has aborted
Apr 17 10:33:14 LS1 last message repeated 3 times
Apr 17 10:33:14 LS1 kernel: EXT3-fs error (device drbd0) in
ext3_reserve_inode_write: Journal has aborted
Apr 17 10:33:14 LS1 kernel: EXT3-fs error (device drbd0) in
ext3_truncate: Journal has aborted
Apr 17 10:33:14 LS1 kernel: EXT3-fs error (device drbd0) in
ext3_reserve_inode_write: Journal has aborted
Apr 17 10:33:14 LS1 kernel: EXT3-fs error (device drbd0) in
ext3_orphan_del: Journal has aborted
Apr 17 10:33:14 LS1 kernel: EXT3-fs error (device drbd0) in
ext3_reserve_inode_write: Journal has aborted
Apr 17 10:33:14 LS1 kernel: __journal_remove_journal_head: freeing
b_committed_data
Apr 17 10:33:14 LS1 kernel: __journal_remove_journal_head: freeing
b_committed_data

# cat /proc/drbd
version: 0.7.16 (api:77/proto:74)
SVN Revision: 2066 build by root at Antonov225P, 2006-10-04 09:33:48
 0: cs:WFConnection st:Primary/Unknown ld:Consistent
    ns:0 nr:0 dw:14351332 dr:94114897 al:7950 bm:7950 lo:0 pe:0 ua:0 ap:0
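
From what I understand, cs:WFConnection st:Primary/Unknown just means
this node is Primary and still waiting for its peer, which should be
expected while the second machine is out for the upgrade. Since drbd0
sits on top of md1, I was going to check the disks underneath for
low-level errors, along these lines:

# kernel messages about the SATA layer
dmesg | grep -i -e ata -e 'I/O error'
# SMART health summary for each member disk (needs smartmontools)
smartctl -H /dev/sda
smartctl -H /dev/sdb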


And then I got this:
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
md1 : active raid5 sdb3[1] sda3[0]
      299788800 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]

md0 : active raid1 sdb1[1] sda1[0]
      6000128 blocks [3/2] [UU_]
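
Both arrays show [3/2] [UU_], so they are running degraded with the
third member missing. Once the replacement disk is in I was going to
re-add it roughly like this (assuming it shows up as /dev/sdc and is
partitioned like the other two):

# add the new partitions back into the degraded arrays
mdadm /dev/md0 --add /dev/sdc1
mdadm /dev/md1 --add /dev/sdc3
# watch the rebuild progress
watch cat /proc/mdstat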

Can anybody help me?

-- 
Cheers
Mateus Longo
www.insideracing.com.br
