Raid problem


Vdubjunkie

Hello all:

I call this a RAID problem rather than a drive problem because the RAID layer complains about an offline drive while the system itself sees the drive fine. Today I was able to run fdisk and list the partition table on all three drives involved in this RAID.

 

Some history: this RAID worked fine for a long time, but "broke" after a dirty system shutdown. The configuration is three drives, each with two partitions. The first partition of each drive is part of a RAID 5 array, and the second partition of each contributes to a RAID 1 array. My raidtab is simple and hasn't changed.
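For reference, the RAID 5 half of that layout would look roughly like this in raidtab. This is only a sketch: the hdd1/hde1/hdh1 device names and the chunk-size/parity settings are inferred from the dmesg below, not copied from my actual raidtab, so treat them as assumptions.

```
# Sketch of a raidtab stanza for the RAID 5 array (md0).
# Device names inferred from the dmesg log -- verify before use.
raiddev /dev/md0
    raid-level            5
    nr-raid-disks         3
    nr-spare-disks        0
    persistent-superblock 1
    parity-algorithm      left-symmetric
    chunk-size            32
    device                /dev/hdd1
    raid-disk             0
    device                /dev/hde1
    raid-disk             1
    device                /dev/hdh1
    raid-disk             2
```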

 

Here is a potentially useful bit of info:

 

# dmesg
<snip>
CMD649: chipset revision 2
CMD649: not 100% native mode: will probe irqs later
CMD649: ROM enabled at 0xe4000000
   ide2: BM-DMA at 0xec00-0xec07, BIOS settings: hde:pio, hdf:pio
   ide3: BM-DMA at 0xec08-0xec0f, BIOS settings: hdg:pio, hdh:pio
hda: WDC WD600BB-32BSA0, ATA DISK drive
hda: DMA disabled
blk: queue c03cb420, I/O limit 4095Mb (mask 0xffffffff)
hdc: CD-RW BCE2410IM, ATAPI CD/DVD-ROM drive
hdd: IC35L120AVVA07-0, ATA DISK drive
hdc: DMA disabled
hdd: DMA disabled
blk: queue c03cb9a8, I/O limit 4095Mb (mask 0xffffffff)
hde: Maxtor 6Y120P0, ATA DISK drive
hdf: WDC WD800BB-00BSA0, ATA DISK drive
blk: queue c03cbcb8, I/O limit 4095Mb (mask 0xffffffff)
blk: queue c03cbdf4, I/O limit 4095Mb (mask 0xffffffff)
hdh: Maxtor 6Y120P0, ATA DISK drive
blk: queue c03cc240, I/O limit 4095Mb (mask 0xffffffff)
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
ide1 at 0x170-0x177,0x376 on irq 15
ide2 at 0xdc00-0xdc07,0xe002 on irq 10
ide3 at 0xe400-0xe407,0xe802 on irq 10
hda: host protected area => 1
hda: 117231408 sectors (60022 MB) w/2048KiB Cache, CHS=7297/255/63, UDMA(66)
hdd: host protected area => 1
hdd: 241254720 sectors (123522 MB) w/1863KiB Cache, CHS=239340/16/63, UDMA(66)
hde: host protected area => 1
hde: 240121728 sectors (122942 MB) w/7936KiB Cache, CHS=238216/16/63, UDMA(100)
hdf: host protected area => 1
hdf: 156301488 sectors (80026 MB) w/2048KiB Cache, CHS=155061/16/63, UDMA(100)
hdh: host protected area => 1
hdh: 240121728 sectors (122942 MB) w/7936KiB Cache, CHS=238216/16/63, UDMA(100)
Partition check:
/dev/ide/host0/bus0/target0/lun0: p1 p2 < p5 p6 p7 p8 p9 p10 >
/dev/ide/host0/bus1/target1/lun0:<6> [PTBL] [15017/255/63] p1 p2
/dev/ide/host2/bus0/target0/lun0: p1 p2
/dev/ide/host2/bus0/target1/lun0:<6> [PTBL] [9729/255/63] p1
/dev/ide/host2/bus1/target1/lun0: p1 p2
md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
<snip>
md: autorun ...
md: considering ide/host2/bus1/target1/lun0/part1 ...
md:  adding ide/host2/bus1/target1/lun0/part1 ...
md:  adding ide/host0/bus1/target1/lun0/part1 ...
md: created md0
md: bind<ide/host0/bus1/target1/lun0/part1,1>
md: bind<ide/host2/bus1/target1/lun0/part1,2>
md: running: <ide/host2/bus1/target1/lun0/part1><ide/host0/bus1/target1/lun0/part1>
md: ide/host2/bus1/target1/lun0/part1's event counter: 0000004a
md: ide/host0/bus1/target1/lun0/part1's event counter: 0000004a
raid5: measuring checksumming speed
  8regs     :  1015.600 MB/sec
  32regs    :   483.200 MB/sec
  pII_mmx   :  1228.000 MB/sec
  p5_mmx    :  1269.200 MB/sec
raid5: using function: p5_mmx (1269.200 MB/sec)
md: raid5 personality registered as nr 4
md0: max total readahead window set to 496k
md0: 2 data-disks, max readahead per data-disk: 248k
raid5: device ide/host2/bus1/target1/lun0/part1 operational as raid disk 2
raid5: device ide/host0/bus1/target1/lun0/part1 operational as raid disk 0
raid5: md0, not all disks are operational -- trying to recover array
raid5: allocated 3284kB for md0
raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2
RAID5 conf printout:
--- rd:3 wd:2 fd:1
disk 0, s:0, o:1, n:0 rd:0 us:1 dev:ide/host0/bus1/target1/lun0/part1
disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev 00:00]
disk 2, s:0, o:1, n:2 rd:2 us:1 dev:ide/host2/bus1/target1/lun0/part1
RAID5 conf printout:
--- rd:3 wd:2 fd:1
disk 0, s:0, o:1, n:0 rd:0 us:1 dev:ide/host0/bus1/target1/lun0/part1
disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev 00:00]
disk 2, s:0, o:1, n:2 rd:2 us:1 dev:ide/host2/bus1/target1/lun0/part1
md: updating md0 RAID superblock on device
md: ide/host2/bus1/target1/lun0/part1 [events: 0000004b]<6>(write) ide/host2/bus1/target1/lun0/part1's sb offset: 38400128
md0: no spare disk to reconstruct array! -- continuing in degraded mode
md: ide/host0/bus1/target1/lun0/part1 [events: 0000004b]<6>(write) ide/host0/bus1/target1/lun0/part1's sb offset: 38403264
md: ... autorun DONE.
[events: 00000047]
[events: 00000047]
[events: 00000047]
md: autorun ...
md: considering ide/host2/bus1/target1/lun0/part2 ...
md:  adding ide/host2/bus1/target1/lun0/part2 ...
md:  adding ide/host2/bus0/target0/lun0/part2 ...
md:  adding ide/host0/bus1/target1/lun0/part2 ...
md: created md1
md: bind<ide/host0/bus1/target1/lun0/part2,1>
md: bind<ide/host2/bus0/target0/lun0/part2,2>
md: bind<ide/host2/bus1/target1/lun0/part2,3>
md: running: <ide/host2/bus1/target1/lun0/part2><ide/host2/bus0/target0/lun0/part2>
    <ide/host0/bus1/target1/lun0/part2>
md: ide/host2/bus1/target1/lun0/part2's event counter: 00000047
md: ide/host2/bus0/target0/lun0/part2's event counter: 00000047
md: ide/host0/bus1/target1/lun0/part2's event counter: 00000047
md: raid0 personality registered as nr 2
md1: max total readahead window set to 768k
md1: 3 data-disks, max readahead per data-disk: 256k
raid0: looking at ide/host0/bus1/target1/lun0/part2
raid0:   comparing ide/host0/bus1/target1/lun0/part2(82220544) with ide/host0/bus1/target1/lun0/part2(82220544)
raid0:   END
raid0:   ==> UNIQUE
raid0: 1 zones
raid0: looking at ide/host2/bus0/target0/lun0/part2
raid0:   comparing ide/host2/bus0/target0/lun0/part2(81660480) with ide/host0/bus1/target1/lun0/part2(82220544)
raid0:   NOT EQUAL
raid0:   comparing ide/host2/bus0/target0/lun0/part2(81660480) with ide/host2/bus0/target0/lun0/part2(81660480)
raid0:   END
raid0:   ==> UNIQUE
raid0: 2 zones
raid0: looking at ide/host2/bus1/target1/lun0/part2
raid0:   comparing ide/host2/bus1/target1/lun0/part2(81660480) with ide/host0/bus1/target1/lun0/part2(82220544)
raid0:   NOT EQUAL
raid0:   comparing ide/host2/bus1/target1/lun0/part2(81660480) with ide/host2/bus0/target0/lun0/part2(81660480)
raid0:   EQUAL
raid0: FINAL 2 zones
raid0: zone 0
raid0: checking ide/host0/bus1/target1/lun0/part2 ... contained as device 0
 (82220544) is smallest!.
raid0: checking ide/host2/bus0/target0/lun0/part2 ... contained as device 1
 (81660480) is smallest!.
raid0: checking ide/host2/bus1/target1/lun0/part2 ... contained as device 2
raid0: zone->nb_dev: 3, size: 244981440
raid0: current zone offset: 81660480
raid0: zone 1
raid0: checking ide/host0/bus1/target1/lun0/part2 ... contained as device 0
 (82220544) is smallest!.
raid0: checking ide/host2/bus0/target0/lun0/part2 ... nope.
raid0: checking ide/host2/bus1/target1/lun0/part2 ... nope.
raid0: zone->nb_dev: 1, size: 560064
raid0: current zone offset: 82220544
raid0: done.
raid0 : md_size is 245541504 blocks.
raid0 : conf->smallest->size is 560064 blocks.
raid0 : nb_zone is 439.
raid0 : Allocating 3512 bytes for hash.
md: updating md1 RAID superblock on device
md: ide/host2/bus1/target1/lun0/part2 [events: 00000048]<6>(write) ide/host2/bus1/target1/lun0/part2's sb offset: 81660480
md: ide/host2/bus0/target0/lun0/part2 [events: 00000048]<6>(write) ide/host2/bus0/target0/lun0/part2's sb offset: 81660480
md: ide/host0/bus1/target1/lun0/part2 [events: 00000048]<6>(write) ide/host0/bus1/target1/lun0/part2's sb offset: 82220544
md: ... autorun DONE.
raid5: switching cache buffer size, 4096 --> 1024
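The log shows md0 running degraded with no spare ("no spare disk to reconstruct array"). With the raidtools userland this 2.4-era kernel implies, a dropped member is usually re-added by hand. The sketch below only prints the command it would run; /dev/hde1 is my inference by elimination (hdd1 and hdh1 are the two operational members in the log), so confirm the real device for "disk 1" against raidtab before running anything.

```shell
# Hedged recovery sketch, not a prescription.
# /dev/hde1 is an assumption inferred from the dmesg -- verify first.
MD=/dev/md0
PART=/dev/hde1
cmd="raidhotadd $MD $PART"
echo "$cmd"   # printed rather than executed, since this is a sketch
# After re-adding, the rebuild can be watched with: cat /proc/mdstat
```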

 

This is the part I just noticed:

RAID5 conf printout:
--- rd:3 wd:2 fd:1
disk 0, s:0, o:1, n:0 rd:0 us:1 dev:ide/host0/bus1/target1/lun0/part1
disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev 00:00]
disk 2, s:0, o:1, n:2 rd:2 us:1 dev:ide/host2/bus1/target1/lun0/part1
RAID5 conf printout:
--- rd:3 wd:2 fd:1
disk 0, s:0, o:1, n:0 rd:0 us:1 dev:ide/host0/bus1/target1/lun0/part1
disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev 00:00]
disk 2, s:0, o:1, n:2 rd:2 us:1 dev:ide/host2/bus1/target1/lun0/part1

disk 1 is showing up as [dev 00:00] instead of its full /dev/hostx/busx... assignment, and I don't know why. It's definitely not listed that way in raidtab.
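The [dev 00:00] entry is how this md driver prints a slot whose device is gone. A quick way to count such slots in a saved conf printout is to grep for that marker; the heredoc below just reproduces the printout from the dmesg above so the check is self-contained.

```shell
# Count failed slots in a saved RAID5 conf printout.
# The heredoc reproduces the "RAID5 conf printout" lines from dmesg.
failed=$(grep -c 'dev:\[dev 00:00\]' <<'EOF'
disk 0, s:0, o:1, n:0 rd:0 us:1 dev:ide/host0/bus1/target1/lun0/part1
disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev 00:00]
disk 2, s:0, o:1, n:2 rd:2 us:1 dev:ide/host2/bus1/target1/lun0/part1
EOF
)
echo "failed slots: $failed"   # prints: failed slots: 1
```

On a live box the same grep can be pointed at `dmesg` output instead of a heredoc.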

Edited by spinynorman