With RAID1 I cannot update kernel


dude67

I have had this problem for quite some time now, and would appreciate if you'd be able to help out.

 

I have set up my system with SW RAID1 (earlier problem here: https://mandrivausers.org/index.php?showtopic=74504 ).

But ever since I've made that RAID set-up I've been unable to update my kernel. I currently boot to

 

2.6.27.4-desktop-2mnb

 

Here's my RAID1 set-up:

$ cat /proc/mdstat
 Personalities : [raid1] [raid6] [raid5] [raid4]
 md2 : active raid1 sdc9[1] sda9[0]
	   506272256 blocks [2/2] [UU]

 md1 : active raid1 sda8[0] sdc8[1]
	   102398208 blocks [2/2] [UU]

 md0 : active raid1 sda6[0] sdc6[1]
	   20474688 blocks [2/2] [UU]

 unused devices: <none>

 

I've seen a few kernel version updates for Mandriva, but have been unable to use them: I get the error below when booting into any kernel version other than the one mentioned above.

		   ...
  Loading ahci module
  Waiting for driver initialization
  mdadm: /dev/md0 not identified in config file.
  mdadm: error opening /dev/md0: No such file or directory
  Creating root device.
  Mounting root filesystem.
  mount: error mounting /dev/root on /sysroot as ext3: Invalid argument
  Setting up other filesystems.
  setuproot: moving /dev failed: No such file or directory
  setuproot: error mounting /proc: No such file or directory
  setuproot: error mounting /sys: No such file or directory
  Switching to new root and running init
  switchroot: /dev does not exist in new root
  Booting has failed.

This is my current /etc/lilo.conf file:

 

# File generated by DrakX/drakboot

# WARNING: do not forget to run lilo after modifying this file
default="desktop_2.6.27.4-2mnb"
boot=/dev/md0
map=/boot/map
install=menu
keytable=/boot/fi-latin1.klt
raid-extra-boot=mbr
menu-scheme=wb:bw:wb:bw
compact
prompt
nowarn
timeout=100
message=/boot/message
image=/boot/vmlinuz
    label="linux"
    root=/dev/md0
    initrd=/boot/initrd.img
    append="splash=verbose"
    vga=788
image=/boot/vmlinuz
    label="linux-nonfb"
    root=/dev/md0
    initrd=/boot/initrd.img
    append="splash=verbose"
other=/dev/sda1
    label="windows"
    table=/dev/sda
other=/dev/sdb1
    label="windows1"
    table=/dev/sdb
    map-drive=0x80
       to=0x81
    map-drive=0x81
       to=0x80
image=/boot/vmlinuz
    label="failsafe"
    root=/dev/md0
    initrd=/boot/initrd.img
    append="failsafe"
image=/boot/vmlinuz-2.6.27-desktop-0.rc8.2mnb
    label="2.6.27-desktoprc8-2mnb"
    root=/dev/md0
    initrd=/boot/initrd-2.6.27-desktop-0.rc8.2mnb.img
    append="splash=verbose"
    vga=788
image=/boot/vmlinuz-2.6.27.4-desktop-1mnb
    label="desktop_2.6.27.4-1mnb"
    root=/dev/md0
    initrd=/boot/initrd-2.6.27.4-desktop-1mnb.img
    append="splash=verbose"
    vga=788
image=/boot/vmlinuz-2.6.27.4-desktop-2mnb
    label="desktop_2.6.27.4-2mnb"
    root=/dev/md0
    initrd=/boot/initrd-2.6.27.4-desktop-2mnb.img
    append="splash=verbose"
    vga=788
image=/boot/vmlinuz-2.6.27.5-desktop-2mnb
    label="desktop_2.6.27.5-2mnb"
    root=/dev/md0
    initrd=/boot/initrd-2.6.27.5-desktop-2mnb.img
    append="splash=verbose"
    vga=788
image=/boot/vmlinuz-2.6.27.7-desktop-1mnb
    label="desktop_2.6.27.7-1mnb"
    root=/dev/md0
    initrd=/boot/initrd-2.6.27.7-desktop-1mnb.img
    append="splash=verbose"
    vga=788

Is there something I should add to /etc/lilo.conf? What am I missing here?


I'd say it's more to do with the initrd than anything else. I've generally had no problems booting a system or upgrading the kernel; the only time I've had problems is when I've converted from a standard filesystem to a RAID array, and then I had to regenerate the initrd.

 

To check this, boot the kernel that works, then regenerate the initrd for the newly installed kernel. Something along these lines:

 

mkinitrd initrd-2.6.17-5mdv.img 2.6.17-5mdv

 

depending, of course, on the initrd filename for the kernel you're trying to boot.


I seem to have these initrd files (plus a lot more files with initrd in their filenames, but I'm guessing these are the relevant ones):

$ locate initrd
	 /initrd						 
	 /boot/initrd-2.6.27-desktop-0.rc8.2mnb.img
	 /boot/initrd-2.6.27.4-desktop-1mnb.img	
	 /boot/initrd-2.6.27.4-desktop-2mnb.img	
	 /boot/initrd-2.6.27.5-desktop-2mnb.img	
	 /boot/initrd-2.6.27.7-desktop-1mnb.img	
	 /boot/initrd-desktop.img				  
	 /boot/initrd.img						  
	 /etc/sysconfig/mkinitrd

 

So I should (?):

mkinitrd initrd-2.6.27.7-desktop-1mnb.img 2.6.27.7-desktop-1mnb
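For safety, the whole step might look something like this (a sketch assuming Mandriva's mkinitrd and that the image lives in /boot; the -f flag overwrites an existing image, so a backup first doesn't hurt):

```shell
# Back up the existing initrd before overwriting it
cp /boot/initrd-2.6.27.7-desktop-1mnb.img /boot/initrd-2.6.27.7-desktop-1mnb.img.bak

# Regenerate the initrd for the 2.6.27.7 kernel; -f forces the overwrite
mkinitrd -f /boot/initrd-2.6.27.7-desktop-1mnb.img 2.6.27.7-desktop-1mnb
```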


What's the content of /etc/mdadm.conf? Booting issues like this are normally caused by a problem with the initrd, but if you've regenerated it, that's not the problem here.

 

/dev/md0 doesn't seem to be listed in your config file, based on the results above, but post your /etc/mdadm.conf and we can check how it looks.


# mdadm configuration file
#
# mdadm will function properly without the use of a configuration file,
# but this file is useful for keeping track of arrays and member disks.
# In general, a mdadm.conf file is created, and updated, after arrays
# are created. This is the opposite behavior of /etc/raidtab which is
# created prior to array construction.
#
#
# the config file takes two types of lines:
#
#	   DEVICE lines specify a list of devices of where to look for
#		 potential member disks
#
#	   ARRAY lines specify information about how to identify arrays so
#		 so that they can be activated
#
# You can have more than one device line and use wild cards. The first
# example includes SCSI the first partition of SCSI disks /dev/sdb,
# /dev/sdc, /dev/sdd, /dev/sdj, /dev/sdk, and /dev/sdl. The second
# line looks for array slices on IDE disks.
#
#DEVICE /dev/sd[bcdjkl]1
#DEVICE /dev/hda1 /dev/hdb1
DEVICE /dev/sda*
DEVICE /dev/sdc*
#
# If you mount devfs on /dev, then a suitable way to list all devices is:
#DEVICE /dev/discs/*/*
#
#
#
# ARRAY lines specify an array to assemble and a method of identification.
# Arrays can currently be identified by using a UUID, superblock minor number,
# or a listing of devices.
#
#	   super-minor is usually the minor number of the metadevice
#	   UUID is the Universally Unique Identifier for the array
# Each can be obtained using
#
#	   mdadm -D <md>
#
#ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
#ARRAY /dev/md1 super-minor=1
#ARRAY /dev/md2 devices=/dev/hda1,/dev/hdb1
ARRAY /dev/md1 devices=/dev/sda8,/dev/sdc8
ARRAY /dev/md2 devices=/dev/sda9,/dev/sdc9
#
# ARRAY lines can also specify a "spare-group" for each array.  mdadm --monitor
# will then move a spare between arrays in a spare-group if one array has a failed
# drive but no spare
#ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df spare-group=group1
#ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977 spare-group=group1
#
# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program.  This can be given with "mailaddr"
# and "program" lines to that monitoring can be started using
#	mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
MAILADDR root@localhost
#MAILADDR root@mydomain.tld
#PROGRAM /usr/sbin/handle-mdadm-events


I see that md0 is not specified in the /etc/mdadm.conf file. Should it be? OK, it's there in the commented example, but then how has it worked for me with this one kernel?

 

I'll add this to the config file and reboot (with a newer kernel):

ARRAY /dev/md0 devices=/dev/sda6,/dev/sdc6
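Rather than typing ARRAY lines by hand, mdadm can apparently generate them for every array that is currently assembled; something like this (run as root, and check the printed lines before appending them):

```shell
# Print an ARRAY line for each currently assembled array
mdadm --detail --scan

# If the output looks right, append it to the config file
mdadm --detail --scan >> /etc/mdadm.conf
```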


Since it wasn't in the file, it should be added, so that mdadm knows about the array. You'll probably find that the initrd needs regenerating after this for it to work. I'm pretty sure this is the cause, given the error at bootup.
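And as the warning at the top of the lilo.conf above says, after regenerating an initrd or editing /etc/lilo.conf, lilo has to be rerun so the boot map is rebuilt; something like:

```shell
# Reinstall the boot loader; -v shows what lilo is doing
lilo -v
```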


Can you tell me if the /dev/md0 node exists?

You mean this? (Sorry for my ignorance - I do appreciate your help!)

$ cat /proc/mdstat
  Personalities : [raid1] [raid6] [raid5] [raid4]
  md2 : active raid1 sdc9[1] sda9[0]
	 506272256 blocks [2/2] [UU]

  md1 : active raid1 sdc8[1] sda8[0]
	 102398208 blocks [2/2] [UU]

  md0 : active raid1 sda6[0] sdc6[1]
	 20474688 blocks [2/2] [UU]

  unused devices: <none>


Nope :)

 

Check the directory /dev to see whether md0, md1, md2, etc. actually exist. They should, since the RAID devices are active, yet at boot time (from your first post) the messages say /dev/md0 doesn't exist.
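Something like this should list the nodes (md devices are block devices with major number 9); if /dev/md0 turns out to be missing, it can in principle be recreated by hand:

```shell
# List the md device nodes; md devices use block major number 9
ls -l /dev/md*

# If /dev/md0 is missing, recreate it by hand (block device, major 9, minor 0)
# mknod /dev/md0 b 9 0
```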

