ianw1974

Installing Mandriva with Software RAID


I was figuring this all out yesterday, since I was having reliability problems with one of my machines. When you boot to install the operating system, there are no immediate options for configuring software RAID arrays - just the basic partitioning schemes.

 

Whilst software RAID probably isn't the most popular, since hardware RAID is much preferred, I don't currently have the technology in my machine to do this. So, I used software RAID, and have submitted this HOWTO for anyone who wants to use it.

 

First, you'll need more than one hard disk. Both mine are IDE; the first is a 20GB disk, and the second is a 160GB disk. Since you cannot create the RAID partitions during the installation process, you'll have to create them manually beforehand. This is a simple task, and requires the use of a utility called "fdisk". As an example, I've listed my hard disk partitions below, the blocks that they utilise, and their rough sizes in MB/GB.

 

This HOWTO has been written for RAID1, but can equally apply for RAID5 as well.

 

/dev/hda1 = blocks 1-12 = 100MB
/dev/hda2 = blocks 13-2434 = 19.9GB

/dev/hdb1 = blocks 1-125 = 1GB
/dev/hdb2 = blocks 126-2547 = 19.9GB
/dev/hdb3 = blocks 2548-end = 138GB

 

/dev/hda1 has been created because the boot section needed to be outside of the RAID array, therefore this will be mounted as "/boot".

 

/dev/hda2 will be mounted as "/" and will be utilised as part of the RAID array. The equivalent partition that it will be mirrored with is /dev/hdb2.

 

/dev/hdb1 is swap, and isn't being mirrored. Neither is my /home mountpoint, since this is larger than the rest of the space I have. If I had 2 x 160GB drives in my system, then everything would have been mirrored.

 

Creating the partitions

 

Boot from the Mandriva CD/DVD and press F1. Type "linux rescue" at the prompt. Afterwards, you should get a menu, and choose the option to go to the command line.

 

For options on what commands to use in fdisk, press "m" once within the program. However, the basics are:

 

n = create new partition
d = delete existing partition
p = show existing partitions
w = save and exit
q = exit without saving

 

Since I have two hard disks, the commands to use with fdisk are as follows:

 

fdisk /dev/hda
fdisk /dev/hdb

 

After one of these commands has been run, you can use the menu options I listed above. Now, my full example:

 

fdisk /dev/hda
n - to create new partition
p - to choose primary rather than extended
1 - if prompted for which primary partition type 1.
press enter for the start block, as we will use the default.
+100M (this is to select 100MB partition, easier than figuring out equivalent in blocks which is 1-12)
n - to create new partition
p - to choose primary
2 - to create 2nd primary partition
press enter for the start block, as we will use the default.
press enter for the end block, as we will use the default.
t - toggle partition type
2 - partition 2
fd - selects auto RAID
w - to save and exit

 

Now the 20GB disk has been partitioned as shown above. Let's do similar for the 160GB disk; however, we need swap and home on here also. So:

 

fdisk /dev/hdb
n
p
1
press enter for default start block
+1024M (1GB swap)
n
p
2
press enter for default start block
2547 (this will equal same size partition for "/" to be mirrored - calculation later)
n
p
3
press enter for default start block
press enter for default end block
t
1
82 (sets type to Linux swap)
t
2
fd
w (to save and exit)

 

That's the second disk configured, exactly how we want it. Swap at the beginning, so that it's faster to access. Then the "/" partition, exactly the same size, followed by /home using the rest of the disk.
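As an aside, the interactive fdisk session above can also be scripted. This is only a sketch, and it assumes the input syntax of a modern sfdisk (util-linux); the sfdisk of this era used a cylinder-based input format, so check `man sfdisk` on your system before trying it. The layout mirrors the /dev/hdb scheme above (swap, RAID member, rest of disk):

```shell
# Sketch only - assumes modern sfdisk input syntax (start,size,type per line).
# Empty start = next free block; empty size = rest of disk; 82 = swap, fd = RAID.
LAYOUT=',1G,82
,19G,fd
,,83'
echo "$LAYOUT"
# To apply it for real (DESTRUCTIVE - double-check the device name first):
# echo "$LAYOUT" | sfdisk /dev/hdb
```

Note that for the mirrored "/" partition you still need to make sure the member on each disk is exactly the same size, as worked out by hand below.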

 

So, how did I work out what size for the partition? Well, easy really. On /dev/hda it started at block 13 and finished at 2434. So, 2434-13=2421 blocks utilised for "/". On /dev/hdb the partitioning was different, "/" didn't start at block 13. Therefore, since the swap ended at 125 and this partition starts at 126, we need to work out the end point for this partition. We know the size is 2421 blocks, so 2421+126=2547, which is the end point, to get the exact same size as the partition on /dev/hda.
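The arithmetic above can be sanity-checked with a couple of lines of shell, using the same numbers as the example:

```shell
# "/" on hda runs from block 13 to 2434, i.e. 2434-13 = 2421 blocks (counting
# the way the example does). On hdb it starts at 126, so it must end at
# 126+2421 = 2547 to match.
ROOT_START_HDA=13
ROOT_END_HDA=2434
BLOCKS=$((ROOT_END_HDA - ROOT_START_HDA))
ROOT_END_HDB=$((126 + BLOCKS))
echo "$BLOCKS $ROOT_END_HDB"
# prints: 2421 2547
```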

 

Installation

 

Now the installation can be completed, so type "reboot" and press enter, to restart, and boot from the Mandriva CD/DVD again. This time, press enter to let it boot normally into the installation.

 

When you get to partitioning, choose "Custom Partitioning", since we already have done it, so we just need to allocate these to mount points, as well as create the arrays.

 

Select the first partition on /dev/hda, and set this to be "/boot", then set the file system type, be it ext2, ext3, reiserfs, etc. I chose reiserfs, but ext2 or ext3 are just as good here. Now select the second partition, and create the array; let it use the default of "md0".

 

On /dev/hdb, the first partition should be green, and will show as swap, so nothing to do here. Select the second partition, and you can then choose an option to add this to the existing array. Select the third partition, and then set this to be "/home" and choose the file system type.

 

Click the Advanced button, go to the array tab, and then modify the file system so that it is what you want it to be, be it ext2, ext3, reiserfs, etc.

 

Continue the install, and let all partitions be formatted. Follow the rest of the installation as you would normally, and reboot the system when complete.

 

Post Installation

 

Now that your system is up and running, you can check the status of your array, by typing this at a terminal prompt:

 

cat /proc/mdstat

 

This will show the rebuild percentage of the array, or whether it's complete and active. The initial sync can take some time depending on the file system/partition size.
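To give an idea of what to expect, here is a made-up but representative /proc/mdstat during the initial sync, plus a grep that pulls out just the progress figure (the sample text is illustrative, not output from my machine):

```shell
# Representative sample of /proc/mdstat mid-resync (illustrative only).
MDSTAT='md0 : active raid1 hdb2[1] hda2[0]
      19534912 blocks [2/2] [UU]
      [==>..................]  resync = 12.6% (2462208/19534912) finish=18.3min speed=15360K/sec'
echo "$MDSTAT" | grep -o 'resync = [0-9.]*%'
# prints: resync = 12.6%
# On a real system you would run: grep -o 'resync = [0-9.]*%' /proc/mdstat
```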

 

Now you have software RAID running on your system.


It is really not immediately obvious, but you can configure a software RAID (even level 5) during installation of Mandriva.

 

When you are at the partitioning step you have to go into expert mode; then you will get some additional options.

 

The partitions you use must be formatted as Linux RAID (you have to choose this as the filesystem type); then you can add them together into a RAID array. You can even build LVM on top of this, which is not bad, since you can later add additional RAID arrays to the LVM and you still have the security of your underlying RAID.

 

All this can even be done from a running system too, e.g. if you just installed two additional drives, you can build a RAID array from MCC (the Mandriva Control Center).

 

The problem is that it was very hard to find. The disk partitioner should have an additional option like "create software RAID" and then guide you through it; I needed at least some trial-and-error to get it working.

 

BTW, the performance of RAID1 + LVM is not that bad at all (I have it running on a P3 550MHz, and it works).

Guest ukginger

Excuse me, call me stupid, but why did you not use the mouse like I did?

 

My install was loaded and running while you were typing all that 1970's stuff in . . .


See the above post; it explains the GUI method.

However, there is the command-line method too, which is also useful to know in case the GUI doesn't work - which in some cases generates weird errors.

But each to their own; we have a choice on what to use. You use the GUI, I sometimes use the command line. But you'll also find that Linux will require you to use the command line at some point.

Let's see how far you get fixing your system without using the command line sometime in the near future when something goes wrong :lol:

Oh, and I can probably configure the disks quicker than you can with the mouse. Or maybe, try a text-based installer when your GUI doesn't work because of graphics card problems, and let's see how fast you are.


I'm using two 6GB IDE drives with one big partition each and software (mdadm) RAID1. /dev/md0 is /dev/hda1 and /dev/hdc1. I'm also using LVM to partition /dev/md0, if that makes a difference. I'm up and running with MDK2007 just fine. If I unplug /dev/hdc and try to boot, everything comes up fine, but if I unplug /dev/hda and try to reboot, the boot loader isn't found.

Here is my lilo.conf:

-------------------------------------
default="linux"
boot=/dev/hda
map=/boot/map
install=menu
keytable=/boot/us.klt
menu-scheme=wb:bw:wb:bw
compact
prompt
nowarn
timeout=20
message=/boot/message
image=/boot/vmlinuz
	label="linux"
	root=/dev/vg0/root
	initrd=/boot/initrd.img
---------------------------------------

What do I have to do to make /dev/hdc boot in case of failure of /dev/hda?

Thanx

Rois

Edited by gimecoffee


You would need to edit and change lilo.conf to look at /dev/hdc. I normally use grub for this, as it's much easier to configure it and get it working correctly for when the drive is missing.

 

You might need to put an entry in lilo.conf that points to /dev/hdc instead for booting the kernel from this disk in the event of failure. This is what I've normally had to do with grub.conf.


I've tried changing boot to /dev/hdc and running lilo -v. The command runs as usual with no error, but when I unplug /dev/hda and boot, I'm still getting the screen filling with '9's. I also tried putting the drive into the primary master position (/dev/hda) and had the same problem. Maybe I'm just not understanding. It seems like if you had a bad primary on a software RAID1, there should be a way to have the secondary boot. Otherwise how would you ever be able to change the primary?


OK, here is my /etc/lilo.conf from my laptop (I'm not using RAID on this, but it's the only example I have, since I'm using grub on my machine with RAID).

 

# File generated by DrakX/drakboot
# WARNING: do not forget to run lilo after modifying this file

default="linux"
boot=/dev/hda
map=/boot/map
keytable=/boot/uk-latin1.klt
menu-scheme=wb:bw:wb:bw
compact
prompt
nowarn
timeout=100
message=/boot/message
image=/boot/vmlinuz
	label="linux"
	root=/dev/hda5
	initrd=/boot/initrd.img
	append=" resume=/dev/hda6 splash=silent"
	vga=788
image=/boot/vmlinuz
	label="linux-nonfb"
	root=/dev/hda5
	initrd=/boot/initrd.img
	append=" resume=/dev/hda6"
image=/boot/vmlinuz
	label="failsafe"
	root=/dev/hda5
	initrd=/boot/initrd.img
	append=" failsafe resume=/dev/hda6"

 

I should think it's only a case of changing the boot= line to point to the other hard disk to get lilo written to its MBR. The 9's you're experiencing mean either that the MBR hasn't been written correctly, or not at all.

 

Grub is by far easier to do, here are the commands to use if you're using grub as your boot loader.

 

grub --no-floppy
root (hd0,1)
setup (hd0)
device (hd0) /dev/hdb
root (hd0,1)
setup (hd0)
quit

 

The first command launches grub without probing the floppy disk drive.

The second command finds the partition with the boot files (/dev/hda2 in my case).

The third command writes the boot loader to the first disk's MBR.

The fourth command maps the second hard disk to (hd0).

The fifth and sixth commands repeat root and setup, this time writing the boot loader to the second disk's MBR.

The last command quits grub.

 

The only differences when you use this are in commands two, four and five, depending on your partition setup.

 

I had /boot separate on /dev/hda2, and therefore (hd0,1) is my partition. If you didn't separate /boot, then it will be on the / partition, so wherever / has been installed is where you need to specify the correct partition.

 

The first disk and first partition is always (hd0,0)

The first disk and second partition (hd0,1)

The second disk and second partition (hd1,1)

 

But since I use the device line to map /dev/hdb to (hd0), you still specify (hd0,1) even though it's the second hard disk, as my example above makes clear.
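The numbering rules above can be captured in a small shell helper. This is a hypothetical convenience function, not anything grub provides - it just converts an IDE device name into grub's (hdN,M) notation:

```shell
# Hypothetical helper: /dev/hda1 -> (hd0,0), /dev/hdb2 -> (hd1,1), etc.
# Drive letters count from a=0; partition numbers are shifted down by one.
to_grub() {
    dev=${1#/dev/hd}              # e.g. "b2"
    letter=${dev%%[0-9]*}         # drive letter, e.g. "b"
    part=${dev#"$letter"}         # partition number, e.g. "2"
    disks=abcdefgh
    before=${disks%%"$letter"*}   # letters before ours: "a" for "b"
    echo "(hd${#before},$((part - 1)))"
}
echo "$(to_grub /dev/hda1) $(to_grub /dev/hdb2)"
# prints: (hd0,0) (hd1,1)
```

Remember the caveat above, though: once a grub `device` line remaps a disk, you use the remapped name, not what this helper would give you.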

 

Note also that my lilo.conf uses /dev/hdx device names (because it's from a machine with no RAID installed - I use grub on the RAID machine). With RAID, these would be /dev/mdx devices, so /dev/md0, /dev/md1, and so on. This is important, because it mounts the array and not the partition itself.

 

My lilo.conf is just an example that you can see, and what I'd expect you to have to change in it to get it working.


Just to uncomplicate things, I decided to try a re-install using RAID1 without LVM, and using ext3 instead of XFS. Before the post-install reboot I changed the boot loader to boot=/dev/md0. That worked perfectly. Upon examination of lilo.conf, the install program has:

---------------------------
. . .
boot=/dev/md0
raid-extra-boot=mbr
. . . .
---------------------------

 

Restarting with /dev/hda unplugged booted right up (/dev/hdc only). Moving /dev/hdc to the primary master (/dev/hda) came right up as well.

 

So introducing XFS or LVM each appears to break it, OR perhaps I'm missing something in the configuration to allow XFS and/or LVM. Any thoughts on why XFS might be causing a problem? I've been searching the web for any info on XFS and RAID and haven't found anything.

 

My next idea was to do two RAID1s. md0 would be 512MB ext3 (if I can't use XFS) and have the root ( / ) on it. Then md1 could have LVM on it with all the rest of my space. The Mandriva installation program does md0 and the reboot works fine as I mentioned above, BUT I'd like to add a hot-spare/spare-device and can't seem to find a reference to make that work. I have been able to use the feature when I manually create an array using the option "--spare-devices=1 /dev/hdb1", but how do you add a spare device after the initial creation that the installation program does, once the system has rebooted? Is that even possible?
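I'm guessing the manual route would be something along these lines - adding a disk to a RAID1 whose active slots are already full should make it a hot spare. Hypothetical device names, and the command is only echoed here rather than run, since it needs root and a real array (double-check against `man mdadm`):

```shell
# Hypothetical device names - adjust to your own array and disk.
ARRAY=/dev/md1
SPARE=/dev/hdb1
# Adding a disk to an array whose active slots are already full should
# make it a hot spare (run as root on the real system):
CMD="mdadm --manage $ARRAY --add $SPARE"
echo "$CMD"
```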

 

Thanx


The filesystem shouldn't be a problem. I use reiserfs on mine, or even ext3, and never had problems, so XFS should be fine.

 

XFS is normally a good file system for multimedia machines that store large files like video movies, etc. I'd recommend reiserfs if you want something nice. I always had problems trying to resize ext3 partitions, and this is why I use reiserfs over anything else. Oh, and it's faster too than XFS and ext3, from comparisons I read a while back.


Well, I hadn't thought about reiserfs. I gave that one a try, and with boot=/dev/md0 and raid-extra-boot=mbr it works perfectly. I guess that would lead me to believe that it's really an XFS+RAID1+LILO issue. I've always used ext3 because that was the installation default, but a friend of mine (who admins large corporate server networks) suggested I look at XFS. Do you have any recommendations one way or another on ReiserFS vs XFS? From everything I've read they are fairly comparable. I presume they each have places where they work best. Mine will be a backup mirror for smb and MySQL.

I put a lot of stock in what my admin buddy says, but it's rather moot if MDK 2007 won't do what it's supposed to like it will with ReiserFS. I think XFS has been around for a long time, and that makes me feel a little better as well.

 

My XFS LILO error during installation says (set to mbr and NOT the first partition):

An error occurred

You can not install the bootloader on a xfs partition

...propogated

 

If I leave boot=/dev/hda during installation and alter lilo.conf once I've booted in, I get:

Fatal: Filesystem would be destroyed by LILO boot sector: /dev/md0

 

I suppose I could do reiserfs (or ext3) on md0 to get the boot issues to work correctly, and then use XFS on md1 for the rest of my file system. Thoughts on mixing filesystems? Good or bad thing?

 

ianw1974, thanks for taking the time to talk about this with me. I appreciate the help.

 

Rois


Mixing is perfectly fine. I'll give an example.

 

I always use reiserfs, because it's good for small and large files. XFS is apparently really bad with small files, and therefore slow. XFS is good for large files, such as MPEGs and all that sort of multimedia stuff. I read about this a while ago.

 

So here's where the mixing comes. For example, I have loads of mpegs stored from my child when I video him - they are large. So I would create a directory called /movies and then I would create a partition on one of my disks and format as XFS. I then mount /movies to this partition. I then have XFS for just the movie stuff, but the rest of the system is reiserfs.

 

So you get the best of both worlds :P

 

However, I actually just use reiserfs completely for all my stuff, but the idea above is an example of what you could do.

 

I never changed my lilo.conf to look at my second hard disk, so can't help you on the little error you're getting. I don't remember seeing an error on this when I was using lilo and raid1 in Mandriva.

I never changed my lilo.conf to look at my second hard disk, so can't help you on the little error you're getting. I don't remember seeing an error on this when I was using lilo and raid1 in Mandriva.

 

There is no error if you keep the default settings (boot=/dev/hda). Even XFS does load and boot with RAID1. The second hard drive is what started me on this little quest. I was testing the RAID1 to make sure that if the primary failed, I (or whoever is responsible for the machine) could actually boot the second drive to keep things going, or just to replace the primary.

With LILO set to boot=/dev/hda, the LILO boot loader didn't get written to /dev/hdc, so when I pulled the plug on /dev/hda, /dev/hdc had no boot loader and couldn't do anything. Probably there would be a way to use the rescue disk to fix this, but I just couldn't believe it would be that difficult. And lo and behold, I was correct.

With ReiserFS or ext3, and LILO altered to boot=/dev/md0 with raid-extra-boot=mbr, LILO is written to all disks in the md0 array. That way, if the primary master (/dev/hda) fails, I can still reboot from the secondary master (/dev/hdc), OR when I need to replace the failed primary master, I can move /dev/hdc into the /dev/hda position and put a new blank disk in the /dev/hdc position for configuration and adding back to the array.
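So for anyone skimming this thread, the lilo.conf lines that made the failover work were simply:

```
boot=/dev/md0
raid-extra-boot=mbr
```

(Plus re-running lilo afterwards, of course.)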

 

Thanks again for all the help.

Rois

Guest Shingoshi
Excuse me, call me stupid, but why did you not use the mouse like I did ?

 

My install was loaded and running while you were typing all that 1970's stuff in . . .

 

Never mind the fact that there is no GUI to (automatically) configure lilo.conf for RAID after the system is already installed and running. Then you absolutely need to know the command line (or at least how to hand-edit the conf).

 

I'm trying to write the boot records to both disks in my array, and that's how I found this post on Google. I want to know that if my first disk fails, the system can automatically boot from the second disk, without me having to use an installation CD/DVD.

 

These are the lines I'm having trouble with:

boot=/dev/md0
raid-extra-boot=/dev/sda,/dev/sdb

What I get instead is this:

Fatal: Filesystem would be destroyed by LILO boot sector: /dev/md0

 

The above only works if I have "boot=/dev/sda" and then comment out the following "raid-extra-boot" line.

 

So, what I would like to know is what line must precede raid-extra-boot for LILO to know that we're dealing with RAID devices.

"boot=/dev/md0" doesn't seem to work, no matter what I do.

 

I read somewhere how to create two separate sections in lilo.conf, one for each of the disks in the RAID array.

reference: (http://tldp.org/HOWTO/Boot+Root+Raid+LILO-3.html)

 

disk=/dev/sda     # first disk
bios=0x80
sectors=63
heads=255
cylinders=16065
partition=/dev/md1
start=63
boot=/dev/sda     # this is the first disk

In a separate section, the following lines are changed accordingly:

disk=/dev/sdb     # second disk
boot=/dev/sdb     # this is the second disk

 

When I change back to "disk=/dev/sda", lilo complains as soon as it reaches "disk=/dev/sdb".

 

So if anyone anywhere has ever managed to get LILO to write a boot record to both disks, PLEASE EXPLAIN!!

Edited by Shingoshi

