
Changing your system to software RAID1


ianw1974

OK, so I only found this out recently, which is a shame, as it would have saved me completely reinstalling my system when I added an additional disk and then wanted to convert it to RAID 1 mirroring.

 

But never mind, here is how you can convert your system for extra redundancy against hard disk failure. Of course, hardware RAID is better, but those of us without the cash can do it the software way. Incidentally, this is more or less the same as what the cheap controllers do with their software RAID in the card's BIOS, so save your money rather than buying one of those :P

 

In this example, I've used the partitioning from an 8GB setup I had within VMware. My machines at home are 2 x 160GB; the principles are the same, just substitute your own partitioning where necessary.

 

Please note, I won't be held responsible for data loss. This is your business, so make sure you have it backed up. The procedure works - I've used it.

 

Pre-conversion

 

Here is how my partitions looked before conversion:

 

Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot	  Start		 End	  Blocks   Id  System
/dev/sda1			   1		 123	  987966   82  Linux swap
/dev/sda2   *		 124		 136	  104422+  83  Linux
/dev/sda3			 137		 259	  987997+  83  Linux
/dev/sda4			 260		1044	 6305512+   5  Extended
/dev/sda5			 260		 503	 1959898+  83 Linux
/dev/sda6			 504		1044	 4345551   83  Linux

 

Each partition effectively mounts as follows:

 

/dev/sda1 = swap
/dev/sda2 = /boot
/dev/sda3 = /
/dev/sda5 = /usr
/dev/sda6 = /var

 

This system was being run as a server, hence the lack of /home; again, substitute your own partitions where appropriate, the procedure is the same. Make a note of this mapping now - once you convert the partition types later, you won't be able to tell what they were otherwise.

 

Install your second hard disk, which in this example becomes /dev/sdb. Once both disks are in the system, boot normally - no need for rescue disks at this stage. I do suggest you switch to runlevel 3 while you work, though; just do:

 

init 3

 

or press CTRL-ALT-F1 at the GUI to get a console, log in as root and then do:

 

service dm stop
service xfs stop

 

to stop the X related services.

 

Altering and creating new partitions

 

OK, so now we need to convert the partitions. To do this, go into fdisk like this:

 

fdisk /dev/sda

 

We need to toggle the type of each partition and change it to "fd", which is Linux raid autodetect. To do this, press "t", then the partition number, then the new type. In my example I have partitions 1, 2, 3, 5 and 6. So:

 

t
1
fd

 

and so on for each of the other partitions. When finished, press:

 

w

 

and fdisk will save and exit. It will warn that the new partition table won't be used until reboot; this is perfectly fine and normal, because your partitions are still mounted - and they have to be, to copy the data off. Now my partitions look like this:

 

Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot	  Start		 End	  Blocks   Id  System
/dev/sda1			   1		 123	  987966   fd  Linux raid autodetect
/dev/sda2   *		 124		 136	  104422+  fd  Linux raid autodetect
/dev/sda3			 137		 259	  987997+  fd  Linux raid autodetect
/dev/sda4			 260		1044	 6305512+   5  Extended
/dev/sda5			 260		 503	 1959898+  fd  Linux raid autodetect
/dev/sda6			 504		1044	 4345551   fd  Linux raid autodetect
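Incidentally, if you'd rather script the type change than step through fdisk interactively, older sfdisk builds have a --change-id option for exactly this (newer util-linux renamed it --part-type). This sketch only prints the commands so you can check them first; drop the echo to run them for real:

```shell
# Dry run: print an sfdisk type-change command for each of my Linux
# partitions (1, 2, 3, 5, 6). Remove the "echo" to actually apply them.
for p in 1 2 3 5 6; do
    echo sfdisk --change-id /dev/sda $p fd
done
```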

 

Now we need to make sure /dev/sdb looks the same. You could do this manually with fdisk, but this method is much better:

 

sfdisk -d /dev/sda | sfdisk /dev/sdb

 

It will do it all for you. Verify with these commands:

 

fdisk -l /dev/sda
fdisk -l /dev/sdb

 

and make sure the block numbers are the same. They should be. That's your disks configured.
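You can also check the two disks mechanically: apart from the device names themselves, the `sfdisk -d` dumps should be identical. A small sketch of the idea (the function name is my own, and the sample dumps here are abbreviated fakes just for the demo - in real use you would feed it the output of `sfdisk -d /dev/sda` and `sfdisk -d /dev/sdb`):

```shell
# Compare two sfdisk -d dumps, ignoring which disk each came from.
same_layout() {
    a=$(echo "$1" | sed 's|/dev/sd[a-z]|DISK|g')
    b=$(echo "$2" | sed 's|/dev/sd[a-z]|DISK|g')
    [ "$a" = "$b" ] && echo match || echo differ
}

# Demo on abbreviated fake dumps:
dump_a="/dev/sda1 : start= 63, size= 1975932, Id=fd"
dump_b="/dev/sdb1 : start= 63, size= 1975932, Id=fd"
same_layout "$dump_a" "$dump_b"    # prints: match
```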

 

Create arrays and copy data

 

Since /dev/sda is in use, we cannot add it to the arrays yet - the data has to be copied off first - so we create each array with the /dev/sda half missing. This is how:

 

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb3
mdadm --create /dev/md3 --level=1 --raid-devices=2 missing /dev/sdb5
mdadm --create /dev/md4 --level=1 --raid-devices=2 missing /dev/sdb6

 

Now it's time to create the filesystems. In this example all my partitions were ext3, but use the appropriate command for your filesystem. Some examples are:

 

mkswap
mke2fs (this is for ext2)
mke2fs -j (this is for ext3)
mkreiserfs

 

So now, format the arrays ready for use (note that you use the /dev/mdX array devices, not the underlying /dev/sdX devices):

 

mkswap /dev/md0
mke2fs -j /dev/md1
mke2fs -j /dev/md2
mke2fs -j /dev/md3
mke2fs -j /dev/md4

 

Now enable the new swap, create a mount point and mount the new root array:

 

swapon /dev/md0
mkdir /mnt/array
mount /dev/md2 /mnt/array

 

Remember that in my example /dev/sda3 was my / partition, and its array is /dev/md2. Now copy the root filesystem:

 

cp -dpRx / /mnt/array

 

Now mount the other arrays and copy the rest:

 

mount /dev/md1 /mnt/array/boot
mount /dev/md3 /mnt/array/usr
mount /dev/md4 /mnt/array/var

cp -dpRx /boot /mnt/array
cp -dpRx /usr /mnt/array
cp -dpRx /var /mnt/array
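The cp flags matter here: -d keeps symlinks as symlinks rather than following them, -p preserves ownership, permissions and timestamps, -R recurses, and -x stays on one filesystem (so /proc and the /mnt/array mounts themselves don't get dragged in). A quick throwaway demonstration of the symlink behaviour, on a temporary directory rather than a real system (GNU cp assumed):

```shell
# Demonstrate that cp -dpRx copies a symlink as a symlink.
mkdir -p /tmp/cpdemo/src
echo hello > /tmp/cpdemo/src/file
ln -s file /tmp/cpdemo/src/link
cp -dpRx /tmp/cpdemo/src /tmp/cpdemo/dst
ls -l /tmp/cpdemo/dst/link    # still a symlink pointing at "file"
rm -rf /tmp/cpdemo
```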

 

Configuration for boot

 

Now, we need to configure the system so that it will boot. So, edit /etc/fstab and ensure your mount points are correct. Here is mine:

 

/dev/md2				/					   ext3	defaults		1 1
/dev/md1				/boot				   ext3	defaults		1 2
none					/dev/pts				devpts  gid=5,mode=620  0 0
none					/dev/shm				tmpfs   defaults		0 0
none					/proc				   proc	defaults		0 0
none					/sys					sysfs   defaults		0 0
/dev/md3				/usr					ext3	defaults		1 2
/dev/md4				/var					ext3	defaults		1 2
/dev/md0				swap					swap	defaults		0 0
/dev/hdc				/media/cdrom			auto	pamconsole,exec,noauto,managed 0 0
/dev/fd0				/media/floppy		   auto	pamconsole,exec,noauto,managed 0 0

 

Now, we need to copy this over to the mirrored drive in /mnt/array/etc:

 

cp -dp /etc/fstab /mnt/array/etc/fstab

 

We also have to edit /boot/grub/grub.conf (sometimes menu.lst) so that it boots from the right place. If you use LILO, change lilo.conf accordingly and rerun lilo. Here is my grub.conf:

 

default=0
timeout=5
splashimage=(hd0,1)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux ES (2.6.9-34.EL)
	root (hd0,1)
	kernel /vmlinuz-2.6.9-34.EL ro root=/dev/md2
	initrd /initrd-2.6.9-34.EL.img

 

OK, so I used Red Hat for this :P but the procedure is the same for Mandriva; the exact entries will differ depending on your distro. This needs copying across as well:

 

cp -dp /boot/grub/grub.conf /mnt/array/boot/grub/grub.conf

 

Now, to finish off the configuration, we need to have a config file for the array, so create /etc/mdadm.conf:

 

DEVICE	/dev/sda*
DEVICE	/dev/sdb*
ARRAY		/dev/md0 devices=/dev/sda1,/dev/sdb1
ARRAY		/dev/md1 devices=/dev/sda2,/dev/sdb2
ARRAY		/dev/md2 devices=/dev/sda3,/dev/sdb3
ARRAY		/dev/md3 devices=/dev/sda5,/dev/sdb5
ARRAY		/dev/md4 devices=/dev/sda6,/dev/sdb6
MAILADDR	root@localhost

 

This is enough to ensure the system boots successfully. Now we have to reinstall GRUB on both disks, so:

 

grub --no-floppy
root (hd0,1)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,1)
setup (hd0)
quit

 

Please note that (hd0,1) points to /dev/sda2: GRUB counts disks and partitions from zero, so /dev/sda1 = (hd0,0), /dev/sda2 = (hd0,1), /dev/sda3 = (hd0,2), and so on. This is how GRUB finds your boot files. If you don't have a separate /boot, point it at the partition that / is on instead - in my layout, if /boot weren't separate, that would be /dev/sda3.
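Since this mapping trips people up, here is a throwaway shell helper (the function name is my own invention, not a GRUB command) that converts a first-disk partition name into GRUB legacy notation:

```shell
# Convert /dev/sdaN to GRUB legacy notation: disks and partitions
# both count from zero, so /dev/sdaN becomes (hd0,N-1).
grub_name() {
    n=${1#/dev/sda}            # strip the prefix, leaving the partition number
    echo "(hd0,$((n - 1)))"
}

grub_name /dev/sda2    # (hd0,1) - my /boot partition
grub_name /dev/sda3    # (hd0,2) - my / partition
```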

 

Finishing off

 

Now we need to finish off. Insert your first install CD, boot from it, and at the boot prompt type:

 

linux rescue

 

Just exit to the prompt when it has finished loading. We now need to delete everything on /dev/sda, recreate the partitions and add them to the arrays. This is simple enough, but first we have to activate the arrays, so create /etc/mdadm.conf (you did this already, but you're booted from the CD now!):

 

DEVICE	/dev/sda*
DEVICE	/dev/sdb*
ARRAY		/dev/md0 devices=missing,/dev/sdb1
ARRAY		/dev/md1 devices=missing,/dev/sdb2
ARRAY		/dev/md2 devices=missing,/dev/sdb3
ARRAY		/dev/md3 devices=missing,/dev/sdb5
ARRAY		/dev/md4 devices=missing,/dev/sdb6
MAILADDR	root@localhost

 

Note that I didn't put the /dev/sdaX partitions in, because they aren't configured correctly yet - they aren't true RAID partitions and haven't been added to the arrays. To activate the arrays:

 

mdadm --assemble --scan

 

This will scan the config file and activate all the arrays. If it didn't work, it's probably because the /dev/mdX device nodes don't exist, in which case create them:

 

mknod /dev/md0 b 9 0
mknod /dev/md1 b 9 1
mknod /dev/md2 b 9 2
mknod /dev/md3 b 9 3
mknod /dev/md4 b 9 4

 

Note that the minor number (the last digit) matches the array number - you can see the sequence above ;) Create whichever nodes you need, then rerun the assemble command.
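The pattern is completely regular - md devices are block devices with major number 9 and a minor number equal to the array number - so a loop can print the whole set for checking (drop the echo to create them for real):

```shell
# Print the mknod command for each md device: block device, major 9,
# minor = array number. Remove the "echo" to actually create the nodes.
for i in 0 1 2 3 4; do
    echo mknod /dev/md$i b 9 $i
done
```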

 

Now, you can mount the partitions:

 

mkdir /mnt/array
mount /dev/md2 /mnt/array
mount /dev/md1 /mnt/array/boot
mount /dev/md3 /mnt/array/usr
mount /dev/md4 /mnt/array/var

 

and check in /mnt/array that you can see all the files and directories - this is just to verify the data is all there. Now delete all the partitions on /dev/sda with fdisk:

 

fdisk /dev/sda
d
1
d
2
d
3
d
4
w

 

That is enough: partitions 5 and 6 are logical partitions inside the extended partition, so deleting extended partition 4 removes them automatically. Verify the disk is empty with:

 

fdisk -l /dev/sda

 

Now recreate the partitions, ready to add them to the arrays - this time cloning from /dev/sdb back to /dev/sda:

 

sfdisk -d /dev/sdb | sfdisk /dev/sda

 

and then verify with:

 

fdisk -l /dev/sda
fdisk -l /dev/sdb

 

and make sure everything matches. Once it does, add the /dev/sda partitions to the arrays, which so far consist only of /dev/sdb partitions:

 

mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
mdadm --add /dev/md2 /dev/sda3
mdadm --add /dev/md3 /dev/sda5
mdadm --add /dev/md4 /dev/sda6

 

They will be added and start rebuilding; check the status of the arrays with:

 

cat /proc/mdstat
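If you don't fancy re-running cat by hand, you can poll /proc/mdstat until the resync/recovery lines disappear. A rough sketch - the helper (my own name for it) takes the file as a parameter, so you can also try it on a saved copy:

```shell
# Succeeds (exit 0) while a resync or recovery is still in progress.
# Pass /proc/mdstat on a live system, or a saved copy for testing.
rebuild_running() {
    grep -q 'resync\|recovery' "$1"
}

# On the live system you might then wait like this:
#   while rebuild_running /proc/mdstat; do sleep 30; done
```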

 

Make sure the rebuild has completed before continuing; it can take a while if you have large partitions. Now we need to chroot into the new environment to finish off - I'll explain why in a second:

 

mount -t proc none /mnt/array/proc
chroot /mnt/array /bin/bash
source /etc/profile

 

Because Mandriva (and Red Hat) use an initrd, we have to generate a new one, so that the RAID arrays can be assembled at boot. This is simple enough. First, check your /boot directory for the kernel and initrd; we have to move the old initrd out of the way. Here is mine as an example:

 

total 3318
-rw-r--r--  1 root root   49513 Feb 24  2006 config-2.6.9-34.EL
drwxr-xr-x  2 root root	1024 Oct 27 15:05 grub
-rw-r--r--  1 root root  529911 Oct 27  2006 initrd-2.6.9-34.EL.img
-rw-r--r--  1 root root  528337 Oct 27 15:02 initrd-2.6.9-34.EL.img.old
drwx------  2 root root   12288 Oct 27  2006 lost+found
-rw-r--r--  1 root root   23108 Aug  3  2005 message
-rw-r--r--  1 root root   21282 Aug  3  2005 message.ja
-rw-r--r--  1 root root  733742 Feb 24  2006 System.map-2.6.9-34.EL
-rw-r--r--  1 root root 1473752 Feb 24  2006 vmlinuz-2.6.9-34.EL

 

normally in Mandriva, there is a symlink called initrd.img which points to initrd-version.img. Here is an example from my Mandriva 2007 machine:

 

-rw-r--r-- 1 root root  384450 Oct  6 13:36 initrd-2.6.17-5mdv.img
lrwxrwxrwx 1 root root	  22 Oct  6 13:36 initrd.img -> initrd-2.6.17-5mdv.img

 

So for Mandriva, you need to move the full initrd-2.6.17-5mdv.img file aside (not just the symlink). Here is how I did mine on the Red Hat box:

 

mv /boot/initrd-2.6.9-34.EL.img /boot/initrd-2.6.9-34.EL.img.old

 

Substitute the filename as appropriate, based on your grub.conf from before. Now, to make the initrd (run this from inside /boot):

 

mkinitrd initrd-2.6.9-34.EL.img 2.6.9-34.EL

 

the first part is the initrd filename to create, the second is the kernel version you're using. As an example for Mandriva 2007:

 

mkinitrd initrd-2.6.17-5mdv.img 2.6.17-5mdv

 

would do the trick. After this has been done, you can safely boot the system. If your system doesn't use an initrd, you can skip this step entirely.

 

Reboot

 

Now reboot:

 

umount /mnt/array/proc /mnt/array/boot /mnt/array/usr /mnt/array/var /mnt/array
reboot

 

If reboot doesn't work, type "halt" instead - some rescue CDs don't have reboot - or use whatever method is appropriate. Your system should now boot perfectly fine. :beer:
