remotely grow and shrink LVM partitions (Mandriva)


I just did something I thought would be much harder to do, so I'm sharing it here, so that you know:

1/ it can be done,

2/ it is not that hard :)


I wanted to resize the server's LVM partitions, those being /, /home, /var, /usr, /tmp, and /data, in this manner: all partitions had to shrink as much as reasonable, so that /data could grow as much as possible.


— I only have remote SSH access, so all must be done on the live system (online).

— All partitions are initially ReiserFS, but I'm not against trying JFS or Ext3 (XFS is heavier on the CPU, or so says the Internet… and the CPU is an old PIII).


Initial situation:

/ and /home much too big (1GB available, where disks are 8GB in size)

/home barely used (this server is now mostly for anonymous/ro SMB access)

/usr, /var and /tmp slightly over-sized (especially /tmp)

/data emptied from previous usages of the server, ready for its new mission.


As it happens, let me say straight away that my biggest surprise was that MCC (diskdrake) manages LVM transparently, and does it well!


1 - Shrink /

This is the only thing I couldn't find a way to do. ReiserFS can be grown online, but not shrunk, and / cannot be unmounted…


2 - Remove /home

Problem: I remotely connect with SSH, so /home is used.

Solution: Just for once, allow remote login for the root user.

I did so (/etc/ssh/sshd_config), restarted sshd, logged out, and back in using root.
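In case it helps, the change amounts to something like the following (PermitRootLogin is the standard OpenSSH directive; the sed pattern and the "service" invocation are my assumptions about a typical Mandriva setup):

```shell
# Temporarily allow root to log in over SSH (revert this when done!)
# Equivalent to manually setting "PermitRootLogin yes" in sshd_config.
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
service sshd restart
```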

Then it's all easy: move /home/* to /, then (with diskdrake) unmount and destroy /home, and finally move /home's former content back from / to /home/ (now a plain directory, not a mount point).
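For reference, a command-line equivalent of those diskdrake steps might look like this (a hedged sketch: the volume group and LV names "vg0"/"home" are placeholders, not from my setup):

```shell
mkdir /home.bak && cp -a /home/. /home.bak/   # stash the contents on /
umount /home                                  # works now that no /home user is logged in
lvremove /dev/vg0/home                        # destroy the logical volume
cp -a /home.bak/. /home/ && rm -rf /home.bak  # /home is now a plain directory on /
# don't forget to drop the /home line from /etc/fstab
```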

At that point, don't revert the change in /etc/ssh/sshd_config allowing root to log in just yet.


3 - Shrink /var and /tmp

These were the easiest.

First look for processes that use the partitions with

lsof | grep /var
lsof | grep /tmp

Then kill/stop the processes/services that are reported (including sshd if needed; don't forget to restart it after the resize is done).

All else is done with diskdrake: unmount both partitions, shrink them, mount them again.
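Roughly, what diskdrake does under the hood is the following (hedged sketch; the device names and target sizes are illustrative, not mine):

```shell
umount /var
resize_reiserfs -s 512M /dev/vg0/var   # shrink the filesystem first...
lvreduce -L 512M /dev/vg0/var          # ...then the LV, never the other way round
mount /var
```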


4 - Shrink /usr

This was the trickiest bit. /usr is used by diskdrake, and used by localisation, and numerous other running services (among which is sshd), not forgetting lsof itself.

Still, it can be shrunk :)


You have to login remotely as root (see the /home part) so as to limit the number of processes using the /usr partition.


step A - First, being cautious, I stopped all I could find that was using /usr. To limit the output of lsof, I forced the shell and the commands I ran to a locale-less mode:

LC_ALL=C exec bash
LC_ALL=C lsof | LC_ALL=C grep /usr | LC_ALL=C grep -v '^lsof'

As before, kill/stop the processes/services that are reported (including the sshd service if needed; I'll reboot soon anyway; just don't log out before the reboot is done).

The only thing I did not remove (I would have been locked out of the system) was the sshd process allowing me to work remotely.


step B - Next, I copied the whole of /usr to /data/usr (the only place big enough for that much content):

cd /
tar -cf - --one-file-system --force-local usr | (cd data && tar -xvf - --numeric-owner --atime-preserve --preserve --same-owner --force-local)

Some options may be superfluous, but this is the command I use when I want as exact a copy as possible.


step C - Then, I lazily unmounted /usr and quickly replaced it with a working equivalent:

cd /
LC_ALL=C umount -l /usr
LC_ALL=C mv usr usr.old
LC_ALL=C ln -s data/usr usr

I then edited /etc/fstab so that the /usr mount point becomes /usr.old instead, and rebooted (the "reboot" command).
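The fstab edit amounts to changing the mount point field on the /usr line (the device name here is illustrative):

```
before:  /dev/vg0/usr  /usr      reiserfs  defaults  1 2
after:   /dev/vg0/usr  /usr.old  reiserfs  defaults  1 2
```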


step D - With root login still allowed, I logged in as root, and re-did step A.

Then I ran "diskdrake" to unmount /usr.old and shrink it. In my case, since I had spotted a corrupted directory at the whole-copy-with-tar stage, I also changed the FS type to Ext3 (just to see if it allowed online shrinking: it does not), and formatted with a disk check. /usr.old must then be mounted again.

Back at the command-line, I put the data back in the partition:

cd /data/usr
tar -cf - --one-file-system --force-local . | (cd /usr.old/ && tar -xvf - --numeric-owner --atime-preserve --preserve --same-owner --force-local)

Finally, I edited /etc/fstab so that /usr.old becomes /usr again. And I ran these commands:

cd /
umount usr.old
rm -f usr
mv usr.old usr


5 - Grow /data

That's the easiest part. If you got this far, no more explanations are needed. It can even be done online with ReiserFS, in theory.

I did not do it online, though, because I took the opportunity to try and format this partition with JFS, just to see if it allowed online shrinking: it does not. diskdrake doesn't even allow offline resizing with this filesystem! I switched back to ReiserFS.
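For completeness, the command-line version of that grow would be something like this (hedged sketch; the LV path is a placeholder, and resize_reiserfs with no size argument grows the filesystem to fill the device):

```shell
lvextend -l +100%FREE /dev/vg0/data   # give /data all remaining free extents
resize_reiserfs /dev/vg0/data         # grow the FS to match; growing works online
```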


At this stage, two last things can be done:

— revert the change on /etc/ssh/sshd_config so that remote root login isn't allowed anymore;

— run "rm -rf /data/usr", unless you formatted the partition.



That's all. I hope this will help others.




nice write-up, you are quite brave to attempt that remotely... ;)


With regards to JFS, you are right it can't be shrunk, but you can grow it even online:


mount -o remount,resize /mountpoint


this would resize it to the maximum available space on the partition.

(you can also specify resize=<nr_of_blocks> to grow it to a specific size)


I started using JFS last year and I'm very happy with it, it's really true that it's very fast and at the same time very light on cpu/ram resources.
