Posts posted by banjo
-
-
Thanks for the pointers.
Banjo
(_)=='=~
-
Anybody know where I can find an introduction to
what is required to install and use MySQL on Mandy 9.1?
I'm not ready for the details yet, and that is what
my searches are finding. I am looking for an overview.
Thanks in advance
Banjo
(_)=='=~
-
I remember back in the 1980's when some folks came
out with the GEM UI. It ran on the 68000 processors
(specifically on the Atari ST) and the Intel chips, and it
looked just like the interface for the Baby Mac....... but
in color.
Apple sued 'em and put 'em out of business.
And Apple had stolen the entire Lisa and Baby Mac
interface from Xerox PARC..................
That didn't stop 'em. They won anyway.
There ain't no justice.
Linux rocks!
Banjo
(_)=='=~
-
Thanks, looks like just what I was looking for.
Banjo
(_)=='=~
-
I am curious about how the journal works in a
journalled file system such as ext3. I am interested
in the details, not just a summary of the functions.
Anybody know of a document or tutorial on the
subject?
Thanks in advance
Banjo
(_)=='=~
-
I use Quanta to maintain my web site.
Quanta maintains a timestamp to tell it which files to upload
when you ftp using "Upload Project". It uploads only files that
have changed since the last upload. Sometimes it would fail
to register that files had already been uploaded and then
upload them again at the next session.
I found out that if I close all of the files that I had open before
I quit the session then it works fine. So, in case anyone else
is having that problem, just try closing all open files before
quitting quanta. YMMV
Quanta is a way kewl tewl for keeping a web site.
Linux rocks!
Banjo
(_)=='=~
-
I think there is a market there.
If somebody in the know did the initial setup, Joe Sixpack
would have a much easier time with Linux because of its
stability in comparison to fnWindoze.
The ease of use of fnWindoze is largely an illusion. I have
36 years of experience with computers, and I am often
puzzled by problems encountered using fnWindoze. With
Linux, we don't run into those problems (viruses, blue screens,
frozen pointer, the magical disappearing floppy disk drive... etc.)
My whole family uses Linux. Sure, I had to set it up, but now
everybody just uses it.
Just my $0.02
Banjo
(_)=='=~
-
Yes, but a forum like this isn't the sort of place a business is likely to turn.
I use Java here at work to create the GUI on our product.
I get all my support from the Sun forum, which is very much
like this one........
But then, that is just me.... and I am a techie... not a suit. :lol:
Banjo
(_)=='=~
-
Well, this board and the other open source doc groups have
provided all the support I have needed so far, which has not
been much. My Mandy 9.1 runs very well, thank you.
Banjo
(_)=='=~
-
one word: "SUPPORT"
I guess I just assumed that I was on my own with Linux.
I did not expect the distro packager to provide any
hand-holding once I installed Mandy.
Does Red Hat actually provide continuing support for their product?
Banjo
(_)=='=~
-
I would agree with Chris that fnWindoze is easier to
set up and get going. In fact, you can just buy the computer
with the OS already installed and just use it. So, with
Linux you do need somebody in the house who can deal
with Linux issues.
However, once the Linux has been set up, it is WAY
easier for everybody to use because it does not have
all the problems that fnWindoze has.
Some of my family members have no clue about how the
computer got there or how it hangs together. But now that
it works, *everybody* prefers the Linux to the fnWindoze
because it *just works* without all of the hassles.
Why should people who just want to email and surf be
required to have a PhD in virus removal and rebooting
techniques and recovery from frozen screens?
With Linux they don't have to. It just works.
Linux rocks!
Banjo
(_)=='=~
-
Desire,
Thanks for the information on the -c flag.
I have not been back to this thread for a while, so
I hope this question isn't too stale.
How much protection is provided by the journaled
file system? For example, if I do not use the -c option,
and then I run into a bad block on the disk, will I gain some
measure of protection from the journal?
Anybody have any pointers to some docs about
exactly what the journal does and how it works?
Thanks in advance.
Banjo
(_)=='=~
-
Oh, yeah, and I forgot to add.....
My Win98SE is running on a Dell, and the hardware
IS crap.
I built my Linux computer myself from quality parts,
and it runs fine.
I have heard that often fnWindoze gets a bad rap from
running on flakey hardware. So it is entirely possible that
other folks are having a better time of it with Win98 than
I am.
Banjo
(_)=='=~
-
I got tired of all the bluescreens and the
viruses and having to shell out extra $$$ every
year to protect the OS from all the nasties out there.
My Linux computer has been running now for a year,
and the whole family uses it, and it never crashes.
It just feels really good to sit down at the computer and
concentrate on what I am doing instead of worrying
about covering my @$$ with respect to keeping the OS
running.
The last time (and I do mean the LAST time) I let
Win98 defrag the disk, it took 8 hours...... :blink:
I don't remember having to do that on Linux.
Banjo
(_)=='=~
-
I don't get it.
So, you buy one Mandy and put it on 10 of your
desktops. Why would I want to buy 10 Red Hats
for the same 10 desktops?
:huh:
Am I missing something?
Banjo
(_)=='=~
-
Ya, well it isn't just Win98. XP is just as bad.
I know some folks who just got XP and are furious
at all the down time from Gates' invasions over
the internet. "Um, excuse me for a couple of hours
while I download megabytes of security patches without
asking"
I still think that Linux is a better deal.
:D
Banjo
(_)=='=~
-
I installed my Mandy 9.1 onto a virgin disk.
I just followed the directions and chose lots of the
defaults. It took one half hour.
After a couple of hours, I had all four user accounts
up and configured to surf the web and do email.
All the apps I really needed were installed in the
system install. I have found that the Linux apps in
many cases are *superior* to what I was using on
the fnWindoze box.
Since May of last year, we have had zero (0) system
crashes on the Linux computer. We have about three
crashes daily on the Win98 system.
fnWindoze cannot compete with that ease and convenience,
and the fnWindoze end product is junk.
It is amazing to me that the new Linux distros are not
making more headway on the home desktop.
Banjo
(_)=='=~
-
<rant>
It is a myth that fnWindoze is easier to deal with than
Linux. For a year now we have had a Mandy 9.1 box sitting
right next to a Dell running Win98SE.
With four family members of varying levels of computer skills,
we all use the two on and off. By far the most screams come
from folks on the Dell. Since the Linux just *works*, it
is far easier to deal with than the fnWindoze, which freezes
and crashes and corrupts things randomly all the time.
I don't understand the need for people to demand their
right to remain ignorant of the system they are using, and
still be able to use it trouble-free. That makes no sense.
A little bit of up-front study will not hurt anyone, and that is all
it really takes to set up a Linux computer to run the
normal email and surfing apps.
The fnWindoze nightmare goes on forever.
</rant>
Linux rocks!
Banjo
(_)=='=~
-
Oh, thank you thank you thank you!
:D
I do have cdrdao installed, but in Setup CD Devices the driver was set
to Auto, and there was no indication that it was clickable.
I had clicked on "Cdrdao driver:" and it did nothing, so I could
not figure out how to select a different driver.
Finally I clicked on "Auto" and the dropdown menu appeared.
Doh!
Selecting generic-mmc did the trick. I wrote the disk
and it works in the player.
Thanks again for the help. This board is great.
Linux rocks!
Banjo
(_)=='=~
-
When I try to burn an audio CD using K3B, I
get driver errors. This happens when I try to burn
either .wav or .ogg files.
Data CD's work just fine for both CD-RW and CD-R
Here is the error I get:
No cdrdao driver found. Please select one manually in the device settings.
For most current drives this would be 'generic-mmc'.
... etc........
My drive is a Plextor, and other than this problem
it seems to work fine.
This must be a setup problem.
I found the following advice on Google:
Cdrdao sometimes fails to detect the device type. In this case set the type to "generic-mmc" in the K3B settings.
I cannot find any such setting in K3B.
Can anybody help?
Thanks in advance.
Banjo
(_)=='=~
-
I hope that it is fast enough to be useful.
It is, after all, written in Java, which can be a
bit slow on some apps.
If it is fast enough, I think it looks like fun.
Banjo
(_)=='=~
-
I run all of my computers through a UPS.
I use them to protect against the frequent brownouts and
momentary dips in our local power. I have APC units, but
Belkin is a good brand as well.
The UPS has a sealed lead/acid battery in it which is
constantly charged whenever the UPS is plugged into the
AC power.... even if the UPS is turned off.
You should leave the UPS plugged in to ensure that the
batteries remain charged. If you want to remove AC power
from the computer at night, turn off the UPS. If you do unplug
it, make sure to turn it off first or it will assume that it just
had a power failure and yell at you.
The UPS also has surge protection circuitry in it to
protect your equipment against high voltage bursts from
EMF hits during electrical storms. So, basically the UPS has
varistors and suchlike for the high voltage bursts and the
battery backup for low voltage dips.
When the UPS senses that the power has failed, it uses
the battery to power an inverter to create AC power for the
computer. Most low-cost UPSes generate square-wave
AC rather than a sine wave. This is evidently fine for the
switching power supplies in the computers; I don't claim
to understand the hardware details, but that is what I have
been told by people who do. Anyway, don't plug your
toaster oven into it.
I usually turn off the computers and the UPS (but not unplug it)
during bad electrical storms..... just to be extra safe.
Banjo
(_)=='=~
-
Thanks for the extra tips and pointers.
The reason that I booted from a rescue disk was to avoid
trying to copy files from the old disk as they were
being changed. I guess I have spent too many years with
ill-behaved OS's (like fnWindoze) which are *always* hitting
the disk. I guess it will work OK with Linux.
I kind of figured that the 8 Gig was not an issue, but then
I didn't know how to confirm that on my particular BIOS.
Putting it all into one partition would have been simpler,
but then I would not have had the joy of my long, strange
trip.
The funniest thing is that after I did all of this, my "disk
noise" is still there.......... so it is just a fan.... after all.....
LOL
Oh well. I needed a bigger / anyway.
Maybe I will go figure out how to hook up that
ol' 40 Gig as /home2.... or I could go make a huge
/ on that ol' disk and put /home on the new one......
The possibilities are endless..........
Thanks again for the pointer to the HOWTO
I will now go and study.
Banjo
(_)=='=~
-
Shortly after I installed Mandrake 9.1 my hard disk began
making a squealing noise as it spun up. This made me
skeptical about the longevity of the drive, so I bought
a new, larger disk to replace it.
Since I was also unhappy with the size of the root directory and
the way it was partitioned by the Mandrake install, (it was almost
full already), I decided to partition the new disk differently.
I was moving from a 40 Gig Western Digital to a 120 Gig Maxtor.
Cloning the disk was not an option because of the change
in size and partition table, so I did some research
and decided to copy the old disk to the new one manually.
This is the way I copied the disk. I wrote this in the
hope that someone may find it useful. I used generic *nix
tools to do this, so it should work on just about any
distro you have.
How to copy the Linux disk to a new, larger disk.
***************************************
Here are the major steps I performed to do this task.
1. Obtain The Required Materials
2. Install The New Disk in the Computer
3. Partition The Disk
4. Make File Systems On The New Disk
5. Copy Files From Old Disk To New Disk
6. Swap The Disks
7. Fix LILO to make it bootable
8. Boot the new disk
***************************************
Obtain Required Materials
***************************************
I bought a copy of Knoppix from
http://www.edmunds-enterprises.com/ to use as
a rescue Linux.
You can use any rescue disk you want to as long as
it has utilities like cfdisk, cp, mke2fs, etc.
I chose Knoppix because of its ease of use and
reputation for reliability.
I bought a new hard disk.
I got a Maxtor 120 Gig drive. They just keep
getting bigger. 97 bucks OEM.
***************************************
Install The New Disk
***************************************
I installed the new drive as the slave on IDE0. The
goal is to copy the files from the old disk on
/dev/hda to the new disk on /dev/hdb.
I booted the computer to Setup and confirmed that the BIOS
had seen the new disk. Then, I set the boot priority
to boot from the ATAPI drive so that I could boot
Knoppix.
I booted to Knoppix. This left both /dev/hda (the original
disk) and /dev/hdb (the new disk) unmounted.
***************************************
Partition The New Disk
***************************************
The old partition setup was
part1 / 5.8 Gig
part2 extended
part5 swap 0.5 Gig
part6 /home 31.0 Gig
One problem was that the root partition had
all of root on it, and it was already 66% full.
So I decided to partition the new, larger disk
differently. Here is the new scheme:
part1 / 6 Gig
part2 extended
part5 swap 0.5 Gig
part6 /home 88.5 Gig
part7 /usr 25 Gig
On the old disk, /usr contains most of the files in root.
So I moved it to its own partition and made it
bigger. I did that instead of simply making / bigger
to avoid any problems with the 8 Gig limit on BIOS
disk access at boot time.
The new disk showed up as /dev/hdb and empty.
I used cfdisk to partition /dev/hdb. Cfdisk is a
curses based version of fdisk. It works great.
*************** NOTE *********************
Make sure that you are partitioning the correct disk.
You do not want to be changing the partitions on the
original disk.
*************** NOTE *********************
I ran cfdisk against the new, virgin disk and got:
> cfdisk /dev/hdb
No partition table or unknown signature on partition table
Do you wish to start with a zero table [y/N] ?
I answered "yes" and I was presented with an empty partition
table and a menu at the bottom of the screen. Here is an example of one
of the menus.
[Bootable] [ Delete ] [ Help ] [Maximize] [ Print ] [ Quit ] [ Type ] [ Units ] [ Write ]
Use the arrow keys to highlight the desired function and type
the Enter key to perform the function. You will be prompted
for the appropriate types and sizes. Make sure to toggle
the Boot flag ON for the primary partition, hdb1.
I created one [Primary] partition and three [Logical] partitions.
Cfdisk named the partitions for me, and I did not have to create
the extended partition, hdb2, which contains the three logical
partitions.
I had to create the partitions in the proper order to get them numbered
with the appropriate numbers. I wanted the numbers to match the original
numbers to minimize changes to /etc/fstab. In this case, the only
change will be the addition of hdb7 for the /usr directory. So, I
created them in the following order:
/
swap
/home
/usr
Here is the result of my efforts as displayed by cfdisk
cfdisk 2.11x

                      Disk Drive: /dev/hdb
               Size: 122942324736 bytes, 122.9 GB
     Heads: 255   Sectors per Track: 63   Cylinders: 14946

  Name   Flags   Part Type   FS Type      [Label]   Size (MB)
 -------------------------------------------------------------
  hdb1   Boot    Primary     Linux                    5996.23
  hdb5           Logical     Linux swap                501.75
  hdb6           Logical     Linux                    90001.02
  hdb7           Logical     Linux                    26436.05
I then selected [Write] from the menu and answered "yes" when
asked if I really wanted to write the partition table.
After the partition table was written to the disk
the new partitions showed up in /dev.
/dev> ls hdb*
hdb@  hdb1@  hdb2@  hdb5@  hdb6@  hdb7@
Notice hdb2, which is the extended partition created to
hold the logical partitions, 5, 6, and 7.
Here is more detailed information.
~> cat /proc/partitions
major minor   #blocks  name                               rio rmerge rsect ruse wio wmerge wsect wuse running     use     aveq
   3    64  120060864  ide/host0/bus0/target1/lun0/disc     6     26    48   80   0      0     0    0      -2 4304870 34340012
   3    65    5855661  ide/host0/bus0/target1/lun0/part1    0      0     0    0   0      0     0    0       0       0        0
   3    66          1  ide/host0/bus0/target1/lun0/part2    0      0     0    0   0      0     0    0       0       0        0
   3    69     489951  ide/host0/bus0/target1/lun0/part5    0      0     0    0   0      0     0    0       0       0        0
   3    70   87891583  ide/host0/bus0/target1/lun0/part6    0      0     0    0   0      0     0    0       0       0        0
   3    71   25816423  ide/host0/bus0/target1/lun0/part7    0      0     0    0   0      0     0    0       0       0        0
***************************************
Make File Systems On The New Disk
***************************************
The next step is to make file systems on the
new disk partitions.
********************* NOTE *********************
Be EXTREMELY careful to make the new file systems
on the new disk. If you inadvertently make new file
systems on the original disk, all of the data on the
disk will be lost, and it doesn't ask, and there is no
undo! If you are nervous about this, install *only*
the new disk for this part and then shut down and
install the original disk for the copies.
********************* NOTE *********************
The new disk is still installed as /dev/hdb, so
we must make the file systems there.
Use mke2fs on each partition
Options
-j creates the ext3 journal
-c checks for bad blocks
-L <name> labels the file system
/dev/hdb* is the target partition
> mke2fs -j -c -L / /dev/hdb1
> mke2fs -j -c -L /home /dev/hdb6
> mke2fs -j -c -L /usr /dev/hdb7
If you use the -c flag, this will take a long
long time. I don't know exactly what it does or
how important it is since I am using a journaled
version of the file system. Just be prepared to
spend an hour or so doing this if you are checking
for bad blocks.
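As an aside, what -c actually does is run the badblocks(8) read-only surface scan across the partition before the file system is created, which is why it takes so long. You can run the same scan by hand to see it in action. Here is a sketch against a throwaway image file so no real disk is at risk (the /tmp path is just an example; on a real partition you would point it at /dev/hdb1 instead):

```shell
# Make a 1 MB scratch image to scan (any path will do).
dd if=/dev/zero of=/tmp/scratch.img bs=1024 count=1024

# Read-only scan, the same test that mke2fs -c performs.
# -s shows progress, -v prints a summary; any bad block
# numbers found are written to stdout (here: none).
badblocks -sv /tmp/scratch.img
```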
***************************************
Copy Files From Old Disk To New Disk
***************************************
Once the new file systems are in place, you can copy
all of the files from the original disk onto the new
one.
I did the file copy using Knoppix so that the
old disk will be mounted read-only. This should
minimize any accidents causing files being written
to the old disk by mistake as I am copying them over.
make mountpoints for new and old root directories:
> mkdir /mnt/newroot
> mkdir /mnt/oldroot
These mount points exist only in the RAM disk
of Knoppix, so they must be recreated each
time you boot Knoppix.
Mount the root partitions:
> mount -w /dev/hdb1 /mnt/newroot
> mount -r /dev/hda1 /mnt/oldroot
Some of the directories do not need to be copied,
so I just made those directories on the new disk, i.e.
/home which will be on a separate partition
/proc which contains only pseudo files
/tmp which should be empty
/usr which will be on a separate partition
> mkdir /mnt/newroot/home
> mkdir /mnt/newroot/proc
> mkdir /mnt/newroot/tmp
> mkdir /mnt/newroot/usr
Because I am moving /usr to its own partition on
the new disk, I did not want to copy it from
the old disk onto the new one. I wanted to copy all
of the directories other than /usr. To do that,
I copied them one at a time from the old to the new.
I could not figure out how to tell cp to skip
/usr. There must be a way. But I did them all one
by one anyway. There are not that many. Here is
an example of the commands I used:
> cp -pax /mnt/oldroot/bin /mnt/newroot/bin
Options:
p means preserve the ownership, timestamps, etc.
a means archive, same as dpR
d means do not dereference links
R means recursive copy
x means stay on this one file system
That command both makes the new directory and also
copies all of the contents over to the new one.
I copied all of the relevant root directories to
the new disk:
> cp -pax /mnt/oldroot/boot /mnt/newroot/boot
> cp -pax /mnt/oldroot/dev /mnt/newroot/dev
> cp -pax /mnt/oldroot/etc /mnt/newroot/etc
> cp -pax /mnt/oldroot/initrd /mnt/newroot/initrd
> cp -pax /mnt/oldroot/lib /mnt/newroot/lib
> cp -pax /mnt/oldroot/mnt /mnt/newroot/mnt
> cp -pax /mnt/oldroot/opt /mnt/newroot/opt
> cp -pax /mnt/oldroot/root /mnt/newroot/root
> cp -pax /mnt/oldroot/sbin /mnt/newroot/sbin
> cp -pax /mnt/oldroot/var /mnt/newroot/var
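There is a way to make the shell do the skipping for you: loop over the top-level directories and use a case statement to exclude the ones that get their own partitions. Here is a sketch; it runs against a toy tree standing in for /mnt/oldroot and /mnt/newroot so it can be tried safely (substitute the real mount points on the rescue system):

```shell
# Toy stand-ins for /mnt/oldroot and /mnt/newroot.
rm -rf /tmp/demo_oldroot /tmp/demo_newroot
SRC=/tmp/demo_oldroot
DST=/tmp/demo_newroot
mkdir -p $SRC/bin $SRC/etc $SRC/usr $SRC/home $DST
echo "data" > $SRC/bin/prog
echo "data" > $SRC/usr/big-file

cd $SRC
for d in */ ; do
    d=${d%/}                    # strip the trailing slash
    case $d in
        home|proc|tmp|usr)      # own partitions: empty mount point only
            mkdir -p $DST/$d
            ;;
        *)                      # everything else: full recursive copy
            cp -pax $d $DST/$d
            ;;
    esac
done
```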
Once the root directories have been created and copied,
the mount points for /home and /usr exist on the new disk.
So mount and copy the other two partitions.
Knoppix mounted /mnt/oldroot/home automatically
when I mounted /mnt/oldroot. I don't know why.
If it does not do that, just mount it yourself,
readonly, like this:
> mount -r /dev/hda6 /mnt/oldroot/home
Mount the new file systems and copy
> mount -w /dev/hdb6 /mnt/newroot/home
> cp -pax /mnt/oldroot/home /mnt/newroot
> mount -w /dev/hdb7 /mnt/newroot/usr
> cp -pax /mnt/oldroot/usr /mnt/newroot
I left the usr and home names off the destination
path because if I put them there, cp would create
new directories by that name in the existing ones,
and I would end up with /mnt/newroot/home/home and
/mnt/newroot/usr/usr.
The above commands took quite a while to
complete. More than half an hour each.
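This destination quirk of cp is easy to demonstrate on a scratch directory: when the destination directory already exists, cp puts the source *inside* it rather than merging onto it. A sketch (all paths are throwaway):

```shell
rm -rf /tmp/cpdemo
mkdir -p /tmp/cpdemo/oldroot/home /tmp/cpdemo/newroot/home
echo x > /tmp/cpdemo/oldroot/home/file

# Wrong: the destination directory exists, so cp nests a
# second "home" inside it.
cp -pax /tmp/cpdemo/oldroot/home /tmp/cpdemo/newroot/home
ls /tmp/cpdemo/newroot/home     # -> home  (nested!)

# Right: name only the parent; cp copies into the existing
# "home" mount point.
rm -rf /tmp/cpdemo/newroot/home
mkdir /tmp/cpdemo/newroot/home
cp -pax /tmp/cpdemo/oldroot/home /tmp/cpdemo/newroot
ls /tmp/cpdemo/newroot/home     # -> file
```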
Since I have changed the way the / directory is
partitioned, I needed to modify the /etc/fstab
to mount the new partition.
edit /mnt/newroot/etc/fstab and make it mount the
new /usr partition.
Here is the current fstab:
/dev/hda1 / ext3 defaults 1 1
none /dev/pts devpts mode=0620 0 0
/dev/hda6 /home ext3 defaults 1 2
none /mnt/cdrom supermount dev=/dev/scd0,fs=auto,ro,--,iocharset=iso8859-1,codepage=850,umask=0 0 0
none /mnt/floppy supermount dev=/dev/fd0,fs=auto,--,iocharset=iso8859-1,sync,codepage=850,umask=0 0 0
none /proc proc defaults 0 0
/dev/hda5 swap swap defaults 0 0
/dev/sda1 /mnt/flasha vfat noauto,user 0 0
/dev/sdb1 /mnt/flashb vfat noauto,user 0 0
I needed to add:
/dev/hda7 /usr ext3 defaults 1 2
Notice that this mounts /dev/hda7, not /dev/hdb7.
Once I make the new drive master, it will become
/dev/hda. Here are the meanings of the options.
defaults adds the following options
rw, suid, dev, exec, auto, nouser, and async.
1 means that dump(8) should dump this file system
2 means that fsck(8) checks the file system on the second pass
************** IMPORTANT **********************
Unmount all of the disks to flush out the files.
************************************************
> umount /mnt/oldroot/home
> umount /mnt/oldroot
> umount /mnt/newroot/usr
> umount /mnt/newroot/home
> umount /mnt/newroot
Shutdown Knoppix and turn off the power.
***************************************
Swap The Disks
***************************************
Unplug the IDE cable and power plugs from both disks.
Jumper the new disk to be IDE master.
Plug the IDE cable and power plug into only the new disk.
************** IMPORTANT **********************
By swapping the disks before fixing the MBR
with lilo, you won't have to edit lilo.conf
to get the MBR installed onto the correct disk.
Apparently the boot=/dev/hda in /etc/lilo.conf
tells lilo where to *store* the new MBR as well
as where to get it at boot time. If you leave
lilo.conf the way it was on the old disk, lilo
puts the new MBR onto /dev/hda, which must be the
new disk if you want it to boot.
************************************************
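For reference, the part of a Mandrake-era /etc/lilo.conf that matters here looks something like the fragment below (the kernel file name and labels are just examples and will vary on your system). The boot= line is the one that decides which disk receives the new MBR:

```
# Fragment of /etc/lilo.conf -- names are examples.
# boot= names the disk that receives the MBR when lilo runs.
boot=/dev/hda
map=/boot/map
default=linux

# Kernel file name varies with the distro and version.
image=/boot/vmlinuz
        label=linux
        root=/dev/hda1
        read-only
```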
***************************************
Fix LILO
***************************************
Boot the computer using Knoppix.
The new disk is now recognized by the BIOS
as /dev/hda.
> mkdir /mnt/newroot
> mount -w /dev/hda1 /mnt/newroot
> mount -w /dev/hda6 /mnt/newroot/home
> mount -w /dev/hda7 /mnt/newroot/usr
> cd /mnt/newroot
> sbin/lilo -v -v -r /mnt/newroot
The options mean:
-v make this verbose
-v make this even more verbose
-r chroot to the /mnt/newroot directory before
running lilo
I used the version of lilo on the new disk
(relative addressing... no / before sbin)
> umount /mnt/newroot/usr
> umount /mnt/newroot/home
> umount /mnt/newroot
Shutdown Knoppix.
Fix the BIOS to boot first from the primary
IDE disk.
Boot the computer.
The computer should come up with a copy of the
system you had previously, but on the new disk.
***************************************
What really happened.
***************************************
Of course, this project did not go as cleanly as
I describe it above. In fact, I did the thing three times
before I got it right.
The first time I tried, I fixed LILO with the new disk
still installed as /dev/hdb, thinking that the -r /mnt/newroot
would cause the lilo compiler to install the MBR onto /dev/hdb.
Not so. In fact, it went onto /dev/hda. Then,
when I tried to boot the new disk, my computer went through
the BIOS POST and then........... nothing........ the
black screen of death! Of course what had happened was
that there was no MBR on the new disk at all. It had been
written to the old disk.
I ran out of time and reinstalled the old disk
(which still managed to boot somehow). Days later I
figured out what had gone wrong, so I hooked up the new
disk as /dev/hda and fixed LILO. After that, it booted
fine.
However, we had added files and changed files
on the old disk. The new disk was no longer an accurate
copy. So, to capture the changes, I decided to
do the copies over again.
I hooked up both disks and copied the /bin directory
again. But then when I did a du on the new /bin it was
twice the size of the old one. I tried another directory
using the following copy command:
> cp -ax --backup=none /mnt/oldroot/var /mnt/newroot/var
to make sure that no backup copies were being made
on the new disk.
That also resulted in a var directory that was twice the
size of the original. Obviously, the original files I
had copied before and the new copies were both on the
new disk. (In hindsight, this was probably not the journal's
doing: when the destination directory already exists, cp
copies the source *into* it, producing /var/var... the same
pitfall as the /home/home one above.) I did not want to
leave it that way. I wanted to start the new disk with a
copy that was as close as possible to the original one.
I decided to start over again by making new file systems
on the old disk and doing the copies again. A couple
of weeks went by before I got around to it.
I booted to Knoppix again with both drives installed,
and did the command
> mke2fs -j -c -L / /dev/hdb1
I watched this cook for over 20 minutes and it still was not done.
Knoppix was hung up, and there was no cursor or mouse
action. I could not ctrl-c out of the process. This did not
happen when I made the file systems on the empty partitions.
So I reset the computer.
I got around the problem by leaving out the check for
bad blocks when I made the new file systems.
I made the file systems using:
> mke2fs -j -L / /dev/hdb1
> mke2fs -j -L /home /dev/hdb6
> mke2fs -j -L /usr /dev/hdb7
This took only a few minutes, and it did not hang the
computer. I do not know what effect this will have on
the future of the file systems, but I am typing this onto
the new disk, and it seems to work OK.
After making the file systems again, I reaccomplished
all of the copies in the same way I did it before.
Then, I shut down the computer and swapped the disks.
I fixed LILO and booted from the disk.
LILO worked fine. Linux came up, but with lots of
errors, which went by too fast for me to tell what
they were. Then, it dropped me into a console login
instead of my normal KDE startup.
After a few minutes of cogitation, I realized that
I had not fixed /etc/fstab this time. This made /usr
just an empty directory. So, I booted into Knoppix
again and added the mount of the new /usr partition.
I rebooted the computer from the new disk, and it ran.
That's the real story. If you follow the instructions
in the first half of this saga, and you leave
nothing out, and you make no mistakes, you should end
up with a bootable disk in two or three hours.
If you do it the way I did it, complete with brain
cramps and poking around in the dark, it will take
a few weeks.
I hope this writeup is not too long. I wanted to include all
the details so that it can provide some answers for
those who have not yet been through this. I learned a
lot about what is on my disk and how to maintain it,
and that is the point of running Linux instead of
fnWindows. I own it; it does not own me.
Happy computing to all.
Linux rocks!
Banjo
(_)=='=~
Foxfire and Mozilla
in Software
Posted
I am confused about the relationship between
Foxfire and Mozilla.
I am running an old Moz 1.3 on my Mandy 9.1 and
I would like to upgrade to Foxfire. Do I have to
uninstall Moz before installing Foxfire to stay out
of trouble? The Foxfire install procedures say not to
install a new Foxfire over an old one, but will
an old Mozilla screw it up?
I would like to avoid dependency Hell on this
because I only have a 56 K modem to download
with.
Thanks in advance
Banjo
(_)=='=~