New to large networking and need information

flarefox

I've been asked to set up a good Linux lab for the local schools here (small town wanting to cut costs, and I'm the local Linux guy). Thing is, I only have experience with single-user Linux. Can anyone point me to a good Q&A sheet or a how-to? What I want to know:

 

Installing on all the computers without overloading an RPM server. Is that called setting up a "park"?

Partition cloning. What's the best tool for it in Linux, and what's the best way to go about it?

I saw that Mandriva can let a Windows network hold the login data for users. Any problems people run into a lot?

I want certain programs and things to load every time a user logs in, like their network drive and a few custom desktop entries. Is there a place to go to learn how to do that?

 

I did some Googling, but the pages I found weren't very informative, so I figured you guys would have links you know are good, or could point me in the right direction. Thanks!

 

~Dee


How many computers are you looking at installing? If only a few, you could do it all from CD.

 

Cloning can be done with a package by Acronis, as it supports Linux partitions. Packages like Ghost (unless it's the Linux version) do sector copies, which means they image the entire size of the hard disk and therefore can only restore to the same size of disk. So be careful when choosing your cloning tool.

 

Alternatively, build one machine, copy the CDs/DVD onto it, burn the small boot ISO that's contained on CD1/the DVD, and then you can install directly from the server. That could be a lot faster than each machine reading from its own CD.

 

What I would then do is set up an FTP mirror for all your updates. If you choose Mandriva for this, it's nice and easy; I already do it for main/contrib/updates. It means the first time you sync you download something like 10-12GB, but any small updates after that are downloaded easily and fast. The second reason is that all the remaining machines can then be updated in seconds, rather than taking a lot longer while they each download from the internet.

 

I'm not sure about the rest, but I hope that helps get you going at least. Samba could be used for your authentication or networking stuff, though.
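
If you go the Samba route, the domain side of smb.conf would look something like this (an untested sketch; the domain name and ID ranges here are just examples, not from any real setup):

[global]
   workgroup = SCHOOL
   security = domain
   idmap uid = 10000-20000
   idmap gid = 10000-20000
   template homedir = /home/%U
   template shell = /bin/bash

Then join the domain with net rpc join -U Administrator and run winbindd so the clients can look up the Windows accounts.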


I've been asked to set up a good Linux lab for the local schools here (small town wanting to cut costs, and I'm the local Linux guy). Thing is, I only have experience with single-user Linux. Can anyone point me to a good Q&A sheet or a how-to? What I want to know:

I'll try:

Installing on all the computers without overloading an RPM server. Is that called setting up a "park"?

You can just rsync to a mirror... but better still, see if your local ISP has one...

Partition cloning. What's the best tool for it in Linux, and what's the best way to go about it?

Partimage works well. The real question, as iamw asks, is how many computers, and are they all the same?

If they are all different, then it might actually be easier to use a live distro... I'm not so familiar with the Mandriva-based ones, but others here are... but I know that Kanotix installs in about 12-15 minutes with 2GB of apps... the nice part is the hardware config is already done by the live-CD part... but I'd expect Mandriva Live or other derivatives to work the same...
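
For what it's worth, Partimage usage is basically save/restore with a device and an image file; from memory something like this (device name and paths are just examples):

partimage save /dev/hda1 /mnt/server/images/lab-root.img
partimage restore /dev/hda1 /mnt/server/images/lab-root.img.000

(it splits the image into .000, .001... volumes as it saves, so you restore from the first one).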

 

I saw that Mandriva can let a Windows network hold the login data for users. Any problems people run into a lot?

Depends what you call a lot... what's a small town? If it's fewer than 5000 or so, it shouldn't be any noticeable load for the authentication... but the actual file access will depend on the network and on the ability of the file server to serve the files...

I want certain programs and things to load every time a user logs in, like their network drive and a few custom desktop entries. Is there a place to go to learn how to do that?

 

I did some Googling, but the pages I found weren't very informative, so I figured you guys would have links you know are good, or could point me in the right direction. Thanks!

 

~Dee

Hmm, I think you might want to firm up exactly what you want...

There is commercial software like NX Server or Tarantella which does the whole thing for you... either of them might treat you as a charity case (good advertising for them). I run the free version on my server; with a decent connection it's literally like being local...

All the user needs is the client... for NX it's a binary and works on Win/Lin/Mac/*NIX etc... Tarantella also has a web client, so you just need a browser...

 

VNC also works... it's just treacle-slow compared to the commercial algorithms, and depending on your users, the commercial stuff allows load balancing etc.

 

If I were doing this (and I have done it for thousands of users in my old company, across worldwide offices), I would think about using a thin client...

With this you specify a session... that session is whatever you want... it can be an xterm, an XDMCP login session (on xdm/kdm/gdm), or it can start up, say, KDE with the user's own parameters...
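
e.g. once XDMCP is enabled in the server's kdm/gdm config, a client that already runs X can pull a full login session with just (the hostname is an example):

X :1 -query yourserver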

 

I even run a session that simply starts X with no WM and a fullscreen vmplayer running Windows, which shares my home as "My Documents" in Windows when it starts!

 

But what you need to do is think about what they will be running... is it graphics- or CPU-intensive, etc.? Do you want them physically restricted to the Linux lab, or logging in from home too?

Give us some numbers and locations etc., what apps and stuff, and I'll get back :D



 

Thanks for all that, Ian! To answer your questions, it's not an amazingly huge number of computers, but enough to make it tiresome to update them if/when the need arises: 80-100 PCs total. They have something like 300-400 students who all have logins and access; it's for a special class setup. I think the school system uses Ghost, but every machine must be cloned exactly for that, and I'm not really sure all the computers in the lab have exactly the same hard drive sizes. I'll read up on all that.

So, if I set up an FTP mirror for all my downloads... do you mean that I have one computer download and set everything up on itself, then do some sort of sharing from that machine so all the others can connect to it for their software/updates? If I did a live install, I'm sure it would be taxing on some random FTP server to have 100 machines all trying to grab the same file at the same time... or more taxing than I would like to be on someone else's machines. I'll look more into the FTP or rsync mirrors.



 

On the login stuff: basically, on the server each user has a few gigs of space that is their home space. I want to mount that for each username on login. Other than that, I want them to use the regular machine for all the normal data saving and just throw their saved files onto the server afterward. I also want custom icons to load onto their desktops each time. That's really as far as I want them to go; I am locking them out of every other part of the system in the hope that this will make it work faster and better. I've used Partimage before on my home machines to back everything up before I tried something risky, but I never got it to save the data reliably, and I never got a boot disk or CD made for it that would install correctly (network errors). It seemed really great with what worked, though.

 

The apps the students will be running are: SketchUp (Windows-only; seeing if it will run in Wine or something, as it requires 100% OpenGL compliance, which Linux has, but I'm not sure how the Wine emulation handles that), SoftImage, several CAD apps, Lightwave (works wonderfully in Wine/Cedega), Blender with Verse integration, GIMP with GAP and Verse integration, Photoshop (Wine/Cedega), OpenOffice, and a few other 3D/2D apps that I can't remember.

 

I'll look up NX Server and Tarantella to see more about them, too. Thanks for that!



From memory, Lightwave is Java, is it not? (The Linux version is also FREE as in beer.) So no Wine needed?

The OpenGL will be a problem for thin clients... Tarantella was working on it last time I looked, a few years ago, but ThinAnywhere (Mercury) did a much better OpenGL job (though that was for HUGE datasets, TBs each).

 

If you can get everything working OK under Wine (that would be my first effort), then I'd simply run a file server... 100 people logging in at once on a netboot is probably stressing the network too far... unless it's a really minimalist boot... but what I would do is simply put their homes on the server...

 

For the install I'd make the absolute minimal boot disk and do an NFS install...

The simple way is to just use -o loop to mount the ISO... and kick off a preconfigured install on the clients from CD/floppy... using network boot.
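
As a sketch (the ISO name, mount point and subnet are just examples), the server side is only a couple of commands:

mount -o loop /srv/mandriva-2007-dvd.iso /mnt/mdviso

then add a line like this to /etc/exports and run exportfs -ra:

/mnt/mdviso 192.168.1.0/24(ro)

The clients boot from the small boot CD/floppy and point the NFS install at server:/mnt/mdviso.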

 

There are lots of tools for this; basically it's just a script that the installer takes, and most distros let you save this as a file and reuse it... in effect it's just the most basic setup part of the install, and then a series of (for Mandriva) RPMs to be installed, e.g.:

urpmi /mnt/server/rpms/default/*.rpm

Then you edit /etc/skel on the server, which is the basis for new user accounts, and put in it whatever you want... then every new account uses your predefined stuff...

 

root@Kanotix32:/etc/skel# ls -a
.        .bashrc     .kde     .nessus.keys  .weechat    .xine
..       Desktop     .kderc   .nessusrc     .xchat2     .xmms
.acrorc  .gtkrc-2.0  .links   tmp           .Xdefaults  .xscreensaver

You can go mad and do GIMP (.gimp) and .wine too... this way the base config is all pre-installed in the user's directory as you create the user. This part can also be scripted... so basically you run the adduser step in a loop over a user list (see the sketch below)...

 

and it creates ALL the directories, and everyone is set up the same...
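
A sketch of that script (the list file name and default password are just examples):

#!/bin/bash
# create one account per line of 'userlist'; useradd -m copies /etc/skel
# into each new home, so everyone starts with the same desktop and configs
while read u; do
    useradd -m "$u"
    echo "${u}:changeme" | chpasswd   # example default password
done < userlist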

Check out the LDP for guides on this :D and ask here :D

 

 

But the concept is classic; it's what Sun called JumpStart years ago...

No need to set up any users on the clients; just configure them to use NIS (or even LDAP) for user authentication... and then you configure that on the server.


So, if I set up an FTP mirror for all my downloads... do you mean that I have one computer download and set everything up on itself, then do some sort of sharing from that machine so all the others can connect to it for their software/updates?

 

Here are my rsync scripts, so you can see what I'm doing. These are for Mandriva 2007, by the way, but it's more or less the same for other releases, just a change in directory structure (my scripts still have the 2006 lines in them, commented out).

 

[admin@elan ~]$ cat /etc/cron.daily/rsync_main
#!/bin/bash
#
# This script syncs the Mandriva Main mirror.
#

# 2006
# rsync -rv --delete rsync://anorien.csc.warwick.ac.uk/Mandriva/official/2006.0/i586/media/main/ /home/ftp/pub/mirrors/mandriva/main/

# 2007
# rsync -rv --delete rsync://anorien.csc.warwick.ac.uk/Mandriva/official/2007.0/i586/media/main/release/ /home/ftp/pub/mirrors/mandriva/2007/main/
rsync -rv --delete rsync://mirrors.usc.edu/mandrakelinux/official/2007.0/i586/media/main/release/ /home/ftp/pub/mirrors/mandriva/2007/main/

 

[admin@elan ~]$ cat /etc/cron.daily/rsync_contrib
#!/bin/bash
#
# This script syncs the Mandriva Contrib mirror.
#

# 2006
# rsync -rv --delete rsync://anorien.csc.warwick.ac.uk/Mandriva/official/2006.0/i586/media/contrib/ /home/ftp/pub/mirrors/mandriva/contrib/

# 2007
# rsync -rv --delete rsync://anorien.csc.warwick.ac.uk/Mandriva/official/2007.0/i586/media/contrib/release/ /home/ftp/pub/mirrors/mandriva/2007/contrib/
rsync -rv --delete rsync://mirrors.usc.edu/mandrakelinux/official/2007.0/i586/media/contrib/release/ /home/ftp/pub/mirrors/mandriva/2007/contrib/

 

[admin@elan ~]$ cat /etc/cron.daily/rsync_updates
#!/bin/bash
#
# This script syncs the Mandriva Updates mirror.
#

# 2006
# rsync -rv --delete rsync://anorien.csc.warwick.ac.uk/Mandriva/official/updates/2006.0/main_updates/ /home/ftp/pub/mirrors/mandriva/updates/

# 2007
# rsync -rv --delete rsync://anorien.csc.warwick.ac.uk/Mandriva/official/updates/2007.0/i586/media/main/updates/ /home/ftp/pub/mirrors/mandriva/2007/updates/
rsync -rv --delete rsync://mirrors.usc.edu/mandrakelinux/official/updates/2007.0/i586/media/main/updates/ /home/ftp/pub/mirrors/mandriva/2007/updates/

 

There you have all the scripts for main/contrib/updates. They are placed in /etc/cron.daily, and you can see their names from the commands I used to display them. You then have to make each one executable:

 

chmod +x scriptname

 

so for example:

 

chmod +x rsync_main

 

Then you have to configure urpmi on all your machines (including the one hosting the mirror) to use the directories you rsync'd. So make sure you have an FTP server installed on the mirror machine, and allow anonymous access so all the machines can connect. I use vsftpd for mine, and it's really easy to configure. Normally the shared FTP directory is something like /var/ftp, but I changed this by adding a line to the vsftpd.conf file:

 

anon_root=/home/ftp
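
Depending on the distro's defaults you may also need anonymous logins switched on explicitly (an assumption; check what your vsftpd.conf already has):

anonymous_enable=YES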

 

You then make sure that the ftp user has full rights to this directory:

 

chown -R ftp:ftp /home/ftp

 

The urpmi sources from /etc/urpmi/urpmi.cfg on one of my machines:

 

updates ftp://elan/pub/mirrors/mandriva/2007/updates {
 hdlist: hdlist.updates.cz
 key-ids: 22458a98
 synthesis
 update
 with_hdlist: media_info/synthesis.hdlist.cz
}

main ftp://elan/pub/mirrors/mandriva/2007/main {
 hdlist: hdlist.main.cz
 key-ids: 70771ff3
 with_hdlist: media_info/hdlist.cz
}

contrib ftp://elan/pub/mirrors/mandriva/2007/contrib {
 hdlist: hdlist.contrib.cz
 key-ids: 78d019f5
 with_hdlist: media_info/hdlist.cz
}

 

Where I have the name "elan", that's the name of my FTP server, so you need to make sure that /etc/hosts can resolve it on all your machines. Alternatively, just use the IP address of the machine instead. Here is a script I made for adding the urpmi sources:

 

[ian@europa Mandriva]$ cat mdv2007urpmi
#!/bin/bash
#
# Urpmi sources for Mandriva 2007
#
urpmi.addmedia main ftp://172.20.12.230/pub/mirrors/mandriva/2007/main with media_info/hdlist.cz
urpmi.addmedia contrib ftp://172.20.12.230/pub/mirrors/mandriva/2007/contrib with media_info/hdlist.cz
urpmi.addmedia --update updates ftp://172.20.12.230/pub/mirrors/mandriva/2007/updates with media_info/synthesis.hdlist.cz
urpmi.addmedia plf-free ftp://spirit.bentel.sk/mirrors/plf/mandriva/2007.0/free/release/binary/i586 with synthesis.hdlist.cz
urpmi.addmedia plf-nonfree ftp://spirit.bentel.sk/mirrors/plf/mandriva/2007.0/non-free/release/binary/i586 with synthesis.hdlist.cz

 

Notice that main, contrib and updates are pointing at my elan machine via the IP address shown. And you're done with this part: one machine, the server, downloads the RPMs; then all the machines point at it and get their updates almost instantly. By placing the rsync scripts in /etc/cron.daily, they will sync every morning at 4am!
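
And if you use the hostname rather than the IP, the /etc/hosts entry on each client is just one line, using my server as the example:

172.20.12.230   elan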


Awesome! That is a lot of what I wanted to do! That solves the problems. Thanks!

 

~Dee

 

 


 

 

While I know I don't want to mount the root, and definitely want to just install the software on every machine, it helps to know the mounting and copying scripts. It makes sense. I could have it mount their home space on logon, then mount another space that copies all the default settings over for everything, and then unmount their home space and remove the local directories when they log out, for file space and convenience. That makes a lot of sense to do. I like the idea that they'd get a fresh workspace every time. It would limit the number of errors they'd have due to faulty settings, too, if it constantly refreshes itself.

 

Thanks a lot!

~Dee

 


Dee, I think the simplest, tried-and-tested method :D which has been used in your situation is just to have the homes mounted over NFS...

 

Using NIS etc. is a good way to do the authentication/logon process; essentially it just replaces parts of your local /etc with a standard networked one (i.e. networking, mounts, users etc.). It's old but well tested, and there are literally hundreds of pieces of documentation for it, but you don't need it yet, or could implement it later...

 

The easiest(?) way is probably just to replace the /home line in /etc/fstab...

 

So traditionally it might be

/dev/hda3       /home    ext3   defaults   0   0

and it becomes

server:/home    /home    nfs    defaults   0   0
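
For that to mount, the server also has to export /home; a minimal /etc/exports sketch (the subnet is just an example), activated with exportfs -ra:

/home   192.168.1.0/24(rw,sync)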

 

This way you don't need to download everything at every login, and it's independent of where the user logs in.

On a 100Mbit switched network you will notice very little performance hit... (if any).

If you copy the whole home (at the beginning and end of each session) you will, because everyone will do it at once...

You could minimise this by using rsync to copy only changed files, but it's an extra, error-prone step, and things like .Mail can be tricky... this way a file only uses bandwidth while it's being opened or saved, and while an executable is being run it's copied to /tmp anyway...

 

There are many good reasons for doing it this way... like simple backup (you just back up the homes on the server), and the inverse when a user screws something up...

 

The other way, you end up copying files to each PC...

 

e.g. the process of creating a new user:

On the server you just add the user, and /etc/skel creates that user with the standard desktop and scripts.

 

If not, then you end up creating that user on each machine and copying the files across to each machine, so that the user can log in to each machine... yes, you can do this as needed, but then you are messing with scripts on login/logoff etc.

 

Now the scripts are important... once you're up and running you will see why... but it just makes all your system admin easier.
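
Dee's custom icons fit in here too: drop a few .desktop files into /etc/skel/Desktop and every new user gets them. A minimal example (the app is just an illustration):

[Desktop Entry]
Type=Application
Name=GIMP
Exec=gimp
Icon=gimp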

 

You might want to read the whole doc, but start here:

http://www.tldp.org/LDP/nag2/x-087-2-nis.clients.html

 

An NIS-aware implementation of these functions, however, modifies this behavior and places an RPC call to the NIS server, which looks up the username or user ID. This happens transparently to the application. The function may treat the NIS data as though it has been appended to the original passwd file so both sets of information are available to the application and used, or as though it has completely replaced it so that the information in the local passwd is ignored and only the NIS data is used.
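
On the client side that whole mechanism comes down to a couple of files; a sketch, assuming a NIS domain called "schoollab" served from the elan box:

/etc/yp.conf:
domain schoollab server elan

/etc/nsswitch.conf (the relevant lines):
passwd: files nis
group:  files nis
shadow: files nis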