tyme

just an idea, seeing if any others may be interested...


Guest c_m_f

This sounds similar to what I was suggesting a while ago, but that was a modified version of Mandrake which would optimize all the stuff for your architecture. Taking that one step further and incorporating everything which has been suggested sounds great!

 

http://www.mandrakeusers.org/viewtopic.php?t=4977

 

Also, for packaging, if people are still interested, take a look at Autopackage.


I agree that scoopy's idea is good, but part of the reason I wanted to get into this was to really learn how a distro is built. It's like a house: if you buy a complete house and then modify it to your liking, you won't learn as much as if you build the house yourself. As I said, since static doesn't plan on getting too much into this until around Christmas time, I may go ahead and join in at that point, but right now I'm considering other projects. But don't let that deter the rest of you from doing something like Scoopy suggests ;-)


Yeah really - tyme has a good point - there is no law that says we all have to agree on anything. Both ideas sound good and are very different; there's no reason to toss one because of the other. I only graduate from college at the end of December, and up to then I have to design and program a huge (GPL of course ;) ) cross-platform physics app that lets you apply forces to masses in real time. That leaves me NO time to distrofy, if ya read me, until then. Once I do have time, every day after work will be devoted to Linux, every weekend devoted to music. Voila!


Just some gabber.

 

I was interested to learn the internals of doing a distro, so I'm pretty easy on which/what for.

 

The scoopy/scratch ideas both have merit and perhaps one would lead to the other. (I don't need to say which direction, surely)

 

In reality, no-one's exactly sure of the future of MDK right now (unless I missed something), so ....

 

I had some other ideas; one is making a packaging tool along the lines of alien, but better.

 

If you write your own package format you'd limit users to just the packages you create - not good for choice.

If you choose either APT or RPM, both have their pluses and minuses ...

 

I had an idea about how to do this; someone tell me if they think it's stupid.

 

It needs a special root user not called root. I'll call it install. (You'll see why)

 

A major problem with RPMs is the different directory structures.

RedHat vs. (look, it's an example I know well, so it's just an example) KDE.

 

This causes no end of problems, and so you need someone thoughtful, like texstar or plf, to redo the RPMs. (Check out Mosfet's site)

 

OK, the idea is an INSTALL user, but the install user uses a chroot to a virtual file tree. The exact file tree depends on the exact distro, so the install proceeds as if the files are being put where they were intended - in fact they are, for the INSTALL user. Other users, however, see them in the place where they work in the distro, because the tree chrooted to was mapped that way.

 

Because everything is owned by install, it's easier to see what is installed.

I think it might need a urpmi DB for each package type (RH, MDK etc.) but maybe not.

 

Same would go for Debs.....
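The ownership trick above can be sketched with a one-liner (using the current user and a throwaway tree, since chown needs root - on a real system it would be something like `find /usr /opt -user install`):

```shell
# When one user owns every installed file, a single find(1) answers
# "what is installed?". Demo uses the current user in a temp tree,
# since chown(1) needs root.
tree=$(mktemp -d)
touch "$tree/installed-by-install"
find "$tree" -type f -user "$(id -un)"   # lists the file just created
```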

 

A further option being INSTALL from source.

The different databases could be checked, compared, etc. An RPM (xine-ui....) installed from RPM would be deinstalled via RPM before the compiled version was inserted; then the make install would be parsed to track the files and would be run as INSTALL, thus remapping any differences.

 

This could be done first for MDK but would form the basis of a distro later!!! ???


erm...Gowator, if I read your idea right, you would end up with basically two installs...one in the chrooted environment, and one in the regular environment...this would be an inefficient use of space...if I'm wrong, please explain better :-)

 

 

here's what I understand you as saying:

 

we have user install. install has read/write/execute access to /var/tmp/virtualinstall

 

so you su to the install user, and in the install user's .bashrc we could probably put the commands that would chroot him to /var/tmp/virtualinstall, so to him it would appear this was his root directory. Now, if I understand chroot correctly, once this dir becomes his root dir then he no longer has access to any of the other dirs unless they get mounted into the root dir (which, as mentioned, is actually /var/tmp/virtualinstall). But that would be pointless, because we'd end up installing into the original system anyway. And symlinks would be just as useful.

 

but why, you ask, does he need access to the other dirs on the system? Well, because otherwise he'd need a complete copy of the system in /var/tmp/virtualinstall so that all the dependencies would be filled....

 

then again, I may just be reading your idea wrong....

 

reading this portion:

Other users, however, see them in the place where they work in the distro, because the tree chrooted to was mapped that way.
I get the impression that maybe what you mean is for things to be symlinked so that RPM installs would match correctly with the setup of the specific system.

 

i.e. if a certain RPM installs its files into /usr/local/bin but the distro wants them in /usr/bin, you would symlink the file in /usr/local/bin to /usr/bin - is that what you're saying?
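If that reading is right, it can be sketched in a throwaway tree (directory names invented for the demo):

```shell
# Symlink mapping: the package "thinks" it installs into usr/local/bin,
# but the file really lands in usr/bin.
root=$(mktemp -d)                    # stands in for /
mkdir -p "$root/usr/bin" "$root/usr/local"
ln -s ../bin "$root/usr/local/bin"   # usr/local/bin -> usr/bin

touch "$root/usr/local/bin/somepkg"  # "install" through the mapped path
ls -l "$root/usr/bin/somepkg"        # ...and it is really in usr/bin
```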


No duplication....

No, what I meant is:

Well, look at it like NFS with the NIS automounter.

 

We do something similar at work all the time with Solaris.

 

Imagine program a) wants to install in /mnt/programs/bin/programa

 

but (since we're French) we have it in (it's not a real example, but close)

 

/mnt/logiciel/bin/logiciela

 

using the NIS automounter we make a map which says /mnt/programs/bin/programa is equivalent to /mnt/logiciel/bin/logiciela

 

if you type cd /mnt/programs/bin/programa you'll see the software, but if you do df . it will tell you you're using /mnt/logiciel/bin/logiciela.

 

Relative paths etc. are retained.
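For the Linux-minded, a rough autofs analogue of that NIS map might look like the fragment below - a guess at the equivalent config, reusing the invented paths above:

```shell
# /etc/auto.master - hypothetical Linux autofs analogue of the NIS map
#   /-    /etc/auto.direct
#
# /etc/auto.direct - a direct map: the expected path is bind-mounted
# onto the real (French) location, so relative paths are retained and
# df still reports /mnt/logiciel:
#   /mnt/programs/bin    -fstype=bind    :/mnt/logiciel/bin
```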

 

So what I was suggesting - and I guess LVM could be used as well...

The install user makes a mount (/usr/.......) which would be the home directory of the user install.

 

A remapped file tree is built which basically is the differences between two distros' places to store information.

The process is chrooted, so as far as install sees, everything is being put where it was meant to go, package-wise. But it's only in that directory structure when chrooted.

 

Imagine a rescue disk to fix lilo. You boot up and then mount your partitions under /mnt. lilo.conf is in /mnt/etc and vmlinuz in /mnt/boot.

 

Now a chroot /mnt will make the new root /mnt .... so lilo.conf is now in /etc. (No duplication - it's the same inode.)

 

Now you edit lilo.conf, type lilo and voila...

If you don't do this then you need to:

set your path so /mnt/usr/bin/vi (can't remember exactly, but you get the point) can be seen.

You also need to tell lilo explicitly where /mnt/etc/lilo.conf is and where /boot is. (Not impossible, but simpler with a chroot.)
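The rescue steps above, collected into one sketch (must be run as root from rescue media; the device name is just an example):

```shell
# Rescue-disk lilo fix as described above. /dev/hda1 is an example
# device; adjust to your layout. Defined as a function so nothing
# privileged runs by accident.
fix_lilo() {
    mount /dev/hda1 /mnt          # the damaged root filesystem
    chroot /mnt /bin/sh -c '
        vi /etc/lilo.conf         # inside the chroot this IS /etc
        lilo                      # lilo finds /etc/lilo.conf and /boot itself
    '
    umount /mnt
}
```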

 

You can do the same if you have different versions of linux installed. It helps if you have exactly the same kernel before you do this...

 

RH is on /mnt/distros/RH9

I'm currently in Mandrake, single user... and I've recompiled both kernels to be exactly the same!!!!

now chroot /mnt/distros/RH9 and voila, you're running RH9...

 

DON'T TRY THIS UNLESS YOU WANT TO CRASH YOUR SYSTEM.... it's to illustrate a point, but I have done this for lilo once - generated the files under the other distro and then copied across...

 

The point is the directory tree is chrooted into a virtual one. Isn't Linux beautiful..... everything is just a file...

 

The idea is the install takes place on a file tree which is mapped to the real file tree. (In the end your file tree is whatever you tell it to be; root can start wherever you want....)


ok, I understand what you want to do, I just don't understand why you want to do it that way...(please note I've never used NIS, so I don't know how it works)

 

if the program that's supposed to be in /blah/duh/blah is installed into /duh/blah/duh by the package, all you'd need to do is use symlinks...you don't need a separate tree. So, /duh/blah/duh is symlinked to /blah/duh/blah, so that when the program thinks it's installing into /duh/blah/duh it's _actually_ installing into /blah/duh/blah (I know, horrible names for directories).

 

ok, maybe I do understand what you're trying to do. You want a virtual filesystem in, say, /mnt/virtualsystem which is set up the way the program wants it to look. So, if the program being installed wants something in /usr/local/bin and the system wants it in /usr/bin, then /mnt/virtualsystem/usr/local/bin would be symlinked (mapped) to /usr/bin, basically?

 

if this is what you are thinking, the only way I know of doing such mapping in Linux is with symlinks, and I think after a chroot these symlinks would be broken - or rather, they'd be pointing to the directories of the chrooted environment.
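The point about symlinks breaking across a chroot is easy to see: an absolute symlink stores only a literal path string, which is resolved against whatever the following process sees as its root.

```shell
# An absolute symlink stores nothing but a path string. That is why a
# link like /mnt/virtualsystem/usr/local/bin -> /usr/bin points at the
# *chroot's* /usr/bin once you chroot into /mnt/virtualsystem.
d=$(mktemp -d)
ln -s /usr/bin "$d/mapped"
readlink "$d/mapped"    # prints the stored string: /usr/bin
```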

 

I understand your examples, but with a rescue disk, you aren't chrooting to a directory that is set up to point _back_ to the original directory tree; you are simply changing what the root is (basically, /mnt/whatever becomes / and what used to be / doesn't exist - as far as the shell is concerned).

 

ok...my head is starting to spin trying to think of a way to make your idea work, so I'll let you reply and correct anything I may have said wrong. It's not that I don't understand your idea, it's that I don't know of any tools for Linux that would allow it to be done. That is because once you chroot, as far as the shell is concerned, the original root doesn't exist anymore - until you exit out of the chrooted environment.


tyme,

It's a fairly inventive thing... but pretty far out as far as getting your head around it.

Believe me, when I first started using the NIS automounter I was confused as hell. Wasn't till my sysadmin was away and I had to learn it all very quickly that I suddenly understood how beautifully simple it is. (Like everything UNIX.)

 

P.S. not being sarcastic - I appreciate the abstract nature of this idea.

 

But I think you actually answered your own question....

that is because once you chroot, as far as the shell is concerned, the original root doesn't exist anymore-until you exit out of the chrooted environment.

 

That's the point: you can create any directory tree you want.

It only exists for that console,

but it represents the real tree - like a symlink on steroids, for a single console.

 

LVM is another possibility. I need to look into it more...

but with a rescue disk, you aren't chrooting to a directory that isn't setup to point _back_ to the original directory tree, you are simply changing what the root is (basically, /mnt/whatever becomes / and what used to be / doesn't exist-as far as the shell is concerned).

 

Yep, now my head is spinning... The original doesn't exist for that shell, but the whole chrooted tree does ... just in a different location.

 

It would duplicate - you were right - but only during the actual install. A physical tree would be created (/usr/....wherever) and the files actually copied there. They would then be moved to their correct locations based on a script that understands the differences between the two trees. This script would have to be in the original environment.

 

The script would also have to update the config files (preferably backing up the old ones) with the new location.... Say a bizarre package had a different place for the conf file, like /etc/http.d/conf/httpd.conf (this is from memory, so don't check), and the apache config put it in /var/http/conf instead; it would be moved, and references pointing to it updated.
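A minimal sketch of that relocation step, with invented paths, might look like this (a real version would know the layout differences between the two distros):

```shell
# Move a config file and patch references to the old path, in a
# throwaway tree. All paths are made up for the demo.
root=$(mktemp -d)
mkdir -p "$root/etc/http.d/conf" "$root/var/http/conf"
echo 'ServerRoot /srv' > "$root/etc/http.d/conf/httpd.conf"
echo "config=$root/etc/http.d/conf/httpd.conf" > "$root/etc/init-apache"

old="$root/etc/http.d/conf/httpd.conf"
new="$root/var/http/conf/httpd.conf"
cp -p "$old" "$old.bak"                         # back up the old conf
mv "$old" "$new"                                # move to the new home
sed -i "s|$old|$new|g" "$root/etc/init-apache"  # update references
cat "$root/etc/init-apache"                     # now points at the new path
```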

 

The reason I chose that was the switch for apache from /home/apache or /usr/apache/htdocs to /var/http/html. (I know some distros still put DocumentRoot in one of the first options, whereas apache now states its preferred place is /var/httpd/.)

 

ANY CLEARER ???

I'll try and sketch it out and explain better, but not tonight - I'll be too drunk by the time I get home. Last day of exams for my girlfriend, so it's a compulsory get-drunk night....

Guest xaff

I could help out with the graphic work, boot screens and such.. Website perhaps.. Or I could write a "HELLO WORLD" app in Pascal to amuse people. d-:


Ok, I get the idea. We'd have to force any packages we were installing to not check for dependencies since they wouldn't exist in our chrooted environment (this was the problem I was trying to point out).

 

Which also means it couldn't be source based.

 

But we'd also have to know how every single possible program is set up to install, and how we want it to install......bah. It'd be a pain in the arse, you know....

 

Might as well just make the packages ourselves, and make them install where we want them :-P (has anyone else noticed that the "sticking out my tongue" emoticon just looks like someone with a really red mouth wide open?)

Guest fubar::chi

ok, I skipped most of the posts here, so tell me if I'm saying anything that was already said. First off, to answer your question: maybe.

my coding skillz aren't that good - I know C++ but not that well, and I just started looking at PHP. My point is: don't expect any coding from me.

What I can do is write (and write well) :twisted:

I can write installation how-tos and such, documentation and the like.

That's assuming that you guys are going to actually do this.

If you really plan to do this, I think you should set up a webpage and get things rolling. Do you?


fubar:

after some discussion last night with some people who definitely know more about this subject than me, I've decided to place this on the back burner until I learn more about Linux.  Per a discussion I had with static, I believe he plans to go forward with this, but not really until Christmas time.

 

So...I guess contact him.  I'm looking at other projects that I feel I'm better suited to do right now.

Guest c_m_f

ok Gowator, I know I've been preaching this lots, but rather than writing your own packaging system from scratch, why not join in the Autopackage development and incorporate what you envisage into that system? Because what I have seen of it so far is very nice looking.

 

Also, they're available on #Autopackage on freenode.

 

Here's the FAQ from their page; hope it helps.

 

When we had an article published at OSNews, I answered many questions about the project. If after reading the FAQ you still have questions, take a look at that article and especially the comments.

 

Note that some of these answers are speculative - i.e. the design allows for it, but we haven't written the code yet.

 

   * What is autopackage?

     autopackage is software that lets you create software packages for Linux that will install on any distribution, can be interactive, can automatically resolve dependencies and can be installed using multiple front ends, for instance from the command line or from a graphical interface.

 

   * What does this offer me that RPM/DPKG doesn't?

     The Red Hat Package Manager was originally created by Red Hat to solve a problem that they had - namely, how to manage the software that they had packaged for their distribution. As Red Hat became dominant, people began creating RPMs and putting them on the internet, and other distributions adapted RPM to their own needs. The problem is, RPM was never designed to work on multiple types of distro, it is essentially designed to aid in the building of a distribution.

 

     DPKG is the Debian package manager. It too was designed with a single distribution in mind. Although Debian became famous for having easy package management, this is mostly due to the very large repositories it has. The chances are good that the software you want is available, though not necessarily always in the latest version.

 

     The system of attempting to package everything the user of the distro might ever want is not particularly scalable or easily adapted to other platforms. Although apt-rpm exists, it doesn't solve the problem that RPMs are specific to a distribution. By not scalable, I'm referring to the way in which packages are created and stored in a central location, usually by separate people to those who made the software in the first place. I can't see that system scaling up to all the software in the world, can you? Hence, autopackage tries to be decentralised, at least when it makes sense.

 

     In short, if you create an autopackage, it will be able to install on any distribution for which there is information available, which lets us all concentrate on writing amazing free software, instead of building and rebuilding packages :)

 

     Autopackage also differs in its approach to dependency management: rather than maintaining a huge database of files which will inevitably get out of date the moment you install from the source or copy files from another computer, autopackage directly checks the system itself for the stuff it needs.

   * Is autopackage meant to replace RPM?

     No. RPM is good at managing the core software of a distro. It's fast, well understood and supports stuff like prepatching of sources. What RPMs tend not to be good at is non-distro supplied packages, ie programs available from the net, from extra CDs and so on. This is the area that autopackage tackles. Although in theory it'd be possible to build a distro based around it, in reality such a solution would be somewhat suboptimal as we sacrifice speed for flexibility and distro neutrality. For instance, it can take several seconds to verify the presence of all required dependencies, something that RPM can do far quicker. One day we may optimize autopackage sufficiently that it's possible to use it for managing a complete distro as well, but that isn't a priority.

 

   * Why are the RPMs I find on the net today not portable between distros?

 

     There are a number of reasons, some obvious, some not so obvious. Let's take them one at a time:

 

         o Dependency metadata: RPMs can have several types of dependencies, the most common being file deps and package deps. In file deps, the package depends on some other package providing that file. Depending on /bin/bash for a shell script is easy, as that file is in the same location with the same name on all systems. Other dependencies are not so simple, there is no file that reliably expresses the dependency, or the file could be in multiple locations. That means sometimes package dependencies are preferred. Unfortunately, there is no standard for naming packages, and distros give them different names, as well as splitting them into different sized pieces. Because of that, often dependency information has to be expressed in a distro-dependent way.

 

         o RPM features: because RPM is, at the end of the day, a tool to help distro makers, they sometimes add new macros and features to it and then use them in their specfiles. People want proper integration of course, so they use Mandrake specific macros or whatever, and then that RPM won't work properly on other distros.

 

o Binary portability: This one is fun, and affects all binary packaging systems. It turns out that it's quite hard to make a binary which will work reliably on different distros and even versions of the same distro. The first problem is glibc symbol versions. The GNU C library is a rather special package. All programs use it, either directly or indirectly, as it interfaces programs to the kernel and provides functions for services like file system access, DNS lookup, arithmetic and so on. It also deals with shared library support. Normally, when a library breaks binary compatibility, the maintainer increases the major number in the soname, effectively renaming the library (but in a standard way). This means you can have several versions present on the system at once. glibc does not do this, for a variety of reasons I won't go into here. Instead, it renames each individual function, meaning you have several different versions of the same function. The ELF interpreter glibc provides is capable of dealing with this transparently. To see them, run nm /lib/libc.so.6 | grep chown.

 

When a program is compiled, it's linked against the latest versions of the symbols. If those symbol versions are not present on a system when the program is run, the link process will fail and the program won't start. This problem, and variants of it, have plagued UNIX-like systems for a very long time now. Luckily, there is at least a partial solution in the form of the Linux Standard Base, which provides a stable set of symbol versions you can use in your apps, which gives your binaries some degree of backwards compatibility and portability between distributions. We provide tools to let you use these, but at the moment they aren't documented. That'll be fixed soon. Because users cannot usefully upgrade glibc, instead you must compile your app to use a reasonably old set of symbol versions - the versions used bracket the range of distros your binary can run on.

 

           Another problem is that of symbol collisions. The semantics of ELF are unfortunately based on the old static linking days. When you run a program, the dependency tree of the binary is walked by /lib/ld-linux.so (which is the ELF dynamic linker). If you have a program, "foo" which depends on libbar.so, which in turn is linked against libpng.so, then foo, libbar.so and libpng.so will all be mapped into memory. Semantically, all the symbols from these objects are dumped into one big pot, and this is the crux of the problem. When performing symbol fixup, the glibc ELF interpreter will always choose the first symbol that matches, regardless of what the object being processed is linked against.

 

           For example, let's take our foo binary, and link it against two libraries, libA and libB. libA is in turn linked against libA1, and libB is linked against libB1. Now libA1 and libB1 are different libraries, BUT they both define a symbol called someFunction(). They have the same name, but do completely different things. You'd expect libA to be linked to the definition in libA1, and libB to be linked to the definition in libB1 wouldn't you, that is after all what makes intuitive sense. But that's not what happens. They will BOTH be linked to the symbol in libA1, because that's the one that came first. D'oh. This usually results in an instant segfault on startup.

 

           OK, so why does this cause problems with binary portability? Well, although having two libraries that declare a function with the same name is unusual, having two different versions of the same library in use at once is a lot more common. Libpng has 2 major versions in wide usage, libpng.so.2 and libpng.so.3 - they differ only internally, they are source compatible (but not ABI compatible). If I compile on a Linux distro that uses libpng.so.3, then my program will also be linked against libpng.so.3. If a user then wishes to run it on an older distro, say one which was compiled against libpng.so.2, they'll need to install the newer version for my app to work. Normally we say, so what? Unfortunately, my app (let's pretend it's a game) doesn't just link against libpng.so.3, it also links against libSDL.

 

           Now libSDL links against libSDL_image, which in turn links against libpng.so.2! So, now when my app is loaded, 2 different versions of libpng, both libpng.so.2 and libpng.so.3 will be linked in together, and things go boom. Not good.

 

           Note that the two versions are source, but not ABI compatible. That means the user can fix the problem by recompiling my app against libpng.so.2 - this time. It's not always that easy.

 

           As a result, binaries end up tied, often unknowingly, to the set of libraries the developer used when compiling. Running it on another distro might work, but there are no guarantees.

 

           Luckily, there is a solution to this problem in the form of an extension to the ELF fixup rules, originally implemented by Sun in Solaris. Direct and grouped fixup allows you to restrict the scope of symbols, preventing such collisions. Unluckily, it's not implemented by glibc. Volunteers? The problem is big enough that at some point, we (the autopackage hackers) may have to down tools and go work on glibc for a few months.

 

o Bad interactions with source code distros: because the current versions of RPM don't check the system directly, only a database, it is hard to install RPMs on things like Gentoo, even when they only use file deps. Future versions of RPM will apparently address this issue with probe dependencies.

 

   * Surely a neutral package can't integrate as well as one built specifically for my distro?

     Well, that's an interesting one. In short, for some packages it will, for some it won't. Although in theory you could for instance install Gnome or KDE using autopackage, in practice it's not meant for that. Most distros customize and tweak many packages, that's part of what makes them what they are. Things like apache for instance often come with customised start pages, integration into the configuration tools and so on. Clearly an autopackage of apache would not have these things.

 

On the other hand, often a distro simply doesn't have a package for a particular program you want, and it's not the sort of package that needs integration anyway. There are only so many ways you can install a library, or a game, or a word processor, or an icon theme...... you get the idea. In turn, autopackage is better suited to dealing with fiddly details than say RPM, so if your distro only provides KDE but you use Gnome via garnome (i.e. you installed it yourself), then distro-provided RPMs won't integrate with that environment at all, whereas a .package will (at least in theory).

 

     Regardless, we realise that many people want to use their distro-provided packages as much as possible. It's for this reason that when it's able to, it delegates to a distro-specific package. If for instance you install frozen-bubble via autopackage, but the Perl-SDL dependency is available via apt or urpmi, that's what it'll use. If you type "package upgrade apache" then it'll use up2date rather than the autopackage network.

 

We hope this will provide good integration with the host system and ensure that your system feels as integrated as it can be.

   * How does autopackage work?

An autopackage (a .package file) contains all the files needed for the package in a distribution-neutral format with special control files inside, wrapped in a tarball with a stub script prepended. In order to install a .package file, you run it, and the script then checks your system for the autopackage tools and offers to download them if they're not present. It'll then boot the front end of your choice and begin doing the things that installers do - check for dependencies, ask questions, even do things like present EULAs and perform copy-protection checks. Finally, you can uninstall or repair a package with the "package uninstall" or "package verify" commands.
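The stub-plus-tarball layout described above is easy to imitate; here's a toy version (all names invented - this is not actual autopackage code, just the general self-extracting technique):

```shell
# Build a tiny self-extracting "package": a shell stub with a tarball
# appended after a marker line.
work=$(mktemp -d) && cd "$work"
mkdir payload && echo hello > payload/hello.txt
tar czf payload.tar.gz payload

cat > demo.package <<'EOF'
#!/bin/sh
# stub: locate the appended archive and unpack it
line=$(awk '/^__ARCHIVE__$/ {print NR + 1; exit}' "$0")
tail -n +"$line" "$0" | tar xzf -
exit 0
__ARCHIVE__
EOF
cat payload.tar.gz >> demo.package
chmod +x demo.package

rm -r payload           # throw away the original...
sh ./demo.package       # ...the stub unpacks its embedded copy
cat payload/hello.txt   # prints: hello
```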

 

   * How does it deal with dependencies?

     Rather than attempting to maintain a huge database of files, it checks the system directly using scripts. Each .package contains what are called skeleton files - files that wrap up all the information about a dependency such as (most importantly) how to detect it, but also a description of it, some metadata and instructions for how to retrieve it if the dependency check fails. Skeleton files should be created for any package which might be required by other packages. This approach is similar to the one autoconf took many years ago when faced with a similar problem (although back then it was portability between various forms of unix rather than forms of linux).
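The skeleton idea can be sketched as a plain shell probe (the "skeleton" format here is invented; only the probe-the-system-directly approach matches the FAQ):

```shell
# Check for a dependency by probing the system, not a package DB.
check_dep() {   # usage: check_dep NAME EXECUTABLE
    if command -v "$2" >/dev/null 2>&1; then
        echo "dep '$1': satisfied by $(command -v "$2")"
    else
        echo "dep '$1': missing - would fall back to retrieval metadata"
    fi
}
check_dep posix-shell sh             # present on any Linux system
check_dep frobnicator frobnicate9    # hypothetical, expected missing
```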

 

   * Isn't that a bit of a hack?

     Yeah, it is. Unfortunately it's necessary, at least for now. The problem (that software on linux can be hard to install) is caused by the fact that Linux is open, and as such people tend to create differing versions of it. One of the most obvious ways in which they differ is file paths, but distros can differ in other ways too. There are two solutions to this: either ALL distros must conform to some standards such as the LSB, or a package manager must be built that is powerful enough to deal with the myriad differences.

 

Also, some dependencies are harder to detect than just looking for a single file. For instance the genst utility requires the dialog program, a small executable that displays a variety of screen widgets and forms using ncurses. Unfortunately, there are at least 3 different forks of dialog, and not all of them support a --version flag. Detecting them can be a bit of a game, but is certainly doable when using scripts. You can also create dependency skeletons that don't directly correlate to one package: virtual dependencies such as "sound server" are supported, where the script will scan for known sound servers that the app can use.

 

Finally, it has to be said that standards are always good, and now many major distros adhere to the LSB tightly. However, even though they do, the LSB doesn't standardise everything, and often there is still some slack in these standards. The "Filesystem Hierarchy Standard", for instance, does not define any particular location for KDE/GNOME/Enlightenment/WindowMaker and so on. As such, distros can and do place them in different locations. As the FHS is in some sections rather vague, this problem isn't going to go away anytime soon. So autopackage attacks the problem from the second angle, with the hope that the two solutions meet somewhere in the middle :)

   * Does it do automatic dependency resolution like apt and emerge?

     Yes, but in a slightly different way. The autopackage network is more akin to DNS than a big filestore - it doesn't actually contain many packages itself. Rather, it acts as a way of figuring out a URL to download given the root name of a package. The idea is that project maintainers themselves build and host the packages, rather than have huge FTP servers with them all on.

 

   * But I thought it was the job of the distro to package things!

     At the moment, that's how it works. It's unrealistic to expect distributions to be able to package any piece of software you will ever want however, and as Linux expands this will only become even more of a problem. Although of course a big part of a distro will always be what software comes with it, autopackage is designed to allow distribution maintainers to concentrate on the real work of improving their product, rather than just packaging software.

 

Users of Debian and Gentoo should note how hard it is to package everything a user might want, and to keep the packages already built from slipping behind. It's for this reason that we are aiming for a decentralised system, rather than setting up large apt repositories.

   * What's wrong with NeXT style appfolders?

     One of the more memorable features of NeXT based systems like MacOS X or GNUstep is that applications do not have installers, but are contained within a single "appfolder", a special type of directory that contains everything the application needs. To install apps, you just drag them into a special Applications folder. To uninstall, drag them to the trash can. This is a beguilingly easy way of managing software, and it's a common conception that Linux should also adopt this mechanism. I'd like to explain why this isn't the approach that autopackage takes to software management.

 

     The first reason is the lack of dependency management. Because you are simply moving folders around, there is no logic involved, so you cannot check for your app's dependencies. Most operating systems are made up of many different components that work together to make the computer work. Linux is no different, but due to the way in which it was developed, Linux has far more components and is far more "pluggable" than most other platforms. As such, the number of discrete components that must be managed is huge. Linux differs from what came before not only in this respect, but also because virtually all the components are freely available on the internet. Because of this, software often has large numbers of dependencies which must be satisfied for it to work correctly. Even simple programs often make use of many shared libraries, both to make them more efficient and to make the developers' lives easier.

 

     Because of this, Linux has to take a different approach to software management than other platforms. Appfolders are ideally suited to platforms such as MacOS X, which are largely monolithic in that the user rarely upgrades components (either directly or via other software); a similar approach is now being phased into Windows via .NET. In the case of the Mac, Apple are the only people who actually modify the components in the OS. Want to upgrade your widget toolkit? You'll have to buy the latest upgrade, then. The Windows case isn't so much that the user doesn't upgrade components; the problem is that the user can, but it tends to break things. For instance, IE, Office and various games routinely upgrade core parts of the OS, and other apps often ship with their own version of the C/VB runtimes. By having the various components an application needs embedded in the EXE or DLL file, Windows apps can minimize DLL hell at only a small loss of efficiency (they tend not to have large numbers of dependencies).

 

     Appfolders have a number of other disadvantages as well, even if you ignore the loss of efficiency they entail. One obvious one is that there is no uninstall logic either, so the app never gets a chance to remove any config files it placed on the system. Apps can't modify the system in any way except when they are first run - but what if you need to modify the system in order to run the app at all? Another is that the app menus are largely determined by file system structure, which makes it hard to have separate menus for each user or each desktop environment without huge numbers of (manually maintained) symlinks.

 

     Dependencies aren't going to go away, and indeed they should not: code sharing is a good thing that should be encouraged. The problem is that they are hard to manage well. The solution is not simply to dispose of dependencies - to hide from the problem by packaging everything an application needs into the app itself - but to build something that can manage dependencies well. We'd then have the best of both worlds: the ease of use of MacOS with the efficiency of Linux.
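     As a minimal sketch of what "managing dependencies well" means in contrast to appfolders: each package declares what it needs, and the installer works out an order that installs dependencies first. The package names and the dependency table below are entirely invented; real dependency resolvers also handle versions, conflicts and already-installed packages, none of which this toy attempts.

```shell
# Invented dependency table: package -> space-separated dependencies.
deps_of() {
    case "$1" in
        gst-player) echo "gstreamer gtk2" ;;
        gstreamer)  echo "glib" ;;
    esac
}

# Emit packages in an order where every dependency precedes its dependent.
install_order() {
    for dep in $(deps_of "$1"); do
        install_order "$dep"
    done
    echo "$1"
}

install_order gst-player
```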

 

     Update: As of 21st May 2003, I'd consider appfolders broken on MacOS, as it appears that most Mac software is now shipped using installers, even Apple's own software. Wise is also forging a business in InstallShield-style wrappers. It seems appfolders really are too simple at this stage in the game, even for the Mac.

   * Does it support commercial software?

     Yes, it does. It has the ability to do things that commercial vendors require, such as presenting click-through EULAs, and the fact that it is script based gives a lot of flexibility to implement copy protection mechanisms. Whether you choose to use these features is, of course, up to you.

 

   * How does the multiple front ends system work?

     When the user runs a package, the scripts figure out which front end to use based on a series of heuristics. If you ran it from the command line, it will use the command line interface. If you ran it from inside X, for instance from Konqueror or Nautilus, then a graphical front end will be selected based on your running desktop environment. The back end (the part that actually does the installation) communicates with the front end via a simple protocol based (currently) on named pipes. You can easily write install scripts that ask the user questions, as the details of the underlying protocol and front end are hidden from view.
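     A heuristic of that kind might look roughly like the sketch below. This is not autopackage's actual code: the function name and the specific checks (testing $DISPLAY, looking for a KDE process) are assumptions chosen for illustration.

```shell
# Hypothetical front-end selection heuristic: no X display means use the
# command line interface; otherwise pick a GUI toolkit matching whatever
# desktop environment appears to be running.
choose_frontend() {
    if [ -z "$DISPLAY" ]; then
        echo "cli"          # not running under X: plain terminal front end
    elif pgrep -x kdeinit >/dev/null 2>&1; then
        echo "qt"           # KDE appears to be running: prefer a Qt front end
    else
        echo "gtk"          # default graphical front end
    fi
}
```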

 

   * Can autopackages be localised?

     Yes: autopackage itself is internationalised, and the specfile format it uses (which is based on the INI format) supports having the same section in multiple languages. In addition, autopackage ships with a set of pre-translated strings that you can use in your own packages.
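     For example, an INI-style specfile with per-language sections might look something like this. The section names, keys and the language-suffix convention shown here are hypothetical, chosen only to illustrate the "same section in multiple languages" idea; consult the autopackage documentation for the real format.

```ini
; Hypothetical specfile fragment - section and key names are invented.
[Meta]
RootName: net.gstreamer.gst-player

[Description]
gst-player is a simple media player.

[Description:fr]
gst-player est un lecteur multimedia simple.
```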

 

   * What widget toolkit does the front end use?

     At the moment the plan is for the GUI front end to be based on GTK2, mainly because we have a coder on the project who is an experienced GTK/GNOME hacker, and because we all use GNOME. When autopackage 1.0 is released we expect GTK2 to be pretty widespread. However, I used to be a die-hard KDE fan, so I'd be delighted for somebody to contribute a KDE/Qt front end. As described above, the correct front end is guessed based on the currently running processes, so if a Qt version were available it would be used to improve integration with KDE.

 

   * What's the relationship between autopackage and the GNU project?

     autopackage isn't currently an official part of the GNU project, sorry if you got the wrong idea.

 

   * What about security?

     What about it?

 

   * You mean you're not going to do package signing?

     This is something we're still thinking about. In a decentralised environment like the one we're aiming for, it's obviously rather difficult to guarantee that a package hasn't been trojaned or won't blow up your hard disk. As anybody can produce a .package file without permission from us (or anybody else), there is always a risk, no matter how slight, that you will download a package that will attempt to destroy data or root your box. This is a risk that exists with any form of software distribution, even source tarballs.

 

     So what can be done about it? Well, one solution is to have a known trusted authority audit the code for trojans and digitally sign all packages. This introduces centralisation, however, and worse: if unsigned packages are no longer trusted, any holdups at the signing authority can cause serious problems.

 

     Another possibility is a simplistic web of trust. The root server trusts the gnome.org server, gnome.org trusts gstreamer.net, and gstreamer.net trusts the gst-player package - so you trust gst-player. That might be workable, but it's not something we're currently implementing, and it would require quite a bit of thought.
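     The chain described above can be illustrated with a toy script: each node vouches for the next, and a package is trusted if a path of vouches leads back to the root. This is purely hypothetical - the FAQ says nothing like this is implemented - and a real web of trust would use cryptographic signatures rather than a lookup table, and would handle multiple vouchers per node.

```shell
# Toy trust chain: who does each node vouch for? (Invented table; a real
# system would verify signatures, not consult a hardcoded list.)
trusts() {
    case "$1" in
        root)          echo "gnome.org" ;;
        gnome.org)     echo "gstreamer.net" ;;
        gstreamer.net) echo "gst-player" ;;
    esac
}

# Walk the chain from the root; trusted if we reach the target.
is_trusted() {
    target="$1"; node="root"
    while [ -n "$node" ]; do
        [ "$node" = "$target" ] && { echo "yes"; return; }
        node="$(trusts "$node")"
    done
    echo "no"
}
```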

 


COOL :D

No point reinventing the wheel, but I'll have to read the full text later...

 

tyme: good point, apart from the fact that the package database can be accessible, because it can live in the root of the chroot (or just be copied, since from a 'normal console' the chroot'd tree exists, but not from the root of /usr... etc.).

 

Hadn't actually considered it, but the workaround is easy!

Still, let me check out the c_m_f post first...

