
Gowator

Everything posted by Gowator

  1. Jareez, do you have 2x Desktop files or just one? Just wondering. P.S. For anyone interested....
  2. Apparently, my microwave doesn't allow me to use it with the door open. Is there any particular reason for this? I am new to microwave cookers but find it very inconvenient to have to keep closing the door while I'm cooking. Why are $MANUFACTURER taking away my freedoms? Surely it's up to me to determine the risks involved? Why have this stupid default when some people want to use the microwave with the door open... When we solve this imposition on my freedoms we can sort out my gas water heater, which turns itself off if the pilot light is not lit. Discuss... (in case anyone misses it, this is about root logins to the GUI)
  3. You can leave both keyboards plugged in and just set the keymap in the relevant stanza of xorg.conf with Option "XkbLayout" "<code>", but you then need to identify the keyboards: cat /dev/input/event0 should show the first keyboard. ls /dev/input/ will show something like:
    by-path event0 event1 event2 mice mouse0 ts0
So now you need two keyboard stanzas, using the correct /dev/input device for each:
    Section "InputDevice"
        Identifier "Keyboard0"
        Driver "keyboard"
        # this is for supporting deadkeys
        Option "XkbRules" "xfree86"
        Option "XkbModel" "aa"
        Option "XkbLayout" "aa"
    EndSection
    Section "InputDevice"
        Identifier "Keyboard1"
        Driver "keyboard"
        Option "Protocol" "usbev"
        Option "Device" "/dev/input/event0"
        Option "XkbModel" "xx"
        Option "XkbLayout" "xx"
    EndSection
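For completeness, a rough sketch of how the ServerLayout might reference both keyboards (the Screen and Mouse identifiers here are just placeholders for whatever your xorg.conf already uses):
    Section "ServerLayout"
        Identifier  "Dual Keyboard Layout"
        Screen      "Screen0"
        InputDevice "Mouse0"    "CorePointer"
        InputDevice "Keyboard0" "CoreKeyboard"
        InputDevice "Keyboard1" "SendCoreEvents"
    EndSection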
  4. Two separate things... I personally find I can do more at once, and more efficiently, from the CLI. Out of your list I would probably only use a GUI tool for creating user dirs and avatars... everything else can just be backgrounded, like updating urpmi, and I'd probably use kate started as root for the config files if I want cut'n'paste, but I rarely have to do this since I just copy the old ones back. Can't argue with that.
What lack of control? No one is stopping you doing anything except what is not possible. Want to log in as root? Then find the config file and edit it... it's only two letters to delete and three to add. That this isn't a default is just common sense... far more people would object to having a root login in KDM than wouldn't. Any distro that has a root login is going to be continually berated as a toy distro... no serious Linux pro would even have a root login, let alone use one, so that means everyone who doesn't want it has to remove it.
Why are you ranting just about root? Why not have a GUI login for apache and one for mysql as well? Or, on the other side, why even have a login? Why not set the password to null and just boot straight into root? Or use the autologin for a default user? Someone has to decide whether a certain binary decision is YES or NO, or add another step to the install... Do you want a root login? Y/N. Do you want a mysql login? Y/N. Do you want .......... (about 20 other default users) login? So you can either do this or allow every user a GUI login by default.
Personally, if I caught anyone working for me using a root GUI login on any *NIX I would give them a written warning, and the second time I'd sack them straight off, as would more or less anyone who is responsible for admin'ing a professional *nix network. If I could have convinced HR just how stupid and irresponsible it is I'd just fire them, but usually HR demand a written warning first. So far as I'm concerned, anyone who logs into X as root is not fit to work in a *nix environment, full stop, no discussion.
The default has to be one way or the other: if it allows root GUI logins then Mandriva will be written up as a second-rate distro that doesn't take security seriously... whereas this way round it just causes people who don't know better to complain, and hopefully by the time they work out how to edit a single line they have realised how irresponsible it is to log in as root.
  5. Yes it does, because /.kde/share/config/kdm would take you there regardless of where you are (if it existed), whereas .kde/share/config/kdm would take you to a hidden directory called .kde in your present directory. However, that will only install a new KDE theme, not a KDM theme, whereas http://linuxmigrations.hd.free.fr/kdtheme.png will install a KDM theme (I already clicked on administrator mode).
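To see the difference, a quick illustration of how the three paths resolve (the kdm subdirectory is just the example from above):
    ls /.kde/share/config/kdm     # leading slash: looks under the filesystem root, regardless of where you are
    ls .kde/share/config/kdm      # no leading slash: looks for a hidden .kde directory in your present directory
    ls ~/.kde/share/config/kdm    # the usual place: the hidden .kde directory in your home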
  6. Didn't make any difference. The odd thing is that Start-E brings up Konqueror, and when I'm running AIGLX then Start-Mousewheel activates the zoom function, so the key is working - I'm guessing it's just ignored by KDE, or not getting passed to the taskbar or something. I noticed that my trackpad vertical scrolling has stopped working too, but that's also just a little niggle rather than a major pain.
Yep, I got similar niggles.... it's something in the setup for AIGLX and XGL somewhere, I guess, that "hijacks" certain keypresses, and even when you switch them off that part is probably still grabbing the keys... It might be possible (haven't tried very hard) to reactivate AIGLX, turn off the grabbing of the WinKey, and then deactivate AIGLX again... worth a shot I guess?
  7. I don't know about the Mandriva-hacked KDE CC (and I'm not starting a vmware session right now to check), but in the (unhacked) KDE CC there is a theme manager where you just click "install new theme" and it lets you select a downloaded theme file. In the system administration tab is the same thing for the splash screen... the "KDM theme manager". You need to use administrator mode, but when you click on it, it asks to install a theme... I don't know if this menu is available in Mandriva though???
  8. Well, I'd be a hypocrite if I didn't say I'm using them, but I also do try and buy products with opensource drivers... for a few reasons.
IMHO, this turned out badly and I personally lay the blame on procedures which are only necessary on an idealistic level. There is no legal issue with having closed source code being used by open source code as long as the open source code does not directly contain any of the closed source code aside from the calling of functions, which is often known as "the black box" - that is, "I don't know what happens in there but I know if I pass it this stuff I get this stuff back, and that's all I need to know for it to function". I think all too often the OSS community gets on its high horse and says "If it ain't OSS we ain't gonna use it!" I believe that this conclusion is false. Binary drivers give Linux the ability to use hardware that was previously unavailable. This helps advance Linux by making more hardware compatible, thereby getting more users, which in the end improves and expands the Linux community. Coders have a tendency to only look at things from the developer perspective, when there's a much larger picture they completely miss. They can't see the forest for the trees.
Taking the last part first, this is certainly true, and probably of kernel devels more than most. However, drivers do represent a contentious issue because of what they do. I don't have any problem at all with commercial closed source software in Linux; indeed I encourage it, up to a point. I'm certainly very happy with my photoprocessing SW... Skype and quite a few others... though Skype does deserve special criticism because of the way it hijacks the sound... I'm pointing this out as an example because it's actually a badly behaved application. The thing is, the parts which need protecting for commercial reasons don't have anything to do with this. The encryption and compression algorithms could continue to be a black box while the UI and the sound-server handling were open sourced, without compromising the trade secrets in the important part. If you think about it, all of what makes Skype work well is the compression and encryption algorithms... the UI isn't anything special, and the sound driver hooks are terrible, since not only do they take exclusive access, they often forget to let go afterwards!
Now onto drivers... the problem is that drivers have direct HW control and load into kernel space. I think the black box itself isn't bad, but it depends where that black box is. As I say, it's not simple. Certain firmware (WiFi) is impossible to open source in the USA, for instance, simply because of legislation regarding radio transmissions, specifically that the user should not be able to boost the power output of a consumer item. I understand WHY.... but the legislation is then directly preventing an opensource driver for most wifi chipsets. I doubt that was its intended goal; it's just a byproduct that only really affects open source, because if it's closed source then you can't play with the gain in the firmware. (And ultimately, if you're determined, you can mess with it anyway using external add-ons; anyone who ever had a CB radio will know that.) The real problem is that the drivers are in effect a part of the kernel... and the firmware gets direct control over kernel processes, but there is no review process for the binary drivers...
Specifically: "I don't know what happens in there but I know if I pass it this stuff I get this stuff back, and that's all I need to know for it to function." Yes and no.... because the part that misses out is that you don't necessarily know what comes out of the black box, only those parts for which you have known hooks. It could be malicious, but more likely it could just be poor code*. Hence even Windows does driver certification (whether people choose the MS-certified driver or not is up to them)... but the whole idea of not having certification or peer review is alien to Linux. In the case of the pwc driver... well, I think this was going over the top in many ways, but I think from a kernel dev's POV it's a question of where the slippery slope starts..???
Overall it's a strange dance in the IP world.... companies want to protect their IP and development costs, naturally, but in many cases they are just shaving off cents of development by shifting a hardware function into firmware so that the host's CPU can run it. The problem is that by loading firmware into a kernel module they are introducing parts of the running kernel that may or may not interfere with other parts of the running kernel. Perhaps the simplest example is a WinPrinter. If you take a hardware-only printer, you send it pure instructions, be it PS or PCL5 etc., but with a winprinter you are actually loading the firmware into the kernel to do the translation.... Of course the translation is no big deal... but exactly how that kernel module interacts with everything else is then unknown. A few badly coded hooks, or just direct memory access etc., and the whole kernel is borked, or at least one other device might then stop working. This is a simple consequence of the monolithic Linux kernel... it's far more susceptible to kernel taints than a microkernel like the NT4-based OSs or the BSDs.... It's also got a lot of advantages.... but it's what we have in Linux. As an OT interesting read (if very old): http://people.fluidsignal.com/~luferbu/mis..._Tanenbaum.html
edits: * When I say poor code, what I meant is that the driver writer for closed source HW is only interested in their one device, not how it interacts with other devices running in the same kernel space. This is a particular problem with a monolithic kernel... so how well a driver behaves in kernel space is far more important than how an application behaves and co-exists in user space. I don't think this is really exploited in a malicious way, but it would be easy to do so... I'm not saying this happens, it's an example: but say ATI built some code into the closed source driver that deliberately messes about with, say, an nforce chipset and disables the sound or network or whatever. It's unlikely, because it would be stupid, since in most cases the probs would start AFTER adding the new ATI card... but there are plenty of places it could be malicious. Mainly, however, it's just down to the driver manufacturer having no interest in what they break so long as their product works. I'm sure a few of us remember the probs with DMA and IRQ conflicts back in DOS/Win95/98 times. Because the Linux kernel is monolithic it is susceptible to bad code in modules, and this bad code might not even be apparent until you add a second module... but then you can't fix it, nor can the kernel dev team, because it's closed source. However, Linux is also more generic in driver support... so it tends to support chipsets, not actual products.
As more closed source drivers are added, a situation like DOS/Win98 with conflicting drivers is likely to occur... in other words, I can have SATA access or sound, but not at the same time?
In summary, I think there is a difference between what happens in user space and kernel space, and that is inherent in the design of Linux. Sloppy programming happens everywhere, but in the case of closed source it's not repairable unless the manufacturer sees a need... and if the problem is that the programming just affects a competitor's hardware, then the motivation to fix it (and thereby admit to the bad coding in the first place) is low, whereas with open source, cross-compatibility and well-behaved drivers have a higher motivation.
  9. This is an excellent tip.... it's easy to forget if you leave windows open like I do..
  10. If the Windows or Super key is not working in KDE, then try adding the following to your xorg.conf file, under the InputDevice section for the keyboard:
    Option "XkbOptions" "altwin:super_win"
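For context, the whole keyboard section might then look something like this (the model and layout values are just examples; keep whatever yours already says):
    Section "InputDevice"
        Identifier "Keyboard0"
        Driver     "keyboard"
        Option     "XkbModel"   "pc105"
        Option     "XkbLayout"  "gb"
        Option     "XkbOptions" "altwin:super_win"
    EndSection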
  11. I think you need to remove the Clone. If you want them completely independent you end up with two Desktops (~/Desktop and ~/Desktop1).
  12. I'm not trying to tell people to use them or not.... and I agree that at least they consider Linux... but I think in reality there are far more manufacturers who provide open source drivers than we realise... simply because they are open source and in the kernel. The first that springs to mind is Adaptec. You don't even notice the Adaptec drivers because they are a standard module, and unless you're compiling a kernel they just work... you don't really need to know about them.... Then there is CUPS (not exactly a driver, but written open source by a commercial company...), which brings me to the many printers with Linux drivers. Then there is Creative, with the numerous OS drivers for different sound cards. A recent(ish) addition is Intel; check out http://www.intellinuxgraphics.org/ but also http://ipw3945.sourceforge.net/ and others for the wireless chipsets. But either way, I don't think it's simple. Here is an interesting take (and I'm not personally sure where I stand): http://kerneltrap.org/node/3729
  13. I think that sums it up.... I'm not saying people shouldn't use the nvidia drivers (I do), I'm saying people should be aware of the effect of their choice, and the message that choice sends NVIDIA (or ATI). As for the problems with the nv driver... I think the two are kinda linked; that is, the resources being spent on the nv driver are probably less than if the nvidia driver didn't exist... I think the devels must feel (at least partly) that the opensource driver is a sideline which will be used in installs and then 90% of people will install the closed source one anyway. In many ways I think a lot of the small problems with the driver would have been fixed with a bigger user base? It's only speculation... but there are other examples... I read a post on the Kanotix forum about Flash 9 where, as someone put it, they are not going to use it because it just detracts from all the progress made by Gnash... I see the point; if I were a Gnash developer, then Adobe suddenly waking up, having skipped Flash 8, is a bit of a slap in the face if the user base and bug testers etc. all just give up on Gnash in favour of the "real thing". Furthermore, they might skip Flash 10-11... who knows? But by then, unless people keep using Gnash, it's likely to die, or at least not get any further. I guess it's the same as browser ID spoofing - plenty of sites are written for IE only... not because they won't run on Mozilla, just because they deliberately use browser detection to allow Microsoft users only. Quite why they do this I don't know; laziness is probably a large factor, but if you battle your way into some site that locks out Linux by spoofing your browser ID, then you are just adding to their traffic and in a way reinforcing the point that they don't need to cater for non-Windows users.
  14. What I'm saying is you can edit the kdmrc, but if you do, you get into a habit of using it instead of figuring out the correct way, and 99% of stuff can be done with a GUI, you just need to find the right tool. (It might take a lot longer in a GUI, or it may be less optimal because the GUI might not give all the options.) However, if on a one-off you need to run as root, you can just start the X session AS root.... by stopping the login manager (I forgot: when you finish, restart it with /etc/init.d/dm start - rough sequence sketched below).
Also, make sure you do let people know.... it's honestly very simple to wreck something just by presuming the person is running as a user... obviously in a perfect world advice would start with "if you are running as root then ...", or "only do this as a user, NOT root", but it's not a perfect world and we mostly assume the convention... I was helping a friend on IRC last night and he was trying to start gtk-gnutella, BUT he had opened a terminal and was doing it as root, so when I tried to tell him to do x, y it didn't actually work, and had I not caught on (easier live on IRC) I could well have wrecked something.
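A rough sketch of the sequence I mean (assuming Mandriva-style runlevels and the dm init script; adjust for your setup):
    # as root, on a text console (Ctrl+Alt+F1)
    init 3                  # drop out of the graphical runlevel
    /etc/init.d/dm stop     # make sure the login manager is stopped
    startx                  # start an X session as whoever is logged in on this console
    # when you're finished:
    /etc/init.d/dm start    # or go back with init 5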
  15. NVIDIA and ATI can obviously do what they want... the problem is that we the consumers keep buying graphics cards without opensource drivers, and this sends the message that we are willing to accept closed source drivers. The Intel drivers should at least make a difference. The problem is that nvidia and ati are a de facto cartel... perhaps not even by their own wishes, so the choice of graphics cards today is limited to basically these two and Intel. I suppose Matrox might represent a small blip... If you go back 10 years, there were probably well over 100 graphics chipset manufacturers whose names we have all but forgotten until we look through the driver lists... what happened to Diamond, Hercules, S3 etc.? Even 5 years ago there were probably 30+ mainstream chipsets.
This is the problem: how can I (or anyone else) fix it if the drivers are closed source... even if I wait for a month or two for it to happen again (I call this regularly), can I get any decent debugging info when it's completely frozen and I can't even sysrq or ssh in? My server, without X installed, has run as long as I let it without ever crashing.... the last shutdown was actually for cleaning it... (getting the fluff out of the fans) but prior to that the uptime was from when I installed. I would be interested how many people have uptimes > 1 year using closed source drivers. Anything under a year is not even approaching STABLE (if you rebooted for a kernel change, fine, but I'm talking 24x7 uptimes of over a year, other than voluntary shutdowns).
The reason I think 3D desktops present a problem is simple.... If you don't play games, then the nv driver is perfectly acceptable; you miss out on googleearth and a few things, but really not much when all is said and done (mainly functions like twinview or TV out etc.). I can't actually think of many apps outside games that *need* 3D and the closed source driver; I'm sure there are a few, but for basic office work, internet access etc., NOT using the closed source driver is acceptable. As 3D desktops mature, they are likely going to be the default. Even right now, I guess many people are downloading them just to try the 3D desktop, and this is sending a strong message to nvidia and ati....
Here is another good perspective: http://www.securityfocus.com/news/11189 What I would add is that the graphics card manufacturers are not concerned with security or programming flaws. If it crashes Linux on average once every 2 months, I doubt they have issues... whereas a kernel developer would find this ($EXPLETIVE) unacceptable in the extreme. Most importantly, the closed source developers don't give a toss about security: http://www.heise-security.co.uk/news/79623 http://news.com.com/Exploit+code+released+..._3-6126846.html The fact is this has been a known security bug for over 2 years.... Even Microsoft fix them faster than that. IF this were an opensource driver it would have been fixed the next day, if not the same day.
I am unfortunately using the nvidia driver... NOT for googleearth, but because I need the twinhead support for my projector. Since I have it installed, I use it for googleearth :D but were it not for the projector I would not use it... My GF's comp is running the nv driver. It has NEVER CRASHED.... certainly never so that it needs rebooting:
    root@PixieBox:~# uptime
     15:03:14 up 49 days, 22:44, 2 users, load average: 0.05, 0.11, 0.06
I've no idea why she rebooted it? Actually, I guess that was when we got back from vacation and she turned it back on.
  16. Incidentally, I just made a 4.5GB file using k3b:
    ls -l k3*
    -rw-r--r-- 1 sl sl 4875624448 2006-10-19 07:07 k3b_image.iso
exactly as I described..... Here is the tail of the log:
    99.95% done, estimate finish Thu Oct 19 07:07:04 2006
    99.97% done, estimate finish Thu Oct 19 07:07:04 2006
    99.99% done, estimate finish Thu Oct 19 07:07:05 2006
    Total translation table size: 0
    Total rockridge attributes bytes: 102314
    Total directory bytes: 184320
    Path table size(bytes): 360
    Max brk space used c6000
    2380676 extents written (4649 MB)
    mkisofs command:
    -----------------------
    /usr/bin/mkisofs -gui -graft-points -volid K3b data project -volset -appid K3B THE CD KREATOR © 1998-2005 SEBASTIAN TRUEG AND THE K3B TEAM -publisher -preparer -sysid LINUX -volset-size 1 -volset-seqno 1 -sort /home/sl/.kde/tmp-Kanotix32/k3bInWwCb.tmp -rational-rock -hide-list /home/sl/.kde/tmp-Kanotix32/k3bG4JBfa.tmp -joliet -hide-joliet-list /home/sl/.kde/tmp-Kanotix32/k3bvL86dc.tmp -udf -full-iso9660-filenames -iso-level 2 -path-list /home/sl/.kde/tmp-Kanotix32/k3bdjHT7a.tmp
If you think I'm making it up, just try it! Once again: ONLY CREATE IMAGE is NOT THE SAME as ticking OFF "on the fly" when the latter is using a tmpfs filesystem.
  17. Sorry, I was under the impression you thought "unticking on the fly" was the same as creating an image. Sorry, I assumed this was an MPG file; you can call your zip files anything you want, but I just assumed this was a video file? Once again, I just saw you had unticked the "on the fly" box... I didn't realise you had chosen "create image only". I can't see where you said you actually created an image in a specific location. You might want to read up on the tmpfs filesystem: http://kb.vmware.com/KanisaPlatform/Publis...SAL_Public.html That's not specific to K3B, but the default place for images written with "on the fly" unticked is /tmp. Hence ticking OFF "on the fly" is NOT the same as creating a file in /home.
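If you want to check whether /tmp really is on tmpfs, and how much room it actually has, a quick illustration (mount points vary between distros, so adjust as needed):
    mount | grep tmpfs
    df -h /tmp /dev/shm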
  18. Sure, it's /etc/kde3/kdm/kdmrc, and change AllowRootLogin=false to AllowRootLogin=true. But be so kind as to add that you are running the desktop as root to your signature, so people who read it can choose whether to answer your questions when you bork the system. I'm not saying I won't help... but I certainly wouldn't give you priority over helping someone else who is not doing the Linux equivalent of self-harm, so I would answer your posts when I have finished helping others. It wouldn't be fair to do otherwise, IMHO. Equally, anyone helping you needs to know this, or they will assume you are running as a user and tell you to do something that will bork your system as root.
However, as I already told you, you don't need to edit anything to run X as root; just stop the login manager and start X directly... As root:
    init 3
    /etc/init.d/dm stop
then log in and run startkde. However, this is pretty much guaranteed to bork your system for your user sooner or later.... unless you really know what you are doing... since you will be creating things or overwriting things your user has no permissions for. You can also just boot directly into a root desktop... no need to use a login manager at all; just change the startup scripts to start Xorg at boot instead of dm.
  19. Yep, or just create a disk image... and then burn the DVD image?
Please read most posts above. I unticked 'on the fly', and that's exactly what that does. It creates an image, then burns it. That doesn't help with the 4GB limit, however, as you can't create an image with a file > 4GB for the same reasons you can't burn a DVD with a file > 4GB in it.
Don't try it then, but it works for me for files > 2GB, and it's entirely possible it might fail at 4GB since I don't think I tried. It's also quite possible it only happens with a single file, and that making the MPG into more than one file, each under 2GB, would work.... and playback would be no different to a commercial dual layer DVD, where it starts the second part with a tiny hiccup. I did, but being rude won't fix this for you.... still, since you know exactly where this temporary file is located, I don't need to give you any hints. ;) I'm certainly not going to actually try it if you are going to be rude, but you might consider that it's being created in tmpfs: on /dev/shm type tmpfs (rw)
  20. http://searchopensource.techtarget.com/col...1225136,00.html I'll say again what I have said before.... The biggest barrier to Linux drivers is users relying on ndiswrapper, linmodem, linprinter etc. drivers. It's not individuals as such; sure, I used a linmodem driver for my laptop... so I'm as guilty as anyone, but this is really what allows manufacturers to skip the driver issue for Linux. Now it seems the fascination with a 3D desktop is poised to kill off opensource drivers as well as making Linux unstable. I don't know about everyone else, but prior to using closed source drivers I don't remember the desktop locking up very often at all... and when it did, you could always log in remotely... and kill it. Now I get frequent freezes... usually leaving googleearth open while an openGL screensaver starts or something, but these lock the whole kernel... which is of course impossible if the programmers have followed the rules....
  21. Add to the ServerLayout section:
    InputDevice "Mouse1" "CorePointer"
edits: changed capitalisation (mouse1 -> Mouse1)
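In other words, the relevant part of the ServerLayout ends up something like this (the other identifiers are just examples; keep whatever yours are called):
    Section "ServerLayout"
        Identifier  "Default Layout"
        Screen      "Screen0"
        InputDevice "Keyboard0" "CoreKeyboard"
        InputDevice "Mouse1"    "CorePointer"
    EndSection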
  22. You could be right.... Sometimes it's just not worth the effort, and you obviously have some experience with other distros... and Mandriva Users is quite happy to help you, whichever one works for you... I'm with scooma though: if you do want to sort it out... try some alternative kernels... If you find that most are OK and you're missing out, then you can always see what changed in the .config file?
  23. Probably not... Sometimes if you run games, when it comes back afterwards the resolution for some reason stays? Usually you can just use CTRL+ALT and numpad +/- to cycle through the displays... Overall, the virtual resolution displays are a pain in the £$%, IMHO... and I never figured out why Mandriva adds them... I prefer to just leave straight displays, like:
    SubSection "Display"
        Depth 24
        Modes "1440x900" "1152x864" "1024x768" "800x600" "640x480"
    EndSubSection
It's not like the virtual displays are actually usable? Or is it just me?
  24. Yep, or just create a disk image... and then burn the DVD image? Before anyone says "but I don't have the space": if you don't, then it will fail anyway.
  25. I had this exact problem when I was using "mona" (beta 2 or 3, I think). Believe it or not, it fixed itself when, after having many problems I thought were related to the 3D desktop, I uninstalled the 3d-desktop nvidia packages, mesa-demos and compiz packages. When I restarted X and logged in, VOILA! instead of the cups error I got "Installing Packages", and then it never came back, even when I later reinstalled the 3d-desktop stuff. My best guess is there is a dependency conflict that requires the cups packages to be installed first, but the install program (in my opinion buggy in almost every release of Mandriva since Mandrake 7.0) doesn't do it in the right order. Another good reason not to install so many packages at initial install. Too bad there aren't better error messages in this regard.
This really does bear out my personal experience. I have had the least problems by installing the absolute minimum in the installation itself and then installing what I need. The same goes for the updates... it has rarely worked while installing, and basically, as you say, once the package install gets screwed somehow it seems to make a real big mess... I know it's not exactly newbie friendly, and if it's someone's first install it's confusing, but I really think if people choose the absolute minimum they will have far fewer probs. They can then update once it's all working (and have fewer downloads) and add the packages they want....
I too sometimes find it expedient to log in as root, usually when I am first installing and setting up a system. I get tired of continually entering the root password (the "remember" box doesn't always work as advertised, and logging in and out or restarting X defeats it anyway), and frequently kdesu or launching from the konsole results in errors (years of documentation on this issue haven't resolved anything). You may be able to allow root logins by editing kdmrc - I'll let you know if I figure it out.
It's a single line in kdmrc, just "allow root logins=true", but I still think it is likely to cause more problems than not. Willie answers that pretty much so... The real basis for this is that Linux isn't designed to be running a GUI as root. In many ways it's more prone to messing stuff up than running Windows as administrator, because the safeguards are just not in place. There are also an increasing number of applications which will not run as root, because the programmers deliberately add this safety check into the code. If you stop and think "why would they do this?", it's because the security risks are too high and they would be forced to write loads of extra code just securing it (and even then it wouldn't be so secure) - and in this context secure doesn't just mean being hacked, it includes user error etc. Now, many programs do run as root... so perhaps the argument is, if they can then they all can? I tend to think it's the safety-conscious who do this, and while some progs are designed to be run as root, many just assume the user is not going to attempt running the prog as root.
Running any app as root when logged in as a normal user is pretty simple... and adding it to the menu is also easy. For example, if you want to edit config files, just add a second entry for your favourite editor... running kdesu... The only other real problem running progs as root (apart from those that will not run as root) is allowing root to access the desktop... this is done using xhost...
and again can be added to your menu, like:
    xhost + localhost
If you add export DISPLAY=:0.0 to root's .bashrc, then you can just type the name of the program you need (if you don't know the name, you can just run it as a user and check the process manager....). As willie says, you can also use utilities, but even without this, if you right-click on a file in konqueror and choose "open with", you can just type kdesu kate, for instance, to get an editor running as root, opening that file as root.
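Putting those bits together, a rough example of the sort of thing I mean (kate and /etc/fstab are just examples; use whatever editor and file you like):
    xhost +localhost          # allow local connections (including root's) to your user's X display
    kdesu kate /etc/fstab     # run the editor as root on a system config file
and in root's ~/.bashrc:
    export DISPLAY=:0.0       # so programs started as root know which display to use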