Everything posted by ianw1974

  1. I expect it won't do it automatically, because it seems you do not have a ppp0 idle timeout. Your test of pulling the telephone cable shows that this is the case. So you need to set some kind of idle timeout for ppp0, and then perhaps that will fix your problem.
  2. As I said, when you just remove the telephone line, CentOS still thinks the ppp0 connection is active even though it's not responding, and that is your problem. Ideally ppp0 should disconnect when there is no longer an internet connection. I don't have such a setup to be able to tell you exactly what to do, but you need some kind of timeout for ppp0 when it's not in use, or when no traffic is able to go across the link. This would then initiate an ifdown, and you would have the default route via your Windows machine. Maybe this google search for an idle timeout will give you the info you need: https://www.google.co.uk/?gws_rd=ssl#q=linux+ppp0+idle+timeout
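As a minimal sketch, assuming a standard pppd setup, the timeout would live in the pppd options file; the path and values below are illustrative, not taken from the poster's system:

```
# /etc/ppp/options (fragment, values illustrative)
idle 600                # drop the link after 600 s with no traffic
lcp-echo-interval 30    # probe the peer every 30 s
lcp-echo-failure 4      # declare the link dead after 4 missed replies
```

The two lcp-echo options make pppd notice a dead line (e.g. the cable pulled out) and tear ppp0 down on its own, which is what the unplug test above needs.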
  3. Just taking the phone line out, your system still thinks that the ppp0 connection is active, and so you have two default routes, which is probably why it works when you do ifdown ppp0. You said eth1 is your ISP, so why are you using ppp0, unless of course that is also connected with eth1 for user authentication to bring the link up (PPPoE)? I expect we need more information on your computer setup before we can help you further. But I expect it is related to the scenario above: you have two default routes, and everything attempts to go via the one created by the ppp0 link until you drop the link. In fact you don't need two default routes as such; you don't need one for your internal network if you are not accessing the internet from it. Perhaps explain more about what you are trying to achieve.
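To illustrate the two-default-routes situation, here is a sketch with made-up addresses; the only command that would change anything is the commented one at the end:

```shell
# Hypothetical `ip route show` output while ppp0 is (supposedly) still up:
routes='default via 10.64.64.64 dev ppp0
default via 192.168.0.1 dev eth0
192.168.0.0/24 dev eth0 scope link'

# Two default routes, and traffic keeps preferring the stale ppp0 one:
echo "$routes" | grep -c '^default'    # prints 2

# Removing the dead link's route (as root) leaves eth0's default in charge:
# ip route del default dev ppp0        # or simply: ifdown ppp0
```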
  4. I don't use shorewall, so can't tell you specifically. I am using iptables however, and you should be blocking all inbound traffic unless you need access on those ports. You don't list your inbound requirements. You have squid and postfix, so do these need to be accessible from the internet? If yes, then secure squid so that nobody can use it unless you authorise them to, and the same with postfix: http://www.mailradar.com/openrelay/ that page will help you test postfix and tell you what you need to fix. As for the rest, you need to generate iptables or shorewall rules to block what you don't want accessed. If you want to block on that particular IP: iptables -A INPUT -d <IP> -j DROP where <IP> is the public IP address assigned to this server. Note that the one above that you mentioned is Fremont, California, and your IP posting here is South Africa, so change that destination IP appropriately. That will suffice, and allow you to continue using your server locally. However, it won't then be accessible from the internet for any of your resources. If you need access, generate appropriate iptables rules prior to this one to grant access for particular source IPs, or secure squid so that only authorised users can use it. Based on the text from the CBL, someone is using your proxy to hide their Conficker traffic, and so you were correctly blocked.
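A sketch of the rule set described above; 203.0.113.10 is a documentation placeholder, not anyone's real address, and the script only prints the commands so they can be reviewed before being run by root:

```shell
# Placeholder public IP (RFC 5737 documentation range) -- substitute your own.
PUBLIC_IP=203.0.113.10

# Emit the rules rather than applying them; review, then feed to a root shell.
cat <<EOF
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -d $PUBLIC_IP -j DROP
EOF
```

Any allow rules (SSH above, or squid for authorised source IPs) have to come before the final DROP, since iptables matches rules top-down.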
  5. is thinking about the weekend :)

  6. When I used Mint, the kernel updated automatically, same as on Ubuntu, as the kernel packages are named linux-image-x.xx.x-xx-generic (where x.xx.x-xx is the kernel version). Most distros update the kernel automatically when one becomes available. The same happened for me with RHEL/CentOS/Mandriva. With Gentoo the sources install via emerge, but then you have to configure and compile the kernel manually.
  7. Ran it in VirtualBox. The only thing is that when doing updates to an iPhone, etc, you have to keep monitoring whether the USB device is connected or not, and click every now and then when required, to get the update of the phone to continue. Other than that, no real major problems.
  8. Lightdm is good, but does require a bit of configuration. I found this out not so long ago on my Gentoo installation, when I was trying to sort out something because of issues trying to emerge gdm, and didn't want Gnome3. Here's a good link with some info: https://wiki.mageia.org/en/Display_Managers note it says there that lightdm is usable in Mageia 3 and higher. Maybe it will help. I've tried lightdm and lxdm because I was attempting to build a system with LXDE but also have at least a pretty display manager. If I remember to get on my Gentoo system later tonight, I'll take a look at my config and see what special stuff I configured there. It was a while ago, so I can't remember exactly what I did with it, or even what DM I'm using now. Incidentally, lightdm is in use with the latest Ubuntu releases, and works straight out of the box - as does pretty much everything with Ubuntu. Just in case you want an alternative to check out.
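For reference, a basic lightdm configuration usually boils down to a couple of lines like these; the file path, greeter name and session name vary by distro, so treat them as assumptions:

```
# /etc/lightdm/lightdm.conf (fragment, names illustrative)
[Seat:*]
greeter-session=lightdm-gtk-greeter
user-session=LXDE
```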
  9. just deleted 2500 posts from a spammer. don't come back!

  10. You tried from repos which probably no longer exist, because 2011 is extremely old now. It probably did not find the files on there, and so of course failed to download. You can try https://openmandriva.org/, or try downloading the Mandriva 2011 DVD ISO image from here: http://mandriva.mirror.dkm.cz/pub/mandriva/official/iso/2011/ Maybe it will be possible to fix the failed upgrade by booting from the DVD once you've burnt the ISO image.
  11. It is possible that the motherboard is not supported by Mandriva 2011, or in particular its IDE/ATA side - this can be quite common if the motherboard was not yet available when Mandriva 2011 was released. The fact that it reaches grub/lilo does not necessarily mean that everything is OK. It says it is waiting for /dev/sda5 and /dev/sda7. The best way would be to start the system from the Mandriva 2011 CD/DVD in recovery/rescue mode, then look at your disk partitions to see how the disk is recognised. Maybe the partitions are not /dev/sda5 and /dev/sda7 now, and that is why the problem exists. When you boot to rescue mode, from the console run: fdisk -l to list the partitions, and then you can see how Mandriva 2011 recognises the disk.
  12. Gnome and KDE will be memory hungry. Alternatives are LXDE or XFCE for a lighter desktop experience. That would help speed things up a bit more.
  13. There could be a couple of things here. You say it's an old machine? So it's possible it only has a CD-ROM drive rather than a DVD-ROM drive. Could it be that the later versions that you have burned are DVDs and not CDs? Alternatively, it could always be a bad burn or a corrupted ISO download, and hence the problem with not being able to read the disks when you install. EDIT: http://h10025.www1.hp.com/ewfrf/wc/document?docname=c00282472&tmp_task=prodinfoCategory&cc=us&dlc=en&lc=en&product=446896 from here it looks like it just has a CD-RW drive, so if you're attempting to start from DVDs, that will not work and might be the source of your problem. If you're using CDs for the later versions, then I guess it's a bad burn or corrupted ISO download. Verify the ISO with its md5sum, and if it's incorrect, download and burn again. The other alternative, if there are no CD ISO images for the distro that you want to install (which can be the case), is to just replace the CD-RW with a DVD-RW drive, and then you're done. Best bet will be to take out the one you have in there, verify the type of connection (I expect IDE), and make sure you get a DVD drive with IDE. A lot of them now are SATA, which, because this is an old machine, isn't going to work.
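Checking the md5sum is quick; the filenames here are stand-ins for the real ISO and the checksum file published by the distro:

```shell
# Stand-ins for a downloaded ISO and its published checksum file:
printf 'pretend ISO contents\n' > distro.iso
md5sum distro.iso > MD5SUMS

# Verify: prints "distro.iso: OK" for a good download, FAILED otherwise.
md5sum -c MD5SUMS
```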
  14. Maybe not trivial, but on my particular system it didn't work with the new installer. The old installer however did. Had I had the time, I would have probably posted about it to get it fixed and get my system working. However, I didn't have time, I needed my system working as I didn't have another, so dropped it for something else that just installed and worked immediately.
  15. Yep, you'll be fine :)
  16. If you aren't natting on the router (port-forwarding) to gain access to it externally, then no, you won't need to. Then you are just using it for internal use on your private network.
  17. On a private network, yes, it's enough, but if you will access it from the internet then you'll need to secure SSH with public keys instead of passwords, and also install something like denyhosts to protect SSH from being attacked by people trying to log in to your server. You can even change SSH to run on a different port than port 22, to minimise the risk further. And yes, the chown is fine, because you upload everything using the web browser, which connects to the web server, and the web server needs access to that folder/directory to save any files you upload. I expect all of that is in the owncloud docs.
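The SSH hardening mentioned above comes down to a few sshd_config lines; the port number is arbitrary, pick your own:

```
# /etc/ssh/sshd_config (fragment) -- restart sshd after editing
Port 2222                    # anything other than 22 cuts the scan noise
PasswordAuthentication no    # public keys only
PermitRootLogin no
```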
  18. You probably already had self-signed certificates by default, if you connected to owncloud with https from the beginning. Firefox will shout about self-signed certificates. If you want to generate free ones which are verified Class 1 certs, you can get them from http://startssl.com
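If you'd rather regenerate a self-signed certificate yourself, something along these lines works; the hostname and validity period are just examples:

```shell
# Self-signed certificate + key, valid one year, key left unencrypted:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout owncloud.key -out owncloud.crt \
  -subj "/CN=cloud.example.com"

# Inspect the subject to confirm what was generated:
openssl x509 -in owncloud.crt -noout -subject
```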
  19. The bit paul wrote is for helping with access to the directory; it doesn't fix your webdav issue. That's something you need to get apache configured for. Maybe you need to install some extra packages? Try installing apache-mod_dav; I found this when googling "mageia apache webdav". Restart apache afterwards, and try again.
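If installing the package doesn't load the module by itself, the relevant Apache directives look like this; the module paths depend on the distro's packaging, so these lines are illustrative:

```
# httpd.conf / conf.d fragment (paths illustrative)
LoadModule dav_module modules/mod_dav.so
LoadModule dav_fs_module modules/mod_dav_fs.so
```

Restart apache after enabling them.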
  20. Changing the owner of data is correct. You probably need to enable the webdav module in apache - that's what your error is throwing. I found a post saying that curl is optional, so another way is to edit php.ini, find the php_curl module in there, and comment it out. Maybe curl is just throwing an error because something is not quite correct. Two things to check at least. Normally I download and extract manually from owncloud. This means you get something without any "distro" tampering - which is something you might be experiencing because you installed it via MCC. Also, that way you don't necessarily get stuck on an older version. Too many times I've had problems with distros and installing from the repo, for example joomla, redmine, owncloud (on Debian when I experienced problems). Sometimes it's just easier and better to do it a bit more "manually" and download and extract yourself.
  21. I stopped using Arch when they changed the installation process, because it was a piece of crap and didn't work after the reboot, and I consider myself quite good with Linux now, and even I couldn't figure out how to get it working. Something as simple as an installation process should work. But yes, you are right that when people reply with RTFM, it's not exactly helpful, to say the least. Gentoo is probably more complex to install, but the documents work, and the help on the forum is also excellent, even if you are a newbie. Lately I'm on Ubuntu, mainly because I just want it to work. I still use more complex things for servers, but that depends on what I'm attempting to achieve, and then I use an appropriate distro.
  22. Now there is OpenMandriva. Or Mageia, if you're wanting to stick with something like Mandrake/Mandriva used to be.