Downloading an entire website


satelliteuser083

Taken from the wget man page:


Wget can follow links in HTML and XHTML pages and create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as "recursive downloading." While doing that, Wget respects the Robot Exclusion Standard (/robots.txt). Wget can be instructed to convert the links in downloaded HTML files to the local files for offline viewing.
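
A minimal recursive download along those lines might look something like this (example.com is a placeholder; all the flags are standard wget options):

wget --recursive --level=5 --convert-links --page-requisites --no-parent https://example.com/

Here --convert-links rewrites the links for offline viewing, --page-requisites grabs the images and stylesheets each page needs, and --no-parent keeps the crawl from wandering above the starting directory.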

I also suggest installing and using httrack, with or without the GUI (khttrack), from the repos. A minimal httrack invocation might look like the sketch below.
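
The URL, output directory, and filter here are placeholders:

httrack "https://example.com/" -O ./example-mirror "+*.example.com/*"

The -O flag sets the output directory, and the trailing filter keeps the crawl on the original domain.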

 

wget with the mirror option will work. However, I would suggest using the --limit-rate option with it, so you don't overload the server's bandwidth (that's not very friendly to do). It would be

wget -m --limit-rate='amount' url-of-website

The 'amount' is a rate in bytes per second and can be suffixed with k for kilobytes or m for megabytes.
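For instance, to mirror a site while capping the download at 200 kilobytes per second (example.com is a placeholder):

wget -m --limit-rate=200k https://example.com/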

