After last month's article about the handy expect utility was published, it struck me that there are many such utilities of which people are unaware. And there's a good reason for this. Those of you who were brought up with operating systems such as the various Microsoft Windows variants or our beloved OS/400 are accustomed to receiving essentially bare-bones systems. Once the operating system is loaded, you generally have to add software (either purchased or homegrown) to get "cool tools." IBM used to provide a tools package with OS/400 called TAATOOLS, but that package must now be purchased separately from Jim Sloan. Windows users can gauge for themselves their reliance on after-market software.
When most people think of Linux, they consider it to be just an operating system, much in the same way as they view Windows or OS/400. But that couldn't be further from the truth. In reality, Linux is more properly referred to as GNU/Linux, in deference to the huge contribution of the Free Software Foundation's GNU software. The part that Linus Torvalds has contributed is the kernel, which he calls Linux. Virtually all of the core software that makes up what we refer to as Linux came from the GNU project. And this software is included with all of the various Linux distributions. For the next couple of months, I'm going to delve into some of the more interesting utilities that are readily available to Linux users. Users of that other operating system should note that many of the GNU utilities (including the topic of this month's column) have been ported to Windows.
The Problem
I have a client who wanted to move his Web site from its original host, which suffered from serious bandwidth constraints, to one of my servers. As part of the transition, he decided that he wanted to register a domain for his site, instead of using the "http://www.hisISP.net/hisDomain"-style URL he had been using. After some discussion, we decided that the easiest way to do this was for me to create a mirror of his existing site on my server. That way, his new site would be hot as soon as his domain propagated through the various DNS servers, yet he could keep the old site (with the old URL) active for as long as he thought necessary.
The plan would have worked fine, except that, through his use of a brain-damaged HTML editor, all of the links on his umpteen HTML pages had his original site's URL embedded in them. Instead of using relative links, they were all hard-coded to point at his original site. It appeared as though his HTML editor had helpfully placed the full URL in each link even though he had specified a relative link. A quick examination of the editor's options didn't reveal a means to suppress that annoying behavior; and, frankly, I didn't want to spend the time trying to figure it out.
One solution to the problem might have been to have my client upload all of the current files to my system and then use sed (another *nix utility, short for "stream editor") to change the errant links in his multitudes of static Web pages. Although straightforward, this plan had a downside: he would have to remember to upload his changes to both sites, and I would have to set up a means to catch those changes.
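With GNU sed, the link cleanup itself would have been a one-liner along these lines (the URL is only a placeholder for his original site, and the in-place -i switch assumes a reasonably recent GNU sed):

# rewrite absolute links to root-relative ones in every HTML file in the current directory
sed -i 's|http://www.hisISP.net/hisDomain/|/|g' *.html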
Another option would have been to set up an FTP script (see last month's article) to occasionally transfer the files from his original site to my host and then use sed as described earlier. Done frequently enough, both sites could be kept synchronized rather well, so this method was a possibility. Fortunately, another GNU tool exists that provides an easier solution than either of those just mentioned.
The Solution
One of the fine utilities provided by the FSF is called Wget. The man page for Wget states:
"GNU Wget is a free utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies.
Wget is non-interactive, meaning that it can work in the background, while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most of the Web browsers require constant user's presence, which can be a great hindrance when transferring a lot of data.
Wget can follow links in HTML pages and create local versions of remote Web sites, fully recreating the directory structure of the original site. This is sometimes referred to as "recursive downloading." While doing that, Wget respects the Robot Exclusion Standard (/robots.txt). Wget can be instructed to convert the links in downloaded HTML files to the local files for offline viewing."
As you can see from the description, Wget appears to be a tailor-made solution to this problem. Mirroring the site was as simple as issuing a command along these lines (the URL below is a placeholder for his original site's address):
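# mirror the old site and rewrite its links for use on the new host
wget --mirror --convert-links http://www.hisISP.net/hisDomain/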
The --mirror switch directs Wget to set its options to those appropriate for making an exact duplicate of the original site. The --convert-links option converts all of his hard-coded links into relative ones, which the Web server (Apache, in this case) can easily handle. By setting up a cron job (similar to the OS/400 WRKJOBSCDE command), I was able to have his original site mirrored to mine on a regular basis. Problem solved!
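As a rough sketch (the schedule, directory, and URL here are purely illustrative), the crontab entry might look something like this:

# refresh the mirror every morning at 3:30, working from the site's document root
30 3 * * * cd /var/www/hisDomain && wget --mirror --convert-links http://www.hisISP.net/hisDomain/ > /dev/null 2>&1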
It Dices, It Slices, It Juliennes
I've used Wget to obtain local copies of some of my favorite sites so that I can do my reading offline. If that were all it was capable of, Wget would be an outstanding tool. But sharp-eyed readers will have caught the reference to the FTP protocol in the man page excerpt, which brings up another use for the utility: downloading files from various FTP servers.
The man page for Wget continues:
"Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports regetting, it will instruct the server to continue the download from where it left off."
I am one who has chosen to live in the technological backwaters of the United States and, until recently, didn't have broadband Internet access. As a result, downloading software was an exercise that would try the patience of a saint. I could make the experience less painful by scripting my system to dial my ISP in the wee hours of the morning and then launching Wget to retrieve the desired software. Assuming that I didn't request more than the available bandwidth would allow for one evening, I could expect my desired software to be waiting for me in my download directory. Should a download need more than one night's worth of bandwidth, I would relaunch the same command, this time with the "-c" (continue) switch, and Wget would dutifully skip everything that it had already obtained, picking up where it left off.
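In practice, that looks something like the following (the server and file names are strictly examples):

# night one: start fetching a large archive
wget ftp://ftp.example.com/pub/some-large-package.tar.gz
# night two: rerun with -c to resume the partial download instead of starting over
wget -c ftp://ftp.example.com/pub/some-large-package.tar.gz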
Where to Wget It
If you are a Linux user, you'll most likely find that you already have Wget, either installed or available on your distribution's CDs. If not, or if you would like a native Windows port of this utility, you'll find everything you need on the project's home page. In fact, I'd suggest that you browse around the GNU site to get a flavor for the richness of the utilities available for your use.
I hope you find Wget to be a useful addition to your toolkit. Next month, I'll discuss a way for Windows users to take advantage of the *nix environment, much as OS/400 users can through QSH. As always, feel free to send me email if you have any suggestions for topics that you'd like me to cover.
Barry L. Kline is a consultant and has been developing software on various DEC and IBM midrange platforms for over 20 years. Barry discovered Linux back in the days when it was necessary to download diskette images and source code from the Internet. Since then, he has installed Linux on hundreds of machines, where it functions as both server and workstation in iSeries and Windows networks. He recently co-authored the book Understanding Linux Web Hosting with Don Denoncourt. Barry can be reached at