Fetch 'em the Geeky Way

Now suppose you find a web page that is really just a pageful of links to, say, some PDFs.
A sharing soul decided to offer some documents off her web site, so these are nicely embedded in the (X)HTML code. They could be interesting, even helpful, but you have that gut reaction that clicking through each and every one of them is not an option.

If you are a Linux guy, you are lucky [1]. Fire up your preferred terminal application and type:

lynx -source <website><page> | grep pdf | cut -d '"' -f<fieldnum> | awk '{print "<website>"$0}' | xargs wget

That makes four pipes and six tools, which is far from awesome, I know. I'd rather avoid Lynx and use wget from the start, but that means a little more time browsing those man pages (doh) to find out how to redirect wget's output to stdout. Well, next time.
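Just so the pipeline is not total voodoo, here it is picked apart, with a made-up URL and a made-up field number standing in for the real ones (your page's markup decides those):

# Hypothetical page: http://example.com/docs.html, hrefs landing in field 2
# lynx -source    dumps the raw (X)HTML of the page instead of rendering it
# grep pdf        keeps only the lines that mention a PDF
# cut -d '"' -f2  splits each matching line on double quotes and picks out the href
# awk '{...}'     prepends the site root so relative links become full URLs
# xargs wget      hands the resulting URL list to wget, one download per link
lynx -source http://example.com/docs.html | grep pdf | cut -d '"' -f2 | awk '{print "http://example.com/"$0}' | xargs wget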

(Later)
Ok, I couldn't resist, so I simply grepped my way through man wget. This is a wget-only version of the above, no Lynx:

wget -O - <website><page> | grep pdf | cut -d '"' -f<fieldnum> | awk '{print "<website>"$0}' | xargs wget

Watch out: that is dash, capital O, space, dash, and it redirects the downloaded page to standard output.
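One small trick worth mentioning: if you drop the trailing xargs wget, the pipeline just prints the URL list instead of downloading anything, which is a cheap way to check that you picked the right field number before letting wget loose. Again, the URL is a made-up stand-in:

wget -O - http://example.com/docs.html | grep pdf | cut -d '"' -f2 | awk '{print "http://example.com/"$0}'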

  • [1] In all fairness, if you are a Linux guy, chances are you already know how to do this in a dozen exotic, exciting geek variants that really show off your bash skills; but since I find myself going back to the man pages time after time to (re)find the way to do this, bear with me.