
Reading weblogs more efficiently

I am tired of going around the web to read my favorite sites every day, so I’ve come up with a more efficient way to keep up.

I’ve put together a script which downloads a couple of RSS feeds and puts them together on a simple HTML page. Have a look if you like, but it’s still pretty rough layout-wise. There are also some problems with certain types of feeds.
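If you’re curious, here’s a minimal sketch of the idea. The feed URLs are placeholders, and I’m assuming LWP::Simple for the downloading; the parsing uses the XML::RSS module I mention below, and the real script is a bit rougher:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::Simple qw(get);
    use XML::RSS;

    # Feeds to aggregate -- placeholder URLs for illustration.
    my @feeds = (
        'http://example.com/index.rdf',
        'http://example.org/rss.xml',
    );

    print "<html><head><title>My feeds</title></head><body>\n";

    for my $url (@feeds) {
        my $content = get($url);
        unless (defined $content) {
            warn "couldn't fetch $url\n";
            next;
        }

        # XML::RSS dies on feeds it can't parse, hence the eval.
        my $rss = XML::RSS->new;
        eval { $rss->parse($content) };
        if ($@) {
            warn "couldn't parse $url: $@";
            next;
        }

        # One heading per feed, then a plain list of item links.
        print "<h2>", $rss->{channel}{title}, "</h2>\n<ul>\n";
        for my $item (@{ $rss->{items} }) {
            print qq{<li><a href="$item->{link}">$item->{title}</a></li>\n};
        }
        print "</ul>\n";
    }

    print "</body></html>\n";

Run it from cron every hour or so, redirect the output to a file under the web server's document root, and the aggregation all happens server-side.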

I could use an RSS aggregator program, and I did for a while, but it just didn’t work out. The RSS programs I tried (such as SharpReader) ran very slowly on my PC and had what I thought were pretty awkward user interfaces.

I find the HTML version a lot more useful. I can read it from wherever I am. It never crashes. Stuff downloads very rapidly. I can scroll down through it quickly, and I can tell which items I’ve already read from the link colour.

Architecturally, I like this setup too. I think that processing and aggregating should happen on the server end, not on the client. If I were on the road more, I would rewrite the script to run on my laptop as a server.

I’m using Perl and XML::RSS at the moment. I plan to switch to Ben Trott’s XML::Feed, which should cope better with a wider variety of feed types.
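The appeal is that XML::Feed hides the format differences: one parse call handles the various RSS versions as well as Atom, and every entry comes back with the same accessors. A sketch (the URL is made up, but these are XML::Feed’s documented calls):

    use XML::Feed;
    use URI;

    # XML::Feed detects the feed format (RSS or Atom) by itself.
    my $feed = XML::Feed->parse(URI->new('http://example.com/atom.xml'))
        or die XML::Feed->errstr;

    print $feed->title, "\n";
    for my $entry ($feed->entries) {
        print $entry->title, " -> ", $entry->link, "\n";
    }

So the loop over items in my script stays the same no matter what kind of feed is on the other end.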

Everybody should learn a scripting language like Perl or Python. But that’s a subject for another entry …
