# spiderss

spiderss logo

spiderss is a plaintext RSS reader / crawler. Articles are stored as Markdown files on the filesystem.

## Why?

Because plaintext is God.

## How can I read the articles?

Use your favourite Markdown viewer, or just the console. spiderss integrates nicely with the ranger file manager for browsing categories.
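
For example, an article can be read straight from the terminal (the path below is purely illustrative, and any Markdown renderer works just as well):

```sh
# Plain reading with a pager (example path under a hypothetical base_directory ~/rss)
less ~/rss/linux/archlinux/new/some-article.md

# Or render the Markdown, e.g. with glow
glow ~/rss/linux/archlinux/new/some-article.md
```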

## How does it work?

Edit the `config.toml` file to your liking and run the script. It creates the following folder structure:

```
base_directory
| - category
    | - feedname
        | - new
        | - read
    | - another feedname
        | - new
        | - read
| - another category
    | - a third feedname
        | - new
        | - read
| - loved
```

Every feed gets a `new` and a `read` subfolder. Article files are stored in the `new` folder; move them to the `read` folder once you're done reading them. You can do this easily, e.g. by setting a keybinding in ranger, as sketched below.
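
A minimal sketch of such a ranger keybinding (the key `M` is arbitrary; `%s` is ranger's macro for the selected files, and the relative `../read/` path assumes you are currently browsing a feed's `new` folder):

```
# ~/.config/ranger/rc.conf
# Move the selected article(s) from new/ to the sibling read/ folder
map M shell mv %s ../read/
```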

A special folder called `loved` is created in the `base_directory` on startup. It is intended for articles you want to keep: articles inside it are never deleted, even once they are older than `max_age`.
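
Putting it all together, a config might look roughly like the sketch below. Only `base_directory` and `max_age` are mentioned above; every other key name and the feed layout are assumptions, so consult the shipped `config.toml` for the actual format:

```toml
# Hypothetical sketch -- key names other than base_directory and max_age are assumptions
base_directory = "~/rss"
max_age = 30   # days before unloved articles are removed

[[feeds]]
category = "linux"
name = "archlinux"
url = "https://www.archlinux.org/feeds/news/"
```

The crawler can then be run by hand (presumably something like `python spiderss.py`) or scheduled periodically, e.g. via cron or a systemd timer.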