This repository was archived by the owner on Feb 9, 2026. It is now read-only.

A few comments #3

@gurchik

Description


Hello, I have a few suggestions. Rather than create an Issue for each one I've decided to lump them all together in this one.

I'm writing these suggestions because I think the bot is cool and I'd like to help in any way I can. I can even implement some of these suggestions in code and submit a pull request, but I wanted to run them by you first so we can discuss them, and to give you a chance to write the code yourself if you'd prefer.

  1. It would be great if this bot could be generalized, since I can see it being useful on other subreddits. For example, the bot could be rewritten to load a config file that tells it which subreddits to crawl, which users or flair CSS classes to look for, and other settings. People could then adapt the bot for other subreddits by editing the configuration files alone, without needing to understand or change the code.
  2. The current design of the bot puts unnecessary strain on the Reddit servers. Don't get me wrong, this bot isn't going to crash the website, but if you've seen as many "You broke Reddit!" error messages as I have over the years, you'd agree that we should minimize any unnecessary load on the servers and be responsible developers on this free website. I have a lot of suggestions; a few of them are:
    1. The bot currently saves all of its stateful data to the "archive" subreddit. The bot needs this subreddit to function, and so it loads every post and comment in that subreddit each time it's run. This is pretty inefficient. In effect you're using the archive subreddit as a database for your application, which is a gray area of the terms of service. Some subreddits have been banned for doing this (like /r/A858DE45F56D9BC9, although to be fair you aren't blatantly and deliberately using Reddit's servers for personal gain like they were). In my opinion a better way to run this bot would be to store the database in a local file and get rid of the archive subreddit. The archive subreddit is a good idea if we were worried about jmods editing or deleting their posts, but I believe that rarely happens, and even if it were common, the bot currently only archives a few words of a jmod's comment, so it isn't an effective archive anyway. Rest assured the database file shouldn't take up much space; some quick back-of-the-envelope math suggests it would grow by only a few dozen megabytes per year (you could run it for a century and only use a gigabyte or two!). And if you're worried about losing the file or having to carry it along when moving the bot to another computer, we could add a safe mode that searches the bot's previous comments to rebuild the database. Better to do that once in a blue moon than every 5 minutes like the bot does now.
    2. Every five minutes the bot fetches the hottest 100 posts in the subreddit and searches their comments for jmod comments. I have a few ideas for ignoring posts in certain situations to save the servers some effort.
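To make point 1 concrete, here's a minimal sketch of what a config file and loader might look like. The file format, key names, and example values are all hypothetical; none of this exists in the bot yet.

```python
import json

# Hypothetical config format (JSON for illustration); every key and value
# here is an assumption about what the bot might need, not current behavior.
EXAMPLE_CONFIG = """
{
    "subreddit": "example_subreddit",
    "tracked_flair_css_classes": ["jagexmod"],
    "tracked_users": ["ExampleMod"],
    "poll_interval_minutes": 5,
    "posts_to_scan": 100
}
"""

def load_config(text):
    """Parse the bot's configuration and check for required keys."""
    config = json.loads(text)
    for key in ("subreddit", "poll_interval_minutes"):
        if key not in config:
            raise KeyError(f"missing required config key: {key}")
    return config

config = load_config(EXAMPLE_CONFIG)
print(config["subreddit"])  # -> example_subreddit
```

With something like this in place, pointing the bot at a different subreddit is a one-line config change instead of a code change.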
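For point 2.1, the local database could be as simple as a SQLite file. Here's a rough sketch; the table layout and function names are my own invention, not anything currently in the bot:

```python
import sqlite3

def open_db(path="bot_state.db"):
    """Open (or create) the local state store replacing the archive subreddit."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS archived_comments ("
        "  comment_id TEXT PRIMARY KEY,"
        "  author TEXT,"
        "  body TEXT,"
        "  created_utc REAL)"
    )
    return conn

def already_archived(conn, comment_id):
    """Check whether the bot has already handled this comment."""
    row = conn.execute(
        "SELECT 1 FROM archived_comments WHERE comment_id = ?",
        (comment_id,),
    ).fetchone()
    return row is not None

def archive_comment(conn, comment_id, author, body, created_utc):
    """Record a jmod comment locally; INSERT OR IGNORE makes this idempotent."""
    conn.execute(
        "INSERT OR IGNORE INTO archived_comments VALUES (?, ?, ?, ?)",
        (comment_id, author, body, created_utc),
    )
    conn.commit()

# Demo with an in-memory database (a real run would use a file on disk).
conn = open_db(":memory:")
archive_comment(conn, "abc123", "ExampleMod", "full comment text", 0.0)
print(already_archived(conn, "abc123"))  # -> True
```

A nice side effect is that the local file can store the *full* comment body, making it a real archive rather than the few-words excerpt the bot keeps today.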
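For point 2.2, the kind of post-skipping I have in mind might look like the sketch below. The post fields and criteria are hypothetical placeholders, not actual Reddit API field names:

```python
def should_scan(post, last_scan_utc):
    """Decide whether a post is worth re-scanning for jmod comments.

    `post` is a plain dict here; the field names are hypothetical
    stand-ins for whatever data the bot actually has about a post.
    """
    # Locked or archived threads can't receive new comments, so skip them.
    if post.get("locked") or post.get("archived"):
        return False
    # Nothing has happened since the last scan; skip it too.
    if post.get("last_activity_utc", 0) <= last_scan_utc:
        return False
    return True

posts = [
    {"id": "a1", "locked": True,  "archived": False, "last_activity_utc": 200},
    {"id": "a2", "locked": False, "archived": False, "last_activity_utc": 50},
    {"id": "a3", "locked": False, "archived": False, "last_activity_utc": 300},
]
to_scan = [p["id"] for p in posts if should_scan(p, last_scan_utc=100)]
print(to_scan)  # -> ['a3']
```

Even a cheap filter like this would cut out a large fraction of the comment-tree fetches, which are the expensive part of each five-minute run.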
