
Add configurable disk space cleanup trigger/target by time or space #15498

Closed
wants to merge 4 commits

Conversation

jake9190

Proposed change

This PR adds the ability to configure the fallback disk space cleanup policy based on either a number of minutes of free disk space or a specific amount of raw disk space. This extends the default of 1 hour that was introduced in 0.12.

A few notes:

  • This aims to preserve the previous behavior, with cleanup triggering at < 1 hour of free space and freeing up an additional hour (2 hours total free after cleanup)
  • I debated using hours rather than minutes, but went with minutes for greater flexibility.
  • The PR follows prior precedent: configurations that only apply globally are also present in the camera configuration subtypes.
  • Relevant documentation has been updated, including the FAQ (I referenced 0.15 here, so if this PR is accepted and ends up in 0.16, I can update the doc).
  • I'm new to Python, so feedback is appreciated.

I've been running this for a couple of days while tweaking, and it's been working well for me.
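For illustration, the "minutes of free space" idea described above can be sketched as follows. This is a hypothetical example, not the code in this PR: the function name and the bandwidth parameter are invented, and it simply converts a time budget into byte thresholds given an observed recording bandwidth.

```python
# Hypothetical sketch (not Frigate's actual implementation): derive byte
# thresholds for the cleanup policy from a "minutes of free space" setting,
# assuming a measured average recording bandwidth in bytes per minute.

def thresholds_from_minutes(
    bandwidth_bytes_per_minute: float,
    trigger_minutes: int = 60,  # default: < 1 hour of free space triggers cleanup
    extra_minutes: int = 60,    # default: free up one additional hour's worth
) -> tuple[float, float]:
    """Return (trigger_bytes, target_bytes) for the cleanup policy."""
    trigger_bytes = bandwidth_bytes_per_minute * trigger_minutes
    target_bytes = bandwidth_bytes_per_minute * (trigger_minutes + extra_minutes)
    return trigger_bytes, target_bytes

# Example: cameras writing ~120 MB/minute combined
trigger, target = thresholds_from_minutes(120 * 1024 * 1024)
```

With the defaults above, the target is twice the trigger, matching the "2 hours total free after cleanup" behavior described in the notes.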

Type of change

  • Dependency upgrade
  • Bugfix (non-breaking change which fixes an issue)
  • New feature
  • Breaking change (fix/feature causing existing functionality to break)
  • Code quality improvements to existing code
  • Documentation Update

Additional information

Checklist

  • The code change is tested and works locally.
  • Local tests pass. Your PR cannot be merged unless tests pass
  • There is no commented out code in this PR.
  • The code has been formatted using Ruff (ruff format frigate)


netlify bot commented Dec 13, 2024

Deploy Preview for frigate-docs ready!

  • Latest commit: 4e532d1
  • Latest deploy log: https://app.netlify.com/sites/frigate-docs/deploys/675c63bd4fe97f0008cf12b4
  • Deploy Preview: https://deploy-preview-15498--frigate-docs.netlify.app

@hawkeye217
Collaborator

hawkeye217 commented Dec 13, 2024

Thanks for this.

With four different possible options to configure, I think this could add a significant amount of confusion for users. I'd prefer to see this simplified to time only rather than space. Frigate only tracks storage related to its recordings, and that would be another point of confusion.

Curious what the other maintainers think, though.

Also, this should be targeted against 0.16. We're in the beta phase already for 0.15, so we're targeting bugfixes only rather than new features.

Collaborator

@NickM-27 left a comment


I don't know if this really makes sense to be implemented as part of the storage maintainer. This maintainer exists purely as a last-ditch effort to ensure Frigate does not run out of space. Making it configurable essentially creates two different configs for how long Frigate keeps storage, which is confusing and creates many opportunities for conflicting settings.

I also see other issues with this config not really acting the way a user expects on a per-camera basis. Realistically, this should likely be a separate recording retention config where you can have either time-based or storage-based retention defined, but not both.

These challenges are why this type of thing has not been implemented yet. I would suggest creating a discussion where you can suggest an approach before implementing the feature in code.
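The mutually exclusive retention config suggested above could be sketched roughly like this. This is purely illustrative: `RetentionConfig`, `days`, and `max_size_gb` are hypothetical names, not Frigate's actual config schema, and Frigate's real configs use pydantic models rather than plain dataclasses.

```python
# Illustrative sketch only (hypothetical names, not Frigate's API): a
# retention config where time-based and storage-based limits are mutually
# exclusive, so conflicting settings are rejected at config load time.
from dataclasses import dataclass
from typing import Optional


@dataclass
class RetentionConfig:
    days: Optional[int] = None            # time-based retention
    max_size_gb: Optional[float] = None   # storage-based retention

    def __post_init__(self) -> None:
        # Reject configs that define both retention modes at once.
        if self.days is not None and self.max_size_gb is not None:
            raise ValueError(
                "Define either time-based or storage-based retention, not both"
            )


RetentionConfig(days=7)           # valid: time-based only
RetentionConfig(max_size_gb=500)  # valid: storage-based only
```

Validating at construction time surfaces the conflict immediately instead of letting two retention policies silently fight over the same recordings.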


```yaml
record:
  cleanup:
```
Collaborator

I don't think cleanup is a good name here. There is a separate cleanup process for recordings; the storage check here is a last-ditch effort to ensure Frigate does not run out of space. Cleanup implies these are somehow related when they are not.

```diff
@@ -86,26 +86,48 @@ def calculate_camera_usages(self) -> dict[str, dict]:

         return usages

     def calculate_storage_recovery_target(self) -> int:
```
Collaborator

I have concerns about this being a per-camera config because in reality it does not behave that way. This logic aggregates the per-camera config into a total number, but in reality storage cleanup just runs and deletes the oldest recordings regardless of which camera they belong to.
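The mismatch described above can be shown with a small sketch. This is a hypothetical illustration, not the PR's code: the function names and the toy data are invented, but the shape of the problem is the same, since per-camera targets are summed into one global number while deletion is oldest-first across all cameras.

```python
# Hypothetical illustration of the per-camera config concern: targets are
# aggregated to a single global number, but deletion ignores which camera
# a recording belongs to and just removes the oldest files first.

def global_recovery_target(per_camera_targets: dict[str, int]) -> int:
    # Aggregation collapses per-camera intent into one total.
    return sum(per_camera_targets.values())


def delete_oldest(
    recordings: list[tuple[float, str, int]], target: int
) -> list[str]:
    """recordings: (timestamp, camera, size_bytes). Returns cameras whose
    recordings get deleted, in deletion order (oldest first)."""
    freed, deleted = 0, []
    for _ts, camera, size in sorted(recordings):
        if freed >= target:
            break
        freed += size
        deleted.append(camera)
    return deleted


# Even if only "front" has a nonzero target, "back"'s older files go first:
target = global_recovery_target({"front": 300, "back": 0})
deleted = delete_oldest([(1.0, "back", 200), (2.0, "front", 200)], target)
# "back" recordings are deleted despite its per-camera target being 0
```

This is exactly why a per-camera knob would not behave the way a user expects: the setting reads as per-camera, but the deletion pass is global.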

@hawkeye217
Collaborator

@NickM-27 agreed.

A discussion would be a better place to hash this out. At this point, there's usually a good reason we haven't implemented something "simple". We always suggest new contributors open discussions before spending time on PRs.

@jake9190
Author

Makes sense.

I'll close this.

@jake9190 closed this Dec 13, 2024