Cache bound by weighted count #602
I plan to replicate Caffeine's size-based algorithm. Implementing this from the outside presents a few problems.
In summary, I would expect it to be quite fast but unstable, in the sense that the cache would be trimmed more than needed, reducing hit rate. In some scenarios adding the global lock to restore stability might not matter, but I wouldn't choose that option without measuring/testing in the context of the real application.
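To make the tradeoff concrete, here is a minimal sketch of the locked variant. The `IBoundedCache` interface is a hypothetical stand-in for whatever the real cache exposes (an accessible trim plus a way to observe evictions); it is not the BitFaster.Caching API, and all names are placeholders.

```csharp
using System;

// Hypothetical cache surface: stands in for a ConcurrentLru-style cache with
// an accessible trim and an eviction callback. Not the actual library API.
public interface IBoundedCache<K, V>
{
    int Count { get; }
    bool TryGet(K key, out V value);
    V GetOrAdd(K key, Func<K, V> valueFactory);
    void Trim(int itemCount, Action<K, V> onEvicted);
}

// Bounds the cache by total weight. A single global lock around add + trim
// keeps the weight accounting exact and stable, at the cost of serializing
// all writers.
public class LockedWeightBoundedCache<K, V>
{
    private readonly object sync = new object();
    private readonly IBoundedCache<K, V> cache;
    private readonly Func<V, long> weigher; // caller-supplied size estimate
    private readonly long maxWeight;
    private long totalWeight;

    public LockedWeightBoundedCache(IBoundedCache<K, V> cache, long maxWeight, Func<V, long> weigher)
    {
        this.cache = cache;
        this.maxWeight = maxWeight;
        this.weigher = weigher;
    }

    public V GetOrAdd(K key, Func<K, V> valueFactory)
    {
        lock (sync)
        {
            // Avoid double-counting the weight of an entry that already exists.
            if (cache.TryGet(key, out V value))
                return value;

            value = cache.GetOrAdd(key, valueFactory);
            totalWeight += weigher(value);

            // Evict oldest entries until back under budget. Because the lock
            // excludes concurrent writers, we never over-trim.
            while (totalWeight > maxWeight && cache.Count > 0)
            {
                cache.Trim(1, (k, v) => totalWeight -= weigher(v));
            }

            return value;
        }
    }
}
```

The lock-free alternative replaces the `totalWeight` updates with `Interlocked` and drops the lock; that is where the over-trimming comes from, since several threads can observe the same overshoot and each evict.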
Thanks for the advice. We're having a tough time finding an implementation in .NET!
Since there are locks inside trim, we're thinking something like this will be more appropriate as a workaround: https://gist.github.com/dave-yotta/a3163bb7c81aa5b0d4e2ad4b482ac2aa
I left a comment on your gist - in practice I think it will work, but it will be subject to an incorrect total size due to races between the add and trim paths. The semaphore inside […]
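Condensed, and reusing the hypothetical `IBoundedCache` surface from the sketch above, the shape under discussion might look like this: lock-free adds with a semaphore that serializes trimming. The racy accounting is marked in comments; all names are placeholders, not the gist's actual code.

```csharp
using System;
using System.Threading;

// Lock-free adds with a semaphore-guarded trim. The trim path is serialized,
// but the weight accounting can still race, as discussed.
public class RacyWeightBoundedCache<K, V>
{
    private readonly SemaphoreSlim trimLock = new SemaphoreSlim(1, 1);
    private readonly IBoundedCache<K, V> cache;
    private readonly Func<V, long> weigher;
    private readonly long maxWeight;
    private long totalWeight;

    public RacyWeightBoundedCache(IBoundedCache<K, V> cache, long maxWeight, Func<V, long> weigher)
    {
        this.cache = cache;
        this.maxWeight = maxWeight;
        this.weigher = weigher;
    }

    public V GetOrAdd(K key, Func<K, V> valueFactory)
    {
        V value = cache.GetOrAdd(key, k =>
        {
            V created = valueFactory(k);
            // Racy: with ConcurrentDictionary-style semantics the factory can
            // run and lose the insert race, in which case this weight is never
            // backed out - one source of the incorrect total size noted above.
            Interlocked.Add(ref totalWeight, weigher(created));
            return created;
        });

        TrimIfOverBudget();
        return value;
    }

    private void TrimIfOverBudget()
    {
        // Skip rather than block if another thread is already trimming;
        // that thread will bring the cache back under budget.
        if (!trimLock.Wait(0)) return;
        try
        {
            while (Interlocked.Read(ref totalWeight) > maxWeight && cache.Count > 0)
            {
                cache.Trim(1, (k, v) => Interlocked.Add(ref totalWeight, -weigher(v)));
            }
        }
        finally
        {
            trimLock.Release();
        }
    }
}
```

Using `Wait(0)` rather than blocking means writers never queue behind the trimmer, at the cost of the budget being enforced lazily.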
@bitfaster how would you feel about a PR to make […] public?
I left another comment in your gist with a workaround using […]. Class methods can be made public; it is more complicated to change interfaces. I didn't do a good job of aligning all the […]
i.e. if we know or can approximate the memory usage of each entry in MB, and want to bound the cache to X MB. I see there's a fixed capacity internally on the concurrent dictionary - so probably not straightforward. If we picked a suitable N for the capacity bound and did this pseudocode:
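(A rough sketch of the idea; `SizeInMb`, `X`, `N`, and `Trim` are placeholders for whatever size estimate, byte budget, safety capacity, and trim entry point are available.)

```csharp
// Bound by total estimated size in MB; the item-count capacity N is only a safety limit.
long totalMb = 0;                         // updated with Interlocked
var lru = new ConcurrentLru<K, V>(N);     // N = generous item-count bound

V GetOrAdd(K key, Func<K, V> factory)
{
    var value = lru.GetOrAdd(key, k =>
    {
        var v = factory(k);
        Interlocked.Add(ref totalMb, SizeInMb(v));  // our per-entry estimate
        return v;
    });

    // If over X MB, evict LRU entries until back under budget.
    while (Interlocked.Read(ref totalMb) > X && lru.Count > 0)
    {
        Trim(1);  // the trim path subtracts the evicted entry's size from totalMb
    }

    return value;
}
```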
any thoughts on the performance/stability?