NCA checks the batches.json endpoint, and then every individual batch JSON endpoint, every five minutes. This was meant to ensure newly loaded batches are seen in "real time", but given #310, this isn't actually that useful a thing to have anyway.
If that were the only problem, this wouldn't be a big deal, but it seems that on some systems it blasts out a ton of DNS requests, making the process exceptionally slow, and in one case it actually caused external DNS servers to block us temporarily. (To my knowledge this doesn't affect production, because the lookup uses local DNS servers, but in dev it can be a nightmare.)
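To make the request volume concrete, here is a rough sketch of the polling behavior described above; the function names and URLs are illustrative, not NCA's actual code. With N batches, each five-minute tick issues N+1 HTTP requests, and on a system without a local DNS cache each of those can trigger its own DNS lookup.

```go
package main

import "fmt"

// pollOnce sketches one polling cycle: fetch batches.json, then fetch
// each batch's individual JSON endpoint. It returns the number of HTTP
// requests made, to show how the count scales with the batch list.
func pollOnce(fetch func(url string) []string) int {
	requests := 0
	batches := fetch("https://example.org/batches.json") // one request for the list
	requests++
	for _, b := range batches {
		fetch("https://example.org/batches/" + b + ".json") // one request per batch
		requests++
	}
	return requests
}

func main() {
	// Fake fetcher standing in for the real HTTP client.
	fake := func(url string) []string {
		if url == "https://example.org/batches.json" {
			return []string{"b1", "b2", "b3"}
		}
		return nil
	}
	fmt.Println(pollOnce(fake)) // 4: the list plus three batches
}
```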
A better approach is probably to leave all web issue caching to the once-a-week process that currently does a full refresh of all issue data, then add something to the automation pipelines that adds/removes items from the cache when an automated job succeeds in production. That gets us a less brittle cache, and it should still mirror what's been loaded or purged.
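The pipeline-driven approach might look something like the following sketch. The type and method names (`issueCache`, `onJobSuccess`) are assumptions for illustration, not NCA's real API; the point is that the cache is only touched when a production job succeeds, rather than by a polling loop.

```go
package main

import (
	"fmt"
	"sync"
)

// issueCache is a hypothetical stand-in for NCA's web issue cache.
type issueCache struct {
	mu      sync.Mutex
	batches map[string]bool
}

func newIssueCache() *issueCache {
	return &issueCache{batches: make(map[string]bool)}
}

// onJobSuccess would be called from the automation pipeline after a
// production job succeeds, adding the batch on a load and dropping it
// on a purge, so the cache mirrors what's actually live.
func (c *issueCache) onJobSuccess(batch, action string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	switch action {
	case "load":
		c.batches[batch] = true
	case "purge":
		delete(c.batches, batch)
	}
}

func (c *issueCache) has(batch string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.batches[batch]
}

func main() {
	c := newIssueCache()
	c.onJobSuccess("batch_example_ver01", "load")
	fmt.Println(c.has("batch_example_ver01")) // true
	c.onJobSuccess("batch_example_ver01", "purge")
	fmt.Println(c.has("batch_example_ver01")) // false
}
```

The weekly full refresh would still rebuild the whole map from scratch, so any drift introduced between refreshes gets corrected.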
The tricky part is building something into the issue finder / scanner nonsense which allows us to manually modify the web cache.