My knowledge of caching is very high-level and I'd like to better understand how it appears to be saving so much time in my routine requests for hour/day/week/month S&P data. As suggested in the docs, I'm using a CachedLimiterSession:
This job runs every 10 minutes during market hours. On first run (after a fresh deploy), the above code takes roughly 8 minutes. Subsequent runs, however, take only 30 seconds! How is this possible given I need to pull 500 stocks across 4 time intervals (i.e. 2,000 requests) with a 6-requests-per-second rate limit? Assuming the historical data is cached, the current hour/day/week/month close data (i.e. the current price) is always in flux, so isn't a minimum of 2,000 requests still needed every time? How does caching achieve such dramatic time savings in my scenario? Thank you kindly for your insight.
Thank you, ValueRaider. Unfortunately, I don't see anything in the article that addresses this question:
The current close price is always in flux during market hours, so at an absolute minimum (assuming all other data is cached) the current close price for all 500 stocks will need to be pulled. With a six-requests-per-second rate limit, wouldn't that require at least 83 seconds (500 / 6 ≈ 83)?
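The back-of-envelope arithmetic above can be checked directly. This is only a lower bound under the stated assumptions: every request is a cache miss and the rate limiter is the sole bottleneck (`min_seconds` is a helper name introduced here for illustration):

```python
# Lower bound on wall-clock time imposed by a request rate limit,
# assuming every request actually goes over the network (cache miss).
def min_seconds(num_requests: int, rate_per_second: float) -> float:
    return num_requests / rate_per_second

# 500 current-price requests at 6 requests/second:
print(min_seconds(500, 6))    # ~83.3 seconds

# Full refresh: 500 tickers x 4 intervals = 2,000 requests:
print(min_seconds(2000, 6))   # ~333.3 seconds, i.e. over 5 minutes
```

So if all 2,000 requests truly missed the cache, a run could not finish in 30 seconds at 6 requests/second; the observed 30-second runs imply most responses are being served from the cache rather than refetched.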