Client-side QPS appears to share a bucket with the leader election client #3092
That explanation seems unlikely to me; we copy the config we use for leader election: controller-runtime/pkg/manager/manager.go, line 360 in 990f2ed.
Did you see logs of requests being throttled from client-go? It logs it when it throttles. An easy way to test your theory would be to set the LeaderElectionConfig to a separate rest.Config.
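A minimal sketch of what that could look like, assuming a controller-runtime version that exposes the LeaderElectionConfig option; the QPS/Burst values and lock name here are purely illustrative:

```go
package main

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// Config used by the manager's regular clients, with client-side QPS set.
	cfg := ctrl.GetConfigOrDie()
	cfg.QPS = 20 // illustrative values
	cfg.Burst = 30

	// Give leader election its own copy of the config with its own rate
	// limiter, so lease renewals never wait behind ordinary object traffic.
	leCfg := rest.CopyConfig(cfg)
	leCfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(leCfg.QPS, leCfg.Burst)

	mgr, err := ctrl.NewManager(cfg, ctrl.Options{
		LeaderElection:       true,
		LeaderElectionID:     "example-controller-lock", // hypothetical lock name
		LeaderElectionConfig: leCfg,
	})
	if err != nil {
		panic(err)
	}
	_ = mgr // set up controllers and call mgr.Start(ctx) as usual
}
```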
Here's a longer log example of the client-side rate limiting that we were seeing. I'll definitely try the LeaderElectionConfig suggestion that you called out!
While scale testing kubernetes-sigs/karpenter using the controller-runtime client with client-side QPS enabled, we would try to scale up thousands of objects at one time. While this scale-up was occurring, we saw logs indicating that the client was being client-side throttled, which was expected. What wasn't expected was that, during that same window, we would also fail to update our lease and lose leader election. I ran this on a large AWS instance, so I think it's highly unlikely that we were getting CPU throttled. To me, this looked like a case where the client-side QPS used by the Go client for ordinary objects was sharing a bucket with the lease client, causing both to be throttled at the same rate.
Do we know if the lease QPS and the generic object QPS are sharing the same bucket? If so, does it make sense to split them into separate buckets, since, in general, retaining the lease should be prioritized above creating or updating objects at the apiserver?
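For reference, a small sketch of how client-side QPS is wired on a rest.Config, and of one way two configs can end up drawing from the same token bucket: when an explicit RateLimiter is attached, rest.CopyConfig copies that field, so the copy points at the same limiter. The host and numbers are placeholders; whether the manager's leader-election path actually shares the limiter is the open question above.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Client-side QPS/Burst live on the rest.Config.
	cfg := &rest.Config{Host: "https://example.invalid"} // placeholder host
	cfg.QPS = 20
	cfg.Burst = 30

	// If an explicit RateLimiter is attached, every client built from this
	// config draws from this one token bucket instead of building its own
	// from QPS/Burst.
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)

	// CopyConfig copies the RateLimiter field, so the copy still points at
	// the same bucket rather than getting a fresh one.
	leaderElectionCfg := rest.CopyConfig(cfg)
	fmt.Println(cfg.RateLimiter == leaderElectionCfg.RateLimiter) // prints: true
}
```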
Error Log