
Add timeout and failureThreshold to multicluster probe #13061

Open · wants to merge 1 commit into main
Conversation

@alpeb (Member) commented Sep 11, 2024


- This adds the `probeSpec.failureThreshold` and `probeSpec.timeout` fields to the Link CRD spec.
- Likewise, the `gateway.probe.failureThreshold` and `gateway.probe.timeout` fields are added to the linkerd-multicluster chart; they populate the new `mirror.linkerd.io/probe-failure-threshold` and `mirror.linkerd.io/probe-timeout` annotations on the gateway service, which `linkerd mc link` consumes to populate the probe spec.
- In the probe worker, the hard-coded 50s timeout is replaced with the new timeout config (which now defaults to 30s). The probe loop was also refactored so that the gateway is not marked unhealthy until the consecutive-failure threshold is reached (see the sketch below).
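To make the new settings concrete, here is a minimal, hypothetical Go sketch of a probe loop driven by a period, a timeout, and a consecutive-failure threshold. This is not the actual probe worker from this PR; the `ProbeSpec` struct and `probeGateway` helper are illustrative only, and the values in `main` simply mirror the defaults discussed in this thread.

```go
// Illustrative sketch only; not the linkerd probe worker.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// ProbeSpec mirrors the Link CRD probe fields described above
// (period already existed; timeout and failureThreshold are the additions).
type ProbeSpec struct {
	Period           time.Duration
	Timeout          time.Duration
	FailureThreshold int
}

// probeGateway issues a single probe request bounded by the configured timeout.
func probeGateway(ctx context.Context, url string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

// run marks the gateway unhealthy only after failureThreshold consecutive failures.
func run(ctx context.Context, url string, spec ProbeSpec) {
	ticker := time.NewTicker(spec.Period)
	defer ticker.Stop()

	failures := 0
	healthy := true
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := probeGateway(ctx, url, spec.Timeout); err != nil {
				failures++
				if failures >= spec.FailureThreshold && healthy {
					healthy = false
					fmt.Println("gateway marked unhealthy:", err)
				}
				continue
			}
			failures = 0
			if !healthy {
				healthy = true
				fmt.Println("gateway marked healthy again")
			}
		}
	}
}

func main() {
	// Hypothetical values reflecting the defaults discussed in this PR.
	spec := ProbeSpec{Period: 3 * time.Second, Timeout: 30 * time.Second, FailureThreshold: 3}
	run(context.Background(), "http://gateway.example:4191/ready", spec)
}
```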
@alpeb alpeb requested a review from a team as a code owner September 11, 2024 23:27
@adleong (Member) left a comment

This looks good but I have questions about the defaults (which preceded this PR). A probe timeout of 30s but interval of 3s means that in the case of a timeout, many probe requests will pile up, right? It seems like the interval should be at least as long as the timeout.

@alpeb (Member, Author) commented Sep 20, 2024

Good insight 👍, yes, I've observed a pile-up of one request when timeouts occur: the ticker goroutine blocks on run() consuming probeTicker.C, but that channel has a capacity of one, which is what causes the pile-up.
If we increase the probe interval, that affects the non-failing case as well. WDYT about just removing the buffer on that channel so both goroutines become completely synchronized?
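For illustration, here is a hedged sketch of what that suggestion could look like; it is not the actual linkerd code. Since `time.Ticker`'s channel always has a capacity of one in the standard library, "removing the buffer" implies driving the loop from a hand-rolled, unbuffered tick channel: the sending goroutine then blocks until the probe loop has finished the previous probe, so probe requests cannot pile up.

```go
// Sketch of an unbuffered tick channel synchronizing the ticker goroutine
// with the probe loop; names and durations are illustrative only.
package main

import (
	"fmt"
	"time"
)

func probe() {
	// Placeholder for the actual gateway probe; sleeps to simulate a
	// response slower than the probe period.
	time.Sleep(5 * time.Second)
}

func main() {
	ticks := make(chan time.Time) // unbuffered: send blocks until received

	// Ticker goroutine: waits out the interval, then blocks until the
	// probe loop is ready to accept the next tick.
	go func() {
		for {
			time.Sleep(3 * time.Second)
			ticks <- time.Now()
		}
	}()

	// Probe loop: at most one probe in flight; a slow (timing-out) probe
	// simply delays the next tick instead of queueing more work.
	for t := range ticks {
		fmt.Println("probing at", t)
		probe()
	}
}
```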
