feature(eviction): add event when EvictPod failed #1536
Conversation
/retest
t.Run(test.description, func(t *testing.T) {
	fakeClient := fake.NewClientset(test.pods...)
	fakeClient.PrependReactor("create", "pods/eviction", func(action core.Action) (handled bool, ret runtime.Object, err error) {
		if test.wantErrMsg == fmt.Sprintf(notFoundText, test.evictedPod.Name, test.evictedPod.Name) {
What about passing k8serrors.XXX directly? I.e.
[]struct {
	description string
	node        *v1.Node
	evictedPod  *v1.Pod
	pods        []runtime.Object
	wantErr     error
}{
	...
	{
		description: "test pod eviction - pod absent (not found error)",
		node:        node1,
		evictedPod:  pod1,
		pods:        []runtime.Object{test.BuildTestPod("p2", 400, 0, "node1", nil), test.BuildTestPod("p3", 450, 0, "node1", nil)},
		wantErr:     k8serrors.NewNotFound(v1.Resource("pods"), pod1.Name),
	},
}
...
fakeClient.PrependReactor("create", "pods/eviction", func(action core.Action) (handled bool, ret runtime.Object, err error) {
	return true, nil, test.wantErr
})
The original wantErr bool can then be replaced with a wantErr != nil check.
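For illustration, a minimal self-contained sketch of that shape; the table entries, pod name, and namespace below are illustrative stand-ins, not this PR's actual fixtures:

package evictions

import (
	"context"
	"testing"

	v1 "k8s.io/api/core/v1"
	policyv1 "k8s.io/api/policy/v1"
	k8serrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/fake"
	core "k8s.io/client-go/testing"
)

func TestEvictPodWantErr(t *testing.T) {
	tests := []struct {
		description string
		wantErr     error
	}{
		{description: "eviction succeeds", wantErr: nil},
		{description: "pod absent (not found error)", wantErr: k8serrors.NewNotFound(v1.Resource("pods"), "p1")},
	}
	for _, tc := range tests {
		t.Run(tc.description, func(t *testing.T) {
			fakeClient := fake.NewClientset()
			fakeClient.PrependReactor("create", "pods/eviction", func(action core.Action) (bool, runtime.Object, error) {
				// Fail (or succeed) with exactly the error the test expects.
				return true, nil, tc.wantErr
			})
			eviction := &policyv1.Eviction{ObjectMeta: metav1.ObjectMeta{Name: "p1", Namespace: "default"}}
			err := fakeClient.PolicyV1().Evictions("default").Evict(context.TODO(), eviction)
			// The former wantErr bool becomes a nil check on the error itself.
			if (err != nil) != (tc.wantErr != nil) {
				t.Fatalf("got error %v, want %v", err, tc.wantErr)
			}
		})
	}
}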
Because we return the error wrapped with fmt.Errorf, we cannot use k8serrors.XXX directly, but we can simplify the test to build the expected error with fmt.Errorf as well:
descheduler/pkg/descheduler/evictions/evictions.go
Lines 213 to 221 in da52983
err := client.PolicyV1().Evictions(eviction.Namespace).Evict(ctx, eviction)
if apierrors.IsTooManyRequests(err) {
	return fmt.Errorf("error when evicting pod (ignoring) %q: %v", pod.Name, err)
}
if apierrors.IsNotFound(err) {
	return fmt.Errorf("pod not found when evicting %q: %v", pod.Name, err)
}
return err
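For illustration, a minimal sketch of asserting against the wrapped message; the notFoundText format string is an assumption mirroring the snippet above, and assertNotFoundMsg is a hypothetical helper:

package evictions

import (
	"fmt"
	"testing"
)

// Assumed to mirror the fmt.Errorf wrapping in evictions.go above combined
// with the message of a Kubernetes NotFound error (pods "<name>" not found).
const notFoundText = "pod not found when evicting %q: pods %q not found"

// assertNotFoundMsg is a hypothetical helper: it checks that err carries the
// exact wrapped not-found message for the given pod name.
func assertNotFoundMsg(t *testing.T, err error, podName string) {
	t.Helper()
	want := fmt.Sprintf(notFoundText, podName, podName)
	if err == nil || err.Error() != want {
		t.Fatalf("expected error %q, got %v", want, err)
	}
}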
if test.expectedError != nil {
	return true, nil, test.expectedError
}
return true, nil, nil
The whole function body can be reduced to return true, nil, test.expectedError.
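That is, the reactor from the snippet above collapses to (sketch):

fakeClient.PrependReactor("create", "pods/eviction", func(action core.Action) (handled bool, ret runtime.Object, err error) {
	// A nil expectedError already means "handled, no object, no error",
	// so the explicit nil branch is redundant.
	return true, nil, test.expectedError
})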
done
Force-pushed from 04775aa to 44f3adf.
/test pull-descheduler-test-e2e-k8s-master-1-30
@@ -481,6 +481,7 @@ func (pe *PodEvictor) EvictPod(ctx context.Context, pod *v1.Pod, opts EvictOptio
 	}
 	span.AddEvent("Eviction Failed", trace.WithAttributes(attribute.String("node", pod.Spec.NodeName), attribute.String("err", err.Error())))
 	klog.ErrorS(err, "Error evicting pod", "limit", *pe.maxPodsToEvictTotal)
+	pe.eventRecorder.Eventf(pod, nil, v1.EventTypeWarning, "EvictionFailed", "Descheduled", "pod eviction from %v node by sigs.k8s.io/descheduler failed: total eviction limit exceeded (%v)", pod.Spec.NodeName, *pe.maxPodsToEvictTotal)
If these events get produced by default in a cluster with many pods, the cluster gets flooded with events. This needs to be disabled by default and enabled only when needed, by extending DeschedulerConfiguration with a new field.
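A minimal sketch of such gating; the field name EvictionFailureEventNotification and its plumbing here are assumptions for illustration, not necessarily the shape this PR finally adopted:

// Sketch: a hypothetical opt-in knob in the versioned API types.
type DeschedulerConfiguration struct {
	// ... existing fields ...

	// EvictionFailureEventNotification, when true, makes the descheduler emit
	// a v1 Event for every failed eviction. Off by default so clusters with
	// many pods are not flooded with events.
	EvictionFailureEventNotification *bool `json:"evictionFailureEventNotification,omitempty"`
}

// In PodEvictor.EvictPod (sketch): record the event only when enabled.
if pe.evictionFailureEventNotification {
	pe.eventRecorder.Eventf(pod, nil, v1.EventTypeWarning, "EvictionFailed", "Descheduled",
		"pod eviction from %v node by sigs.k8s.io/descheduler failed: total eviction limit exceeded (%v)",
		pod.Spec.NodeName, *pe.maxPodsToEvictTotal)
}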
Ok, this sounds more reasonable. I will add a new field to the DeschedulerConfiguration. Because we monitor similar cluster-specific failure events (e.g. scheduling failures, eviction failures), this can quickly surface important event information to cluster administrators.
I will handle this over the weekend.
done
Force-pushed from 0a50078 to e1f8046.
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: ingvagabund. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/kind feature
When it comes to eviction, what we really care about are the errors that occur during evictions. So I tend to include events when an eviction fails. Besides metrics and application logs, events are also a valuable source of information. Typically, operations personnel respond to events to manage the cluster effectively.