What k8s version are you using (kubectl version)?
v1.22.9
kubectl version Output
$ kubectl version
Client Version: v1.20.15
Server Version: v1.22.10
What did you do?
I terminated a node in order to test whether its pods would be rescheduled onto the underutilized nodes.
What did you expect to see?
I expected the descheduler to find pods on overutilized nodes, evict them, and have them rescheduled onto the underutilized nodes.
What did you see instead?
The reported resource usage did not match the actual values. For instance, when one node's CPU usage was at 100%, the descheduler measured it as roughly 50%, so an overutilized node was classified as appropriately utilized.
"Node is appropriately utilized" node="ip-xx-xxx-xx-155.xx-xx-x.xxxx.xxx" usage=map[cpu:4005m memory:8380Mi pods:28] usagePercentage=map[cpu:50.63211125158028 memory:27.484329571097998 pods:48.275862068965516]
Node is underutilized" node="ip-xx-xxx-xx-185.xx-xx-x.xxxx.xxx" usage=map[cpu:1255m memory:2460Mi pods:11] usagePercentage=map[cpu:15.865992414664982 memory:8.068192212995354 pods:18.96551724137931]
"Number of evicted pods" totalEvicted=0
NOTE: Node resource consumption is determined by the requests and limits of pods, not actual usage. This approach was chosen to stay consistent with the kube-scheduler, which follows the same design when scheduling pods onto nodes. This means that resource usage as reported by the kubelet (or commands like kubectl top) may differ from the calculated consumption, because those components report actual usage metrics. Implementing metrics-based descheduling is currently a TODO for the project.
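To make that concrete, here is a minimal Go sketch (illustrative only, not the descheduler's actual code; the 7910m allocatable figure is a hypothetical value chosen to match the ~50.6% in the log) of how a request-based CPU percentage is derived: sum the CPU requests of the pods assigned to the node and divide by the node's allocatable CPU. Live kubelet metrics never enter the calculation, which is why the result can disagree with kubectl top.

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

// requestBasedCPUPercentage sums the CPU requests of the pods assigned to a
// node and divides by the node's allocatable CPU. No live usage metrics are
// involved, so the result can differ from what `kubectl top node` reports.
func requestBasedCPUPercentage(node *v1.Node, pods []*v1.Pod) float64 {
    requested := resource.NewMilliQuantity(0, resource.DecimalSI)
    for _, pod := range pods {
        for _, c := range pod.Spec.Containers {
            if cpu, ok := c.Resources.Requests[v1.ResourceCPU]; ok {
                requested.Add(cpu)
            }
        }
    }
    allocatable := node.Status.Allocatable[v1.ResourceCPU]
    if allocatable.MilliValue() == 0 {
        return 0
    }
    return float64(requested.MilliValue()) / float64(allocatable.MilliValue()) * 100
}

func main() {
    // Hypothetical node with 7910m allocatable CPU and a single pod whose
    // containers request 4005m in total, matching the numbers in the log.
    node := &v1.Node{}
    node.Status.Allocatable = v1.ResourceList{
        v1.ResourceCPU: resource.MustParse("7910m"),
    }
    pod := &v1.Pod{}
    pod.Spec.Containers = []v1.Container{{
        Resources: v1.ResourceRequirements{
            Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("4005m")},
        },
    }}
    fmt.Printf("request-based cpu: %.2f%%\n", requestBasedCPUPercentage(node, []*v1.Pod{pod}))
    // Prints roughly 50.63%, even if actual CPU usage on the node is near 100%.
}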
Closing as duplicate, please feel free to continue discussion in the linked issue. Thanks!
/close
NodeUtilization is calculated from the requests on the pods, not the actual current resource usage reported by tools like kubectl top. This is done to remain consistent with the scheduler, which places pods based on resource requests.
What version of descheduler are you using?
descheduler version: 0.24.1
Does this issue reproduce with the latest release?
Yes, and it also occurred with an older version.
Which descheduler CLI options are you using?
LowNodeUtilization
Please provide a copy of your descheduler policy config file
# CronJob or Deployment
kind: Deployment

resources:
  requests:
    cpu: 256m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi

# Specifies whether Leader Election resources should be created
# Required when running as a Deployment
leaderElection:
  enabled: true
  leaseDuration: 15s
  renewDeadline: 10s
  retryPeriod: 2s
  resourceLock: "leases"
  resourceName: "descheduler"
  resourceNamespace: "kube-system"

# Required when running as a Deployment
deschedulingInterval: 2m

replicas: 1

deschedulerPolicy:
  nodeSelector: workload_type=services
  maxNoOfPodsToEvictPerNode: 30
  maxNoOfPodsToEvictPerNamespace: 150
  ignorePvcPods: true
  evictLocalStoragePods: true
  strategies:
    RemoveDuplicates:
      enabled: true
    RemovePodsViolatingNodeTaints:
      enabled: false
    RemovePodsViolatingNodeAffinity:
      enabled: false
    RemovePodsViolatingInterPodAntiAffinity:
      enabled: false
    LowNodeUtilization:
      enabled: true
      params:
        nodeResourceUtilizationThresholds:
          thresholds:
            cpu: 20
          targetThresholds:
            cpu: 80
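Given these thresholds, and per the documented LowNodeUtilization behavior, a node is underutilized only when its request-based usage is below the thresholds for every tracked resource, overutilized when it exceeds the targetThresholds for any of them, and appropriately utilized otherwise. Below is a rough Go sketch of that classification (illustrative only; the type and function names are made up, not the descheduler's API):

package main

import "fmt"

// Illustrative sketch only: a node is underutilized when every tracked
// resource is below its threshold, overutilized when any resource exceeds
// its target threshold, and appropriately utilized otherwise.
type thresholds map[string]float64

func classify(usagePct, low, target thresholds) string {
    under, over := true, false
    for res, pct := range usagePct {
        if t, ok := low[res]; ok && pct >= t {
            under = false
        }
        if t, ok := target[res]; ok && pct > t {
            over = true
        }
    }
    switch {
    case under:
        return "underutilized"
    case over:
        return "overutilized"
    default:
        return "appropriately utilized"
    }
}

func main() {
    low := thresholds{"cpu": 20}
    target := thresholds{"cpu": 80}
    // Request-based CPU percentages from the two nodes in the log above.
    fmt.Println(classify(thresholds{"cpu": 50.63}, low, target)) // appropriately utilized
    fmt.Println(classify(thresholds{"cpu": 15.87}, low, target)) // underutilized
}

With the request-based percentages from the log, the 50.63% node falls between 20 and 80 and is left alone, which matches totalEvicted=0.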