Spot-to-spot consolidation is disabled #7699
Comments
Can you please post your Karpenter deployment spec and NodePools?
nodepool-service configuration

nodeclass:

```yaml
apiVersion: karpenter.k8s.aws/v1beta1
# Optional, configures IMDS for the instance
metadataOptions:
# Optional, IAM instance profile to use for the node identity.
# Must specify one of "role" or "instanceProfile" for Karpenter to launch nodes
instanceProfile: "${instance_profile}"
# Required, discovers security groups to attach to instances
# Each term in the array of securityGroupSelectorTerms is ORed together
# Within a single term, all conditions are ANDed
securityGroupSelectorTerms:
# Required, discovers subnets to attach to instances
# Each term in the array of subnetSelectorTerms is ORed together
# Within a single term, all conditions are ANDed
subnetSelectorTerms:
blockDeviceMappings:
```
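For comparison, a NodePool that opts its nodes into consolidation usually carries a `disruption` block like the sketch below (field values are illustrative, not taken from the reporter's configuration; the v1beta1 API matching the NodeClass above is assumed):

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  disruption:
    # WhenUnderutilized lets Karpenter consolidate non-empty nodes,
    # which is a prerequisite for spot-to-spot replacement
    consolidationPolicy: WhenUnderutilized
  template:
    spec:
      requirements:
        # Restricting to spot capacity is what makes the
        # SpotToSpotConsolidation feature gate relevant
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
```

Note that the NodePool alone is not sufficient: replacing a spot node with another spot node is additionally gated by the `SpotToSpotConsolidation` feature gate on the controller.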
Which type of information do you require?
Would you be interested in scheduling a call to discuss this issue in more detail?
Just looking to see if the Karpenter deployment itself has the correct feature gates set.
Karpenter deployment (excerpt):

```yaml
apiVersion: apps/v1
```
I have shared the deployment YAML, can you review it?
Karpenter version: 1.0.5
Kubernetes version (EKS cluster): 1.31
I configured an EKS cluster and Karpenter using Terraform. After upgrading Karpenter to version 1.0.5, I experienced application downtime because old nodes failed to drain properly. This prevented new nodes from provisioning, resulting in errors. Although I had enabled consolidation and spot-to-spot replacement, and used NodePool and NodeClass configurations, the old nodes still failed to drain. The following events pertain to an old node that did not drain:
```
Type    Reason            Age                      From             Message
----    ------            ----                     ----             -------
Normal  Unconsolidatable  7m47s (x338 over 3d13h)  karpenter        SpotToSpotConsolidation is disabled, can't replace a spot node with a spot node
Normal  NodeNotReady      7s                       node-controller  Node ip-10-172-40-3.ap-south-1.compute.internal status is now: NodeNotReady
```
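The `Unconsolidatable` event indicates the `SpotToSpotConsolidation` feature gate is off on the controller. It is typically enabled through the `FEATURE_GATES` environment variable on the Karpenter deployment; the sketch below assumes the container name and namespace used by the upstream Helm chart defaults, which may differ in a Terraform-managed install:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karpenter
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: controller
          env:
            # Spot-to-spot consolidation is disabled by default
            # and must be opted into via this feature gate
            - name: FEATURE_GATES
              value: "SpotToSpotConsolidation=true"
```

With a Helm-based install this corresponds to setting `settings.featureGates.spotToSpotConsolidation=true`; after changing it, the controller pods need to restart for the gate to take effect.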