
Adaptive schedule strategy for UnitedDeployment #1720

Open · AiRanthem wants to merge 5 commits into master from feature/ud-adaptive-240827

Conversation

@AiRanthem (Contributor) commented Sep 2, 2024

Ⅰ. Describe what this PR does

Added an adaptive scheduling strategy to UnitedDeployment. During scale-up, if a subset leaves some Pods unschedulable for any reason, the unschedulable Pods are rescheduled to other subsets. During scale-down, if elastic allocation is used (i.e., subsets are configured with min/max replicas), each subset retains its ready Pods as far as possible without exceeding its maximum capacity, rather than strictly scaling down in reverse order of the subset list.
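The scale-down behavior is the subtler of the two, so here is a minimal, runnable sketch of the idea. It is not the PR's actual allocator; `subsetInfo`, `allocate`, and `min32` are illustrative names:

```go
package main

import "fmt"

// subsetInfo is an illustrative stand-in for per-subset state.
type subsetInfo struct {
	name        string
	readyPods   int32
	maxReplicas int32 // -1 means no upper bound
}

// allocate keeps ready Pods in place up to each subset's capacity instead of
// strictly removing Pods in reverse subset order.
func allocate(subsets []subsetInfo, replicas int32) map[string]int32 {
	result := map[string]int32{}
	remaining := replicas
	// First pass: retain already-ready Pods, capped by maxReplicas.
	for _, s := range subsets {
		keep := min32(s.readyPods, remaining)
		if s.maxReplicas >= 0 {
			keep = min32(keep, s.maxReplicas)
		}
		result[s.name] = keep
		remaining -= keep
	}
	// Second pass: hand out any remaining replicas in subset order.
	for _, s := range subsets {
		if remaining == 0 {
			break
		}
		extra := remaining
		if s.maxReplicas >= 0 {
			extra = min32(extra, s.maxReplicas-result[s.name])
		}
		result[s.name] += extra
		remaining -= extra
	}
	return result
}

func min32(a, b int32) int32 {
	if a < b {
		return a
	}
	return b
}

func main() {
	subsets := []subsetInfo{
		{"subset-a", 2, 2},  // 2 ready Pods
		{"subset-b", 0, 2},  // 2 Pods stuck Pending, none ready
		{"subset-c", 1, -1}, // 1 ready Pod, no cap
	}
	fmt.Println(allocate(subsets, 3)) // map[subset-a:2 subset-b:0 subset-c:1]
}
```

Scaling from five Pods down to three here drops subset-b's pending Pods and keeps subset-c's ready Pod; strict reverse-order scale-down would instead remove subset-c's ready Pod first.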

Ⅱ. Does this pull request fix one issue?

fixes #1673

Ⅲ. Describe how to verify it

Use the YAML below to create a UnitedDeployment in which subset-b is unschedulable (its node selector matches no existing node).

apiVersion: apps.kruise.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: sample-ud
spec:
  replicas: 5
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sample
  template:
    deploymentTemplate:
      metadata:
        labels:
          app: sample
      spec:
        selector:
          matchLabels:
            app: sample
        template:
          metadata:
            labels:
              app: sample
          spec:
            terminationGracePeriodSeconds: 0
            containers:
              - name: nginx
                image: curlimages/curl:8.8.0
                command: ["/bin/sleep", "infinity"]
  topology:
    scheduleStrategy:
      type: Adaptive
      adaptive:
        rescheduleCriticalSeconds: 10
        unschedulableLastSeconds: 20

    subsets:
      - name: subset-a
        maxReplicas: 2
        nodeSelectorTerm:
          matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - ci-testing-worker
      - name: subset-b
        maxReplicas: 2
        nodeSelectorTerm:
          matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - not-exist
      - name: subset-c
        nodeSelectorTerm:
          matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - ci-testing-worker3
Expected behavior (the two timers involved are sketched in code after these steps):

1. When created, the two Pods in subset-b stay Pending.
2. After 10s, the two pending Pods are rescheduled to subset-c.
3. Scale up immediately: new Pods are created in subset-c instead of subset-b (even though subset-b is not at capacity).
4. Wait 20s until subset-b's unschedulable mark expires, then scale up again: 2 Pods are scheduled into subset-b again (and stay Pending).
5. Whenever you scale down, Pods are removed in the order subset-c -> subset-b -> subset-a.
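The two timers configured above drive this behavior: rescheduleCriticalSeconds decides when a pending Pod marks its subset unschedulable, and unschedulableLastSeconds decides how long the subset stays marked. A toy, runnable sketch of that lifecycle (illustrative names, not the controller's actual code):

```go
package main

import (
	"fmt"
	"time"
)

// subsetState is a toy model of a subset's unschedulable marker.
type subsetState struct {
	unschedulableSince *time.Time
}

// markIfStuck marks the subset unschedulable once a Pod has been pending
// longer than rescheduleCriticalSeconds (step 2 above).
func (s *subsetState) markIfStuck(pendingFor time.Duration, rescheduleCriticalSeconds int32, now time.Time) {
	if s.unschedulableSince == nil && pendingFor >= time.Duration(rescheduleCriticalSeconds)*time.Second {
		s.unschedulableSince = &now
	}
}

// schedulable reports whether new Pods may target the subset again, i.e.
// whether unschedulableLastSeconds have elapsed since it was marked.
func (s *subsetState) schedulable(unschedulableLastSeconds int32, now time.Time) bool {
	if s.unschedulableSince == nil {
		return true
	}
	if now.Sub(*s.unschedulableSince) >= time.Duration(unschedulableLastSeconds)*time.Second {
		s.unschedulableSince = nil // marker expired: try the subset again (step 4)
		return true
	}
	return false
}

func main() {
	var s subsetState
	start := time.Now()
	s.markIfStuck(11*time.Second, 10, start)                  // Pod pending > 10s
	fmt.Println(s.schedulable(20, start))                     // false: avoid subset-b (step 3)
	fmt.Println(s.schedulable(20, start.Add(21*time.Second))) // true: retry subset-b (step 4)
}
```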

Ⅳ. Special notes for reviews

  1. adapter.go: GetReplicaDetails now also returns the Pods in the subset
  2. xxx_adapter.go: the per-workload implementations of the Pod-returning change above
  3. allocator.go: safeReplica-related allocation logic
  4. pod_condition_utils.go: the PodUnscheduledTimeout function extracted from workloadspread
  5. reschedule.go: uses the extracted PodUnscheduledTimeout function (a sketch of such a helper follows this list)
  6. subset.go: adds fields to the Subset object to carry related information
  7. subset_control.go: stores the subset's Pods in the Subset object
  8. uniteddeployment_controller.go:
    1. adds a requeue mechanism to re-check failed Pods
    2. manages the subsets' unschedulable status
  9. uniteddeployment_types.go: API changes
  10. uniteddeployment_update.go: syncs the unschedulable status to the CR
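As a reference for items 4 and 5, here is a minimal sketch of what an extracted PodUnscheduledTimeout helper could look like; the package name and exact signature are assumptions, not necessarily the PR's code:

```go
// Sketch only: package and signature are assumptions.
package utils

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// PodUnscheduledTimeout reports whether pod has carried a PodScheduled=False
// condition with reason Unschedulable for at least the given timeout.
func PodUnscheduledTimeout(pod *corev1.Pod, timeout time.Duration, now time.Time) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodScheduled &&
			cond.Status == corev1.ConditionFalse &&
			cond.Reason == corev1.PodReasonUnschedulable {
			return now.Sub(cond.LastTransitionTime.Time) >= timeout
		}
	}
	return false
}
```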

@kruise-bot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign fei-guo for approval by writing /assign @fei-guo in a comment. For more information see: The Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


codecov bot commented Sep 2, 2024

Codecov Report

Attention: Patch coverage is 48.66310% with 96 lines in your changes missing coverage. Please review.

Project coverage is 49.10%. Comparing base (0d0031a) to head (a50e39e).
Report is 83 commits behind head on master.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| ...er/uniteddeployment/uniteddeployment_controller.go | 59.37% | 32 Missing and 7 partials ⚠️ |
| ...til/expectations/comparable_version_expectation.go | 0.00% | 35 Missing ⚠️ |
| ...roller/uniteddeployment/uniteddeployment_update.go | 16.66% | 8 Missing and 2 partials ⚠️ |
| pkg/controller/uniteddeployment/allocator.go | 80.64% | 4 Missing and 2 partials ⚠️ |
| pkg/controller/workloadspread/reschedule.go | 50.00% | 1 Missing and 1 partial ⚠️ |
| ...deployment/adapter/advanced_statefulset_adapter.go | 0.00% | 1 Missing ⚠️ |
| ...oller/uniteddeployment/adapter/cloneset_adapter.go | 0.00% | 1 Missing ⚠️ |
| ...ler/uniteddeployment/adapter/deployment_adapter.go | 0.00% | 1 Missing ⚠️ |
| ...er/uniteddeployment/adapter/statefulset_adapter.go | 0.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1720      +/-   ##
==========================================
+ Coverage   47.91%   49.10%   +1.19%     
==========================================
  Files         162      192      +30     
  Lines       23491    19704    -3787     
==========================================
- Hits        11256     9676    -1580     
+ Misses      11014     8766    -2248     
- Partials     1221     1262      +41     
| Flag | Coverage Δ |
| --- | --- |
| unittests | 49.10% <48.66%> (+1.19%) ⬆️ |

Flags with carried forward coverage won't be shown.

@AiRanthem force-pushed the feature/ud-adaptive-240827 branch 2 times, most recently from e58f279 to 1cc7c87, on September 2, 2024 at 06:45
@zmberg zmberg added this to the 1.8 milestone Sep 3, 2024
@kruise-bot added size/XL (500-999) and removed size/XXL labels Sep 3, 2024
@kruise-bot added size/XXL and removed size/XL (500-999) labels Sep 4, 2024
Signed-off-by: AiRanthem <[email protected]>
@@ -252,6 +322,10 @@ type UnitedDeploymentStatus struct {
    // +optional
    SubsetReplicas map[string]int32 `json:"subsetReplicas,omitempty"`

    // Record whether each subset is unschedulable.
    // +optional
    SubsetUnschedulable *SubsetUnschedulable `json:"subsetUnschedulable,omitempty"`
Member: Consider changing the field subsetUnschedulable to a slice of SubsetStatus.

    return false
}

type UnschedulableStatus struct {
Member: Change Unschedulable to a slice of SubsetCondition, so that we can record other possible conditions besides the Schedulable condition.
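One possible shape for that suggestion, modeled on the standard Kubernetes condition pattern; the package name and fields are assumptions, not the PR's code:

```go
package v1alpha1 // assumed API package

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// SubsetCondition is a hypothetical per-subset condition, following the
// usual Kubernetes condition layout so conditions other than Schedulable
// could be recorded later.
type SubsetCondition struct {
	Type               string                 `json:"type"`   // e.g. "Schedulable"
	Status             corev1.ConditionStatus `json:"status"` // True, False, or Unknown
	LastTransitionTime metav1.Time            `json:"lastTransitionTime,omitempty"`
	Reason             string                 `json:"reason,omitempty"`
	Message            string                 `json:"message,omitempty"`
}
```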

    // +optional
    UnschedulableTimestamp *metav1.Time `json:"unschedulableTimestamp,omitempty"`
    // +optional
    PendingPods int32 `json:"-"`
Member: Do we need to record pending pods here?
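Worth noting for this question: the `json:"-"` tag means PendingPods is never serialized into the CR status at all; it is in-memory bookkeeping only. A small self-contained illustration (the main package is just for demonstration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Same field layout as the snippet under review.
type UnschedulableStatus struct {
	UnschedulableTimestamp *metav1.Time `json:"unschedulableTimestamp,omitempty"`
	PendingPods            int32        `json:"-"`
}

func main() {
	now := metav1.NewTime(time.Now())
	b, _ := json.Marshal(UnschedulableStatus{UnschedulableTimestamp: &now, PendingPods: 3})
	// Prints only {"unschedulableTimestamp":"..."}; pendingPods is omitted.
	fmt.Println(string(b))
}
```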

AiRanthem added 4 commits September 13, 2024 16:57
Signed-off-by: AiRanthem <[email protected]>
Signed-off-by: AiRanthem <[email protected]>
Signed-off-by: AiRanthem <[email protected]>
Signed-off-by: AiRanthem <[email protected]>
Successfully merging this pull request may close these issues.

[feature request] UnitedDeployment support reschedule pod to other subset if current subset lacks resources