ctrl: HCP adaptations for NodeGroup validation and RTE #1027
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: shajmakh

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
#1018 is needed first
Force-pushed from 35e2ad6 to 82e9115
Force-pushed from 82e9115 to a0c30fd
depends on #1015
I think we need to come up with a better approach than all the branching in the flow per platform type. I'll try to experiment with the code and will provide some feedback soon.
Let's add the missing FindTree alias per the inline comment and we can merge.
It's more important to fix CI than polish, but we need to polish as soon as we have CI running.
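For context, an alias like the one requested above can be as small as re-exporting a helper under the expected name. This is only a hypothetical sketch: the inline comment and the real FindTree signature are not visible in this excerpt, so the package, types, and signature below are assumptions.

```go
package nodegroups

// Tree is a stand-in for the operator's node-group tree type.
type Tree struct {
	PoolName string
}

// findTree is a stand-in for the existing lookup helper the review refers to.
func findTree(pool string, trees []Tree) *Tree {
	for i := range trees {
		if trees[i].PoolName == pool {
			return &trees[i]
		}
	}
	return nil
}

// FindTree re-exports the helper under the exported name the review asks for.
var FindTree = findTree
```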
@@ -257,11 +260,64 @@ func (em *ExistingManifests) State(mf rtemanifests.Manifests, updater GenerateDe
 	}

+	for _, tree := range em.trees {
+		for _, mcp := range tree.MachineConfigPools {
+			if em.plat == platform.OpenShift {
We can merge to actually get CI running, but this code is now untenable; we need to plan a cleanup, possibly a redesign, ASAP.
will add an issue/card to track this work
 if err := validation.NodeGroups(instance.Spec.NodeGroups, r.Platform); err != nil {
 	return r.updateStatus(ctx, instance, status.ConditionDegraded, validation.NodeGroupsError, err)
 }

-trees, err := getTreesByNodeGroup(ctx, r.Client, instance.Spec.NodeGroups)
+trees, err := getTreesByNodeGroup(ctx, r.Client, instance.Spec.NodeGroups, r.Platform)
 if err != nil {
 	return r.updateStatus(ctx, instance, status.ConditionDegraded, validation.NodeGroupsError, err)
 }

-multiMCPsErr := validation.MultipleMCPsPerTree(instance.Annotations, trees)
+var multiMCPsErr error
+if r.Platform == platform.OpenShift {
+	multiMCPsErr = validation.MultipleMCPsPerTree(instance.Annotations, trees)

-if err := validation.MachineConfigPoolDuplicates(trees); err != nil {
-	return r.updateStatus(ctx, instance, status.ConditionDegraded, validation.NodeGroupsError, err)
+	if err := validation.MachineConfigPoolDuplicates(trees); err != nil {
+		return r.updateStatus(ctx, instance, status.ConditionDegraded, validation.NodeGroupsError, err)
+	}
All the validation code should be abstracted (in the aforementioned future cleanup, once CI runs) into a platform-specific function, instead of the many if platform == branches we have today; see the sketch after this exchange for one possible shape.
will add an issue/card to track this work
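One possible shape for that cleanup, as a minimal sketch: the Tree and Platform types and the two check functions below are stand-ins for the operator's real ones, not its actual API.

```go
package validation

// Platform and Tree are simplified stand-ins for the operator's real types.
type Platform string

const (
	OpenShift  Platform = "OpenShift"
	HyperShift Platform = "HyperShift"
)

type Tree struct {
	MachineConfigPools []string // placeholder for the real MCP objects
}

// Stand-ins for the existing checks referenced in the diff above.
func MultipleMCPsPerTree(ann map[string]string, trees []Tree) error { return nil }
func MachineConfigPoolDuplicates(trees []Tree) error                { return nil }

// Validator gathers every platform-specific NodeGroup check behind one
// method, so the reconciler branches on the platform exactly once.
type Validator interface {
	ValidateTrees(ann map[string]string, trees []Tree) error
}

type openShiftValidator struct{}

func (openShiftValidator) ValidateTrees(ann map[string]string, trees []Tree) error {
	if err := MultipleMCPsPerTree(ann, trees); err != nil {
		return err
	}
	return MachineConfigPoolDuplicates(trees)
}

// hyperShiftValidator: no MachineConfigPools, so the MCP checks disappear.
type hyperShiftValidator struct{}

func (hyperShiftValidator) ValidateTrees(map[string]string, []Tree) error { return nil }

// ForPlatform is called once during reconciler setup; call sites never branch.
func ForPlatform(plat Platform) Validator {
	if plat == HyperShift {
		return hyperShiftValidator{}
	}
	return openShiftValidator{}
}
```

The reconciler would then call validation.ForPlatform(r.Platform).ValidateTrees(instance.Annotations, trees) once, instead of sprinkling platform checks through the flow.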
It's possible to rebase this one on top of #1071 to make sure CI is happy
Force-pushed from f033f45 to 7f40dbf
Force-pushed from 7f40dbf to 75c629e
Adapt the operator to run on the HyperShift platform by replacing the MCP context with PoolName where relevant, in addition to extra NodeGroup validation on HyperShift. Signed-off-by: Shereen Haj <[email protected]>
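The PoolName replacement this commit message describes could look roughly like the sketch below; the field and platform names are assumptions for illustration, not the operator's actual API.

```go
package controller

import "fmt"

// NodeGroup is a simplified stand-in for the operator's NodeGroup spec.
type NodeGroup struct {
	MachineConfigPoolName string // OpenShift: name of the matched MCP
	PoolName              string // HyperShift: the pool is named directly
}

// resolvePoolName hides whether the platform has MachineConfigPools at all:
// callers work with a plain pool name from here on.
func resolvePoolName(plat string, ng NodeGroup) (string, error) {
	if plat == "HyperShift" {
		if ng.PoolName == "" {
			return "", fmt.Errorf("node group has no pool name set")
		}
		return ng.PoolName, nil
	}
	return ng.MachineConfigPoolName, nil
}
```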
The controller reconciliation should reflect the node group statuses after verifying all components are up and healthy. To preserve backward compatibility, for OCP we copy the node group config from MachineConfigPools[].Config to NodeGroupStatus[].Config. For HyperShift we don't have MCPs, so we copy that info from NodeGroup[].Config. That is safe because we know the controller loop will exit if a node group is misconfigured. Signed-off-by: Shereen Haj <[email protected]>
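A rough sketch of that backward-compatible status fill, with simplified stand-in types (the real operator types and field names differ):

```go
package controller

// NodeGroupConfig is a stand-in for the per-pool tuning configuration.
type NodeGroupConfig struct {
	// tuning knobs elided
}

// NodeGroup is a simplified spec entry; on HyperShift it carries the config.
type NodeGroup struct {
	PoolName string
	Config   *NodeGroupConfig
}

// MCPStatus is a stand-in for the per-MachineConfigPool status kept on OCP.
type MCPStatus struct {
	Name   string
	Config *NodeGroupConfig
}

// NodeGroupStatus is what the controller reports per node group.
type NodeGroupStatus struct {
	PoolName string
	Config   *NodeGroupConfig
}

// fillNodeGroupStatuses mirrors the commit message: on OpenShift the config
// is copied from the matched MachineConfigPool status; on HyperShift there
// are no MCPs, so it is copied from the NodeGroup spec itself. The latter is
// safe because reconciliation already bailed out on misconfigured groups.
func fillNodeGroupStatuses(plat string, groups []NodeGroup, mcps []MCPStatus) []NodeGroupStatus {
	statuses := make([]NodeGroupStatus, 0, len(groups))
	for _, ng := range groups {
		st := NodeGroupStatus{PoolName: ng.PoolName}
		if plat == "HyperShift" {
			st.Config = ng.Config
		} else {
			for _, mcp := range mcps {
				if mcp.Name == ng.PoolName {
					st.Config = mcp.Config
					break
				}
			}
		}
		statuses = append(statuses, st)
	}
	return statuses
}
```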
Force-pushed from 75c629e to bcac52d
/lgtm
/override ci/prow/ci-e2e-install-hypershift
this is still expected
@ffromani: Overrode contexts on behalf of ffromani: ci/prow/ci-e2e-install-hypershift
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.