k8s: Make it work even if kubectl is not installed on the machine #1298
base: main
Conversation
There's code in the k8s provider that takes care of pulling the kubectl binary from the cluster and using it for subsequent operations; however, right before that happens, a few settings are changed in kubeconfig, which means that the host system has to provide its own kubectl binary. Invert the order of these operations, and make sure that we always invoke the binary obtained from the cluster. This way, it is possible to successfully bring up a cluster on a machine that doesn't have kubectl installed.

Signed-off-by: Andrea Bolognani <[email protected]>
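In shell terms, the intended ordering looks roughly like this (a sketch based on the snippet discussed below; the retrieval step is provider-specific and elided here, and _main_ip and _port are the helpers already used by the provider scripts):

    # (1) First obtain the kubectl binary from the running cluster
    #     (provider-specific; elided) and point $kubectl at it.
    kubectl="${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kubectl"

    # (2) Only then adjust kubeconfig, and always through the downloaded
    #     binary, so the host never needs its own kubectl.
    export KUBECONFIG=${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kubeconfig
    $kubectl config set-cluster kubernetes --server="https://$(_main_ip):$(_port k8s)"
    $kubectl config set-cluster kubernetes --insecure-skip-tls-verify=true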
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
I should note that I'm forcing the use of Docker by having … in my environment, but the change should be logically sound regardless of runtime.
@andreabolognani: The following tests failed, say /retest to rerun all failed tests:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
    # Set server and disable tls check
    export KUBECONFIG=${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kubeconfig
    ${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kubectl config set-cluster kubernetes --server="https://$(_main_ip):$(_port k8s)"
    ${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kubectl config set-cluster kubernetes --insecure-skip-tls-verify=true
We get the $kubectl variable from cluster/ephemeral-provider-common.sh, and it is used in other places in this file too. Would just changing that be enough, or do you need to move it down?

I've also noticed the kubectl usage on the if podman branch above; a similar issue would probably happen there too.
Good catch.
There's definitely a chicken-and-egg problem here: in the case of podman, ${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kubectl (pointed to by the $kubectl variable) is downloaded for a specific k8s version in order to match the running cluster, but that in turn requires calling kubectl...

We could look into whether containers/conmon#315, which is mentioned as the reason for this workaround existing, is still a problem in practice, or whether we can just unify the code and avoid the additional call to kubectl.
Alternatively, it might be possible to obtain the value of .status.nodeInfo.kubeletVersion without needing kubectl by using curl against some HTTP endpoint? I'm not sure how feasible that would be.
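For example, something along these lines might work (an untested sketch; it assumes the API server is reachable at the address already used in kubeconfig, that jq is available, and that the request is authorized, e.g. via a bearer token in $token):

    # Hypothetical: read the kubelet version of the first node straight
    # from the Kubernetes API, without going through kubectl.
    # An Authorization header (or client certificate) is likely required.
    curl --insecure --silent \
         --header "Authorization: Bearer $token" \
         "https://$(_main_ip):$(_port k8s)/api/v1/nodes" \
        | jq -r '.items[0].status.nodeInfo.kubeletVersion'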
Moving the calls down would still be necessary though, because if we left things as they are right now we would be calling ${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kubectl before having had the chance to create the file in question...
So, on the podman topic, my guess is that this has been fixed, based on the comment at google/oss-fuzz#4774 (comment).

Looking at containers/conmon#315 (comment), it seems that this was reproducible with conmon 2.1.0, but according to the other comment it was fixed in 2.1.5.

I think we are doing some good tests with podman, so if this breaks something, we will know quite fast too.
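If we wanted to be defensive about it, the scripts could keep the workaround only for older conmon versions (a sketch; the 2.1.5 threshold comes from the linked comment, and the version-string parsing is an assumption about conmon's --version output):

    # Hypothetical guard: skip the workaround once conmon >= 2.1.5.
    conmon_version=$(conmon --version | awk '/^conmon version/ {print $3}')
    if printf '%s\n' "2.1.5" "$conmon_version" | sort -V -C; then
        echo "conmon $conmon_version is new enough, workaround not needed"
    fi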
PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What this PR does / why we need it:
Makes it possible to bring up a cluster on a machine where kubectl is not installed.

Which issue(s) this PR fixes:
Fixes #1293
Release note: