
k8s: Make it work even if kubectl is not installed on the machine #1298

Open · wants to merge 1 commit into base: main

Conversation

andreabolognani

What this PR does / why we need it:

Makes it possible to bring up a cluster on a machine where kubectl is not installed.

Which issue(s) this PR fixes:

Fixes #1293

Release note:

NONE

There's code in the k8s provider that takes care of pulling the
kubectl binary from the cluster and using it for subsequent
operations; however, right before that happens, a few settings
are changed in the kubeconfig, which means that the host system
still needs to provide its own kubectl binary.

Invert the order of these operations, and make sure that we
always invoke the binary obtained from the cluster. This way, it
is possible to successfully bring up a cluster on a machine that
doesn't have kubectl installed.

Signed-off-by: Andrea Bolognani <[email protected]>
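
In effect, the commit boils down to reordering these steps. A minimal sketch of the fixed flow (hedged: the copy step and node container name below are illustrative, not the provider script's actual code):

```bash
kubectl="${KUBEVIRTCI_CONFIG_PATH}/${KUBEVIRT_PROVIDER}/.kubectl"

# 1. First pull the kubectl binary out of the running cluster
#    (hypothetical node container name; the real script derives it).
docker cp "${KUBEVIRT_PROVIDER}-node01:/usr/bin/kubectl" "${kubectl}"
chmod +x "${kubectl}"

# 2. Only then adjust the kubeconfig, invoking the cluster-provided binary
#    instead of whatever kubectl the host may (or may not) have installed.
export KUBECONFIG="${KUBEVIRTCI_CONFIG_PATH}/${KUBEVIRT_PROVIDER}/.kubeconfig"
"${kubectl}" config set-cluster kubernetes --server="https://$(_main_ip):$(_port k8s)"
"${kubectl}" config set-cluster kubernetes --insecure-skip-tls-verify=true
```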
@kubevirt-bot kubevirt-bot added the dco-signoff: yes Indicates the PR's author has DCO signed all their commits. label Oct 11, 2024
@kubevirt-bot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign phoracek for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@andreabolognani
Author

I should note that I'm forcing the use of Docker by having

KUBEVIRT_CRI=docker
KUBEVIRTCI_RUNTIME=docker

in my environment, but the change should be logically sound regardless of runtime.
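
For reference, that amounts to something like this before bringing the cluster up (a sketch assuming the usual kubevirtci workflow; the provider value is illustrative):

```bash
# Force the Docker runtime instead of letting kubevirtci autodetect podman.
export KUBEVIRT_CRI=docker
export KUBEVIRTCI_RUNTIME=docker
export KUBEVIRT_PROVIDER=k8s-1.30   # illustrative provider
make cluster-up
```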

@kubevirt-bot
Contributor

@andreabolognani: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| check-up-kind-1.30-vgpu | 80c6f8e | link | false | /test check-up-kind-1.30-vgpu |
| check-up-kind-sriov | 80c6f8e | link | true | /test check-up-kind-sriov |
| check-provision-k8s-1.31 | 80c6f8e | link | true | /test check-provision-k8s-1.31 |
| check-provision-k8s-1.30 | 80c6f8e | link | true | /test check-provision-k8s-1.30 |
| check-provision-k8s-1.29 | 80c6f8e | link | true | /test check-provision-k8s-1.29 |

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

```bash
# Set server and disable tls check
export KUBECONFIG=${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kubeconfig
${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kubectl config set-cluster kubernetes --server="https://$(_main_ip):$(_port k8s)"
${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kubectl config set-cluster kubernetes --insecure-skip-tls-verify=true
```
Member


We get the $kubectl variable from cluster/ephemeral-provider-common.sh, and it is used elsewhere in this file too. Would just changing that be enough, or do you need to move it down?

I've also noticed the kubectl usage on the if podman branch above; a similar issue would probably happen there too.
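
(For illustration, a sketch of what "just changing that" would look like, assuming $kubectl is the variable set in cluster/ephemeral-provider-common.sh:)

```bash
# Same kubeconfig tweaks as the quoted snippet, but going through the
# $kubectl variable instead of spelling out the path each time.
export KUBECONFIG=${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kubeconfig
${kubectl} config set-cluster kubernetes --server="https://$(_main_ip):$(_port k8s)"
${kubectl} config set-cluster kubernetes --insecure-skip-tls-verify=true
```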

Author


Good catch.

There's definitely a chicken-and-egg problem here: in the case of podman, ${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kubectl (pointed to by the $kubectl variable) is downloaded for a specific k8s version in order to match the running cluster, but that in turn requires calling kubectl...

We could look into whether containers/conmon#315, which is mentioned as the reason for this workaround existing, is still a problem in practice or whether we can just unify the code and avoid the additional call to kubectl.

Alternatively, it might be possible to obtain the value of .status.nodeInfo.kubeletVersion without needing kubectl by using curl against some HTTP endpoint? I'm not sure how feasible that would be.
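
(For illustration, a rough and untested sketch of that idea; it assumes the request against the API server is authorized, which plain anonymous access typically is not:)

```bash
# Hypothetical: read kubeletVersion straight from the API server with curl,
# avoiding the kubectl dependency entirely. _main_ip and _port are the
# helpers already used by the kubevirtci scripts; authentication (token or
# client certificate) is glossed over here.
api="https://$(_main_ip):$(_port k8s)"
curl -sk "${api}/api/v1/nodes" | grep -o '"kubeletVersion": *"[^"]*"' | head -1
```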

Moving the calls down would still be necessary though, because if we left things as they are right now we would be calling ${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kubectl before having had the chance to create the file in question...

Member


So, on the podman topic, my guess is this has been fixed based on the comment... google/oss-fuzz#4774 (comment)

Looking at containers/conmon#315 (comment), it seems this was reproducible with conmon 2.1.0, but according to the other comment it was fixed in 2.1.5.

I think we run some good tests with podman, so if this breaks something we will know quite fast.

@kubevirt-bot kubevirt-bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 8, 2024
@kubevirt-bot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Labels
dco-signoff: yes · needs-rebase · size/S
Development

Successfully merging this pull request may close these issues.

kubectl.sh, bug: possibly broken - run kubectl.sh without having kubectl installed on your machine