
Conversation

@nojnhuh (Contributor) commented Jan 14, 2025

This change adds new steps to the e2e test to verify that each container in each pod emits logs showing the presence of any GPU_DEVICE_* environment variables. I checked that this at least catches the case where the CDI config is not set up properly, but I didn't try all of the other possible failure modes here.

Fixes #57
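
To illustrate the kind of check being added, here is a minimal sketch (not the exact script in this PR; the namespace, pod, and container names are taken from the gpu-test1 snippet further down):

# Verify that the container's logs mention at least one GPU_DEVICE_* variable.
gpu_test1_pod0_ctr0_logs=$(kubectl logs -n gpu-test1 pod0 -c ctr0)
if ! echo "$gpu_test1_pod0_ctr0_logs" | grep -q "GPU_DEVICE_"; then
    echo "Expected GPU_DEVICE_* environment variables in gpu-test1/pod0 ctr0 logs"
    exit 1
fi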

@k8s-ci-robot k8s-ci-robot requested review from elezar and pohly January 14, 2025 18:47
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Jan 14, 2025

kubectl wait --for=condition=Ready -n gpu-test4 pod/pod0 --timeout=120s
kubectl wait --for=condition=Ready -n gpu-test4 pod/pod1 --timeout=120s
gpu_test_4=$(kubectl get pods -n gpu-test4 | grep -c 'Running')
if [ $gpu_test_4 != 2 ]; then
-echo "gpu_test_4 $gpu_test_4 failed to match against 1 expected pods"
+echo "gpu_test_4 $gpu_test_4 failed to match against 2 expected pods"
@nojnhuh (Contributor Author) commented:

FYI I did sneak in this typo fix also.

@elezar (Contributor) commented Jan 15, 2025

@nojnhuh this is definitely an improvement, thanks.

Do you have a feel for the effort required to actually test the expected behaviour as called out in the README?

@nojnhuh (Contributor Author) commented Jan 15, 2025

> Do you have a feel for the effort required to actually test the expected behaviour as called out in the README?

Do you mean checking at least as far as whether certain containers have the same or different GPUs allocated based on the environment variables defined in each container? And that the correct TimeSlicing/SpacePartitioning parameters are set? I could definitely do that, but it might get a little messy in a bash script.
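
For illustration, a rough bash sketch of that kind of comparison, assuming a pod whose two containers both print their environment (the namespace, pod, and container names here are placeholders, not necessarily those the README uses):

# Extract the GPU_DEVICE_* lines from each container's logs and compare them.
ctr0_gpus=$(kubectl logs -n gpu-test2 pod0 -c ctr0 | grep "GPU_DEVICE_" | sort)
ctr1_gpus=$(kubectl logs -n gpu-test2 pod0 -c ctr1 | grep "GPU_DEVICE_" | sort)
if [ "$ctr0_gpus" != "$ctr1_gpus" ]; then
    echo "Expected ctr0 and ctr1 in gpu-test2/pod0 to be allocated the same GPU"
    exit 1
fi

Checking the TimeSlicing/SpacePartitioning parameters could follow the same pattern, grepping the logs for the corresponding values.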

@nojnhuh (Contributor Author) commented Jan 15, 2025

> Do you have a feel for the effort required to actually test the expected behaviour as called out in the README?
>
> Do you mean checking at least as far as whether certain containers have the same or different GPUs allocated based on the environment variables defined in each container? And that the correct TimeSlicing/SpacePartitioning parameters are set? I could definitely do that, but it might get a little messy in a bash script.

I pushed these changes in a new commit which I plan to squash if we're good with those changes.

/hold

@k8s-ci-robot k8s-ci-robot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Jan 15, 2025
@@ -35,6 +72,36 @@ if [ $gpu_test_1 != 2 ]; then
exit 1
fi

+gpu_test1_pod0_ctr0_logs=$(kubectl logs -n gpu-test1 pod0 -c ctr0)
@nojnhuh (Contributor Author) commented:

I'm very open to suggestions for ways to make this less of a copy-paste nightmare.
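
One way to reduce the repetition might be a small helper function along these lines (purely a sketch, not part of this PR):

# Fail unless the given container's logs contain a GPU_DEVICE_* variable.
expect_gpu_env() {
    local namespace=$1 pod=$2 container=$3
    if ! kubectl logs -n "$namespace" "$pod" -c "$container" | grep -q "GPU_DEVICE_"; then
        echo "$namespace/$pod $container: no GPU_DEVICE_* environment variables found in logs"
        exit 1
    fi
}

expect_gpu_env gpu-test1 pod0 ctr0
expect_gpu_env gpu-test1 pod1 ctr0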

@nojnhuh (Contributor Author) commented Jan 29, 2025

@elezar This should be ready for another review, PTAL.

@elezar (Contributor) left a comment:

Thanks @nojnhuh.

This is quite verbose, but I suppose that's a side-effect of using bash instead of writing the tests in Ginkgo -- which is out of scope for this PR.

LGTM

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 3, 2025
@nojnhuh (Contributor Author) commented Feb 3, 2025

Squashed the commits; I didn't make any other changes in the process, as the diff shows.

@elezar (Contributor) left a comment:

Lgtm

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: elezar, nojnhuh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@nojnhuh (Contributor Author) commented Feb 11, 2025

Forgot to remove the hold here once I squashed.

/hold cancel

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Feb 11, 2025
@pohly (Contributor) commented Feb 19, 2025

/lgtm

Based on #73 (review).

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 19, 2025
@k8s-ci-robot k8s-ci-robot merged commit 892b7d0 into kubernetes-sigs:main Feb 19, 2025
6 checks passed
MaskerPRC pushed a commit to cosdt/ascend-dra-driver that referenced this pull request Feb 21, 2025
This reverts commit 892b7d0, reversing changes made to cfe7e11.
@nojnhuh nojnhuh deleted the e2e-env branch June 30, 2025 05:45
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. lgtm "Looks good to me", indicates that a PR is ready to be merged. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Enhance e2e tests to detect GPU devices
4 participants