File the pull request with information about the latest built and pushed image for the model #137
Conversation
- name: get-image-sha
  image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
  script: |
    oc get -n $(params.namespace) pipelinerun --selector tekton.dev/pipeline=test-mlflow-image --sort-by=.status.completionTime -o jsonpath='{range .items[?(@.spec.params[0].name == "model-name")]}{.spec.params[?(@.name == "model-name")].value} {.status.results[?(@.name == "image-sha")].value} {.status.results[?(@.name == "target-registry-url")].value}{"\n"}{end}' | awk -v model=$(params.model-name) '$1 == model && NF == 3 { print $2, $3 }' | tail -1 | tee /dev/stderr | while read sha registry ; do echo -n "$sha" > $(results.image-sha.path) ; echo -n "$registry" > $(results.target-registry-url.path) ; done ;
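The selection step in that pipe can be sketched in isolation. The snippet below runs the same `awk`/`tail` filter against canned text instead of a live cluster; each line mimics the jsonpath output (`<model-name> <image-sha> <target-registry-url>`), and the model names, digests, and registry URLs are made up for illustration:

```shell
# Canned stand-in for the jsonpath output of the oc query above.
runs='modelA sha256:aaa quay.io/org/a
modelB sha256:bbb quay.io/org/b
modelB sha256:ccc quay.io/org/b'

model=modelB

# Keep only complete rows for the requested model, then take the newest entry
# (the real oc query sorts by .status.completionTime, so the last line wins).
echo "$runs" | awk -v model="$model" '$1 == model && NF == 3 { print $2, $3 }' | tail -1
# → sha256:ccc quay.io/org/b
```

The `NF == 3` guard skips rows from PipelineRuns that never produced both results (e.g. failed runs), so a stale or incomplete run cannot shadow the latest successful one.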
Since a Task is supposed to be a reusable, self-contained action, I think this script should stay embedded in the Pipeline, since it is hardcoded to `test-mlflow-image`.

Instead of querying a PipelineRun, can we simplify this workflow to rely on a known in-cluster ImageTag, or a ConfigMap that holds the current ImageStreamTag, to reference for this Pipeline step?
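The ConfigMap variant of this suggestion could look roughly like the step below. This is a hypothetical sketch, not code from the PR: the ConfigMap name `tested-images` and the convention of keying entries by model name are made up for illustration.

```yaml
# Hypothetical alternative: the testing Pipeline records the tested image in a
# ConfigMap, and this step just reads it back (names are illustrative).
- name: get-image-sha
  image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
  script: |
    oc get -n $(params.namespace) configmap tested-images \
      -o jsonpath="{.data.$(params.model-name)}" \
      > $(results.image-sha.path)
```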
The problem is, with a task defined merely as an entry in `tasks` in the `Pipeline`, we are not able to have `results` there, IIRC. The only place I found a `results` definition working is when I created it as a separate `Task`.
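For context, the shape being described is a standalone Task that declares `results` in its spec and writes them from a step. This is a minimal hedged sketch, not the PR's actual Task; the names and the placeholder digest are illustrative.

```yaml
# Minimal sketch: results are declared on the Task and written by the step
# via the $(results.<name>.path) file (identifiers are illustrative).
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: get-image-sha
spec:
  results:
    - name: image-sha
      description: Digest of the image tested by the previous PipelineRun
  steps:
    - name: lookup
      image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
      script: |
        echo -n "sha256:example" > $(results.image-sha.path)
```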
Yes, the task hardcodes `test-mlflow-image` -- but we have to hardcode something to be able to reliably pick up the values from the previous PipelineRuns. I can turn it into a default of a parameter, if that's what you are suggesting.

The problem with using an ImageTag: we want to record in the pull request the SHA-256 of the exact image that was tested by the `test-mlflow-image` Pipeline, and the tag can change between the testing PipelineRun running the application and this PipelineRun finding the tag. That's why the whole logic of providing that value goes through `results` in the PipelineRuns.
Or do you suggest for `test-mlflow-image` to set some very-well-known tag like `:testing-passed` any time the tests pass?
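That tag-based alternative could be sketched as a final step of the testing Pipeline that re-points a well-known tag at the image which just passed. This is purely hypothetical; the image stream layout and parameter names are invented for illustration, and it would still leave a window where the tag moves between runs.

```yaml
# Hypothetical Task step: after tests pass, point a well-known tag at the
# exact digest that was tested (names and params are made up).
- name: mark-tested
  image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
  script: |
    oc tag -n $(params.namespace) \
      $(params.model-name)@$(params.image-sha) \
      $(params.model-name):testing-passed
```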
Of course, my ideal approach would be to do it all in one Pipeline, test and file the pull request. But @piotrpdev was not fond of that.
> Or do you suggest for `test-mlflow-image` to set some very-well-known tag like `:testing-passed` any time the tests pass?

Your intent is sound, but in the event we have to pass some config between PipelineRuns, then we should use a ConfigMap or an annotation on the target object.
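The annotation flavor of this suggestion could look like the step below. This is a hedged sketch of the idea, not agreed-upon code: the annotation key `mlflow.example.com/tested-sha` and the choice of annotating the ImageStream are assumptions made for illustration.

```yaml
# Hypothetical step: the testing PipelineRun records the tested digest as an
# annotation on the target object, so later runs can read it from there.
- name: record-tested-sha
  image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
  script: |
    oc annotate -n $(params.namespace) --overwrite \
      imagestream $(params.model-name) \
      mlflow.example.com/tested-sha=$(params.image-sha)
```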
I merged the changes after approval by @piotrpdev and after filing #146 for the follow-up work. Let's consider this an interim quick fix for the #125 problem and a baseline for further work.
Yes, we will need a rebase now that #135 got merged.
Rebased on main on top of merged #135.
No need to do it manually.
Rebased on top of main.
@piotrpdev Thanks for filing #146. Now the question is -- should we rework this PR into the one-pipeline approach, or can we merge this to fix the specific problem (the recording problem of #125)?

My concern at this point is to have a working end-to-end workflow, with solid documentation, as a basis for further development. So while I would happily work on #146, as that approach was my preference for the end solution, I might propose to just merge what we have here and build on top of that.
I agree with this approach 👍.

Might want to wait for other opinions to come in first before merging, e.g. @LaVLaS
Thanks, merging. |
File the pull request with information about the latest built and pushed image for the model
Description
File the pull request with information about the latest built and pushed image for the model retrieved from the PipelineRuns.
Fixes #92.
Fixes #125.
This builds on top of #123.
How Has This Been Tested?
Manually.
Merge criteria: