Replies: 2 comments
- Re-run using a specific task never worked for me either.
- A possible workaround is to put the retry in your task in the workflow if you expect network gremlins to cause failures.
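If the workflow is Orquesta, that retry workaround can be expressed as a task-level `retry` clause. A minimal sketch, with placeholder task and action names:

```yaml
version: 1.0

input:
  - host

tasks:
  provision_host:
    # Hypothetical long-running action that can fail for transient reasons.
    action: my_pack.provision_host host=<% ctx(host) %>
    # Retry a few times before giving up and failing the workflow.
    retry:
      when: <% failed() %>
      count: 3
      delay: 30
```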
-
I have a relatively complex workflow that is doing some provisioning work. Many of the tasks are somewhat long-running and can fail for various reasons. For this reason, all of those tasks use `with-items`, so that the workflow can be re-run using the `--tasks` flag and the `--no-reset` flag. Each subsequent task relies on the published values from the previous tasks to "know" what to do.
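As a rough sketch (Orquesta syntax, with placeholder names rather than my real tasks), the shape of such a task pair is something like:

```yaml
version: 1.0

input:
  - hosts

tasks:
  task1:
    # Fan out one action execution per host.
    with:
      items: host in <% ctx(hosts) %>
    action: my_pack.provision_host host=<% item(host) %>
    next:
      - when: <% succeeded() %>
        # In a with-items task, result() is the list of per-item results.
        publish:
          - provision_results: <% result() %>
        do: task2

  task2:
    # Consumes what task1 published to decide what to do next.
    action: my_pack.configure_hosts results=<% ctx(provision_results) %>
```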
However, I've found that, when using `re-run`, the execution created by the requested `re-run` has identical `results`/`output` to the original execution. The prior execution's information is visible before the requested `re-run` execution is even finished.

This makes sense to me, in that the intention is to re-use the original workflow execution. However, it appears that the `result`/`output` of the requested `re-run` execution does not update the output of the original execution. Essentially, in a case with one or more items where there is a failure, there appears to be no way to actually reference the new execution's results:

In the ☝️ scenario, I would respond by running:
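Roughly (placeholder execution ID, and assuming the failed task is named `task2`):

```shell
# Re-run only the failed task, without resetting the items that already succeeded.
st2 execution re-run <original-execution-id> --tasks task2 --no-reset task2
```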
However, the `published` values of the now-`successful` `task2` for `item1` are either inaccessible, resulting in `TypeErrors`, or, in a simplified workflow where the output of the tasks is not consumed, the `result` of that `re-run` execution is identical to the original execution. I posted about this in the `#community` channel in Slack, but I'm adding it here, just in case there are more eyes on these discussions. The following are some minimal files that set up the scenario I've described, and their output:

`dummy_action2.yaml`:

`dummy_workflow2.yaml`:

`dummy_python_action.yaml`:

`dummy_action.py`:
☝️ I wanted to verify that I wasn't just missing something, so I created this example for my own edification. The `dummy_action.py` Python action uses a value in the datastore to determine whether or not a task spun out by `with-items` that includes `host1` as input succeeds or fails.
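A minimal sketch of that behavior (the datastore key name and output shape here are guesses, not the real `dummy_action.py`):

```python
from st2common.runners.base_action import Action


class DummyAction(Action):
    def run(self, host):
        # Placeholder datastore key; the real key name may differ.
        flag = self.action_service.get_value("dummy_host1_should_succeed")

        # host1 only succeeds when the datastore flag is set to "true".
        if host == "host1" and str(flag).lower() != "true":
            return (False, {"host": host, "status": "failed"})

        return (True, {"host": host, "status": "ok"})
```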
The test goes as follows. I set the datastore value to `false`, ensuring `host1` will fail:

And run the action:
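Roughly, with placeholder key, pack, and parameter names, those two steps look like:

```shell
# Flip the flag that dummy_action.py checks (key name is a placeholder).
st2 key set dummy_host1_should_succeed false

# Kick off the workflow that fans out over the hosts (pack/param names are placeholders).
st2 run my_pack.dummy_workflow2 hosts='["host1", "host2"]'
```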
The output is as expected. The `dummy_python` task fails for `host1`. The `workflow` succeeds because it ends with a `noop` action that always succeeds. I change the datastore value to `true` so that upon `re-run`, `host1` will succeed:

And execute the `re-run`:
The output of these two executions, the original and the `re-run`, are identical, save for timestamps and the table calling out the actions that were actually run in the workflow. Here is the `diff` output:

There are two things that prick my senses. The biggest, by far, is that the new execution's `result`/`output` is nowhere to be found. The now-`succeeded` second execution of `dummy_python_action` against `host1` has a `result`/`output`, but it is nowhere to be found (unless I specifically pull it using `st2`). The fact that the `re-run` execution doesn't have its `output`/`result` updated means that it isn't possible to consume that output to resume a workflow that has tasks beyond these that rely on that output.

I'm uncertain if this is intentional. It is a hindrance for my use case, so I'll need to figure out a way to work around it, but if it is in fact unintentional, I'd be happy to file a bug (and hopefully find some time to work on it). The scenario I've created above is a best case for this particular situation. As I mentioned, in cases where subsequent actions actually try to make use of the `output` from the `re-run` task, it often results in various `TypeErrors`:

With all of this said, the TLDR: should `re-run` executions be updated with the outcomes produced by the `tasks` encapsulated by them? (I think so)