If you experience issues with `ods-pipeline`, the following instructions and tips may help to resolve them.
When you push a commit to Bitbucket, the webhook will fire a request to the configured ODS pipeline manager (`ods-pipeline` deployment). The result of such a webhook request is usually a pipeline run. When this is not the case, a few things might have gone wrong:
- The webhook setting in Bitbucket is not enabled.
- The webhook setting in Bitbucket does not fire on "Push" events.
- The webhook setting in Bitbucket does not point to the route connected to the `ods-pipeline` service.
- The webhook setting in Bitbucket has an incorrect secret. This would be logged in the `ods-pipeline` deployment logs. The configured secret must match the one in the `ods-bitbucket-webhook` secret.
- The pipeline assembled from the `ods.y(a)ml` file is not valid. This would be visible in the `ods-pipeline` deployment logs. Examples of this case are YAML syntax errors or passing unknown parameters to tasks (for a quick local syntax check, see the snippet after this list).
- The commit pushed contains instructions to skip CI such as `[ci skip]`. This would be visible in the `ods-pipeline` deployment logs.
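Since syntax errors in `ods.y(a)ml` are an easy mistake to make, it can pay off to parse the file locally before pushing. A minimal sketch, assuming Python with PyYAML is available locally; this only catches syntax errors, not unknown task parameters:

```sh
# Parse the file; invalid YAML aborts with a parser error showing line/column
python3 -c "import yaml, sys; yaml.safe_load(open(sys.argv[1]))" ods.yaml \
  && echo "ods.yaml syntax OK"
```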
In general, the logs of the `ods-pipeline` deployment should contain more information about what went wrong if no pipeline run has been triggered.
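For example, the pipeline manager logs can be retrieved like this (on OpenShift, `kubectl` can be replaced with `oc`; the deployment name matches the `ods-pipeline` deployment mentioned above):

```sh
kubectl -n <namespace> logs deployment/ods-pipeline --tail=100
```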
When a pipeline run has been triggered but it does not make (any) progress, it may help to look at the YAML representation of the `PipelineRun` resource. Especially the `status` field may yield more information. Further information might be gathered from looking at the events connected to the pipeline run.
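For instance, the following commands print the full resource (including its `status`) and the associated events (a sketch; substitute your namespace and pipeline run name):

```sh
# Full YAML representation, including the status section
kubectl -n <namespace> get pipelinerun <pipeline-run-name> -o yaml

# Human-readable summary plus related events
kubectl -n <namespace> describe pipelinerun <pipeline-run-name>
```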
If the pipeline run does not progress for a specific task run, the `TaskRun` resources of the pipeline run allow further inspection: look at the YAML of each `TaskRun` resource as well as at their corresponding pods. Typical reasons why pods are pending and do not run are missing `ConfigMap` or `Secret` resources to mount, missing or misspelled image references, or insufficient compute resources in the cluster.
When a task fails, its logs may indicate what the issue is. While some issues might be related to the application (e.g. failing tests), others might not be, such as network issues. If the task logs do not provide enough information to diagnose the issue, you can turn on debug mode by setting the `debug` field to `true` in the `ods-pipeline` ConfigMap resource. The next pipeline run (or a re-run of the failed one) will output additional information.
Sometimes it can also help to inspect the workspace that was used for the task run. For example, it might contain generated files such as test reports that you can examine to diagnose the problem. This can be achieved by copying the following manifest and saving it to a file `debug-pod.yaml`:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ods-pipeline-debug
spec:
  volumes:
    - name: ods-workspace-vol
      persistentVolumeClaim:
        claimName: ods-workspace-<COMPONENT_NAME>
  containers:
    - name: ods-pipeline-debug-container
      image: index.docker.io/crccheck/hello-world
      volumeMounts:
        - mountPath: "/workspace"
          name: ods-workspace-vol
```
Then, run the following commands (on OpenShift you can replace `kubectl` with `oc`):
```sh
kubectl -n <namespace> apply -f debug-pod.yaml
kubectl -n <namespace> exec -it ods-pipeline-debug -- /bin/sh
```
The workspace is mounted at `/workspace`. Once you have completed debugging, do not forget to delete the pod again:
```sh
kubectl -n <namespace> delete -f debug-pod.yaml
```