This task copies specified Assets from the source STAC Item(s), uploads them to S3, and updates the Item asset hrefs to point to the new location.
To run this task within Argo Workflows, follow the instructions below.
1. `cd` into this directory.

2. Create an image from the provided Dockerfile. If you are using Rancher Desktop to run your K8s cluster, you need to use `nerdctl` to build the image:

   ```shell
   nerdctl build --namespace k8s.io -t copyassets .
   ```

   This will create an image with the name and tag `copyassets:latest`.
3. Make sure Argo Workflows is installed on the K8s cluster (see instructions here).
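   Before proceeding, you can confirm that the controller is up. This sketch assumes Argo Workflows was installed into a namespace called `argo`; adjust the namespace to match your installation:

   ```shell
   # Assumption: Argo Workflows was installed into the "argo" namespace
   kubectl get pods -n argo
   ```

   The `workflow-controller` pod (and, if installed, `argo-server`) should show a `Running` status.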
4. Upload the `payload_workflow.json` file to object storage, such as S3. Change the `path_template` variable in `upload_options` to match the path where you want to save the output Item assets of this task. For example, if you want to save the output Item assets inside the `output` folder of a bucket named `copy_results`, templated by the Item's collection and id, the `path_template` would be `s3://copy_results/output/${collection}/${id}/`.

5. Make the bucket publicly accessible and get the object URL associated with the payload uploaded in step 4.
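   As a sketch of how the `${collection}` and `${id}` variables in `path_template` expand, using made-up Item values (the task performs this substitution internally; the `sed` call here is only illustrative):

   ```shell
   # Made-up Item values, for illustration only
   collection='sentinel-2-l2a'
   id='S2A_T32UQD_20220101_example'
   path_template='s3://copy_results/output/${collection}/${id}/'

   # Substitute the template variables the way the upload step would
   href=$(printf '%s' "$path_template" | sed -e "s/\${collection}/$collection/" -e "s/\${id}/$id/")
   echo "$href"   # s3://copy_results/output/sentinel-2-l2a/S2A_T32UQD_20220101_example/
   ```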
6. Create a secret named `my-s3-credentials` that contains your AWS credentials. The secret must have the keys `access-key-id`, `secret-access-key`, and `session-token` for authenticating to AWS.
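   A sketch of creating that secret with `kubectl`, assuming your credentials are already set in the standard AWS environment variables:

   ```shell
   # Assumes AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN are set.
   # Create the secret in the namespace where the workflow will run.
   kubectl create secret generic my-s3-credentials -n <NAMESPACE> \
     --from-literal=access-key-id="$AWS_ACCESS_KEY_ID" \
     --from-literal=secret-access-key="$AWS_SECRET_ACCESS_KEY" \
     --from-literal=session-token="$AWS_SESSION_TOKEN"
   ```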
7. Run the Argo workflow in the same namespace where the Argo Workflow Controller is installed:

   ```shell
   argo submit -n <NAMESPACE> --watch <FULL PATH TO WORKFLOW YAML FILE>
   ```

   substituting the appropriate values where needed.

   You can run either the `workflow_copyassets_with_template.yaml` file or the `workflow_copyassets_no_template.yaml` file. If you run `workflow_copyassets_with_template.yaml`, you must first install the Workflow Template:

   ```shell
   kubectl apply -n <NAMESPACE> -f <FULL PATH TO THE workflow-template.yaml FILE>
   ```

   where `<NAMESPACE>` is the namespace where the Argo Workflow Controller is installed.