
Conversation

@tthvo (Member) commented Sep 9, 2025

Important

A rough draft of the installer changes required to support a dual-stack environment on AWS.

This PR is only for previewing the changes and experimenting with the upstream CAPA PR. I will close this and open another PR with the finalized set of changes.

This PR also includes commits (messages starting with hack:) that "imitate" the CCM and the Cluster Ingress Operator to create the resources necessary for cluster ingress (e.g. NLB, Route53 records, Security Groups). These commits are to be removed, assuming the AWS CCM supports dual-stack LBs later on.

This depends on upstream CAPA PR: kubernetes-sigs/cluster-api-provider-aws#5603 (not finalized yet).

How to install

Below are the details of how to reproduce the installation.

$ export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=quay.io/thvo/origin-release:v4.21.0-preview
$ export AWS_PROFILE=<profile-admin>
$ ./openshift-install create cluster --dir=.

Custom release image

Custom release image: quay.io/thvo/origin-release:v4.21.0-preview.

This includes the following operator changes:

For the cluster-network-operator, there is an open PR with feature-gate checking: openshift/cluster-network-operator/pull/2804

Install Config

Use the install-config snippets below to configure networking and the AWS platform.

Note: machineNetwork does not contain an IPv6 CIDR because it is unknown at install time (it will be patched in later once the infrastructure is ready). The cluster network and service network contain ULA IPv6 CIDRs.

IPv4 Primary:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  - cidr: fd01::/48
    hostPrefix: 64
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
  - fd02::/112
platform:
  aws:
    region: us-east-1
    infraStack: DualStack

IPv6 Primary:

networking:
  clusterNetwork:
  - cidr: fd01::/48
    hostPrefix: 64
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - fd02::/112
  - 172.30.0.0/16
platform:
  aws:
    region: us-east-1
    infraStack: DualStackIPv6Primary

Important notes: [IPv6-primary only] The ingress operator will be stuck because health checks on the targets fail: the k8s Service for the ingress routers only has an IPv6 cluster IP, while the hacks configure the ingress LB target group as IPv4 only, so the connection cannot switch to IPv6 when travelling internally.

You must edit the service openshift-ingress/router-nodeport-default to set its ipFamilyPolicy to PreferDualStack. For example:

$ kubectl -n openshift-ingress patch svc router-nodeport-default \
    -p '{"spec":{"ipFamilyPolicy":"PreferDualStack"}}'

Installer binary

It looks like the installer binary can be built from these commits despite a reference to my local CAPA fork. So, just:

./hack/build.sh

/hold
/label platform/aws

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Sep 9, 2025
@openshift-ci-robot (Contributor) commented Sep 9, 2025

@tthvo: This pull request references CORS-4072 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the epic to target either version "4.21." or "openshift-4.21.", but it targets "openshift-4.20" instead.


@openshift-ci openshift-ci bot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. platform/aws labels Sep 9, 2025
@tthvo (Member, Author) commented Sep 9, 2025

/cc @sadasu @barbacbd @rna-afk

@tthvo (Member, Author) commented Sep 9, 2025

/cc @mtulio

Just rough hacks but in case you are interested :D

@openshift-ci bot (Contributor) commented Sep 9, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign sdodson for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot (Contributor) commented Sep 9, 2025

@tthvo: This pull request references CORS-4072 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the epic to target either version "4.21." or "openshift-4.21.", but it targets "openshift-4.20" instead.


@tthvo tthvo force-pushed the CORS-4072 branch 3 times, most recently from 0e37a46 to 537f4d0 Compare September 9, 2025 07:17
infraStack:
  description: |-
    InfraStack indicates the network stack of the cluster infrastructure.
    If left empty, the installer will figure it out from the machineNetwork.
Contributor:

Did we decide on this behavior, or were we defaulting to IPv4-only?

Member Author:

Oh, we haven't decided at all. I just proposed this idea for discussion with the foundation:

  1. For AWS, we only support IPv4 currently.
  2. Users might be thinking of specifying the machineNetwork to add IPv6, similar to other platforms. In that case, it will be BYO VPC/subnets for AWS, where the VPC/subnet IPv6 CIDR is already known.

Open for ideas or thoughts on this approach 💭 🙏
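
Purely as an illustration (not code from this PR), here is a minimal Go sketch of the kind of defaulting discussed above — inferring the infra stack from the machineNetwork CIDR families. The InfraStack values and the inferInfraStack helper are hypothetical names for this example.

package defaults

import "net"

// InfraStack mirrors the idea of the install-config field discussed above;
// the type and constant names here are hypothetical.
type InfraStack string

const (
	InfraStackIPv4      InfraStack = "IPv4"
	InfraStackDualStack InfraStack = "DualStack"
)

// inferInfraStack guesses the infra stack from the machineNetwork CIDRs:
// if any CIDR is IPv6, assume dual stack; otherwise fall back to IPv4-only.
func inferInfraStack(machineNetworkCIDRs []string) InfraStack {
	for _, cidr := range machineNetworkCIDRs {
		ip, _, err := net.ParseCIDR(cidr)
		if err != nil {
			continue // invalid CIDRs are assumed to be rejected by validation elsewhere
		}
		if ip.To4() == nil {
			return InfraStackDualStack
		}
	}
	return InfraStackIPv4
}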

defaultServiceNetwork = ipnet.MustParseCIDR("172.30.0.0/16")
defaultIpv6ServiceNetwork = ipnet.MustParseCIDR("fd02::/112")
defaultClusterNetwork = ipnet.MustParseCIDR("10.128.0.0/14")
defaultIpv6ClusterNetwork = ipnet.MustParseCIDR("fd01::/48")
Contributor:

Are the new default service and cluster networks coming from a source, or was this essentially the equivalent of the IPv4 string?

Member Author:

Oh, these are arbitrary ULA IPv6 ranges (RFC 4193). We are still unsure which default values to use...

For now, I picked these values from the official doc: https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html/ovn-kubernetes_network_plugin/converting-to-dual-stack
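
As a purely illustrative aside (not part of the PR), a small Go check that the proposed defaults sit inside the RFC 4193 unique-local block fc00::/7:

package main

import (
	"fmt"
	"net"
)

func main() {
	// RFC 4193 unique local address block.
	_, ula, _ := net.ParseCIDR("fc00::/7")
	for _, cidr := range []string{"fd01::/48", "fd02::/112"} {
		ip, _, err := net.ParseCIDR(cidr)
		if err != nil {
			panic(err)
		}
		// Both proposed defaults should be reported as ULA here.
		fmt.Printf("%s is ULA: %v\n", cidr, ula.Contains(ip))
	}
}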

@tthvo (Member, Author) commented Sep 10, 2025

/test e2e-aws-default-config e2e-aws-ovn-shared-vpc-custom-security-groups

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 13, 2025
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 15, 2025
@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 21, 2025
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 21, 2025
@tthvo (Member, Author) commented Oct 21, 2025

/retitle CORS-4072: [Draft] Dual stack support for AWS

This PR is for experimenting and collecting info about what changes are needed. I will separate the commits into smaller PRs :D

PTAL 🙏 All reviews and nitpicks are appreciated!

@openshift-ci openshift-ci bot changed the title CORS-4072: [WIP] Dual stack support for AWS CORS-4072: [Draft] Dual stack support for AWS Oct 21, 2025
@openshift-ci-robot (Contributor) commented Oct 21, 2025

@tthvo: This pull request references CORS-4072 which is a valid jira issue.


@tthvo tthvo force-pushed the CORS-4072 branch 2 times, most recently from b1d688a to 52843a3 Compare October 22, 2025 02:04
@tthvo (Member, Author) commented Oct 22, 2025

Hmm 🤔 Some failed jobs are complaining about missing permission S3: PutBucketLifecycleConfiguration for the IAM user with minimal permissions.

level=warning msg=Condition S3BucketCreated has status: "False", reason: "S3BucketCreationFailed",
message: "ensuring bucket lifecycle configuration: creating S3 bucket lifecycle configuration: operation error 
S3: PutBucketLifecycleConfiguration, https response error StatusCode: 403, RequestID: REDACTED, HostID: REDACTED,
api error AccessDenied: User: arn:aws:iam::REDACTED:user/ci-op-f2bmbtq3-247a4-minimal-perm-installer
is not authorized to perform: s3:PutLifecycleConfiguration on
resource: \"arn:aws:s3:::openshift-bootstrap-data-ci-op-f2bmbtq3-247a4-9dvdj\" because
no identity-based policy allows the s3:PutLifecycleConfiguration action"

It might be something new added in the latest CAPA that we will need to be aware of 👀 But regular admin-level permissions should be just fine to install dual-stack with this PR!

@openshift-ci-robot (Contributor) commented Oct 22, 2025

@tthvo: This pull request references CORS-4072 which is a valid jira issue.


tthvo added 10 commits October 30, 2025 22:09
1. Relax validations to allow dualstack IPv6 on AWS
2. Validate subnet IPv6 CIDR if any
3. Configure cloud-config configMap to set NodeFamilies
4. [hack] Set the default ingress controller to use NodePort publish strategy
5. Set cluster Ingress to use NLB when IPv6 is enabled (optional)
6. Add a custom DNS controller manifest to configure IPv6 nameserver
The install-config in the cluster-config ConfigMap needs to have the IPv6 CIDR of the VPC in the case of full IPI.
…-network-server

This commit ensures all service networks (i.e. all IP families) are considered when generating the kube-apiserver-service-network-server certificate.
FIXME: we should use the VPC CIDRs as the source CIDRs. But the IPv6 CIDR is not yet known at install time. We should edit the AWSCluster after infraReady to add the VPC IPv6 CIDR as a source instead.
This applies to dual-stack installations only.

IPv4-primary: IPv4 Target Group
IPv6-primary: IPv6 Target Group
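
For illustration only, a minimal Go sketch of the mapping described in the last hack commit — choosing the target group IP address type from the primary IP family; the names are hypothetical, not actual CAPA or installer code.

package ingresshack

// IPFamily is a hypothetical stand-in for the cluster's primary IP family.
type IPFamily string

const (
	IPv4Primary IPFamily = "IPv4"
	IPv6Primary IPFamily = "IPv6"
)

// targetGroupIPAddressType returns the ELBv2 target group IP address type
// to use for the ingress NLB, based on the cluster's primary IP family.
func targetGroupIPAddressType(primary IPFamily) string {
	if primary == IPv6Primary {
		return "ipv6"
	}
	return "ipv4"
}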
@openshift-ci-robot (Contributor) commented Oct 31, 2025

@tthvo: This pull request references CORS-4072 which is a valid jira issue.


@tthvo (Member, Author) commented Oct 31, 2025

The rebase is to stay on top of upstream/main and to remove the hack for the MCO (9fa264d), as the MCO should handle setting the appropriate 0.0.0.0/:: value for the kubelet's --node-ip argument. See the PR description for more details 😁 🙏

@openshift-ci bot (Contributor) commented Oct 31, 2025

@tthvo: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-azure-ovn-resourcegroup 49c133a link false /test e2e-azure-ovn-resourcegroup
ci/prow/e2e-aws-custom-dns-techpreview 49c133a link false /test e2e-aws-custom-dns-techpreview
ci/prow/e2e-azurestack 44dfeb3 link false /test e2e-azurestack
ci/prow/e2e-aws-ovn-fips 44dfeb3 link false /test e2e-aws-ovn-fips
ci/prow/e2e-gcp-ovn-byo-vpc 44dfeb3 link false /test e2e-gcp-ovn-byo-vpc
ci/prow/e2e-aws-ovn-imdsv2 44dfeb3 link false /test e2e-aws-ovn-imdsv2
ci/prow/e2e-azure-default-config 44dfeb3 link false /test e2e-azure-default-config
ci/prow/aws-private 44dfeb3 link false /test aws-private
ci/prow/verify-vendor 44dfeb3 link true /test verify-vendor
ci/prow/e2e-azure-ovn 44dfeb3 link true /test e2e-azure-ovn
ci/prow/e2e-gcp-custom-dns 44dfeb3 link false /test e2e-gcp-custom-dns
ci/prow/e2e-aws-byo-subnet-role-security-groups 44dfeb3 link false /test e2e-aws-byo-subnet-role-security-groups
ci/prow/e2e-azure-ovn-shared-vpc 44dfeb3 link false /test e2e-azure-ovn-shared-vpc
ci/prow/verify-deps 44dfeb3 link true /test verify-deps
ci/prow/e2e-aws-ovn-edge-zones 44dfeb3 link false /test e2e-aws-ovn-edge-zones
ci/prow/e2e-gcp-xpn-dedicated-dns-project 44dfeb3 link false /test e2e-gcp-xpn-dedicated-dns-project
ci/prow/e2e-aws-ovn-edge-zones-manifest-validation 44dfeb3 link true /test e2e-aws-ovn-edge-zones-manifest-validation
ci/prow/azure-ovn-marketplace-images 44dfeb3 link false /test azure-ovn-marketplace-images
ci/prow/okd-scos-e2e-aws-ovn 44dfeb3 link false /test okd-scos-e2e-aws-ovn
ci/prow/e2e-aws-ovn-single-node 44dfeb3 link false /test e2e-aws-ovn-single-node
ci/prow/e2e-gcp-ovn-xpn 44dfeb3 link false /test e2e-gcp-ovn-xpn
ci/prow/e2e-gcp-secureboot 44dfeb3 link false /test e2e-gcp-secureboot
ci/prow/e2e-aws-ovn-heterogeneous 44dfeb3 link false /test e2e-aws-ovn-heterogeneous
ci/prow/images 44dfeb3 link true /test images
ci/prow/e2e-aws-default-config 44dfeb3 link false /test e2e-aws-default-config
ci/prow/azure-private 44dfeb3 link false /test azure-private
ci/prow/e2e-gcp-ovn 44dfeb3 link true /test e2e-gcp-ovn
ci/prow/e2e-aws-ovn 44dfeb3 link true /test e2e-aws-ovn
ci/prow/gcp-private 44dfeb3 link false /test gcp-private
ci/prow/e2e-aws-ovn-shared-vpc-edge-zones 44dfeb3 link false /test e2e-aws-ovn-shared-vpc-edge-zones
ci/prow/e2e-gcp-custom-endpoints 44dfeb3 link false /test e2e-gcp-custom-endpoints
ci/prow/e2e-aws-ovn-shared-vpc-custom-security-groups 44dfeb3 link false /test e2e-aws-ovn-shared-vpc-custom-security-groups


@tthvo (Member, Author) commented Nov 11, 2025

I rebuilt another release image: quay.io/thvo/origin-release:v4.21.0-preview-1. This includes the changes for openshift/cluster-network-operator#2804 instead of my own hack tthvo/cluster-network-operator@617e05f.

If you'd like to use the new custom release image, you need to set the techpreview feature set:

featureSet: TechPreviewNoUpgrade

@openshift-merge-robot (Contributor):

PR needs rebase.


@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 11, 2025