
Installing multiple instances of the ALB controller with different configuration and ingress class into the same namespace (kube-system) #2233

Open
dnutels opened this issue Sep 16, 2021 · 45 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@dnutels

dnutels commented Sep 16, 2021

Describe the bug

I am trying to install multiple, differently configured instances of the ALB controller into the kube-system namespace. It doesn't work because the first instance claims ownership of the aws-load-balancer-tls Secret in the target namespace via meta.helm.sh/release-name, which prevents the second instance from doing the same.

Steps to reproduce

values.yml for the first instance:

clusterName: my-cluster

ingressClass: alb-a
watchNamespace: app

fullnameOverride: alb-a-controller

serviceAccount:
  create: false
  name: aws-load-balancer-controller

defaultTags:
  ingressClass: alb-a

values.yml for the second instance:

clusterName: my-cluster

ingressClass: alb-b
watchNamespace: app

fullnameOverride: alb-b-controller

serviceAccount:
  create: false
  name: aws-load-balancer-controller

defaultTags:
  ingressClass: alb-b

Then the instances are deployed using:

helm upgrade -i alb-a-controller eks/aws-load-balancer-controller -n kube-system -f alb-a/values.yml
helm upgrade -i alb-b-controller eks/aws-load-balancer-controller -n kube-system -f alb-b/values.yml

Expected outcome

Both controller instances are deployed and each handles a separate ingress class.

Actual outcome

Error: rendered manifests contain a resource that already exists. 
Unable to continue with install: Secret "aws-load-balancer-tls" in 
namespace "kube-system" exists and cannot be imported into 
the current release: invalid ownership metadata; annotation 
validation error: key "meta.helm.sh/release-name" must equal 
"alb-b-controller": current value is "alb-a-controller"

Environment

  • AWS Load Balancer controller version: 2.2
  • Kubernetes version 1.21
  • Using EKS (yes/no), if so version? 1.21 eks.2

Additional Context:

@dnutels
Author

dnutels commented Sep 17, 2021

Additional information...

Installing the alb-a and alb-b controllers into different namespaces doesn't work either. After installing alb-a into the infra-a namespace (which worked fine), trying to install alb-b into the infra-b namespace fails with:

Error: rendered manifests contain a resource that already exists. 
Unable to continue with install: MutatingWebhookConfiguration 
"aws-load-balancer-webhook" in namespace "" exists and cannot 
be imported into the current release: invalid ownership metadata; 
annotation validation error: key "meta.helm.sh/release-name" 
must equal "alb-b-controller": current value is "alb-a-controller"; 
annotation validation error: key "meta.helm.sh/release-namespace" 
must equal "infra-b": current value is "infra-a"

It appears that the webhook configuration is non-shareable, and that it is a cluster-scoped resource rather than something installed into the controller's namespace?

@M00nF1sh
Collaborator

@dnutels
The current controller is designed to run as a single deployment, and we have updated our docs to reflect that: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/configurations/

What's your use case for running multiple deployments instead of a single one?

@dnutels
Author

dnutels commented Sep 18, 2021

Thank you for clarifying, I somehow missed that one. It's somewhat academic.

The main use case is to be able to configure the controller differently for different namespaces/ingress classes.
I realize that at this point most (but not all) of the controller configuration can be overridden on the Ingress level.

I would imagine that having different service accounts might be useful...
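
For instance, a hypothetical sketch (the service account name, account ID, and role name are placeholders, not taken from this issue) of per-release values that bind each controller instance to its own IAM role via IRSA:

serviceAccount:
  create: true
  name: alb-a-controller                      # placeholder service account name
  annotations:
    # placeholder IAM role ARN; each release would point at a different role
    eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/alb-a-controller-role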

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 17, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 16, 2022
@willthames

Our use case is to have an alb-internal and an alb-external IngressClass, and then set the scheme of the ALB in the associated IngressClassParams so that we don't have to annotate every single Ingress with ALB annotations.

At the moment the external ALB controller creates the stack for an external ingress and then the internal ALB controller removes it all again (I did think that was a bug but this issue makes it clear that it's more of an unimplemented feature)
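
To make that concrete, a minimal sketch of one of the two IngressClass/IngressClassParams pairs (names are illustrative, and this assumes a controller version whose IngressClassParams supports spec.scheme, as the comment implies; the external pair would be identical except for scheme: internet-facing):

apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  name: alb-internal
spec:
  scheme: internal
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb-internal
spec:
  controller: ingress.k8s.aws/alb
  parameters:
    apiGroup: elbv2.k8s.aws
    kind: IngressClassParams
    name: alb-internal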

@willthames

willthames commented Feb 14, 2022

/remove-lifecycle rotten

@kishorj kishorj removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Feb 14, 2022
@willthames

Another reason for wanting a separate ingress class is for use with external-dns

We use separate external-dns controllers, one that controls private DNS and one that controls public DNS, so that the controller knows which zones to manage records

We use annotation filters (and, more likely, ingress class filters soon) to associate a particular ingress class with a particular external-dns controller. At the moment we can therefore manage only one of public or private ALB ingresses.
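
Roughly, the split looks like this per external-dns deployment (a hedged sketch of the annotation-filter approach mentioned above; the domains and class names are placeholders, and the ingress class filter mentioned would eventually replace the annotation filter):

# args for the public external-dns deployment (illustrative)
- --source=ingress
- --domain-filter=example.com                                    # public zone placeholder
- --annotation-filter=kubernetes.io/ingress.class in (alb-external)

# args for the private external-dns deployment (illustrative)
- --source=ingress
- --domain-filter=internal.example.com                           # private zone placeholder
- --annotation-filter=kubernetes.io/ingress.class in (alb-internal)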

@willthames

Yet a third reason is if you need a network load balancer and an application load balancer for different workloads (for example, doing TCP passthrough for one workload vs needing WAF protection for another workload)
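
For illustration, those two workloads would typically be expressed like this (a hedged sketch; names, ports, and the WAF ACL ARN are placeholders, using the annotations the controller documents for NLB-backed Services and WAF-protected ALB Ingresses):

# TCP passthrough workload: NLB provisioned from a Service
apiVersion: v1
kind: Service
metadata:
  name: tcp-app                                                  # placeholder
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  type: LoadBalancer
  selector:
    app: tcp-app
  ports:
    - port: 443
      targetPort: 8443
---
# HTTP workload needing WAF: ALB provisioned from an Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app                                                  # placeholder
  annotations:
    alb.ingress.kubernetes.io/wafv2-acl-arn: <waf-acl-arn>       # placeholder
spec:
  ingressClassName: alb                                          # placeholder class name
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80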

@visit1985

Our use case is to have an alb-internal and an alb-external IngressClass, and then set the scheme of the ALB in the associated IngressClassParams so that we don't have to annotate every single Ingress with ALB annotations.

At the moment the external ALB controller creates the stack for an external ingress and then the internal ALB controller removes it all again (I did think that was a bug but this issue makes it clear that it's more of an unimplemented feature)

We have this use case as well. I got the alb-ingress helm chart (v2.4.1) deployed twice by specifying nameOverride and fullnameOverride with a suffix. Only one of the controllers is doing all the work, because both deployments still share the same ConfigMap for leader election. Looks like this works for our ALB + EKS Fargate setup only. We'd hit #2185 otherwise.
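
For reference, a rough sketch of what those per-release overrides might look like (illustrative values; as noted above, only one controller actually does the work because both releases share the leader-election ConfigMap):

# values-external.yml (illustrative)
nameOverride: aws-load-balancer-controller-external
fullnameOverride: aws-load-balancer-controller-external
ingressClass: alb-external

# values-internal.yml (illustrative)
nameOverride: aws-load-balancer-controller-internal
fullnameOverride: aws-load-balancer-controller-internal
ingressClass: alb-internal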

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 24, 2022
@visit1985

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 24, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 22, 2022
@visit1985

We are still here…
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 22, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 20, 2023
@dim-at-ocp

Still an issue. Confirming the use-case - public vs. internal ALBs - and the need for separate configuration.

In fact, instead of installing multiple controller instances, a more elegant solution would probably be to introduce support for multiple ingressClass definitions handled by a single controller (helm release).

Thank you!

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 25, 2023
@soluwalana

I'm having a similar issue. The inability to specify watchNamespace for multiple namespaces and the inability to create multiple deployments means that it is impossible to deploy load balancers for only 2 specific, externally facing namespaces.

@kmoorejr9

Would like to echo the sentiments above, particularly for using this to manage public/private DNS alongside external-dns.

@acjohnson

We currently work around this limitation with a script that temporarily removes the MutatingWebhookConfiguration and ValidatingWebhookConfiguration objects, creates the TargetGroupBindings that the secondary aws-load-balancer-controllers use, then recreates the MutatingWebhookConfiguration and ValidatingWebhookConfiguration objects...
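
Very roughly, that workaround might look like the following (a hedged sketch rather than the actual script; the webhook names are assumed to match the MutatingWebhookConfiguration shown in the error output earlier in this thread, and the TargetGroupBinding manifest path is a placeholder):

# back up and remove the admission webhooks owned by the primary release
kubectl get mutatingwebhookconfiguration aws-load-balancer-webhook -o yaml > mwh.yaml
kubectl get validatingwebhookconfiguration aws-load-balancer-webhook -o yaml > vwh.yaml
kubectl delete mutatingwebhookconfiguration aws-load-balancer-webhook
kubectl delete validatingwebhookconfiguration aws-load-balancer-webhook

# create the TargetGroupBindings consumed by the secondary controllers (placeholder path)
kubectl apply -f target-group-bindings/

# restore the webhooks
kubectl apply -f mwh.yaml
kubectl apply -f vwh.yaml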

+1 for actual multiple aws-load-balancer-controller support. Also thank you @M00nF1sh and others who have continually improved this project!

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 30, 2024
@FernandoMiguel

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 30, 2024
@jcogilvie

Chiming in that I would also benefit from the ability to serve multiple IngressClasses from one deployment for internal vs external purposes.

@flaviomoringa

Amazing how this still seems a low priority ticket... and we are paying to use EKS... this is really sad :-(

@shraddhabang shraddhabang added the kind/feature Categorizes issue or PR as related to a new feature. label Feb 14, 2024
@ns-mkusper

I have the same internal/external ingressClass need. Would love to see some movement on this.

@nd-at-globetel

I have the same issue as well. I should be able to create both internal and external ingress. Any updates?

@nd-at-globetel

Still an issue. Confirming the use-case - public vs. internal ALBs - and the need for separate configuration.

In fact, instead of installing multiple controller instances, a more elegant solution would probably be to introduce support for multiple ingressClass definitions handled by a single controller (helm release).

Thank you!

/remove-lifecycle stale

@dim-at-ocp
Does the installation of multiple controller instances work? (e.g. having an internal ingress and an external ingress)

aws-lb-controller instance#1 -> internal ingress (private alb)
aws-lb-controller instance#2 -> external ingress (public alb)

Thank you!

@visit1985

I guess you still hit #2185 with that approach.

@nd-at-globetel

@visit1985 Thanks for the response. I didn't notice that there's also an open issue #2185 (Allow multiple controller deployment per cluster) regarding deployment of multiple controller instances within a single cluster.

I thought it would work as a workaround in the meantime while this issue #2233 is still open. It kinda sucks; we have a use case for exposing both an internal ingress and an external ingress using aws-lb-controller.

@visit1985

visit1985 commented Apr 4, 2024

I can just give an update on my implementation: we are currently installing a single controller on EKS Fargate via the Helm chart with createIngressClassResource=false + ingressClassParams.create=false, and then deploying the IngressClass and IngressClassParams 2 or more times depending on our needs.

All IngressClasses are handled by a single controller without issues in our scenario. We only use it to provision ALBs from ingresses. No other use cases like NLBs or k8s services etc.
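
In command form, that setup is roughly (a sketch based on the description above, using the chart values named in the comment; cluster and release names are placeholders):

helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set createIngressClassResource=false \
  --set ingressClassParams.create=false

The IngressClass/IngressClassParams pairs are then applied separately, one per scheme, along the lines of the sketch earlier in this thread.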

@nd-at-globetel

@visit1985 Thanks. In your implementation, since it provisions multiple ALBs from ingresses with a single controller, were you able to implement both an internal and an external ingress with it?

@visit1985

Yes, but as stated, on a Fargate-only cluster. We didn't test it with EC2 node groups.

@nd-at-globetel

@visit1985 Got it, a Fargate-only cluster. What is the target type of those ALBs? Is it private IPs?

We're using EC2 node groups.

@visit1985

@nd-at-globetel alb.ingress.kubernetes.io/target-type=ip

@talkerbox

Yet another use case: deploying another LB controller (with a different IngressClass for each) that targets another VPC, like in this blog post https://aws.amazon.com/blogs/containers/expose-amazon-eks-pods-through-cross-account-load-balancer/ - provisioning load balancers not only in VPC-account-A, but also in the current EKS cluster's VPC-account-B.

@nd-at-globetel

@talkerbox how about in the same VPC (EKS Cluster)? Is it still not supported?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 18, 2024
@flaviomoringa

Please don't close this ticket... this is a basic need for many users.

@talkerbox

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 18, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 17, 2024
@acjohnson

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 17, 2024
@ggiallo28

We are encountering a similar challenge where a single Kubernetes cluster is shared among multiple customers.

Our objective is to control the subnet where the network load balancer (NLB) is created and the account responsible for the operation. Specifically, our goal is to provision NLBs in the customer’s account rather than in the account hosting the Kubernetes cluster.

Currently, the setup includes a NetworkAccount hosting the EKS cluster and VPC. In the NetworkAccount, we create subnets for each customer and share these subnets with their respective CustomerAccounts. We aim to provision NLBs within each CustomerAccount.

What are the available options to effectively manage account switching in this scenario?

I am considering using roles—one role per CustomerAccount—and deploying multiple controllers. Is this approach feasible?

Thank you.

@prabhatnagpal

prabhatnagpal commented Dec 30, 2024

There is a simple way you guys can solve this issue: don't use fullnameOverride; instead use nameOverride (tried and tested, working!).

Make the following changes in values.yaml to create a new aws-load-balancer-controller deployment with a different ingressClass:

nameOverride: "<your_new_ingress_class_name>-aws-load-balancer-controller"
ingressClass: <your_new_ingress_class_name>

Then run the helm command to install the new deployment:

helm upgrade --install <your_new_ingress_class_name>-aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system -f values.yaml
