PoC: Dynamically add watch for clusterclaim CRD #2743
base: main
Conversation
Signed-off-by: Umanga Chapagain <[email protected]>
Skipping CI for Draft Pull Request.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: umangachapagain. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
```go
cluster, err := cluster.New(mgr.GetConfig(), func(options *cluster.Options) {
	options.Scheme = mgr.GetScheme()
})
```
How about we fetch it in main and pass it to all controllers, so it can be used in the storagecluster controller as well?
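A minimal sketch of that suggestion, assuming a `setupCluster` helper is added to main.go (the helper name and wiring are illustrative, not from this PR): one shared cluster.Cluster is built once, owned by the manager, and injected into every reconciler that needs it.

```go
package main

import (
	"fmt"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cluster"
)

// setupCluster is a hypothetical helper: it builds one cluster.Cluster from the
// manager's config and scheme and registers it with the manager so their
// lifecycles (cache start/stop) stay in sync.
func setupCluster(mgr ctrl.Manager) (cluster.Cluster, error) {
	cl, err := cluster.New(mgr.GetConfig(), func(o *cluster.Options) {
		o.Scheme = mgr.GetScheme()
	})
	if err != nil {
		return nil, fmt.Errorf("unable to create cluster: %w", err)
	}
	// Let the manager start and stop the cluster's cache with everything else.
	if err := mgr.Add(cl); err != nil {
		return nil, fmt.Errorf("unable to add cluster to manager: %w", err)
	}
	// The returned cluster can then be passed to OCSInitializationReconciler,
	// the storagecluster controller, and any other reconciler, instead of each
	// of them calling cluster.New on its own.
	return cl, nil
}
```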
```
@@ -63,6 +68,8 @@ func InitNamespacedName() types.NamespacedName {
 // nolint:revive
 type OCSInitializationReconciler struct {
 	client.Client
 	cluster cluster.Cluster
```
Can you add a comment on what it holds?
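Something like the following doc comment would answer this; the wording is a guess based on how the field is used later in the diff, not text from the PR.

```go
type OCSInitializationReconciler struct {
	client.Client
	// cluster holds a separately constructed controller-runtime Cluster; its
	// cache is used to register a watch for the ClusterClaim CRD at reconcile
	// time, after the manager has already started.
	cluster cluster.Cluster
}
```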
```go
cluster, err := cluster.New(mgr.GetConfig(), func(options *cluster.Options) {
	options.Scheme = mgr.GetScheme()
})
```
@umangachapagain it's been some time since I looked into controller-runtime (during ODFMS), so please excuse me if I'm a bit wrong here.
You are using only the Scheme from the manager, but not the manager's own cluster, which is created automatically and has a cache attached to it. The controller that is being saved on the reconciler is probably configured against the manager's cluster cache, but may not be against this newly created cluster's cache.
If the above is correct, we may want to use a single cache across all controllers?
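If a single cache is the goal, one possible shape (an assumption on my part, not code from this PR) is to keep the manager's own cache on the reconciler and register the dynamic watch against it instead of a second cluster; the `cache` field below is hypothetical, and the other identifiers are reused from the quoted diff.

```go
// In SetupWithManager: remember the manager's cache instead of building a
// second cluster.Cluster with its own cache (r.cache is a hypothetical field
// of type cache.Cache).
r.cache = mgr.GetCache()

// In Reconcile: register the watch against that same, already-running cache.
if err := r.controller.Watch(source.Kind[client.Object](r.cache,
	&apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: ClusterClaimCrdName},
	}, crdHandler, crdPredicate)); err != nil {
	return reconcile.Result{}, fmt.Errorf("unable to watch CRD: %w", err)
}
```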
```go
if err := r.controller.Watch(source.Kind[client.Object](r.cluster.GetCache(),
	&apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{
			Name: ClusterClaimCrdName,
		},
	}, crdHandler, crdPredicate)); err != nil {
	return reconcile.Result{}, fmt.Errorf("unable to watch CRD")
}
```
@umangachapagain This will not achieve the desired result. You cannot just add watches to a controller after the manager has started; the cache will become incoherent and inconsistent. The only way to do it within the process is to stop all controllers, reset the manager, and then start the manager again. What you are doing here can only work if the client we use is cacheless, but then the informers will not work properly and there will be a runtime penalty for lists and gets.
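For comparison, here is a cache-free variant, sketched under the assumption that the reconciler keeps a hypothetical `apiReader` field populated from `mgr.GetAPIReader()` in SetupWithManager; that reader always goes straight to the API server, so no informer or late watch is needed, at the cost of an extra request per reconcile.

```go
// Hypothetical alternative, not from this PR: poll for the CRD with a reader
// that bypasses the cache instead of adding a watch after the manager started.
crd := &apiextensionsv1.CustomResourceDefinition{}
err := r.apiReader.Get(ctx, types.NamespacedName{Name: ClusterClaimCrdName}, crd)
switch {
case apierrors.IsNotFound(err):
	// CRD not installed yet; check again later rather than watching it.
	return reconcile.Result{RequeueAfter: time.Minute}, nil
case err != nil:
	return reconcile.Result{}, err
}
// CRD exists; it is now safe to work with ClusterClaim resources.
```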
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
No description provided.