controllers: trigger reconcile event to storage client if hash is not matched #217
Conversation
rchikatw commented on Aug 29, 2024 (edited):
- Save the hash to the storage client status.
- Add an annotation for the subscription channel to the storage client.
- Add a new field to the status of the storage client.
- Testing is pending; results will be added once it is done.
- I had messed up PR #200 (controllers: trigger reconcile event to storage client if hash of desired config does not match), which had similar changes; it was closed automatically, hence this new PR. A rough sketch of the intended flow is shown below.
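A minimal sketch of the intended flow, assuming a controller-runtime client; the annotation key value and the helper names (setDesiredConfigHash, syncDesiredConfigHash) are hypothetical and not the PR's actual code:

package sketch

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Hypothetical annotation key; the exact name was still under discussion in this review.
const desiredConfigHashAnnotation = "ocs.openshift.io/provider-side-state"

// setDesiredConfigHash records the provider-reported hash on the StorageClient CR.
// It returns true only when the stored value actually changes.
func setDesiredConfigHash(obj client.Object, hash string) bool {
	annotations := obj.GetAnnotations()
	if annotations == nil {
		annotations = map[string]string{}
	}
	if annotations[desiredConfigHashAnnotation] == hash {
		return false
	}
	annotations[desiredConfigHashAnnotation] = hash
	obj.SetAnnotations(annotations)
	return true
}

// syncDesiredConfigHash writes the CR only when the hash changed; the resulting
// update event is what triggers a reconcile of the storage client.
func syncDesiredConfigHash(ctx context.Context, cl client.Client, storageClient client.Object, hash string) error {
	if !setDesiredConfigHash(storageClient, hash) {
		return nil
	}
	return cl.Update(ctx, storageClient)
}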
LGTM but will wait for Ohad's approval.
/hold @rchikatw Something here seems to be misaligned with the design. Storing the hash in both the status and the annotation is redundant; one of them is unnecessary, and I would vote to drop the status value.
If we decide to remove the status section from the storage client, then there is no point in keeping the root field in the storage config response here.
pkg/utils/k8sutils.go (outdated diff)
@@ -41,6 +41,9 @@ const StatusReporterImageEnvVar = "STATUS_REPORTER_IMAGE"
 // Value corresponding to annotation key has subscription channel
 const DesiredSubscriptionChannelAnnotationKey = "ocs.openshift.io/subscription.channel"

+// Value corresponding to annotation key has desired client hash
+const DesiredConfigHashAnnotationKey = "ocs.openshift.io/desired.config.hash"
- This is the name of an annotation on the CR, so by default it represents the desired state. There is no reason to mention "desired" in the name of the annotation.
- The encoding of the data (hash) might change in the future; I don't think we need to mention that as part of the name, as it is an impl detail.
- The annotation name does not describe what kind of function this annotation represents.
How about changing it to consumer.config.state?
What about ocs.openshift.io/provider-side-state? Or something like that?
Just ocs.openshift.io/provider-state is good enough, I guess.
provider-state implies this represents the entire provider state, which is wrong. provider-side-state implies this represents some state on the provider side.
service/status-report/main.go (outdated diff)
@@ -119,7 +119,8 @@ func main() {

 	storageClientCopy := &v1alpha1.StorageClient{}
 	storageClient.DeepCopyInto(storageClientCopy)
-	if utils.AddAnnotation(storageClient, utils.DesiredSubscriptionChannelAnnotationKey, statusResponse.DesiredClientOperatorChannel) {
+	if utils.AddAnnotation(storageClient, utils.DesiredSubscriptionChannelAnnotationKey, statusResponse.DesiredClientOperatorChannel) ||
+		utils.AddAnnotation(storageClient, utils.DesiredConfigHashAnnotationKey, statusResponse.DesiredConfigHash) {
 		// patch is being used here as to not have any conflicts over storageclient cr changes as this annotation value doesn't depend on storageclient spec
 		if err := cl.Patch(ctx, storageClient, client.MergeFrom(storageClientCopy)); err != nil {
Are we sure that patch is the correct approach for this new annotation? What happens if a reconcile is in motion while you are trying to read and update the annotation? It might be that by the time you get to the patch command, the CR has already changed.
I'm not sure which method I should use in this case. If I use "update" on CR, it sends the entire object to the API server, while "patch" sends only the changes that need to be made. There is a greater risk of conflict when using "update" and a lesser risk when using "patch," but in both cases, we can still encounter issues.
Even if I use PartialMetadata, I feel there will be the same issue.
We want the conflict because we do not want to update based on stale data. Let's use update instead of patch for this new annotation. The old annotation should continue to use patch.
I will use .Update because it will consider the resource version while updating the resource.
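As an illustration of the trade-off being discussed, a minimal sketch assuming a controller-runtime client (the helpers patchAnnotations and updateAnnotations are hypothetical names, not code from this PR): a merge patch sends only the delta and does not notice a concurrent change, while an update carries the resourceVersion, so a write based on stale data fails with a conflict that can be retried.

package sketch

import (
	"context"

	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// patchAnnotations applies only the delta between original and modified.
// A concurrent change to the CR is not detected; the delta is merged over it.
func patchAnnotations(ctx context.Context, cl client.Client, modified, original client.Object) error {
	return cl.Patch(ctx, modified, client.MergeFrom(original))
}

// updateAnnotations sends the whole object, including its resourceVersion, so
// the API server rejects a write based on stale data with a conflict error.
// RetryOnConflict re-reads the latest object and reapplies the annotation.
func updateAnnotations(ctx context.Context, cl client.Client, obj client.Object, key, value string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		if err := cl.Get(ctx, client.ObjectKeyFromObject(obj), obj); err != nil {
			return err
		}
		annotations := obj.GetAnnotations()
		if annotations == nil {
			annotations = map[string]string{}
		}
		annotations[key] = value
		obj.SetAnnotations(annotations)
		return cl.Update(ctx, obj)
	})
}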
@rchikatw Why do the changes in this PR require changes in hundreds of vendor files?
Sure, once this is merged I will rebase it.
Commit: controllers: trigger reconcile event to storage client if hash is not matched (Signed-off-by: rchikatw <[email protected]>)
@rchikatw new proto changes aren't being pulled?
/retest
It's pulled now. It took some time to push that change; meanwhile, you saw my PR.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: leelavg, rchikatw. The full list of commands accepted by this bot can be found here; the pull request process is described here.
/unhold From Ohad
Merged commit 11a91c9 into red-hat-storage:main.
/cherry-pick release-4.17
@rchikatw: Failed to get PR patch from GitHub. This PR will need to be manually cherry-picked.
Error message: status code 406 not one of [200], body: {"message":"Sorry, the diff exceeded the maximum number of files (300). Consider using 'List pull requests files' API or locally cloning the repository instead.","errors":[{"resource":"PullRequest","field":"diff","code":"too_large"}],"documentation_url":"https://docs.github.com/rest/pulls/pulls#list-pull-requests-files","status":"406"}
In response to: /cherry-pick release-4.17
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.