
controllers: trigger reconcile event to storage client if hash is not matched #217

Merged: 3 commits into red-hat-storage:main on Sep 24, 2024

Conversation

rchikatw (Contributor) commented Aug 29, 2024

Review threads on service/status-report/main.go and internal/controller/storageclient_controller.go (outdated, resolved)
leelavg (Contributor) commented Sep 10, 2024

LGTM but will wait for Ohad's approval.

nb-ohad (Contributor) commented Sep 10, 2024

/hold

@rchikatw Something here seems to be misaligned with the design. I can see three different values being taken into consideration:

  1. The annotation
  2. The value on the status
  3. The value coming from the provider

One of them is unnecessary and redundant; I would vote to drop the status value.

rchikatw (Contributor, author) replied:

> /hold
>
> @rchikatw Something here seems to be misaligned with the design. I can see three different values being taken into consideration:
>
>   1. The annotation
>   2. The value on the status
>   3. The value coming from the provider
>
> One of them is unnecessary and redundant; I would vote to drop the status value.

If we decide to remove the status section from the storage client, then there is no point in keeping the root field in the storage config response here.
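
For context, the two surviving values meet in the status reporter: the provider-supplied hash is written onto the StorageClient CR as an annotation, and a write (and therefore a reconcile event) is issued only when the value actually changed. Below is a minimal sketch of an AddAnnotation-style helper with that contract; it is a hypothetical illustration, not the repository's actual utils implementation.

// Hypothetical sketch: set an annotation on a Kubernetes object and report
// whether the stored value changed, so callers can skip the API write (and the
// resulting reconcile) when the hash already matches.
package utils

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

func AddAnnotation(obj metav1.Object, key, value string) bool {
	annotations := obj.GetAnnotations()
	if annotations == nil {
		annotations = map[string]string{}
		obj.SetAnnotations(annotations)
	}
	if current, found := annotations[key]; !found || current != value {
		annotations[key] = value
		return true
	}
	return false
}

With a contract like this, the change to service/status-report/main.go further down only writes to the StorageClient when either the subscription channel or the config hash annotation differs from what the provider reported.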

@@ -41,6 +41,9 @@ const StatusReporterImageEnvVar = "STATUS_REPORTER_IMAGE"
// Value corresponding to annotation key has subscription channel
const DesiredSubscriptionChannelAnnotationKey = "ocs.openshift.io/subscription.channel"

// Value corresponding to annotation key has desired client hash
const DesiredConfigHashAnnotationKey = "ocs.openshift.io/desired.config.hash"
Reviewer (Contributor):

  1. This is the name of an annotation on the CR, so by default it represents the desired state. There is no reason to mention "desired" in the name of the annotation.
  2. The encoding of the data (a hash) might change in the future; I don't think we need to mention it as part of the name, as it is an implementation detail.
  3. The annotation name does not describe what kind of function this annotation serves.

rchikatw (Contributor, author) replied:

How about changing it to consumer.config.state?

Reviewer (Contributor):

What about ocs.openshift.io/provider-side-state? Or something like that?

rchikatw (Contributor, author) replied:

Just ocs.openshift.io/provider-state is good enough, I guess.

Reviewer (Contributor):

provider-state implies this represents the entire provider state, which is wrong.
provider-side-state implies this represents some state on the provider side.

@@ -119,7 +119,8 @@ func main() {

storageClientCopy := &v1alpha1.StorageClient{}
storageClient.DeepCopyInto(storageClientCopy)
if utils.AddAnnotation(storageClient, utils.DesiredSubscriptionChannelAnnotationKey, statusResponse.DesiredClientOperatorChannel) {
if utils.AddAnnotation(storageClient, utils.DesiredSubscriptionChannelAnnotationKey, statusResponse.DesiredClientOperatorChannel) ||
utils.AddAnnotation(storageClient, utils.DesiredConfigHashAnnotationKey, statusResponse.DesiredConfigHash) {
// patch is being used here as to not have any conflicts over storageclient cr changes as this annotation value doesn't depend on storageclient spec
if err := cl.Patch(ctx, storageClient, client.MergeFrom(storageClientCopy)); err != nil {
Reviewer (Contributor):

Are we sure that patch is the correct approach for this new annotation? What happens if a reconcile is in motion while you are trying to read and update the annotation? It might be that by the time you get to the patch command, the CR has already changed.

rchikatw (Contributor, author) replied:

I'm not sure which method I should use in this case. If I use Update on the CR, it sends the entire object to the API server, while Patch sends only the changes that need to be made. There is a greater risk of conflict when using Update and a lesser risk when using Patch, but in both cases we can still encounter issues.
Even if I use PartialMetadata, I feel there will be the same issue.

Reviewer (Contributor):

We want the conflict because we do not want to update based on stale data. Let's use Update instead of Patch for this new annotation. The old annotation should continue to use Patch.

rchikatw (Contributor, author) replied:

I will use .Update because it will consider the resource version while updating the resource.
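
For reference, here is a minimal sketch of the Update-based approach agreed on above, using controller-runtime's client; the function name setAnnotationWithUpdate and the retry loop around it are illustrative, not the PR's actual code. The relevant difference: client.MergeFrom builds a merge patch with no resourceVersion precondition, while Update sends the object's resourceVersion, so the API server rejects the write with a Conflict if the CR changed in the meantime.

// Sketch only: update an annotation with optimistic concurrency. A Conflict
// caused by a stale resourceVersion triggers a re-read and retry, so the write
// is always based on fresh data.
package main

import (
	"context"

	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func setAnnotationWithUpdate(ctx context.Context, cl client.Client, obj client.Object, key, desired string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read the latest version of the object on every attempt.
		if err := cl.Get(ctx, client.ObjectKeyFromObject(obj), obj); err != nil {
			return err
		}
		annotations := obj.GetAnnotations()
		if annotations == nil {
			annotations = map[string]string{}
		}
		if annotations[key] == desired {
			return nil // already up to date: no write, no reconcile event
		}
		annotations[key] = desired
		obj.SetAnnotations(annotations)
		// Update includes the resourceVersion, so a concurrent change surfaces
		// as a Conflict instead of being silently merged over.
		return cl.Update(ctx, obj)
	})
}

controller-runtime can also attach a resourceVersion precondition to a merge patch via client.MergeFromWithOptions(obj, client.MergeFromWithOptimisticLock{}), but a plain Update is the simpler way to get the conflict-on-stale behavior wanted here.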

nb-ohad (Contributor) commented Sep 11, 2024

@rchikatw Why do the changes in this PR require changes in hundreds of vendor files?

leelavg (Contributor) commented Sep 16, 2024

> @rchikatw Why do the changes in this PR require changes in hundreds of vendor files?

rchikatw (Contributor, author) replied:

> @rchikatw Why do the changes in this PR require changes in hundreds of vendor files?

Sure, once this is merged I will rebase it.

Commit: "if hash is not matched"
Signed-off-by: rchikatw <[email protected]>
leelavg (Contributor) commented Sep 17, 2024

@rchikatw new proto changes aren't being pulled?

rchikatw (Contributor, author):

/retest

rchikatw (Contributor, author):

> @rchikatw new proto changes aren't being pulled?

It's pulled now. It took some time to push that change; meanwhile, you saw my PR.

openshift-ci bot commented Sep 24, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: leelavg, rchikatw

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

leelavg (Contributor) commented Sep 24, 2024

/unhold

From Ohad:

> The PR is so big I am unable to access the git without my tab crashing.
> @lgangava Can you please remove the hold on my behalf?

openshift-merge-bot merged commit 11a91c9 into red-hat-storage:main on Sep 24, 2024. 12 checks passed.
rchikatw (Contributor, author):

/cherry-pick release-4.17

openshift-cherrypick-robot:

@rchikatw: Failed to get PR patch from GitHub. This PR will need to be manually cherrypicked.

Error message: status code 406 not one of [200], body: {"message":"Sorry, the diff exceeded the maximum number of files (300). Consider using 'List pull requests files' API or locally cloning the repository instead.","errors":[{"resource":"PullRequest","field":"diff","code":"too_large"}],"documentation_url":"https://docs.github.com/rest/pulls/pulls#list-pull-requests-files","status":"406"}

In response to this:

> /cherry-pick release-4.17

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
