
Parallel access to Microsoft.Keyvaults #3113

Open
stan-sz opened this issue Jun 8, 2021 · 6 comments
@stan-sz
Contributor

stan-sz commented Jun 8, 2021

Is your feature request related to a problem? Please describe.
In a Bicep file that creates modules in parallel, each of which sets Key Vault secrets in parallel, we hit this runtime problem:

{
    "status": "Failed",
    "error": {
        "code": "ConflictError",
        "message": "A conflict occurred that prevented the operation from completing. The operation failed because the Microsoft.KeyVault.UnifiedStorage.Core.DomainModel.ResourceId 'Key Vault' changed from the point the operation began. This can happen if parallel operations are being performed on the Microsoft.KeyVault.UnifiedStorage.Core.DomainModel.ResourceId. To prevent this error, serialize the operations so that only one operation is performed on the Microsoft.KeyVault.UnifiedStorage.Core.DomainModel.ResourceId at a time. Follow this link for more information: https://go.microsoft.com/fwlink/?linkid=2147741"
    }
}

Describe the solution you'd like
Is there a way to detect this in Bicep and signal a potential issue? Should @batchSize(1) be suggested in such cases?
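For illustration, a stripped-down sketch of the pattern we use (module, parameter, and secret names are placeholders, not our actual code):

param keyVaultName string
param secretNames array = [
  'connection-string'
  'storage-key'
  'api-token'
]

// Each module writes one secret into the same vault. ARM deploys loop
// iterations in parallel by default, which is when the ConflictError shows up.
module secrets 'set-secret.bicep' = [for name in secretNames: {
  name: 'set-${name}'
  params: {
    keyVaultName: keyVaultName
    secretName: name
  }
}]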

@stan-sz stan-sz added the enhancement New feature or request label Jun 8, 2021
@ghost ghost added the Needs: Triage 🔍 label Jun 8, 2021
@alex-frankel
Collaborator

alex-frankel commented Jun 10, 2021

@batchSize(1) should be the current fix - has that worked for you? One thing to double-check, each module will have a different secret name, correct?

The right fix is for KV to make the RP more resilient so this can be handled.
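For example, something like this (a rough sketch; the module file and parameter names are placeholders):

@batchSize(1) // deploy one iteration at a time instead of in parallel
module secrets 'set-secret.bicep' = [for name in secretNames: {
  name: 'set-${name}' // unique deployment name per iteration
  params: {
    keyVaultName: keyVaultName
    secretName: name // unique secret name per iteration
  }
}]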

@darrsull

darrsull commented Jun 15, 2021

I have noticed this issue as well. I have tried the @batchSize(1) decorator and I still get the error. I even tried feeding just one Key Vault secret into the resource and still got back that the key vault has changed since the start of the operation. Sample batch size example:

@description('Resource name for the key vault')
param resourceName string
@description('Name of the CosmosDb account')
param cosmosDbAccountName string

// Array of secrets, each with a SecretDisplayName, SecretName, and SecretValue
var SecretsList = [
  {
    SecretDisplayName: 'CosmosDb Primary Key'
    SecretName: 'CosmosDbAuthSettings--PrimaryKey'
    SecretValue: listKeys(resourceId('Microsoft.DocumentDB/databaseAccounts', cosmosDbAccountName), '2015-04-08').primaryMasterKey
  }
]

@batchSize(1)
resource keyvaultSecret 'Microsoft.KeyVault/vaults/secrets@2019-09-01' = [for Secret in SecretsList: {
  name: '${resourceName}/${Secret.SecretName}'
  tags: {
    displayName: Secret.SecretDisplayName
  }
  properties: {
    contentType: 'text/plain'
    attributes: {
      enabled: true
    }
    value: Secret.SecretValue
  }
}]

Thinking it might be a listKeys operation issue, I tried static text values; still no dice.

Seeing this with just one secret being added, it seems like the resource was in an unhealthy state. I deleted the key vault and redeployed the parent key vault resource and it all worked fine. So this could just be that the key vault itself was not in a healthy state.

Long story short, it might not be a parallelism issue (this seems to work OK for me in a for loop), but rather a bad instance of the key vault.

@satano

satano commented Jun 23, 2021

I can confirm what @darrsull wrote.

This issue hit me today. While investigating, I ended up having a loop with just one secret and also @batchSize(1), and I still got this error. When I deleted the key vault and created it from scratch, everything started to work. In that case it works as expected even when I remove @batchSize(1) and have multiple secrets in the loop. So the secrets may really be created in parallel.

Just to note, when I had more secrets in a loop, each of them had a unique name, so there cannot be a conflict from accessing the same secret.

@satano

satano commented Jun 23, 2021

I think I got it. This conflict error strikes when there is a deleted secret with the same name. You do not need to delete the whole key vault, just purge the deleted secrets (and wait approximately 10 minutes, because the purge is not immediate). Then everything works.

But this kind of error message is a bit misleading. It would be better if it told me that a deleted secret with the same name exists.

Is there a way to handle this situation in Bicep? It would be great if Bicep were able to handle this kind of conflict. For me the best behavior would be: if there is a deleted secret, recover it first and then set the new value. And it is the same with a deleted key vault – something like a recover-or-create command.
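For the vault itself, the closest thing I know of today is createMode: 'recover' on the vault resource. A rough sketch (the vault name and other values are placeholders, and as far as I know there is no per-secret equivalent):

resource kv 'Microsoft.KeyVault/vaults@2019-09-01' = {
  name: 'my-vault'
  location: resourceGroup().location
  properties: {
    createMode: 'recover' // recovers a soft-deleted vault instead of conflicting with it
    tenantId: subscription().tenantId
    sku: {
      family: 'A'
      name: 'standard'
    }
  }
}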

@NRREDDY999

I am trying to deploy an Azure key vault using an ARM JSON template and I am facing the error below:

ConflictError:
The operation failed because the Microsoft.KeyVault.UnifiedStorage.Core.DomainModel.ResourceId 'Key Vault' changed from the point the operation began. This can happen if parallel operations are being performed on the Microsoft.KeyVault.UnifiedStorage.Core.DomainModel.ResourceId. To prevent this error, serialize the operations so that only one operation is performed on the Microsoft.KeyVault.UnifiedStorage.Core.DomainModel.ResourceId at a time. Follow this link for more information: https://go.microsoft.com/fwlink/?linkid=2147741

Please help me with this error.

@stan-sz
Copy link
Contributor Author

stan-sz commented Oct 20, 2022

Related to #4364
