
Confluent Cloud API Times out even though Creation is Successful #99

Open
theharrisonchow opened this issue Aug 12, 2021 · 4 comments
Labels
bug Something isn't working

Comments

@theharrisonchow

When creating multiple API keys at once through the provider, every key times out even though it was created successfully. In the Terraform state file, the resources show as tainted. The current workaround is simply to untaint the resources, but it seems the provider is just not able to find the API key after creation.

Sample error:

confluentcloud_api_key.api-key-001: Still creating... [5m30s elapsed]
2021/08/11 13:00:46 [DEBUG] POST https://state/
╷
│ Error: Error waiting for API Key (400000) to be ready: timeout while waiting for state to become 'Ready' (last state: 'Pending', timeout: 5m0s)
│ 
│   with confluentcloud_api_key.api-key-001,
│   on main.tf line 23, in resource "confluentcloud_api_key" "api-key-001":
│   23: resource "confluentcloud_api_key" "api-key-001" {
│ 
╵
@kunallanjewar

Yeah, I ran into this as well, especially for a dedicated VPC cluster but not for a public-facing cluster.

After it timed out, I noticed the API key does exist in the TF state file and is simply marked as tainted.

So as a temporary workaround I untainted the API key resource.

For example:

terraform state show confluentcloud_api_key.test_01

Once this showed a valid object, I ran the following:

terraform untaint confluentcloud_api_key.test_01

This temporary fix worked for me until we get a PR to resolve this bug on the provider side.
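The manual recovery above can be sketched as a short shell script. This is only an illustration: the resource address matches the example in this thread, and the `state_list` variable is a stand-in for real `terraform state list` output, so the `untaint` commands are printed rather than executed.

```shell
# Manual recovery sketch, assuming the key was created but the resource is tainted.
addr="confluentcloud_api_key.test_01"

# 1. Confirm the object recorded in state looks valid:
#      terraform state show "$addr"
# 2. Clear the taint so the next apply does not destroy/recreate the key:
#      terraform untaint "$addr"

# To untaint every API key at once, filter `terraform state list` output.
# The variable below stands in for: state_list=$(terraform state list)
state_list='confluentcloud_api_key.test_01
confluentcloud_kafka_cluster.main'

printf '%s\n' "$state_list" | grep '^confluentcloud_api_key\.' | while read -r r; do
  echo "terraform untaint $r"   # drop the echo to actually run it
done
```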

@Mongey Mongey added the bug Something isn't working label Aug 24, 2021
@Mongey
Owner

Mongey commented Aug 24, 2021

There are so many caveats to the "wait for cluster to be healthy" feature (introduced in #37) that I think adding the ability to disable it is a good idea. Perhaps it should be disabled by default 😬
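To make the suggestion concrete, an opt-out could look like the sketch below. Everything here is hypothetical: `wait_for_ready` is not a real attribute of the provider, and the other attribute names are illustrative only.

```hcl
resource "confluentcloud_api_key" "api-key-001" {
  # illustrative attributes -- check the provider docs for the real schema
  cluster_id     = confluentcloud_kafka_cluster.main.id
  environment_id = confluentcloud_environment.main.id

  # proposed (hypothetical) flag: skip the post-create "wait for Ready" check
  wait_for_ready = false
}
```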

@atharvai

Can someone post an update on this, please? Is it on hold or being actively worked on?

@tarciosaraiva

Keen to know if a fix is planned for this?
