
panic: interface conversion: interface {} is nil, not map[string]interface {} #511

Open
maruizj opened this issue Dec 4, 2024 · 4 comments
Labels: bug (Something isn't working), needs-more-info (Waiting on additional information from issue/PR reporter)

Comments


maruizj commented Dec 4, 2024

Can anyone give more info on what this error is telling us?
Should this be considered a bug?
Thanks in advance for any help.

STEPS TO REPRODUCE:
terraform apply <plan_file>

VERSIONS:
Terraform 1.9.5
argocd provider: 7.1.0
argocd version: 2.12.4

EXPECTED BEHAVIOUR:
terraform apply <plan_file> completes successfully

ACTUAL BEHAVIOUR:
Gives the error below.

DEBUG OUTPUT:

Stack trace from the terraform-provider-argocd_v7.1.0.exe plugin:

panic: interface conversion: interface {} is nil, not map[string]interface {}

goroutine 84 [running]:
github.com/argoproj-labs/terraform-provider-argocd/argocd.expandClusterConfig({0x35fa400?, 0xc0004e4180})
        github.com/argoproj-labs/terraform-provider-argocd/argocd/structure_cluster.go:87 +0xad1
github.com/argoproj-labs/terraform-provider-argocd/argocd.expandCluster(0xc00017fc80)
        github.com/argoproj-labs/terraform-provider-argocd/argocd/structure_cluster.go:41 +0x206
github.com/argoproj-labs/terraform-provider-argocd/argocd.resourceArgoCDClusterCreate({0x42a2a20, 0xc0006165b0}, 0xc00017fc80, {0x395ce20, 0xc00031af08})
        github.com/argoproj-labs/terraform-provider-argocd/argocd/resource_argocd_cluster.go:34 +0x9b
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xc000caa8c0, {0x42a2978, 0xc000d7eba0}, 0xc00017fc80, {0x395ce20, 0xc00031af08})
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:806 +0x119
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc000caa8c0, {0x42a2978, 0xc000d7eba0}, 0xc0009d44e0, 0xc00017fa80, {0x395ce20, 0xc00031af08})
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:937 +0xa89
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc000a22108, {0x42a2978?, 0xc000d7ea20?}, 0xc000b02cd0)
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1155 +0xd5c
github.com/hashicorp/terraform-plugin-mux/tf5to6server.v5tov6Server.ApplyResourceChange({{0x42c1bf8?, 0xc000a22108?}}, {0x42a2978, 0xc000d7ea20}, 0x0?)
        github.com/hashicorp/[email protected]/tf5to6server/tf5to6server.go:38 +0x54
github.com/hashicorp/terraform-plugin-mux/tf6muxserver.(*muxServer).ApplyResourceChange(0xc00086c300, {0x42a2978?, 0xc000d7e750?}, 0xc000b02c80)
        github.com/hashicorp/[email protected]/tf6muxserver/mux_server_ApplyResourceChange.go:36 +0x193
github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server.(*server).ApplyResourceChange(0xc00044e140, {0x42a2978?, 0xc000d6bd10?}, 0xc000269c70)
        github.com/hashicorp/[email protected]/tfprotov6/tf6server/server.go:865 +0x3d0
github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6._Provider_ApplyResourceChange_Handler({0x3ac6b00, 0xc00044e140}, {0x42a2978, 0xc000d6bd10}, 0xc00017f100, 0x0)
        github.com/hashicorp/[email protected]/tfprotov6/internal/tfplugin6/tfplugin6_grpc.pb.go:611 +0x1a6
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0007b4000, {0x42a2978, 0xc000d6bc80}, {0x42b4560, 0xc000614000}, 0xc000d737a0, 0xc000ccfb00, 0x6043378, 0x0)
        google.golang.org/[email protected]/server.go:1394 +0xe49
google.golang.org/grpc.(*Server).handleStream(0xc0007b4000, {0x42b4560, 0xc000614000}, 0xc000d737a0)
        google.golang.org/[email protected]/server.go:1805 +0xe8b
google.golang.org/grpc.(*Server).serveStreams.func2.1()
        google.golang.org/[email protected]/server.go:1029 +0x8b
created by google.golang.org/grpc.(*Server).serveStreams.func2 in goroutine 47
        google.golang.org/[email protected]/server.go:1040 +0x125

Error: The terraform-provider-argocd_v7.1.0.exe plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

2024-12-04T22:06:35.751+0100 [DEBUG] provider: plugin exited
2024-12-04T22:06:35.753+0100 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-12-04T22:06:35.753+0100 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-12-04T22:06:35.753+0100 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-12-04T22:06:35.754+0100 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-12-04T22:06:35.780+0100 [INFO]  provider: plugin process exited: plugin=.terraform/providers/registry.terraform.io/hashicorp/null/3.2.3/windows_amd64/terraform-provider-null_v3.2.3_x5.exe id=20020
2024-12-04T22:06:35.780+0100 [DEBUG] provider: plugin exited
2024-12-04T22:06:35.790+0100 [INFO]  provider: plugin process exited: plugin=.terraform/providers/registry.terraform.io/hashicorp/kubernetes/2.16.1/windows_amd64/terraform-provider-kubernetes_v2.16.1_x5.exe id=27576
2024-12-04T22:06:35.791+0100 [DEBUG] provider: plugin exited
2024-12-04T22:06:35.801+0100 [INFO]  provider: plugin process exited: plugin=.terraform/providers/registry.terraform.io/hashicorp/helm/2.15.0/windows_amd64/terraform-provider-helm_v2.15.0_x5.exe id=5288
2024-12-04T22:06:35.802+0100 [INFO]  provider: plugin process exited: plugin=.terraform/providers/registry.terraform.io/hashicorp/azurerm/4.2.0/windows_amd64/terraform-provider-azurerm_v4.2.0_x5.exe id=31232
2024-12-04T22:06:35.802+0100 [DEBUG] provider: plugin exited
2024-12-04T22:06:35.803+0100 [DEBUG] provider: plugin exited
maruizj added the bug label on Dec 4, 2024
the-technat (Collaborator) commented Dec 5, 2024

Thanks for reporting this!

In most cases a panic in the provider is related to a specific resource you are deploying with specific values. It would thus be helpful if you could share the TF code that you had in your planfile.

maruizj (Author) commented Dec 5, 2024

> Thanks for reporting this!
>
> In most cases a panic in the provider is related to a specific resource you are deploying with specific values. It would thus be helpful if you could share the TF code that you had in your planfile.

The thing is, @the-technat, the configuration is composed of modules and I cannot share all of it because it is proprietary. Below I share how we use the argocd resources, which is basically the examples from the provider site. Regarding the argocd provider, I use: argocd_project (3), argocd_application (around 20), argocd_repository (around 20), and argocd_cluster (3). These are deployed across different modules and different AKS resources, plus other ARM resources.

If the panic is not telling you anything specific, maybe you can suggest a way to rule out resources, so I can focus on the one causing it.
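One generic way to bisect a crash like this is to plan and apply with `-target` so that only a small set of resources is exercised at a time. A hedged sketch (the resource address below is a placeholder, not from the reporter's config):

```shell
# Plan and apply only a single suspect resource; the address is illustrative.
terraform plan -target='argocd_cluster.example' -out=bisect.plan
terraform apply bisect.plan

# Repeat, widening or shifting the target set, until the panic reproduces;
# the last resource added is the one triggering the crash.
```

Note that `-target` is intended for exceptional debugging situations like this one, not for routine applies.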

 + resource "argocd_project" "pre" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "pre"
          + namespace        = "argocd"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }

      + spec {
          + description    = "Project for production applications"
          + signature_keys = []
          + source_repos   = [
              + "*",
            ]

          + cluster_resource_whitelist {
              + group = "*"
              + kind  = "*"
            }
          + cluster_resource_whitelist {
              + group = "*"
              + kind  = "Service"
            }

          + destination {
                name      = null
              + namespace = "services"
              + server    = "https://aks-group.westeurope.azmk8s.io"
            }
...
          + namespace_resource_whitelist {
              + group = "*"
              + kind  = "ServiceAccount"
            }
...
          + role {
              + name     = "admin"
              + policies = [
                  + "p, proj:pre:admin, applications, override, pre/*, allow",
                  + "p, proj:pre:admin, applications, sync, pre/*, allow",
                  + "p, proj:pre:admin, clusters, get, pre/*, allow",
                  + "p, proj:pre:admin, repositories, create, pre/*, allow",
                  + "p, proj:pre:admin, repositories, delete, pre/*, allow",
                  + "p, proj:pre:admin, repositories, update, pre/*, allow",
                ]
            }
          + role {
              + name     = "app-deployer"
              + policies = [
                  + "p, proj:pre:app-deployer, applications, override, pre/*, allow",
                  + "p, proj:pre:app-deployer, applications, sync, pre/*, allow",
                  + "p, proj:pre:app-deployer, repositories, create, pre/*, allow",
                  + "p, proj:pre:app-deployer, repositories, update, pre/*, allow",
                ]
            }

          + sync_window {
              + applications = [
                  + "*",
                ]
              + clusters     = [
                  + "*",
                ]
              + duration     = "10m"
              + kind         = "allow"
              + manual_sync  = true
              + namespaces   = [
                  + "*",
                ]
              + schedule     = "* * * * *"
              + timezone     = "UTC"
            }
        }
    }

  + resource "argocd_application" "cc" {
      + cascade  = true
      + id       = (known after apply)
      + status   = (known after apply)
      + validate = true
      + wait     = false

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "live" = "true"
            }
          + name             = "application-name"
          + namespace        = (known after apply)
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }

      + spec {
          + project                = "pre"
          + revision_history_limit = 10

          + destination {
                name      = null
              + namespace = "service"
              + server    = "https://aks-group.westeurope.azmk8s.io:443"
            }

          + source {
              + path            = "k8s/uat"
              + repo_url        = "[email protected]"
              + target_revision = "main"
            }

          + sync_policy {
              + sync_options = [
                  + "CreateNamespace=true",
                ]

              + automated {
                  + allow_empty = false
                  + prune       = true
                  + self_heal   = true
                }
            }
        }
    }

  + resource "argocd_repository" "cc" {
      + connection_state_status = (known after apply)
      + id                      = (known after apply)
      + inherited_creds         = (known after apply)
      + insecure                = true
      + name                    = "application-name"
      + project                 = "pre"
      + repo                    = "[email protected]"
      + type                    = "git"
    }

  + resource "argocd_repository_certificate" "repo_one" {
      + id = (known after apply)

      + ssh {
          + cert_data    = ""
          + cert_info    = (known after apply)
          + cert_subtype = "ssh-ed25519"
          + server_name  = "git.domain.es"
        }
    }

the-technat (Collaborator) commented

Ah yeah, sorry for not being precise @maruizj. The crash logs tell me that the issue happens on this type conversion. So we are specifically looking for an argocd_cluster resource where TLS certificates for communication are configured.
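For context, the panic message comes from a direct Go type assertion on a nil interface value. A minimal sketch of the pattern (function names here are illustrative, not the provider's actual code):

```go
package main

import "fmt"

// mustExpand mimics the unsafe pattern: a direct type assertion that
// panics with "interface conversion: interface {} is nil, not
// map[string]interface {}" when v is nil.
func mustExpand(v interface{}) map[string]interface{} {
	return v.(map[string]interface{}) // panics when v is nil
}

// tryExpand uses the comma-ok form, which never panics and lets the
// caller treat a missing or empty block gracefully.
func tryExpand(v interface{}) map[string]interface{} {
	m, ok := v.(map[string]interface{})
	if !ok {
		return map[string]interface{}{}
	}
	return m
}

func main() {
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	fmt.Println("tryExpand(nil) ->", len(tryExpand(nil))) // 0, no panic
	mustExpand(nil)                                       // panics, caught above
}
```

A fix in the provider would presumably replace the direct assertion with the comma-ok form (or validate the config block before expanding it), so that an unset TLS config block produces a diagnostic instead of a crash.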

the-technat added the needs-more-info label on Dec 6, 2024
maruizj (Author) commented Dec 8, 2024

> Ah yeah, sorry for not being precise @maruizj. The crash logs tell me that the issue happens on this type conversion. So we are specifically looking for an argocd_cluster resource where TLS certificates for communication are configured.

Thank you very much @the-technat, this gives me more to go on.
