
Plugin crashes when running README example #80

Open
BnMcG opened this issue May 9, 2021 · 5 comments
Labels: bug (Something isn't working), documentation (Improvements or additions to documentation)

Comments


BnMcG commented May 9, 2021

Hi,

I was giving this plugin a try this evening, but unfortunately the example in the README seems to crash with new versions of the Kafka provider:

terraform {
  required_providers {
    confluentcloud = {
      source = "Mongey/confluentcloud"
    }

    kafka = {
      source  = "Mongey/kafka"
      version = "0.3.1"
    }
  }
}


resource "confluentcloud_environment" "environment" {
  name = var.confluent_cloud_environment_name
}

resource "confluentcloud_kafka_cluster" "kafka_cluster" {
  name             = "foo"
  service_provider = "aws"
  region           = "eu-west-2"
  availability     = "LOW"
  environment_id   = confluentcloud_environment.environment.id
  deployment = {
    sku = "BASIC"
  }
  network_egress  = 100
  network_ingress = 100
  storage         = 5000
}

resource "confluentcloud_api_key" "credentials" {
  cluster_id     = confluentcloud_kafka_cluster.kafka_cluster.id
  environment_id = confluentcloud_environment.environment.id
}

locals {
  bootstrap_servers = [replace(confluentcloud_kafka_cluster.kafka_cluster.bootstrap_servers, "SASL_SSL://", "")]
}

provider "kafka" {
  bootstrap_servers = local.bootstrap_servers

  tls_enabled    = true
  sasl_username  = confluentcloud_api_key.credentials.key
  sasl_password  = confluentcloud_api_key.credentials.secret
  sasl_mechanism = "plain"
  timeout        = 10
}

resource "kafka_topic" "bar" {
  name               = "bar"
  replication_factor = 3
  partitions         = 1
  config = {
    "cleanup.policy"  = "delete"
    "retention.ms"    = 900000
    "retention.bytes" = 10485760
  }
}

When I run this on Terraform Cloud:

Terraform v0.15.3
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
╷
│ Error: Request cancelled
│ 
│   with module.confluent_cloud.kafka_topic.bar,
│   on ../modules/platform/modules/confluent_cloud/main.tf line 57, in resource "kafka_topic" "bar":
│   57: resource "kafka_topic" "bar" {
│ 
│ The plugin.(*GRPCProvider).PlanResourceChange request was cancelled.
╵

Stack trace from the terraform-provider-kafka_v0.3.1 plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xef5a7c]

goroutine 27 [running]:
github.com/Mongey/terraform-provider-kafka/kafka.NewClient(0xc00040a090, 0xc000489e70, 0x0, 0x42dd4a)
	/home/runner/work/terraform-provider-kafka/terraform-provider-kafka/kafka/client.go:36 +0x1bc
github.com/Mongey/terraform-provider-kafka/kafka.(*LazyClient).init.func1()
	/home/runner/work/terraform-provider-kafka/terraform-provider-kafka/kafka/lazy_client.go:23 +0x40
sync.(*Once).doSlow(0xc000110e70, 0xc000189160)
	/opt/hostedtoolcache/go/1.13.15/x64/src/sync/once.go:66 +0xe3
sync.(*Once).Do(...)
	/opt/hostedtoolcache/go/1.13.15/x64/src/sync/once.go:57
github.com/Mongey/terraform-provider-kafka/kafka.(*LazyClient).init(0xc000110e70, 0xfc9060, 0x1487d60)
	/home/runner/work/terraform-provider-kafka/terraform-provider-kafka/kafka/lazy_client.go:22 +0x3f8
github.com/Mongey/terraform-provider-kafka/kafka.(*LazyClient).CanAlterReplicationFactor(0xc000110e70, 0x11d52b3, 0x12, 0x1)
	/home/runner/work/terraform-provider-kafka/terraform-provider-kafka/kafka/lazy_client.go:101 +0x2f
github.com/Mongey/terraform-provider-kafka/kafka.customDiff(0xc000114d40, 0x114e300, 0xc000110e70, 0xc000604060, 0xc000114d40)
	/home/runner/work/terraform-provider-kafka/terraform-provider-kafka/kafka/resource_kafka_topic.go:305 +0xfe
github.com/hashicorp/terraform-plugin-sdk/helper/schema.schemaMap.Diff(0xc00036b710, 0xc000429ae0, 0xc000111ce0, 0x1218320, 0x114e300, 0xc000110e70, 0x14ccc00, 0xc000424300, 0x0, 0x0)
	/home/runner/go/pkg/mod/github.com/hashicorp/[email protected]/helper/schema/schema.go:509 +0xac2
github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Resource).simpleDiff(0xc000106a00, 0xc000429ae0, 0xc000111ce0, 0x114e300, 0xc000110e70, 0xc000111c01, 0xc000015790, 0x40d34d)
	/home/runner/go/pkg/mod/github.com/hashicorp/[email protected]/helper/schema/resource.go:351 +0x85
github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Provider).SimpleDiff(0xc000106b80, 0xc000015978, 0xc000429ae0, 0xc000111ce0, 0xc000122c60, 0xc000111ce0, 0x0)
	/home/runner/go/pkg/mod/github.com/hashicorp/[email protected]/helper/schema/provider.go:316 +0x99
github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin.(*GRPCProviderServer).PlanResourceChange(0xc00000ea40, 0x14cbcc0, 0xc000111680, 0xc00010e720, 0xc00000ea40, 0xc000111680, 0xc00008aa80)
	/home/runner/go/pkg/mod/github.com/hashicorp/[email protected]/internal/helper/plugin/grpc_provider.go:633 +0x765
github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5._Provider_PlanResourceChange_Handler(0x1180120, 0xc00000ea40, 0x14cbcc0, 0xc000111680, 0xc00010e6c0, 0x0, 0x14cbcc0, 0xc000111680, 0xc0002cc360, 0x112)
	/home/runner/go/pkg/mod/github.com/hashicorp/[email protected]/internal/tfplugin5/tfplugin5.pb.go:3171 +0x217
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0000e8000, 0x14da9c0, 0xc00040f500, 0xc000130800, 0xc00036b950, 0x1ca7a68, 0x0, 0x0, 0x0)
	/home/runner/go/pkg/mod/google.golang.org/[email protected]/server.go:995 +0x460
google.golang.org/grpc.(*Server).handleStream(0xc0000e8000, 0x14da9c0, 0xc00040f500, 0xc000130800, 0x0)
	/home/runner/go/pkg/mod/google.golang.org/[email protected]/server.go:1275 +0xd97
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc000488000, 0xc0000e8000, 0x14da9c0, 0xc00040f500, 0xc000130800)
	/home/runner/go/pkg/mod/google.golang.org/[email protected]/server.go:710 +0xbb
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/home/runner/go/pkg/mod/google.golang.org/[email protected]/server.go:708 +0xa1

Error: The terraform-provider-kafka_v0.3.1 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

I'm assuming this is something to do with the Confluent Cloud cluster not existing yet when the Kafka adapter attempts to connect to it, but I'm unsure of how to proceed. Do you have any pointers?

I'd like to be able to create a cluster and some topics in one Terraform plan, if possible.

Cheers.


BnMcG commented May 9, 2021

Initial testing suggests that downgrading to 0.2.11 (the version in the example) fixes this problem, but upgrading to 0.3.2 does not.
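For anyone else hitting this, a minimal sketch of that workaround, pinning the kafka provider back to 0.2.11 in the `required_providers` block from the configs in this thread (version numbers per the testing above):

```hcl
terraform {
  required_providers {
    kafka = {
      source = "Mongey/kafka"
      # Pin to the last version observed not to crash here;
      # 0.3.1 and 0.3.2 both reproduce the panic.
      version = "0.2.11"
    }
  }
}
```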

@Mongey added the bug and documentation labels on May 10, 2021

Mongey commented May 11, 2021

Thanks for reporting @BnMcG

This is actually more of an issue with terraform-provider-kafka. I fixed the crash, but didn't actually fix the underlying problem of supporting lazy provider initiation 😅


BnMcG commented May 11, 2021

No problem! I suspected that might be the case, but wasn't 100% sure. I'll test 0.3.3 now - or is lazy-provider initiation required? Sorry - I'm not too familiar with how Terraform providers work!


Mongey commented May 11, 2021

Lazy initiation is needed to provision the cluster and topic in one run, but if you break it into two different runs, it should work.

main.tf

terraform {
  required_providers {
    confluentcloud = {
      source = "Mongey/confluentcloud"
    }

    kafka = {
      source  = "Mongey/kafka"
      version = "0.3.1"
    }
  }
}


resource "confluentcloud_environment" "environment" {
  name = var.confluent_cloud_environment_name
}

resource "confluentcloud_kafka_cluster" "kafka_cluster" {
  name             = "foo"
  service_provider = "aws"
  region           = "eu-west-2"
  availability     = "LOW"
  environment_id   = confluentcloud_environment.environment.id
  deployment = {
    sku = "BASIC"
  }
  network_egress  = 100
  network_ingress = 100
  storage         = 5000
}

resource "confluentcloud_api_key" "credentials" {
  cluster_id     = confluentcloud_kafka_cluster.kafka_cluster.id
  environment_id = confluentcloud_environment.environment.id
}
  • terraform apply

  • Add back in the topic config

main.tf

terraform {
  required_providers {
    confluentcloud = {
      source = "Mongey/confluentcloud"
    }

    kafka = {
      source  = "Mongey/kafka"
      version = "0.3.1"
    }
  }
}


resource "confluentcloud_environment" "environment" {
  name = var.confluent_cloud_environment_name
}

resource "confluentcloud_kafka_cluster" "kafka_cluster" {
  name             = "foo"
  service_provider = "aws"
  region           = "eu-west-2"
  availability     = "LOW"
  environment_id   = confluentcloud_environment.environment.id
  deployment = {
    sku = "BASIC"
  }
  network_egress  = 100
  network_ingress = 100
  storage         = 5000
}

resource "confluentcloud_api_key" "credentials" {
  cluster_id     = confluentcloud_kafka_cluster.kafka_cluster.id
  environment_id = confluentcloud_environment.environment.id
}

locals {
  bootstrap_servers = [replace(confluentcloud_kafka_cluster.kafka_cluster.bootstrap_servers, "SASL_SSL://", "")]
}

provider "kafka" {
  bootstrap_servers = local.bootstrap_servers

  tls_enabled    = true
  sasl_username  = confluentcloud_api_key.credentials.key
  sasl_password  = confluentcloud_api_key.credentials.secret
  sasl_mechanism = "plain"
  timeout        = 10
}

resource "kafka_topic" "bar" {
  name               = "bar"
  replication_factor = 3
  partitions         = 1
  config = {
    "cleanup.policy"  = "delete"
    "retention.ms"    = 900000
    "retention.bytes" = 10485760
  }
}
  • terraform apply
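(For reference, a sketch of the two applies above as commands; using `-target` on the first run is a standard Terraform CLI alternative to temporarily removing the topic block, not something verified against this provider.)

```shell
# First run: create only the Confluent Cloud resources, so the cluster
# exists before the kafka provider ever tries to connect.
terraform init
terraform apply -target=confluentcloud_api_key.credentials

# Second run: with the cluster reachable, plan and create the topic.
terraform apply
```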

I'll work on fixing the lazy initiation over in terraform-provider-kafka


BnMcG commented May 11, 2021

I'm happy to have a crack at the lazy initiation if you have any pointers - I googled but nothing too comprehensive came up.
