Confluent Cloud Connectors #35

Open
rohits-dev opened this issue Oct 15, 2020 · 8 comments
Labels
enhancement New feature or request

Comments

@rohits-dev

Hello

Do you have any plans to add support for source / sink managed connectors to this provider?

Thanks

@Mongey added the enhancement (New feature or request) label on Oct 15, 2020
@rohits-dev
Author

@Mongey
Hi, I see you use https://github.com/cgroschupp/go-client-confluent-cloud.
How did you figure out the underlying Confluent API? I couldn't find any documentation for it.

@rjudin

rjudin commented Nov 26, 2020

@Mongey
Hi, I see you use https://github.com/cgroschupp/go-client-confluent-cloud.
How did you figure out the underlying Confluent API? I couldn't find any documentation for it.

Good question :)
In my understanding, instead of the direct path

  • tf-provider <-> ConfluentCloud_API

we are observing

  • tf-provider <-> go-client-of-ccloud <-> ConfluentCloud_API

Why? Because the currently available Confluent Cloud API methods sit behind the CLI (ccloud).
CLI documentation? Run the CLI in verbose mode, e.g. ccloud xx yy -vvvv, to see the underlying API calls it makes.

@mlovic-earnin

mlovic-earnin commented Jan 11, 2021

Confluent has now added a Connect API to their REST API documentation and marked it as released for General Availability. https://confluent.cloud/api/docs#tag/Connectors-(v1)

@askoriy
Contributor

askoriy commented Jan 15, 2021

There is a similar issue in the terraform-provider-kafka-connect repo:
Mongey/terraform-provider-kafka-connect#33

@albrechtflo-hg
Contributor

albrechtflo-hg commented May 18, 2021

I added the required functions to the base library and created a pull request for this provider: #82
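
For anyone following along, usage would presumably look something like the sketch below. This is only an illustration: the resource and attribute names are taken from the terraform plan output later in this thread, while the config keys, values, and the var.* variables are placeholders rather than a tested configuration.

  # Minimal sketch of the connector resource added in #82 (placeholder values).
  resource "confluentcloud_connector" "example" {
    environment_id = "env-ab123"       # Confluent Cloud environment ID
    cluster_id     = "lkc-51gd2"       # Kafka cluster the connector runs against
    name           = "connector-name"

    # Connector configuration is passed through as a flat map of strings.
    config = {
      "connector.class"  = "PubSubSource"
      "name"             = "connector-name"
      "kafka.api.key"    = var.kafka_api_key     # placeholder variable
      "kafka.api.secret" = var.kafka_api_secret  # placeholder variable
      "kafka.topic"      = "topic1"
      "tasks.max"        = "1"
    }
  }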

@askoriy
Contributor

askoriy commented Aug 3, 2021

@albrechtflo-hg I tried to create connectors with your implementation and found two bugs:

  1. There is no way to hide sensitive data when creating a connector; everything is printed as plain text.
    I think the solution could be to provide a separate map variable sensitive_config alongside config, and merge both maps into the final connector configuration (see the sketch after the plan output below).

  2. After creating the connector, the next run of terraform plan shows a diff:

  # confluentcloud_connector.connector will be updated in-place
  ~ resource "confluentcloud_connector" "connector" {
        cluster_id     = "lkc-51gd2"
      ~ config         = {
            "cloud.environment"            = "prod"
            "cloud.provider"               = "gcp"
            "connector.class"              = "PubSubSource"
            "errors.tolerance"             = "all"
          ~ "gcp.pubsub.credentials.json"  = <<~EOT
              - ****************
              + {
              +   "type": "service_account",
              +   "project_id": "test-project-1234",
              +   "private_key_id": "bd1a89a58853e9e819d3d99a26083aa6e39fe634",
              <...>
              + }
            EOT
            "gcp.pubsub.max.retry.time"    = "5"
            "gcp.pubsub.message.max.count" = "1000"
            "gcp.pubsub.project.id"        = "test-project-1234"
            "gcp.pubsub.subscription.id"   = "test-project-1234-subscription1"
            "gcp.pubsub.topic.id"          = "topic1"
            "internal.kafka.endpoint"      = "PLAINTEXT://kafka-0.kafka.pkc-4r297.svc.cluster.local:9071,kafka-1.kafka.pkc-4r297.svc.cluster.local:9071,kafka-2.kafka.pkc-4r297.svc.cluster.local:9071"
            "kafka.api.key"                = "****************"
            "kafka.api.secret"             = "****************"
            "kafka.dedicated"              = "false"
            "kafka.endpoint"               = "SASL_SSL://pkc-4r297.europe-west1.gcp.confluent.cloud:9092"
            "kafka.region"                 = "europe-west1"
            "kafka.topic"                  = "topic1"
            "name"                         = "connector-name"
          - "schema.registry.url"          = "https://psrc-lgy7n.europe-west3.gcp.confluent.cloud" -> null
            "tasks.max"                    = "1"
            "valid.kafka.api.key"          = "true"
        }
        environment_id = "env-ab123"
        id             = "connector-name"
        name           = "connector-name"
    }

Plan: 0 to add, 1 to change, 0 to destroy.

A further terraform apply doesn't resolve the difference.
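
To make the first point concrete, the split I have in mind would look roughly like this sketch. Note that sensitive_config is only the proposed attribute name from point 1 and does not exist in the provider yet; the keys, values, variables, and file path are placeholders based on the plan output above.

  resource "confluentcloud_connector" "connector" {
    environment_id = "env-ab123"
    cluster_id     = "lkc-51gd2"
    name           = "connector-name"

    # Non-sensitive settings; fine to show in plan output.
    config = {
      "connector.class"       = "PubSubSource"
      "name"                  = "connector-name"
      "gcp.pubsub.project.id" = "test-project-1234"
      "gcp.pubsub.topic.id"   = "topic1"
      "tasks.max"             = "1"
    }

    # Proposed (not implemented yet): merged into the final connector
    # configuration but masked in plan output and state.
    sensitive_config = {
      "kafka.api.key"               = var.kafka_api_key             # placeholder
      "kafka.api.secret"            = var.kafka_api_secret          # placeholder
      "gcp.pubsub.credentials.json" = file("gcp-credentials.json")  # placeholder path
    }
  }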

@askoriy
Contributor

askoriy commented Aug 6, 2021

I tried to address these problems in PRs #97 and #98.

@Samarthramesh5

Hello,
I need to use IAM roles for S3 authorization with Confluent Cloud. Is there anything for this in Terraform?
I don't want to use an access key and ID.
