Conversation

@arthurlapertosa
Contributor

No description provided.

@arthurlapertosa arthurlapertosa marked this pull request as ready for review August 28, 2025 03:59
@arthurlapertosa arthurlapertosa requested review from a team, apeabody and ericyz as code owners August 28, 2025 03:59
@arthurlapertosa
Contributor Author

arthurlapertosa commented Aug 28, 2025

@apeabody could you please run the build for this PR?

@apeabody
Collaborator

/gcbrun

@arthurlapertosa
Contributor Author

@apeabody I don't have access to the GCP cloud build project. Could you please send me the error?

@apeabody
Collaborator

> @apeabody I don't have access to the GCP cloud build project. Could you please send me the error?

Error: Reference to undeclared input variable

  on ../../modules/beta-autopilot-private-cluster/cluster.tf line 72, in resource "google_container_cluster" "primary":
  72:       confidential_instance_type = lookup(var.node_pools[0], "confidential_instance_type", null)

An input variable with the name "node_pools" has not been declared. This
variable can be declared with a variable "node_pools" {} block.
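
For context, the autopilot cluster variants in this repo are generated from a shared autogen template and do not declare a node_pools variable, so the rendered beta-autopilot-private-cluster/cluster.tf ended up referencing an undeclared input. A minimal sketch of the kind of template guard that avoids this (the surrounding block and the local name are assumptions for illustration, not the exact module code) could look like:

  dynamic "confidential_nodes" {
    for_each = local.confidential_node_config  # assumed local name, for illustration only
    content {
      enabled = confidential_nodes.value.enabled
      {% if autopilot_cluster != true %}
      # node_pools is only declared in the standard (non-autopilot) variants
      confidential_instance_type = lookup(var.node_pools[0], "confidential_instance_type", null)
      {% endif %}
    }
  }

With a guard like this, the autopilot variants never render the lookup, so no node_pools declaration is needed there.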

@arthurlapertosa
Contributor Author

@apeabody could you please re-run the build?

@apeabody
Collaborator

/gcbrun

@arthurlapertosa
Contributor Author

@apeabody I think the build wasn't properly triggered, could you please take a look?

@apeabody
Collaborator

apeabody commented Sep 2, 2025

/gcbrun

@apeabody
Collaborator

apeabody commented Sep 2, 2025

> @apeabody I think the build wasn't properly triggered, could you please take a look?

Might have been triggered too soon after the merge; it's running now.

Collaborator

@apeabody apeabody left a comment


      content {
        enabled = confidential_nodes.value.enabled
        {% if autopilot_cluster != true %}
        confidential_instance_type = lookup(var.node_pools[0], "confidential_instance_type", null)

@apeabody
Collaborator

/gemini review


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new example for creating a GKE cluster with confidential nodes and GPUs. This is a valuable addition. The changes include modifications to several Terraform modules to support confidential_instance_type and guest_accelerator configurations, along with the new example files and corresponding integration tests. The implementation is mostly correct, but I've found a few issues related to version constraints, external dependencies, and a bug in the for_each logic that need to be addressed.

Comment on lines +1387 to +1388
        enabled                     = confidential_nodes.enabled
        confidential_instance_type = confidential_nodes.confidential_instance_type


high

There seems to be an issue with accessing the values from the for_each iterator. The iterator for a dynamic block is named after the block itself (confidential_nodes in this case), and you should use .value to access the current item. The correct way to access the properties of the object in the for_each list would be confidential_nodes.value.enabled and confidential_nodes.value.confidential_instance_type.

This issue is present in multiple files where this pattern is repeated, including cluster.tf and various cluster.tf files under modules/.

        enabled                    = confidential_nodes.value.enabled
        confidential_instance_type = confidential_nodes.value.confidential_instance_type
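
As a self-contained illustration of this iterator pattern (the variable, resource, and region names below are made up for the example, and confidential_instance_type inside confidential_nodes requires a google provider version that supports it):

  variable "confidential_node_config" {
    type = list(object({
      enabled                     = bool
      confidential_instance_type = string
    }))
    default = []
  }

  resource "google_container_cluster" "example" {
    name               = "example-cluster"
    location           = "us-central1-a"
    initial_node_count = 1

    dynamic "confidential_nodes" {
      for_each = var.confidential_node_config
      content {
        # The iterator shares the dynamic block's name; each element is exposed as .value
        enabled                     = confidential_nodes.value.enabled
        confidential_instance_type = confidential_nodes.value.confidential_instance_type
      }
    }
  }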

Comment on lines +107 to +108
kubectl_create_command = "kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/refs/heads/master/nvidia-driver-installer/cos/daemonset-confidential.yaml"
kubectl_destroy_command = "kubectl delete -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/refs/heads/master/nvidia-driver-installer/cos/daemonset-confidential.yaml"


high

The kubectl_create_command and kubectl_destroy_command use a URL pointing to the master branch of GoogleCloudPlatform/container-engine-accelerators. This is not a stable reference and can change at any time, which can break this example. It is a best practice to use a permalink to a specific commit hash or tag to ensure reproducibility and security.

  kubectl_create_command  = "kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/e0368140228308253634173809140953c0721245/nvidia-driver-installer/cos/daemonset-confidential.yaml"
  kubectl_destroy_command = "kubectl delete -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/e0368140228308253634173809140953c0721245/nvidia-driver-installer/cos/daemonset-confidential.yaml"

Collaborator

+1
