This example deploys Trend Micro Vision One File Security on AWS EKS with full infrastructure automation.

```
┌──────────────────────────────────────────────────────────────────────────────────┐
│ AWS Cloud │
│ ┌────────────────────────────────────────────────────────────────────────────┐ │
│ │ VPC │ │
│ │ ┌──────────────────────────────────────────────────────────────────────┐ │ │
│ │ │ EKS Cluster │ │ │
│ │ │ │ │ │
│ │ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │ │
│ │ │ │ Scanner │ │ Scan Cache │ │ Backend │◀──┐ │ │ │
│ │ │ │ (gRPC) │ │ (Valkey) │ │Communicator │ │ │ │ │
│ │ │ └─────────────┘ └─────────────┘ └─────────────┘ │ │ │ │
│ │ │ ▲ │ │ │ │
│ │ │ │ ┌─────────────┐ │ │ │ │
│ │ │ │ │ Management │───▶┌──────────┐ │ │ │ │
│ │ │ │ │ Service │ │Database │ │ │ │ │
│ │ │ │ └─────────────┘ │(Postgres)│ │ │ │ │
│ │ │ │ ▲ └────┬─────┘ │ │ │ │
│ │ │ ┌─────────────┐ │ ┌────▼────┐ │ │ │ │
│ │ │ │ Ingress │─────────┘ │EBS CSI │ │ │ │ │
│ │ │ │ (ALB) │ │Driver │ │ │ │ │
│ │ │ └─────────────┘ └────┬────┘ │ │ │ │
│ │ │ ▲ │ │ │ │ │
│ │ │ │ ┌─────────────────┐ │ ┌─────┴─────┐ │ │ │
│ │ │ │ │ ALB Controller │ │ │Prometheus │ │ │ │
│ │ │ │◀────│ (manages ALB) │ │ │ Agent │ │ │ │
│ │ │ │ └─────────────────┘ │ └─────┬─────┘ │ │ │
│ │ │ │ │ │ scrape │ │ │
│ │ │ │ │ ┌─────▼─────┐ │ │ │
│ │ │ │ │ │ KSM │ │ │ │
│ │ │ │ │ │(kube-state│ │ │ │
│ │ │ │ │ │ -metrics) │ │ │ │
│ │ │ │ │ └───────────┘ │ │ │
│ │ └──────────│─────────────────────────────────│─────────────────────────┘ │ │
│ │ │ │ │ │
│ │ ┌─────────────────┐ ┌──────▼──────┐ │ │
│ │ │ Application │ │ EBS Volume │ │ │
│ │ │ Load Balancer │◀─ ACM Cert │ (Persistent)│ │ │
│ │ └─────────────────┘ └─────────────┘ │ │
│ │ ▲ │ │
│ └─────────────│──────────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌────────────────────┐ │
│ │ Route53 │ (Optional: auto-managed DNS records) │
│ │ Hosted Zone │ │
│ └────────────────────┘ │
└──────────────────────────────────────────────────────────────────────────────────┘
▲
│ HTTPS/gRPC
│
┌───────────────┐
│ Clients │
│ (Your Apps) │
└───────────────┘
```

Before you deploy, create the following resources:

| Resource | Description | How to Create |
|---|---|---|
| ACM Certificate | TLS certificate for HTTPS/gRPC | AWS Console → Certificate Manager → Request certificate |
| Route53 Hosted Zone | DNS zone for your domain (if using Route53) | AWS Console → Route53 → Create hosted zone |
| VPC & Subnets | Network infrastructure with at least 2 AZs | Use existing or create new VPC |
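
If you prefer the CLI over the console, the following is a minimal sketch for requesting a DNS-validated certificate; the domain name and region are placeholders for your own values:

```bash
# Request a DNS-validated certificate (domain and region are placeholders)
aws acm request-certificate \
  --domain-name scanner.example.com \
  --validation-method DNS \
  --region us-east-1

# Print the CNAME validation record you must add to your DNS provider
aws acm describe-certificate \
  --certificate-arn <certificate-arn-from-previous-command> \
  --query 'Certificate.DomainValidationOptions[0].ResourceRecord'
```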
⚠️ Important: The ACM certificate MUST be in `Issued` status before deployment. DNS validation records must be configured and verified.
- Same Region: Certificate must be in the same AWS region as your EKS cluster
- Domain Coverage: Certificate must cover your `v1fs_domain_name` (exact match or wildcard)
- Issued Status: Certificate must be fully validated and in `Issued` status
```bash
# Verify certificate status
aws acm describe-certificate \
  --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/xxx \
  --query 'Certificate.Status'

# Expected output: "ISSUED"
```
To obtain your Vision One registration token:
- Log in to the Vision One Console
- Navigate to: File Security → Containerized Scanner
- Click Add Scanner or Get Registration Token
- Copy the token (starts with `eyJ...`)
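
To keep the token out of version control, you can pass it to Terraform as an environment variable instead of writing it into `terraform.tfvars`; this relies on Terraform's standard `TF_VAR_` convention:

```bash
# Terraform reads TF_VAR_<name> as the value of the matching input variable
export TF_VAR_v1fs_registration_token="eyJ..."   # paste your actual token here
```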

Required tools:

| Name | Version |
|---|---|
| Terraform | >= 1.5.0 |
| AWS CLI | >= 2.0 |
| kubectl | >= 1.24 |
| Helm | >= 3.0 |
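
A quick way to confirm the tools are installed and meet the minimum versions:

```bash
terraform version
aws --version
kubectl version --client
helm version
```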
Copy the example variables file:
```bash
cp terraform.tfvars.example terraform.tfvars
```
Edit `terraform.tfvars` with your settings:
```hcl
# Required
aws_region = "us-east-1"
create_eks_cluster = true
eks_cluster_name = "visionone-filesecurity"
v1fs_domain_name = "scanner.example.com"
v1fs_registration_token = "eyJ..." # Your Vision One token
certificate_arn = "arn:aws:acm:us-east-1:123456789012:certificate/xxx"
# Network
subnet_ids = ["subnet-xxx", "subnet-yyy"] # At least 2 AZs
# Optional: Auto-manage Route53 DNS records
manage_route53_records = true
route53_zone_id = "Z0123456789ABC"
```

Initialize and apply the configuration:
```bash
terraform init
terraform plan
terraform apply
```

Configure kubectl and verify the deployment:
```bash
aws eks update-kubeconfig --name visionone-filesecurity --region us-east-1

# Check pods
kubectl get pods -n visionone-filesecurity

# Get ALB hostname
kubectl get ingress -n visionone-filesecurity
```
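
Optionally, wait for every pod in the namespace to report Ready before testing the endpoint; a small sketch, with the timeout adjusted to taste:

```bash
# Blocks until all pods in the namespace are Ready, or fails after 5 minutes
kubectl wait --for=condition=Ready pod --all -n visionone-filesecurity --timeout=300s
```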
If you already have an EKS cluster, set `create_eks_cluster = false`:
```hcl
create_eks_cluster = false
eks_cluster_name = "my-existing-cluster"
# Network settings for your existing cluster
vpc_id = "vpc-xxx"
subnet_ids = ["subnet-xxx", "subnet-yyy"]
```
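
Before toggling the flags below, you can check whether the controller and driver already exist in your cluster; a sketch that assumes the standard deployment name, add-on name, and labels used by the default installs:

```bash
# The AWS Load Balancer Controller is typically installed as this Deployment
kubectl get deployment -n kube-system aws-load-balancer-controller

# EBS CSI driver: check for the managed EKS add-on...
aws eks describe-addon --cluster-name my-existing-cluster --addon-name aws-ebs-csi-driver

# ...or look for self-managed driver pods
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver
```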
If your cluster already has the AWS Load Balancer Controller installed:
```hcl
create_alb_controller = false
```
If you need to install the ALB Controller but your subnets are shared with other clusters:
```hcl
create_alb_controller = true
manage_network_elb_tag = false # Don't manage shared elb tags
manage_network_cluster_tag = true  # Cluster-specific tag is safe
```
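
If you set `manage_network_elb_tag = false`, the ALB Controller still relies on the subnet discovery tags being present, so someone must apply them. A sketch of tagging public subnets yourself; the subnet IDs are placeholders, and private subnets would use `kubernetes.io/role/internal-elb` instead:

```bash
# Tag public subnets so the ALB Controller can discover them for internet-facing ALBs
aws ec2 create-tags \
  --resources subnet-xxx subnet-yyy \
  --tags Key=kubernetes.io/role/elb,Value=1
```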
If your cluster already has the EBS CSI Driver installed:
```hcl
create_ebs_csi_driver = false
create_ebs_csi_driver_iam_role = false
create_ebs_csi_driver_addon = false
# Still create StorageClass if needed
create_ebs_storage_class = true
```

Terraform requirements:

| Name | Version |
|---|---|
| terraform | >= 1.5.0 |
| aws | ~> 5.0 |
| helm | ~> 2.11 |
| kubernetes | ~> 2.23 |
| tls | ~> 4.0 |

Providers:

| Name | Version |
|---|---|
| aws | ~> 5.0 |
| kubernetes | ~> 2.23 |

Modules:

| Name | Source | Version |
|---|---|---|
| alb_controller | ./modules/alb-controller | n/a |
| ebs_csi_driver | ./modules/ebs-csi-driver | n/a |
| ebs_storage_class | ./modules/ebs-storage-class | n/a |
| eks | ./modules/eks | n/a |
| network | ./modules/network | n/a |
| v1fs | ../../modules/v1fs | n/a |

Resources and data sources:

| Name | Type |
|---|---|
| aws_route53_record.scanner | resource |
| aws_eks_cluster.existing | data source |
| aws_eks_cluster_auth.existing | data source |
| aws_iam_openid_connect_provider.existing | data source |
| aws_subnet.selected | data source |
| kubernetes_ingress_v1.scanner | data source |

Inputs:

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| additional_security_group_ids | Additional security group IDs to attach to EKS nodes | list(string) | [] | no |
| alb_controller_version | Version of AWS Load Balancer Controller Helm chart | string | "1.8.1" | no |
| alb_scheme | ALB scheme: internet-facing or internal | string | "internet-facing" | no |
| aws_profile | AWS CLI profile to use for authentication (e.g., 'dev', 'prod') | string | "" | no |
| aws_region | AWS region for resources | string | "us-east-1" | no |
| certificate_arn | YOU MUST create and validate this certificate BEFORE deploying. Requirements: - Certificate MUST be in 'Issued' status (not pending validation) - Certificate MUST be in the SAME AWS region as your EKS cluster - Certificate MUST cover your v1fs_domain_name (exact match or wildcard) - DNS validation records MUST be already configured in your DNS provider Steps to create: 1. Request certificate in AWS Certificate Manager (ACM) 2. Add DNS validation records to your DNS provider (Route53, Cloudflare, etc.) 3. Wait for certificate status to become 'Issued' 4. Copy the certificate ARN Example ARN: arn:aws:acm:us-east-1:123456789012:certificate/abc-def-123 To verify certificate status: aws acm describe-certificate --certificate-arn <arn> --region <region> | string | "" | no |
| create_alb_controller | Whether to create AWS Load Balancer Controller resources. Set to true for: - New EKS clusters - Existing clusters without ALB Controller Set to false for: - Existing clusters that already have ALB Controller installed | bool | true | no |
| create_alb_controller_helm_release | Whether to create Helm release for ALB Controller. Set to false if: - ALB Controller is already installed via Helm - You only need IAM role/ServiceAccount creation Only applicable when create_alb_controller = true. | bool | true | no |
| create_alb_controller_iam_role | Whether to create IAM role for ALB Controller. Set to false if: - Deploying to existing cluster with IAM role already configured - You want to use an externally managed IAM role Only applicable when create_alb_controller = true. | bool | true | no |
| create_alb_controller_service_account | Whether to create Kubernetes ServiceAccount for ALB Controller. Set to false if: - ServiceAccount already exists in the cluster - Using a pre-configured service account Only applicable when create_alb_controller = true. | bool | true | no |
| create_ebs_csi_driver | Whether to create EBS CSI driver resources. Set to true for: - New EKS clusters - Existing clusters without EBS CSI driver - Clusters requiring database persistent storage Set to false for: - Existing clusters that already have EBS CSI driver installed - Clusters using alternative storage solutions The EBS CSI driver is required for: - EKS clusters version 1.23 and later - Dynamic provisioning of EBS volumes (gp2, gp3, io1, io2) | bool | true | no |
| create_ebs_csi_driver_addon | Whether to create EBS CSI Driver EKS addon. Set to false if: - EKS addon already exists but you want to create StorageClass - Using an externally managed EBS CSI driver Only applicable when create_ebs_csi_driver = true. | bool | true | no |
| create_ebs_csi_driver_iam_role | Whether to create IAM role for EBS CSI Driver. Set to false if: - Deploying to existing cluster with IAM role already configured - You want to use an externally managed IAM role Only applicable when create_ebs_csi_driver = true. | bool | true | no |
| create_ebs_storage_class | Whether to create StorageClass for EBS volumes. Set to true if: - You need a StorageClass for database persistent storage - Your cluster has EBS CSI driver installed (either by this module or pre-existing) Set to false if: - StorageClass already exists in the cluster - You want to use an existing StorageClass Note: This is independent of create_ebs_csi_driver. You can create a StorageClass even if you're not creating the EBS CSI driver (e.g., when the driver is pre-installed). | bool | true | no |
| create_eks_cluster | Whether to create a new EKS cluster or use an existing one | bool | n/a | yes |
| create_v1fs_namespace | Whether to create the V1FS namespace. Set to false if using an existing namespace like 'default' or if the namespace was created elsewhere | bool | true | no |
| ebs_csi_driver_version | Version of the EBS CSI driver add-on. Leave as null to use the latest compatible version for your EKS cluster. To find available versions: aws eks describe-addon-versions --addon-name aws-ebs-csi-driver --kubernetes-version <version> Example versions: v1.28.0-eksbuild.1, v1.27.0-eksbuild.1 | string | null | no |
| ebs_encrypted | Whether to encrypt EBS volumes by default (recommended for production) | bool | true | no |
| ebs_volume_type | EBS volume type for the database StorageClass. Available types: - gp3: General Purpose SSD (recommended) - baseline 3,000 IOPS, 125 MB/s - gp2: General Purpose SSD (older) - burstable IOPS - io1: Provisioned IOPS SSD - for high performance requirements - io2: Provisioned IOPS SSD - higher durability than io1 | string | "gp3" | no |
| eks_cluster_endpoint_private_access | Enable private access to EKS cluster endpoint | bool | true | no |
| eks_cluster_endpoint_public_access | Enable public access to EKS cluster endpoint | bool | true | no |
| eks_cluster_name | Name of the EKS cluster (existing or to be created) | string | null | no |
| eks_cluster_version | Kubernetes version for EKS cluster (only used if creating new cluster) | string | "1.34" | no |
| enable_icap | [NOT PRODUCTION READY] Enable ICAP service with NLB. Currently, only gRPC protocol is supported and stable. ICAP support is under development and should remain disabled. Use gRPC protocol instead for production deployments. | bool | false | no |
| enable_v1fs_management | Whether to enable management service | bool | true | no |
| enable_v1fs_management_db | Whether to enable PostgreSQL database for management service | bool | false | no |
| enable_v1fs_scan_cache | Whether to enable scan result caching with Redis/Valkey | bool | true | no |
| enable_v1fs_scanner_autoscaling | Whether to enable autoscaling for scanner | bool | false | no |
| enable_v1fs_telemetry_kube_state_metrics | Whether to enable kube-state-metrics for Kubernetes resource metrics | bool | true | no |
| enable_v1fs_telemetry_prometheus | Whether to enable Prometheus agent for metrics collection and forwarding to Vision One | bool | true | no |
| icap_certificate_arn | ACM certificate ARN for ICAP TLS (optional) | string | "" | no |
| icap_port | Port for ICAP service | number | 1344 | no |
| manage_network_cluster_tag | Whether to manage kubernetes.io/cluster/<cluster_name> tag on subnets. Set to false if: - Subnets are shared with other EKS clusters - Tags are managed externally Only applicable when create_alb_controller = true. | bool | true | no |
| manage_network_elb_tag | Whether to manage kubernetes.io/role/elb (or internal-elb) tag on subnets. WARNING: This tag is shared across all clusters using the same subnets. If you destroy this module with manage_network_elb_tag = true, the tag will be removed and may affect ALB Controller in other clusters. Set to false if: - Subnets are shared with other EKS clusters that use ALB Controller - Tags are already configured on the subnets - Tags are managed externally (e.g., by infrastructure team) Only applicable when create_alb_controller = true. | bool | true | no |
| manage_route53_records | Whether to automatically create Route53 DNS records (OPTIONAL). This is a convenience feature for customers using Route53 as their DNS provider. Options: - false (default): You manage DNS records yourself (supports any DNS provider) - true: Terraform creates Route53 A (Alias) records for you Requirements when true: - You must provide route53_zone_id - Your domain must be managed by this Route53 hosted zone Use Cases: - Set to true: If you use Route53 and want convenience - Set to false: If you use other DNS providers (Cloudflare, GoDaddy, etc.) or prefer manual DNS management Note: Even if false, you still need to create DNS records manually after deployment to point your domain to the ALB. | bool | false | no |
| nlb_scheme | NLB scheme for ICAP: internet-facing or internal | string | "internet-facing" | no |
| node_disk_size | Disk size in GB for EKS nodes | number | 100 | no |
| node_group_desired_size | Desired number of nodes in the node group | number | 2 | no |
| node_group_max_size | Maximum number of nodes in the node group | number | 6 | no |
| node_group_min_size | Minimum number of nodes in the node group | number | 2 | no |
| node_group_type | Node group capacity type: on-demand or spot | string | "on-demand" | no |
| node_instance_types | Instance types for EKS node group | list(string) | [ | no |
| project_name | Project name to be used as a prefix for resources | string | "v1fs" | no |
| route53_zone_id | Route53 hosted zone ID (OPTIONAL - only needed if manage_route53_records = true). This is the zone where your domain_name will be created. For example, if your domain_name is "scanner.example.com", you need the zone ID for "example.com". How to find your zone ID: 1. AWS Console → Route53 → Hosted zones 2. Click on your domain zone 3. Copy the "Hosted zone ID" (e.g., Z0123456789ABC) Or use CLI: aws route53 list-hosted-zones --query "HostedZones[?Name=='example.com.'].Id" --output text Format: Alphanumeric string starting with Z (e.g., Z0123456789ABC) Note: Do NOT include the "/hostedzone/" prefix Leave empty if manage_route53_records = false | string | "" | no |
| set_storage_class_as_default | Whether to set the database StorageClass as the cluster default | bool | false | no |
| storage_class_reclaim_policy | Reclaim policy for the database StorageClass. WARNING: 'Delete' will permanently remove EBS volumes when PVC is deleted! - Retain (default): EBS volume is retained for manual cleanup - RECOMMENDED for production - Delete: EBS volume is deleted when PVC is deleted - Use with caution For production databases, keep the default 'Retain' to prevent accidental data loss. | string | "Retain" | no |
| subnet_ids | List of subnet IDs for EKS cluster (must be in at least 2 AZs) | list(string) | null | no |
| subnet_type | Type of subnets: public or private | string | "public" | no |
| tags | Additional tags to apply to all resources | map(string) | {} | no |
| v1fs_backend_communicator_cpu_request | CPU request for backend communicator pods | string | "250m" | no |
| v1fs_backend_communicator_memory_request | Memory request for backend communicator pods | string | "128Mi" | no |
| v1fs_chart_path | Local path to helm chart (relative to this example directory). Set to use local chart for development. | string | null | no |
| v1fs_database_cpu_limit | CPU limit for database container pods | string | "500m" | no |
| v1fs_database_cpu_request | CPU request for database container pods | string | "250m" | no |
| v1fs_database_memory_limit | Memory limit for database container pods | string | "1Gi" | no |
| v1fs_database_memory_request | Memory request for database container pods | string | "512Mi" | no |
| v1fs_database_persistence_size | Size of persistent volume for database (EBS) | string | "100Gi" | no |
| v1fs_database_storage_class_name | StorageClass name for database persistence | string | "visionone-filesecurity-storage" | no |
| v1fs_domain_name | YOU MUST own this domain and configure DNS to point to the ALB AFTER deployment. This domain will be used for: - Scanner gRPC endpoint (e.g., https://scanner.example.com) - Management service endpoint (e.g., https://scanner.example.com/ontap) - ALB Ingress host configuration Requirements: - Must be a valid FQDN you own and control - Must be covered by your ACM certificate (exact match or wildcard) - You will need to create DNS records AFTER deployment (see outputs for ALB hostname) DNS Configuration (POST-DEPLOYMENT): After running terraform apply, you will receive the ALB hostname in outputs. You must create a DNS record in your DNS provider: - Type: CNAME or A (Alias if using Route53) - Name: your v1fs_domain_name - Value: ALB hostname from terraform outputs Examples: - scanner.example.com - v1fs.yourdomain.com - file-security.internal.company.com | string | n/a | yes |
| v1fs_helm_chart_name | Name of the Vision One File Security Helm chart | string | "visionone-filesecurity" | no |
| v1fs_helm_chart_repository | Helm chart repository URL for Vision One File Security | string | "https://trendmicro.github.io/visionone-file-security-helm/" | no |
| v1fs_helm_chart_version | Version of Vision One File Security Helm chart | string | "1.4.2" | no |
| v1fs_image_pull_secrets | List of Kubernetes secret names for pulling images from private registries | list(string) | [] | no |
| v1fs_kube_state_metrics_cpu_limit | CPU limit for kube-state-metrics pods | string | "100m" | no |
| v1fs_kube_state_metrics_cpu_request | CPU request for kube-state-metrics pods | string | "50m" | no |
| v1fs_kube_state_metrics_memory_limit | Memory limit for kube-state-metrics pods | string | "128Mi" | no |
| v1fs_kube_state_metrics_memory_request | Memory request for kube-state-metrics pods | string | "64Mi" | no |
| v1fs_log_level | Log level for V1FS services (DEBUG, INFO, WARN, ERROR) | string | "INFO" | no |
| v1fs_management_cpu_request | CPU request for management service pods | string | "250m" | no |
| v1fs_management_memory_request | Memory request for management service pods | string | "256Mi" | no |
| v1fs_management_plugins | List of plugins to enable for management service. Each plugin requires ALL fields to be specified. Required fields for ontap-agent: - name (required) - Plugin identifier (e.g., "ontap-agent") - enabled (required) - Whether the plugin is enabled - configMapName (required) - Name of the ConfigMap for plugin configuration - securitySecretName (required) - Name of the Secret for security credentials - jwtSecretName (required) - Name of the Secret for JWT token Example (ontap-agent): v1fs_management_plugins = [ { name = "ontap-agent" enabled = true configMapName = "ontap-agent-config" securitySecretName = "ontap-agent-security" jwtSecretName = "ontap-agent-jwt" } ] | list(map(any)) | [] | no |
| v1fs_namespace | Kubernetes namespace for V1FS deployment | string | "visionone-filesecurity" | no |
| v1fs_prometheus_cpu_limit | CPU limit for Prometheus agent pods | string | "200m" | no |
| v1fs_prometheus_cpu_request | CPU request for Prometheus agent pods | string | "100m" | no |
| v1fs_prometheus_log_level | Log level for Prometheus agent (debug, info, warn, error) | string | "info" | no |
| v1fs_prometheus_memory_limit | Memory limit for Prometheus agent pods | string | "256Mi" | no |
| v1fs_prometheus_memory_request | Memory request for Prometheus agent pods | string | "128Mi" | no |
| v1fs_prometheus_scrape_interval | Prometheus scrape interval (e.g., '60s', '30s') | string | "60s" | no |
| v1fs_proxy_url | HTTP/HTTPS proxy URL for V1FS services (optional) | string | "" | no |
| v1fs_registration_token | Vision One File Security registration token for scanner authentication. This JWT token authenticates your scanner with Vision One cloud service. The token is region-specific and determines which Vision One region your scanner connects to. How to obtain: 1. Log in to Vision One console (https://portal.xdr.trendmicro.com/) 2. Navigate to: File Security → Containerized Scanner 3. Click "Add Scanner" or "Get Registration Token" 4. Copy the token (starts with "eyJ...") Security note: - This is a sensitive credential - Do NOT commit to version control - Store in terraform.tfvars (which should be .gitignored) - Or use environment variable: TF_VAR_v1fs_registration_token Token format: JWT string starting with "eyJ0eXAiOiJKV1Qi..." | string | n/a | yes |
| v1fs_scan_cache_cpu_request | CPU request for scan cache pods | string | "250m" | no |
| v1fs_scan_cache_memory_request | Memory request for scan cache pods | string | "512Mi" | no |
| v1fs_scanner_autoscaling_max_replicas | Maximum replicas for scanner autoscaling | number | 10 | no |
| v1fs_scanner_autoscaling_min_replicas | Minimum replicas for scanner autoscaling | number | 2 | no |
| v1fs_scanner_cpu_request | CPU request for scanner pods | string | "800m" | no |
| v1fs_scanner_memory_request | Memory request for scanner pods | string | "2Gi" | no |
| v1fs_scanner_replicas | Number of scanner replicas | number | 2 | no |
| vpc_id | VPC ID where EKS cluster will be deployed | string | "" | no |

Outputs:

| Name | Description |
|---|---|
| alb_controller_installed | Whether AWS Load Balancer Controller is installed |
| alb_controller_policy_arn | ARN of IAM policy for AWS Load Balancer Controller |
| alb_controller_role_arn | ARN of IAM role for AWS Load Balancer Controller |
| alb_controller_version | Version of AWS Load Balancer Controller |
| cluster_endpoint | Endpoint for EKS control plane |
| cluster_name | Name of the EKS cluster |
| cluster_security_group_id | Security group ID attached to the EKS cluster |
| cluster_version | Kubernetes version of the cluster |
| configure_kubectl | Command to configure kubectl |
| get_ingress_info | Command to get ingress information |
| get_nlb_dns | Command to get NLB DNS name for ICAP |
| icap_enabled | Whether ICAP service is enabled |
| icap_port | ICAP service port |
| management_service_enabled | Whether management service is enabled |
| management_service_endpoint | Management service endpoint URL |
| next_steps | Next steps after deployment |
| oidc_provider_arn | ARN of the OIDC Provider for EKS |
| route53_managed | Whether Route53 DNS records are managed by Terraform |
| route53_record_created | Whether Route53 record was successfully created |
| scanner_alb_hostname | ALB hostname for scanner service - USE THIS to create your DNS records |
| scanner_alb_zone_id | ALB hosted zone ID for Route53 Alias records |
| scanner_dns_fqdn | Fully qualified domain name for scanner |
| scanner_domain | Scanner service domain name |
| scanner_endpoint | Scanner service endpoint URL |
| subnet_ids | Subnet IDs used by EKS |
| subnet_type | Type of subnets - public or private |
| v1fs_chart_version | Version of the deployed V1FS Helm chart |
| v1fs_namespace | Kubernetes namespace for Vision One File Security |
| v1fs_release_name | Name of the V1FS Helm release |
| v1fs_release_version | Version of the deployed V1FS Helm release |
| vpc_id | VPC ID where EKS is deployed |
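
After `terraform apply`, the values above can be read back with `terraform output`; for example, to fetch the ALB hostname you need for your DNS record:

```bash
# Print individual outputs by name
terraform output scanner_alb_hostname
terraform output scanner_endpoint

# Or print the suggested follow-up actions
terraform output next_steps
```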
If you use Route53 for DNS, set `manage_route53_records = true` and provide your `route53_zone_id`. Terraform will automatically create the necessary A (Alias) records:
```hcl
manage_route53_records = true
route53_zone_id = "Z0123456789ABC"
```
If you use a different DNS provider (Cloudflare, GoDaddy, etc.), leave `manage_route53_records = false` (default) and manually create a CNAME record after deployment:
- Run `terraform apply`
- Get the ALB hostname from outputs: `terraform output scanner_alb_hostname`
- Create a CNAME record in your DNS provider:
  - Name: your `v1fs_domain_name` (e.g., `scanner`)
  - Value: the ALB hostname from the output
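
If your zone does live in Route53 but you prefer to manage the record yourself, the following is a sketch using the CLI; the zone ID, domain, and ALB hostname are placeholders:

```bash
# Upsert a CNAME pointing your scanner domain at the ALB hostname
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "scanner.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "<alb-hostname-from-terraform-output>"}]
      }
    }]
  }'
```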
To troubleshoot certificate problems:
```bash
# Verify certificate status
aws acm describe-certificate --certificate-arn <your-arn> --region <region>

# Check certificate covers your domain
aws acm describe-certificate --certificate-arn <your-arn> \
  --query 'Certificate.DomainValidationOptions'
```
To troubleshoot the ALB Controller and Ingress:
```bash
# Check ALB Controller pods
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
# View ALB Controller logs
kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller --tail=50
# Check ingress status
kubectl describe ingress -n visionone-filesecurity
```
To troubleshoot DNS resolution:
```bash
# Wait 2-5 minutes for DNS propagation, then test
nslookup scanner.example.com
# Verify Route53 record (if using Route53)
aws route53 list-resource-record-sets --hosted-zone-id <zone-id> \
--query "ResourceRecordSets[?Name=='scanner.example.com.']"terraform destroyNote: If
storage_class_reclaim_policy = "Retain", EBS volumes will not be automatically deleted. You must manually delete them from the AWS Console.
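
If you would rather clean up retained volumes from the CLI, the following is a sketch; the status filter only narrows the list to unattached volumes, so verify each volume ID belongs to this deployment before deleting:

```bash
# List unattached EBS volumes to identify the retained database volume
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[].{ID:VolumeId,Size:Size,Created:CreateTime}' \
  --output table

# Delete a volume only after confirming it is the right one (placeholder ID)
aws ec2 delete-volume --volume-id vol-0123456789abcdef0
```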