elva-labs/terraform-ec2-example
EC2 Startup Example

A simple Terraform project that shows how to deploy an EC2 instance with proper IAM permissions, security groups, and initialization scripts.

What's in this repo?

  • EC2 Module (modules/ec2/) - A reusable module that creates:
    • EC2 instance with your chosen AMI and instance type
    • Security group with configurable ingress/egress rules
    • IAM role and instance profile with custom permissions
  • Main Configuration (main.tf) - Example usage of the EC2 module
  • Init Script (scripts/init.sh) - Bash script that runs when the instance starts
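A sketch of how main.tf might wire these together (the locals match the ones shown under "How to use"; the other input names are assumptions — check modules/ec2/variables.tf for the real interface):

```hcl
module "ec2" {
  source = "./modules/ec2"

  # From the locals block in main.tf
  vpc_id        = local.vpc_id
  subnet_id     = local.subnet_id
  key_pair_name = local.key_pair_name

  # Assumed input names, shown for illustration only
  ami_id        = "ami-xxxxx"
  instance_type = "t3.micro"

  ingress_rules   = []  # see "Change security group rules"
  iam_permissions = []  # see "Add IAM permissions"
}
```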

How to use

  1. Update configuration - Edit main.tf locals block with your AWS resource IDs:

    vpc_id        = "vpc-xxxxx"
    subnet_id     = "subnet-xxxxx"
    key_pair_name = "your-key-name"
  2. Initialize Terraform - Download providers and set up the backend:

    terraform init
  3. Review changes - See what will be created:

    terraform plan
  4. Deploy - Create the infrastructure:

    terraform apply
  5. SSH to instance - Connect using your key pair:

    ssh -i ~/.ssh/your-key.pem ubuntu@<instance-public-ip>

Understanding Terraform State

What is state?

Terraform state is a file that tracks which resources exist in AWS. It maps your .tf configuration to real AWS resources (EC2 instances, security groups, and so on).

Without state, Terraform would:

  • Not know what it created before
  • Try to create duplicate resources
  • Be unable to update or destroy existing infrastructure

Local vs Remote State

Local state (default):

  • Stored in terraform.tfstate file on your computer
  • ❌ Problem: All tracking is lost if the file is deleted
  • ❌ Problem: Can't easily be shared with team members
  • ❌ Problem: No backup if the file is corrupted

Remote state (this project uses S3):

  • Stored in an S3 bucket configured in providers.tf
  • ✅ Backed up and versioned automatically
  • ✅ Team members can access the same state
  • ✅ Lock file prevents concurrent changes
  • ✅ Works seamlessly in CI/CD pipelines
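The actual backend block lives in providers.tf; a sketch of what such an S3 backend configuration looks like (bucket name, key, and region here are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket       = "your-terraform-state-bucket"         # placeholder
    key          = "terraform-ec2-example/terraform.tfstate"
    region       = "eu-north-1"                          # placeholder
    use_lockfile = true  # S3-native state locking (Terraform >= 1.10)
  }
}
```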

How state works in CI

When CI/CD runs Terraform:

  1. Fetches current state from S3
  2. Compares state with your configuration
  3. Determines what needs to change
  4. Applies changes and updates state in S3

This means multiple CI jobs can safely work with the same infrastructure without conflicts.
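The steps above map onto a plain command sequence — the same commands you run locally, made non-interactive for CI:

```shell
terraform init -input=false               # 1. configures the S3 backend and fetches current state
terraform plan -input=false -out=tfplan   # 2-3. compares state with configuration, records the changes
terraform apply -input=false tfplan       # 4. applies the plan and writes updated state back to S3
```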

Setting up providers for multiple platforms

The .terraform.lock.hcl file ensures everyone uses the same provider versions. To support both ARM and x86 architectures:

terraform providers lock \
  -platform=darwin_arm64 \
  -platform=darwin_amd64 \
  -platform=linux_amd64 \
  -platform=linux_arm64

Commit the updated lock file so it works on all development machines and CI environments.

S3 Backend Access Requirements

To use the remote state backend, you need AWS permissions for:

Required IAM Policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::your-terraform-state-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::your-terraform-state-bucket"
    }
  ]
}

For CI/CD

Your CI service (GitHub Actions, GitLab CI, etc.) needs:

  • IAM user or role with the above S3 permissions
  • AWS credentials configured as environment variables:
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
    • Or use OIDC/IAM roles for more secure authentication
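For example, with static credentials (all values below are placeholders; prefer OIDC/IAM roles where your CI platform supports them):

```shell
# Placeholder values -- store these as secrets in your CI system, never commit them
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_DEFAULT_REGION="eu-north-1"  # assumption: use your backend's region
```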

Customizing the deployment

Change security group rules

Edit ingress_rules in main.tf:

ingress_rules = [
  {
    description = "SSH from my IP"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["1.2.3.4/32"]  # Your IP
  }
]

Add IAM permissions

Edit iam_permissions in main.tf:

iam_permissions = [
  {
    effect    = "Allow"
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::my-bucket/*"]
  }
]

Application setup with private repository

The init script clones your private repository and automatically runs setup.sh if it exists. This pattern keeps infrastructure code separate from application code.

How it works:

  1. EC2 instance boots up and runs scripts/init.sh
  2. Init script installs Git and AWS CLI
  3. Fetches GitHub PAT from SSM Parameter Store
  4. Clones https://github.com/elva-labs/coretura-private-repo-example to /opt/coretura-app
  5. If setup.sh exists in the repo, runs it automatically

Recommended pattern:

Create a setup.sh in your private repository with application-specific setup:

#!/bin/bash
# setup.sh in your private repository

# Install application dependencies (update the package index first)
apt-get update -y
apt-get install -y docker.io docker-compose python3-pip nginx

# Set up your application
pip3 install -r requirements.txt

# Configure and start services
docker-compose up -d
systemctl enable --now nginx

This approach keeps the Terraform init script minimal and version-controls your application setup alongside your application code.

See docs/SETUP_GITHUB_PAT.md for instructions on setting up GitHub PAT access.

Project structure

.
├── main.tf                  # Main configuration (edit this)
├── providers.tf             # AWS provider and S3 backend config
├── scripts/
│   └── init.sh             # Instance initialization script
└── modules/
    └── ec2/
        ├── ec2.tf          # EC2 instance resource
        ├── iam.tf          # IAM role and permissions
        ├── security_group.tf  # Security group rules
        ├── variables.tf    # Module inputs
        └── outputs.tf      # Module outputs

Cleanup

To destroy all resources:

terraform destroy

This removes the EC2 instance, security group, and IAM resources. The state file remains in S3 for historical reference.
