This repository contains modules used to instantiate the Source.Plus infrastructure. These modules are generally abstracted away from source.plus specifics, but are tailored to follow the source.plus infrastructure patterns.
The state and arguments provided to these modules live in a separate repo, `source-plus-terragrunt`. If you're looking to spin up new services, or to see how these modules are provisioned, you probably want that repo.
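As a rough sketch, a terragrunt unit in that repo points back at a module in this one. The repo URL and inputs below are placeholders, not real configuration; see `source-plus-terragrunt` for the actual units:

```hcl
# terragrunt.hcl — illustrative only; the source URL and inputs are placeholders.
terraform {
  # Double-slash separates the repo root from the module path within it.
  source = "git::git@github.com:example/terraform-modules.git//aws/ecr"
}

inputs = {
  name = "my-service"
}
```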
Each module has, at minimum, an `inputs.tf` and an `outputs.tf`. `inputs.tf` is where all input arguments are specified; `outputs.tf` specifies all return values. Additional `*.tf` files in a module hold the required resources, and are generally named after the resource they create, e.g. `ec2.tf` manages EC2 resources.
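As a sketch, a minimal module following this convention might look like the following. The module and its arguments are illustrative, not one of the real modules in this repo:

```hcl
# inputs.tf — all of the module's input arguments
variable "name" {
  description = "Name prefix applied to created resources"
  type        = string
}

variable "ami_id" {
  description = "AMI to launch the instance from"
  type        = string
}

# ec2.tf — named after the resources it manages
resource "aws_instance" "this" {
  ami           = var.ami_id
  instance_type = "t3.micro"

  tags = {
    Name = var.name
  }
}

# outputs.tf — all of the module's return values
output "instance_id" {
  value = aws_instance.this.id
}
```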
Modules are grouped into directories named after their respective providers.
## AWS

All of these modules are located in the `aws` directory.
- `aws_access_keys` - Creates an IAM user with the provided permissions, generates AWS Access Keys, and stores the access keys in AWS Secrets Manager. Mostly used for instantiating AWS Access Keys that must be used outside of AWS.
- `bastion` - Provisions an EC2 instance which behaves as a bastion host. The bastion host is configured with security group ids to determine which services it can access, as well as IP addresses to allow access from specific locations. It will also create an SSH key pair for the bastion host and store it in AWS Secrets Manager.
- `build_pipeline` - Manages an AWS CodeBuild pipeline for building, testing, and migrating services. The pipeline is triggered by a commit to the service's repository.
  - This is hardcoded for a monorepo with multiple services, since that's how the source.plus repository is organized. The path to a service is passed in via `service_path`.
  - The module also has a dedicated `migrate` stage for running database migrations on the service's database if required.
  - CodeBuild requires spec files to be provided to drive behaviors. These are located in the `files` subdirectory, and are generally named after the stage they're used in. They run Makefile commands, which are defined in the source.plus services.
- `cognito` - Provisions a Cognito user pool and client.
  - Accepts an `oauth_providers` argument to also configure Google OAuth.
- `ecr` - Elastic Container Registry definition.
- `ecs_consumer_service` - A Fargate ECS container service which runs a consumer container. The consumer container is configured to listen to a queue, with autoscaling rules based on multiple queue metrics, deployment rules with CodeBuild, and all related security groups.
- `ecs_funcs` (deprecated) - This module was meant to execute functions as ECS Fargate tasks, but it was abandoned when we decided to use long-running instances instead, as the amount of partitioning desired would have reached the Fargate task cap.
- `ecs_http_service` - A Fargate ECS container service which runs an HTTP application container. The application container is configured with autoscaling rules, a load balancer, CloudWatch alarms, and a Route53 subdomain.
  - Accepts an SSL certificate configured in ACM to provide HTTPS through SSL termination in the load balancer.
  - WAF can be configured to ignore some rules, as XSS rules will occasionally catch file uploads. Only tweak this if you know what you're doing.
- `efs` - Creates an Elastic File System within the given subnets and with the given access points.
- `elasticache` - Creates an ElastiCache Redis cluster. Hardcoded to a Redis cluster, which serves sharded primaries with a configurable number of replicas.
- `external_secrets_store` - A simple AWS Secrets Manager instance to manually load in secrets from other platforms.
  - An example is an API key for a system delivered to you via email. You can't reference that with terraform, so instead you create this external secrets container, drop the key in, and can then reference it elsewhere.
- `fargate_cluster` - Defines a Fargate ECS cluster.
- `kinesis_firehouse` - Configures Kinesis Firehose with an S3 location. Kinesis Firehose is used to record processed events so they can be replayed if needed.
- `monorepo_splitter` - The source.plus services repo is a monorepo, so getting each service to build from commits is kind of a pain. This module looks for git changes in the service directory and triggers a build for that service.
- `opensearch` - Defines an OpenSearch cluster.
- `populated_external_secrets_store` - Similar to `external_secrets_store`, but instead reads secrets from the arguments and populates them into the secrets store. This is useful for bootstrapping secrets from external services into AWS if you want them to be accessed via AWS Secrets Manager instead of provided through environment variables or terraform configurations.
- `rds` - Defines a serverless RDS Aurora Postgres instance, and optionally an RDS Proxy. All access credentials are created and stored in a dedicated AWS Secrets Manager instance.
- `route53_external_app` - Route53 configurations for applications outside of AWS. This is useful for configuring DNS for services that are not hosted in AWS, but need to be accessed by our services.
- `route53_rule` - A simple Route53 DNS record.
- `sagemaker_model` - Defines a SageMaker model and inference endpoints to call the model.
- `secrets_store` - An AWS Secrets Manager store which will generate random secrets. Can be used to generate salts or other similar values.
- `sns_sqs_policy` - Defines permissions for SNS topics to publish events to the given SQS queue.
- `standalone_queue_alarms` - Creates and attaches alarms to an existing queue.
  - This exists because we don't have a dedicated module for a queue, even though we use queues frequently. We use the default AWS module for a queue, which means the configurations for the queue alarms live in the consumer service (since they're closely related). These alarms are for queues which are not paired directly with an ECS consumer service.
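To make the `external_secrets_store` workflow concrete: once a secret has been manually dropped into the store, other configurations can read it back with the standard AWS provider data sources. The secret name below is illustrative:

```hcl
# Read a manually-populated secret back out of AWS Secrets Manager.
# "external/partner-api-key" is a placeholder name, not a real secret.
data "aws_secretsmanager_secret" "partner_api_key" {
  name = "external/partner-api-key"
}

data "aws_secretsmanager_secret_version" "partner_api_key" {
  secret_id = data.aws_secretsmanager_secret.partner_api_key.id
}

# data.aws_secretsmanager_secret_version.partner_api_key.secret_string can
# then be wired into other resources, e.g. a container definition's environment.
```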
## Cloudflare

All of these modules are located in the `cloudflare` directory.
- `firewall` - A simple list of Cloudflare firewall rules for a domain name.
- `r2_bucket` - A simple module to create a bucket in Cloudflare R2 storage.
## GCP

All of these modules are located in the `gcp` directory.
- `http_cluster` - Configures a load balancer, SSL termination, and instance group for GCP. Does not create and add instances to the group, only configures the group itself.
  - In the future, this module should be updated to create the instances as well, but for now it only manages the group.
## Stripe

All of these modules are located in the `stripe` directory.
- `stripe_webhook` - Configures a Stripe webhook endpoint.
- `stripe_metered_usage` (deprecated) - An attempt to manage Stripe metered products with terraform.
  - While it was convenient to define these with terraform and create the same configurations for prod/dev, updating products and prices frequently created issues with migrating customers.
- `stripe_flat_recurring` (deprecated) - An attempt to manage Stripe flat recurring products with terraform.
  - While it was convenient to define these with terraform and create the same configurations for prod/dev, updating products and prices frequently created issues with migrating customers.