The geo-messenger is designed to function across geographies by definition. The following instructions show how to deploy multiple application instances across several distant Google Cloud regions: us-west2, us-central1, us-east4, europe-west3, and asia-east1. You're free to follow the instructions precisely by deploying application instances in all of those locations, or to skip as many locations as you like.
YugabyteDB Managed or self-managed YugabyteDB should be deployed in regions matching those selected for the application deployment.
- Google Cloud account
- Multi-region YugabyteDB Managed cluster or a self-managed cluster
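To confirm the cluster is reachable before you start, you can run a quick check with the `psql` client (a minimal sketch, assuming `psql` is installed locally; `{DB_HOST}`, `{DB_USER}`, and `{DB_PWD}` are placeholders for your cluster's values):

```shell
# YugabyteDB's YSQL API is PostgreSQL-compatible and listens on port 5433
psql "host={DB_HOST} port=5433 dbname=yugabyte user={DB_USER} password={DB_PWD} sslmode=require" \
    -c "SELECT version();"
```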
The microservice instances and Kong Gateway nodes are deployed across multiple cloud regions. The global cloud load balancer intercepts the user traffic at the PoP (point of presence) and forwards it to the nearest application instance. The Messaging microservice stores the application data in your YugabyteDB cluster (which you need to provision separately). The Attachments microservice uploads pictures to Google Cloud Storage.
Refer to the following articles for a detailed architectural overview:
- Automating Java Application Deployment Across Multiple Cloud Regions
- Geo-distributed API Layer With Kong Gateway
- Using Global Cloud Load Balancer to Route User Requests to App Instances
- Geo-Distributed Microservices and Their Database: Fighting the High Latency
- Navigate to the `gcloud` directory within the project structure:
  ```shell
  cd gcloud
  ```
- Log in under your account:
  ```shell
  gcloud auth login
  ```
- Create a new project for the app (use any other project name if `geo-distributed-messenger` is not available):
  ```shell
  gcloud projects create geo-distributed-messenger --name="Geo-Distributed Messenger"
  ```
- Set this new project as the default:
  ```shell
  gcloud config set project geo-distributed-messenger
  ```
- Open the Google Cloud Console and enable a billing account for the project: https://console.cloud.google.com
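  Alternatively, the billing account can be linked from the CLI with the `gcloud billing` command group (a sketch, assuming a recent gcloud release; `{BILLING_ACCOUNT_ID}` is a placeholder):
  ```shell
  # list the billing accounts you have access to
  gcloud billing accounts list

  # link one of them to the project
  gcloud billing projects link geo-distributed-messenger \
      --billing-account={BILLING_ACCOUNT_ID}
  ```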
This is an OPTIONAL step. Follow the steps below only if you need to run the Attachments service on your local machine and wish to store pictures in Google Cloud Storage. Otherwise, skip this section!
- Create the service account:
  ```shell
  gcloud iam service-accounts create google-storage-account

  gcloud projects add-iam-policy-binding geo-distributed-messenger \
      --member="serviceAccount:google-storage-account@geo-distributed-messenger.iam.gserviceaccount.com" \
      --role=roles/storage.admin

  gcloud projects add-iam-policy-binding geo-distributed-messenger \
      --member="serviceAccount:google-storage-account@geo-distributed-messenger.iam.gserviceaccount.com" \
      --role=roles/viewer
  ```
- Generate the key:
  ```shell
  cd {project_dir}/gcloud

  gcloud iam service-accounts keys create google-storage-account-key.json \
      --iam-account=google-storage-account@geo-distributed-messenger.iam.gserviceaccount.com
  ```
- Add a special environment variable. The Attachments service will use it while working with the Cloud Storage SDK:
  ```shell
  echo 'export GOOGLE_APPLICATION_CREDENTIALS={absolute_path_to_the_key}/google-storage-account-key.json' >> ~/.bashrc
  echo 'export GOOGLE_APPLICATION_CREDENTIALS={absolute_path_to_the_key}/google-storage-account-key.json' >> ~/.zshrc
  ```
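  To make sure the variable is picked up by new shells, you can reload the profile and check it (a quick sanity check, nothing more):
  ```shell
  source ~/.bashrc   # or: source ~/.zshrc

  # the variable should print the key's path, and the file should exist
  echo $GOOGLE_APPLICATION_CREDENTIALS
  ls -l $GOOGLE_APPLICATION_CREDENTIALS
  ```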
- Create the custom VPC network:
  ```shell
  gcloud compute networks create geo-messenger-network \
      --subnet-mode=custom
  ```
- Create subnets in 3 regions of the USA:
  ```shell
  gcloud compute networks subnets create us-central-subnet \
      --network=geo-messenger-network \
      --range=10.1.10.0/24 \
      --region=us-central1

  gcloud compute networks subnets create us-west-subnet \
      --network=geo-messenger-network \
      --range=10.1.11.0/24 \
      --region=us-west2

  gcloud compute networks subnets create us-east-subnet \
      --network=geo-messenger-network \
      --range=10.1.12.0/24 \
      --region=us-east4
  ```
- Create subnets in Europe and Asia:
  ```shell
  gcloud compute networks subnets create europe-west-subnet \
      --network=geo-messenger-network \
      --range=10.2.10.0/24 \
      --region=europe-west3

  gcloud compute networks subnets create asia-east-subnet \
      --network=geo-messenger-network \
      --range=10.3.10.0/24 \
      --region=asia-east1
  ```
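  To double-check the network layout, you can list the subnets that belong to the new VPC (an optional verification step):
  ```shell
  gcloud compute networks subnets list \
      --network=geo-messenger-network
  ```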
- Create a firewall rule to allow SSH connectivity to VMs within the VPC:
  ```shell
  gcloud compute firewall-rules create allow-ssh \
      --network=geo-messenger-network \
      --action=allow \
      --direction=INGRESS \
      --rules=tcp:22 \
      --target-tags=allow-ssh
  ```
  Note: agree to enable the `compute.googleapis.com` API if prompted.

- Create a health check rule to allow the global load balancer and Google Cloud health checks to communicate with backend instances on ports `80` and `443`:
  ```shell
  gcloud compute firewall-rules create allow-health-check-and-proxy \
      --network=geo-messenger-network \
      --action=allow \
      --direction=ingress \
      --target-tags=allow-health-check \
      --source-ranges=130.211.0.0/22,35.191.0.0/16 \
      --rules=tcp:80
  ```
- (Optional) For dev and testing purposes only, add the IPs of your personal laptop and other machines that need to communicate with the backend on port `80` (note, you need to replace `0.0.0.0/0` with your IP):
  ```shell
  gcloud compute firewall-rules create allow-http-my-machines \
      --network=geo-messenger-network \
      --action=allow \
      --direction=ingress \
      --target-tags=allow-http-my-machines \
      --source-ranges=0.0.0.0/0 \
      --rules=tcp:80
  ```
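  To review the rules created for the VPC, you can list them (an optional check; the `--filter` expression is one common way to scope the output):
  ```shell
  gcloud compute firewall-rules list \
      --filter="network:geo-messenger-network"
  ```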
This step is optional if you don't plan to change database connectivity settings at runtime. By default, the database settings are provided in the `application.properties` file along with other properties. The Runtime Configurator is useful when you need an instance of the Messaging microservice to connect to a specific database deployment or node from its region.
- Enable the Runtime Configurator APIs.
- Create a `RuntimeConfig` for the Messaging microservice:
  ```shell
  gcloud beta runtime-config configs create messaging-microservice-settings
  ```
An instance of the Messaging microservice subscribes for updates on the following configuration variables:
- `{REGION}/spring.datasource.url`
- `{REGION}/spring.datasource.username`
- `{REGION}/spring.datasource.password`
- `{REGION}/yugabytedb.connection.type`

where:
- `{REGION}` is the region the VM was started in. You provide the region name via the `-r` parameter of the `./create_instance_template.sh` script.
- `yugabytedb.connection.type` can be set to `standard`, `replica`, or `geo`. Refer to the section below for details.
Once an instance of the microservice is started, you can use the Runtime Configurator APIs to set and update those variables. As an example, this is how to update the database connectivity settings for all the VMs started in the `us-west2` region:
```shell
gcloud beta runtime-config configs variables set us-west2/spring.datasource.username \
    {NEW_DATABASE_USERNAME} --config-name messaging-microservice-settings --is-text

gcloud beta runtime-config configs variables set us-west2/spring.datasource.password \
    {NEW_DATABASE_PASSWORD} --config-name messaging-microservice-settings --is-text

gcloud beta runtime-config configs variables set us-west2/yugabytedb.connection.type standard \
    --config-name messaging-microservice-settings --is-text

gcloud beta runtime-config configs variables set us-west2/spring.datasource.url \
    {NEW_DATABASE_URL} --config-name messaging-microservice-settings --is-text
```
Note, the `spring.datasource.url` parameter MUST be updated last because the application logic watches for its changes.
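To confirm the values were applied, you can read them back with the same beta command group (an optional check):

```shell
# list all variable names in the config
gcloud beta runtime-config configs variables list \
    --config-name=messaging-microservice-settings

# read back a single value
gcloud beta runtime-config configs variables get-value us-west2/spring.datasource.url \
    --config-name=messaging-microservice-settings
```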
The Attachments microservice uploads pictures to Google Cloud Storage. Make sure the service is enabled at the project level.
Use the `gcloud/create_instance_template.sh` script to create an instance template for each region:
```shell
./create_instance_template.sh \
    -n {INSTANCE_TEMPLATE_NAME} \
    -i {PROJECT_ID} \
    -r {CLOUD_REGION_NAME} \
    -s {NETWORK_SUBNET_NAME} \
    -d {ENABLE_DYNAMIC_RUNTIME_CONFIGURATOR} \
    -a {APP_PORT_NUMBER} \
    -c "{DB_CONNECTION_ENDPOINT}" \
    -u {DB_USER} \
    -p {DB_PWD} \
    -m {DB_MODE} \
    -f {DB_SCHEMA_FILE}
```
where `DB_MODE` can be set to one of these values:
- `standard` - the data source is connected to a standard/regular node.
- `replica` - the connection goes via a replica node.
- `geo` - the data source is connected to a geo-partitioned cluster.
and `DB_SCHEMA_FILE` can be set to:
- `classpath:messenger_schema.sql` - a basic database schema with NO tablespaces and partitions.
- `classpath:messenger_schema_partitioned.sql` - a schema with tablespaces belonging to specific cloud regions and geo-partitions.
- Create templates for the US West, Central, and East regions:
  ```shell
  ./create_instance_template.sh \
      -n template-us-west \
      -i geo-distributed-messenger \
      -r us-west2 \
      -s us-west-subnet \
      -d false \
      -a 80 \
      -c "jdbc:postgresql://ADDRESS:5433/yugabyte?ssl=true&sslmode=require" \
      -u {DB_USER} \
      -p {DB_PWD} \
      -m standard \
      -f "classpath:messenger_schema.sql"

  ./create_instance_template.sh \
      -n template-us-central \
      -i geo-distributed-messenger \
      -r us-central1 \
      -s us-central-subnet \
      -d false \
      -a 80 \
      -c "jdbc:postgresql://ADDRESS:5433/yugabyte?ssl=true&sslmode=require" \
      -u {DB_USER} \
      -p {DB_PWD} \
      -m standard \
      -f "classpath:messenger_schema.sql"

  ./create_instance_template.sh \
      -n template-us-east \
      -i geo-distributed-messenger \
      -r us-east4 \
      -s us-east-subnet \
      -d false \
      -a 80 \
      -c "jdbc:postgresql://ADDRESS:5433/yugabyte?ssl=true&sslmode=require" \
      -u {DB_USER} \
      -p {DB_PWD} \
      -m standard \
      -f "classpath:messenger_schema.sql"
  ```
- Create a template for Europe:
  ```shell
  ./create_instance_template.sh \
      -n template-europe-west \
      -i geo-distributed-messenger \
      -r europe-west3 \
      -s europe-west-subnet \
      -d false \
      -a 80 \
      -c "jdbc:postgresql://ADDRESS:5433/yugabyte?ssl=true&sslmode=require" \
      -u {DB_USER} \
      -p {DB_PWD} \
      -m standard \
      -f "classpath:messenger_schema.sql"
  ```
- Create a template for Asia:
  ```shell
  ./create_instance_template.sh \
      -n template-asia-east \
      -i geo-distributed-messenger \
      -r asia-east1 \
      -s asia-east-subnet \
      -d false \
      -a 80 \
      -c "jdbc:postgresql://ADDRESS:5433/yugabyte?ssl=true&sslmode=require" \
      -u {DB_USER} \
      -p {DB_PWD} \
      -m standard \
      -f "classpath:messenger_schema.sql"
  ```
- Start an application instance in every region:
  ```shell
  gcloud compute instance-groups managed create ig-us-west \
      --template=template-us-west --size=1 --zone=us-west2-b

  gcloud compute instance-groups managed create ig-us-central \
      --template=template-us-central --size=1 --zone=us-central1-b

  gcloud compute instance-groups managed create ig-us-east \
      --template=template-us-east --size=1 --zone=us-east4-b

  gcloud compute instance-groups managed create ig-europe-west \
      --template=template-europe-west --size=1 --zone=europe-west3-b

  gcloud compute instance-groups managed create ig-asia-east \
      --template=template-asia-east --size=1 --zone=asia-east1-b
  ```
- (YugabyteDB Managed specific) Add the VMs' external IPs to the cluster's IP Allow list.
- Open Google Cloud Logging and wait while the VM finishes executing the `startup_script.sh` that sets up the environment and starts an application instance. It can take 5-10 minutes. Alternatively, check the status from the terminal:
  ```shell
  # find an instance name
  gcloud compute instances list --project=geo-distributed-messenger

  # connect to the instance
  gcloud compute ssh {INSTANCE_NAME} --project=geo-distributed-messenger

  # follow the startup script's output
  sudo journalctl -u google-startup-scripts.service -f
  ```
- Open the app by connecting to `http://{INSTANCE_EXTERNAL_IP}`. Use `[email protected]` and `password` as testing credentials. Note, you can find the external address by running this command:
  ```shell
  gcloud compute instances list
  ```
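  Before moving on, you can also confirm from the terminal that an instance responds over HTTP (an optional check; `/login` is the same path the load balancer's health check uses later):
  ```shell
  curl -s -o /dev/null -w "%{http_code}\n" http://{INSTANCE_EXTERNAL_IP}/login
  ```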
Now that the instances are up and running, configure a global load balancer that will forward user requests to the nearest instance.
Set named ports for every instance group, letting the load balancer know that the instances can process HTTP requests on port `80` (the groups above were created as managed instance groups, so the generic `set-named-ports` command applies):
```shell
gcloud compute instance-groups set-named-ports ig-us-west \
    --named-ports http:80 \
    --zone us-west2-b

gcloud compute instance-groups set-named-ports ig-us-central \
    --named-ports http:80 \
    --zone us-central1-b

gcloud compute instance-groups set-named-ports ig-us-east \
    --named-ports http:80 \
    --zone us-east4-b

gcloud compute instance-groups set-named-ports ig-europe-west \
    --named-ports http:80 \
    --zone europe-west3-b

gcloud compute instance-groups set-named-ports ig-asia-east \
    --named-ports http:80 \
    --zone asia-east1-b
```
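To verify the configuration, you can read the named ports back for any group (an optional check):

```shell
gcloud compute instance-groups get-named-ports ig-us-west \
    --zone us-west2-b
```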
Reserve an IP address that application users will use to reach the load balancer:
```shell
gcloud compute addresses create load-balancer-public-ip \
    --ip-version=IPV4 \
    --network-tier=PREMIUM \
    --global
```
- Create a health check for application instances:
  ```shell
  gcloud compute health-checks create http load-balancer-http-basic-check \
      --check-interval=20s --timeout=5s \
      --healthy-threshold=2 --unhealthy-threshold=2 \
      --request-path=/login \
      --port 80
  ```
- Create a backend service that selects a VM instance for serving a particular user request:
  ```shell
  gcloud compute backend-services create load-balancer-backend-service \
      --load-balancing-scheme=EXTERNAL_MANAGED \
      --protocol=HTTP \
      --port-name=http \
      --health-checks=load-balancer-http-basic-check \
      --global
  ```
- Add your instance groups as backends to the backend service:
  ```shell
  gcloud compute backend-services add-backend load-balancer-backend-service \
      --balancing-mode=UTILIZATION \
      --max-utilization=0.8 \
      --capacity-scaler=1 \
      --instance-group=ig-us-central \
      --instance-group-zone=us-central1-b \
      --global

  gcloud compute backend-services add-backend load-balancer-backend-service \
      --balancing-mode=UTILIZATION \
      --max-utilization=0.8 \
      --capacity-scaler=1 \
      --instance-group=ig-us-east \
      --instance-group-zone=us-east4-b \
      --global

  gcloud compute backend-services add-backend load-balancer-backend-service \
      --balancing-mode=UTILIZATION \
      --max-utilization=0.8 \
      --capacity-scaler=1 \
      --instance-group=ig-us-west \
      --instance-group-zone=us-west2-b \
      --global

  gcloud compute backend-services add-backend load-balancer-backend-service \
      --balancing-mode=UTILIZATION \
      --max-utilization=0.8 \
      --capacity-scaler=1 \
      --instance-group=ig-europe-west \
      --instance-group-zone=europe-west3-b \
      --global

  gcloud compute backend-services add-backend load-balancer-backend-service \
      --balancing-mode=UTILIZATION \
      --max-utilization=0.8 \
      --capacity-scaler=1 \
      --instance-group=ig-asia-east \
      --instance-group-zone=asia-east1-b \
      --global
  ```
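  Once the backends are attached and the health check has had time to probe them, you can confirm that every instance group reports a healthy instance (an optional check):
  ```shell
  gcloud compute backend-services get-health load-balancer-backend-service \
      --global
  ```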
- Create a default URL map to route all the incoming requests to the created backend service (in practice, you can define backend services and URL maps for different microservices):
  ```shell
  gcloud compute url-maps create load-balancer-url-map \
      --default-service load-balancer-backend-service
  ```
Create a user-facing frontend (an HTTP(S) proxy) that receives requests and forwards them to the backend service:
- Create a target HTTP proxy to route user requests to the backend's URL map:
  ```shell
  gcloud compute target-http-proxies create load-balancer-http-frontend \
      --url-map load-balancer-url-map \
      --global
  ```
- Create a global forwarding rule to route incoming requests to the proxy:
  ```shell
  gcloud compute forwarding-rules create load-balancer-http-frontend-forwarding-rule \
      --load-balancing-scheme=EXTERNAL_MANAGED \
      --network-tier=PREMIUM \
      --address=load-balancer-public-ip \
      --global \
      --target-http-proxy=load-balancer-http-frontend \
      --ports=80
  ```
After creating the global forwarding rule, it can take several minutes for your configuration to propagate worldwide.
- Find the public IP address of the load balancer:
  ```shell
  gcloud compute addresses describe load-balancer-public-ip \
      --format="get(address)" \
      --global
  ```
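  For convenience, you can capture the address in a shell variable and reuse it in the commands below (optional):
  ```shell
  LOAD_BALANCER_PUBLIC_IP=$(gcloud compute addresses describe load-balancer-public-ip \
      --format="get(address)" \
      --global)

  echo $LOAD_BALANCER_PUBLIC_IP
  ```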
- Send a request through the load balancer:
  ```shell
  curl -v http://{LOAD_BALANCER_PUBLIC_IP}
  ```
  Note, it can take several minutes before the load balancer's settings get propagated globally. Until this happens, the `curl` command might hit various HTTP errors.
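  If you'd rather not re-run the command by hand, a simple polling loop does the job (a sketch; it prints the HTTP status every 15 seconds, stop it with Ctrl+C once the responses stabilize):
  ```shell
  while true; do
      curl -s -o /dev/null -w "%{http_code}\n" http://{LOAD_BALANCER_PUBLIC_IP}/login
      sleep 15
  done
  ```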
Once the cloud load balancer is ready, use its IP address to access the application from the browser.
Use the following credentials to sign in to the messenger:
- username: `[email protected]`
- pwd: `password`
Follow this article to see how to test the app using the load balancer.
Enjoy and have fun!