20 changes: 20 additions & 0 deletions catalog.yaml
@@ -225,6 +225,16 @@ topics:
description: Creating SQS
readme_url: https://raw.githubusercontent.com/cloudify-community/cloudify-catalog/master/aws/sqs/README.md

- id: K3S_HA_ON_EC2
name: K3S-HA-ON-EC2
path: tabs/aws/k3s_ha_on_ec2
    description: Create a K3S HA cluster on EC2 instances

- id: K3S_SINGLE_NODE_ON_EC2
name: K3S-SINGLE-NODE-ON-EC2
path: tabs/aws/k3s_single_node_on_ec2
    description: Create a single-node K3S cluster on EC2 instances

- id: tf_aws_vm
name: VM-Ubuntu-AWS-TFM
path: tabs/terraform/aws
@@ -827,6 +837,16 @@ topics:
path: tabs/kubernetes/spot
description: Create a Spot Ocean Kubernetes cost optimized Cluster

- id: K3S_HA_ON_EC2
name: K3S-HA-ON-EC2
path: tabs/kubernetes/k3s_ha_on_ec2
    description: Create a K3S HA cluster on EC2 instances

- id: K3S_SINGLE_NODE_ON_EC2
name: K3S-SINGLE-NODE-ON-EC2
path: tabs/kubernetes/k3s_single_node_on_ec2
    description: Create a single-node K3S cluster on EC2 instances

- name: K8S_Discovery
target_path: s3_json_file
blueprints:
71 changes: 71 additions & 0 deletions tabs/aws/k3s_ha_on_ec2/README.md
@@ -0,0 +1,71 @@
# ECE with Kubernetes cluster

These blueprints install a K3S cluster on top of VMs that simulate ECE devices.
There are two possible scenarios:
1. _single_ece_blueprint_ - installs one ECE (Master and Worker VMs) and a single-node K3S cluster on it.
2. _three_ece_blueprint_ - installs three ECEs (six VMs in total) and a K3S HA cluster on them.

## Requirements

### Secrets
Create the following secrets before deploying the blueprints (example below):
- aws_access_key_id
- aws_secret_access_key
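
For example, with your AWS credentials exported in the shell, both secrets can be created from the CLI (a sketch; substitute your own values):
```sh
cfy secrets create aws_access_key_id --secret-string "$AWS_ACCESS_KEY_ID"
cfy secrets create aws_secret_access_key --secret-string "$AWS_SECRET_ACCESS_KEY"
```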

### Plugins
Upload the following plugins before deploying the blueprints (example below):
- cloudify-aws-plugin
- cloudify-fabric-plugin
- cloudify-utilities-plugin
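
The plugins can be uploaded individually from their wagon and plugin.yaml files, or all at once via the official plugin bundle (the wagon file name below is a placeholder):
```sh
# Upload the whole official plugin bundle (includes the three plugins above)...
cfy plugins bundle-upload
# ...or upload a single plugin from its wagon and plugin.yaml
cfy plugins upload cloudify-aws-plugin.wgn -y plugin.yaml
```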

## How to install the deployment setup and deploy a K3S cluster on top of it

1. Verify that your Cloudify CLI points to the correct Cloudify Manager instance.
```sh
cfy profiles show-current
```
If not, set it using:
```sh
cfy profiles use (...)
```
2. Set your current directory to the main directory of the repo.
```sh
cd (...)/ece-kubernetes
```
3. Run the following commands to package and upload the necessary blueprints.
```sh
cd ..
rm -f ece-kubernetes.zip
zip -qr ece-kubernetes.zip ece-kubernetes
cd -
cfy blueprint upload -b network_blueprint development/resources/network_blueprint.yaml
cfy blueprint upload -b vm_blueprint development/resources/vm_blueprint.yaml
cfy blueprint upload -b ece_blueprint -n ece_blueprint.yaml development_blueprints.zip
cfy blueprint upload -b witness_blueprint -n witness_blueprint.yaml development_blueprints.zip
```
4. Install the desired setup (see the status check below).
```sh
cfy install -b single-ece-kubernetes -d single-ece-kubernetes -n single_ece_blueprint.yaml -i 'region_name=eu-west-1' ../ece-kubernetes.zip
# or
cfy install -b three-ece-kubernetes -d three-ece-kubernetes -n three_ece_blueprint.yaml -i 'region_name=eu-west-1' ../ece-kubernetes.zip
# or
TBD
```
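
Installation can take a while; once the workflow finishes, you can check the executions and read the cluster outputs (shown here for the HA scenario):
```sh
cfy executions list -d three-ece-kubernetes
cfy deployments outputs three-ece-kubernetes
```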

## How to uninstall the deployment setup

1. Verify that your Cloudify CLI points to the correct Cloudify Manager instance.
```sh
cfy profiles show-current
```
If not, set it using:
```sh
cfy profiles use (...)
```
2. Uninstall the main deployment and remove the blueprints (see the check below).
```sh
cfy uninstall (...)-ece-kubernetes
cfy blueprint delete network_blueprint
cfy blueprint delete vm_blueprint
cfy blueprint delete ece_blueprint
cfy blueprint delete witness_blueprint
```
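
To confirm the cleanup, list what is left on the manager:
```sh
cfy deployments list
cfy blueprints list
```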
182 changes: 182 additions & 0 deletions tabs/aws/k3s_ha_on_ec2/blueprint.yaml
@@ -0,0 +1,182 @@
tosca_definitions_version: cloudify_dsl_1_4
description: >
  The blueprint creates a VPC and all necessary network resources on AWS,
  spins up EC2 instances for three ECEs, and installs a K3S HA cluster
  on top of them.

imports:
- https://cloudify.co/spec/cloudify/6.4.0/types.yaml

inputs:
region_name:
type: string
display_label: AWS Region Name
default: eu-west-1

ssh_key_secret_name:
    description: Name of the secret that stores the SSH key
type: string
hidden: true
default: ec2_ssh_key

nodes_username:
type: string
hidden: true
default: ubuntu

  longhorn_port:
    type: string
    display_label: Longhorn Port
    description: Port on which Longhorn should be exposed.
    default: '8080'

node_templates:
network:
type: cloudify.nodes.ServiceComponent
properties:
resource_config:
blueprint:
external_resource: true
id: network_blueprint
deployment:
id: network
auto_inc_suffix: false
inputs:
region_name: { get_input: region_name }
ssh_key_secret_name: { get_input: ssh_key_secret_name }

ece_1:
type: cloudify.nodes.ServiceComponent
properties:
resource_config:
blueprint:
external_resource: true
id: ece_blueprint
deployment:
id: ece_1
auto_inc_suffix: false
inputs:
network_deployment_name: { get_attribute: [network, deployment, id] }
ece_no: '1'
relationships:
- type: cloudify.relationships.depends_on
target: network

ece_2:
type: cloudify.nodes.ServiceComponent
properties:
resource_config:
blueprint:
external_resource: true
id: ece_blueprint
deployment:
id: ece_2
auto_inc_suffix: false
inputs:
network_deployment_name: { get_attribute: [network, deployment, id] }
ece_no: '2'
relationships:
- type: cloudify.relationships.depends_on
target: network

ece_3:
type: cloudify.nodes.ServiceComponent
properties:
resource_config:
blueprint:
external_resource: true
id: ece_blueprint
deployment:
id: ece_3
auto_inc_suffix: false
inputs:
network_deployment_name: { get_attribute: [network, deployment, id] }
ece_no: '3'
relationships:
- type: cloudify.relationships.depends_on
target: network

k3s_ha:
type: cloudify.nodes.ServiceComponent
properties:
resource_config:
blueprint:
external_resource: false
id: k3s_ece_ha_embedded_db
blueprint_archive: k3s_blueprints.zip
main_file_name: ece_ha_embedded_db.yaml
deployment:
id: k3s_ece_ha_embedded_db
auto_inc_suffix: false
inputs:
longhorn_port: { get_input: longhorn_port }
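            # Assumption: each '<ssh_key_secret_name>_private' secret referenced
            # below is expected to hold the private SSH key created by the
            # network deployment (the SSH key secret name plus '_private').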
# ECE 1
ece_1_details:
endpoint_master:
get_attribute: [ece_1, capabilities, master, public_ip]
username_master: { get_input: nodes_username }
ssh_key_secret_name_master:
concat: [{ get_input: ssh_key_secret_name }, '_private']
private_ip_master:
get_attribute: [ece_1, capabilities, master, private_ip]
endpoint_worker:
get_attribute: [ece_1, capabilities, worker, public_ip]
username_worker: { get_input: nodes_username }
ssh_key_secret_name_worker:
concat: [{ get_input: ssh_key_secret_name }, '_private']
# ECE 2
ece_2_details:
endpoint_master:
get_attribute: [ece_2, capabilities, master, public_ip]
username_master: { get_input: nodes_username }
ssh_key_secret_name_master:
concat: [{ get_input: ssh_key_secret_name }, '_private']
private_ip_master:
get_attribute: [ece_2, capabilities, master, private_ip]
endpoint_worker:
get_attribute: [ece_2, capabilities, worker, public_ip]
username_worker: { get_input: nodes_username }
ssh_key_secret_name_worker:
concat: [{ get_input: ssh_key_secret_name }, '_private']
# ECE 3
ece_3_details:
endpoint_master:
get_attribute: [ece_3, capabilities, master, public_ip]
username_master: { get_input: nodes_username }
ssh_key_secret_name_master:
concat: [{ get_input: ssh_key_secret_name }, '_private']
private_ip_master:
get_attribute: [ece_3, capabilities, master, private_ip]
endpoint_worker:
get_attribute: [ece_3, capabilities, worker, public_ip]
username_worker: { get_input: nodes_username }
ssh_key_secret_name_worker:
concat: [{ get_input: ssh_key_secret_name }, '_private']
relationships:
- type: cloudify.relationships.depends_on
target: ece_1
- type: cloudify.relationships.depends_on
target: ece_2
- type: cloudify.relationships.depends_on
target: ece_3

labels:
csys-obj-type:
values:
- environment

outputs:
cluster_endpoint:
    description: The endpoint of the K3S cluster
value: { get_attribute: [k3s_ha, capabilities, cluster_endpoint] }

longhorn:
description: Longhorn endpoint
value: { get_attribute: [k3s_ha, capabilities, longhorn] }

cluster_k3s_token:
    description: Token used for adding master and worker nodes to the cluster
value: { get_attribute: [k3s_ha, capabilities, cluster_k3s_token] }

kubeconfig:
description: Kubeconfig file of the newly created cluster
value: { get_attribute: [k3s_ha, capabilities, kubeconfig] }
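
Once deployed, the kubeconfig output can be saved to a file and used with kubectl. A minimal sketch: the deployment name follows the README example, and the `--json` flag and jq parsing are assumptions about the CLI version:
```sh
# Save the kubeconfig output for kubectl use (assumes the blueprint was
# deployed as 'three-ece-kubernetes', JSON output is supported, and jq exists).
cfy deployments outputs three-ece-kubernetes --json | jq -r '.kubeconfig.value' > kubeconfig.yaml
kubectl --kubeconfig kubeconfig.yaml get nodes
```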
Binary file added tabs/aws/k3s_ha_on_ec2/development_blueprints.zip
Binary file added tabs/aws/k3s_ha_on_ec2/k3s_blueprints.zip
Binary file added tabs/aws/k3s_ha_on_ec2/logo.png
71 changes: 71 additions & 0 deletions tabs/aws/k3s_single_node_on_ec2/README.md
@@ -0,0 +1,71 @@
# ECE with Kubernetes cluster

These blueprints install a K3S cluster on top of VMs that simulate ECE devices.
There are two possible scenarios:
1. _single_ece_blueprint_ - installs one ECE (Master and Worker VMs) and a single-node K3S cluster on it.
2. _three_ece_blueprint_ - installs three ECEs (six VMs in total) and a K3S HA cluster on them.

## Requirements

### Secrets
Create the following secrets before deploying the blueprints (example below):
- aws_access_key_id
- aws_secret_access_key
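
For example, with your AWS credentials exported in the shell, both secrets can be created from the CLI (a sketch; substitute your own values):
```sh
cfy secrets create aws_access_key_id --secret-string "$AWS_ACCESS_KEY_ID"
cfy secrets create aws_secret_access_key --secret-string "$AWS_SECRET_ACCESS_KEY"
```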

### Plugins
Upload the following plugins before deploying the blueprints (example below):
- cloudify-aws-plugin
- cloudify-fabric-plugin
- cloudify-utilities-plugin
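
The plugins can be uploaded individually from their wagon and plugin.yaml files, or all at once via the official plugin bundle (the wagon file name below is a placeholder):
```sh
# Upload the whole official plugin bundle (includes the three plugins above)...
cfy plugins bundle-upload
# ...or upload a single plugin from its wagon and plugin.yaml
cfy plugins upload cloudify-aws-plugin.wgn -y plugin.yaml
```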

## How to install the deployment setup and deploy a K3S cluster on top of it

1. Verify that your Cloudify CLI points to the correct Cloudify Manager instance.
```sh
cfy profiles show-current
```
If not, set it using:
```sh
cfy profiles use (...)
```
2. Set your current directory to the main directory of the repo.
```sh
cd (...)/ece-kubernetes
```
3. Run the following commands to package and upload the necessary blueprints.
```sh
cd ..
rm -f ece-kubernetes.zip
zip -qr ece-kubernetes.zip ece-kubernetes
cd -
cfy blueprint upload -b network_blueprint development/resources/network_blueprint.yaml
cfy blueprint upload -b vm_blueprint development/resources/vm_blueprint.yaml
cfy blueprint upload -b ece_blueprint -n ece_blueprint.yaml development_blueprints.zip
cfy blueprint upload -b witness_blueprint -n witness_blueprint.yaml development_blueprints.zip
```
4. Install the desired setup (see the status check below).
```sh
cfy install -b single-ece-kubernetes -d single-ece-kubernetes -n single_ece_blueprint.yaml -i 'region_name=eu-west-1' ../ece-kubernetes.zip
# or
cfy install -b three-ece-kubernetes -d three-ece-kubernetes -n three_ece_blueprint.yaml -i 'region_name=eu-west-1' ../ece-kubernetes.zip
# or
TBD
```
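
Installation can take a while; once the workflow finishes, you can check the executions and read the cluster outputs (shown here for the single-ECE scenario):
```sh
cfy executions list -d single-ece-kubernetes
cfy deployments outputs single-ece-kubernetes
```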

## How to uninstall the deployment setup

1. Verify that your Cloudify CLI points to the correct Cloudify Manager instance.
```sh
cfy profiles show-current
```
If not, set it using:
```sh
cfy profiles use (...)
```
2. Uninstall the main deployment and remove the blueprints (see the check below).
```sh
cfy uninstall (...)-ece-kubernetes
cfy blueprint delete network_blueprint
cfy blueprint delete vm_blueprint
cfy blueprint delete ece_blueprint
cfy blueprint delete witness_blueprint
```
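
To confirm the cleanup, list what is left on the manager:
```sh
cfy deployments list
cfy blueprints list
```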