The blueprint in this repo allows managing Cloudify Manager instances (Tier 1 managers) using a master Cloudify Manager (Tier 2 manager).
In order to use the blueprint the following prerequisites are necessary:
- A working 4.3 (RC or GA) Cloudify Manager (this will be the Tier 2 manager). You need to be connected to this manager (cfy profiles use).
- An SSH key linked to a cloud keypair (this will be used to SSH into the Tier 1 VMs).
- A clone of this repo.
Optional:
- A pip virtualenv with wagon installed in it. This is only necessary if you're planning on building the CMoM plugin yourself (more on the plugin below).
Two plugins are required to use the blueprint - an IaaS plugin (currently only OpenStack is supported) and the Cloudify Manager of Managers (or CMoM for short) plugin (which is a part of this repo).
First, upload the IaaS plugin to the manager, e.g. run:
cfy plugins upload <WAGON_URL> -y <YAML_URL>
Second, you'll need to create a Wagon from the CMoM plugin. Run the following (assuming you're inside the manager-of-managers folder):
wagon create -f plugins/cmom -o <CMOM_WAGON_OUTPUT>
Now upload this plugin as well:
cfy plugins upload <CMOM_WAGON_OUTPUT> -y plugins/cmom/plugin.yaml
In addition, a few files need to be present on the Tier 2 manager. All of those files need to be accessible by Cloudify's user - cfyuser. Because of this, it is advised to place them in /etc/cloudify and make sure they are chowned by cfyuser (e.g. chown cfyuser: /etc/cloudify/filename).
The files are:
- The private SSH key connected to the cloud keypair. Its input is ssh_private_key_path.
- The install RPM (its world-accessible URL will be provided separately). Its input is install_rpm_path.
- The CA certificate and key. Those will be used by the Tier 2 manager to connect to the Tier 1 managers, as well as for generating the Tier 1 managers' external certificates. The inputs are ca_cert and ca_key.
In summary, the relevant part of the inputs section should look like this:
inputs:
  ca_cert: /etc/cloudify/ca_certificate.pem
  ca_key: /etc/cloudify/ca_key.pem
  ssh_private_key_path: /etc/cloudify/ssh_key
  install_rpm_path: /etc/cloudify/cloudify-manager-install.rpm
Now all that is left is to edit the inputs file (you can copy the sample_inputs file and edit it - see the inputs section below for a full explanation), and run:
cfy install blueprint.yaml -b <BLUEPRINT_NAME> -d <DEPLOYMENT_ID> -i <INPUTS_FILE>
To get the outputs of the installation (currently the IPs of the master and slave Cloudify Managers) run:
cfy deployments outputs <DEPLOYMENT_ID>
Below is a list with explanations for all the inputs necessary
for the blueprint to function. Much of this is mirrored in the
sample_inputs file.
Currently only OpenStack is supported as the platform for this blueprint. Implementations for other IaaSes will follow.
- os_image - OpenStack image name or ID to use for the new server
- os_flavor - OpenStack flavor name or ID to use for the new server
- os_network - OpenStack network name or ID the new server will be connected to
- os_keypair - OpenStack key pair name or ID of the key to associate with the new server
- os_security_group - The name or ID of the OpenStack security group the new server will connect to
- os_server_group_policy - The policy to use for the server group
- os_username - Username to authenticate to OpenStack with
- os_password - OpenStack password
- os_tenant - Name of OpenStack tenant to operate on
- os_auth_url - Authentication URL for KeyStone
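For illustration, a sketch of how these inputs might look in the inputs file. All values below are placeholders or assumptions (e.g. the anti-affinity policy and the KeyStone URL), not values taken from this repo:
inputs:
  os_image: <IMAGE_NAME_OR_ID>
  os_flavor: <FLAVOR_NAME_OR_ID>
  os_network: <NETWORK_NAME_OR_ID>
  os_keypair: <KEYPAIR_NAME_OR_ID>
  os_security_group: <SECURITY_GROUP_NAME_OR_ID>
  os_server_group_policy: anti-affinity                 # assumption: a commonly used server group policy
  os_username: <OPENSTACK_USERNAME>
  os_password: <OPENSTACK_PASSWORD>
  os_tenant: <TENANT_NAME>
  os_auth_url: https://openstack.example.com:5000/v3    # placeholder KeyStone URL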
There are currently 3 supported ways to assign the manager's IP. To toggle between the different modes, leave only one of the corresponding import lines in infra uncommented - only one of private_ip.yaml, floating_ip.yaml or private_fixed_ip.yaml should be imported.
Important: exactly one of the files mentioned above must be imported, otherwise the blueprint will not work.
The 3 modes are:
- Using the FloatingIP mechanism. This requires providing a special input (see the example after this list):
  - os_floating_network - The name or ID of the OpenStack network to use for allocating floating IPs
- Using only an internal network, without a floating IP. This requires creating a new port, which is assumed to be connected to an existing subnet; thus a special input is needed:
  - os_subnet - OpenStack name or ID of the subnet that's connected to the network that is to be used by the manager
- Using a resource pool of IPs and hostnames that is known in advance. As in the previous mode, this requires creating a new port. This method also creates a "resource pool" object, which holds a list of resources and allocates them as the need arises. The inputs for this mode are:
  - os_subnet - Like in the above mode
  - resource_pool - A list of resources from which the IP addresses and the hostnames should be chosen. The format should be as follows:
resource_pool:
  - ip_address: <IP_ADDRESS_1>
    hostname: <HOSTNAME_1>
  - ip_address: <IP_ADDRESS_2>
    hostname: <HOSTNAME_2>
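For the FloatingIP mode mentioned above, the extra input is a single value - a hedged sketch with a placeholder:
inputs:
  os_floating_network: <FLOATING_NETWORK_NAME_OR_ID>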
The following inputs are only relevant in KeyStone v3 environments:
- os_region - OpenStack region to use
- os_project - Name of OpenStack project (tenant) to operate on
- os_project_domain - The name of the OpenStack project domain to use
- os_user_domain - The name of the OpenStack user domain to use
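A sketch of these inputs with placeholder values (the domain names are assumptions - "default" is merely a common choice):
inputs:
  os_region: <REGION_NAME>
  os_project: <PROJECT_NAME>
  os_project_domain: default    # assumption
  os_user_domain: default       # assumption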
When working with block storage devices (e.g. Cinder volumes) there is a special input that needs to be provided:
- os_device_mapping - this is a list of volumes as defined by the OpenStack block device mapping API. An example input would look like this:
os_device_mapping:
  - boot_index: "0"
    uuid: "41a1f177-1fb0-4708-a5f1-64f4c88dfec5"
    volume_size: 30
    source_type: image
    destination_type: volume
    delete_on_termination: true
Where uuid is the UUID of the OS image that should be used when
creating the volume.
Note: When using the os_device_mapping input, the os_image input should be left empty.
Other potential inputs (for example, with subnet names, CIDRs etc.) might be added later.
These are general inputs necessary for the blueprint:
- install_rpm_path - as specified above
- ca_cert - as specified above
- ca_key - as specified above
- manager_admin_password - as specified above
- manager_admin_username - the admin username for the Tier 1 managers (default: admin)
- num_of_instances - the number of Tier 1 instances to be created (default: 2). This affects the size of the HA cluster.
- ssh_user - User name used when SSH-ing into the Tier 1 manager VMs
- ssh_private_key_path - as described above
- additional_config - An arbitrary dictionary which should mirror the structure of config.yaml. It will be merged (while taking precedence) with the config as described in the cloudify.nodes.CloudifyTier1Manager type in the plugin.yaml file. Whenever possible, the inputs in the blueprint.yaml file should be used. For example:
inputs:
  additional_config:
    sanity:
      skip_sanity: true
    restservice:
      log:
        level: DEBUG
    mgmtworker:
      log_level: DEBUG
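To tie the general inputs together, here is a hedged sketch of how they might look. The ssh_user value and the instance count are assumptions (they depend on your image and desired cluster size), and the paths mirror the files section above:
inputs:
  install_rpm_path: /etc/cloudify/cloudify-manager-install.rpm
  ca_cert: /etc/cloudify/ca_certificate.pem
  ca_key: /etc/cloudify/ca_key.pem
  ssh_private_key_path: /etc/cloudify/ssh_key
  manager_admin_username: admin            # default
  manager_admin_password: <ADMIN_PASSWORD>
  num_of_instances: 3                      # assumption; default is 2
  ssh_user: centos                         # assumption; depends on the Tier 1 VM image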
Inside ldap_inputs.yaml a datatype for the LDAP inputs is defined; using it is recommended for convenience.
All the following inputs need to reside under ldap_config in
the inputs file:
- server - The LDAP server address to authenticate against
- username - LDAP admin username. This user needs to be able to make requests against the LDAP server
- password - LDAP admin password
- domain - The LDAP domain to be used by the server
- dn_extra - Extra LDAP DN options, separated by the ; sign (e.g. a=1;b=2). Useful, for example, when it is necessary to provide an organization ID.
- is_active_directory - Specify whether the LDAP server used for authentication is an Active Directory server.
The actual input should look like this:
inputs:
  ldap_config:
    server: SERVER
    username: USERNAME
    password: PASSWORD
    ...
NOTE: When mentioning local paths in the context of the inputs below, the paths in question are paths on the Tier 2 manager. So if it is desirable to upload files from a local location, these files need to be present on the Tier 2 manager in advance. Otherwise, URLs may be used freely.
It is possible to create/upload certain types of resources on the Tier 1 cluster after installation. Those are:
- tenants - a list of tenants to create after the cluster is installed. The format is:
inputs:
  tenants:
    - <TENANT_NAME_1>
    - <TENANT_NAME_2>
- plugins - a list of plugins to upload after the cluster is installed. The format is:
inputs:
  plugins:
    - wagon: <WAGON_1>
      yaml: <YAML_1>
      tenant: <TENANT_1>
    - wagon: <WAGON_2>
      yaml: <YAML_2>
      visibility: <VIS_2>
Where:
- WAGON is either a URL of a Cloudify plugin wagon (e.g. openstack.wgn), or a local (i.e. on the Tier 2 manager) path to such a wagon (required)
- YAML is the plugin's plugin.yaml file - again, either a URL or a local path (required)
- TENANT is the tenant to which the plugin will be uploaded (the tenant needs to already exist on the manager - use the above tenants input to create any tenants in advance). (Optional - default is default_tenant)
- VISIBILITY defines who can see the plugin - must be one of [private, tenant, global] (Optional - default is tenant)
Both WAGON and YAML are required fields.
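To make the optional fields concrete, here is a hedged sketch. The paths are hypothetical locations on the Tier 2 manager, and tenant_1 is assumed to have been created via the tenants input above:
inputs:
  plugins:
    - wagon: /etc/cloudify/cloudify_openstack_plugin.wgn    # hypothetical local path
      yaml: /etc/cloudify/openstack-plugin.yaml             # hypothetical local path
      tenant: tenant_1                                       # optional; default is default_tenant
      visibility: tenant                                     # optional; one of private/tenant/global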
- secrets - a list of secrets to create after the cluster is installed. The format is:
inputs:
  secrets:
    - key: <KEY_1>
      string: <STRING_1>
      file: <FILE_1>
      visibility: <VISIBILITY_1>
Where:
- KEY is the name of the secret, which will then be used by other blueprints via the intrinsic get_secret function (required)
- STRING is the string value of the secret [mutually exclusive with FILE]
- FILE is a local path to a file whose contents should be used as the secret's value [mutually exclusive with STRING]
- VISIBILITY defines who can see the secret - must be one of [private, tenant, global] (Optional - default is tenant)
KEY is a required field, as well as one (and only one) of STRING or FILE.
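A hedged sketch showing one secret defined from a string and one from a file (the key names and the path are hypothetical):
inputs:
  secrets:
    - key: openstack_password
      string: <SECRET_VALUE>               # string and file are mutually exclusive
    - key: external_ca_cert
      file: /etc/cloudify/some_ca.pem      # hypothetical path on the Tier 2 manager
      visibility: global                   # optional; default is tenant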
- blueprints - a list of blueprints to upload after the cluster is installed. The format is:
inputs:
  blueprints:
    - path: <PATH_1>
      id: <ID_1>
      filename: <FILENAME_1>
      tenant: <TENANT_1>
      visibility: <VISIBILITY_1>
Where:
- PATH can be either a local blueprint yaml file, a blueprint archive or a URL to a blueprint archive (required)
- ID is the unique identifier for the blueprint (if not specified, the name of the blueprint folder/archive will be used)
- FILENAME is the name of an archive's main blueprint file. Only relevant when uploading an archive
- TENANT is the tenant to which the blueprint will be uploaded (the tenant needs to already exist on the manager - use the above tenants input to create any tenants in advance). (Optional - default is default_tenant)
- VISIBILITY defines who can see the blueprint - must be one of [private, tenant, global] (Optional - default is tenant)
- scripts - a list of scripts to run after the manager's installation. All these scripts need to be available on the Tier 2 manager and accessible by cfyuser. These scripts will be executed after the manager is installed but before the cluster is created. The format is:
scripts:
  - <PATH_TO_SCRIPT_1>
  - <PATH_TO_SCRIPT_2>
- files - a list of files to copy to the Tier 1 managers from the Tier 2 manager after the Tier 1 managers' installation. All these files need to be available on the Tier 2 manager and accessible by cfyuser. These files will be copied after the manager is installed but before the cluster is created. The format is:
files:
  - src: <TIER_2_PATH_1>
    dst: <TIER_1_PATH_1>
  - src: <TIER_2_PATH_2>
    dst: <TIER_1_PATH_2>
The following inputs are only relevant when upgrading a previous deployment. Use them only when installing a new deployment to which you wish to transfer data/agents from an old deployment.
- restore - Should the newly installed Cloudify Manager be restored from a previous installation. Must be used in conjunction with some of the other inputs below. See plugin.yaml for more details (default: false)
- backup - Only relevant if restore is set to true! Must be used in conjunction with old_deployment_id (and optionally with snapshot_id). If set to true, a snapshot will be created on the old deployment (based on old_deployment_id and, if passed, on snapshot_id), and it will be used in the restore workflow (default: false)
- snapshot_path - A local (relative to the Tier 2 manager) path to a snapshot that should be used. Mutually exclusive with old_deployment_id and snapshot_id (default: '')
- old_deployment_id - The ID of the previous deployment which was used to control the Tier 1 managers. If the backup workflow was used with default values, there will be a special folder with all the snapshots from the Tier 1 managers. If the backup input is set to false, snapshot_id must be provided as well (default: '')
- snapshot_id - The ID of the snapshot to use. This is only relevant if old_deployment_id is provided as well (default: '')
- transfer_agents - If set to true, an install_new_agents command will be executed after the restore is complete (default: true)
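For example, a hedged sketch of a restore that creates a fresh snapshot on the old deployment (the IDs are placeholders):
inputs:
  restore: true
  backup: true
  old_deployment_id: <OLD_DEPLOYMENT_ID>
  snapshot_id: <SNAPSHOT_ID>     # optional; only used together with old_deployment_id
  transfer_agents: true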