Home
Welcome to the ovirt-ansible-example wiki!
This wiki walks through the playbooks created in this repository and explains how to set up the environment.
Since the oVirt Ansible modules are not yet part of the official Ansible distribution, we host them in this repository in the library directory. These modules need some helper utilities, which will also become part of Ansible in the near future, but for now we need to use them from an external upstream library called ovirt-ansible. So before using the modules, please install this library as follows:
$ pip install ovirt-ansible
Once our utilities are part of the Ansible project, this step won't be needed. To check the current status, see this PR.
First we need to clone the git repository:
$ git clone [email protected]:machacekondra/ovirt-ansible-example.git
├── inventory
│   └── hosts
├── library
│   ├── ovirt_auth.py
│   ├── ovirt_clusters.py
│   ├── ovirt_datacenters.py
│   ├── ovirt_groups.py
│   ├── ovirt_hosts.py
│   ├── ovirt_networks.py
│   ├── ovirt_nics.py
│   ├── ovirt_permissions.py
│   ├── ovirt_snapshots.py
│   ├── ovirt_storage_domains.py
│   ├── ovirt_templates.py
│   └── ovirt_users.py
├── playbooks
│   ├── destroy_demo.yml
│   ├── library -> ../library
│   ├── ovirt_password.yml
│   └── setup_demo.yml
The structure may be familiar to you if you have ever run any Ansible examples. The library directory contains all the oVirt modules we will be using. The inventory directory contains the hosts file, which we will use as our static inventory file. And finally the playbooks directory contains our example playbooks.
The module documentation is rebuilt with every commit to this repository and can be found here. Of course, once the modules are merged into the official Ansible repository, their documentation will be hosted on the Ansible documentation site. You can follow the status here.
This section will finally walk you through the playbooks and what they do.
First we need to create a vault with the oVirt user password, so we don't keep this password in plaintext. There is a tool that makes this easy for you; just enter this command:
$ ansible-vault create ovirt_password.yml
This will fire up your editor. Create a password variable there with the password of your admin@internal user:
password: MySuperPasswordOfAdminAtInternal
Next it will ask you for a vault password, and then it creates the ovirt_password.yml file containing your vault.
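If you need to change the stored password later, the encrypted file can be edited or inspected with the standard ansible-vault subcommands (both will prompt for the vault password):

```shell
# Re-open the encrypted file in your editor
$ ansible-vault edit ovirt_password.yml

# Or just display the decrypted contents
$ ansible-vault view ovirt_password.yml
```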
Now you need to modify the inventory/hosts file to reflect your needs.
The ovirt section is the group of hosts that will be used to execute the playbook; in my case I use the local machine. The ovirt:vars section defines the variables used in the playbook; you mainly have to change the url to the URL of your oVirt engine.
[ovirt]
localhost ansible_connection=local ansible_user=root
[ovirt:vars]
username=admin@internal
url=https://ondra.local/ovirt-engine/api
ca_file=/etc/pki/ovirt-engine/ca.pem
datacenter=mydatacenter
cluster=mycluster
host=myhost
host_address=10.34.60.215
data_name=data
export_name=export
iso_name=iso
template=rhel7
vm=rhel7
This playbook contains tasks which set up the oVirt environment, assuming that you already have oVirt installed.
The first line of the playbook is YAML magic. The second line describes what the playbook does. The third line specifies the hosts that will be used. For the oVirt deployment it must be a host which has the ovirt-ansible library installed and can reach the oVirt machine.
---
- name: Setup oVirt environment
  hosts: ovirt
The first task includes the password of the oVirt API user and stores it in the password variable.
- name: Include oVirt password
  no_log: true
  include_vars: ovirt_password.yml
The second task obtains an SSO token via the ovirt_auth module, using the variables we defined in the inventory/hosts file. Because we use the Ansible block feature, the SSO token will always be revoked at the end, even if a task fails. The ovirt_auth module creates an ovirt_auth fact containing the SSO token, which we will use later in all the modules. See the ovirt_auth module documentation for more information.
- block:
    - name: Obtain SSO token
      no_log: true
      ovirt_auth:
        url: "{{ url }}"
        username: "{{ username }}"
        password: "{{ password }}"
        ca_file: "{{ ca_file }}"

    ...another tasks...

  always:
    - name: Revoke the SSO token
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"
The module for managing datacenters is called ovirt_datacenters. In this task we create a datacenter with the name we defined in the hosts inventory.
- name: Create datacenter
  ovirt_datacenters:
    auth: "{{ ovirt_auth }}"
    name: "{{ datacenter }}"
    description: mydatacenter
    local: false
    compatibility_version: 4.0
    quota_mode: disabled
The module for managing clusters is called ovirt_clusters. In this task we create a cluster with the name we defined in the hosts inventory.
- name: Create cluster
  ovirt_clusters:
    auth: "{{ ovirt_auth }}"
    datacenter_name: "{{ datacenter }}"
    name: "{{ cluster }}"
    cpu_type: Intel SandyBridge Family
    description: mycluster
    compatibility_version: 4.0
The module for managing hosts is called ovirt_hosts. In this task we add a host with the name and address we defined in the hosts inventory, authenticating with a public key.
- name: Add host using public key
  ovirt_hosts:
    auth: "{{ ovirt_auth }}"
    public_key: true
    cluster: "{{ cluster }}"
    name: "{{ host }}"
    address: "{{ host_address }}"
The next three tasks add the NFS data, export, and ISO storage domains. The module for managing storage domains is called ovirt_storage_domains.
- name: Add data NFS storage domain
  ovirt_storage_domains:
    auth: "{{ ovirt_auth }}"
    name: "{{ data_name }}"
    host: "{{ host }}"
    data_center: "{{ datacenter }}"
    nfs:
      address: 10.34.63.199
      path: /omachace/data

- name: Import export NFS storage domain
  ovirt_storage_domains:
    auth: "{{ ovirt_auth }}"
    name: "{{ export_name }}"
    host: "{{ host }}"
    domain_function: export
    data_center: "{{ datacenter }}"
    nfs:
      address: 10.34.63.199
      path: /omachace/export

- name: Create ISO NFS storage domain
  ovirt_storage_domains:
    auth: "{{ ovirt_auth }}"
    name: "{{ iso_name }}"
    host: "{{ host }}"
    domain_function: iso
    data_center: "{{ datacenter }}"
    nfs:
      address: 10.34.63.199
      path: /omachace/iso
The next task imports the template from the export storage domain. The module for managing templates is called ovirt_templates.
- name: Import template
  ovirt_templates:
    auth: "{{ ovirt_auth }}"
    name: "{{ template }}"
    state: imported
    export_domain: "{{ export_name }}"
    storage_domain: "{{ data_name }}"
    cluster: "{{ cluster }}"
The last task creates and runs a virtual machine from the template we imported. It sets the VM as highly available with 1 GiB of memory, and runs cloud-init, which sets the hostname to mydomain.local, sets the root password to 1234567, and creates a file /tmp/greeting.txt with the content Hello, world!. More information about the ovirt_vms module can be found in the documentation.
- name: Create and run VM from template
  ovirt_vms:
    auth: "{{ ovirt_auth }}"
    name: "{{ vm }}"
    template: "{{ template }}"
    cluster: "{{ cluster }}"
    memory: 1GiB
    high_availability: true
    cloud_init:
      host_name: mydomain.local
      custom_script: |
        write_files:
          - content: |
              Hello, world!
            path: /tmp/greeting.txt
            permissions: '0644'
      user_name: root
      root_password: '1234567'
To execute the playbook, run the following command from the cloned repository directory:
$ ansible-playbook -i inventory/hosts playbooks/setup_demo.yml --ask-vault-pass
Vault password:
PLAY [Setup oVirt environment] *************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [Include oVirt password] **************************************************
ok: [localhost]
.....
It will ask you for the password of the vault and then execute the playbook. Now try to re-run the playbook and see what happens ;). Then try to append a task which adds another host, and see what happens... or try to change the cluster description...
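For instance, an appended second-host task might look like the sketch below; the name second_host and the address 10.34.60.216 are placeholders I made up for illustration, not values from inventory/hosts:

```yaml
- name: Add second host using public key
  ovirt_hosts:
    auth: "{{ ovirt_auth }}"
    public_key: true
    cluster: "{{ cluster }}"
    name: second_host       # placeholder, not defined in inventory/hosts
    address: 10.34.60.216   # placeholder address
```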
To clean up the environment, there is the destroy_demo playbook. Execute it the same way as the setup_demo playbook, and it will clean up your environment.
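The actual cleanup tasks live in playbooks/destroy_demo.yml; as a rough sketch of the idea (assuming the state: absent semantics the modules use, as shown in the token-revocation task above), removing the demo VM could look like:

```yaml
- name: Remove VM
  ovirt_vms:
    auth: "{{ ovirt_auth }}"
    state: absent
    name: "{{ vm }}"
```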