- Overview
- Features
- Get Started
- Step 1: Prerequisites for Creating a Virtual Edge Node
- Step 2: Proxy Settings
- Step 3: User Access Setup
- Step 4: Dependencies
- Step 5: Orchestrator and Provisioning VMs Configuration
- Step 6: Download Orchestrator Certificate
- Step 7: OS Instance and Providers
- Step 8: VMs Creation with Scripts
- Step 9: Enabling VNC Access (Optional)
- (Alternate) VMs Creation with Ansible Scripts
- How Ansible Works
- Controller Node
- Remote Hosts (Managed Nodes)
- The General Workflow
- Installation Instructions
- Step 1: Select a Controller Machine
- Step 2: Update the Inventory and Secret Files
- Step 3: Orchestrator and Provisioning VMs Configuration
- Step 4: Download Orchestrator Certificate
- Step 5: Running Ansible Playbooks to Create the VM
- 5.1. SSH Key Setup
- 5.2. Calculate Maximum VMs
- 5.3. Create VMs
- Step 6: Capture Logs from Remote Hosts
- 6.1. Run the show_vms_data.yml Playbook
- 6.2. Monitor Logs in Real-Time
- Contribute
The VM-Provisioning component serves as the heart of the repository, offering a suite of scripts that automate the setup and pre-configuration of virtual machines. These scripts can be used on physical machines (bare metal) or within existing virtualized environments, utilizing Vagrant and Libvirt APIs.
- Package Installation: Manages the installation of necessary packages and dependencies before VM provisioning, ensuring a seamless setup process.
- Reference Templates: Offers predefined templates to simplify the configuration and provisioning of virtual machines (VMs).
- VM Resource Configuration: Oversees VM resources and specifies the orchestrator URL for downloading the EFI boot file.
- Provisioning Monitoring: Uses socket_login.exp to monitor and track the progress of VM provisioning in real-time.
- Ansible Scripts: Provides automation scripts for configuring and managing VMs, ensuring consistent and efficient deployment. These scripts are also useful to perform scale tests.
This section provides step-by-step instructions to set up the environment required for onboarding and provisioning virtual edge nodes. Important: Intel strongly recommends using script-based installation for creating Virtual Machines to ensure a streamlined and efficient setup process.
To ensure optimal compatibility and performance, the host machine must have either Ubuntu 22.04 or Ubuntu 24.04 LTS installed. The following specifications are recommended for effectively onboarding and provisioning virtual machines (VMs). The capacity to provision multiple VMs will depend on these specifications:
- Operating System: Ubuntu 22.04 LTS or Ubuntu 24.04 LTS (must be installed on the host machine)
- CPU: 16 cores
- Memory: 64 GB RAM
- Storage: 1 TB HDD
Begin by cloning the repository that contains all necessary scripts and configurations for deployment. This step is crucial for accessing the tools required for virtual edge node provisioning:
git clone https://github.com/open-edge-platform/virtual-edge-node.git
To ensure seamless connectivity with the Edge Orchestrator, configure any necessary proxy settings on your system. Below is an example of how to configure these proxy settings:
export http_proxy=http://proxy-dmz.mycorp.com:912
export https_proxy=http://proxy-dmz.mycorp.com:912
export socks_proxy=proxy-dmz.mycorp.com:1080
export no_proxy=.mycorp.com,.local,.internal,.controller.mycorp.corp,.kind-control-plane,.docker.internal,localhost
Ensure that the user has the necessary permissions to perform administrative tasks and manage virtualization and containerization tools. The command below adds the specified user to the important groups, granting the required access rights.
Use the following sample command to add a user named john to the necessary groups:
sudo usermod -aG sudo,kvm,docker,libvirt john
Alternatively, run the provided helper script:
chmod +x scripts/create_new_user.sh
./scripts/create_new_user.sh
Note: After running this command, the user may need to log out and log back in for the changes to take effect.
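After adding the user, membership can be verified with `id -nG <user>`. As an illustration (the helper below is not part of the repository), a small shell function can flag any required groups that are still missing:

```shell
# Hypothetical helper (not part of the repository): given the output of
# `id -nG <user>`, report which of the groups required for VM provisioning
# are still missing.
required_groups="sudo kvm docker libvirt"

missing_groups() {
  # $1: space-separated group list, e.g. "$(id -nG john)"
  local have=" $1 " g missing=""
  for g in $required_groups; do
    case "$have" in
      *" $g "*) ;;                    # already a member
      *) missing="$missing $g" ;;     # not yet a member
    esac
  done
  echo "${missing# }"
}

# Example: a user who is only in sudo and docker so far
missing_groups "sudo docker adm"   # → kvm libvirt
```

An empty result means the user is already in all required groups; remember that a logout/login may be needed before new memberships take effect.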
This section checks the essential dependencies for setting up a KVM-based virtualization environment on the system. These dependencies ensure that your system has the tools and libraries required to efficiently manage and run virtual machines using KVM.
make dependency-check
If your system supports KVM acceleration, a confirmation message will appear: "KVM acceleration is supported on this system." If this feature is not supported, you will be advised to review your BIOS/UEFI settings to ensure that virtualization is enabled.
To customize the setup with your specific environment, open the config file and replace the placeholder values with
the actual values specific to your orchestrator.
- CLUSTER="kind.internal": This variable is the FQDN of the orchestrator.
Specify the resource allocations for virtual machines (VMs) to be provisioned.
- RAM_SIZE=8192: Allocates 8192 MB (8 GB) of RAM to each VM.
- NO_OF_CPUS=4: Assigns 4 CPU cores to each VM.
- SDA_DISK_SIZE="110G": Sets the size of the primary disk (sda); 110 GB is the minimum.
- LIBVIRT_DRIVER="kvm": If KVM is supported, set the driver to kvm; otherwise, set it to qemu.
- USERNAME="PROVISIONED_USERNAME": The username for the newly provisioned Linux system. Replace the placeholder "PROVISIONED_USERNAME" with the actual username.
- PASSWORD="PROVISIONED_PASSWORD": The password for the provisioned Linux user. Replace the placeholder "PROVISIONED_PASSWORD" with the actual password.
Here's an example of how you might update the fields in a config file:
# Cluster FQDN
CLUSTER="kind.internal"
# VM Resources
RAM_SIZE=8192
NO_OF_CPUS=4
SDA_DISK_SIZE="110G"
LIBVIRT_DRIVER="kvm"
# Linux Provisioning
USERNAME="actual_linux_user"
PASSWORD="actual_linux_password"
Before running the IO flow script, export the onboarding username and password:
export ONBOARDING_USERNAME="ONBOARDING_USER"
export ONBOARDING_PASSWORD="ONBOARDING_PASSWORD"
- ONBOARDING_USERNAME="ONBOARDING_USER": The username used to start the IO flow. Replace the placeholder "ONBOARDING_USER" with the actual username.
- ONBOARDING_PASSWORD="ONBOARDING_PASSWORD": The password for the onboarding user. Replace the placeholder "ONBOARDING_PASSWORD" with the actual password.
Non-Interactive Onboarding (NIO) project and user configurations. These configurations are used to automatically register the serial number of each dynamically created virtual edge node.
Before running the NIO flow script, export the project API username and password:
export PROJECT_API_USER="your_project_api_username"
export PROJECT_API_PASSWORD="your_project_api_password"
export PROJECT_NAME="your-project-name"
- PROJECT_NAME="your-project-name": The name of the project associated with the non-interactive onboarding flow configurations.
- PROJECT_API_USER="your_project_api_username": The username for accessing the project API.
- PROJECT_API_PASSWORD="your_project_api_password": The password for the project API user.
Note: If you do not export these credentials, the script will prompt you to enter them when you run the
create_vms.sh script.
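The prompt-on-missing behavior described in the note can be sketched as follows (an illustration of the idea only; the actual script's logic may differ):

```shell
# Sketch (assumption: this mirrors, but is not, the actual script's logic):
# prompt for a credential only when the corresponding variable was not exported.
require_var() {
  # $1: name of the variable to check
  local val
  eval "val=\${$1:-}"
  if [ -z "$val" ]; then
    printf 'Enter %s: ' "$1"
    read -r val
    eval "$1=\$val"
  fi
}

# With the variable already exported, no prompt occurs:
ONBOARDING_USERNAME="user01"
require_var ONBOARDING_USERNAME
echo "$ONBOARDING_USERNAME"   # → user01
```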
Download the Full_server.crt file into the certs directory using wget, as shown below:
source ./config
wget https://"tinkerbell-haproxy.${CLUSTER}"/tink-stack/keys/Full_server.crt --no-check-certificate -O certs/"Full_server.crt"
VM-based provisioning does not support Secure Boot; therefore, "osSecurityFeatureEnable":"false" must be set in the provider configuration. Use the following curl commands to create an OS instance and a Provider instance.
Note: The following commands must be run once, before starting VM provisioning for the first time. These configurations will be used for subsequent VM provisioning.
Important: Before executing the JWT export command below, ensure that all NIO configurations are properly exported as environment variables. This is crucial so the JWT export process has access to the necessary credentials and settings, preventing errors and ensuring smooth execution.
cd vm-provisioning
1. Source the configuration file
source ./config
2. Obtain the JWT token
export JWT_TOKEN=$(curl --location --insecure --request POST "https://keycloak.${CLUSTER}/realms/master/protocol/openid-connect/token" \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=password' \
--data-urlencode 'client_id=system-client' \
--data-urlencode "username=${PROJECT_API_USER}" \
--data-urlencode "password=${PROJECT_API_PASSWORD}" \
--data-urlencode 'scope=openid profile email groups' | jq -r '.access_token')
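As a sanity check before calling the API, the token's payload can be decoded to confirm it contains the expected claims. This is a sketch, not part of the repository; it assumes a standard base64url-encoded JWT and that base64 and jq are installed:

```shell
# Sketch: decode the middle (payload) segment of a JWT. JWTs use base64url
# without padding, so '_'/'-' are mapped back to '/' and '+' and '=' padding
# is restored before decoding.
jwt_payload() {
  local p
  p=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
  printf '%s' "$p" | base64 -d
}

# e.g. jwt_payload "$JWT_TOKEN" | jq '.exp'
```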
3. Sample configuration to create a provider with an OS instance:
curl -X POST "https://api.${CLUSTER}/v1/projects/${PROJECT_NAME}/providers" -H "accept: application/json" \
-H "Content-Type: application/json" -d '{"providerKind":"PROVIDER_KIND_BAREMETAL","name":"infra_onboarding", \
"apiEndpoint":"xyz123", "apiCredentials": ["abc123"], "config": "{\"defaultOs\":\"os-51c4eba0\",\"autoProvision\":true,\"defaultLocalAccount\":\"\",\"osSecurityFeatureEnable\":false}" }' \
-H "Authorization: Bearer ${JWT_TOKEN}"
Currently, VM onboarding and provisioning are supported for OS profiles (Ubuntu/Microvisor) where the security
feature can be set to SECURITY_FEATURE_NONE or SECURITY_FEATURE_SECURE_BOOT_AND_FULL_DISK_ENCRYPTION,
and the selected OS profile must be set as the default in the provider config.
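The escaped "config" string in the curl body above is easy to get wrong by hand. One option (a sketch; jq is already used elsewhere in this guide, and the defaultOs value is taken from the example above) is to let jq do the quoting:

```shell
# Sketch: build the provider "config" field with jq instead of hand-escaping.
# tojson turns the inner object into the escaped JSON string the API expects.
provider_config=$(jq -cn '{defaultOs: "os-51c4eba0",
                           autoProvision: true,
                           defaultLocalAccount: "",
                           osSecurityFeatureEnable: false} | tojson')
echo "$provider_config"
```

The resulting string can then be substituted into the request body in place of the hand-written "config" value.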
This section provides instructions for creating one or more virtual machines (VMs) on an orchestrator using predefined scripts from the host machine where the VMs will be created.
To create a specified number of VMs, execute the following command:
chmod +x ./scripts/create_vm.sh
./scripts/create_vm.sh <NO_OF_VMS>
- NO_OF_VMS: Replace this placeholder with the actual number of VMs you wish to create.
Note: You can press Ctrl+C to cancel the ongoing VM provisioning process, whether it is in progress or completed.
To create VMs using the Non-Interactive Onboarding flow, you have two options:
Use this option to automatically generate random serial numbers for each VM.
chmod +x ./scripts/create_vm.sh
./scripts/create_vm.sh <NO_OF_VMS> -nio
- NO_OF_VMS: Replace this placeholder with the actual number of VMs you wish to create.
- -nio: Enables the Non-Interactive Onboarding flow.
Use this option to specify custom serial numbers for each VM.
chmod +x ./scripts/create_vm.sh
./scripts/create_vm.sh <NO_OF_VMS> -nio -serials=<serials>
- -serials=: Provide a comma-separated list of serial numbers, one per VM. The number of serials must match the number of VMs specified.
Automatically generate random serial numbers:
./scripts/create_vm.sh 3 -nio
Specify custom serial numbers:
./scripts/create_vm.sh 3 -nio -serials=VM112M01,VM112M02,VM113M01
Note: You can press Ctrl+C to cancel the VM provisioning process, whether it is in progress or completed. Any VMs that are already provisioned or still being provisioned will be deleted.
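When -serials is omitted, the script generates serial numbers itself. A hypothetical sketch of such generation (the real script's serial format may differ; bash is assumed for $RANDOM):

```shell
# Hypothetical sketch only -- the actual script's serial format may differ.
# Generates N pseudo-random serial numbers, one per line.
gen_serials() {
  # $1: number of serials to generate
  local i
  for i in $(seq 1 "$1"); do
    printf 'VM%04X%04X\n' "$RANDOM" "$RANDOM"
  done
}

gen_serials 3
```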
Upon successful provisioning with Ubuntu OS, the following log will appear on your terminal.

Upon successful provisioning with MicroVisor, the following log will appear on your terminal.

Run the destroy_vm.sh script to delete VMs that have already been provisioned.
chmod +x ./scripts/destroy_vm.sh
./scripts/destroy_vm.sh
By default, VNC access is not enabled in the Vagrantfile, to ensure security and simplicity. If you wish to enable VNC access to your virtual machines, you can do so by adding the following lines to your Vagrantfile.
Add the following lines within the libvirt block to enable VNC access:
libvirt.graphics_type = "vnc"
libvirt.video_type = 'qxl'
libvirt.graphics_ip = "0.0.0.0" # Optional: specify the VNC listen address
libvirt.graphics_port = "5900" # Optional: specify the VNC port
libvirt.graphics_password = "abc" # Optional: set a password for VNC access
Locate the section in templates/Vagrantfile where the libvirt provider is configured for each virtual machine. You will find a block similar to the one below:
Vagrant.configure("2") do |config|
(1..num_vms).each do |i|
config.vm.define "#{VM_NAME}#{i}" do |vm_config|
vm_config.vm.provider "libvirt" do |libvirt|
# Existing configuration lines
libvirt.title = "orchvm-net-000-vm#{i}"
libvirt.driver = LIBVIRT_DRIVER
libvirt.management_network_name = "orchvm-net-000"
libvirt.memory = RAM_SIZE
libvirt.cpus = NO_OF_CPUS
# Add VNC configuration lines here
end
end
end
end
- libvirt.graphics_type = "vnc": Sets the graphics type to VNC, enabling remote graphical access to the virtual machine.
- libvirt.video_type = 'qxl': Configures the video type to qxl, optimized for virtualized environments.
- libvirt.graphics_ip = "0.0.0.0": Specifies that the VNC server should listen on all network interfaces, allowing remote access from any IP address.
- libvirt.graphics_port = "5900": Sets the port for VNC access. VNC typically uses port 5900 by default.
- libvirt.graphics_password = "abc": Sets a password for VNC access, adding a layer of security to prevent unauthorized access.
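Note that a fixed graphics_port only works for a single VM; when provisioning several, each VM needs a distinct port. A sketch of deriving one port per VM index (an assumption for illustration, not taken from the repository's Vagrantfile):

```shell
# Sketch: map VM index i (1..N) to a unique VNC port, starting at 5900.
# VNC display N conventionally corresponds to TCP port 5900 + N.
vnc_port() {
  # $1: VM index, starting at 1
  echo $(( 5900 + $1 - 1 ))
}

vnc_port 1   # → 5900
vnc_port 3   # → 5902
```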
Ansible automates tasks like provisioning, configuration management, and application deployment across multiple systems. Ansible is agentless, using SSH to communicate with remote hosts without needing additional software on them.
It operates with two main components: the Controller Node and Remote Hosts.
- Controller Node: The machine where Ansible is installed and tasks are executed; it is responsible for running playbooks to manage configurations.
- Remote Hosts (Managed Nodes): The systems managed by Ansible, where tasks are applied via SSH without needing additional software.
- Install Ansible on the controller node using the appropriate package manager.
- Create an Inventory File listing the remote machines to manage.
- Configure SSH Access with key-based authentication for secure communication.
- Run Playbooks to automate tasks like copying SSH keys and launching VMs on the remote hosts.
To set up Ansible, first choose a controller machine to run playbooks.
Install Ansible on the controller node using the install_ansible.sh script available in the ansible directory.
./install_ansible.sh
To make a file executable, modify its permissions using the chmod command. This is particularly useful for scripts or programs that need to be run directly.
chmod +x /path/to/your/file
Example: chmod +x install_ansible.sh
This file lists all remote hosts (managed nodes) Ansible will control, grouping them by categories.
- Define ansible_vm_deploy_scripts: Specify the paths to all directories and files on the controller needed for the VM deployment scripts. Ensure these exist and set the path in inventory.yml.
ansible_vm_deploy_scripts: /home/guest/directory1 /home/guest/directory2 /home/guest/file1
Example: ansible_vm_deploy_scripts: /home/guest/ven/vm-provisioning/scripts /home/guest/ven/vm-provisioning/templates /home/guest/ven/vm-provisioning/certs /home/guest/ven/vm-provisioning/config /home/guest/ven/vm-provisioning/install_packages.sh
- Define ansible_secret_file_path: Stores the passwords for the hosts. The file must be located in the same directory as the Ansible playbooks. Specify its path in inventory.yml:
ansible_secret_file_path: /home/guest/ven/vm-provisioning/ansible/secret.yml
- Define ansible_timeout_for_create_vm_script: The timeout, in seconds, for the deployment of all VMs.
ansible_timeout_for_create_vm_script: 14400
- Define ansible_timeout_for_install_vm_dependencies: The timeout, in seconds, for the installation of packages.
ansible_timeout_for_install_vm_dependencies: 6000
- Define the correct IP address of the remote host.
ansible_host: 10.xx.xx.xx
- Define the correct user on the remote host.
ansible_user: guest
- Default configuration.
ansible_password: "{{ host1_sudo_password }}"
ansible_become: yes
ansible_become_pass: "{{ host1_sudo_password }}"
ansible_become_method: sudo
ansible_become_user: root
IMPORTANT: While using root as the ansible_become_user allows full administrative access on the remote hosts, it can lead to permission issues when creating, reading, or writing files and directories, because files created by tasks running as root may not be accessible to non-root users.
Recommendation: To avoid permission errors, it is recommended to use the actual Ansible user (e.g., guest) for tasks that involve file operations. This ensures that files and directories are created with the correct ownership and permissions, allowing seamless access and management.
- Define the desired copy_path on the remote host where you wish to copy the script. If the specified copy_path does not exist, it will be created automatically during the script execution.
copy_path: "/home/{{ ansible_user }}/ansible_scripts"
- Set the number of VMs to create.
number_of_vms: 3
- Set install_packages: Set this to a non-zero value to install the necessary dependent packages on the remote host for creating a VM.
install_packages: 1
NOTE: Set this to zero if the packages are already installed and you do not want to update them.
- Set nio_flow: true for the Automated Onboarding (NIO) flow, false for the Engaged Onboarding (IO) flow.
nio_flow: false
Example inventory.yml configuration:
ansible_vm_deploy_scripts: /home/guest/ven/vm-provisioning/scripts /home/guest/ven/vm-provisioning/templates /home/guest/ven/vm-provisioning/certs /home/guest/ven/vm-provisioning/config /home/guest/ven/vm-provisioning/install_packages.sh
ansible_secret_file_path: /home/guest/ven/vm-provisioning/ansible/secret.yml
ansible_timeout_for_create_vm_script: 14400
ansible_timeout_for_install_vm_dependencies: 6000
ansible_host: 10.49.76.113
ansible_user: guest
ansible_password: "{{ host1_sudo_password }}"
ansible_become: yes
ansible_become_pass: "{{ host1_sudo_password }}"
ansible_become_method: sudo
ansible_become_user: guest # Actual ansible user
copy_path: "/home/{{ ansible_user }}/ansible_scripts"
number_of_vms: 1
install_packages: 1
nio_flow: false
The secret.yml file is used to securely store sensitive information required by the Ansible playbook for VM
provisioning. This includes credentials and configuration details for both IO and NIO flows. It is crucial to
keep this file secure and avoid exposing it in version control systems.
host1_sudo_password: "" # add sudo password for host1
host2_sudo_password: "" # add sudo password for host2
host3_sudo_password: "" # add sudo password for host3
host4_sudo_password: "" # add sudo password for host4
host5_sudo_password: "" # add sudo password for host5
# IO Configurations
ONBOARDING_USERNAME: "actual_onboard_user"
ONBOARDING_PASSWORD: "actual_onboard_password"
# NIO Configurations
PROJECT_NAME: "your-project-name"
PROJECT_API_USER: "actual_api_user"
PROJECT_API_PASSWORD: "actual_api_password"
In the secret.yml file, define the passwords for each host listed in inventory.yml.
This is critical for Ansible to authenticate with the remote hosts.
- ONBOARDING_USERNAME: "actual_onboard_user": The username used to start the IO flow. Replace the placeholder "actual_onboard_user" with the actual username.
- ONBOARDING_PASSWORD: "actual_onboard_password": The password for the onboarding user. Replace the placeholder "actual_onboard_password" with the actual password.
Non-Interactive Onboarding (NIO) project and user configurations. These configurations are used to automatically register the serial number of each dynamically created virtual edge node.
- PROJECT_NAME: "your-project-name": The name of the project associated with the non-interactive onboarding flow configurations.
- PROJECT_API_USER: "actual_api_user": The username for accessing the API.
- PROJECT_API_PASSWORD: "actual_api_password": The password for the API user.
Important Note: Ensure that both IO and NIO flow configurations are included in the secret.yml file. Missing any of these variables can lead to unexpected behavior or errors during script execution. All required variables must be defined for the playbook to function correctly.
- Define the passwords and encrypt the secret.yml file to keep the sensitive information secure.
ansible-vault encrypt secret.yml
You will be prompted to set a password for the vault. This password will be required whenever the playbooks are run. Important: You must switch to the root user for the above command to work.
If you need to make changes to the secret.yml file later, you can decrypt it using:
ansible-vault decrypt secret.yml
After making your changes, re-encrypt the file.
Follow the above Step 5: Orchestrator and Provisioning VMs Configuration steps to set the config file.
Follow the above Step 6: Download Orchestrator Certificate
steps to download the Full_server.crt certificate file in the certs directory.
In this step, you will execute the Ansible playbooks to automate the creation of the virtual machine. Ensure that all previous configuration steps have been completed before proceeding.
Important: Log in as the root user before running any playbooks. Root access is required to use
the secret.yml file for passwords and to execute the playbooks.
Ensure the controller can connect to the remote machines via SSH. You'll typically want to use SSH keys
to avoid needing to enter passwords for every connection. Before creating the VMs,
you must run the ssh_key_setup.yml playbook.
This playbook copies your SSH key to all the hosts, allowing passwordless SSH login:
ansible-playbook -i inventory.yml ssh_key_setup.yml --ask-vault-pass
This eliminates the need to enter a password when logging into the hosts defined in the inventory.yml file.
NOTE: Use the following command to create the .ssh directory in your home directory if it doesn't already exist:
mkdir -p ~/.ssh
After setting up SSH keys, run the calculate_max_vms.yml playbook. This playbook will help you determine the maximum number
of VMs you can deploy on each host:
ansible-playbook -i inventory.yml calculate_max_vms.yml --ask-vault-pass
If you encounter permission errors, follow the Step 3: User Access Setup section.
Review the logs to see how many VMs can be deployed on each host. Based on this information, you can set the appropriate
number of VMs in the number_of_vms variable for each host.
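The per-host arithmetic presumably resembles the following sketch (an assumption about the playbook's logic; the real calculation may also account for host overhead): each VM needs RAM_SIZE MB of RAM, NO_OF_CPUS cores, and at least 110 GB of disk, and the scarcest resource sets the limit.

```shell
# Sketch of the sizing arithmetic (assumption; not the playbook's actual code).
RAM_SIZE=8192        # MB of RAM per VM, from the config file
NO_OF_CPUS=4         # CPU cores per VM
SDA_DISK_GB=110      # minimum primary-disk size per VM, in GB

max_vms() {
  # $1: host RAM in MB, $2: host CPU cores, $3: free disk in GB
  local by_ram=$(( $1 / RAM_SIZE ))
  local by_cpu=$(( $2 / NO_OF_CPUS ))
  local by_disk=$(( $3 / SDA_DISK_GB ))
  local min=$by_ram
  [ "$by_cpu"  -lt "$min" ] && min=$by_cpu
  [ "$by_disk" -lt "$min" ] && min=$by_disk
  echo "$min"
}

# The recommended host (64 GB RAM, 16 cores, 1 TB disk) is CPU-bound:
max_vms 65536 16 1000   # → 4
```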
After defining the number of VMs, you can execute the create_vms.yml playbook to deploy the specified number of VMs according to the data in the inventory.yml file. The create_vms.yml script carries out the following actions:
- Remove directories on remote hosts.
- Use rsync to synchronize directories from the source to the target machine.
- Install specific packages, update the apt cache, and verify applications.
- Clean up VMs and networks.
- Determine the maximum number of VMs for each host.
- Run the VM creation script for the Engaged or Automated onboarding flow and save the log.
ansible-playbook -i inventory.yml create_vms.yml --ask-vault-pass
This command will prompt you to enter the vault password for the encrypted secret.yml file.
To efficiently gather logs from all remote machines and transfer them to the directory where the create_vms.yml playbook is executed, follow these steps:
Execute the show_vms_data.yml playbook in a separate terminal window immediately after
starting the create_vms.yml playbook. This will ensure that logs are captured concurrently.
ansible-playbook -i inventory.yml show_vms_data.yml --ask-vault-pass
After a playbook is executed on a specific host, the logs are saved in the same directory from which the playbook was run.
While the create_vms.yml playbook is running, you can log into the remote host where the playbook is being executed. Navigate to the directory specified by the copy_path variable in the inventory.yml file to view the create_vms_log.txt log. This log is generated in real-time as VMs are created.
tail -f create_vms_log.txt
To learn how to contribute to the project, see the contributor's guide. The project accepts contributions through pull requests (PRs). PRs must build successfully in the CI pipeline and pass the linter verifications and the unit tests.
There are several convenience make targets to support developer activities; use make help to see the full list. For example:
- lint: runs a list of linting targets
To learn more about internals and software architecture