Installation scripts for OpenShift KNI clusters
KNI clusters consist of:
- OpenShift deployed on physical hardware using OpenShift's bare metal IPI platform based on Metal³.
- OpenShift Container Storage (OCS) based on Rook/Ceph and using the OCS Operator.
- Container Native Virtualization (CNV) based on KubeVirt and using the Hyperconverged Cluster Operator (HCO).
- 4x Dell PowerEdge R640 nodes, each with 2x Mellanox 25G NICs, and 2x Mellanox ethernet switches, all in a 12U rack. 1 node is used as a "provisioning host", while the other 3 nodes are OpenShift control plane machines.
The goal of this repository is to provide scripts and tooling to ease the initial installation and validation of one of these clusters.
The current target is OpenShift 4.2, OCS 4.2, and CNV 2.1. The scripts will need to support both published releases and pre-release versions (#12) of each of these.
To ease installation, a prepared ISO can be used to install the "provisioning host". Using the prepared ISO addresses the following:
- Creates an admin user (#21) with passwordless sudo on the provisioning host.
- Ensures the provisioning host has all required software installed (#22).
- Applies any configuration changes to the provisioning host that are required for the OpenShift installer. For example, creating the `default` libvirt storage pool and the `baremetal` and `provisioning` bridges.
Note: If the prepared ISO is not used, optional scripts that handle the above prerequisites may be executed instead; a rough sketch of the manual equivalents is shown below.
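The sketch below is illustrative only, with hypothetical values (the user name, pool path, and bridge setup are assumptions); the optional scripts in this repository remain the supported way to prepare the host:

```sh
# Admin user with passwordless sudo ("kni" is an assumed user name)
sudo useradd kni
echo "kni ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/kni

# Default libvirt storage pool
sudo virsh pool-define-as default dir --target /var/lib/libvirt/images
sudo virsh pool-start default
sudo virsh pool-autostart default

# baremetal and provisioning bridges (physical interface assignments omitted)
sudo nmcli con add type bridge ifname baremetal con-name baremetal
sudo nmcli con add type bridge ifname provisioning con-name provisioning
```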
The deployment process will use scripts to perform the following:
- Validate any environment requirements (#22) - the bare metal IPI network requirements are a good example.
- Prepare the node information (#19) required for the bare metal IPI install-config.
- Launch the OpenShift installer and wait for the cluster installation to complete.
- Complete some post-install configuration - including machine/node linkage (#14), and configuring a tagged storage VLAN on the interface connected to the `Internal` network on the OpenShift nodes (#4).
- Deploy OCS (#7) and configure a Ceph cluster and StorageClass. Configure the image registry to use an OCS PVC (#5) for image storage (an illustrative command is sketched after this list).
- Deploy CNV. Configure a bridge on the `External` interface on OpenShift nodes (#18) to allow VMs to access this network.
- Temporarily install Ripsaw, carry out some performance tests, and capture the results.
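As one example of the post-install steps above, pointing the image registry at a PVC comes down to a single patch of the registry operator's config. This is only a sketch; the PVC name is a hypothetical placeholder, and the repository's scripts perform this step:

```sh
# Point the internal image registry at an existing OCS-backed PVC.
# "ocs-registry-pvc" is an assumed name; the PVC must already exist in the
# openshift-image-registry namespace.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge \
  -p '{"spec":{"storage":{"pvc":{"claim":"ocs-registry-pvc"}}}}'
```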
The following environment-specific information is required for each installation; on a properly configured and prepared cluster, these items will be discovered:
- A pull secret - used to access OpenShift content - and an SSH key that will be used to authenticate SSH access to the control plane machines.
- The cluster name and the domain name under which it will be available.
- The network CIDR in which the machines will be allocated IPs on the `baremetal` network interface.
- The 3 IP addresses reserved for API, Ingress, and DNS access.
- The BMC IPMI addresses and credentials for the 3 control plane machines.
- If it is detected that the provisioning host is not in sync with a time source, configure the 25G switch as a time source via the DHCP service on the `Storage` network. An optional script to set a time source for the switch will be provided.
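To illustrate how these values fit together, here is a minimal sketch of a bare metal IPI install-config fragment. All values are placeholders, the field names should be checked against the bare metal IPI documentation for the targeted release, and the scripts in this repository generate this file for you:

```sh
# Sketch only: how the environment-specific values above might map into the
# bare metal IPI install-config (placeholder values throughout).
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: kni-cluster
networking:
  machineCIDR: 192.168.111.0/24
platform:
  baremetal:
    apiVIP: 192.168.111.5
    ingressVIP: 192.168.111.4
    dnsVIP: 192.168.111.3
    hosts:
      - name: master-0
        role: master
        bmc:
          address: ipmi://192.168.111.10
          username: admin
          password: password
        bootMACAddress: 52:54:00:00:00:01
pullSecret: '<pull secret JSON>'
sshKey: '<SSH public key>'
EOF
```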
The provisioning host must be a RHEL-8 machine.
In the `OpenShift` subdirectory, create a copy of `config_example.sh` using the current user name as part of the file name, for example `config_<username>.sh`. Once the file has been created, set the required `PULL_SECRET` variable within the shell script.
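For illustration, assuming the current user is `kni` (the exact form of the assignment should follow what is already in `config_example.sh`):

```sh
cd OpenShift
cp config_example.sh config_kni.sh
# Edit config_kni.sh and set the pull secret, e.g.:
# export PULL_SECRET='<contents of your pull secret>'
```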
To install some required packages and configure libvirt and the `provisioning` and `baremetal` bridges, run the following from the top directory:
```
make prep
make OpenShift
```
Note: To increase the log level output of `openshift-install`, the `LOGLEVEL` environment variable can be used:
```
export LOGLEVEL="debug"
make OpenShift
```
The installation of CNV-related operators is managed by a meta-operator called the HyperConverged Cluster Operator (HCO). Deploying with the meta-operator will launch operators for KubeVirt, Containerized Data Importer (CDI), Cluster Network Addons (CNA), Common Templates (SSP), Node Maintenance Operator (NMO), and Node Labeller Operator.
Coming Soon
The HyperConverged Cluster Operator is listed in the Red Hat registry, so you can go the rest of the way using the UI by clicking the OperatorHub tab and selecting the HyperConverged Cluster Operator.
If you want to use the CLI, we provide a script that automates all the steps to the point of having a fully functional CNV deployment.
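As a rough way to check the result from the CLI (the `openshift-cnv` namespace is an assumption; adjust it to match your deployment):

```sh
# List the installed operator versions and their install status
oc get csv -n openshift-cnv
# Confirm the HCO, KubeVirt, CDI, CNA, SSP, and NMO pods are running
oc get pods -n openshift-cnv
```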