See the TL;DR section in the README
Our CI Working Group has been tasked with demonstrating best practices for integrating, testing, and deploying projects within the CNCF ecosystem across multiple cloud providers.
Help ensure that CNCF projects deploy and run successfully on each supported cloud provider.
A project to continually validate the interoperability of each CNCF project, for every commit on stable and HEAD, for all supported cloud providers with the results published to the cross-cloud public dashboard. The cross-cloud project is composed of the following components:
- Cross-project CI - Project app and e2e test container builder / Project to Cross-cloud CI integration point
  - Builds and registers containerized apps as well as their related e2e tests for deployment. Triggers the cross-cloud CI pipeline.
- Cross-cloud CI - Multi-cloud container deployer / Multi-cloud project test runner
  - Triggers the creation of K8s clusters on cloud providers, deploys containerized apps, and runs upstream project tests, supplying results to the cross-cloud dashboard.
- Multi-cloud provisioner - Cloud end-point provisioner for Kubernetes
  - Supplies conformance-validated Kubernetes end-points for each cloud provider with cloud-specific features enabled.
- Cross-cloud CI Dashboard
  - Provides a high-level view of the interoperability status of CNCF projects for each supported cloud provider.
Currently the cross-cloud K8s end-point provisioner supports:
- AWS
- GCE
- GKE
- Packet
Additional cloud-providers will be added. We welcome pull requests to add new ones. :)
The cross-project CI runs project-specific CI tests.
The cross-cloud CI runs each project's e2e tests on every cloud.
For Kubernetes, cross-cloud runs the K8s conformance tests from upstream Kubernetes on each cloud after the cluster has been provisioned.
It allows a third party to maintain the API level interaction with the cloud providers.
It supports cloud-provider specific features offered by AWS/GCE/GKE/Azure.
It supports templated cloud-init config across all clouds.
It takes an immutable approach to infrastructure management/provisioning, which allows us to iterate very quickly over new deployments on a per-commit basis.
It reduces our dependency on provisioning code that needs to connect back over SSH (Salt, Ansible, etc.).
It supports installing software repos, configuring certificates, writing out files, and service creation.
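The capabilities above map directly onto cloud-init's configuration modules. A minimal sketch of that kind of cloud-init file follows; the paths, unit names, and certificate content are illustrative only, not the project's actual configuration:

```yaml
#cloud-config
# Illustrative sketch -- all paths and names below are hypothetical.
write_files:
  - path: /etc/kubernetes/ssl/ca.pem        # configuring certificates
    permissions: "0644"
    content: |
      -----BEGIN CERTIFICATE-----
      ...
  - path: /etc/systemd/system/kubelet.service  # writing out files
    content: |
      [Service]
      ExecStart=/usr/local/bin/kubelet
runcmd:                                     # service creation
  - systemctl daemon-reload
  - systemctl enable --now kubelet.service
```

Because the same template is rendered per cloud, only the cloud-specific values change while the overall shape of the config stays identical across providers.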
The entire list is cloud dependent since we support per-cloud feature sets.
The base list of dependencies common for each cloud is:
- CNI
- Kubelet
- Etcd
- Kube API server
- Kube controller manager
- Kube scheduler
- Kube proxy
- Containerd/Docker
Cross-cloud uses pinning to set the version being used. This can be any commit, branch, tag, or release.
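As a sketch of what pinning looks like, each component can be locked to a different kind of ref. The key names and values below are hypothetical, not the project's actual schema:

```yaml
# Hypothetical pin file -- names and versions are illustrative only.
kubernetes: v1.9.2     # release tag
etcd: v3.2.13          # release tag
cni: master            # branch (tracks HEAD)
containerd: a4f8d2e    # arbitrary commit SHA
```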
The configuration for each K8s cluster is customized to support cloud-provider specific features.
Then the K8s cluster is configured and provisioned for each cloud with Terraform using cloud-init with the cloud specific configuration. (See “Why Terraform for cross-cloud” for more information on this topic).
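The Terraform side of this step can be sketched roughly as rendering the cloud-specific cloud-init template and handing it to the instance as user data. The snippet below is illustrative only (AWS shown; resource names, variables, and the template path are assumptions, not the project's actual modules):

```hcl
# Illustrative sketch -- names and variables are hypothetical.
data "template_file" "cloud_init" {
  template = file("${path.module}/cloud-init.tpl")
  vars = {
    k8s_version = var.k8s_version   # cloud-specific values injected here
  }
}

resource "aws_instance" "master" {
  ami           = var.master_ami
  instance_type = var.master_instance_type
  user_data     = data.template_file.cloud_init.rendered
}
```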
The Kubelet binary is started by Systemd. Kubelet starts the remaining Kubernetes components from the manifest files which were written to disk during provisioning.
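A rough sketch of such a systemd unit is below; the flags and paths are assumptions for illustration, with `--pod-manifest-path` pointing the Kubelet at the static manifests written to disk during provisioning:

```ini
# Illustrative sketch -- flags and paths are hypothetical.
[Unit]
Description=Kubernetes Kubelet
After=network-online.target

[Service]
ExecStart=/usr/local/bin/kubelet \
  --pod-manifest-path=/etc/kubernetes/manifests
Restart=always

[Install]
WantedBy=multi-user.target
```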
Yes. Resource limiting includes:
- Control over the total number of running pipelines
- Control over the number of nodes used in a Kubernetes cluster
- Control over the number of cloud providers provisioned
- Control over which cloud providers are used
No, it does not use Jenkins or CircleCI.
The current implementation uses GitLab runners.
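The pipeline shape described above (build, provision, deploy, test) could be expressed in a GitLab CI file roughly like the sketch below. Stage and job names are hypothetical, not the project's actual configuration:

```yaml
# Illustrative .gitlab-ci.yml sketch -- names are hypothetical.
stages:
  - build        # build project app + e2e test containers
  - provision    # terraform apply per cloud
  - deploy       # deploy containerized apps to each cluster
  - test         # run upstream e2e/conformance tests

provision-aws:
  stage: provision
  script:
    - terraform apply -auto-approve
```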
- 2 clusters, 8 nodes (total)
- 3 master nodes
- 1 worker node
- We are using 1 worker node because ONAP requires 1, but 3 worker nodes would be ideal in the future
- 64GB of RAM per node
- 16 cores per node
- Full usage, but we can shrink master nodes to use 10 CPU minimum
- Workers need to be 64 GB at least (ONAP needs that minimum to deploy)
- Determine if a terraform module will work for the new cloud environment
- Review how terraform templates were built
- Look at how to handle authentication
- Create sub-folder for the new cloud environment
- For reference, see OpenStack’s PRs:
- The new cloud can run terraform apply to create a cluster,
- Cloud resources are successfully allocated,
- K8s is deployed and installed on the provisioned systems,
- Installing Helm into the K8s cluster succeeds,
- The new cloud can run the K8s conformance tests
- Yes, they have public IPs
- Store public IPs, query DNS records, find end point
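Since the nodes have public IPs, finding the end point amounts to a DNS lookup against the stored records. A minimal Python sketch, where `resolve_endpoint` is a hypothetical helper (shown against `localhost` purely for illustration):

```python
import socket

# Hypothetical helper (not from the project): resolve a cluster
# end-point hostname to its IP addresses via DNS.
def resolve_endpoint(hostname, port=443):
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # the address is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

print(resolve_endpoint("localhost"))
```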
https://github.com/crosscloudci/cross-cloud/blob/master/README.md crosscloudci#111 crosscloudci#128 crosscloudci#134