From 929adc309f52c7cb26161209cff445e3589af042 Mon Sep 17 00:00:00 2001 From: divya-mohan0209 Date: Mon, 7 Mar 2022 17:01:20 +0530 Subject: [PATCH 1/2] Amendments to Cluster provisioning and registering documentation Adding changes to the registering-clusters docs --- .../registered-clusters/_index.md | 7 +++---- .../registered-clusters/_index.md | 13 ++++++------- 2 files changed, 9 insertions(+), 11 deletions(-) diff --git a/content/rancher/v2.5/en/cluster-provisioning/registered-clusters/_index.md b/content/rancher/v2.5/en/cluster-provisioning/registered-clusters/_index.md index 0e1ee65c00..ffe5c3c6c9 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/registered-clusters/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/registered-clusters/_index.md @@ -7,7 +7,7 @@ aliases: - /rancher/v2.x/en/cluster-provisioning/registered-clusters/ --- -The cluster registration feature replaced the feature to import clusters. +As of v2.5, in addition to importing clusters, Rancher allows you to register existing clusters. Registering a cluster ties in more closely with cloud provider APIs, so Rancher can manage the cluster more fully. The control that Rancher has to manage a registered cluster depends on the type of cluster. For details, see [Management Capabilities for Registered Clusters.](#management-capabilities-for-registered-clusters) @@ -121,7 +121,7 @@ $ curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - ### Configuring an Imported EKS Cluster with Terraform -You should define **only** the minimum fields that Rancher requires when importing an EKS cluster with Terraform. This is important as Rancher will overwrite what was in the EKS cluster with any config that the user has provided. +You should define **only** the minimum fields that Rancher requires when importing an EKS cluster with Terraform. This is important because Rancher will overwrite the EKS cluster's existing configuration with any config that the user provides.
>**Warning:** Even a small difference between the current EKS cluster and a user-provided config could have unexpected results. @@ -256,7 +256,7 @@ Also in the K3s documentation, nodes with the worker role are called agent nodes # Debug Logging and Troubleshooting for Registered K3s Clusters -Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two [plans](https://github.com/rancher/system-upgrade-controller#example-upgrade-plan) to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes. +Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two [plans](https://github.com/rancher/system-upgrade-controller#example-upgrade-plan) to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes. To enable debug logging on the system upgrade controller deployment, edit the [configmap](https://github.com/rancher/system-upgrade-controller/blob/50a4c8975543d75f1d76a8290001d87dc298bdb4/manifests/system-upgrade-controller.yaml#L32) to set the debug environment variable to true. Then restart the `system-upgrade-controller` pod. @@ -326,4 +326,3 @@ To annotate a registered cluster, 1. Click **Save.** **Result:** The annotation does not give the capabilities to the cluster, but it does indicate to Rancher that the cluster has those capabilities. 
- diff --git a/content/rancher/v2.6/en/cluster-provisioning/registered-clusters/_index.md b/content/rancher/v2.6/en/cluster-provisioning/registered-clusters/_index.md index 4d9ab0c2a1..cdcbc76e1a 100644 --- a/content/rancher/v2.6/en/cluster-provisioning/registered-clusters/_index.md +++ b/content/rancher/v2.6/en/cluster-provisioning/registered-clusters/_index.md @@ -3,7 +3,7 @@ title: Registering Existing Clusters weight: 6 --- -The cluster registration feature replaced the feature to import clusters. +As of v2.5, in addition to importing clusters, Rancher allows you to register existing clusters. Registering a cluster ties in more closely with cloud provider APIs, so Rancher can manage the cluster more fully. The control that Rancher has to manage a registered cluster depends on the type of cluster. For details, see [Management Capabilities for Registered Clusters.](#management-capabilities-for-registered-clusters) @@ -168,7 +168,7 @@ Also in the K3s documentation, nodes with the worker role are called agent nodes # Debug Logging and Troubleshooting for Registered K3s Clusters -Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two [plans](https://github.com/rancher/system-upgrade-controller#example-upgrade-plan) to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes. +Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two [plans](https://github.com/rancher/system-upgrade-controller#example-upgrade-plan) to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes.
To enable debug logging on the system upgrade controller deployment, edit the [configmap](https://github.com/rancher/system-upgrade-controller/blob/50a4c8975543d75f1d76a8290001d87dc298bdb4/manifests/system-upgrade-controller.yaml#L32) to set the debug environment variable to true. Then restart the `system-upgrade-controller` pod. @@ -196,7 +196,7 @@ Authorized Cluster Endpoint (ACE) support has been added for registered RKE2 and > **Note:** > -> - These steps only need to be performed on the control plane nodes of the downstream cluster. You must configure each control plane node individually. +> - These steps only need to be performed on the control plane nodes of the downstream cluster. You must configure each control plane node individually. > > - The following steps will work on both RKE2 and K3s clusters registered in v2.6.x as well as those registered (or imported) from a previous version of Rancher with an upgrade to v2.6.x. > @@ -223,19 +223,19 @@ Authorized Cluster Endpoint (ACE) support has been added for registered RKE2 and context: user: Default cluster: Default - + 1. Add the following to the config file (or create one if it doesn’t exist); note that the default location is `/etc/rancher/{rke2,k3s}/config.yaml`: kube-apiserver-arg: - authentication-token-webhook-config-file=/var/lib/rancher/{rke2,k3s}/kube-api-authn-webhook.yaml - + 1. Run the following commands: sudo systemctl stop {rke2,k3s}-server sudo systemctl start {rke2,k3s}-server 1. Finally, you **must** go back to the Rancher UI and edit the imported cluster there to complete the ACE enablement. Click on **⋮ > Edit Config**, then click the **Networking** tab under Cluster Configuration. Finally, click the **Enabled** button for **Authorized Endpoint**. Once the ACE is enabled, you then have the option of entering a fully qualified domain name (FQDN) and certificate information. - + >**Note:** The FQDN field is optional, and if one is entered, it should point to the downstream cluster. 
Certificate information is only needed if there is a load balancer in front of the downstream cluster that is using an untrusted certificate. If you have a valid certificate, then nothing needs to be added to the CA Certificates field. # Annotating Registered Clusters @@ -286,4 +286,3 @@ To annotate a registered cluster, 1. Click **Save**. **Result:** The annotation does not give the capabilities to the cluster, but it does indicate to Rancher that the cluster has those capabilities. - From 2670d6b3e8b50c4c3b570e35ae470c695077498b Mon Sep 17 00:00:00 2001 From: divya-mohan0209 Date: Fri, 25 Mar 2022 14:49:03 +0530 Subject: [PATCH 2/2] Adding user personas section to the cluster provisioning docs in v2.5 and v2.6 --- content/rancher/v2.5/en/cluster-provisioning/_index.md | 9 +++++++++ content/rancher/v2.6/en/cluster-provisioning/_index.md | 9 +++++++++ 2 files changed, 18 insertions(+) diff --git a/content/rancher/v2.5/en/cluster-provisioning/_index.md b/content/rancher/v2.5/en/cluster-provisioning/_index.md index 8fe1bc1c85..a1c89b0deb 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/_index.md @@ -19,6 +19,7 @@ This section covers the following topics: +- [User Personas associated with a Kubernetes installation](#user-personas-associated-with-a-kubernetes-installation) - [Cluster Management Capabilities by Cluster Type](#cluster-management-capabilities-by-cluster-type) - [Setting up clusters in a hosted Kubernetes provider](#setting-up-clusters-in-a-hosted-kubernetes-provider) - [Launching Kubernetes with Rancher](#launching-kubernetes-with-rancher) @@ -28,6 +29,14 @@ This section covers the following topics: +### User Personas associated with a Kubernetes installation + +Before diving into the specifics of cluster management capabilities, it is worth stepping back to understand the personas associated with a Kubernetes installation, distinguished by how each interacts with it.
+ +The first persona is the administrator. Administrators focus on managing the overall health of the Kubernetes installation; their typical work also includes configuration and setup tasks. + +Developers are the end users of a Kubernetes installation. They define application resources and use core primitives to build, monitor, and troubleshoot scalable applications and tools, typically with the aim of exposing cloud native applications through Kubernetes. + ### Cluster Management Capabilities by Cluster Type The following table summarizes the options and settings available for each cluster type: diff --git a/content/rancher/v2.6/en/cluster-provisioning/_index.md b/content/rancher/v2.6/en/cluster-provisioning/_index.md index 9e9f44c4c8..5d12976f3f 100644 --- a/content/rancher/v2.6/en/cluster-provisioning/_index.md +++ b/content/rancher/v2.6/en/cluster-provisioning/_index.md @@ -14,6 +14,7 @@ This section covers the following topics: +- [User Personas associated with a Kubernetes installation](#user-personas-associated-with-a-kubernetes-installation) - [Cluster Management Capabilities by Cluster Type](#cluster-management-capabilities-by-cluster-type) - [Setting up clusters in a hosted Kubernetes provider](#setting-up-clusters-in-a-hosted-kubernetes-provider) - [Launching Kubernetes with Rancher](#launching-kubernetes-with-rancher) @@ -24,6 +25,14 @@ This section covers the following topics: +### User Personas associated with a Kubernetes installation + +Before diving into the specifics of cluster management capabilities, it is worth stepping back to understand the personas associated with a Kubernetes installation, distinguished by how each interacts with it. + +The first persona is the administrator. Administrators focus on managing the overall health of the Kubernetes installation; their typical work also includes configuration and setup tasks.
+ +Developers are the end users of a Kubernetes installation. They define application resources and use core primitives to build, monitor, and troubleshoot scalable applications and tools, typically with the aim of exposing cloud native applications through Kubernetes. + ### Cluster Management Capabilities by Cluster Type The following table summarizes the options and settings available for each cluster type:
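The ACE enablement steps in the registered-clusters changes above boil down to appending one kube-apiserver argument to the server's config file and restarting the server. A minimal sketch follows, written against a scratch directory (`CONF_DIR` is an illustrative variable, not from the docs) so it can be tried without root; on a real K3s control plane node the file would be `/etc/rancher/k3s/config.yaml` (or the equivalent path under `/etc/rancher/rke2` for RKE2), and you would restart the server service afterwards as the steps describe:

```shell
# Sketch of the ACE config-file step. CONF_DIR defaults to a scratch
# directory so this can be run without root; on an actual control plane
# node it would be /etc/rancher/k3s (adjust the webhook path for RKE2).
CONF_DIR="${CONF_DIR:-$(mktemp -d)}"
mkdir -p "$CONF_DIR"

# Append the authentication webhook argument described in the ACE steps.
cat >> "$CONF_DIR/config.yaml" <<'EOF'
kube-apiserver-arg:
  - authentication-token-webhook-config-file=/var/lib/rancher/k3s/kube-api-authn-webhook.yaml
EOF

grep -q 'kube-api-authn-webhook.yaml' "$CONF_DIR/config.yaml" && echo "webhook arg written"
```

On an actual node, the final step from the docs still applies: after restarting the server, you must enable the Authorized Endpoint for the cluster in the Rancher UI.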