diff --git a/content/changelog.md b/content/changelog.md index 926a0e5e9..c3c2324ad 100644 --- a/content/changelog.md +++ b/content/changelog.md @@ -1,5 +1,35 @@ # Changelog +## v0.10.x + +### v0.10.0 + +### Feature + +- Added caching support for MTO Console, using PostgreSQL as the caching layer. +- Added support for custom metrics with Template, Template Instance and Template Group Instance. +- Graph visualization of Tenant and its associated resources on MTO Console. +- Tenant and Admin level authz/authn support within MTO Console and Gateway. +- The MTO Console now shows the cost of different Tenant resources, filterable by date, resource type and additional filters. +- MTO can now create a default Keycloak realm, client and `mto-admin` user for the Console. +- Implemented Cluster Resource Quota for the vanilla Kubernetes platform type. +- Added a dependency on TLS secrets for the MTO Webhook. +- Added a Helm chart for installing MTO on Kubernetes. + - It ships with default Cert Manager manifests for certificates. +- Added support for MTO end-to-end (e2e) tests. + +### Fix + +- Updated CreateMergePatch to MergeMergePatches to address issues caused by losing `resourceVersion` and UID when converting `oldObject` to `newObject`. This prevents problems when the object is edited by another controller. +- When distributing Secret-type resources via Templates, the source Secret's `type` field is now respected, preventing default creation as `Opaque` regardless of the source's actual type. +- Enhanced admin permissions for the tenant role in Vault to include Create, Update and Delete alongside the existing Read and List privileges for the common-shared-secrets path. Viewers now have Read permission. + +### Enhanced + +- Kubernetes is now supported as a platform type, alongside OpenShift. +- MTO's PostgreSQL instance can now be used as persistent storage for Keycloak. 
+- `kube:admin` is now bypassed by default when performing operations; previously, `kube:admin` had to be listed in the respective Tenants to be granted access to their namespaces. + ## v0.9.x ### v0.9.4 @@ -242,7 +272,7 @@ > ⚠️ Known Issues - `caBundle` field in validation webhooks is not being populated for newly added webhooks. A temporary fix is to edit the validation webhook configuration manifest without the `caBundle` field added in any webhook, so OpenShift can add it to all fields simultaneously - - Edit the `ValidatingWebhookConfiguration` `stakater-tenant-operator-validating-webhook-configuration` by removing all the `caBundle` fields of all webhooks + - Edit the `ValidatingWebhookConfiguration` `multi-tenant-operator-validating-webhook-configuration` by removing all the `caBundle` fields of all webhooks - Save the manifest - Verify that all `caBundle` fields have been populated - Restart Tenant-Operator pods diff --git a/content/explanation/auth.md b/content/explanation/auth.md new file mode 100644 index 000000000..a6d388437 --- /dev/null +++ b/content/explanation/auth.md @@ -0,0 +1,37 @@ +# Authentication and Authorization in MTO Console + +## Keycloak for Authentication + +MTO Console incorporates Keycloak, a leading authentication solution, to manage user access securely and efficiently. Keycloak is provisioned automatically by our controllers, setting up a new realm, client, and a default user named `mto`. + +### Benefits + +- Industry Standard: Offers robust, reliable authentication in line with industry standards. +- Integration with Existing Systems: Enables easy linkage with existing Active Directories or SSO systems, avoiding the need for redundant user management. +- Administrative Control: Grants administrators full authority over user access to the console, enhancing security and operational integrity. 
+ +## PostgreSQL as Persistent Storage for Keycloak + +MTO Console leverages PostgreSQL as the persistent storage solution for Keycloak, enhancing the reliability and flexibility of the authentication system. + +It offers benefits such as enhanced data reliability, easy data export and import. + +### Benefits + +- Persistent Data Storage: By using PostgreSQL, Keycloak's data, including realms, clients, and user information, is preserved even in the event of a pod restart. This ensures continuous availability and stability of the authentication system. +- Data Exportability: Customers can easily export Keycloak configurations and data from the PostgreSQL database. +- Transferability Across Environments: The exported data can be conveniently imported into another cluster or Keycloak instance, facilitating smooth transitions and backups. +- No Data Loss: Ensures that critical authentication data is not lost during system updates or maintenance. +- Operational Flexibility: Provides customers with greater control over their authentication data, enabling them to manage and migrate their configurations as needed. + +## Built-in module for Authorization + +The MTO Console is equipped with an authorization module, designed to manage access rights intelligently and securely. + +### Benefits + +- User and Tenant Based: Authorization decisions are made based on the user's membership in specific tenants, ensuring appropriate access control. +- Role-Specific Access: The module considers the roles assigned to users, granting permissions accordingly to maintain operational integrity. +- Elevated Privileges for Admins: Users identified as administrators or members of the clusterAdminGroups are granted comprehensive permissions across the console. +- Database Caching: Authorization decisions are cached in the database, reducing reliance on the Kubernetes API server. 
+- Faster, Reliable Access: This caching mechanism ensures quicker and more reliable access for users, enhancing the overall responsiveness of the MTO Console. diff --git a/content/explanation/console.md b/content/explanation/console.md new file mode 100644 index 000000000..5e7bbee64 --- /dev/null +++ b/content/explanation/console.md @@ -0,0 +1,88 @@ +# MTO Console + +## Introduction + +The Multi Tenant Operator (MTO) Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources. + +## Dashboard Overview + +The dashboard serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, and quotas. It is designed to provide a quick summary/snapshot of MTO resources' status. Additionally, it includes a Showback graph that offers a quick glance at the seven-day cost trends associated with the namespaces/tenants based on the logged-in user. + +![dashboard](../images/dashboard.png) + +### Tenants + +Here, admins have a bird's-eye view of all tenants, with the ability to delve into each one for detailed examination and management. This section is pivotal for observing the distribution and organization of tenants within the system. More information on each tenant can be accessed by clicking the view option against each tenant name. + +![tenants](../images/tenants.png) + +### Namespaces + +Users can view all the namespaces that belong to their tenant, offering a comprehensive perspective of the accessible namespaces for tenant members. This section also provides options for detailed exploration. + +![namespaces](../images/namespaces.png) + +### Quotas + +MTO's Quotas are crucial for managing resource allocation. 
In this section, administrators can assess the quotas assigned to each tenant, ensuring a balanced distribution of resources in line with operational requirements. + +![quotas](../images/quotas.png) + +### Templates + +The Templates section acts as a repository for standardized resource deployment patterns, which can be utilized to maintain consistency and reliability across tenant environments. A few examples include provisioning specific k8s manifests, helm charts, secrets or configmaps across a set of namespaces. + +![templates](../images/templates.png) +![templateGroupInstances](../images/templateGroupInstances.png) + +### Showback + +The Showback feature is an essential financial governance tool, providing detailed insights into the cost implications of resource usage by tenant, namespace, or other filters. This facilitates transparent cost management and an internal chargeback or showback process, enabling informed decision-making regarding resource consumption and budgeting. + +![showback](../images/showback.png) + +## User Roles and Permissions + +### Administrators + +Administrators have overarching access to the console, including the ability to view all namespaces and tenants. They have exclusive access to the IntegrationConfig, allowing them to view all the settings and integrations. + +![integrationConfig](../images/integrationConfig.png) + +### Tenant Users + +Regular tenant users can monitor and manage their allocated resources. However, they do not have access to the IntegrationConfig and cannot view resources across different tenants, ensuring data privacy and operational integrity. + +## Live YAML Configuration and Graph View + +In the MTO Console, each resource section is equipped with a "View" button, revealing the live YAML configuration for complete information on the resource. For Tenant resources, a supplementary "Graph" option is available, illustrating the relationships and dependencies of all resources under a Tenant. 
This dual-view approach empowers users with both the detailed control of YAML and the holistic oversight of the graph view. + +You can find more details on graph visualization here: [Graph Visualization](../reference-guides/graph-visualization.md) + +![tenants-graph](../images/tenants_graph.png) + +## Caching and Database + +MTO integrates a dedicated database to streamline resource management. All resources managed by MTO are stored in a Postgres database, enabling the MTO Console to retrieve them efficiently for presentation. + +The implementation of this feature is facilitated by the Bootstrap controller, streamlining the deployment process. This controller creates the PostgreSQL Database, establishes a service for inter-pod communication, and generates a secret to ensure secure connectivity to the database. + +Furthermore, the introduction of a dedicated cache layer ensures that there is no added burden on the kube API server when responding to MTO Console requests. This enhancement not only improves response times but also contributes to a more efficient and responsive resource management system. + +## Authentication and Authorization + +MTO Console ensures secure access control using a robust combination of Keycloak for authentication and a custom-built authorization module. + +### Keycloak Integration + +Keycloak, an industry-standard authentication tool, is integrated for secure user login and management. It supports seamless integration with existing ADs or SSO systems and grants administrators complete control over user access. + +### Custom Authorization Module + +Complementing Keycloak, our custom authorization module intelligently controls access based on user roles and their association with tenants. Special checks are in place for admin users, granting them comprehensive permissions. 
+ +For more details on Keycloak's integration, PostgreSQL as persistent storage, and the intricacies of our authorization module, please visit [here](./auth.md). + +## Conclusion + +The MTO Console is engineered to simplify complex multi-tenant management. The current iteration focuses on providing comprehensive visibility. Future updates could include direct CUD (Create/Update/Delete) capabilities from the dashboard, enhancing the console’s functionality. The Showback feature remains a standout, offering critical cost tracking and analysis. The delineation of roles between administrators and tenant users ensures a secure and organized operational framework. diff --git a/content/faq.md b/content/faq.md index 0435e5e47..799ea786c 100644 --- a/content/faq.md +++ b/content/faq.md @@ -1,13 +1,48 @@ # FAQs -## Q. Error received while performing Create, Update or Delete action on namespace `"Cannot CREATE namespace test-john without label stakater.com/tenant"` +## Namespace Admission Webhook -**A.** Error occurs when a user is trying to perform create, update, delete action on a namespace without the required `stakater.com/tenant` label. This label is used by the operator to see that authorized users can perform that action on the namespace. Just add the label with the tenant name so that MTO knows which tenant the namespace belongs to, and who is authorized to perform create/update/delete operations. For more details please refer to [Namespace use-case](./tutorials/tenant/creating-namespaces.md). +### Q. Error received while performing Create, Update or Delete action on Namespace -## Q. How do I deploy cluster-scoped resource via the ArgoCD integration? +```terminal +Cannot CREATE namespace test-john without label stakater.com/tenant +``` -**A.** Multi-Tenant Operator's ArgoCD Integration allows configuration of which cluster-scoped resources can be deployed, both globally and on a per-tenant basis. 
For a global allow-list that applies to all tenants, you can add both resource `group` and `kind` to the [IntegrationConfig's](./how-to-guides/integration-config.md#argocd) `spec.argocd.clusterResourceWhitelist` field. Alternatively, you can set this up on a tenant level by configuring the same details within a [Tenant's](./how-to-guides/tenant.md) `spec.argocd.appProject.clusterResourceWhitelist` field. For more details, check out the [ArgoCD integration use cases](./tutorials/argocd/enabling-multi-tenancy-argocd.md#allow-argocd-to-sync-certain-cluster-wide-resources) +**Answer.** This error occurs when a user tries to perform a create, update or delete action on a namespace without the required `stakater.com/tenant` label. This label is used by the operator to verify that only authorized users can perform actions on the namespace. Just add the label with the tenant name so that MTO knows which tenant the namespace belongs to, and who is authorized to perform create/update/delete operations. For more details, please refer to [Namespace use-case](./tutorials/tenant/creating-namespaces.md). + +### Q. Error received while performing Create, Update or Delete action on OpenShift Project + +```terminal +Cannot CREATE namespace testing without label stakater.com/tenant. User: system:serviceaccount:openshift-apiserver:openshift-apiserver-sa +``` + +**Answer.** This error occurs because Tenant members are not allowed to perform operations on OpenShift Projects directly. Whenever an operation is performed on a project, `openshift-apiserver-sa` makes the same request against the corresponding namespace. That's why the user sees the `openshift-apiserver-sa` Service Account instead of their own user in the error message. + +The fix is to try the same operation on the namespace manifest instead. + +### Q. 
Error received while doing "kubectl apply -f namespace.yaml" + +```terminal +Error from server (Forbidden): error when retrieving current configuration of: +Resource: "/v1, Resource=namespaces", GroupVersionKind: "/v1, Kind=Namespace" +Name: "ns1", Namespace: "" +from server for: "namespace.yaml": namespaces "ns1" is forbidden: User "muneeb" cannot get resource "namespaces" in API group "" in the namespace "ns1" +``` + +**Answer.** Tenant members will not be able to use `kubectl apply` because `apply` first gets all the instances of that resource, in this case namespaces, and then does the required operation on the selected resource. To maintain tenancy, tenant members do not have access to get or list all the namespaces. + +The fix is to create namespaces with `kubectl create` instead. + +## MTO - ArgoCD Integration + +### Q. How do I deploy cluster-scoped resources via the ArgoCD integration? + +**Answer.** Multi-Tenant Operator's ArgoCD Integration allows configuration of which cluster-scoped resources can be deployed, both globally and on a per-tenant basis. For a global allow-list that applies to all tenants, you can add both resource `group` and `kind` to the [IntegrationConfig's](./how-to-guides/integration-config.md#argocd) `spec.argocd.clusterResourceWhitelist` field. Alternatively, you can set this up on a tenant level by configuring the same details within a [Tenant's](./how-to-guides/tenant.md) `spec.argocd.appProject.clusterResourceWhitelist` field. For more details, check out the [ArgoCD integration use cases](./tutorials/argocd/enabling-multi-tenancy-argocd.md#allow-argocd-to-sync-certain-cluster-wide-resources) ## Q. InvalidSpecError: application repo \ is not permitted in project \ -**A.** The above error can occur if the ArgoCD Application is syncing from a source that is not allowed the referenced AppProject. 
To solve this, verify that you have referred to the correct project in the given ArgoCD Application, and that the repoURL used for the Application's source is valid. If the error still appears, you can add the URL to the relevant Tenant's `spec.argocd.sourceRepos` array. +**Answer.** The above error can occur if the ArgoCD Application is syncing from a source that is not allowed by the referenced AppProject. To solve this, verify that you have referred to the correct project in the given ArgoCD Application, and that the repoURL used for the Application's source is valid. If the error still appears, you can add the URL to the relevant Tenant's `spec.argocd.sourceRepos` array. + +## Q. Why are there `mto-showback-*` pods failing in my cluster? + +**Answer.** The `mto-showback-*` pods are used to calculate the cost of the resources used by each tenant. These pods are created by the Multi-Tenant Operator and are scheduled to run every 10 minutes. If the pods are failing, it is likely that the operators necessary for cost calculation are not present in the cluster. To solve this, you can navigate to `Operators` -> `Installed Operators` in the OpenShift console and check if the MTO-OpenCost and MTO-Prometheus operators are installed. If they are in a pending state, you can manually approve them to install them in the cluster. diff --git a/content/features.md b/content/features.md index 26a859a29..56427f52a 100644 --- a/content/features.md +++ b/content/features.md @@ -86,3 +86,19 @@ With Multi Tenant Operator teams can share a single cluster with multiple teams, ## Native Experience Multi Tenant Operator provides multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries. + +## Custom Metrics Support + +Multi Tenant Operator now supports custom metrics for templates, template instances and template group instances. 
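Once the operator is running, a quick way to spot-check these metrics is to query the controller's metrics endpoint directly. A sketch only — the deployment name, namespace, and metrics port below are assumptions, so adjust them to match your installation:

```terminal
kubectl port-forward -n multi-tenant-operator deploy/tenant-operator-controller-manager 8080:8080
curl http://localhost:8080/metrics | grep template
```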
+ +Exposed metrics include the number of resources deployed, the number of resources failed, and the total number of resources deployed for template instances and template group instances. These metrics can be used to monitor the usage of templates and template instances in the cluster. + +Additionally, this allows us to expose other performance metrics listed [here](https://book.kubebuilder.io/reference/metrics-reference.html). + +More details on [Enabling Custom Metrics](./reference-guides/custom-metrics.md) + +## Graph Visualization for Tenants + +Multi Tenant Operator now supports graph visualization for tenants on the MTO Console. Effortlessly associate tenants with their respective resources using the enhanced graph feature on the MTO Console. This dynamic graph illustrates the relationships between tenants and the resources they create, encompassing both MTO's proprietary resources and native Kubernetes/OpenShift elements. + +More details on [Graph Visualization](./reference-guides/graph-visualization.md) diff --git a/content/how-to-guides/integration-config.md b/content/how-to-guides/integration-config.md index 661c80d3d..30e9cb580 100644 --- a/content/how-to-guides/integration-config.md +++ b/content/how-to-guides/integration-config.md @@ -7,7 +7,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: tenantRoles: default: @@ -87,14 +87,11 @@ spec: namespace: openshift-auth vault: enabled: true - endpoint: - url: https://vault.apps.prod.abcdefghi.kubeapp.cloud/ - secretReference: - name: vault-root-token - namespace: vault + accessorPath: oidc/ + address: 'https://vault.apps.prod.abcdefghi.kubeapp.cloud/' + roleName: mto sso: clientName: vault - accessorID: ``` Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator. 
@@ -348,21 +345,56 @@ If `vault` is configured on a cluster, then Vault configuration can be enabled. ```yaml Vault: enabled: true - endpoint: - secretReference: - name: vault-root-token - namespace: vault - url: >- - https://vault.apps.prod.abcdefghi.kubeapp.cloud/ + accessorPath: oidc/ + address: 'https://vault.apps.prod.abcdefghi.kubeapp.cloud/' + roleName: mto sso: - accessorID: clientName: vault ``` -If enabled, then admins have to provide secret, URL and SSO accessorID of Vault. +If enabled, then admins have to provide the following details: -- `secretReference.name:` Will contain the name of the secret. -- `secretReference.namespace:` Will contain the namespace of the secret. -- `url:` Will contain the URL of Vault. -- `sso.accessorID:` Will contain the SSO accessorID. -- `sso.clientName:` Will contain the client name. +- `accessorPath:` Accessor path within Vault to fetch the SSO accessorID. +- `address:` Valid Vault address reachable within the cluster. +- `roleName:` Vault's Kubernetes authentication role. +- `sso.clientName:` SSO client name. 
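As a sketch, enabling Kubernetes authentication in Vault and creating the role referenced by `roleName` could look like the following. The auth mount path, bound service account names, and policy name here are illustrative assumptions, not MTO requirements:

```terminal
vault auth enable kubernetes

vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_SERVICE_HOST:443"

vault write auth/kubernetes/role/mto \
    bound_service_account_names=tenant-operator \
    bound_service_account_namespaces=multi-tenant-operator \
    policies=mto-policy \
    ttl=1h
```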
+ +For more details around enabling Kubernetes auth in Vault, visit [here](https://developer.hashicorp.com/vault/docs/auth/kubernetes) + +The role created within Vault for Kubernetes authentication should have the following permissions: + +```yaml +path "secret/*" { + capabilities = ["create", "read", "update", "patch", "delete", "list"] +} +path "sys/mounts" { + capabilities = ["read", "list"] +} +path "sys/mounts/*" { + capabilities = ["create", "read", "update", "patch", "delete", "list"] +} +path "managed-addons/*" { + capabilities = ["read", "list"] +} +path "auth/kubernetes/role/*" { + capabilities = ["create", "read", "update", "patch", "delete", "list"] +} +path "sys/auth" { + capabilities = ["read", "list"] +} +path "sys/policies/*" { + capabilities = ["create", "read", "update", "patch", "delete", "list"] +} +path "identity/group" { + capabilities = ["create", "read", "update", "patch", "delete", "list"] +} +path "identity/group-alias" { + capabilities = ["create", "read", "update", "patch", "delete", "list"] +} +path "identity/group/name/*" { + capabilities = ["read", "list"] +} +path "identity/group/id/*" { + capabilities = ["create", "read", "update", "patch", "delete", "list"] +} +``` diff --git a/content/how-to-guides/keycloak.md b/content/how-to-guides/keycloak.md new file mode 100644 index 000000000..5a6ada3a0 --- /dev/null +++ b/content/how-to-guides/keycloak.md @@ -0,0 +1,74 @@ +# Setting Up User Access in Keycloak for MTO Console + +This guide walks you through the process of adding new users in Keycloak and granting them access to Multi Tenant Operator (MTO) Console. + +## Accessing Keycloak Console + +* Log in to the OpenShift Console. +* Go to the 'Routes' section within the 'multi-tenant-operator' namespace. + +![routes](../images/routes.png) + +* Click on the Keycloak console link provided in the Routes. +* Login using the admin credentials (default: admin/admin). 
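The Keycloak route can also be looked up from the CLI instead of the web console; for example (assuming MTO is installed in the default `multi-tenant-operator` namespace):

```terminal
oc get routes -n multi-tenant-operator
```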
+ +## Adding new Users in Keycloak + +* In the Keycloak console, switch to the `mto` realm. + +![realm](../images/realm.png) + +* Go to the `Users` section in the `mto` realm. +* Follow the prompts to add a new user. + +![keycloak-new-user](../images/keycloak-new-user.png) + +* Once you add a new user, here is how the Users section will look: + +![keycloak-users](../images/keycloak-users.png) + +## Accessing MTO Console + +* Go back to the OpenShift Console, navigate to the Routes section, and get the URL for the MTO Console. +* Open the MTO Console URL and log in with the newly added user credentials. + +Now, at this point, a user will be authenticated to the MTO Console. But in order to get access to view any Tenant resources, the user will need to be part of a Tenant. + +## Granting Access to Tenant Resources + +* Open Tenant CR: In the OpenShift cluster, locate and open the Tenant Custom Resource (CR) that you wish to give access to. You will see a YAML file similar to the following example: + +```yaml +apiVersion: tenantoperator.stakater.com/v1beta2 +kind: Tenant +metadata: + name: arsenal +spec: + quota: small + owners: + users: + - gabriel@arsenal.com + groups: + - arsenal + editors: + users: + - hakimi@arsenal.com + viewers: + users: + - neymar@arsenal.com +``` + +* Edit Tenant CR: Add the newly created user's email to the appropriate section (owners, editors, viewers) in the Tenant CR. For example, if you have created a user `john@arsenal.com` and wish to add them as an editor, the edited section would look like this: + +```yaml +editors: + users: + - hakimi@arsenal.com + - john@arsenal.com +``` + +* Save Changes: Save and apply the changes to the Tenant CR. + +## Verifying Access + +Once the above steps are completed, you should now be able to access the MTO Console and see the `arsenal` Tenant's details, along with all the other resources, such as namespaces and templates, that John has access to. 
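The Tenant edit above can also be applied from the CLI. For example, a JSON patch that appends the new editor — a sketch that assumes the Tenant CR is cluster-scoped and that `tenant` resolves to the Tenant resource on your cluster:

```terminal
kubectl patch tenant arsenal --type=json \
  -p '[{"op": "add", "path": "/spec/editors/users/-", "value": "john@arsenal.com"}]'
```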
diff --git a/content/how-to-guides/offboarding/uninstalling.md index 2ba4da5ab..593667d71 100644 --- a/content/how-to-guides/offboarding/uninstalling.md +++ b/content/how-to-guides/offboarding/uninstalling.md @@ -4,6 +4,10 @@ You can uninstall MTO by following these steps: * Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set `spec.onDelete.cleanNamespaces` to `false` for all those tenants whose namespaces you want to retain, and `spec.onDelete.cleanAppProject` to `false` for all those tenants whose AppProject you want to retain. For more details check out [onDelete](../../tutorials/tenant/deleting-tenant.md#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted) +* If you have enabled the console, you will have to disable it first by navigating to `Search` -> `IntegrationConfig` -> `tenant-operator-config` and setting `spec.provision.console` and `spec.provision.showback` to `false`. + +* Remove the IntegrationConfig CR from the cluster by navigating to `Search` -> `IntegrationConfig` -> `tenant-operator-config` and selecting `Delete` from the actions dropdown. 
+ * After making the required changes open OpenShift console and click on `Operators`, followed by `Installed Operators` from the side menu ![image](../../images/installed-operators.png) diff --git a/content/images/dashboard.png b/content/images/dashboard.png new file mode 100644 index 000000000..b860c0c19 Binary files /dev/null and b/content/images/dashboard.png differ diff --git a/content/images/graph-1.png b/content/images/graph-1.png new file mode 100644 index 000000000..ccafdb72d Binary files /dev/null and b/content/images/graph-1.png differ diff --git a/content/images/graph-2.png b/content/images/graph-2.png new file mode 100644 index 000000000..f42686602 Binary files /dev/null and b/content/images/graph-2.png differ diff --git a/content/images/graph-3.png b/content/images/graph-3.png new file mode 100644 index 000000000..2b301defb Binary files /dev/null and b/content/images/graph-3.png differ diff --git a/content/images/integrationConfig.png b/content/images/integrationConfig.png new file mode 100644 index 000000000..104e10755 Binary files /dev/null and b/content/images/integrationConfig.png differ diff --git a/content/images/keycloak-new-user.png b/content/images/keycloak-new-user.png new file mode 100644 index 000000000..642500a50 Binary files /dev/null and b/content/images/keycloak-new-user.png differ diff --git a/content/images/keycloak-users.png b/content/images/keycloak-users.png new file mode 100644 index 000000000..ffce84a84 Binary files /dev/null and b/content/images/keycloak-users.png differ diff --git a/content/images/manual-approve-1.png b/content/images/manual-approve-1.png new file mode 100644 index 000000000..c1e014ee2 Binary files /dev/null and b/content/images/manual-approve-1.png differ diff --git a/content/images/manual-approve-2.png b/content/images/manual-approve-2.png new file mode 100644 index 000000000..14a128918 Binary files /dev/null and b/content/images/manual-approve-2.png differ diff --git a/content/images/manual-approve-3.png 
b/content/images/manual-approve-3.png new file mode 100644 index 000000000..84c480d87 Binary files /dev/null and b/content/images/manual-approve-3.png differ diff --git a/content/images/manual-approve-4.png b/content/images/manual-approve-4.png new file mode 100644 index 000000000..f5c4b2715 Binary files /dev/null and b/content/images/manual-approve-4.png differ diff --git a/content/images/namespaces.png b/content/images/namespaces.png new file mode 100644 index 000000000..25255d1e3 Binary files /dev/null and b/content/images/namespaces.png differ diff --git a/content/images/quotas.png b/content/images/quotas.png new file mode 100644 index 000000000..4af3e1191 Binary files /dev/null and b/content/images/quotas.png differ diff --git a/content/images/realm.png b/content/images/realm.png new file mode 100644 index 000000000..fb2ba5760 Binary files /dev/null and b/content/images/realm.png differ diff --git a/content/images/routes.png b/content/images/routes.png new file mode 100644 index 000000000..69fd20891 Binary files /dev/null and b/content/images/routes.png differ diff --git a/content/images/showback.png b/content/images/showback.png new file mode 100644 index 000000000..75daa814b Binary files /dev/null and b/content/images/showback.png differ diff --git a/content/images/templateGroupInstances.png b/content/images/templateGroupInstances.png new file mode 100644 index 000000000..e893af8b1 Binary files /dev/null and b/content/images/templateGroupInstances.png differ diff --git a/content/images/templates.png b/content/images/templates.png new file mode 100644 index 000000000..d07e98e2a Binary files /dev/null and b/content/images/templates.png differ diff --git a/content/images/tenants.png b/content/images/tenants.png new file mode 100644 index 000000000..e5ea0ec07 Binary files /dev/null and b/content/images/tenants.png differ diff --git a/content/images/tenantsAdmin.png b/content/images/tenantsAdmin.png new file mode 100644 index 000000000..2be3a5065 Binary files 
/dev/null and b/content/images/tenantsAdmin.png differ diff --git a/content/images/tenants_graph.png b/content/images/tenants_graph.png new file mode 100644 index 000000000..b3dccfdad Binary files /dev/null and b/content/images/tenants_graph.png differ diff --git a/content/images/tenants_yaml.png b/content/images/tenants_yaml.png new file mode 100644 index 000000000..c5601cabd Binary files /dev/null and b/content/images/tenants_yaml.png differ diff --git a/content/index.md b/content/index.md index 6d84f2d34..2e797b73d 100644 --- a/content/index.md +++ b/content/index.md @@ -48,24 +48,12 @@ Multi Tenant Operator is not only providing strong Multi Tenancy for the OpenShi More details on [ArgoCD Multitenancy](./tutorials/argocd/enabling-multi-tenancy-argocd.md) -## Mattermost Multitenancy - -Multi Tenant Operator can manage Mattermost to create Teams for tenant users. All tenant users get a unique team and a list of predefined channels gets created. When a user is removed from the tenant, the user is also removed from the Mattermost team corresponding to tenant. - -More details on [Mattermost](./reference-guides/mattermost.md) - -## Cost/Resource Optimization +## Resource Management Multi Tenant Operator provides a mechanism for defining Resource Quotas at the tenant scope, meaning all namespaces belonging to a particular tenant share the defined quota, which is why you are able to safely enable dev teams to self serve their namespaces whilst being confident that they can only use the resources allocated based on budget and business needs. 
More details on [Quota](./how-to-guides/quota.md) -## Remote Development Namespaces - -Multi Tenant Operator can be configured to automatically provision a namespace in the cluster for every member of the specific tenant, that will also be preloaded with any selected templates and consume the same pool of resources from the tenants quota creating safe remote dev namespaces that teams can use as scratch namespace for rapid prototyping and development. So, every developer gets a Kubernetes-based cloud development environment that feel like working on localhost. - -More details on [Sandboxes](./tutorials/tenant/create-sandbox.md) - ## Templates and Template distribution Multi Tenant Operator allows admins/users to define templates for namespaces, so that others can instantiate these templates to provision namespaces with batteries loaded. A template could pre-populate a namespace for certain use cases or with basic tooling required. Templates allow you to define Kubernetes manifests, Helm chart and more to be applied when the template is used to create a namespace. @@ -81,12 +69,37 @@ Common use cases for namespace templates may be: More details on [Distributing Template Resources](./reference-guides/deploying-templates.md) +## MTO Console + +Multi Tenant Operator Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources. It serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, templates and quotas. It is designed to provide a quick summary/snapshot of MTO's status and facilitates easier interaction with various resources such as tenants, namespaces, templates, and quotas. 
+ +More details on [Console](./explanation/console.md) + +## Showback + +The showback functionality in Multi Tenant Operator (MTO) Console is a significant feature designed to enhance the management of resources and costs in multi-tenant Kubernetes environments. This feature focuses on accurately tracking the resource usage of each tenant and/or namespace, enabling organizations to monitor and optimize their expenditures. +Furthermore, this functionality supports financial planning and budgeting by offering a clear view of operational costs associated with each tenant. This can be particularly beneficial for organizations that charge back internal departments or external clients based on resource usage, ensuring that billing is fair and reflective of actual consumption. + +More details on [Showback](./explanation/console.md#showback) + ## Hibernation Multi Tenant Operator can downscale Deployments and StatefulSets in a tenant's Namespace according to a defined sleep schedule. The Deployments and StatefulSets are brought back to their required replicas according to the provided wake schedule. More details on [Hibernation](./tutorials/tenant/tenant-hibernation.md#hibernating-a-tenant) + ## Mattermost Multitenancy + +Multi Tenant Operator can manage Mattermost to create Teams for tenant users. All tenant users get a unique team and a list of predefined channels gets created. When a user is removed from the tenant, the user is also removed from the Mattermost team corresponding to the tenant. + +More details on [Mattermost](./reference-guides/mattermost.md) + +## Remote Development Namespaces + +Multi Tenant Operator can be configured to automatically provision a namespace in the cluster for every member of a specific tenant; the namespace is preloaded with any selected templates and consumes from the same pool of resources as the tenant's quota, creating safe remote dev namespaces that teams can use as scratch namespaces for rapid prototyping and development.
So, every developer gets a Kubernetes-based cloud development environment that feels like working on localhost. + +More details on [Sandboxes](./tutorials/tenant/create-sandbox.md) + ## Cross Namespace Resource Distribution Multi Tenant Operator supports cloning of secrets and configmaps from one namespace to another namespace based on label selectors. It uses templates to enable users to provide references to secrets and configmaps. It uses a template group instance to distribute those secrets and configmaps in matching namespaces, even if the namespaces belong to different tenants. If a template instance is used, the resources will only be mapped if the namespaces belong to the same tenant. diff --git a/content/installation.md b/content/installation.md index 75f16c15f..295c154d9 100644 --- a/content/installation.md +++ b/content/installation.md @@ -6,11 +6,13 @@ This document contains instructions on installing, uninstalling and configuring 1. [CLI/GitOps](#installing-via-cli-or-gitops) +1. [Enabling Console](#enabling-console) + 1. [Uninstall](#uninstall-via-operatorhub-ui) ## Requirements -* An **OpenShift** cluster [v4.7 - v4.12] +* An **OpenShift** cluster [v4.8 - v4.13] ## Installing via OperatorHub UI @@ -42,34 +44,6 @@ This document contains instructions on installing, uninstalling and configuring > Note: MTO will be installed in `multi-tenant-operator` namespace. -### Configuring IntegrationConfig - -IntegrationConfig is required to configure the settings of multi-tenancy for MTO.
- -* We recommend using the following IntegrationConfig as a starting point - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator -spec: - openshift: - privilegedNamespaces: - - default - - ^openshift-* - - ^kube-* - - ^redhat-* - privilegedServiceAccounts: - - ^system:serviceaccount:default-* - - ^system:serviceaccount:openshift-* - - ^system:serviceaccount:kube-* - - ^system:serviceaccount:redhat-* -``` - -For more details and configurations check out [IntegrationConfig](./integration-config.md). - ## Installing via CLI OR GitOps * Create namespace `multi-tenant-operator` @@ -107,11 +81,7 @@ spec: name: tenant-operator source: certified-operators sourceNamespace: openshift-marketplace - startingCSV: tenant-operator.v0.9.1 - config: - env: - - name: ENABLE_CONSOLE - value: 'true' + startingCSV: tenant-operator.v0.10.0 EOF subscription.operators.coreos.com/tenant-operator created ``` @@ -134,33 +104,40 @@ subscription.operators.coreos.com/tenant-operator created ![image](./images/to_installed_successful_pod.png) -### Configuring IntegrationConfig +For more details and configurations check out [IntegrationConfig](./integration-config.md). -IntegrationConfig is required to configure the settings of multi-tenancy for MTO. 
+## Enabling Console -* We recommend using the following IntegrationConfig as a starting point: +To enable the console GUI for MTO, go to `Search` -> `IntegrationConfig` -> `tenant-operator-config` and make sure the following fields are set to `true`: ```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator spec: - openshift: - privilegedNamespaces: - - default - - ^openshift-* - - ^kube-* - - ^redhat-* - privilegedServiceAccounts: - - ^system:serviceaccount:default-* - - ^system:serviceaccount:openshift-* - - ^system:serviceaccount:kube-* - - ^system:serviceaccount:redhat-* + provision: + console: true + showback: true ``` -For more details and configurations check out [IntegrationConfig](./integration-config.md). +> Note: If your `InstallPlan` approval is set to `Manual`, you will have to approve the `InstallPlan` manually before the MTO console components are installed. + +### Manual Approval + +* Open the OpenShift console and click on `Operators`, then `Installed Operators`, in the side menu. + +![image](./images/manual-approve-1.png) + +* Click on `Upgrade available` next to `mto-opencost` or `mto-prometheus`. + +![image](./images/manual-approve-2.png) + +* Click on `Preview InstallPlan` at the top. + +![image](./images/manual-approve-3.png) + +* Click on the `Approve` button. + +![image](./images/manual-approve-4.png) + +* The `InstallPlan` is now approved, and the MTO console components will be installed.
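For terminal-driven workflows, the same toggles can be applied with the `oc` CLI. This is a hedged sketch, not part of the official docs: it assumes the `oc` CLI, cluster-admin access, and that the IntegrationConfig CRD is registered as shown above; `<installplan-name>` is a placeholder you must replace with the name reported by `oc get installplan`.

```shell
# Enable the console and showback fields on the IntegrationConfig
# (same spec.provision fields as the YAML snippet above).
oc patch integrationconfig tenant-operator-config \
  -n multi-tenant-operator \
  --type merge \
  -p '{"spec":{"provision":{"console":true,"showback":true}}}'

# With Manual approval mode, list pending InstallPlans and approve one
# from the CLI instead of clicking through the web console.
oc get installplan -n multi-tenant-operator
oc patch installplan <installplan-name> \
  -n multi-tenant-operator \
  --type merge \
  -p '{"spec":{"approved":true}}'
```

`spec.approved: true` is the standard OLM mechanism behind the console's `Approve` button, so both paths end in the same state.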
## Uninstall via OperatorHub UI diff --git a/content/integration-config.md b/content/integration-config.md index 6a1842751..1cc42b036 100644 --- a/content/integration-config.md +++ b/content/integration-config.md @@ -7,7 +7,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: tenantRoles: default: @@ -95,6 +95,9 @@ spec: sso: clientName: vault accessorID: + provision: + console: true + showback: true ``` Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator. @@ -251,11 +254,14 @@ users: ### Cluster Admin Groups -`clusterAdminGroups:` Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way +`clusterAdminGroups:` Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way. MTO does not interfere even with the deletion of privilegedNamespaces. + +!!! note + The `kube:admin` user is bypassed by default, allowing it to perform operations as a cluster admin; this includes operations on all namespaces. ### Privileged Namespaces -`privilegedNamespaces:` Contains the list of `namespaces` ignored by MTO. MTO will not manage the `namespaces` in this list. Values in this list are regex patterns. +`privilegedNamespaces:` Contains the list of `namespaces` ignored by MTO. MTO will not manage the `namespaces` in this list. Privileged namespaces are not subject to the integrations and finalizer processing that MTO applies to normal namespaces. Values in this list are regex patterns.
For example: - To ignore the `default` namespace, we can specify `^default$` @@ -368,3 +374,24 @@ If enabled, than admins have to provide secret, URL and SSO accessorID of Vault. - `sso.clientName:` Will contain the client name. For more details please refer [use-cases](./usecases/integrationconfig.md) + +## Provision + +```yaml +provision: + console: true + showback: true +``` + +`provision.console:` Can be used to enable/disable the console GUI for MTO. +`provision.showback:` Can be used to enable/disable the showback feature on the console. + +The IntegrationConfig manages the following resources required for the console GUI: + +- `Showback` cronjob. +- `Keycloak` deployment. +- `MTO-OpenCost` operator. +- `MTO-Prometheus` operator. +- `MTO-Postgresql` stateful set. + +Details on the console GUI and showback can be found [here](explanation/console.md). diff --git a/content/reference-guides/configuring-multitenant-network-isolation.md b/content/reference-guides/configuring-multitenant-network-isolation.md index 9b3eff12a..0d508d184 100644 --- a/content/reference-guides/configuring-multitenant-network-isolation.md +++ b/content/reference-guides/configuring-multitenant-network-isolation.md @@ -55,7 +55,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: openshift: project: diff --git a/content/reference-guides/custom-metrics.md b/content/reference-guides/custom-metrics.md new file mode 100644 index 000000000..44e3c3cd3 --- /dev/null +++ b/content/reference-guides/custom-metrics.md @@ -0,0 +1,11 @@ +# Custom Metrics Support + +Multi Tenant Operator now supports custom metrics for templates, template instances and template group instances. This feature allows users to monitor the usage of templates and template instances in their cluster.
+ +To enable custom metrics and view them in your OpenShift cluster, you need to follow the steps below: + +- Ensure that cluster monitoring is enabled in your cluster. You can check this by going to `Observe` -> `Metrics` in the OpenShift console. +- Navigate to `Administration` -> `Namespaces` in the OpenShift console. Select the namespace where you have installed Multi Tenant Operator. +- Add the following label to the namespace: `openshift.io/cluster-monitoring=true`. This will enable cluster monitoring for the namespace. +- To ensure that the metrics are being scraped for the namespace, navigate to `Observe` -> `Targets` in the OpenShift console. You should see the namespace in the list of targets. +- To view the custom metrics, navigate to `Observe` -> `Metrics` in the OpenShift console. You should see the custom metrics for templates, template instances and template group instances in the list of metrics. diff --git a/content/reference-guides/custom-roles.md b/content/reference-guides/custom-roles.md index b7304959c..1af4184f1 100644 --- a/content/reference-guides/custom-roles.md +++ b/content/reference-guides/custom-roles.md @@ -9,7 +9,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: tenantRoles: default: @@ -35,7 +35,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: tenantRoles: default: diff --git a/content/reference-guides/graph-visualization.md b/content/reference-guides/graph-visualization.md new file mode 100644 index 000000000..f7958750e --- /dev/null +++ b/content/reference-guides/graph-visualization.md @@ -0,0 +1,30 @@ +# Graph Visualization on MTO Console + +Effortlessly associate tenants with their respective resources using the enhanced graph feature on the 
MTO Console. This dynamic graph illustrates the relationships between tenants and the resources they create, encompassing both MTO's proprietary resources and native Kubernetes/OpenShift elements. + +Example Graph: + +```mermaid + graph LR; + A(alpha)-->B(dev); + A-->C(prod); + B-->D(limitrange); + B-->E(owner-rolebinding); + B-->F(editor-rolebinding); + B-->G(viewer-rolebinding); + C-->H(limitrange); + C-->I(owner-rolebinding); + C-->J(editor-rolebinding); + C-->K(viewer-rolebinding); +``` + +Explore with an intuitive graph that showcases the relationships between tenants and their resources. The MTO Console's graph feature simplifies the understanding of complex structures, providing you with a visual representation of your tenant's organization. + +To view the graph of your tenant, follow the steps below: + +- Navigate to `Tenants` page on the MTO Console using the left navigation bar. +![Tenants](../images/graph-1.png) +- Click on `View` of the tenant for which you want to view the graph. +![Tenant View](../images/graph-2.png) +- Click on `Graph` tab on the tenant details page. 
+![Tenant Graph](../images/graph-3.png) diff --git a/content/reference-guides/integrationconfig.md b/content/reference-guides/integrationconfig.md index fe34081e7..ce2ff231b 100644 --- a/content/reference-guides/integrationconfig.md +++ b/content/reference-guides/integrationconfig.md @@ -18,7 +18,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: openshift: privilegedNamespaces: @@ -44,7 +44,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: openshift: privilegedServiceAccounts: @@ -62,7 +62,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: openshift: privilegedServiceAccounts: @@ -85,18 +85,14 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: vault: enabled: true - endpoint: - secretReference: - name: vault-root-token - namespace: vault - url: >- - https://vault.apps.prod.abcdefghi.kubeapp.cloud/ + accessorPath: oidc/ + address: 'https://vault.apps.prod.abcdefghi.kubeapp.cloud/' + roleName: mto sso: - accessorID: auth_oidc_aa6aa9aa clientName: vault ``` diff --git a/content/tutorials/argocd/enabling-multi-tenancy-argocd.md b/content/tutorials/argocd/enabling-multi-tenancy-argocd.md index b4e71bf29..06d86c908 100644 --- a/content/tutorials/argocd/enabling-multi-tenancy-argocd.md +++ b/content/tutorials/argocd/enabling-multi-tenancy-argocd.md @@ -31,7 +31,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: 
stakater-tenant-operator + namespace: multi-tenant-operator spec: ... argocd: @@ -138,7 +138,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: ... argocd: @@ -178,7 +178,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: ... argocd: diff --git a/content/tutorials/vault/enabling-multi-tenancy-vault.md b/content/tutorials/vault/enabling-multi-tenancy-vault.md index 0379ed047..143d132f8 100644 --- a/content/tutorials/vault/enabling-multi-tenancy-vault.md +++ b/content/tutorials/vault/enabling-multi-tenancy-vault.md @@ -22,7 +22,7 @@ This requires a running `RHSSO(RedHat Single Sign On)` instance integrated with MTO integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to relevant tenant paths. -Once both integrations are set up with [IntegrationConfig CR](../../how-to-guides/integration-config.md), MTO links tenant users to specific client roles named after their tenant under Vault client in RHSSO. +Once both integrations are set up with [IntegrationConfig CR](../../how-to-guides/integration-config.md#rhsso-red-hat-single-sign-on), MTO links tenant users to specific client roles named after their tenant under Vault client in RHSSO. After that, MTO creates specific policies in Vault for its tenant users. diff --git a/content/usecases/argocd.md b/content/usecases/argocd.md index 7c0f2fad8..219ab84f4 100644 --- a/content/usecases/argocd.md +++ b/content/usecases/argocd.md @@ -9,7 +9,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: ... 
argocd: @@ -116,7 +116,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: ... argocd: @@ -156,7 +156,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: ... argocd: diff --git a/content/usecases/configuring-multitenant-network-isolation.md b/content/usecases/configuring-multitenant-network-isolation.md index d53ae7747..8d751f3c3 100644 --- a/content/usecases/configuring-multitenant-network-isolation.md +++ b/content/usecases/configuring-multitenant-network-isolation.md @@ -55,7 +55,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: openshift: project: diff --git a/content/usecases/custom-roles.md b/content/usecases/custom-roles.md index d61e86625..ace50dc37 100644 --- a/content/usecases/custom-roles.md +++ b/content/usecases/custom-roles.md @@ -9,7 +9,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: tenantRoles: default: @@ -35,7 +35,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: tenantRoles: default: diff --git a/content/usecases/integrationconfig.md b/content/usecases/integrationconfig.md index 1429e756d..49380d157 100644 --- a/content/usecases/integrationconfig.md +++ b/content/usecases/integrationconfig.md @@ -18,7 +18,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - 
namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: openshift: privilegedNamespaces: @@ -44,7 +44,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: openshift: privilegedServiceAccounts: @@ -62,7 +62,7 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: openshift: privilegedServiceAccounts: @@ -85,18 +85,14 @@ apiVersion: tenantoperator.stakater.com/v1alpha1 kind: IntegrationConfig metadata: name: tenant-operator-config - namespace: stakater-tenant-operator + namespace: multi-tenant-operator spec: vault: enabled: true - endpoint: - secretReference: - name: vault-root-token - namespace: vault - url: >- - https://vault.apps.prod.abcdefghi.kubeapp.cloud/ + accessorPath: oidc/ + address: 'https://vault.apps.prod.abcdefghi.kubeapp.cloud/' + roleName: mto sso: - accessorID: auth_oidc_aa6aa9aa clientName: vault ``` diff --git a/vocabulary b/vocabulary index 6df794427..5e5fd5928 160000 --- a/vocabulary +++ b/vocabulary @@ -1 +1 @@ -Subproject commit 6df79442723244b60287235a6319d5d422c0b8b0 +Subproject commit 5e5fd5928e6656037a67be50c968e8011f7ca1eb