diff --git a/website/docs/introduction/core-principles.md b/website/docs/introduction/core-principles.md
index ca27a9d5..0e951cb5 100644
--- a/website/docs/introduction/core-principles.md
+++ b/website/docs/introduction/core-principles.md
@@ -10,26 +10,26 @@ sidebar_label: Core principles

 Cloud Native Chaos Engineering, defined as engineering practices focused on (and built on) Kubernetes environments, applications, microservices, and infrastructure follows these core principles -

-## Driven by Open Source
+**Driven by Open Source**

 Cloud-native software provides the ideal platform for multi-cloud deployments because it is rooted in open-source standards established by the World Wide Web Consortium (W3C). Digital transformation requires real-time, event-driven data collection and the W3C “One Web” vision defines an ideal architecture for any data to run with any app across any W3C-compliant cloud.

 This principle focuses on the framework to be completely open-source under the Apache2 License to encourage broader community participation and inspection. The number of applications moving to the Kubernetes platform is limitless. At such a large scale, only the Open Chaos model will thrive and get the required adoption.

-## CRDs for Chaos Management
+**CRDs for Chaos Management**

 Custom Resource Definition(CRD) is what you use to define a Custom Resource. This is a powerful way to extend Kubernetes capabilities beyond the default installation. These Kubernetes native CRDs defined here should be used as APIs for both Developers and SREs to build and orchestrate chaos testing. The CRDs act as standard APIs to provision and manage the chaos.

-## Extensible and Pluggable
+**Extensible and Pluggable**

 One lesson learned why cloud native approaches are winning is that their components can be relatively easily swapped out and new ones introduced as needed. Any standard chaos library or functionality developed by other open-source developers should be able to be integrated into and orchestrated for testing via this pluggable framework.

-## Broad Community Adoption
+**Broad Community Adoption**

 Once we have the APIs, Operator, and plugin framework, we have all the ingredients needed for a common way of injecting chaos. The chaos will be run against a well-known infrastructure like Kubernetes or applications like databases or other infrastructure components like storage or networking. These chaos experiments can be reused, and a broad-based community is useful for identifying and contributing to other high-value scenarios. Hence a Chaos Engineering framework should provide a central hub or forge where open-source chaos experiments are shared, and collaboration via code is enabled.

 [Learn more about our community adoption](community.md)

-## GitOps for Chaos Management
+**GitOps for Chaos Management**

 Use GitOps as an operational framework that takes DevOps best practices used for application development such as version control, collaboration, compliance, and CI/CD, and applies them to infrastructure automation. With the demands made on today’s infrastructure, it has become increasingly crucial to implement infrastructure automation. Modern infrastructure needs to be elastic so that it can effectively manage cloud resources that are needed for continuous deployments.
diff --git a/website/docs/introduction/what-is-litmus.md b/website/docs/introduction/what-is-litmus.md
index 360a9241..83ff474e 100644
--- a/website/docs/introduction/what-is-litmus.md
+++ b/website/docs/introduction/what-is-litmus.md
@@ -20,7 +20,7 @@ Kubernetes is being run on a variety of infrastructure, ranging from virtual mac

 Your application resilience really depends more on the underlying stack than your application itself. It is possible that once your application is stabilized, the resilience of your service that runs on Kubernetes depends on other components and infrastructure more than 90% of the time.

-Thus it is important to verify your application resilience whenever a change has happened in the underlying stack. **Keep verifying** is the key. Robust testing before upgrades is not good enough, mainly because you cannot possibly consider all sorts of faults during upgrade testing. This introduces the concept of Chaos Engineering. The process of "**continuously verifying** if your service is resilient against faults" is called Chaos Engineering.
+Thus it is important to verify your application resilience whenever a change has happened in the underlying stack. **Keep verifying** is the key. Robust testing before upgrades is not good enough, mainly because you cannot possibly consider all sorts of faults during upgrade testing. This introduces the concept of Chaos Engineering. The process of "**continuously verifying if your service is resilient against faults**" is called Chaos Engineering.

 ## What is a Chaos Experiment
diff --git a/website/docs/user-guides/chaos-infrastructure-installation.md b/website/docs/user-guides/chaos-infrastructure-installation.md
index b2f28d6b..f80768db 100644
--- a/website/docs/user-guides/chaos-infrastructure-installation.md
+++ b/website/docs/user-guides/chaos-infrastructure-installation.md
@@ -1,7 +1,7 @@
 ---
 id: chaos-infrastructure-installation
-title: chaos-infrastructure-installation
-sidebar_label: chaos-infrastructure-installation
+title: Chaos Infrastructure Installation
+sidebar_label: Chaos Infrastructure Installation
 ---

 ---
diff --git a/website/docs/user-guides/chaoscenter-namespace-scope-installation.md b/website/docs/user-guides/chaoscenter-namespace-scope-installation.md
index d27f7ef1..1264ea3f 100644
--- a/website/docs/user-guides/chaoscenter-namespace-scope-installation.md
+++ b/website/docs/user-guides/chaoscenter-namespace-scope-installation.md
@@ -27,7 +27,7 @@ Installation of Litmus can be done using either of the below methods

 - [Helm3](#install-litmus-using-helm) chart
 - [Kubectl](#install-litmus-using-kubectl) yaml spec file

-### **Install Litmus using Helm **
+### Install Litmus using Helm

 The helm chart will install all the required service account configuration and ChaosCenter.
@@ -97,9 +97,9 @@ Visit https://docs.litmuschaos.io/ to find more info.

 > **Note:** Litmus uses Kubernetes CRDs to define chaos intent. Helm3 handles CRDs better than Helm2. Before you start running a chaos experiment, verify if Litmus is installed correctly.

-### **Install Litmus using kubectl **
+### Install Litmus using kubectl

-#### **Set the namespace on which you want to install Litmus ChaosCenter**
+#### Set the namespace on which you want to install Litmus ChaosCenter

 > Create a namespace `kubectl create ns `
@@ -114,7 +114,7 @@ NAME     STATUS   AGE
 litmus   Active   2s
 ```

-#### **Install the required Litmus CRDs**
+#### Install the required Litmus CRDs

 The cluster-admin or an equivalent user with the right permissions are required to install the CRDs upfront.
@@ -136,7 +136,7 @@ customresourcedefinition.apiextensions.k8s.io/chaosresults.litmuschaos.io create
 customresourcedefinition.apiextensions.k8s.io/eventtrackerpolicies.eventtracker.litmuschaos.io created
 ```

-#### **Install Litmus ChaosCenter**
+#### Install Litmus ChaosCenter

 Applying the manifest file will install all the required service account configuration and ChaosCenter.
@@ -178,11 +178,11 @@ service/mongo-service created
 service/mongo-headless-service created
 ```

-## **Verify your installation**
+## Verify your installation

 ---

-#### **Verify if the frontend, server, and database pods are running**
+#### Verify if the frontend, server, and database pods are running

 - Check the pods in the namespace where you installed Litmus:
@@ -227,7 +227,7 @@ kubectl set env deployment/litmusportal-server -n litmus --containers="graphql-s

 ---

-#### **Verify Successful Registration of the Self Chaos Delegate post [Account Configuration](setup-without-ingress)**
+#### Verify Successful Registration of the Self Chaos Delegate post [Account Configuration](setup-without-ingress)

 Once the project is created, the cluster is automatically registered as a chaos target via installation of [Chaos Delegate](../getting-started/resources.md#chaosagents). This is represented as [Self Chaos Delegate](../getting-started/resources.md#types-of-chaosagents) in [ChaosCenter](../getting-started/resources.md#chaosagents).
diff --git a/website/versioned_docs/version-3.0.0/introduction/core-principles.md b/website/versioned_docs/version-3.0.0/introduction/core-principles.md
index ca27a9d5..0e951cb5 100644
--- a/website/versioned_docs/version-3.0.0/introduction/core-principles.md
+++ b/website/versioned_docs/version-3.0.0/introduction/core-principles.md
@@ -10,26 +10,26 @@ sidebar_label: Core principles

 Cloud Native Chaos Engineering, defined as engineering practices focused on (and built on) Kubernetes environments, applications, microservices, and infrastructure follows these core principles -

-## Driven by Open Source
+**Driven by Open Source**

 Cloud-native software provides the ideal platform for multi-cloud deployments because it is rooted in open-source standards established by the World Wide Web Consortium (W3C). Digital transformation requires real-time, event-driven data collection and the W3C “One Web” vision defines an ideal architecture for any data to run with any app across any W3C-compliant cloud.

 This principle focuses on the framework to be completely open-source under the Apache2 License to encourage broader community participation and inspection. The number of applications moving to the Kubernetes platform is limitless. At such a large scale, only the Open Chaos model will thrive and get the required adoption.

-## CRDs for Chaos Management
+**CRDs for Chaos Management**

 Custom Resource Definition(CRD) is what you use to define a Custom Resource. This is a powerful way to extend Kubernetes capabilities beyond the default installation. These Kubernetes native CRDs defined here should be used as APIs for both Developers and SREs to build and orchestrate chaos testing. The CRDs act as standard APIs to provision and manage the chaos.

-## Extensible and Pluggable
+**Extensible and Pluggable**

 One lesson learned why cloud native approaches are winning is that their components can be relatively easily swapped out and new ones introduced as needed. Any standard chaos library or functionality developed by other open-source developers should be able to be integrated into and orchestrated for testing via this pluggable framework.

-## Broad Community Adoption
+**Broad Community Adoption**

 Once we have the APIs, Operator, and plugin framework, we have all the ingredients needed for a common way of injecting chaos. The chaos will be run against a well-known infrastructure like Kubernetes or applications like databases or other infrastructure components like storage or networking. These chaos experiments can be reused, and a broad-based community is useful for identifying and contributing to other high-value scenarios. Hence a Chaos Engineering framework should provide a central hub or forge where open-source chaos experiments are shared, and collaboration via code is enabled.

 [Learn more about our community adoption](community.md)

-## GitOps for Chaos Management
+**GitOps for Chaos Management**

 Use GitOps as an operational framework that takes DevOps best practices used for application development such as version control, collaboration, compliance, and CI/CD, and applies them to infrastructure automation. With the demands made on today’s infrastructure, it has become increasingly crucial to implement infrastructure automation. Modern infrastructure needs to be elastic so that it can effectively manage cloud resources that are needed for continuous deployments.
diff --git a/website/versioned_docs/version-3.0.0/introduction/what-is-litmus.md b/website/versioned_docs/version-3.0.0/introduction/what-is-litmus.md
index 360a9241..83ff474e 100644
--- a/website/versioned_docs/version-3.0.0/introduction/what-is-litmus.md
+++ b/website/versioned_docs/version-3.0.0/introduction/what-is-litmus.md
@@ -20,7 +20,7 @@ Kubernetes is being run on a variety of infrastructure, ranging from virtual mac

 Your application resilience really depends more on the underlying stack than your application itself. It is possible that once your application is stabilized, the resilience of your service that runs on Kubernetes depends on other components and infrastructure more than 90% of the time.

-Thus it is important to verify your application resilience whenever a change has happened in the underlying stack. **Keep verifying** is the key. Robust testing before upgrades is not good enough, mainly because you cannot possibly consider all sorts of faults during upgrade testing. This introduces the concept of Chaos Engineering. The process of "**continuously verifying** if your service is resilient against faults" is called Chaos Engineering.
+Thus it is important to verify your application resilience whenever a change has happened in the underlying stack. **Keep verifying** is the key. Robust testing before upgrades is not good enough, mainly because you cannot possibly consider all sorts of faults during upgrade testing. This introduces the concept of Chaos Engineering. The process of "**continuously verifying if your service is resilient against faults**" is called Chaos Engineering.

 ## What is a Chaos Experiment
diff --git a/website/versioned_docs/version-3.0.0/user-guides/chaos-infrastructure-installation.md b/website/versioned_docs/version-3.0.0/user-guides/chaos-infrastructure-installation.md
index b2f28d6b..f80768db 100644
--- a/website/versioned_docs/version-3.0.0/user-guides/chaos-infrastructure-installation.md
+++ b/website/versioned_docs/version-3.0.0/user-guides/chaos-infrastructure-installation.md
@@ -1,7 +1,7 @@
 ---
 id: chaos-infrastructure-installation
-title: chaos-infrastructure-installation
-sidebar_label: chaos-infrastructure-installation
+title: Chaos Infrastructure Installation
+sidebar_label: Chaos Infrastructure Installation
 ---

 ---
diff --git a/website/versioned_docs/version-3.0.0/user-guides/chaoscenter-namespace-scope-installation.md b/website/versioned_docs/version-3.0.0/user-guides/chaoscenter-namespace-scope-installation.md
index d27f7ef1..1264ea3f 100644
--- a/website/versioned_docs/version-3.0.0/user-guides/chaoscenter-namespace-scope-installation.md
+++ b/website/versioned_docs/version-3.0.0/user-guides/chaoscenter-namespace-scope-installation.md
@@ -27,7 +27,7 @@ Installation of Litmus can be done using either of the below methods

 - [Helm3](#install-litmus-using-helm) chart
 - [Kubectl](#install-litmus-using-kubectl) yaml spec file

-### **Install Litmus using Helm **
+### Install Litmus using Helm

 The helm chart will install all the required service account configuration and ChaosCenter.
@@ -97,9 +97,9 @@ Visit https://docs.litmuschaos.io/ to find more info.

 > **Note:** Litmus uses Kubernetes CRDs to define chaos intent. Helm3 handles CRDs better than Helm2. Before you start running a chaos experiment, verify if Litmus is installed correctly.

-### **Install Litmus using kubectl **
+### Install Litmus using kubectl

-#### **Set the namespace on which you want to install Litmus ChaosCenter**
+#### Set the namespace on which you want to install Litmus ChaosCenter

 > Create a namespace `kubectl create ns `
@@ -114,7 +114,7 @@ NAME     STATUS   AGE
 litmus   Active   2s
 ```

-#### **Install the required Litmus CRDs**
+#### Install the required Litmus CRDs

 The cluster-admin or an equivalent user with the right permissions are required to install the CRDs upfront.
@@ -136,7 +136,7 @@ customresourcedefinition.apiextensions.k8s.io/chaosresults.litmuschaos.io create
 customresourcedefinition.apiextensions.k8s.io/eventtrackerpolicies.eventtracker.litmuschaos.io created
 ```

-#### **Install Litmus ChaosCenter**
+#### Install Litmus ChaosCenter

 Applying the manifest file will install all the required service account configuration and ChaosCenter.
@@ -178,11 +178,11 @@ service/mongo-service created
 service/mongo-headless-service created
 ```

-## **Verify your installation**
+## Verify your installation

 ---

-#### **Verify if the frontend, server, and database pods are running**
+#### Verify if the frontend, server, and database pods are running

 - Check the pods in the namespace where you installed Litmus:
@@ -227,7 +227,7 @@ kubectl set env deployment/litmusportal-server -n litmus --containers="graphql-s

 ---

-#### **Verify Successful Registration of the Self Chaos Delegate post [Account Configuration](setup-without-ingress)**
+#### Verify Successful Registration of the Self Chaos Delegate post [Account Configuration](setup-without-ingress)

 Once the project is created, the cluster is automatically registered as a chaos target via installation of [Chaos Delegate](../getting-started/resources.md#chaosagents). This is represented as [Self Chaos Delegate](../getting-started/resources.md#types-of-chaosagents) in [ChaosCenter](../getting-started/resources.md#chaosagents).