From ac07bf65177469784a63655e563142854d7d6e90 Mon Sep 17 00:00:00 2001 From: Charlie Le Date: Thu, 13 Nov 2025 18:03:58 -0800 Subject: [PATCH] Restructure and enhance getting started documentation Split the monolithic getting started guide into three focused documents: - Main index with overview and guide selection - Separate single-binary mode guide - Separate microservices mode guide Key improvements: - Clear comparison table to help users choose the right deployment mode - Step-by-step instructions with verification commands - Hands-on experiments to learn Cortex behavior - Comprehensive troubleshooting sections - Explanations of multi-tenancy and key concepts - Clear next steps and additional resources Signed-off-by: Charlie Le --- docs/getting-started/_index.md | 334 ++---------- docs/getting-started/grafana-values.yaml | 10 + docs/getting-started/microservices.md | 644 +++++++++++++++++++++++ docs/getting-started/single-binary.md | 373 +++++++++++++ 4 files changed, 1074 insertions(+), 287 deletions(-) create mode 100644 docs/getting-started/microservices.md create mode 100644 docs/getting-started/single-binary.md diff --git a/docs/getting-started/_index.md b/docs/getting-started/_index.md index 5edbd38bff3..e27f79e6441 100644 --- a/docs/getting-started/_index.md +++ b/docs/getting-started/_index.md @@ -6,315 +6,75 @@ no_section_index_title: true slug: "getting-started" --- -Cortex is a powerful platform software that can be run in two modes: as a single binary or as multiple -independent [microservices](../architecture.md). +Welcome to Cortex! This guide will help you get a Cortex environment up and running quickly. -There are two guides in this section: +## What is Cortex? -1. [Single Binary Mode with Docker Compose](#single-binary-mode) -2. [Microservice Mode with KIND](#microservice-mode) +Cortex is a horizontally scalable, highly available, multi-tenant, long-term storage solution for Prometheus and OpenTelemetry Metrics. 
It can be run in two modes: -The single binary mode is useful for testing and development, while the microservice mode is useful for production. +- **Single Binary Mode**: All components run in a single process - ideal for testing, development, and learning +- **Microservices Mode**: Components run as independent services - designed for production deployments -Both guides will help you get started with Cortex using [blocks storage](../blocks-storage/_index.md). +Both deployment modes use [blocks storage](../blocks-storage/_index.md), which is based on Prometheus TSDB and stores data in object storage (S3, GCS, Azure, or compatible services). -## Single Binary Mode +## Choose Your Guide -This guide will help you get started with Cortex in single-binary mode using -[blocks storage](../blocks-storage/_index.md). +| Mode | Time | Use Case | Guide | +|------|------|----------|-------| +| **Single Binary** | ~15 min | Learning, Development, Testing | [Start Here →](single-binary.md) | +| **Microservices** | ~30 min | Production-like Environment, Kubernetes | [Start Here →](microservices.md) | -### Prerequisites +### Single Binary Mode -Cortex can be configured to use local storage or cloud storage (S3, GCS, and Azure). It can also utilize external -Memcached and Redis instances for caching. This guide will focus on running Cortex as a single process with no -dependencies. +Perfect for your first experience with Cortex. Runs all components in one process with minimal dependencies. -* [Docker Compose](https://docs.docker.com/compose/install/) +**What you'll set up:** +- Cortex (single process) +- Prometheus (sending metrics via remote_write) +- Grafana (visualizing metrics) +- SeaweedFS (S3-compatible storage) -### Running Cortex as a Single Instance +**Requirements:** +- Docker & Docker Compose +- 4GB RAM, 10GB disk -For simplicity, we'll start by running Cortex as a single process with no dependencies. 
This mode is not recommended or -intended for production environments or production use. +[Get Started with Single Binary Mode →](single-binary.md) -This example uses [Docker Compose](https://docs.docker.com/compose/) to set up: +### Microservices Mode -1. An instance of [SeaweedFS](https://github.com/seaweedfs/seaweedfs/) for S3-compatible object storage -1. An instance of [Cortex](https://cortexmetrics.io/) to receive metrics. -1. An instance of [Prometheus](https://prometheus.io/) to send metrics to Cortex. -1. An instance of [Perses](https://perses.dev) for latest trend on dashboarding -1. An instance of [Grafana](https://grafana.com/) for legacy dashboarding +Experience Cortex as it runs in production. Each component runs as a separate service in Kubernetes. -#### Instructions +**What you'll set up:** +- Cortex (distributed: ingester, querier, distributor, compactor, etc.) +- Prometheus (sending metrics via remote_write) +- Grafana (visualizing metrics) +- SeaweedFS (S3-compatible storage) -```sh -$ git clone https://github.com/cortexproject/cortex.git -$ cd cortex/docs/getting-started -``` - -**Note**: This guide uses `grafana-datasource-docker.yaml` which is specifically configured for the single binary Docker Compose deployment. For Kubernetes/microservices mode, use `grafana-datasource.yaml` instead. - -##### Start the services - -```sh -$ docker compose up -d -``` - -We can now access the following services: - -* [Cortex](http://localhost:9009) -* [Prometheus](http://localhost:9090) -* [Grafana](http://localhost:3000) -* [SeaweedFS](http://localhost:8333) - -If everything is working correctly, Prometheus should be sending metrics that it is scraping to Cortex. Prometheus is -configured to send metrics to Cortex via `remote_write`. Check out the `prometheus-config.yaml` file to see -how this is configured. 
- -#### Configure Cortex Recording Rules and Alerting Rules - -We can configure Cortex with [cortextool](https://github.com/cortexproject/cortex-tools/) to load [recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) and [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/). This is optional, but it is helpful to see how Cortex can be configured to manage rules and alerts. - -```sh -# Configure recording rules for the Cortex tenant (optional) -$ docker run --network host -v "$(pwd):/workspace" -w /workspace quay.io/cortexproject/cortex-tools:v0.17.0 rules sync rules.yaml alerts.yaml --id cortex --address http://localhost:9009 -``` - -#### Configure Cortex Alertmanager - -Cortex also comes with a multi-tenant Alertmanager. Let's load configuration for it to be able to view them in Grafana. - -```sh -# Configure alertmanager for the Cortex tenant -$ docker run --network host -v "$(pwd):/workspace" -w /workspace quay.io/cortexproject/cortex-tools:v0.17.0 alertmanager load alertmanager-config.yaml --id cortex --address http://localhost:9009 -``` - -You can configure Alertmanager in [Grafana as well](http://localhost:3000/alerting/notifications?search=&alertmanager=Cortex%20Alertmanager). - -There's a list of recording rules and alerts that should be visible in Grafana [here](http://localhost:3000/alerting/list?view=list&search=datasource:Cortex). - -#### Explore - -Grafana is configured to use Cortex as a data source. Grafana is also configured with [Cortex Dashboards](http://localhost:3000/dashboards?tag=cortex) to understand the state of the Cortex instance. The dashboards are generated from the cortex-jsonnet repository. There is a Makefile in the repository that can be used to update the dashboards. 
- -```sh -# Update the dashboards (optional) -$ make -``` - -If everything is working correctly, then the metrics seen in Grafana were successfully sent from Prometheus to Cortex -via `remote_write`! - -Other things to explore: - -- [Cortex](http://localhost:9009) - Administrative interface for Cortex - - Try shutting down the ingester, and see how it affects metric ingestion. - - Restart Cortex to bring the ingester back online, and see how Prometheus catches up. - - Does it affect the querying of metrics in Grafana? -- [Prometheus](http://localhost:9090) - Prometheus instance that is sending metrics to Cortex - - Try querying the metrics in Prometheus. - - Are they the same as what you see in Cortex? -- [Grafana](http://localhost:3000) - Grafana instance that is visualizing the metrics. - - Try creating a new dashboard and adding a new panel with a query to Cortex. - -### Clean up - -```sh -$ docker compose down -v -``` - -## Microservice Mode - -Now that you have Cortex running as a single instance, let's explore how to run Cortex in microservice mode. - -### Prerequisites - -* [Kind](https://kind.sigs.k8s.io) -* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) -* [Helm](https://helm.sh/docs/intro/install/) - -This example uses [Kind](https://kind.sigs.k8s.io) to set up: - -1. A Kubernetes cluster -1. An instance of [SeaweedFS](https://github.com/seaweedfs/seaweedfs/) for S3-compatible object storage -1. An instance of [Cortex](https://cortexmetrics.io/) to receive metrics -1. An instance of [Prometheus](https://prometheus.io/) to send metrics to Cortex -1. 
An instance of [Grafana](https://grafana.com/) to visualize the metrics - -### Setup Kind - -```sh -$ kind create cluster -``` - -### Configure Helm - -```sh -$ helm repo add cortex-helm https://cortexproject.github.io/cortex-helm-chart -$ helm repo add grafana https://grafana.github.io/helm-charts -$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts -``` - -### Instructions - -```sh -$ git clone https://github.com/cortexproject/cortex.git -$ cd cortex/docs/getting-started -``` - -#### Configure SeaweedFS (S3) - -```sh -# Create a namespace -$ kubectl create namespace cortex -``` - -```sh -# We can emulate S3 with SeaweedFS -$ kubectl --namespace cortex apply -f seaweedfs.yaml --wait --timeout=5m -``` +**Requirements:** +- Kind, kubectl, Helm +- 8GB RAM, 20GB disk -```sh -# Wait for SeaweedFS to be ready -$ sleep 5 -$ kubectl --namespace cortex wait --for=condition=ready pod -l app=seaweedfs --timeout=5m -``` - -```sh -# Port-forward to SeaweedFS to create a bucket -$ kubectl --namespace cortex port-forward svc/seaweedfs 8333 & -``` - -```sh -# Create buckets in SeaweedFS -$ for bucket in cortex-blocks cortex-ruler cortex-alertmanager; do - curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" -X PUT http://localhost:8333/$bucket -done -``` - -#### Setup Cortex - -```sh -# Deploy Cortex using the provided values file which configures -# - blocks storage to use the seaweedfs service -$ helm upgrade --install --version=2.4.0 --namespace cortex cortex cortex-helm/cortex -f cortex-values.yaml --wait -``` - -#### Setup Prometheus - -```sh -# Deploy Prometheus to scrape metrics in the cluster and send them, via remote_write, to Cortex. -$ helm upgrade --install --version=25.20.1 --namespace cortex prometheus prometheus-community/prometheus -f prometheus-values.yaml --wait -``` - -If everything is working correctly, Prometheus should be sending metrics that it is scraping to Cortex. 
Prometheus is -configured to send metrics to Cortex via `remote_write`. Check out the `prometheus-config.yaml` file to see -how this is configured. - -#### Setup Grafana - -```sh -# Deploy Grafana to visualize the metrics that were sent to Cortex. -$ helm upgrade --install --version=7.3.9 --namespace cortex grafana grafana/grafana -f grafana-values.yaml --wait -``` - -**Note**: This guide uses `grafana-values.yaml` with Helm to configure Grafana datasources. Alternatively, you can manually deploy Grafana with `grafana-datasource.yaml` which is specifically configured for Kubernetes/microservices mode with the correct `cortex-nginx` endpoints. - -```sh -# Create dashboards for Cortex -$ for dashboard in $(ls dashboards); do - basename=$(basename -s .json $dashboard) - cmname=grafana-dashboard-$basename - kubectl create --namespace cortex configmap $cmname --from-file=$dashboard=dashboards/$dashboard --save-config=true -o yaml --dry-run=client | kubectl apply -f - - kubectl patch --namespace cortex configmap $cmname -p '{"metadata":{"labels":{"grafana_dashboard":""}}}' -done - -``` - -```sh -# Port-forward to Grafana to visualize -$ kubectl --namespace cortex port-forward deploy/grafana 3000 & -``` - -View the dashboards in [Grafana](http://localhost:3000/dashboards?tag=cortex). - -#### Configure Cortex Recording Rules and Alerting Rules (Optional) - -We can configure Cortex with [cortextool](https://github.com/cortexproject/cortex-tools/) to load [recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) and [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/). This is optional, but it is helpful to see how Cortex can be configured to manage rules and alerts. 
+[Get Started with Microservices Mode →](microservices.md) -```sh -# Port forward to the alertmanager to configure recording rules and alerts -$ kubectl --namespace cortex port-forward svc/cortex-nginx 8080:80 & -``` - -```sh -# Configure recording rules for the cortex tenant -$ cortextool rules sync rules.yaml alerts.yaml --id cortex --address http://localhost:8080 -``` - -#### Configure Cortex Alertmanager (Optional) - -Cortex also comes with a multi-tenant Alertmanager. Let's load configuration for it to be able to view them in Grafana. - -```sh -# Configure alertmanager for the cortex tenant -$ cortextool alertmanager load alertmanager-config.yaml --id cortex --address http://localhost:8080 -``` - -You can configure Alertmanager in [Grafana as well](http://localhost:3000/alerting/notifications?search=&alertmanager=Cortex%20Alertmanager). - -There's a list of recording rules and alerts that should be visible in Grafana [here](http://localhost:3000/alerting/list?view=list&search=datasource:Cortex). - -#### Explore - -Grafana is configured to use Cortex as a data source. Grafana is also configured with [Cortex Dashboards](http://localhost:3000/dashboards?tag=cortex) to understand the state of the Cortex instance. The dashboards are generated from the cortex-jsonnet repository. There is a Makefile in the repository that can be used to update the dashboards. - -```sh -# Update the dashboards (optional) -$ make -``` - -If everything is working correctly, then the metrics seen in Grafana were successfully sent from Prometheus to Cortex -via `remote_write`! - -Other things to explore: - -[Cortex](http://localhost:9009) - Administrative interface for Cortex - -```sh -# Port forward to the ingester to see the administrative interface for Cortex -$ kubectl --namespace cortex port-forward deploy/cortex-ingester 9009:8080 & -``` +## Key Concepts -- Try shutting down the ingester, and see how it affects metric ingestion. 
-- Restart Cortex to bring the ingester back online, and see how Prometheus catches up. -- Does it affect the querying of metrics in Grafana? +Before you begin, it's helpful to understand these core concepts: -[Prometheus](http://localhost:9090) - Prometheus instance that is sending metrics to Cortex +- **Blocks Storage**: Cortex's storage engine based on Prometheus TSDB. Metrics are stored in 2-hour blocks in object storage. +- **Multi-tenancy**: Cortex isolates metrics by tenant ID (sent via `X-Scope-OrgID` header). In these guides, we use `cortex` as the tenant ID. +- **Remote Write**: Prometheus protocol for sending metrics to remote storage systems like Cortex. +- **Components**: In microservices mode, Cortex runs as separate services (distributor, ingester, querier, etc.). In single binary mode, all run together. -```sh -# Port forward to Prometheus to see the metrics that are being scraped -$ kubectl --namespace cortex port-forward deploy/prometheus-server 9090 & -``` -- Try querying the metrics in Prometheus. -- Are they the same as what you see in Cortex? - -[Grafana](http://localhost:3000) - Grafana instance that is visualizing the metrics. +## Data Flow -```sh -# Port forward to Grafana to visualize -$ kubectl --namespace cortex port-forward deploy/grafana 3000 & ``` - -- Try creating a new dashboard and adding a new panel with a query to Cortex. - -### Clean up - -```sh -# Remove the port-forwards -$ killall kubectl +Prometheus → remote_write → Cortex → Object Storage (S3) + ↓ + Grafana (queries via PromQL) ``` -```sh -$ kind delete cluster -``` +## Need Help? 
+- **Documentation**: Explore the [Architecture guide](../architecture.md) to understand Cortex's design +- **Community**: Join the [CNCF Slack #cortex channel](https://cloud-native.slack.com/archives/cortex) +- **Issues**: Report problems on [GitHub](https://github.com/cortexproject/cortex/issues) diff --git a/docs/getting-started/grafana-values.yaml b/docs/getting-started/grafana-values.yaml index b36403b0a09..f5ecf4ef6d4 100644 --- a/docs/getting-started/grafana-values.yaml +++ b/docs/getting-started/grafana-values.yaml @@ -614,6 +614,16 @@ datasources: timeInterval: 15s secureJsonData: httpHeaderValue1: cortex + - name: Cortex Alertmanager + type: alertmanager + url: http://cortex-nginx/api/prom + access: proxy + editable: true + jsonData: + httpHeaderName1: X-Scope-OrgID + implementation: cortex + secureJsonData: + httpHeaderValue1: cortex # - name: CloudWatch # type: cloudwatch # access: proxy diff --git a/docs/getting-started/microservices.md b/docs/getting-started/microservices.md new file mode 100644 index 00000000000..22b26a75d51 --- /dev/null +++ b/docs/getting-started/microservices.md @@ -0,0 +1,644 @@ +--- +title: "Microservices Mode" +linkTitle: "Microservices Mode" +weight: 3 +slug: "microservices" +--- + +This guide will help you run Cortex in microservices mode using Kubernetes (Kind). In this mode, each Cortex component runs as an independent service, mirroring how Cortex runs in production. 
+ +**Time to complete:** ~30 minutes + +## What You'll Learn + +- How to deploy Cortex on Kubernetes with Helm +- How Cortex microservices communicate with each other +- How to configure Prometheus to send metrics to a distributed Cortex +- How to query metrics across multiple Cortex services +- How to configure rules and alerts in a distributed setup + +## Prerequisites + +### Software Requirements + +- [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) (Kubernetes in Docker) +- [kubectl](https://kubernetes.io/docs/tasks/tools/) (v1.20+) +- [Helm](https://helm.sh/docs/intro/install/) (v3.0+) +- [Docker](https://docs.docker.com/get-docker/) (required by Kind) + +### System Requirements + +- 8GB RAM minimum (for local Kubernetes cluster) +- 20GB disk space +- Linux, macOS, or Windows with WSL2 + +### Optional Tools + +- [cortextool](https://github.com/cortexproject/cortex-tools/) - For managing rules and alerts +- [jq](https://jqlang.github.io/jq/) - For parsing JSON responses + +## Architecture + +This setup creates a production-like Cortex deployment with independent microservices: + +``` + ┌─────────────────────────────────────┐ + │ Kubernetes Cluster (Kind) │ + │ │ +┌─────────────┐ │ ┌──────────────┐ ┌──────────────┐ │ +│ Prometheus │────remote───┼─>│ Distributor │ │ Ingester │ │ +│ │ write │ └──────────────┘ └──────────────┘ │ +└─────────────┘ │ │ │ │ + │ │ │ │ + │ └────────────────┘ │ + │ │ │ + │ ▼ │ + ┌─────────────┐ │ ┌──────────────┐ ┌──────────────┐ │ + │ Grafana │◄──────┼──┤ Querier │ │ SeaweedFS │ │ + └─────────────┘ │ └──────────────┘ │ (S3) │ │ + │ ▲ └──────────────┘ │ + │ │ │ │ + │ ┌──────────────┐ │ │ + │ │Store Gateway │◄────────┘ │ + │ └──────────────┘ │ + │ │ + │ ┌──────────────┐ │ + │ │ Compactor │ │ + │ └──────────────┘ │ + │ │ + │ ┌──────────────┐ │ + │ │ Ruler │ │ + │ └──────────────┘ │ + └─────────────────────────────────────┘ +``` + +**Components:** +- **Distributor**: Receives metrics from Prometheus, validates, and 
forwards to ingesters
+- **Ingester**: Stores recent metrics in memory and flushes to S3
+- **Querier**: Queries both ingesters (recent data) and store-gateway (historical data)
+- **Store Gateway**: Queries historical blocks from S3
+- **Compactor**: Compacts blocks in S3 by merging smaller blocks into larger ones
+- **Ruler**: Evaluates recording and alerting rules
+- **Alertmanager**: Manages alerts (optional)
+
+## Step 1: Create a Kubernetes Cluster
+
+Create a local Kubernetes cluster using Kind:
+
+```sh
+kind create cluster --name cortex-demo
+```
+
+**What's happening?**
+- Kind creates a Kubernetes cluster running inside Docker
+- This takes ~2 minutes
+- The cluster is named `cortex-demo`
+
+**Verify the cluster:**
+```sh
+kubectl cluster-info
+kubectl get nodes
+```
+
+You should see one node in the `Ready` state.
+
+## Step 2: Configure Helm Repositories
+
+Add the Helm repositories for Cortex, Grafana, and Prometheus:
+
+```sh
+helm repo add cortex-helm https://cortexproject.github.io/cortex-helm-chart
+helm repo add grafana https://grafana.github.io/helm-charts
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+helm repo update
+```
+
+## Step 3: Clone the Repository
+
+```sh
+git clone https://github.com/cortexproject/cortex.git
+cd cortex/docs/getting-started
+```
+
+The `getting-started` directory contains Helm values files and Kubernetes manifests.
+
+## Step 4: Create the Cortex Namespace
+
+```sh
+kubectl create namespace cortex
+```
+
+All Cortex components will be deployed in this namespace.
+
+## Step 5: Deploy SeaweedFS (S3-Compatible Storage)
+
+Cortex requires object storage for blocks. We'll use SeaweedFS as an S3-compatible alternative.
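Under the hood, the Helm values translate into Cortex's blocks storage configuration. For orientation, here is a minimal sketch of what that configuration looks like. The field names follow Cortex's `blocks_storage` config block; the in-cluster endpoint and the `any`/`any` credentials are assumptions matching the SeaweedFS service deployed below:

```yaml
blocks_storage:
  backend: s3
  s3:
    # Assumed in-cluster DNS name of the SeaweedFS service deployed in this step
    endpoint: seaweedfs.cortex.svc.cluster.local:8333
    bucket_name: cortex-blocks
    access_key_id: any
    secret_access_key: any
    insecure: true  # SeaweedFS here serves plain HTTP, not TLS
```

The actual values used by this guide live in `cortex-values.yaml`; prefer editing that file and re-running `helm upgrade` rather than writing this config by hand.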
+ +### Deploy SeaweedFS + +```sh +kubectl --namespace cortex apply -f seaweedfs.yaml --wait --timeout=5m +``` + +### Wait for SeaweedFS to be Ready + +```sh +kubectl --namespace cortex wait --for=condition=ready pod -l app=seaweedfs --timeout=5m +``` + +**What's happening?** +- SeaweedFS starts as a StatefulSet +- It provides an S3-compatible API on port 8333 +- Initial startup takes ~1 minute + +### Create S3 Buckets + +SeaweedFS needs buckets for Cortex data. First, port-forward to SeaweedFS: + +```sh +kubectl --namespace cortex port-forward svc/seaweedfs 8333 & +``` + +**Note:** This runs in the background. If the command fails with "port already in use", kill the existing process: +```sh +lsof -ti:8333 | xargs kill +``` + +Now create the buckets: + +```sh +for bucket in cortex-blocks cortex-ruler cortex-alertmanager; do + curl --aws-sigv4 "aws:amz:local:seaweedfs" \ + --user "any:any" \ + -X PUT http://localhost:8333/$bucket +done +``` + +**What are these buckets?** +- `cortex-blocks`: Stores TSDB blocks (metric data) +- `cortex-ruler`: Stores ruler configuration +- `cortex-alertmanager`: Stores alertmanager configuration + +**Verify buckets were created:** +```sh +curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" http://localhost:8333 +``` + +## Step 6: Deploy Cortex + +Deploy Cortex using the Helm chart with the provided values file: + +```sh +helm upgrade --install \ + --version=2.4.0 \ + --namespace cortex \ + cortex cortex-helm/cortex \ + -f cortex-values.yaml \ + --wait +``` + +**What's in `cortex-values.yaml`?** +- Configures blocks storage to use SeaweedFS +- Sets resource limits for each component +- Enables the ruler and alertmanager +- Configures the ingester to use 3 replicas + +**This takes ~5 minutes.** The `--wait` flag waits for all pods to be ready. 
+ +### Verify Cortex Components + +```sh +kubectl --namespace cortex get pods +``` + +You should see pods for each Cortex component: +- `cortex-distributor-*` +- `cortex-ingester-*` (multiple replicas) +- `cortex-querier-*` +- `cortex-query-frontend-*` +- `cortex-store-gateway-*` +- `cortex-compactor-*` +- `cortex-ruler-*` +- `cortex-nginx-*` (reverse proxy) + +**Check logs if pods aren't starting:** +```sh +kubectl --namespace cortex logs -l app.kubernetes.io/name=cortex -f --max-log-requests 20 +``` + +## Step 7: Deploy Prometheus + +Deploy Prometheus to scrape metrics from Kubernetes and send them to Cortex: + +```sh +helm upgrade --install \ + --version=25.20.1 \ + --namespace cortex \ + prometheus prometheus-community/prometheus \ + -f prometheus-values.yaml \ + --wait +``` + +**What's in `prometheus-values.yaml`?** +- Configures remote_write to send metrics to `cortex-distributor` +- Sets up scrape configs for Kubernetes services +- Adds the `X-Scope-OrgID: cortex` header for multi-tenancy + +**Verify Prometheus is running:** +```sh +kubectl --namespace cortex get pods -l app.kubernetes.io/name=prometheus +``` + +## Step 8: Deploy Grafana + +Deploy Grafana to visualize metrics from Cortex: + +```sh +helm upgrade --install \ + --version=7.3.9 \ + --namespace cortex \ + grafana grafana/grafana \ + -f grafana-values.yaml \ + --wait +``` + +**What's in `grafana-values.yaml`?** + +- Configures Cortex as a datasource (pointing to `cortex-nginx`) +- Enables sidecar for loading dashboards from ConfigMaps + +### Load Cortex Dashboards + +Create ConfigMaps for Cortex operational dashboards: + +```sh +for dashboard in $(ls dashboards); do + basename=$(basename -s .json $dashboard) + cmname=grafana-dashboard-$basename + kubectl create --namespace cortex configmap $cmname \ + --from-file=$dashboard=dashboards/$dashboard \ + --save-config=true -o yaml --dry-run=client | kubectl apply -f - + kubectl patch --namespace cortex configmap $cmname \ + -p 
'{"metadata":{"labels":{"grafana_dashboard":"1"}}}' +done +``` + +**What's happening?** + +- Each dashboard JSON is stored in a ConfigMap +- The `grafana_dashboard` label tells Grafana's sidecar to load it +- Dashboards appear automatically in Grafana + +## Step 9: Access the Services + +Port-forward to access Grafana: + +```sh +kubectl --namespace cortex port-forward deploy/grafana 3000 & +``` + +Open [Grafana](http://localhost:3000). + +**For other services, port-forward as needed:** + +```sh +# Prometheus +kubectl --namespace cortex port-forward deploy/prometheus-server 9090 & + +# Cortex Nginx (API gateway) +kubectl --namespace cortex port-forward svc/cortex-nginx 8080:80 & + +# Cortex Distributor (admin UI) +kubectl --namespace cortex port-forward deploy/cortex-distributor 9009:8080 & +``` + +**Tip:** Open a new terminal for each port-forward, or use `&` to run in the background. + +## Step 10: Verify Data Flow + +### Check Prometheus is Sending Metrics + +Port-forward to Prometheus: +```sh +kubectl --namespace cortex port-forward deploy/prometheus-server 9090 & +``` + +Open [Prometheus](http://localhost:9090) and: + +1. Go to Status → Targets - verify targets are UP +2. Go to Query - verify `prometheus_remote_storage_samples_total` is increasing + +### Query Metrics in Cortex + +Port-forward to the Cortex API: + +```sh +kubectl --namespace cortex port-forward svc/cortex-nginx 8080:80 & +``` + +Query metrics: + +```sh +curl -H "X-Scope-OrgID: cortex" \ + "http://localhost:8080/prometheus/api/v1/query?query=up" | jq +``` + +**Note:** The `X-Scope-OrgID` header specifies the tenant. Cortex is multi-tenant. + +### View Metrics in Grafana + +1. Open [Grafana](http://localhost:3000) +2. Go to [Explore](http://localhost:3000/explore) +3. Select the "Cortex" datasource +4. Run a query: `up` +5. You should see metrics from Prometheus! 
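Beyond instant queries, the same API family supports range queries (`/api/v1/query_range`), which is what Grafana uses to draw graphs. A sketch that builds a range query over the last hour — it assumes the `cortex-nginx` port-forward on `localhost:8080` from above, so the final `curl` is left commented out for you to run once the port-forward is active:

```shell
# Build a range-query URL covering the last hour at 60s resolution
end=$(date +%s)
start=$(( end - 3600 ))
url="http://localhost:8080/prometheus/api/v1/query_range?query=up&start=${start}&end=${end}&step=60"
echo "$url"

# With the port-forward running, execute it as the "cortex" tenant:
# curl -H "X-Scope-OrgID: cortex" "$url" | jq '.data.result | length'
```

The `X-Scope-OrgID` header is required here just as it is for instant queries; without it Cortex rejects the request.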
+ +### View Cortex Dashboards + +Navigate to [Dashboards](http://localhost:3000/dashboards?tag=cortex) to see: + +- **Cortex / Writes**: Monitor the distributor and ingesters +- **Cortex / Reads**: Monitor the querier and query-frontend +- **Cortex / Object Store**: Monitor block storage operations +- **Cortex / Compactor**: Monitor block compaction + +## Step 11: Configure Rules and Alerts (Optional) + +### Port-Forward to Cortex API + +If not already running: +```sh +kubectl --namespace cortex port-forward svc/cortex-nginx 8080:80 & +``` + +### Install cortextool (if needed) + +**macOS:** +```sh +brew install cortexproject/tap/cortex-tools +``` + +**Linux:** +```sh +wget https://github.com/cortexproject/cortex-tools/releases/download/v0.17.0/cortex-tools_0.17.0_linux_x86_64.tar.gz +tar -xzf cortex-tools_0.17.0_linux_x86_64.tar.gz +sudo mv cortextool /usr/local/bin/ +``` + +**Or use Docker:** +```sh +alias cortextool="docker run --rm --network host -v $(pwd):/workspace -w /workspace quay.io/cortexproject/cortex-tools:v0.17.0 cortextool" +``` + +### Load Recording and Alerting Rules + +```sh +cortextool rules sync rules.yaml alerts.yaml \ + --id cortex \ + --address http://localhost:8080 +``` + +**What's happening?** +- `rules.yaml` contains recording rules (pre-computed PromQL queries) +- `alerts.yaml` contains alerting rules (conditions that trigger alerts) +- Rules are stored in S3 and evaluated by the ruler component + +### Verify Rules Are Loaded + +View rules in Grafana: [Alerting → Alert rules](http://localhost:3000/alerting/list?view=list&search=datasource:Cortex) + +Or check via API: +```sh +curl -H "X-Scope-OrgID: cortex" \ + "http://localhost:8080/prometheus/api/v1/rules" | jq +``` + +## Step 12: Configure Alertmanager (Optional) + +Load Alertmanager configuration: + +```sh +cortextool alertmanager load alertmanager-config.yaml \ + --id cortex \ + --address http://localhost:8080 +``` + +**Verify in Grafana:** [Alerting → Notification 
policies](http://localhost:3000/alerting/notifications?search=&alertmanager=Cortex%20Alertmanager)
+
+## Explore and Experiment
+
+### Experiment 1: Scale Ingesters
+
+Cortex uses a hash ring to distribute time series across ingesters. Let's add more ingesters:
+
+```sh
+kubectl --namespace cortex scale deployment cortex-ingester --replicas=5
+```
+
+**Observe:**
+- New ingesters join the ring
+- Time series are rebalanced
+- View the ring: Port-forward to a distributor and visit [http://localhost:9009/ingester/ring](http://localhost:9009/ingester/ring)
+
+```sh
+kubectl --namespace cortex port-forward deploy/cortex-distributor 9009:8080
+```
+
+**Scale back down:**
+```sh
+kubectl --namespace cortex scale deployment cortex-ingester --replicas=3
+```
+
+### Experiment 2: Kill an Ingester
+
+Simulate a failure by deleting a single ingester pod. This demonstrates Cortex's resilience.
+
+**Step 1: List the ingester pods**
+```sh
+kubectl --namespace cortex get pods -l app.kubernetes.io/component=ingester
+```
+
+You should see multiple ingester pods (typically 3 replicas).
+
+**Step 2: Delete one specific ingester pod**
+```sh
+# Replace <pod-name> with an actual pod name from the list above
+kubectl --namespace cortex delete pod <pod-name> --force --grace-period=0
+```
+
+Example:
+```sh
+kubectl --namespace cortex delete pod cortex-ingester-76d95464d8 --force --grace-period=0
+```
+
+**Observe:**
+- Queries still work (data is replicated across the remaining ingesters)
+- Kubernetes automatically creates a replacement pod
+- The new ingester rejoins the ring
+- Check the ring status: Port-forward to a distributor and visit [http://localhost:9009/ingester/ring](http://localhost:9009/ingester/ring)
+
+**Why it works:** Cortex replicates data across multiple ingesters (default: 3 replicas), so losing one ingester doesn't cause data loss.
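The tolerance you just observed follows from quorum replication: the distributor requires a write (and the querier a read) to succeed on a majority of the replicas for each series. A back-of-the-envelope sketch — the `replication_factor` of 3 is an assumption matching the defaults used in this guide:

```shell
# Dynamo-style quorum: operations must succeed on (rf/2)+1 ingesters,
# so rf - quorum ingesters can be down without losing data or availability
rf=3
quorum=$(( rf / 2 + 1 ))
tolerated=$(( rf - quorum ))
echo "replication_factor=${rf} quorum=${quorum} tolerated_failures=${tolerated}"
# → replication_factor=3 quorum=2 tolerated_failures=1
```

This is why deleting exactly one ingester is safe, while deleting two at once could make some recent series temporarily unqueryable.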
+ +### Experiment 3: View Component Logs + +See what each component is doing: + +```sh +# Distributor logs (receives metrics from Prometheus) +kubectl --namespace cortex logs -l app.kubernetes.io/component=distributor -f + +# Ingester logs (stores metrics in memory) +kubectl --namespace cortex logs -l app.kubernetes.io/component=ingester -f + +# Querier logs (handles PromQL queries) +kubectl --namespace cortex logs -l app.kubernetes.io/component=querier -f + +# Compactor logs (compacts blocks in S3) +kubectl --namespace cortex logs -l app.kubernetes.io/component=compactor -f +``` + +### Experiment 4: Inspect the Ring + +Cortex uses a hash ring for consistent hashing. View the ingester ring: + +```sh +kubectl --namespace cortex port-forward deploy/cortex-distributor 9009:8080 & +``` + +Open [http://localhost:9009/ingester/ring](http://localhost:9009/ingester/ring) to see: +- Ingester tokens in the ring +- Health status of each ingester +- Token ownership ranges + +### Experiment 5: Check Block Storage + +View blocks in SeaweedFS using the S3 API: + +```sh +kubectl --namespace cortex port-forward svc/seaweedfs 8333 & +``` + +**List buckets:** +```sh +curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" http://localhost:8333 +``` + +**List objects in the cortex-blocks bucket:** +```sh +curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" http://localhost:8333/cortex-blocks?list-type=2 +``` + +You'll see: +- `cortex/` directory (tenant ID) +- Block directories (named by ULID) +- Each block contains `index`, `chunks/`, and `meta.json` + +**Tip:** You can also use the AWS CLI: +```sh +export AWS_ACCESS_KEY_ID=any +export AWS_SECRET_ACCESS_KEY=any +aws --endpoint-url=http://localhost:8333 s3 ls s3://cortex-blocks/cortex/ +``` + +## Configuration Files + +| File | Purpose | +|----------------------------|-----------------------------------------------------------| +| `seaweedfs.yaml` | Kubernetes manifest for SeaweedFS (S3) | +| `cortex-values.yaml` | Helm 
values for Cortex (component config, storage) |
| `prometheus-values.yaml` | Helm values for Prometheus (scrape configs, remote_write) |
| `grafana-values.yaml` | Helm values for Grafana (datasources, dashboards) |
| `rules.yaml` | Recording rules for the ruler |
| `alerts.yaml` | Alerting rules for the ruler |
| `alertmanager-config.yaml` | Alertmanager notification configuration |

**Want to customize?** Edit the Helm values files and upgrade:
```sh
helm upgrade --namespace cortex cortex cortex-helm/cortex -f cortex-values.yaml
```

## Troubleshooting

### Pods are pending or crashing
```sh
# Check pod status
kubectl --namespace cortex get pods

# Describe a pod to see events
kubectl --namespace cortex describe pod <pod-name>

# Check logs
kubectl --namespace cortex logs <pod-name>
```

### Ingesters won't start
- Check that SeaweedFS is running and buckets are created
- Verify the `cortex-values.yaml` has correct S3 config
- Check logs: `kubectl --namespace cortex logs -l app.kubernetes.io/component=ingester`

### No metrics in Grafana
1. Check Prometheus remote_write is configured: `kubectl --namespace cortex get configmap prometheus-server -o yaml`
2. Check distributor is receiving metrics: `kubectl --namespace cortex logs -l app.kubernetes.io/component=distributor`
3. Test Cortex API directly: `curl -H "X-Scope-OrgID: cortex" "http://localhost:8080/prometheus/api/v1/query?query=up"`

### Port-forward fails
- Check if the port is already in use: `lsof -i :<port>`
- Kill the existing process: `lsof -ti:<port> | xargs kill`

### Out of memory
Kind requires Docker to have enough resources:
- Docker Desktop → Settings → Resources → Memory (set to 8GB+)

### cortextool commands fail
- Make sure the port-forward is running: `kubectl --namespace cortex port-forward svc/cortex-nginx 8080:80 &`
- Verify Cortex is responding: `curl http://localhost:8080/ready`

## Clean Up

### Remove Port-Forwards

```sh
# Kill all kubectl port-forwards
killall kubectl

# Or kill specific port-forwards
lsof -ti:3000,8080,9009,9090 | xargs kill
```

### Delete the Kind Cluster

```sh
kind delete cluster --name cortex-demo
```

This removes all Kubernetes resources and the Kind cluster.

## Next Steps

Congratulations! You've successfully deployed Cortex in microservices mode on Kubernetes. Here's what to explore next:

1. **Production Deployment**: [Run Cortex on real Kubernetes →](../guides/running-cortex-on-kubernetes.md)
2. **Learn the Architecture**: [Understand each component →](../architecture.md)
3. **Blocks Storage Deep Dive**: [How blocks storage works →](../blocks-storage/_index.md)
4. **High Availability**: [Configure zone replication →](../guides/zone-replication.md)
5. **Monitoring Cortex**: [Capacity planning guide →](../guides/capacity-planning.md)
6.
**Secure Your Deployment**: [Set up authentication →](../guides/authentication-and-authorisation.md) + +## Comparison: Single Binary vs Microservices + +| Aspect | Single Binary | Microservices | +|-----------------------|--------------------------------|-----------------------------------| +| **Components** | All in one process | Separate pods per component | +| **Scaling** | Vertical (bigger instance) | Horizontal (more pods) | +| **Resource Usage** | Lower (1 process) | Higher (multiple processes) | +| **Complexity** | Simple | Complex (orchestration needed) | +| **Failure Isolation** | None (single point of failure) | Yes (component failures isolated) | +| **Use Case** | Dev, testing, learning | Production deployments | + +## Additional Resources + +- [Cortex Documentation](https://cortexmetrics.io/docs/) +- [Cortex Helm Chart](https://github.com/cortexproject/cortex-helm-chart) +- [Cortex Architecture](../architecture.md) +- [Running on Production Kubernetes](../guides/running-cortex-on-kubernetes.md) +- [CNCF Slack #cortex](https://cloud-native.slack.com/archives/cortex) diff --git a/docs/getting-started/single-binary.md b/docs/getting-started/single-binary.md new file mode 100644 index 00000000000..6321a1c238e --- /dev/null +++ b/docs/getting-started/single-binary.md @@ -0,0 +1,373 @@ +--- +title: "Single Binary Mode" +linkTitle: "Single Binary Mode" +weight: 2 +slug: "single-binary" +--- + +This guide will help you get Cortex running in single-binary mode using Docker Compose. In this mode, all Cortex components run in a single process, making it perfect for learning, development, and testing. 
**Time to complete:** ~15 minutes

## What You'll Learn

- How to run Cortex with Docker Compose
- How to send metrics from Prometheus to Cortex using remote_write
- How to query metrics stored in Cortex using Grafana
- How to configure recording rules and alerting rules
- How to set up the Cortex Alertmanager

## Prerequisites

### Software Requirements

- [Docker](https://docs.docker.com/get-docker/) (v20.10+)
- [Docker Compose](https://docs.docker.com/compose/install/) (v2.30+)

### System Requirements

- 4GB RAM minimum
- 10GB disk space
- Linux, macOS, or Windows with WSL2

### Optional Tools

- [cortextool](https://github.com/cortexproject/cortex-tools/) - For managing rules and alerts (we'll use Docker to run this)

## Architecture

This setup creates the following services:

```
┌─────────────┐  remote_write  ┌─────────────┐  stores blocks  ┌─────────────┐
│ Prometheus  │ ─────────────> │   Cortex    │ ──────────────> │  SeaweedFS  │
│             │                │  (single)   │                 │    (S3)     │
└─────────────┘                └─────────────┘                 └─────────────┘
                                      ▲
                                      │ queries
                               ┌─────────────┐
                               │  Grafana /  │
                               │   Perses    │
                               └─────────────┘
```

**Components:**
- **SeaweedFS**: S3-compatible object storage for storing metric blocks
- **Cortex**: Single-process Cortex instance with all components (distributor, ingester, querier, compactor, etc.)
- **Prometheus**: Scrapes its own metrics and sends them to Cortex
- **Grafana**: Visualizes metrics stored in Cortex
- **Perses**: Modern dashboard alternative (optional)

## Step 1: Clone the Repository

```sh
git clone https://github.com/cortexproject/cortex.git
cd cortex/docs/getting-started
```

The `getting-started` directory contains all the configuration files needed for this guide.

## Step 2: Start the Services

```sh
docker compose up -d
```

This command starts all services in the background. Docker Compose will:
1. Pull required images (first time only)
2. Start SeaweedFS (S3-compatible storage)
3.
Initialize S3 buckets +4. Start Cortex +5. Start Prometheus (configured to send metrics to Cortex) +6. Start Grafana (pre-configured with Cortex datasource) + +**What's happening?** Check the logs: +```sh +# View all logs +docker compose logs -f + +# View Cortex logs only +docker compose logs -f cortex +``` + +## Step 3: Verify Services Are Running + +After ~30 seconds, all services should be healthy. Verify by checking: + +```sh +docker compose ps +``` + +You should see all services with status "Up" or "healthy". + +### Access the UIs + +Open these URLs in your browser: + +- **Cortex**: [http://localhost:9009](http://localhost:9009) - Admin interface and ring status +- **Prometheus**: [http://localhost:9090](http://localhost:9090) - Prometheus UI +- **Grafana**: [http://localhost:3000](http://localhost:3000) - Dashboards (no auth needed) +- **SeaweedFS S3 API**: http://localhost:8333 - S3-compatible API (use curl with `--user any:any`) + +## Step 4: Verify Data Flow + +Let's verify that metrics are flowing from Prometheus → Cortex → Grafana. + +### Check Prometheus is Sending Metrics + +1. Open [Prometheus](http://localhost:9090) +2. Go to Status → Targets +3. Verify the targets are UP +4. Go to Query - you should see `prometheus_remote_storage_samples_total` increasing + +### Query Metrics in Cortex + +Test that Cortex is receiving metrics: + +```sh +curl -H "X-Scope-OrgID: cortex" "http://localhost:9009/prometheus/api/v1/query?query=up" | jq +``` + +You should see JSON output with metrics data. + +**Note:** The `X-Scope-OrgID` header specifies which tenant's data to query. Cortex is multi-tenant by default. Prometheus automatically adds this header when writing metrics via remote_write. + +### View Metrics in Grafana + +1. Open [Grafana](http://localhost:3000) (login: `admin` / `admin`) +2. Go to [Explore](http://localhost:3000/explore) +3. Select the "Cortex" datasource +4. Run a query: `up` +5. You should see metrics from Prometheus! 
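The same tenant-scoped query can be issued from any HTTP client, not just curl. A minimal Python sketch (the helper function is ours; the endpoint and `X-Scope-OrgID` header are exactly the ones used in the curl example above):

```python
# Sketch: building a tenant-scoped instant query against Cortex's
# Prometheus-compatible HTTP API. Only illustrative; the endpoint and
# header mirror the curl example in this guide.
import urllib.parse
import urllib.request

def build_query_request(base_url: str, promql: str, tenant: str) -> urllib.request.Request:
    """Build an instant-query request scoped to one tenant."""
    url = f"{base_url}/prometheus/api/v1/query?" + urllib.parse.urlencode({"query": promql})
    return urllib.request.Request(url, headers={"X-Scope-OrgID": tenant})

req = build_query_request("http://localhost:9009", "up", "cortex")
print(req.full_url)  # the same URL the curl example hits
# With the stack running, send it with:
#   body = urllib.request.urlopen(req).read()
```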
+ +### View Cortex Dashboards + +Pre-built dashboards are available at [Dashboards](http://localhost:3000/dashboards?tag=cortex): + +- **Cortex / Writes**: Monitor metric ingestion +- **Cortex / Reads**: Monitor query performance +- **Cortex / Object Store**: Monitor block storage + +## Step 5: Configure Recording and Alerting Rules (Optional) + +Cortex can evaluate PromQL recording rules and alerting rules, similar to Prometheus. This is optional but demonstrates an important Cortex feature. + +**What are these?** +- **Recording rules**: Pre-compute expensive queries and store results as new metrics +- **Alerting rules**: Define conditions that trigger alerts + +The repository includes example rules in `rules.yaml` and `alerts.yaml`. + +### Load Rules into Cortex + +**For Linux users:** +```sh +docker run --network host \ + -v "$(pwd):/workspace" -w /workspace \ + quay.io/cortexproject/cortex-tools:v0.17.0 \ + rules sync rules.yaml alerts.yaml --id cortex --address http://localhost:9009 +``` + +**For macOS/Windows users:** +```sh +docker run --network cortex-docs-getting-started_default \ + -v "$(pwd):/workspace" -w /workspace \ + quay.io/cortexproject/cortex-tools:v0.17.0 \ + rules sync rules.yaml alerts.yaml --id cortex --address http://cortex:9009 +``` + +**Note:** The `--id cortex` flag specifies the tenant ID. Cortex is multi-tenant, so rules are namespaced by tenant. + +### Verify Rules Are Loaded + +View rules in Grafana: [Alerting → Alert rules](http://localhost:3000/alerting/list?view=list&search=datasource:Cortex) + +Or check via API: +```sh +curl -H "X-Scope-OrgID: cortex" "http://localhost:9009/prometheus/api/v1/rules" | jq +``` + +## Step 6: Configure Alertmanager (Optional) + +Cortex includes a multi-tenant Alertmanager that receives alerts from the ruler. 
### Load Alertmanager Configuration

**For Linux users:**
```sh
docker run --network host \
  -v "$(pwd):/workspace" -w /workspace \
  quay.io/cortexproject/cortex-tools:v0.17.0 \
  alertmanager load alertmanager-config.yaml --id cortex --address http://localhost:9009
```

**For macOS/Windows users:**
```sh
docker run --network cortex-docs-getting-started_default \
  -v "$(pwd):/workspace" -w /workspace \
  quay.io/cortexproject/cortex-tools:v0.17.0 \
  alertmanager load alertmanager-config.yaml --id cortex --address http://cortex:9009
```

### View Alertmanager in Grafana

Configure Alertmanager notification policies in Grafana: [Alerting → Notification policies](http://localhost:3000/alerting/notifications?search=&alertmanager=Cortex%20Alertmanager)

## Explore and Experiment

Now that everything is running, try these experiments to learn how Cortex works:

### Experiment 1: Stop the Ingester

Cortex runs all components in one process, so stopping Cortex simulates an ingester failure.

```sh
docker compose stop cortex
```

**Observe:**
- Prometheus continues running and queues samples
- Grafana queries fail (no ingesters available)
- Metrics are NOT lost - Prometheus will retry

**Restart Cortex:**
```sh
docker compose start cortex
```

**Result:** Prometheus catches up by sending queued samples. Check the Cortex / Writes dashboard to see the backlog being processed.

### Experiment 2: Query Old vs Recent Data

Cortex stores recent data (last ~2 hours) in memory and older data in object storage (S3).

**Query recent metrics (from ingester memory):**
```sh
curl -H "X-Scope-OrgID: cortex" "http://localhost:9009/prometheus/api/v1/query?query=up" | jq
```

**After 2+ hours, query old metrics (from S3 blocks):**
```sh
curl -g -H "X-Scope-OrgID: cortex" "http://localhost:9009/prometheus/api/v1/query?query=up[24h]" | jq
```

(The `-g` flag stops curl from interpreting the `[24h]` brackets as a URL glob pattern.)

**Observe:** Both queries work! Cortex seamlessly queries both sources.
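That ~2-hour split comes from the TSDB block range: ingesters cut and ship a block to object storage roughly every two hours. A small Python sketch of the alignment (assuming the 2h default):

```python
# Sketch of the 2-hour block boundary (the Prometheus TSDB default,
# which Cortex's blocks storage inherits). Given a sample timestamp in
# milliseconds, compute the aligned block window it will end up in.
BLOCK_RANGE_MS = 2 * 60 * 60 * 1000  # 2h, in milliseconds

def block_window(ts_ms: int) -> tuple[int, int]:
    """Return the (min_time, max_time) of the aligned 2h block for ts_ms."""
    start = ts_ms - ts_ms % BLOCK_RANGE_MS
    return start, start + BLOCK_RANGE_MS

print(block_window(0))          # (0, 7200000)
print(block_window(7_199_999))  # still inside the first window
print(block_window(7_200_000))  # (7200000, 14400000)
```

Samples whose window hasn't been shipped yet are answered from ingester memory; older windows are read back from S3 blocks.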
+ +### Experiment 3: Compare Prometheus vs Cortex + +**In Prometheus:** [Query `up`](http://localhost:9090/graph?g0.expr=up) + +**In Grafana (Cortex datasource):** [Query `up`](http://localhost:3000/explore) + +**Are they the same?** Initially yes, but after Prometheus sends data to Cortex via remote_write, the data diverges: +- Prometheus has local storage (limited retention) +- Cortex has long-term storage in S3 + +### Experiment 4: Explore the Ring + +Cortex uses a hash ring for consistent hashing of time series to ingesters. + +View the ring status: [http://localhost:9009/ring](http://localhost:9009/ring) + +In single-binary mode, you'll see one ingester. In microservices mode, you'd see multiple ingesters. + +### Experiment 5: Inspect Object Storage + +SeaweedFS stores Cortex blocks. You can inspect them using the S3 API: + +**List buckets:** +```sh +curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" http://localhost:8333 +``` + +**List objects in the cortex-blocks bucket:** +```sh +curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" http://localhost:8333/cortex-blocks?list-type=2 +``` + +You'll see: +- `cortex/` directory (tenant ID) +- Block directories named by ULID (e.g., `01J8KRQ7M8...`) +- Each block contains `index`, `chunks/`, and `meta.json` + +**Tip:** You can also use the AWS CLI with SeaweedFS: +```sh +export AWS_ACCESS_KEY_ID=any +export AWS_SECRET_ACCESS_KEY=any +aws --endpoint-url=http://localhost:8333 s3 ls s3://cortex-blocks/ +``` + +## Configuration Files + +This setup uses several configuration files. 
Here's what each does:

| File | Purpose |
|----------------------------------|---------------------------------------------------------------|
| `docker-compose.yaml` | Defines all services (Cortex, Prometheus, Grafana, SeaweedFS) |
| `cortex-config.yaml` | Cortex configuration (storage, limits, components) |
| `prometheus-config.yaml` | Prometheus configuration with remote_write to Cortex |
| `grafana-datasource-docker.yaml` | Grafana datasource pointing to Cortex |
| `rules.yaml` | Example recording rules |
| `alerts.yaml` | Example alerting rules |
| `alertmanager-config.yaml` | Alertmanager configuration |

**Want to customize?** Edit these files and restart services:
```sh
docker compose restart cortex
```

## Troubleshooting

### Services won't start
```sh
# Check logs
docker compose logs

# Check port conflicts
lsof -i :9009  # Cortex
lsof -i :9090  # Prometheus
lsof -i :3000  # Grafana
```

### No metrics in Grafana
1. Check Prometheus is sending metrics: [Status → Targets](http://localhost:9090/targets)
2. Check Cortex is receiving metrics: `curl -H "X-Scope-OrgID: cortex" "http://localhost:9009/prometheus/api/v1/query?query=up"`
3. Check Grafana datasource: Settings → Data sources → Cortex → Test

### cortextool fails on macOS/Windows
The `--network host` flag doesn't work on macOS/Windows. Use the Docker network name instead:
```sh
docker run --network cortex-docs-getting-started_default ...
```

### Out of memory errors
Increase Docker's memory limit to 4GB or more:
- Docker Desktop → Settings → Resources → Memory

## Clean Up

When you're done, stop and remove all services:

```sh
docker compose down -v
```

The `-v` flag removes volumes (stored data). Omit it to keep data between runs.

## Next Steps

Congratulations! You've successfully run Cortex in single-binary mode. Here's what to explore next:

1. **Try Microservices Mode**: [Get started with microservices mode →](microservices.md)
2.
**Learn the Architecture**: [Understand Cortex's design →](../architecture.md) +3. **Production Deployment**: [Run Cortex on Kubernetes →](../guides/running-cortex-on-kubernetes.md) +4. **Deep Dive into Blocks Storage**: [Learn about blocks storage →](../blocks-storage/_index.md) +5. **Configure Multi-tenancy**: [Set up authentication →](../guides/authentication-and-authorisation.md) + +## Additional Resources + +- [Cortex Documentation](https://cortexmetrics.io/docs/) +- [Cortex Helm Chart](https://github.com/cortexproject/cortex-helm-chart) +- [cortextool CLI](https://github.com/cortexproject/cortex-tools) +- [CNCF Slack #cortex](https://cloud-native.slack.com/archives/cortex)