diff --git a/deploy-manage/autoscaling/autoscaling-deciders.md b/deploy-manage/autoscaling/autoscaling-deciders.md index 1c20443ad..eda553131 100644 --- a/deploy-manage/autoscaling/autoscaling-deciders.md +++ b/deploy-manage/autoscaling/autoscaling-deciders.md @@ -122,7 +122,7 @@ The [autoscaling](../../deploy-manage/autoscaling.md) {{ml}} decider (`ml`) calc The {{ml}} decider is enabled for policies governing `ml` nodes. ::::{note} -For {{ml}} jobs to open when the cluster is not appropriately scaled, set `xpack.ml.max_lazy_ml_nodes` to the largest number of possible {{ml}} nodes (refer to [Advanced machine learning settings](elasticsearch://reference/elasticsearch/configuration-reference/machine-learning-settings.md#advanced-ml-settings) for more information). In {{ess}}, this is automatically set. +For {{ml}} jobs to open when the cluster is not appropriately scaled, set `xpack.ml.max_lazy_ml_nodes` to the largest number of possible {{ml}} nodes (refer to [Advanced machine learning settings](elasticsearch://reference/elasticsearch/configuration-reference/machine-learning-settings.md#advanced-ml-settings) for more information). In {{ech}}, this is automatically set. 
:::: diff --git a/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md b/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md index 1a40fb62a..1bb1c4bcd 100644 --- a/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md +++ b/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md @@ -220,9 +220,8 @@ Note the following requirements when you run this API request: $$$ece-autoscaling-api-example-requirements-table$$$ -| | | | | -| --- | --- | --- | --- | | | `size` | `autoscaling_min` | `autoscaling_max` | +| --- | --- | --- | --- | | data tier | ✓ | ✕ | ✓ | | machine learning node | ✕ | ✓ | ✓ | | coordinating and master nodes | ✓ | ✕ | ✕ | diff --git a/deploy-manage/deploy/elastic-cloud.md b/deploy-manage/deploy/elastic-cloud.md index c5e1ccfc0..1052fd463 100644 --- a/deploy-manage/deploy/elastic-cloud.md +++ b/deploy-manage/deploy/elastic-cloud.md @@ -4,7 +4,7 @@ applies_to: deployment: ess: ga mapped_pages: - - https://www.elastic.co/guide/en/serverless/current/intro.html#general-what-is-serverless-elastic-differences-between-serverless-projects-and-hosted-deployments-on-ecloud + - https://www.elastic.co/guide/en/serverless/current/intro.html --- # Elastic Cloud [intro] @@ -38,9 +38,8 @@ For more information, refer to [](/deploy-manage/cloud-organization.md). You can have multiple hosted deployments and serverless projects in the same {{ecloud}} organization, and each deployment type has its own specificities. -| | | | -| --- | --- | --- | | Option | Serverless | Hosted | +| --- | --- | --- | | **Cluster management** | Fully managed by Elastic. | You provision and manage your hosted clusters. Shared responsibility with Elastic. | | **Scaling** | Autoscales out of the box. | Manual scaling or autoscaling available for you to enable. | | **Upgrades** | Automatically performed by Elastic. | You choose when to upgrade. 
| diff --git a/deploy-manage/deploy/elastic-cloud/azure-marketplace-pricing.md b/deploy-manage/deploy/elastic-cloud/azure-marketplace-pricing.md index d6d381f1d..398e00124 100644 --- a/deploy-manage/deploy/elastic-cloud/azure-marketplace-pricing.md +++ b/deploy-manage/deploy/elastic-cloud/azure-marketplace-pricing.md @@ -23,9 +23,8 @@ The pricing plan update enables us to align with market trends and adapt to chan These pricing changes will apply to customers who are currently paying for Azure Marketplace services in non-USD currencies. If you are paying in USD, your pricing and billing will remain unchanged. -| | | | -| --- | --- | --- | | Currency | Price | Elastic Billing Units for Azure† | +| --- | --- | --- | | USD | 1.00 | $0.10 per 1000 units | | AUD | 1.60 | $0.16 per 1000 units | | BRL | 5.40 | R$0.54 per 1000 units | diff --git a/deploy-manage/monitor/cloud-health-perf.md b/deploy-manage/monitor/cloud-health-perf.md index 423103d2d..7fb7cfff1 100644 --- a/deploy-manage/monitor/cloud-health-perf.md +++ b/deploy-manage/monitor/cloud-health-perf.md @@ -74,7 +74,7 @@ deployment: ess: ``` -{{ess}} deployments offer an additional **Performance** page to get further information about your cluster performance. +{{ech}} deployments offer an additional **Performance** page to get further information about your cluster performance. If you observe issues on search and ingest operations in terms of increased latency or throughput for queries, these might not be directly reported on the **Health** page, unless they are related to shard health or master node availability. 
diff --git a/deploy-manage/monitor/orchestrators/ece-proxy-log-fields.md b/deploy-manage/monitor/orchestrators/ece-proxy-log-fields.md index 4c9619274..a752ca8d9 100644 --- a/deploy-manage/monitor/orchestrators/ece-proxy-log-fields.md +++ b/deploy-manage/monitor/orchestrators/ece-proxy-log-fields.md @@ -13,9 +13,8 @@ These fields *are* subject to change, though the vast majority of them are gener :::: -| | | -| --- | --- | | Field | Description | +| --- | --- | | `proxy_ip` | the IP on the connection, i.e. a proxy IP if the request has been proxied | | `request_end` | the time the request was returned in ms since unix epoch | | `status_code` | the HTTP status returned to the client | diff --git a/deploy-manage/reference-architectures.md b/deploy-manage/reference-architectures.md index c20c65ae3..56f3764b0 100644 --- a/deploy-manage/reference-architectures.md +++ b/deploy-manage/reference-architectures.md @@ -25,9 +25,8 @@ These reference architectures are recommendations and should be adapted to fit y ## Architectures [reference-architectures-time-series] -| | | +| Architecture | When to use | | --- | --- | -| **Architecture** | **When to use** | | [*Hot/Frozen - High Availability*](/deploy-manage/reference-architectures/hotfrozen-high-availability.md)
A high availability architecture that is cost optimized for large time-series datasets. | * Have a requirement for cost effective long term data storage (many months or years).
* Provide insights and alerts using logs, metrics, traces, or various event types to ensure optimal performance and quick issue resolution for applications.
* Apply Machine Learning and Search AI to assist in dealing with the large amount of data.
* Deploy an architecture model that allows for maximum flexibility between storage cost and performance.
| | Additional architectures are on the way.
Stay tuned for updates. | | diff --git a/deploy-manage/reference-architectures/hotfrozen-high-availability.md b/deploy-manage/reference-architectures/hotfrozen-high-availability.md index 3c3aa6105..48c4c7086 100644 --- a/deploy-manage/reference-architectures/hotfrozen-high-availability.md +++ b/deploy-manage/reference-architectures/hotfrozen-high-availability.md @@ -62,9 +62,8 @@ In the links provided above, Elastic has performance tested hardware for each of This table shows our specific recommendations for nodes in a Hot/Frozen architecture. -| | | | | | +| Type | AWS | Azure | GCP | Physical | | --- | --- | --- | --- | --- | -| **Type** | **AWS** | **Azure** | **GCP** | **Physical** | | ![Hot data node](../../images/reference-architectures-hot.png "") | c6gd | f32sv2 | N2 | 16-32 vCPU
64 GB RAM
2-6 TB NVMe SSD | | ![Frozen data node](../../images/reference-architectures-frozen.png "") | i3en | e8dsv4 | N2 | 8 vCPU
64 GB RAM
6-20+ TB NVMe SSD
Depending on days cached | | ![Machine learning node](../../images/reference-architectures-machine-learning.png "") | m6gd | f16sv2 | N2 | 16 vCPU
64 GB RAM
256 GB SSD | diff --git a/deploy-manage/security/elastic-cloud-static-ips.md b/deploy-manage/security/elastic-cloud-static-ips.md index 814d17006..44beeaf63 100644 --- a/deploy-manage/security/elastic-cloud-static-ips.md +++ b/deploy-manage/security/elastic-cloud-static-ips.md @@ -39,9 +39,8 @@ Not suitable usage of egress static IPs to introduce network controls: ## Supported Regions [ec-regions] ::::{dropdown} AWS -| | | | +| Region | Ingress Static IPs | Egress Static IPs | | --- | --- | --- | -| **Region** | **Ingress Static IPs** | **Egress Static IPs** | | aws-af-south-1 | No | Yes | | aws-ap-east-1 | No | Yes | | aws-ap-northeast-1 | No | Yes | @@ -67,9 +66,8 @@ Not suitable usage of egress static IPs to introduce network controls: ::::{dropdown} Azure -| | | | +| Region | Ingress Static IPs | Egress Static IPs | | --- | --- | --- | -| **Region** | **Ingress Static IPs** | **Egress Static IPs** | | azure-australiaeast | Yes | Yes | | azure-brazilsouth | Yes | Yes | | azure-canadacentral | Yes | Yes | @@ -91,9 +89,8 @@ Not suitable usage of egress static IPs to introduce network controls: ::::{dropdown} GCP -| | | | +| Region | Ingress Static IPs | Egress Static IPs | | --- | --- | --- | -| **Region** | **Ingress Static IPs** | **Egress Static IPs** | | gcp-asia-east1 | Yes | No | | gcp-asia-northeast1 | Yes | No | | gcp-asia-northeast3 | Yes | No | diff --git a/deploy-manage/security/fips-140-2.md b/deploy-manage/security/fips-140-2.md index 8cac69449..e08dd15d5 100644 --- a/deploy-manage/security/fips-140-2.md +++ b/deploy-manage/security/fips-140-2.md @@ -111,9 +111,8 @@ FIPS 140-2 compliance dictates that the length of the public keys used for TLS m $$$comparable-key-strength$$$ -| | | | -| --- | --- | --- | | Symmetric Key Algorithm | RSA key Length | ECC key length | +| --- | --- | --- | | `3DES` | 2048 | 224-255 | | `AES-128` | 3072 | 256-383 | | `AES-256` | 15360 | 512+ | diff --git 
a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md index c0b44658d..471599ab5 100644 --- a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md +++ b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md @@ -10,9 +10,8 @@ mapped_pages: Elastic Cloud Enterprise 2.4.0 and later defaults to minimum TLS version 1.2 with a modern set of cipher suites. -| | | | +| Elastic Cloud Enterprise version | Default minimum TLS version | Default allowed cipher suites | | --- | --- | --- | -| **Elastic Cloud Enterprise version** | **Default minimum TLS version** | **Default allowed cipher suites** | | 2.4.0 and later | TLS 1.2 | `ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256` | | 2.3.1 and earlier | TLS 1.0 | `ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA` | diff --git a/deploy-manage/users-roles/cloud-organization/user-roles.md b/deploy-manage/users-roles/cloud-organization/user-roles.md index 534d3a600..c83200a19 100644 --- a/deploy-manage/users-roles/cloud-organization/user-roles.md +++ b/deploy-manage/users-roles/cloud-organization/user-roles.md @@ -73,7 +73,7 @@ There are two ways for a user to access {{kib}} instances of an {{ech}} deployment The following table shows the default mapping: -| Cloud role | 
Cloud API `role_id` | Stack role | +| Cloud role | Cloud API `role_id` | Stack role | | --- | --- | --- | | Organization owner | `organization-admin` | superuser | | Billing admin | `billing-admin` | none | diff --git a/docset.yml b/docset.yml index df682ea94..8eec95c32 100644 --- a/docset.yml +++ b/docset.yml @@ -75,60 +75,28 @@ toc: - hidden: 404.md subs: - filebeat-ref: "https://www.elastic.co/guide/en/beats/filebeat/current" - defguide: "https://www.elastic.co/guide/en/elasticsearch/guide/2.x" - security-guide-all: "https://www.elastic.co/guide/en/security" - sql-odbc: "https://www.elastic.co/guide/en/elasticsearch/sql-odbc/current" - ml-docs: "https://www.elastic.co/guide/en/machine-learning/current" - eland-docs: "https://www.elastic.co/guide/en/elasticsearch/client/eland/current" subscriptions: "https://www.elastic.co/subscriptions" - extendtrial: "https://www.elastic.co/trialextension" ecloud: "Elastic Cloud" - ess: "Elasticsearch Service" ech: "Elastic Cloud Hosted" ece: "Elastic Cloud Enterprise" eck: "Elastic Cloud on Kubernetes" serverless-full: "Elastic Cloud Serverless" serverless-short: "Serverless" es-serverless: "Elasticsearch Serverless" - es3: "Elasticsearch Serverless" obs-serverless: "Elastic Observability Serverless" sec-serverless: "Elastic Security Serverless" - serverless-docs: "https://docs.elastic.co/serverless" - cloud: "https://www.elastic.co/guide/en/cloud/current" - ess-utm-params: "?page=docs&placement=docs-body" - ess-baymax: "?page=docs&placement=docs-body" - ess-trial: "https://cloud.elastic.co/registration?page=docs&placement=docs-body" - ess-product: "https://www.elastic.co/cloud/elasticsearch-service?page=docs&placement=docs-body" - ess-console: "https://cloud.elastic.co?page=docs&placement=docs-body" - ess-deployments: "https://cloud.elastic.co/deployments?page=docs&placement=docs-body" - ess-leadin: "You can run Elasticsearch on your own hardware or use our hosted Elasticsearch Service that is available on AWS, GCP, and 
Azure. https://cloud.elastic.co/registration{ess-utm-params}[Try the Elasticsearch Service for free]." ess-leadin-short: "Our hosted Elasticsearch Service is available on AWS, GCP, and Azure, and you can https://cloud.elastic.co/registration{ess-utm-params}[try it for free]." - ess-icon: "image:https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg[link=\"https://cloud.elastic.co/registration{ess-utm-params}\", title=\"Supported on Elasticsearch Service\"]" - ece-icon: "image:https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud_ece.svg[link=\"https://cloud.elastic.co/registration{ess-utm-params}\", title=\"Supported on Elastic Cloud Enterprise\"]" - cloud-only: "This feature is designed for indirect use by https://cloud.elastic.co/registration{ess-utm-params}[Elasticsearch Service], https://www.elastic.co/guide/en/cloud-enterprise/{ece-version-link}[Elastic Cloud Enterprise], and https://www.elastic.co/guide/en/cloud-on-k8s/current[Elastic Cloud on Kubernetes]. Direct use is not supported." - ess-setting-change: "image:https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg[link=\"{ess-trial}\", title=\"Supported on {ess}\"] indicates a change to a supported https://www.elastic.co/guide/en/cloud/current/ec-add-user-settings.html[user setting] for Elasticsearch Service." - ess-skip-section: "If you use Elasticsearch Service, skip this section. Elasticsearch Service handles these changes for you." - api-cloud: "https://www.elastic.co/docs/api/doc/cloud" - api-ece: "https://www.elastic.co/docs/api/doc/cloud-enterprise" - api-kibana-serverless: "https://www.elastic.co/docs/api/doc/serverless" - es-feature-flag: "This feature is in development and not yet available for use. This documentation is provided for informational purposes only." 
apm-app: "APM app" uptime-app: "Uptime app" synthetics-app: "Synthetics app" logs-app: "Logs app" - metrics-app: "Metrics app" infrastructure-app: "Infrastructure app" - siem-app: "SIEM app" security-app: "Elastic Security app" ml-app: "Machine Learning" dev-tools-app: "Dev Tools" - ingest-manager-app: "Ingest Manager" stack-manage-app: "Stack Management" stack-monitor-app: "Stack Monitoring" - alerts-ui: "Alerts and Actions" rules-ui: "Rules" - rac-ui: "Rules and Connectors" connectors-ui: "Connectors" connectors-feature: "Actions and Connectors" stack-rules-feature: "Stack Rules" @@ -137,46 +105,30 @@ subs: ems-init: "EMS" hosted-ems: "Elastic Maps Server" ipm-app: "Index Pattern Management" - ingest-pipelines: "ingest pipelines" ingest-pipelines-app: "Ingest Pipelines" - ingest-pipelines-cap: "Ingest pipelines" - ls-pipelines: "Logstash pipelines" ls-pipelines-app: "Logstash Pipelines" - maint-windows: "maintenance windows" maint-windows-app: "Maintenance Windows" maint-windows-cap: "Maintenance windows" custom-roles-app: "Custom Roles" data-source: "data view" data-sources: "data views" - data-source-caps: "Data View" data-sources-caps: "Data Views" data-source-cap: "Data view" data-sources-cap: "Data views" project-settings: "Project settings" manage-app: "Management" index-manage-app: "Index Management" - data-views-app: "Data Views" rules-app: "Rules" saved-objects-app: "Saved Objects" - tags-app: "Tags" api-keys-app: "API keys" - transforms-app: "Transforms" connectors-app: "Connectors" - files-app: "Files" reports-app: "Reports" - maps-app: "Maps" - alerts-app: "Alerts" - crawler: "Enterprise Search web crawler" - ents: "Enterprise Search" app-search-crawler: "App Search web crawler" agent: "Elastic Agent" agents: "Elastic Agents" fleet: "Fleet" fleet-server: "Fleet Server" integrations-server: "Integrations Server" - ingest-manager: "Ingest Manager" - ingest-management: "ingest management" - package-manager: "Elastic Package Manager" integrations: 
"Integrations" package-registry: "Elastic Package Registry" artifact-registry: "Elastic Artifact Registry" @@ -185,8 +137,6 @@ subs: xpack: "X-Pack" es: "Elasticsearch" kib: "Kibana" - esms: "Elastic Stack Monitoring Service" - esms-init: "ESMS" ls: "Logstash" beats: "Beats" auditbeat: "Auditbeat" @@ -195,20 +145,14 @@ subs: metricbeat: "Metricbeat" packetbeat: "Packetbeat" winlogbeat: "Winlogbeat" - functionbeat: "Functionbeat" - journalbeat: "Journalbeat" - es-sql: "Elasticsearch SQL" esql: "ES|QL" elastic-agent: "Elastic Agent" k8s: "Kubernetes" - log-driver-long: "Elastic Logging Plugin for Docker" - security: "X-Pack security" security-features: "security features" operator-feature: "operator privileges feature" es-security-features: "Elasticsearch security features" stack-security-features: "Elastic Stack security features" endpoint-sec: "Endpoint Security" - endpoint-cloud-sec: "Endpoint and Cloud Security" elastic-defend: "Elastic Defend" elastic-sec: "Elastic Security" elastic-endpoint: "Elastic Endpoint" @@ -222,8 +166,6 @@ subs: webhook: "Webhook" webhook-cm: "Webhook - Case Management" opsgenie: "Opsgenie" - bedrock: "Amazon Bedrock" - gemini: "Google Gemini" hive: "TheHive" monitoring: "X-Pack monitoring" monitor-features: "monitoring features" @@ -232,10 +174,8 @@ subs: alert-features: "alerting features" reporting: "X-Pack reporting" report-features: "reporting features" - graph: "X-Pack graph" graph-features: "graph analytics features" searchprofiler: "Search Profiler" - xpackml: "X-Pack machine learning" ml: "machine learning" ml-cap: "Machine learning" ml-init: "ML" @@ -250,9 +190,6 @@ subs: ilm: "index lifecycle management" ilm-cap: "Index lifecycle management" ilm-init: "ILM" - dlm: "data lifecycle management" - dlm-cap: "Data lifecycle management" - dlm-init: "DLM" search-snap: "searchable snapshot" search-snaps: "searchable snapshots" search-snaps-cap: "Searchable snapshots" @@ -260,18 +197,12 @@ subs: slm-cap: "Snapshot lifecycle management" 
slm-init: "SLM" rollup-features: "data rollup features" - ipm: "index pattern management" ipm-cap: "Index pattern" rollup: "rollup" - rollup-cap: "Rollup" - rollups: "rollups" - rollups-cap: "Rollups" rollup-job: "rollup job" rollup-jobs: "rollup jobs" - rollup-jobs-cap: "Rollup jobs" dfeed: "datafeed" dfeeds: "datafeeds" - dfeed-cap: "Datafeed" dfeeds-cap: "Datafeeds" ml-jobs: "machine learning jobs" ml-jobs-cap: "Machine learning jobs" @@ -282,33 +213,18 @@ subs: anomaly-jobs-cap: "Anomaly detection jobs" dataframe: "data frame" dataframes: "data frames" - dataframe-cap: "Data frame" - dataframes-cap: "Data frames" watcher-transform: "payload transform" watcher-transforms: "payload transforms" - watcher-transform-cap: "Payload transform" watcher-transforms-cap: "Payload transforms" transform: "transform" transforms: "transforms" transform-cap: "Transform" transforms-cap: "Transforms" - dataframe-transform: "transform" - dataframe-transform-cap: "Transform" - dataframe-transforms: "transforms" - dataframe-transforms-cap: "Transforms" dfanalytics-cap: "Data frame analytics" dfanalytics: "data frame analytics" - dataframe-analytics-config: "data frame analytics analytics config" dfanalytics-job: "data frame analytics analytics job" dfanalytics-jobs: "data frame analytics analytics jobs" dfanalytics-jobs-cap: "Data frame analytics analytics jobs" - cdataframe: "continuous data frame" - cdataframes: "continuous data frames" - cdataframe-cap: "Continuous data frame" - cdataframes-cap: "Continuous data frames" - cdataframe-transform: "continuous transform" - cdataframe-transforms: "continuous transforms" - cdataframe-transforms-cap: "Continuous transforms" ctransform: "continuous transform" ctransform-cap: "Continuous transform" ctransforms: "continuous transforms" @@ -317,19 +233,13 @@ subs: oldetection-cap: "Outlier detection" olscore: "outlier score" olscores: "outlier scores" - fiscore: "feature influence score" evaluatedf-api: "evaluate data frame analytics API" - 
evaluatedf-api-cap: "Evaluate data frame analytics API" - binarysc: "binary soft classification" - binarysc-cap: "Binary soft classification" regression: "regression" regression-cap: "Regression" reganalysis: "regression analysis" reganalysis-cap: "Regression analysis" depvar: "dependent variable" - feature-var: "feature variable" feature-vars: "feature variables" - feature-vars-cap: "Feature variables" classification: "classification" classification-cap: "Classification" classanalysis: "classification analysis" @@ -339,39 +249,17 @@ subs: lang-ident-cap: "Language identification" lang-ident: "language identification" data-viz: "Data Visualizer" - file-data-viz: "File Data Visualizer" feat-imp: "feature importance" feat-imp-cap: "Feature importance" nlp: "natural language processing" nlp-cap: "Natural language processing" apm-agent: "APM agent" - apm-go-agent: "Elastic APM Go agent" - apm-go-agents: "Elastic APM Go agents" - apm-ios-agent: "Elastic APM iOS agent" - apm-ios-agents: "Elastic APM iOS agents" apm-java-agent: "Elastic APM Java agent" - apm-java-agents: "Elastic APM Java agents" - apm-dotnet-agent: "Elastic APM .NET agent" - apm-dotnet-agents: "Elastic APM .NET agents" - apm-node-agent: "Elastic APM Node.js agent" - apm-node-agents: "Elastic APM Node.js agents" - apm-php-agent: "Elastic APM PHP agent" - apm-php-agents: "Elastic APM PHP agents" - apm-py-agent: "Elastic APM Python agent" - apm-py-agents: "Elastic APM Python agents" - apm-ruby-agent: "Elastic APM Ruby agent" - apm-ruby-agents: "Elastic APM Ruby agents" - apm-rum-agent: "Elastic APM Real User Monitoring (RUM) JavaScript agent" - apm-rum-agents: "Elastic APM RUM JavaScript agents" - apm-lambda-ext: "Elastic APM AWS Lambda extension" project-monitors: "project monitors" project-monitors-cap: "Project monitors" private-location: "Private Location" private-locations: "Private Locations" - pwd: "YOUR_PASSWORD" esh: "ES-Hadoop" - default-dist: "default distribution" - oss-dist: "OSS-only 
distribution" observability: "Observability" api-request-title: "Request" api-prereq-title: "Prerequisites" @@ -379,44 +267,12 @@ subs: api-path-parms-title: "Path parameters" api-query-parms-title: "Query parameters" api-request-body-title: "Request body" - api-response-codes-title: "Response codes" - api-response-body-title: "Response body" - api-example-title: "Example" api-examples-title: "Examples" - api-definitions-title: "Properties" - multi-arg: "†footnoteref:[multi-arg,This parameter accepts multiple arguments.]" - multi-arg-ref: "†footnoteref:[multi-arg]" - yes-icon: "image:https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png[Yes,20,15]" - no-icon: "image:https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png[No,20,15]" - agent-repo: "https://github.com/elastic/elastic-agent/" agent-issue: "https://github.com/elastic/elastic-agent/issues/" agent-pull: "https://github.com/elastic/elastic-agent/pull/" - es-repo: "https://github.com/elastic/elasticsearch/" - es-issue: "https://github.com/elastic/elasticsearch/issues/" - es-pull: "https://github.com/elastic/elasticsearch/pull/" - es-commit: "https://github.com/elastic/elasticsearch/commit/" - fleet-server-repo: "https://github.com/elastic/fleet-server/" fleet-server-issue: "https://github.com/elastic/fleet-server/issues/" fleet-server-pull: "https://github.com/elastic/fleet-server/pull/" - kib-repo: "https://github.com/elastic/kibana/" - kib-issue: "https://github.com/elastic/kibana/issues/" - kibana-issue: "'{{kib-repo}}issues/'" kib-pull: "https://github.com/elastic/kibana/pull/" - kibana-pull: "'{{kib-repo}}pull/'" - kib-commit: "https://github.com/elastic/kibana/commit/" - ml-repo: "https://github.com/elastic/ml-cpp/" - ml-issue: "https://github.com/elastic/ml-cpp/issues/" - ml-pull: "https://github.com/elastic/ml-cpp/pull/" - ml-commit: "https://github.com/elastic/ml-cpp/commit/" - apm-repo: "https://github.com/elastic/apm-server/" - apm-issue: "https://github.com/elastic/apm-server/issues/" - 
apm-pull: "https://github.com/elastic/apm-server/pull/" - kibana-blob: "https://github.com/elastic/kibana/blob/current/" - infra-guide: "https://www.elastic.co/guide/en/infrastructure/guide/current" - a-data-source: "a data view" - icon-bug: "pass:[]" - icon-checkInCircleFilled: "pass:[]" - icon-warningFilled: "pass:[]" stack-version: "9.0.0" eck_version: "3.0.0" apm_server_version: "9.0.0" diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-lang-ident.md b/explore-analyze/machine-learning/nlp/ml-nlp-lang-ident.md index 07c1cb307..cc6e50008 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-lang-ident.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-lang-ident.md @@ -20,9 +20,8 @@ The longer the text passed into the {{lang-ident}} model, the more accurately th The table below contains the ISO codes and the English names of the languages that {{lang-ident}} supports. If a language has a 2-letter `ISO 639-1` code, the table contains that identifier. Otherwise, the 3-letter `ISO 639-2` code is used. The `Latn` subtag indicates that the language is transliterated into Latin script. 
-| | | | | | | -| --- | --- | --- | --- | --- | --- | | Code | Language | Code | Language | Code | Language | +| --- | --- | --- | --- | --- | --- | | af | Afrikaans | hr | Croatian | pa | Punjabi | | am | Amharic | ht | Haitian | pl | Polish | | ar | Arabic | hu | Hungarian | ps | Pashto | diff --git a/explore-analyze/numeral-formatting.md b/explore-analyze/numeral-formatting.md index c5db85d27..c6b77bee7 100644 --- a/explore-analyze/numeral-formatting.md +++ b/explore-analyze/numeral-formatting.md @@ -32,9 +32,8 @@ The display of these patterns is affected by the [advanced setting](kibana://ref Most basic examples: -| | | | | +| Input | Pattern | Locale | Output | | --- | --- | --- | --- | -| **Input*** | ***Pattern*** | ***Locale*** | ***Output** | | 10000.23 | 0,0 | en (English) | 10,000 | | 10000.23 | 0.0 | en (English) | 10000.2 | | 10000.23 | 0,0.0 | fr (French) | 10 000,2 | @@ -50,9 +49,8 @@ By adding the `%` symbol to any of the previous patterns, the value is multiplie The default percentage formatter in {{kib}} is `0,0.[000]%`, which shows up to three decimal places. -| | | | | +| Input | Pattern | Locale | Output | | --- | --- | --- | --- | -| **Input*** | ***Pattern*** | ***Locale*** | ***Output** | | 0.43 | 0,0.[000]% | en (English) | 43.00% | | 0.43 | 0,0.[000]% | fr (French) | 43,00% | | 1 | 0% | en (English) | 100% | @@ -80,9 +78,8 @@ The bytes and bits formatters will shorten the input by adding a suffix like `GB Suffixes are not localized with this formatter. -| | | | | +| Input | Pattern | Locale | Output | | --- | --- | --- | --- | -| **Input*** | ***Pattern*** | ***Locale*** | ***Output** | | 2000 | 0.00b | en (English) | 1.95KB | | 2000 | 0.00bb | en (English) | 1.95KiB | | 2000 | 0.00bd | en (English) | 2.00kB | @@ -95,9 +92,8 @@ Suffixes are not localized with this formatter. Currency formatting is limited in {{kib}} due to the limitations of the pattern syntax. To enable currency formatting, use the symbol `$` in the pattern syntax. 
The number formatting locale will affect the result. -| | | | | +| Input | Pattern | Locale | Output | | --- | --- | --- | --- | -| **Input*** | ***Pattern*** | ***Locale*** | ***Output** | | 1000.234 | $0,0.00 | en (English) | $1,000.23 | | 1000.234 | $0,0.00 | fr (French) | €1 000,23 | | 1000.234 | $0,0.00 | chs (Simplified Chinese) | ¥1,000.23 | @@ -107,9 +103,8 @@ Currency formatting is limited in {{kib}} due to the limitations of the pattern Converts a value in seconds to display hours, minutes, and seconds. -| | | | +| Input | Pattern | Output | | --- | --- | --- | -| **Input*** | ***Pattern*** | ***Output** | | 25 | 00:00:00 | 0:00:25 | | 25 | 00:00 | 0:00:25 | | 238 | 00:00:00 | 0:03:58 | @@ -121,9 +116,8 @@ Converts a value in seconds to display hours, minutes, and seconds. The `a` pattern will look for the shortest abbreviation for your number, and use a locale-specific display for it. The abbreviations `aK`, `aM`, `aB`, and `aT` can indicate that the number should be abbreviated to a specific order of magnitude. -| | | | | +| Input | Pattern | Locale | Output | | --- | --- | --- | --- | -| **Input*** | ***Pattern*** | ***Locale*** | ***Output** | | 2000000000 | 0.00a | en (English) | 2.00b | | 2000000000 | 0.00a | ja (Japanese) | 2.00十億 | | -5444333222111 | 0,0 aK | en (English) | -5,444,333,222 k | @@ -136,9 +130,8 @@ The `a` pattern will look for the shortest abbreviation for your number, and use The `o` pattern will display a locale-specific positional value like `1st` or `2nd`. This pattern has limited support for localization, especially in languages with multiple forms, such as German. 
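For reference, the English branch of this ordinal-suffix logic can be sketched in a few lines. This is a minimal illustration only — `ordinal_en` is a hypothetical helper, not {{kib}}'s or numeral.js's actual implementation, and it covers English alone (the locale-specific forms mentioned above are exactly what it omits):

```python
def ordinal_en(n: int) -> str:
    """Return n with its English ordinal suffix, like the `0o` pattern does for `en`."""
    if 10 <= n % 100 <= 20:
        # 11th, 12th, 13th (and the rest of the teens) are always "th"
        suffix = "th"
    else:
        # otherwise the last digit decides: 1 -> st, 2 -> nd, 3 -> rd, else th
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

print(ordinal_en(3))   # 3rd
print(ordinal_en(34))  # 34th
```

The outputs match the `en` rows in the table below; other locales need their own suffix rules.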
-| | | | | +| Input | Pattern | Locale | Output | | --- | --- | --- | --- | -| **Input*** | ***Pattern*** | ***Locale*** | ***Output** | | 3 | 0o | en (English) | 3rd | | 34 | 0o | en (English) | 34th | | 3 | 0o | es (Spanish) | 2er | @@ -149,9 +142,8 @@ The `o` pattern will display a locale-specific positional value like `1st` or `2 These number formats, combined with the previously described patterns, produce the complete set of options for numeral formatting. The output here is all for the `en` locale. -| | | | +| Input | Pattern | Output | | --- | --- | --- | -| **Input*** | ***Pattern*** | ***Output** | | 10000 | 0,0.0000 | 10,000.0000 | | 10000.23 | 0,0 | 10,000 | | -10000 | 0,0.0 | -10,000.0 | diff --git a/explore-analyze/query-filter/languages/esql-rest.md b/explore-analyze/query-filter/languages/esql-rest.md index f8b2ecbe7..3e3b0900b 100644 --- a/explore-analyze/query-filter/languages/esql-rest.md +++ b/explore-analyze/query-filter/languages/esql-rest.md @@ -63,9 +63,8 @@ The URL parameter takes precedence over the HTTP headers. 
If neither is specifie :::: -| | | | +| `format` | HTTP header | Description | | --- | --- | --- | -| **`format`** | **HTTP header** | **Description** | | Human readable | | `csv` | `text/csv` | [Comma-separated values](https://en.wikipedia.org/wiki/Comma-separated_values) | | `json` | `application/json` | [JSON](https://www.json.org/) (JavaScript Object Notation) human-readable format | diff --git a/explore-analyze/query-filter/languages/sql-data-types.md b/explore-analyze/query-filter/languages/sql-data-types.md index 1f3d5deff..b9d21cb09 100644 --- a/explore-analyze/query-filter/languages/sql-data-types.md +++ b/explore-analyze/query-filter/languages/sql-data-types.md @@ -8,9 +8,8 @@ mapped_pages: # Data Types [sql-data-types] -| | | | | +| {{es}} type | Elasticsearch SQL type | SQL type | SQL precision | | --- | --- | --- | --- | -| **{{es}} type** | **Elasticsearch SQL type** | **SQL type** | **SQL precision** | | Core types | | [`null`](elasticsearch://reference/elasticsearch/mapping-reference/null-value.md) | `null` | NULL | 0 | | [`boolean`](elasticsearch://reference/elasticsearch/mapping-reference/boolean.md) | `boolean` | BOOLEAN | 1 | @@ -47,9 +46,8 @@ In addition to the types above, Elasticsearch SQL also supports at *runtime* SQL $$$es-sql-only-types$$$ The table below indicates these types: -| | | +| SQL type | SQL precision | | --- | --- | -| **SQL type** | **SQL precision** | | `date` | 29 | | `time` | 18 | | `interval_year` | 7 | diff --git a/explore-analyze/query-filter/languages/sql-functions-aggs.md b/explore-analyze/query-filter/languages/sql-functions-aggs.md index 3ccd61256..6ca568aaf 100644 --- a/explore-analyze/query-filter/languages/sql-functions-aggs.md +++ b/explore-analyze/query-filter/languages/sql-functions-aggs.md @@ -167,9 +167,8 @@ SELECT FIRST(a) FROM t will result in: -| | +| FIRST(a) | | --- | -| **FIRST(a)** | | 1 | and @@ -180,9 +179,8 @@ SELECT FIRST(a, b) FROM t will result in: -| | +| FIRST(a, b) | | --- | -| **FIRST(a, 
b)** | | 100 | ```sql @@ -288,9 +286,8 @@ SELECT LAST(a) FROM t will result in: -| | +| LAST(a) | | --- | -| **LAST(a)** | | 200 | and @@ -301,9 +298,8 @@ SELECT LAST(a, b) FROM t will result in: -| | +| LAST(a, b) | | --- | -| **LAST(a, b)** | | 2 | ```sql diff --git a/explore-analyze/query-filter/languages/sql-functions-datetime.md b/explore-analyze/query-filter/languages/sql-functions-datetime.md index b4aea8008..bb7ad9058 100644 --- a/explore-analyze/query-filter/languages/sql-functions-datetime.md +++ b/explore-analyze/query-filter/languages/sql-functions-datetime.md @@ -18,9 +18,8 @@ A common requirement when dealing with date/time in general revolves around the The table below shows the mapping between {{es}} and Elasticsearch SQL: -| | | +| {{es}} | Elasticsearch SQL | | --- | --- | -| **{{es}}** | **Elasticsearch SQL** | | Index/Table datetime math | | `` | | Query date/time math | @@ -41,9 +40,8 @@ Elasticsearch SQL accepts also the plural for each time unit (e.g. both `YEAR` a Example of the possible combinations below: -| | | +| Interval | Description | | --- | --- | -| **Interval** | **Description** | | `INTERVAL '1-2' YEAR TO MONTH` | 1 year and 2 months | | `INTERVAL '3 4' DAYS TO HOURS` | 3 days and 4 hours | | `INTERVAL '5 6:12' DAYS TO MINUTES` | 5 days, 6 hours and 12 minutes | diff --git a/explore-analyze/query-filter/languages/sql-index-patterns.md b/explore-analyze/query-filter/languages/sql-index-patterns.md index 772e15d5d..8e553e70a 100644 --- a/explore-analyze/query-filter/languages/sql-index-patterns.md +++ b/explore-analyze/query-filter/languages/sql-index-patterns.md @@ -88,9 +88,8 @@ Notice how now `emp%` does not match any tables because `%`, which means match z In a nutshell, the differences between the two types of patterns are: -| | | | +| Feature | Multi index | SQL `LIKE` | | --- | --- | --- | -| **Feature** | **Multi index** | **SQL `LIKE`** | | Type of quoting | `"` | `'` | | Inclusion | Yes | Yes | | Exclusion | Yes | No | diff --git a/explore-analyze/query-filter/languages/sql-lexical-structure.md b/explore-analyze/query-filter/languages/sql-lexical-structure.md index 12da08300..ece76c058 100644 --- a/explore-analyze/query-filter/languages/sql-lexical-structure.md +++ b/explore-analyze/query-filter/languages/sql-lexical-structure.md @@ -126,9 +126,8 @@ To escape single or double quotes, one needs to use that specific quote one more A few characters that are not alphanumeric have a dedicated meaning different from that of an operator. For completeness these are specified below: -| | | +| Char | Description | | --- | --- | -| **Char** | **Description** | | `*` | The asterisk (or wildcard) is used in some contexts to denote all fields for a table. Can also be used as an argument to some aggregate functions. | | `,` | Commas are used to enumerate the elements of a list. | | `.` | Used in numeric constants or to separate identifiers qualifiers (catalog, table, column names, etc…). | @@ -141,9 +140,8 @@ Most operators in Elasticsearch SQL have the same precedence and are left-associ The following table indicates the supported operators and their precedence (highest to lowest): -| | | | +| Operator/Element | Associativity | Description | | --- | --- | --- | -| **Operator/Element** | **Associativity** | **Description** | | `.` | left | qualifier separator | | `::` | left | PostgreSQL-style type cast | | `+ -` | right | unary plus and minus (numeric literal sign) | diff --git a/explore-analyze/query-filter/languages/sql-like-rlike-operators.md b/explore-analyze/query-filter/languages/sql-like-rlike-operators.md index 7b5d86e84..2ec59eef6 100644 --- a/explore-analyze/query-filter/languages/sql-like-rlike-operators.md +++ b/explore-analyze/query-filter/languages/sql-like-rlike-operators.md @@ -95,9 +95,8 @@ When using `LIKE`/`RLIKE`, do consider using [full-text search predicates](sql-f For example: -| | | +| LIKE/RLIKE | QUERY/MATCH | | --- | --- | -| **LIKE/RLIKE** | **QUERY/MATCH** | | ``foo 
LIKE 'bar'`` | ``MATCH(foo, 'bar')`` | | ``foo LIKE 'bar' AND tar LIKE 'goo'`` | ``MATCH('foo^2, tar^5', 'bar goo', 'operator=and')`` | | ``foo LIKE 'barr'`` | ``QUERY('foo: bar~')`` | diff --git a/explore-analyze/query-filter/languages/sql-rest-format.md b/explore-analyze/query-filter/languages/sql-rest-format.md index 26202c7de..e65164360 100644 --- a/explore-analyze/query-filter/languages/sql-rest-format.md +++ b/explore-analyze/query-filter/languages/sql-rest-format.md @@ -17,9 +17,8 @@ The URL parameter takes precedence over the `Accept` HTTP header. If neither is :::: -| | | | +| format | `Accept` HTTP header | Description | | --- | --- | --- | -| **format** | **`Accept` HTTP header** | **Description** | | Human Readable | | `csv` | `text/csv` | [Comma-separated values](https://en.wikipedia.org/wiki/Comma-separated_values) | | `json` | `application/json` | [JSON](https://www.json.org/) (JavaScript Object Notation) human-readable format | diff --git a/explore-analyze/query-filter/languages/sql-syntax-reserved.md b/explore-analyze/query-filter/languages/sql-syntax-reserved.md index 98705efbf..a8325cad5 100644 --- a/explore-analyze/query-filter/languages/sql-syntax-reserved.md +++ b/explore-analyze/query-filter/languages/sql-syntax-reserved.md @@ -16,9 +16,8 @@ The following table lists all of the keywords that are reserved in Elasticsearch SELECT "AS" FROM index ``` -| | | | +| Keyword | SQL:2016 | SQL-92 | | --- | --- | --- | -| **Keyword** | **SQL:2016** | **SQL-92** | | `ALL` | reserved | reserved | | `AND` | reserved | reserved | | `ANY` | reserved | reserved | diff --git a/manage-data/data-store/index-basics.md b/manage-data/data-store/index-basics.md index 7eaa763dc..41f9d2361 100644 --- a/manage-data/data-store/index-basics.md +++ b/manage-data/data-store/index-basics.md @@ -10,8 +10,6 @@ applies_to: # Index basics -This content applies to: [![Elasticsearch](/images/serverless-es-badge.svg "")](/solutions/search.md) 
[![Observability](/images/serverless-obs-badge.svg "")](/solutions/observability.md) [![Security](/images/serverless-sec-badge.svg "")](/solutions/security/elastic-security-serverless.md) - An index is a fundamental unit of storage in {{es}}. It is a collection of documents uniquely identified by a name or an [alias](/manage-data/data-store/aliases.md). This unique name is important because it’s used to target the index in search queries and other operations. ::::{tip} diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-password-reset-elastic.md b/raw-migrated-files/cloud/cloud-enterprise/ece-password-reset-elastic.md deleted file mode 100644 index e88474341..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-password-reset-elastic.md +++ /dev/null @@ -1,17 +0,0 @@ -# Reset the password for the `elastic` user [ece-password-reset-elastic] - -You might need to reset the password for the `elastic` superuser if you cannot authenticate with the `elastic` user ID and are effectively locked out from a cluster. - -To reset the password: - -1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. From the **Deployments** page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. From your deployment menu, select **Security**. -4. Select **Reset password**. -5. Copy down the auto-generated password for the `elastic` user. - -The password is hashed after you leave this pane, so if you lose it, you need to reset the password again. 
- diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-restore-across-clusters.md b/raw-migrated-files/cloud/cloud-enterprise/ece-restore-across-clusters.md deleted file mode 100644 index 380075a98..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-restore-across-clusters.md +++ /dev/null @@ -1,48 +0,0 @@ -# Restore a snapshot across clusters [ece-restore-across-clusters] - -Snapshots can be restored to either the same Elasticsearch cluster or to another cluster. If you are restoring all indices to another cluster, you can *clone* a cluster. - -::::{note} -Users created using the X-Pack security features or using Shield are not included when you restore across clusters, only data from Elasticsearch indices is restored. If you do want to create a cloned cluster with the same users as your old cluster, you need to recreate the users manually on the new cluster. -:::: - - -Restoring to another cluster is useful for scenarios where isolating activities on a separate cluster is beneficial, such as: - -Performing ad hoc analytics -: For most logging and metrics use cases, it is cost prohibitive to have all the data in memory, even if it would provide the best performance for aggregations. Cloning the relevant data to an ad hoc analytics cluster that can be discarded after use is a cost effective way to experiment with your data, without risk to existing clusters used for production. - -Enabling your developers -: Realistic test data is crucial for uncovering unexpected errors early in the development cycle. What can be more realistic than actual data from a production cluster? Giving your developers access to real production data is a great way to break down silos. - -Testing mapping changes -: Mapping changes almost always require reindexing. Unless your data volume is trivial, reindexing requires time and tweaking the parameters to achieve the best reindexing performance usually takes a little trial and error. 
While this use case could also be handled by running the scan and scroll query directly against the source cluster, a long lived scroll has the side effect of blocking merges even if the scan query is very light weight. - -Integration testing -: Test your application against a real live Elasticsearch cluster with actual data. If you automate this, you could also aggregate performance metrics from the tests and use those metrics to detect if a change in your application has introduced a performance degradation. - -::::{note} -A cluster is eligible as a destination for a built-in snapshot restore if it meets these criteria: - -* The destination cluster is able to read the indices. You can generally restore to your Elasticsearch cluster snapshots of indices created back to the previous major version, but see the [version matrix](../../../deploy-manage/tools/snapshot-and-restore.md#snapshot-restore-version-compatibility) for all the details. - -:::: - - -The list of available snapshots can be found in the [`found-snapshots` repository](../../../deploy-manage/tools/snapshot-and-restore/self-managed.md). - -To restore built-in snapshots across clusters, there are two options: - -* [Restore snapshot into a new deployment](../../../deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-into-new-deployment.md) -* [Restore snapshot into an existing deployment](../../../deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-into-existing-deployment.md) - -When restoring snapshots across clusters, we create a new repository called `\_clone_{{clusterIdPrefix}}`, which persists until manually deleted. If the repository is still in use, for example by mounted searchable snapshots, it can’t be removed. 
- -::::{warning} -When restoring from a deployment that’s using searchable snapshots, refer to [Restore snapshots containing searchable snapshots indices across clusters](../../../deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-containing-searchable-snapshots-indices-across-clusters.md) -:::: - - - - - diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-restore-deployment.md b/raw-migrated-files/cloud/cloud-enterprise/ece-restore-deployment.md deleted file mode 100644 index 2c38e7b7a..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-restore-deployment.md +++ /dev/null @@ -1,13 +0,0 @@ -# Restore a deployment [ece-restore-deployment] - -You can restore a deployment that was previously [terminated](../../../deploy-manage/uninstall/delete-a-cloud-deployment.md) to its original configuration. Note that the data that was in the deployment is not restored, since it is deleted as part of the termination process. If you have a snapshot, you can [restore it](../../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) to recover the Elasticsearch indices. - -To restore a terminated deployment: - -1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the **Deployments** page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. In the **Deployment Management** section, select **Restore** and then acknowledge the confirmation message. - diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-snapshots.md b/raw-migrated-files/cloud/cloud-enterprise/ece-snapshots.md deleted file mode 100644 index bd2b95338..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-snapshots.md +++ /dev/null @@ -1,15 +0,0 @@ -# Work with snapshots [ece-snapshots] - -Snapshots provide backups of your Elasticsearch indices. 
You can use snapshots to recover from a failure when not enough availability zones are used to provide high availability or to recover from accidental deletion. - -To enable snapshots for your Elasticsearch clusters and to work with them, you must [have configured a repository](../../../deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md). After you have configured a snapshot repository, a snapshot is taken every 30 minutes or at the interval that you specify. - -Use Kibana to manage your snapshots. In Kibana, you can set up additional repositories where the snapshots are stored, other than the one currently managed by Elastic Cloud Enterprise. You can view and delete snapshots, and configure a snapshot lifecycle management (SLM) policy to automate when snapshots are created and deleted. To learn more, check the [Snapshot and Restore](../../../deploy-manage/tools/snapshot-and-restore/create-snapshots.md) documentation. - -From within Elastic Cloud Enterprise you can [restore a snapshot](../../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) from a different deployment in the same region. - -::::{important} -Snapshots back up only open indices. If you close an index, it is not included in snapshots and you will not be able to restore the data. -:::: - - diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-terminate-deployment.md b/raw-migrated-files/cloud/cloud-enterprise/ece-terminate-deployment.md deleted file mode 100644 index 073b1bb4d..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-terminate-deployment.md +++ /dev/null @@ -1,13 +0,0 @@ -# Terminate a deployment [ece-terminate-deployment] - -Terminating a deployment stops all running instances and **deletes all data**. Only configuration information is saved so that you can restore the deployment in the future. 
If there is [a snapshot repository associated](../../../deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md) with the Elasticsearch cluster and at least one snapshot has been taken, you can restore the cluster with the same indices later. - -To terminate a deployment in Elastic Cloud Enterprise: - -1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the **Deployments** page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. In the **Deployment Management** section, select **Terminate deployment**. - diff --git a/raw-migrated-files/cloud/cloud/ec-about.md b/raw-migrated-files/cloud/cloud/ec-about.md deleted file mode 100644 index eb63054ca..000000000 --- a/raw-migrated-files/cloud/cloud/ec-about.md +++ /dev/null @@ -1,12 +0,0 @@ -# About {{ech}} [ec-about] - -The information in this section covers: - -* [Subscription Levels](../../../deploy-manage/license.md) -* [Version Policy](../../../deploy-manage/deploy/elastic-cloud/available-stack-versions.md) -* [{{ech}} Hardware](cloud://reference/cloud-hosted/hardware.md) -* [{{ech}} Regions](cloud://reference/cloud-hosted/regions.md) -* [Service Status](../../../deploy-manage/cloud-organization/service-status.md) -* [Getting help](../../../troubleshoot/index.md) -* [Restrictions and known problems](../../../deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md) - diff --git a/raw-migrated-files/cloud/cloud/ec-access-kibana.md b/raw-migrated-files/cloud/cloud/ec-access-kibana.md deleted file mode 100644 index 682c66292..000000000 --- a/raw-migrated-files/cloud/cloud/ec-access-kibana.md +++ /dev/null @@ -1,50 +0,0 @@ -# Access Kibana [ec-access-kibana] - -Kibana is an open source analytics and visualization platform designed to search, view, and interact with data stored in Elasticsearch indices. 
The use of Kibana is included with your subscription. - -For new Elasticsearch clusters, we automatically create a Kibana instance for you. - -To access Kibana: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Deployments** page, select your deployment. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. Under **Applications**, select the Kibana **Launch** link and wait for Kibana to open. - - ::::{note} - Both ports 443 and 9243 can be used to access Kibana. SSO only works with 9243 on older deployments, where you will see an option in the Cloud UI to migrate the default to port 443. In addition, any version upgrade will automatically migrate the default port to 443. - :::: - -4. Log into Kibana. Single sign-on (SSO) is enabled between your Cloud account and the Kibana instance. If you’re logged in already, then Kibana opens without requiring you to log in again. However, if your token has expired, choose from one of these methods to log in: - - * Select **Login with Cloud**. You’ll need to log in with your Cloud account credentials and then you’ll be redirected to Kibana. - * Log in with the `elastic` superuser. The password was provided when you created your cluster or [can be reset](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md). - * Log in with any users you created in Kibana already. - - -In production systems, you might need to control what Elasticsearch data users can access through Kibana, so you need to create credentials that can be used to access the necessary Elasticsearch resources. This means granting read access to the necessary indexes, as well as access to update the `.kibana` index. 
- -::::{tip} -If your cluster didn’t include a Kibana instance initially, there might not be a Kibana endpoint URL shown, yet. To gain access, all you need to do is [enable Kibana first](../../../deploy-manage/deploy/elastic-cloud/access-kibana.md#ec-enable-kibana2). -:::: - - -## Enable Kibana [ec-enable-kibana2] - -If your deployment didn’t include a Kibana instance initially, use these instructions to enable Kibana first. For new Elasticsearch clusters, we automatically create a Kibana instance for you that you can access directly. The use of Kibana is included with your subscription. - -To enable Kibana on your deployment: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From your deployment menu, go to the **Kibana** page. -4. Select **Enable**. - -Enabling Kibana provides you with an endpoint URL, where you can access Kibana. It can take a short while to provision Kibana right after you select **Enable**, so if you get an error message when you first access the endpoint URL, wait a bit and try again. 
- - diff --git a/raw-migrated-files/cloud/cloud/ec-activity-page.md b/raw-migrated-files/cloud/cloud/ec-activity-page.md deleted file mode 100644 index caec9c856..000000000 --- a/raw-migrated-files/cloud/cloud/ec-activity-page.md +++ /dev/null @@ -1,37 +0,0 @@ -# Keep track of deployment activity [ec-activity-page] - -The deployment **Activity** page gives you a convenient way to follow all configuration changes that have been applied to your deployment, including which resources were affected, when the changes were applied, who initiated the changes, and whether or not the changes were successful. You can also select **Details** for an expanded, step-by-step view of each change applied to each deployment resource. - -To view the activity for a deployment: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Deployments** page, select your deployment. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. In your deployment menu, select **Activity**. -4. You can: - - 1. View the activity for all deployment resources (the default). - 2. Use one of the available filters to view configuration changes by status or type. You can use the query field to create a custom search. Select the filter buttons to get examples of the query format. - 3. Select one of the resource filters to view activity for only that resource type. - - -:::{image} ../../../images/cloud-ec-ce-activity-page.png -:alt: The Activity page -::: - -In the table columns you find the following information: - -Change -: Which deployment resource the configuration change was applied to. - -Summary -: A summary of what change was applied, when the change was performed, and how long it took. - -Applied by -: The user who submitted the configuration change. 
`System` indicates configuration changes initiated automatically by the {{ecloud}} platform. - -Actions -: Select **Details** for an expanded view of each step in the configuration change, including the start time, end time, and duration. You can select **Reapply** to re-run the configuration change. - diff --git a/raw-migrated-files/cloud/cloud/ec-add-user-settings.md b/raw-migrated-files/cloud/cloud/ec-add-user-settings.md deleted file mode 100644 index c6725cee4..000000000 --- a/raw-migrated-files/cloud/cloud/ec-add-user-settings.md +++ /dev/null @@ -1,291 +0,0 @@ -# Edit {{es}} user settings [ec-add-user-settings] - -Change how {{es}} runs by providing your own user settings. {{ech}} appends these settings to each node’s `elasticsearch.yml` configuration file. - -{{ech}} automatically rejects `elasticsearch.yml` settings that could break your cluster. For a list of supported settings, check [Supported {{es}} settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md#ec-es-elasticsearch-settings). - -::::{warning} -You can also update [dynamic cluster settings](../../../deploy-manage/deploy/self-managed/configure-elasticsearch.md#dynamic-cluster-setting) using {{es}}'s [update cluster settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). However, {{ech}} doesn’t reject unsafe setting changes made using this API. Use with caution. -:::: - - -To add or edit user settings: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. 
To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From your deployment menu, go to the **Edit** page. -4. In the **Elasticsearch** section, select **Manage user settings and extensions**. -5. Update the user settings. -6. Select **Save changes**. - -::::{note} -In some cases, you may get a warning saying "User settings are different across Elasticsearch instances". To fix this issue, ensure that your user settings (including the comments sections and whitespaces) are identical across all Elasticsearch nodes (not only the data tiers, but also the Master, Machine Learning, and Coordinating nodes). -:::: - - -## Supported {{es}} settings [ec-es-elasticsearch-settings] - -{{ech}} supports the following `elasticsearch.yml` settings. - -### General settings [ec_general_settings] - -The following general settings are supported: - -$$$http-cors-settings$$$`http.cors.*` -: Enables cross-origin resource sharing (CORS) settings for the [HTTP module](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md). - - ::::{note} - If your use case depends on the ability to receive CORS requests and you have a cluster that was provisioned prior to January 25th 2019, you must manually set `http.cors.enabled` to `true` and allow a specific set of hosts with `http.cors.allow-origin`. Applying these changes in your Elasticsearch configuration allows cross-origin resource sharing requests. - :::: - - -`http.compression` -: Support for [HTTP compression](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md) when possible (with Accept-Encoding). Defaults to `true`. - -`transport.compress` -: Configures [transport compression](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md) for node-to-node traffic. 
- -`transport.compression_scheme` -: Configures [transport compression](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md) for node-to-node traffic. - -`repositories.url.allowed_urls` -: Enables explicit allowing of [read-only URL repositories](../../../deploy-manage/tools/snapshot-and-restore/read-only-url-repository.md). - -`reindex.remote.whitelist` -: Explicitly allows the set of hosts that can be [reindexed from remotely](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex). Expects a YAML array of `host:port` strings. Consists of a comma-delimited list of `host:port` entries. Defaults to `["\*.io:*", "\*.com:*"]`. - -`reindex.ssl.*` -: To learn more on how to configure reindex SSL user settings, check [configuring reindex SSL parameters](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex). - -`script.painless.regex.enabled` -: Enables [regular expressions](elasticsearch://reference/scripting-languages/painless/brief-painless-walkthrough.md#modules-scripting-painless-regex) for the Painless scripting language. - -`action.auto_create_index` -: [Automatically create index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create) if it doesn’t already exist. - -`action.destructive_requires_name` -: When set to `true`, users must [specify the index name](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete) to delete an index. It’s not possible to delete _all or use wildcards. - -`xpack.notification.webhook.additional_token_enabled` -: When set to `true`, {{es}} automatically sets a token which enables the bypassing of traffic filters for calls initiated by Watcher towards {{es}} or {{kib}}. The default is `false` and the feature is available starting with {{es}} version 8.7.1 and later. - - ::::{important} - This setting only applies to the Watcher `webhook` action, not the `http` input action. 
- :::: - - -`cluster.indices.close.enable` -: Enables closing indices in Elasticsearch. Defaults to `true` for versions 7.2.0 and later, and to `false` for previous versions. In versions 7.1 and below, closed indices represent a data loss risk: if you close an index, it is not included in snapshots and you will not be able to restore the data. Similarly, closed indices are not included when you make cluster configuration changes, such as scaling to a different capacity, failover, and many other operations. Lastly, closed indices can lead to inaccurate disk space counts. - - ::::{warning} - For versions 7.1 and below, closed indices represent a data loss risk. Enable this setting only temporarily for these versions. - :::: - - -`azure.client.CLIENT_NAME.endpoint_suffix` -: Allows providing the [endpoint_suffix client setting](../../../deploy-manage/tools/snapshot-and-restore/azure-repository.md#repository-azure-client-settings) for a non-internal Azure client used for snapshot/restore. Note that `CLIENT_NAME` should be replaced with the name of the created client. - - -### Circuit breaker settings [ec_circuit_breaker_settings] - -The following circuit breaker settings are supported: - -`indices.breaker.total.limit` -: Configures [the parent circuit breaker settings](elasticsearch://reference/elasticsearch/configuration-reference/circuit-breaker-settings.md#parent-circuit-breaker). - -`indices.breaker.fielddata.limit` -: Configures [the limit for the fielddata breaker](elasticsearch://reference/elasticsearch/configuration-reference/circuit-breaker-settings.md#fielddata-circuit-breaker). - -`indices.breaker.fielddata.overhead` -: Configures [a constant that all field data estimations are multiplied with to determine a final estimation](elasticsearch://reference/elasticsearch/configuration-reference/circuit-breaker-settings.md#fielddata-circuit-breaker). 
- -`indices.breaker.request.limit` -: Configures [the limit for the request breaker](elasticsearch://reference/elasticsearch/configuration-reference/circuit-breaker-settings.md#request-circuit-breaker). - -`indices.breaker.request.overhead` -: Configures [a constant that all request estimations are multiplied by to determine a final estimation](elasticsearch://reference/elasticsearch/configuration-reference/circuit-breaker-settings.md#request-circuit-breaker). - - -### Indexing pressure settings [ec_indexing_pressure_settings] - -The following indexing pressure settings are supported: - -`indexing_pressure.memory.limit` -: Configures [the indexing pressure settings](elasticsearch://reference/elasticsearch/index-settings/pressure.md). - - -### X-Pack [ec_x_pack] - -#### Version 8.5.3+, 7.x support in 7.17.8+ [ec_version_8_5_3_7_x_support_in_7_17_8] - -`xpack.security.transport.ssl.trust_restrictions.x509_fields` -: Specifies which field(s) from the TLS certificate is used to match for the restricted trust management that is used for remote clusters connections. This should only be set when a self managed cluster can not create certificates that follow the Elastic Cloud pattern. The default value is ["subjectAltName.otherName.commonName"], the Elastic Cloud pattern. "subjectAltName.dnsName" is also supported and can be configured in addition to or in replacement of the default. - - -#### All supported versions [ec_all_supported_versions] - -`xpack.ml.inference_model.time_to_live` -: Sets the duration of time that the trained models are cached. Check [{{ml-cap}} settings](elasticsearch://reference/elasticsearch/configuration-reference/machine-learning-settings.md). - -`xpack.security.loginAssistanceMessage` -: Adds a message to the login screen. Useful for displaying corporate messages. 
- -`xpack.security.authc.anonymous.*` -: To learn more on how to enable anonymous access, check [Enabling anonymous access](/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md) - -`xpack.notification.slack` -: Configures [Slack notification settings](/explore-analyze/alerts-cases/watcher/actions-slack.md). Note that you need to add `secure_url` as a [secret value to the keystore](../../../deploy-manage/security/secure-settings.md). - -`xpack.notification.pagerduty` -: Configures [PagerDuty notification settings](/explore-analyze/alerts-cases/watcher/actions-pagerduty.md#configuring-pagerduty). - -`xpack.watcher.trigger.schedule.engine` -: Defines when the watch should start, based on date and time [Learn more](/explore-analyze/alerts-cases/watcher/trigger-schedule.md). - -`xpack.notification.email.html.sanitization.*` -: Enables [email notification settings](elasticsearch://reference/elasticsearch/configuration-reference/watcher-settings.md) to sanitize HTML elements in emails that are sent. - -`xpack.monitoring.collection.interval` -: Controls [how often data samples are collected](elasticsearch://reference/elasticsearch/configuration-reference/monitoring-settings.md#monitoring-collection-settings). - -`xpack.monitoring.collection.min_interval_seconds` -: Specifies the minimum number of seconds that a time bucket in a chart can represent. If you modify the `xpack.monitoring.collection.interval`, use the same value in this setting. - - Defaults to `10` (10 seconds). - - -$$$xpack-monitoring-history-duration$$$`xpack.monitoring.history.duration` -: Sets the [retention duration](elasticsearch://reference/elasticsearch/configuration-reference/monitoring-settings.md#monitoring-collection-settings) beyond which the indices created by a monitoring exporter will be automatically deleted. 
-
-`xpack.watcher.history.cleaner_service.enabled`
-: Controls [whether old watcher indices are automatically deleted](elasticsearch://reference/elasticsearch/configuration-reference/watcher-settings.md#general-notification-settings).
-
-`xpack.http.ssl.cipher_suites`
-: Controls the list of supported cipher suites for all outgoing TLS connections.
-
-`xpack.security.authc.realms.saml.*`
-: To learn more about enabling SAML and related user settings, check [secure your clusters with SAML](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md).
-
-`xpack.security.authc.realms.oidc.*`
-: To learn more about enabling OpenID Connect and related user settings, check [secure your clusters with OpenID Connect](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md).
-
-`xpack.security.authc.realms.kerberos.*`
-: To learn more about enabling Kerberos and related user settings, check [secure your clusters with Kerberos](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md).
-
-`xpack.security.authc.realms.jwt.*`
-: To learn more about enabling JWT and related user settings, check [secure your clusters with JWT](../../../deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md).
-
-::::{note}
-All SAML, OpenID Connect, Kerberos, and JWT settings are allowlisted.
-::::
-
-
-
-### Search [ec_search]
-
-The following search settings are supported:
-
-* `search.aggs.rewrite_to_filter_by_filter`
-
-
-### Disk-based shard allocation settings [shard-allocation-settings]
-
-The following disk-based allocation settings are supported:
-
-`cluster.routing.allocation.disk.threshold_enabled`
-: Enables or disables the [disk allocation](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#disk-based-shard-allocation) decider. Defaults to `true`.
-
-`cluster.routing.allocation.disk.watermark.low`
-: Configures [disk-based shard allocation’s low watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#disk-based-shard-allocation).
-
-`cluster.routing.allocation.disk.watermark.high`
-: Configures [disk-based shard allocation’s high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#disk-based-shard-allocation).
-
-`cluster.routing.allocation.disk.watermark.flood_stage`
-: Configures [disk-based shard allocation’s flood_stage watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#disk-based-shard-allocation).
-
-::::{tip}
-Remember to update user settings for alerts when performing a major version upgrade.
-::::
-
-
-
-### Enrich settings [ec_enrich_settings]
-
-The following enrich settings are supported:
-
-`enrich.cache_size`
-: Maximum number of searches to cache for enriching documents. Defaults to `1000`. There is a single cache for all enrich processors in the cluster. This setting determines the size of that cache.
-
-`enrich.coordinator_proxy.max_concurrent_requests`
-: Maximum number of concurrent multi-search requests to run when enriching documents. Defaults to `8`.
-
-`enrich.coordinator_proxy.max_lookups_per_request`
-: Maximum number of searches to include in a multi-search request when enriching documents. Defaults to `128`.
-
-`enrich.coordinator_proxy.queue_capacity`
-: The coordinator queue capacity. Defaults to `max_concurrent_requests * max_lookups_per_request`.
-
-
-### Audit settings [ec_audit_settings]
-
-The following audit settings are supported:
-
-`xpack.security.audit.enabled`
-: Enables auditing on Elasticsearch cluster nodes. Defaults to *false*.
-
-`xpack.security.audit.logfile.events.include`
-: Specifies which events to include in the auditing output.
-
-`xpack.security.audit.logfile.events.exclude`
-: Specifies which events to exclude from the output. No events are excluded by default.
-
-`xpack.security.audit.logfile.events.emit_request_body`
-: Specifies whether to include the request body from REST requests on certain event types, for example *authentication_failed*. Defaults to *false*.
-
-`xpack.security.audit.logfile.emit_node_name`
-: Specifies whether to include the node name as a field in each audit event. Defaults to *true*.
-
-`xpack.security.audit.logfile.emit_node_host_address`
-: Specifies whether to include the node’s IP address as a field in each audit event. Defaults to *false*.
-
-`xpack.security.audit.logfile.emit_node_host_name`
-: Specifies whether to include the node’s host name as a field in each audit event. Defaults to *false*.
-
-`xpack.security.audit.logfile.emit_node_id`
-: Specifies whether to include the node ID as a field in each audit event. Defaults to *true*.
-
-`xpack.security.audit.logfile.events.ignore_filters.<policy_name>.users`
-: A list of user names or wildcards. The specified policy will not print audit events for users matching these values.
-
-`xpack.security.audit.logfile.events.ignore_filters.<policy_name>.realms`
-: A list of authentication realm names or wildcards. The specified policy will not print audit events for users in these realms.
-
-`xpack.security.audit.logfile.events.ignore_filters.<policy_name>.roles`
-: A list of role names or wildcards. The specified policy will not print audit events for users that have these roles.
-
-`xpack.security.audit.logfile.events.ignore_filters.<policy_name>.indices`
-: A list of index names or wildcards. The specified policy will not print audit events when all the indices in the event match these values.
-
-`xpack.security.audit.logfile.events.ignore_filters.<policy_name>.actions`
-: A list of action names or wildcards. The specified policy will not print audit events for actions matching these values.
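Put together, an audit configuration with one ignore policy might look like the following sketch. The policy name `example_policy` and the filter values are illustrative, not defaults:

```yaml
# Illustrative audit configuration with a single ignore policy
xpack.security.audit.enabled: true
xpack.security.audit.logfile.events.emit_request_body: false
xpack.security.audit.logfile.events.ignore_filters:
  example_policy:
    users: ["*_internal"]
    actions: ["cluster:monitor/*"]
```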
- -::::{note} -To enable auditing you must first [enable deployment logging](../../../deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md). -:::: - - - -### Universal Profiling settings [ec_universal_profiling_settings] - -The following settings for Elastic Universal Profiling are supported: - -`xpack.profiling.enabled` -: *Version 8.7.0+*: Specifies whether the Universal Profiling Elasticsearch plugin is enabled. Defaults to *true*. - -`xpack.profiling.templates.enabled` -: *Version 8.9.0+*: Specifies whether Universal Profiling related index templates should be created on startup. Defaults to *false*. diff --git a/raw-migrated-files/cloud/cloud/ec-billing-stop.md b/raw-migrated-files/cloud/cloud/ec-billing-stop.md deleted file mode 100644 index 10609f59d..000000000 --- a/raw-migrated-files/cloud/cloud/ec-billing-stop.md +++ /dev/null @@ -1,18 +0,0 @@ -# Stop charges for a deployment [ec-billing-stop] - -Got a deployment you no longer need and don’t want to be charged for any longer? Simply delete it. - -::::{important} -**All data is lost.** Billing for usage is by the hour and any outstanding charges for usage before you deleted the deployment will still appear on your next bill. -:::: - - -To stop being charged for a deployment: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. Select **Delete deployment** and confirm the deletion. 
- diff --git a/raw-migrated-files/cloud/cloud/ec-custom-bundles.md b/raw-migrated-files/cloud/cloud/ec-custom-bundles.md deleted file mode 100644 index b78577cb2..000000000 --- a/raw-migrated-files/cloud/cloud/ec-custom-bundles.md +++ /dev/null @@ -1,245 +0,0 @@ -# Upload custom plugins and bundles [ec-custom-bundles] - -There are several cases where you might need your own files to be made available to your {{es}} cluster’s nodes: - -* Your own custom plugins, or third-party plugins that are not amongst the [officially available plugins](../../../deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md). -* Custom dictionaries, such as synonyms, stop words, compound words, and so on. -* Cluster configuration files, such as an Identity Provider metadata file used when you [secure your clusters with SAML](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md). - -To facilitate this, we make it possible to upload a ZIP file that contains the files you want to make available. Uploaded files are stored using Amazon’s highly-available S3 service. This is necessary so we do not have to rely on the availability of third-party services, such as the official plugin repository, when provisioning nodes. - -Custom plugins and bundles are collectively referred to as extensions. - -## Before you begin [ec_before_you_begin_7] - -The selected plugins/bundles are downloaded and provided when a node starts. Changing a plugin does not change it for nodes already running it. Refer to [Updating Plugins and Bundles](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ec-update-bundles-and-plugins). - -With great power comes great responsibility: your plugins can extend your deployment with new functionality, but also break it. Be careful. We obviously cannot guarantee that your custom code works. - -::::{important} -You cannot edit or delete a custom extension after it has been used in a deployment. 
To remove it from your deployment, you can disable the extension and update your deployment configuration.
-::::
-
-
-Uploaded files cannot be bigger than 20MB for most subscription levels; for Platinum and Enterprise the limit is 8GB.
-
-It is important that plugins and dictionaries that you reference in mappings and configurations are available at all times. For example, if you try to upgrade {{es}} and de-select a dictionary that is referenced in your mapping, the new nodes will be unable to recover the cluster state and function. This is true even if the dictionary is referenced by an empty index you do not actually use.
-
-
-## Prepare your files for upload [ec-prepare-custom-bundles]
-
-Plugins are uploaded as ZIP files. You need to choose whether your uploaded file should be treated as a *plugin* or as a *bundle*. Bundles are not installed as plugins. If you need to upload both a custom plugin and custom dictionaries, upload them separately.
-
-To prepare your files, create one of the following:
-
-Plugins
-: A plugin is a ZIP file that contains a plugin descriptor file and binaries.
-
-    The plugin descriptor file is called either `stable-plugin-descriptor.properties` for plugins built against the stable plugin API, or `plugin-descriptor.properties` for plugins built against the classic plugin API. A plugin ZIP file should only contain one plugin descriptor file.
-
-    {{es}} assumes that the uploaded ZIP file contains binaries. If it finds any source code, it fails with an error message, causing provisioning to fail. Make sure you upload binaries, and not source code.
-
-    ::::{note}
-    Plugins larger than 5GB should have the plugin descriptor file at the top of the archive.
You can achieve this order by listing the descriptor file first when you create the ZIP file:
-
-    ```sh
-    zip -r name-of-plugin.zip name-of-descriptor-file.properties *
-    ```
-
-    ::::
-
-
-Bundles
-: The entire content of a bundle is made available to the node by extracting to the {{es}} container’s `/app/config` directory. This is useful to make custom dictionaries available. Dictionaries should be placed in a `/dictionaries` folder in the root path of your ZIP file.
-
-    Here are some examples of bundles:
-
-    **Script**
-
-    ```text
-    $ tree .
-    .
-    └── scripts
-        └── test.js
-    ```
-
-    The script `test.js` can be referred to in queries as `"script": "test"`.
-
-    **Dictionary of synonyms**
-
-    ```text
-    $ tree .
-    .
-    └── dictionaries
-        └── synonyms.txt
-    ```
-
-    The dictionary `synonyms.txt` can be used as `synonyms.txt` or using the full path `/app/config/synonyms.txt` in the `synonyms_path` of the `synonym-filter`.
-
-    To learn more about analyzing with synonyms, check [Synonym token filter](elasticsearch://reference/data-analysis/text-analysis/analysis-synonym-tokenfilter.md) and [Formatting Synonyms](https://www.elastic.co/guide/en/elasticsearch/guide/2.x/synonym-formats.html).
-
-    **GeoIP database bundle**
-
-    ```text
-    $ tree .
-    .
-    └── ingest-geoip
-        └── MyGeoLite2-City.mmdb
-    ```
-
-    Note that the file name must end in `-(City|Country|ASN).mmdb`, and it must differ from the original file name `GeoLite2-City.mmdb`, which already exists in {{ech}}. To use this bundle, reference it in the GeoIP ingest pipeline as `MyGeoLite2-City.mmdb` under `database_file`.
-
-
-
-## Add your extension [ec-add-your-plugin]
-
-You must upload your files before you can apply them to your cluster configuration:
-
-1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body).
-2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly.
Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments.
-3. Under **Features**, select **Extensions**.
-4. Select **Upload extension**.
-5. Complete the extension fields, including the {{es}} version.
-
-    * Plugins must use full version notation down to the patch level, such as `7.10.1`. You cannot use wildcards. This version notation should match the version in your plugin’s plugin descriptor file. For classic plugins, it should also match the target deployment version.
-    * Bundles should specify major or minor versions with wildcards, such as `7.*` or `*`. Wildcards are recommended to ensure the bundle is compatible across all versions of these releases.
-
-6. Select the extension **Type**.
-7. Under **Plugin file**, choose the file to upload.
-8. Select **Create extension**.
-
-After creating your extension, you can [enable it for existing {{es}} deployments](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ec-update-bundles) or enable it when creating new deployments.
-
-::::{note}
-Creating extensions larger than 200MB should be done through the extensions API.
-
-Refer to [Managing plugins and extensions through the API](../../../deploy-manage/deploy/elastic-cloud/manage-plugins-extensions-through-api.md) for more details.
-
-::::
-
-
-
-## Update your deployment configuration [ec-update-bundles]
-
-After uploading your files, you can choose to enable them when creating a new {{es}} deployment. For existing deployments, you must update your deployment configuration to use the new files:
-
-1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body).
-2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments.
-
-    On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
-
-3. From the **Actions** dropdown, select **Edit deployment**.
-4. Select **Manage user settings and extensions**.
-5. Select the **Extensions** tab.
-6. Select the custom extension.
-7. Select **Back**.
-8. Select **Save**. The {{es}} cluster is then updated with new nodes that have the plugin installed.
-
-
-## Update your extension [ec-update-bundles-and-plugins]
-
-While you can update the ZIP file for any plugin or bundle, these are downloaded and made available only when a node is started.
-
-Be careful when updating an extension. If you update an existing extension with a broken file, all the nodes could be affected, because a node restart or move could make even highly available clusters unavailable.
-
-If the extension is not in use by any deployments, you are free to update the files or extension details as much as you like. However, if the extension is in use and you need to update it with a new file, it is recommended to [create a new extension](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ec-add-your-plugin) rather than updating the existing one that is in use.
-
-By following this method, only one node would be down even if the extension file is faulty. This ensures that highly available clusters remain available.
-
-This method also supports having a test/staging deployment to test out the extension changes before applying them on a production deployment.
-
-You may delete the old extension after updating the deployment successfully.
-
-To update an extension with a new file version:
-
-1. Prepare a new plugin or bundle.
-2.
On the **Extensions** page, [upload a new extension](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ec-add-your-plugin).
-3. Make your new files available by uploading them.
-4. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments.
-
-    On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
-
-5. From the **Actions** dropdown, select **Edit deployment**.
-6. Select **Manage user settings and extensions**.
-7. Select the **Extensions** tab.
-8. Select the new extension and de-select the old one.
-9. Select **Back**.
-10. Select **Save**.
-
-
-## How to use the extensions API [ec-extension-api-usage-guide]
-
-::::{note}
-For a full set of examples, check [Managing plugins and extensions through the API](../../../deploy-manage/deploy/elastic-cloud/manage-plugins-extensions-through-api.md).
-::::
-
-
-If you don’t already have one, create an [API key](../../../deploy-manage/api-keys/elastic-cloud-api-keys.md).
-
-There are two ways to upload a file using the extensions API.
- -### Method 1: Use HTTP `POST` to create metadata and then upload the file using HTTP `PUT` [ec_method_1_use_http_post_to_create_metadata_and_then_upload_the_file_using_http_put] - -Step 1: Create metadata - -```text -curl -XPOST \ --H "Authorization: ApiKey $EC_API_KEY" \ --H 'content-type:application/json' \ -https://api.elastic-cloud.com/api/v1/deployments/extensions \ --d'{ - "name" : "synonyms-v1", - "description" : "The best synonyms ever", - "extension_type" : "bundle", - "version" : "7.*" -}' -``` - -Step 2: Upload the file - -```text -curl -XPUT \ --H "Authorization: ApiKey $EC_API_KEY" \ -"https://api.elastic-cloud.com/api/v1/deployments/extensions/$extension_id" \ --T /tmp/synonyms.zip -``` - -If you are using a client that does not have native `application/zip` handling like `curl`, be sure to use the equivalent of the following with `content-type: multipart/form-data`: - -```text -curl -XPUT \ --H 'Expect:' \ --H 'content-type: multipart/form-data' \ --H "Authorization: ApiKey $EC_API_KEY" \ -"https://api.elastic-cloud.com/api/v1/deployments/extensions/$extension_id" -F "file=@/tmp/synonyms.zip" -``` - -For example, using the Python `requests` module, the `PUT` request would be as follows: - -```text -import requests -files = {'file': open('/tmp/synonyms.zip','rb')} -r = requests.put('https://api.elastic-cloud.com/api/v1/deployments/extensions/{}'.format(extension_id), files=files, headers= {'Authorization': 'ApiKey {}'.format(EC_API_KEY)}) -``` - - -### Method 2: Single step. 
Use a `download_url` so that the API server downloads the object at the specified URL [ec_method_2_single_step_use_a_download_url_so_that_the_api_server_downloads_the_object_at_the_specified_url]
-
-```text
-curl -XPOST \
--H "Authorization: ApiKey $EC_API_KEY" \
--H 'content-type:application/json' \
-https://api.elastic-cloud.com/api/v1/deployments/extensions \
--d'{
-  "name" : "analysis_icu",
-  "description" : "Helpful description",
-  "extension_type" : "plugin",
-  "version" : "7.13.2",
-  "download_url": "https://artifacts.elastic.co/downloads/elasticsearch-plugins/analysis-icu/analysis-icu-7.13.2.zip"
-}'
-```
-
-Refer to the [Extensions API reference](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-extensions) for the complete set of HTTP methods and payloads.
-
-
-
diff --git a/raw-migrated-files/cloud/cloud/ec-custom-repository.md b/raw-migrated-files/cloud/cloud/ec-custom-repository.md
deleted file mode 100644
index cf790ff52..000000000
--- a/raw-migrated-files/cloud/cloud/ec-custom-repository.md
+++ /dev/null
@@ -1,23 +0,0 @@
-# Snapshot and restore with custom repositories [ec-custom-repository]
-
-Specify your own repositories to snapshot to and restore from. This can be useful, for example, to do long-term archiving of old indexes, restore snapshots across Elastic Cloud accounts, or to be certain you have an exit strategy, should you need to move away from our service.
-
-{{ech}} supports these repositories:
-
-* [Amazon Web Services (AWS)](../../../deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md)
-* [Google Cloud Storage (GCS)](../../../deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md)
-* [Azure Blob Storage](../../../deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md)
-
-::::{note}
-Automated snapshots are only available in the *found snapshots* repository. You are responsible for the execution and maintenance of the snapshots that you store in custom repositories.
Note that the automated snapshot frequency might conflict with manual snapshots. You can enable SLM to automate snapshot management in a custom repository.
-::::
-
-
-::::{tip}
-By using a custom repository, you can restore snapshots across regions.
-::::
-
-
-
-
diff --git a/raw-migrated-files/cloud/cloud/ec-delete-deployment.md b/raw-migrated-files/cloud/cloud/ec-delete-deployment.md
deleted file mode 100644
index c7e4a2d8e..000000000
--- a/raw-migrated-files/cloud/cloud/ec-delete-deployment.md
+++ /dev/null
@@ -1,18 +0,0 @@
-# Delete your deployment [ec-delete-deployment]
-
-To delete your deployment, select **Delete deployment** from the deployment overview page.
-
-When you delete your deployment, billing stops immediately, rounded up to the nearest hour.
-
-::::{warning}
-When deployments are deleted, we erase all data on disk, including snapshots. Snapshots are retained for a very limited amount of time after deletion, and for this reason we cannot guarantee that deleted deployments can be restored from snapshots. If you accidentally delete a deployment, contact support as soon as possible to increase the likelihood of restoring your deployment.
-::::
-
-
-::::{tip}
-If you want to keep snapshots for future use even after the deployment is deleted, you should [use a custom snapshot repository](../../../deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md).
-::::
-
-
-Billing restarts as soon as the deployment is restored.
-
diff --git a/raw-migrated-files/cloud/cloud/ec-editing-user-settings.md b/raw-migrated-files/cloud/cloud/ec-editing-user-settings.md
deleted file mode 100644
index cada29207..000000000
--- a/raw-migrated-files/cloud/cloud/ec-editing-user-settings.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# Edit your user settings [ec-editing-user-settings]
-
-From the {{ecloud}} Console you can customize Elasticsearch, Kibana, and related products to suit your needs.
These editors append your changes to the appropriate YAML configuration file and they affect all users of that cluster. In each editor you can: - -* [Dictate the behavior of Elasticsearch and its security features](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md). -* [Manage Kibana’s settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md). -* [Tune your APM Server](../../../solutions/observability/apps/configure-apm-server.md). - - - - - diff --git a/raw-migrated-files/cloud/cloud/ec-faq-getting-started.md b/raw-migrated-files/cloud/cloud/ec-faq-getting-started.md deleted file mode 100644 index 02db11453..000000000 --- a/raw-migrated-files/cloud/cloud/ec-faq-getting-started.md +++ /dev/null @@ -1,73 +0,0 @@ -# {{ech}} FAQ [ec-faq-getting-started] - -This frequently-asked-questions list helps you with common questions while you get {{ech}} up and running for the first time. For questions about {{ech}} configuration options or billing, check the [Technical FAQ](../../../deploy-manage/index.md) and the [Billing FAQ](../../../deploy-manage/cloud-organization/billing/billing-faq.md). 
- -* [What is {{ech}}?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-what) -* [Is {{ech}} the same as Amazon’s {{es}} Service?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-aws-difference) -* [Can I run the full Elastic Stack in {{ech}}?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-full-stack) -* [Can I try {{ech}} for free?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-trial) -* [What if I need to change the size of my {{es}} cluster at a later time?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-config) -* [Do you offer support subscriptions?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-subscriptions) -* [Where is {{ech}} hosted?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-where) -* [What is the difference between {{ech}} and the Amazon {{es}} Service?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-vs-aws) -* [Can I use {{ech}} on platforms other than AWS?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-aws) -* [Do you offer Elastic’s commercial products?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-elastic) -* [Is my {{es}} cluster protected by X-Pack?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-x-pack) -* [Is there a limit on the number of documents or indexes I can have in my cluster?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-limit) - - $$$faq-what$$$What is {{ech}}? - : {{ech}} is hosted and managed {{es}} and {{kib}} brought to you by the creators of {{es}}. {{ech}} is part of Elastic Cloud and ships with features that you can only get from the company behind {{es}}, {{kib}}, {{beats}}, and {{ls}}. {{es}} is a full text search engine that suits a range of uses, from search on websites to big data analytics and more. - - $$$faq-aws-difference$$$Is {{ech}} the same as Amazon’s {{es}} Service? 
- : {{ech}} is not the same as the Amazon {{es}} service. To learn more about the differences, check our [AWS {{es}} Service](https://www.elastic.co/aws-elasticsearch-service) comparison. - - $$$faq-full-stack$$$Can I run the full Elastic Stack in {{ech}}? - : Many of the products that are part of the Elastic Stack are readily available in {{ech}}, including {{es}}, {{kib}}, plugins, and features such as monitoring and security. Use other Elastic Stack products directly with {{ech}}. For example, both Logstash and Beats can send their data to {{ech}}. What is run is determined by the [subscription level](https://www.elastic.co/cloud/as-a-service/subscriptions). - - $$$faq-trial$$$Can I try {{ech}} for free? - : Yes, sign up for a 14-day free trial. The trial starts the moment a cluster is created. - - During the free trial period get access to a deployment to explore Elastic solutions for Search, Observability, Security, or the latest version of the Elastic Stack. - - - $$$faq-config$$$What if I need to change the size of my {{es}} cluster at a later time? - : Scale your clusters both up and down from the user console, whenever you like. The resizing of the cluster is transparently done in the background, and highly available clusters are resized without any downtime. If you scale your cluster down, make sure that the downsized cluster can handle your {{es}} memory requirements. Read more about sizing and memory in [Sizing {{es}}](https://www.elastic.co/blog/found-sizing-elasticsearch). - - $$$faq-subscriptions$$$Do you offer support? - : Yes, all subscription levels for {{ech}} include support, handled by email or through the Elastic Support Portal. Different subscription levels include different levels of support. For the Standard subscription level, there is no service-level agreement (SLA) on support response times. Gold and Platinum subscription levels include an SLA on response times to tickets and dedicated resources. 
To learn more, check [Getting Help](../../../troubleshoot/index.md). - - $$$faq-where$$$Where is {{ech}} hosted? - : We host our {{es}} clusters on Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Check out which [regions we support](cloud://reference/cloud-hosted/regions.md) and what [hardware we use](cloud://reference/cloud-hosted/hardware.md). New data centers are added all the time. - - $$$faq-vs-aws$$$What is the difference between {{ech}} and the Amazon {{es}} Service? - : {{ech}} is the only hosted and managed {{es}} service built, managed, and supported by the company behind {{es}}, {{kib}}, {{beats}}, and {{ls}}. With {{ech}}, you always get the latest versions of the software. Our service is built on best practices and years of experience hosting and managing thousands of {{es}} clusters in the Cloud and on premise. For more information, check the following Amazon and Elastic {{es}} Service [comparison page](https://www.elastic.co/aws-elasticsearch-service). - - Please note that there is no formal partnership between Elastic and Amazon Web Services (AWS), and Elastic does not provide any support on the AWS {{es}} Service. - - - $$$faq-aws$$$Can I use {{ech}} on platforms other than AWS? - : Yes, create deployments on the Google Cloud Platform and Microsoft Azure. - - $$$faq-elastic$$$Do you offer Elastic’s commercial products? - : Yes, all {{ech}} customers have access to basic authentication, role-based access control, and monitoring. - - {{ech}} Gold, Platinum and Enterprise customers get complete access to all the capabilities in X-Pack: - - * Security - * Alerting - * Monitoring - * Reporting - * Graph Analysis & Visualization - - [Contact us](https://www.elastic.co/cloud/contact) to learn more. - - - $$$faq-x-pack$$$Is my Elasticsearch cluster protected by X-Pack? - : Yes, X-Pack security features offer the full power to protect your {{ech}} deployment with basic authentication and role-based access control. 
- - $$$faq-limit$$$Is there a limit on the number of documents or indexes I can have in my cluster? - : No. We do not enforce any artificial limit on the number of indexes or documents you can store in your cluster. - - That said, there is a limit to how many indexes Elasticsearch can cope with. Every shard of every index is a separate Lucene index, which in turn comprises several files. A process cannot have an unlimited number of open files. Also, every shard has its associated control structures in memory. So, while we will let you make as many indexes as you want, there are limiting factors. Our larger plans provide your processes with more dedicated memory and CPU-shares, so they are capable of handling more indexes. The number of indexes or documents you can fit in a given plan therefore depends on their structure and use. - - diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started-trial.md b/raw-migrated-files/cloud/cloud/ec-getting-started-trial.md deleted file mode 100644 index 14f43a331..000000000 --- a/raw-migrated-files/cloud/cloud/ec-getting-started-trial.md +++ /dev/null @@ -1,79 +0,0 @@ -# How do I sign up? [ec-getting-started-trial] - -To sign up, all you need is an email address: - -1. Go to our [Elastic Cloud Trial](https://cloud.elastic.co/registration?page=docs&placement=docs-body) page. -2. Enter your email address and password, or sign up with a Google or Microsoft account. Make sure you’ve read through our [terms of service](https://www.elastic.co/legal/elastic-cloud-account-terms). - -You are ready to [create your first deployment](../../../deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md). - - -## What is included in my trial? [ec_what_is_included_in_my_trial] - -Your 14-day free trial includes: - -**One hosted deployment** - -A deployment lets you explore Elastic solutions for Search, Observability, and Security. Trial deployments run on the latest version of the Elastic Stack. 
They include 8 GB of RAM spread out over two availability zones, and enough storage space to get you started. If you’re looking to evaluate a smaller workload, you can scale down your trial deployment. Each deployment includes Elastic features such as Maps, SIEM, machine learning, advanced security, and much more. You have some sample data sets to play with and tutorials that describe how to add your own data. - - **One serverless project** - - Serverless projects package Elastic Stack features by type of solution: Elasticsearch, Observability, and Security. When you create a project, you select the project type applicable to your use case, so only the relevant and impactful applications and features are easily accessible to you. - - To learn more about serverless Elastic Cloud, check [our serverless documentation](https://docs.elastic.co/serverless). - - ::::{tip} - During the trial period, you are limited to one active hosted deployment and one active serverless project at a time. When you subscribe, you can create additional deployments and projects. - :::: - - - - ## What limits are in place during a trial? [ec_what_limits_are_in_place_during_a_trial] - - During the free 14-day trial, Elastic provides access to one hosted deployment and one serverless project. If all you want to do is try out Elastic, the trial includes more than enough to get you started. During the trial period, some limitations apply. - - **Hosted deployments** - - * You can have one active deployment at a time - * The deployment size is limited to 8 GB of RAM and approximately 360 GB of storage, depending on the specified hardware profile - * Machine learning nodes are available up to 4 GB of RAM - * Custom Elasticsearch plugins are not enabled - - **Serverless projects** - - * You can have one active serverless project at a time. - * Search Power is limited to 100. This setting only exists in Elasticsearch projects. - * Search Boost Window is limited to 7 days. This setting only exists in Elasticsearch projects.
- - Find more details in [our serverless documentation](https://docs.elastic.co/serverless). - - **How do I remove restrictions?** - - To remove limitations, subscribe to [Elastic Cloud](../../../deploy-manage/cloud-organization/billing/add-billing-details.md). Elastic Cloud subscriptions include the following benefits: - - * Increased memory or storage for deployment components, such as Elasticsearch clusters, machine learning nodes, and APM server. - * As many deployments and projects as you need. - * A third availability zone for your deployments. - * Access to additional features, such as cross-cluster search and cross-cluster replication. - - You can subscribe to Elastic Cloud at any time during your trial. Billing starts when you subscribe. To maximize the benefits of your trial, subscribe at the end of the free period. To monitor charges, anticipate future costs, and adjust your usage, check your [account usage](../../../deploy-manage/cloud-organization/billing/monitor-analyze-usage.md) and [billing history](../../../deploy-manage/cloud-organization/billing/view-billing-history.md). - - ## How do I get started with my trial? [ec_how_do_i_get_started_with_my_trial] - - Start by checking out some common approaches for [moving data into Elastic Cloud](../../../manage-data/ingest.md). - - ## How do I sign up through a marketplace? [ec_how_do_i_sign_up_through_a_marketplace] - - If you’re interested in consolidated billing, you’ll want to [subscribe from a Marketplace](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). This skips over your trial period and connects your Marketplace email to your [unique Elastic account](../../../cloud-account/update-your-email-address.md). - - ::::{note} - [Serverless projects](https://docs.elastic.co/serverless) are only available for AWS Marketplace. Support for GCP Marketplace and Azure Marketplace will be added in the near future. - :::: - - - - ## How do I get help? [ec_how_do_i_get_help] - - We’re here to help.
If you have any questions, feel free to reach out to [Support](https://cloud.elastic.co/support). diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started.md b/raw-migrated-files/cloud/cloud/ec-getting-started.md deleted file mode 100644 index bccf92ab6..000000000 --- a/raw-migrated-files/cloud/cloud/ec-getting-started.md +++ /dev/null @@ -1,77 +0,0 @@ -# Introducing {{ech}} [ec-getting-started] - -::::{note} -Are you just discovering Elastic, or are you unfamiliar with the core concepts of the Elastic Stack? Would you like to be guided through the very first steps and understand how Elastic can help you? Try one of our [getting started guides](https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-guides.html) first. -:::: - - - -## What is {{ech}}? [ec_what_is_elasticsearch_service] - -**The Elastic Stack, managed through {{ecloud}} deployments.** - -{{ech}} allows you to manage one or more instances of the Elastic Stack through **deployments**. These deployments are hosted on {{ecloud}}, through the cloud provider and regions of your choice, and are tied to your organization account. - -A *deployment* helps you manage an Elasticsearch cluster and instances of other Elastic products, like Kibana or APM instances, in one place. Spin up, scale, upgrade, and delete your Elastic Stack products without having to manage each one separately. In a deployment, everything works together. - -::::{note} -If you are instead interested in serverless Elastic Cloud, check the [serverless documentation](https://docs.elastic.co/serverless). -:::: - - -**Hardware profiles to optimize deployments for your usage.** - -You can optimize the configuration and performance of a deployment by selecting a **hardware profile** that matches your usage. - -*Hardware profiles* are presets that provide a unique blend of storage, memory, and vCPU for each component of a deployment.
They support a specific purpose, such as a hot-warm architecture that helps you manage your data storage retention. - -You can use these presets, or start from them to get the unique configuration you need. They can vary slightly from one cloud provider or region to another to align with the available virtual hardware. - -**Solutions to help you make the most out of your data in each deployment.** - -Building a rich search experience, gaining actionable insight into your environment, or protecting your systems and endpoints? You can implement each of these major use cases, and more, with the solutions that are pre-built in each Elastic deployment. - -:::{image} ../../../images/cloud-ec-stack-components.png -:alt: Elastic Stack components and solutions with Enterprise Search -::: - -:::{important} -Enterprise Search is not available in {{stack}} 9.0+. -::: - -These solutions help you accomplish your use cases: Ingest data into the deployment and set up specific capabilities of the Elastic Stack. - -Of course, you can choose to follow your own path and use Elastic components available in your deployment to ingest, visualize, and analyze your data independently from solutions. - - -## How to operate {{ech}}? [ec_how_to_operate_elasticsearch_service] - -**Where to start?** - -* Try one of our solutions by following our [getting started guides](https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-guides.html). -* Sign up using your preferred method: - - * [Sign Up for a Trial](../../../deploy-manage/deploy/elastic-cloud/create-an-organization.md) - Sign up, check what your free trial includes and when we require a credit card. - * [Sign Up from Marketplace](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md) - Consolidate billing portals by signing up through one of the available marketplaces. 
- - * Set up your account by [completing your user or organization profile](../../../deploy-manage/cloud-organization/billing.md) and by [inviting users to your organization](../../../deploy-manage/cloud-organization.md). - * [Create a deployment](../../../deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md) - Get up and running very quickly. Select your desired configuration and let Elastic deploy Elasticsearch, Kibana, and the Elastic products you need. In a deployment, everything works together, and everything runs on hardware that is optimized for your use case. - * [Connect your data to your deployment](../../../manage-data/ingest.md) - Ingest and index the data you want, from a variety of sources, and take action on it. - - **Adjust the capacity and capabilities of your deployments for production** - - There are a few things that can help you make sure that your production deployments remain available, healthy, and ready to handle your data in a scalable way over time, with the expected level of performance. We’ve listed these things for you in [Preparing for production](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md). - - **Secure your environment** - - Control which users and services can access your deployments by [securing your environment](../../../deploy-manage/users-roles/cluster-or-deployment-auth.md). Add authentication mechanisms, configure [traffic filtering](../../../deploy-manage/security/traffic-filtering.md) for private link, encrypt your deployment data and snapshots at rest [with your own key](../../../deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md), manage trust with Elasticsearch clusters from other environments, and more. - - **Monitor your deployments and keep them healthy** - - {{ech}} provides several ways to monitor your deployments, anticipate and prevent issues, or fix them when they occur.
Check [Monitoring your deployment](../../../deploy-manage/monitor/stack-monitoring.md) to get more details. - -**And then?** - -Now is the time for you to work with your data. The content of the {{ecloud}} section helps you get your environment up and ready to handle your data the way you need. You can always adjust your deployments and their configuration as your usage evolves over time. - -To get the most out of the solutions that the Elastic Stack offers, [log in to {{ecloud}}](https://cloud.elastic.co) or [browse the documentation](https://www.elastic.co/docs). diff --git a/raw-migrated-files/cloud/cloud/ec-manage-apm-settings.md b/raw-migrated-files/cloud/cloud/ec-manage-apm-settings.md deleted file mode 100644 index e57b3ee67..000000000 --- a/raw-migrated-files/cloud/cloud/ec-manage-apm-settings.md +++ /dev/null @@ -1,369 +0,0 @@ -# Edit APM user settings [ec-manage-apm-settings] - -Change how Elastic APM runs by providing your own user settings. Starting in {{stack}} version 8.0, how you change APM settings and the settings that are available to you depend on how you spin up Elastic APM. There are two modes: - -{{fleet}}-managed APM integration -: New deployments created in {{stack}} version 8.0 and later will be managed by {{fleet}}. - - Check [APM configuration reference](/solutions/observability/apps/configure-apm-server.md) for information on how to configure Elastic APM in this mode. - - -Standalone APM Server (legacy) -: Deployments created prior to {{stack}} version 8.0 are in legacy mode. Upgrading to or past {{stack}} 8.0 will not remove you from legacy mode. - - Check [Edit standalone APM settings (legacy)](../../../solutions/observability/apps/configure-apm-server.md#ec-edit-apm-standalone-settings) and [Supported standalone APM settings (legacy)](../../../solutions/observability/apps/configure-apm-server.md#ec-apm-settings) for information on how to configure Elastic APM in this mode. 
- - - To learn more about the differences between these modes, or to switch from Standalone APM Server (legacy) mode to {{fleet}}-managed, check [Switch to the Elastic APM integration](/solutions/observability/apps/switch-to-elastic-apm-integration.md). - - ## Edit standalone APM settings (legacy) [ec-edit-apm-standalone-settings] - - User settings are appended to the `apm-server.yml` configuration file for your instance and provide custom configuration options. - - To add user settings: - - 1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). - 2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - - 3. From your deployment menu, go to the **Edit** page. - 4. In the **APM** section, select **Edit user settings**. (For existing deployments with user settings, you may have to expand the **Edit apm-server.yml** caret instead.) - 5. Update the user settings. - 6. Select **Save changes**. - - ::::{note} - If a setting is not supported by {{ech}}, you will get an error message when you try to save. - :::: - - - - ## Supported standalone APM settings (legacy) [ec-apm-settings] - - {{ech}} supports the following settings when running APM in standalone mode (legacy). - - ::::{tip} - Some settings that could break your cluster if set incorrectly are blocklisted. The following settings are generally safe in cloud environments. For detailed information about APM settings, check the [APM documentation](/solutions/observability/apps/configure-apm-server.md).
-:::: - - ### Version 8.0+ [ec_version_8_0_3] - - This stack version removes support for some previously supported settings. These are all of the supported settings for this version: - - `apm-server.agent.config.cache.expiration` - : When using APM agent configuration, determines cache expiration from information fetched from Kibana. Defaults to `30s`. - - `apm-server.aggregation.transactions.*` - : This functionality is experimental and may be changed or removed completely in a future release. When enabled, APM Server produces transaction histogram metrics that are used to power the APM app. Shifting this responsibility from APM app to APM Server results in improved query performance and removes the need to store unsampled transactions. - - The following `apm-server.auth.anonymous.*` settings can be configured to restrict anonymous access to specified agents and/or services. This is primarily intended to allow limited access for untrusted agents, such as Real User Monitoring. Anonymous auth is automatically enabled when RUM is enabled. Otherwise, anonymous auth is disabled. When anonymous auth is enabled, only agents matching `allow_agent` and services matching `allow_service` are allowed. See below for details on default values for these. - - `apm-server.auth.anonymous.allow_agent` - : Allow anonymous access only for specified agents. - - `apm-server.auth.anonymous.allow_service` - : Allow anonymous access only for specified service names. By default, all service names are allowed. This is replacing the config option `apm-server.rum.allow_service_names`, previously available for `7.x` deployments. - - `apm-server.auth.anonymous.rate_limit.event_limit` - : Defines the maximum number of events allowed per IP per second. Defaults to 300. This is replacing the config option `apm-server.rum.event_rate.limit`, previously available for `7.x` deployments.
- - `apm-server.auth.anonymous.rate_limit.ip_limit` - : Defines the number of unique client IP addresses for which rate limits are tracked, using an LRU cache. Sites with many concurrent clients should consider increasing this limit. Defaults to 1000. The overall maximum event throughput for anonymous access is (event_limit * ip_limit); for example, an `event_limit` of 300 with the default `ip_limit` of 1000 allows up to 300,000 events per second in aggregate. This is replacing the config option `apm-server.rum.event_rate.lru_size`, previously available for `7.x` deployments. - - `apm-server.auth.api_key.enabled` - : Enables agent authorization using Elasticsearch API Keys. This is replacing the config option `apm-server.api_key.enabled`, previously available for `7.x` deployments. - - `apm-server.auth.api_key.limit` - : Restrict how many unique API keys are allowed per minute. Should be set to at least the number of different API keys configured in your monitored services. Every unique API key triggers one request to Elasticsearch. This is replacing the config option `apm-server.api_key.limit`, previously available for `7.x` deployments. - - `apm-server.capture_personal_data` - : When set to `true`, the server captures the IP of the instrumented service and its User Agent. Enabled by default. - - `apm-server.default_service_environment` - : If specified, APM Server will record this value in events which have no service environment defined, and add it to agent configuration queries to Kibana when none is specified in the request from the agent. - - `apm-server.max_event_size` - : Specifies the maximum allowed size of an event for processing by the server, in bytes. Defaults to `307200`. - - `apm-server.rum.allow_headers` - : A list of Access-Control-Allow-Headers to allow RUM requests, in addition to "Content-Type", "Content-Encoding", and "Accept". - - `apm-server.rum.allow_origins` - : A list of permitted origins for real user monitoring. User-agents will send an origin header that will be validated against this list. An origin is made of a protocol scheme, host, and port, without the URL path. Allowed origins in this setting can have a wildcard `*` to match anything (for example: `http://*.example.com`).
If an item in the list is a single `*`, all origins will be allowed. - - `apm-server.rum.enabled` - : Enable Real User Monitoring (RUM) support. By default, RUM is enabled. RUM does not support token-based authorization. Enabled RUM endpoints will not require any authorization configured for other endpoints. - - `apm-server.rum.exclude_from_grouping` - : A regexp to be matched against a stacktrace frame’s `file_name`. If the regexp matches, the stacktrace frame is not used for calculating error groups. The default pattern excludes stacktrace frames that have a filename starting with `/webpack`. - - `apm-server.rum.library_pattern` - : A regexp to be matched against a stacktrace frame’s `file_name` and `abs_path` attributes. If the regexp matches, the stacktrace frame is considered to be a library frame. - - `apm-server.rum.source_mapping.enabled` - : If a source map has previously been uploaded, source mapping is automatically applied to all error and transaction documents sent to the RUM endpoint. Source mapping is enabled by default when RUM is enabled. - - `apm-server.rum.source_mapping.cache.expiration` - : The `cache.expiration` determines how long a source map should be cached in memory. Note that values configured without a time unit will be interpreted as seconds. - - `apm-server.sampling.tail.enabled` - : Set to `true` to enable tail-based sampling. Disabled by default. - - `apm-server.sampling.tail.policies` - : Criteria used to match a root transaction to a sample rate. - - `apm-server.sampling.tail.interval` - : Synchronization interval for multiple APM Servers. Should be in the order of tens of seconds or low minutes. - - `logging.level` - : Sets the minimum log level. The default log level is error. Available log levels are: error, warning, info, or debug. - - `logging.selectors` - : Enable debug output for selected components. To enable all selectors use ["*"]. Other available selectors are "beat", "publish", or "service". Multiple selectors can be chained.
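To make the 8.x options above concrete, here is a sketch of what a standalone APM user settings override might look like. All values, agent names, and the service name are illustrative assumptions, not recommendations; check each key against the reference entries in this section before using it:

```yaml
# Illustrative apm-server.yml user settings for an 8.x standalone APM Server.
# "my-frontend" and the origin below are placeholder examples.
apm-server.rum.enabled: true
apm-server.rum.allow_origins: ["https://*.example.com"]
apm-server.auth.anonymous.allow_service: ["my-frontend"]
apm-server.auth.anonymous.rate_limit.event_limit: 300
apm-server.auth.anonymous.rate_limit.ip_limit: 1000
logging.level: info
```

Paste a sketch like this into the **Edit user settings** box for APM; settings that are not supported by {{ech}} are rejected when you save.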
- -`logging.metrics.enabled` -: If enabled, apm-server periodically logs its internal metrics that have changed in the last period. For each metric that changed, the delta from the value at the beginning of the period is logged. Also, the total values for all non-zero internal metrics are logged on shutdown. The default is false. - -`logging.metrics.period` -: The period after which to log the internal metrics. The default is 30s. - -`max_procs` -: Sets the maximum number of CPUs that can be executing simultaneously. The default is the number of logical CPUs available in the system. - -`output.elasticsearch.flush_interval` -: The maximum duration to accumulate events for a bulk request before being flushed to Elasticsearch. The value must have a duration suffix. The default is 1s. - -`output.elasticsearch.flush_bytes` -: The bulk request size threshold, in bytes, before flushing to Elasticsearch. The value must have a suffix. The default is 5MB. - - -### Version 7.17+ [ec_version_7_17] - -This stack version includes all of the settings from 7.16 and the following: - -Allow anonymous access only for specified agents and/or services. This is primarily intended to allow limited access for untrusted agents, such as Real User Monitoring. Anonymous auth is automatically enabled when RUM is enabled. Otherwise, anonymous auth is disabled. When anonymous auth is enabled, only agents matching allow_agent and services matching allow_service are allowed. See below for details on default values for these. - -`apm-server.auth.anonymous.allow_agent` -: Allow anonymous access only for specified agents. - -`apm-server.auth.anonymous.allow_service` -: Allow anonymous access only for specified service names. By default, all service names are allowed. This will be replacing the config option `apm-server.rum.allow_service_names` from `8.0` on. 
- - `apm-server.auth.anonymous.rate_limit.event_limit` - : Defines the maximum number of events allowed per IP per second. Defaults to 300. This will be replacing the config option `apm-server.rum.event_rate.limit` from `8.0` on. - - `apm-server.auth.anonymous.rate_limit.ip_limit` - : Defines the number of unique client IP addresses for which rate limits are tracked, using an LRU cache. Sites with many concurrent clients should consider increasing this limit. Defaults to 1000. The overall maximum event throughput for anonymous access is (event_limit * ip_limit). This will be replacing the config option `apm-server.rum.event_rate.lru_size` from `8.0` on. - - `apm-server.auth.api_key.enabled` - : Enables agent authorization using Elasticsearch API Keys. This will be replacing the config option `apm-server.api_key.enabled` from `8.0` on. - - `apm-server.auth.api_key.limit` - : Restrict how many unique API keys are allowed per minute. Should be set to at least the number of different API keys configured in your monitored services. Every unique API key triggers one request to Elasticsearch. This will be replacing the config option `apm-server.api_key.limit` from `8.0` on. - - ### Supported versions before 8.x [ec_supported_versions_before_8_x_3] - - `apm-server.aggregation.transactions.*` - : This functionality is experimental and may be changed or removed completely in a future release. When enabled, APM Server produces transaction histogram metrics that are used to power the APM app. Shifting this responsibility from APM app to APM Server results in improved query performance and removes the need to store unsampled transactions. - - `apm-server.default_service_environment` - : If specified, APM Server will record this value in events which have no service environment defined, and add it to agent configuration queries to Kibana when none is specified in the request from the agent.
- - `apm-server.rum.allow_service_names` - : A list of service names to allow, to limit service-specific indices and data streams created for unauthenticated RUM events. If the list is empty, any service name is allowed. - - `apm-server.rum.allow_headers` - : List of Access-Control-Allow-Headers to allow RUM requests, in addition to "Content-Type", "Content-Encoding", and "Accept". - - `setup.template.append_fields` - : A list of fields to be added to the Elasticsearch template and Kibana data view (formerly *index pattern*). - - `apm-server.api_key.enabled` - : Enabled by default. For any requests where APM Server accepts a `secret_token` in the authorization header, it now alternatively accepts an API Key. - - `apm-server.api_key.limit` - : Configure how many unique API keys are allowed per minute. Should be set to at least the number of different API keys used in monitored services. Default value is 100. - - `apm-server.ilm.setup.enabled` - : When enabled, APM Server creates aliases, event type specific settings and ILM policies. If disabled, event type specific templates need to be managed manually. - - `apm-server.ilm.setup.overwrite` - : Set to `true` to apply custom policies and to properly overwrite templates when switching between using ILM and not using ILM. - - `apm-server.ilm.setup.require_policy` - : Set to `false` when policies are set up outside of APM Server but referenced in this configuration. - - `apm-server.ilm.setup.policies` - : Array of ILM policies. Each entry has a `name` and a `policy`. - - `apm-server.ilm.setup.mapping` - : Array of mappings of ILM policies to event types. Each entry has a `policy_name` and an `event_type`, which can be one of `span`, `transaction`, `error`, or `metric`. ILM policies support configurable index suffixes: you can append the `policy_name` with an `index_suffix` based on the `event_type`.
- -`apm-server.rum.source_mapping.enabled` -: When events are monitored using the RUM agent, APM Server tries to apply source mapping by default. This configuration option allows you to disable source mapping on stack traces. - -`apm-server.rum.source_mapping.cache.expiration` -: Sets how long a source map should be cached before being refetched from Elasticsearch. Default value is 5m. - -`output.elasticsearch.pipeline` -: APM comes with a default pipeline definition. This allows overriding it. To disable, you can set `pipeline: _none` - -`apm-server.agent.config.cache.expiration` -: When using APM agent configuration, determines cache expiration from information fetched from Kibana. Defaults to `30s`. - -`apm-server.ilm.enabled` -: Enables index lifecycle management (ILM) for the indices created by the APM Server. Defaults to `false`. If you’re updating an existing APM Server, you must also set `setup.template.overwrite: true`. If you don’t, the index template will not be overridden and ILM changes will not take effect. - -`apm-server.max_event_size` -: Specifies the maximum allowed size of an event for processing by the server, in bytes. Defaults to `307200`. - -`output.elasticsearch.pipelines` -: Adds an array for pipeline selector configurations that support conditionals, format string-based field access, and name mappings used to [parse data using ingest node pipelines](/solutions/observability/apps/application-performance-monitoring-apm.md). - -`apm-server.register.ingest.pipeline.enabled` -: Loads the pipeline definitions to Elasticsearch when the APM Server starts up. Defaults to `false`. - -`apm-server.register.ingest.pipeline.overwrite` -: Overwrites the existing pipeline definitions in Elasticsearch. Defaults to `true`. - -`apm-server.rum.event_rate.lru_size` -: Defines the number of unique IP addresses that can be tracked in the LRU cache, which keeps a rate limit for each of the most recently seen IP addresses. Defaults to `1000`. 
- -`apm-server.rum.event_rate.limit` -: Sets the rate limit per second for each IP address for events sent to the APM Server v2 RUM endpoint. Defaults to `300`. - -`apm-server.rum.enabled` -: Enables/disables Real User Monitoring (RUM) support. Defaults to `true` (enabled). - -`apm-server.rum.allow_origins` -: Specifies a list of permitted origins from user agents. The default is `*`, which allows everything. - -`apm-server.rum.library_pattern` -: Differentiates library frames against specific attributes. Refer to "Configure Real User Monitoring (RUM)" in the [Observability Guide](https://www.elastic.co/guide/en/observability/current) to learn more. The default value is `"node_modules|bower_components|~"`. - -`apm-server.rum.exclude_from_grouping` -: Configures the RegExp to be matched against a stacktrace frame’s `file_name`. - -`apm-server.rum.rate_limit` -: Sets the rate limit per second for each IP address for requests sent to the RUM endpoint. Defaults to `10`. - -`apm-server.capture_personal_data` -: When set to `true`, the server captures the IP of the instrumented service and its User Agent. Enabled by default. - -`setup.template.settings.index.number_of_shards` -: Specifies the number of shards for the Elasticsearch template. - -`setup.template.settings.index.number_of_replicas` -: Specifies the number of replicas for the Elasticsearch template. - -`apm-server.frontend.enabled` -: Enables/disables frontend support. - -`apm-server.frontend.allow_origins` -: Specifies the comma-separated list of permitted origins from user agents. The default is `*`, which allows everything. - -`apm-server.frontend.library_pattern` -: Differentiates library frames against [specific attributes](https://www.elastic.co/guide/en/apm/server/6.3/configuration-frontend.html). The default value is `"node_modules|bower_components|~"`. - -`apm-server.frontend.exclude_from_grouping` -: Configures the RegExp to be matched against a stacktrace frame’s `file_name`. 
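As a hedged sketch, several of the pre-8.0 RUM and template options above could be combined in a single user settings override like the following (all values are illustrative assumptions, not tuning recommendations):

```yaml
# Illustrative apm-server.yml user settings for a pre-8.0 standalone APM Server.
# The origin below is a placeholder example.
apm-server.rum.enabled: true
apm-server.rum.allow_origins: ["https://app.example.com"]
apm-server.rum.event_rate.limit: 300
apm-server.rum.event_rate.lru_size: 1000
setup.template.settings.index.number_of_shards: 1
setup.template.settings.index.number_of_replicas: 1
```

As with the 8.x settings, anything outside the supported list is rejected when you save.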
- - `apm-server.frontend.rate_limit` - : Sets the rate limit per second per IP address for requests sent to the frontend endpoint. Defaults to `10`. - - `max_procs` - : Max number of CPUs used simultaneously. Defaults to the number of logical CPUs available. - - `setup.template.enabled` - : Set to false to disable loading of Elasticsearch templates used for APM indices. If set to false, you must load the template manually. - - `setup.template.name` - : Name of the template. Defaults to `apm-server`. - - `setup.template.pattern` - : The template pattern to apply to the default index settings. Default is `apm-*`. - - `output.elasticsearch.bulk_max_size` - : Maximum number of events to bulk together in a single Elasticsearch bulk API request. By default, this number changes based on the size of the instance: - - | Instance size | Default max events | - | --- | --- | - | 512MB | 267 | - | 1GB | 381 | - | 2GB | 533 | - | 4GB | 762 | - | 8GB | 1067 | - - `output.elasticsearch.indices` - : Array of index selector rules supporting conditionals and formatted string. - - `output.elasticsearch.index` - : The index to write the events to. If changed, `setup.template.name` and `setup.template.pattern` must be changed accordingly. - - `output.elasticsearch.worker` - : Maximum number of concurrent workers publishing events to Elasticsearch.
By default, this number changes based on the size of the instance: - - | Instance size | Default max concurrent workers | - | --- | --- | - | 512MB | 5 | - | 1GB | 7 | - | 2GB | 10 | - | 4GB | 14 | - | 8GB | 20 | - - `queue.mem.events` - : Maximum number of events to concurrently store in the internal queue. By default, this number changes based on the size of the instance: - - | Instance size | Default max events | - | --- | --- | - | 512MB | 2000 | - | 1GB | 4000 | - | 2GB | 8000 | - | 4GB | 16000 | - | 8GB | 32000 | - - `queue.mem.flush.min_events` - : Minimum number of events to have before pushing them to Elasticsearch. By default, this number changes based on the size of the instance. - - `queue.mem.flush.timeout` - : Maximum duration before sending the events to the output if `min_events` is not reached. - - ### Logging settings [ec_logging_settings] - - `logging.level` - : Specifies the minimum log level. One of *debug*, *info*, *warning*, or *error*. Defaults to *info*. - - `logging.selectors` - : The list of debugging-only selector tags used by different APM Server components. Use `*` to enable debug output for all components. For example, add *publish* to display all the debug messages related to event publishing. - - `logging.metrics.enabled` - : If enabled, APM Server periodically logs its internal metrics that have changed in the last period. Defaults to *true*. - - `logging.metrics.period` - : The period after which to log the internal metrics. Defaults to *30s*. - - ::::{note} - To change logging settings, you must first [enable deployment logging](../../../deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md).
-:::: - - - - diff --git a/raw-migrated-files/cloud/cloud/ec-manage-appsearch-settings.md b/raw-migrated-files/cloud/cloud/ec-manage-appsearch-settings.md deleted file mode 100644 index f74c36e76..000000000 --- a/raw-migrated-files/cloud/cloud/ec-manage-appsearch-settings.md +++ /dev/null @@ -1,37 +0,0 @@ -# Add App Search user settings [ec-manage-appsearch-settings] - -Change how App Search runs by providing your own user settings. User settings are appended to the `app-search.yml` configuration file for your instance and provide custom configuration options. - -::::{tip} -Some settings that could break your cluster if set incorrectly are blocked. Review the [list of settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md#ec-appsearch-settings) that are generally safe in cloud environments. For detailed information about App Search settings, check the [App Search documentation](https://swiftype.com/documentation/app-search/self-managed/configuration). -:::: - - -To add user settings: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From your deployment menu, go to the **Edit** page. -4. At the bottom of the **App Search** section, expand the **User settings overrides** caret. -5. Update the user settings. -6. Select **Save changes**. - -::::{note} -If a setting is not supported by {{ech}}, you get an error message when you try to save. -:::: - - - -## Supported App Search settings [ec-appsearch-settings] - -{{ech}} supports the following App Search settings. 
- -`app_search.auth.source` -: The origin of authenticated App Search users. Options are `standard`, `elasticsearch-native`, and `elasticsearch-saml`. - -`app_search.auth.name` -: (SAML only) Name of the realm within the Elasticsearch realm chain. - diff --git a/raw-migrated-files/cloud/cloud/ec-manage-enterprise-search-settings.md b/raw-migrated-files/cloud/cloud/ec-manage-enterprise-search-settings.md deleted file mode 100644 index 28c7c169a..000000000 --- a/raw-migrated-files/cloud/cloud/ec-manage-enterprise-search-settings.md +++ /dev/null @@ -1,27 +0,0 @@ -# Add Enterprise Search user settings [ec-manage-enterprise-search-settings] - -:::{important} -Enterprise Search is not available in {{stack}} 9.0+. -::: - -Change how Enterprise Search runs by providing your own user settings. User settings are appended to the `ent-search.yml` configuration file for your instance and provide custom configuration options. - -Refer to the [Configuration settings reference](https://www.elastic.co/guide/en/enterprise-search/current/configuration.html#configuration-file) in the Enterprise Search documentation for a full list of configuration settings. Settings supported on {{ech}} are indicated by an {{ecloud}} icon (![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ecloud}}")). Be sure to refer to the documentation version that matches the Elastic Stack version used in your deployment. - -To add user settings: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 
-3. From your deployment menu, go to the **Edit** page. -4. In the **Enterprise Search** section, select **Edit user settings**. For deployments with existing user settings, you may have to expand the **Edit enterprise-search.yml** caret instead. -5. Update the user settings. -6. Select **Save changes**. - -::::{note} -If a setting is not supported by {{ech}}, an error message displays when you try to save your settings. -:::: - - diff --git a/raw-migrated-files/cloud/cloud/ec-manage-kibana-settings.md b/raw-migrated-files/cloud/cloud/ec-manage-kibana-settings.md deleted file mode 100644 index 712c7e703..000000000 --- a/raw-migrated-files/cloud/cloud/ec-manage-kibana-settings.md +++ /dev/null @@ -1,936 +0,0 @@ -# Edit Kibana user settings [ec-manage-kibana-settings] - -{{ech}} supports most of the standard Kibana and X-Pack settings. Through a YAML editor in the console, you can append Kibana properties to the `kibana.yml` file. Your changes to the configuration file are read on startup. - -::::{important} -Be aware that some settings could break your cluster if set incorrectly, and that the syntax might change between major versions. Before upgrading, be sure to review the full list of the [latest Kibana settings and syntax](kibana://reference/configuration-reference/general-settings.md). -:::: - - -To change Kibana settings: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From your deployment menu, go to the **Edit** page. -4. 
In the **Kibana** section, select **Edit user settings**. (For deployments with existing user settings, you may have to expand the **Edit kibana.yml** caret instead.) -5. Update the user settings. -6. Select **Save changes**. - -Saving your changes initiates a configuration plan change that restarts Kibana automatically for you. - -::::{note} -If a setting is not supported by {{ech}}, you will get an error message when you try to save. -:::: - - -## Supported Kibana settings [ec-kibana-config] - -### Version 8.12.0+ [ec_version_8_12_0] - -`xpack.reporting.csv.maxConcurrentShardRequests` -: Sets the maximum number of concurrent shard requests that each sub-search request executes per node during Kibana CSV export. Defaults to `5`. - - -### Version 8.11.0+ [ec_version_8_11_0] - -`xpack.action.queued.max` -: Specifies the maximum number of actions that can be queued. Defaults to `1000000`. - - -### Version 8.9.0+ [ec_version_8_9_0] - -`xpack.fleet.createArtifactsBulkBatchSize` -: Allows you to configure the batch size for creating and updating Fleet user artifacts. Examples include creation of Trusted Applications and Endpoint Exceptions in Security. To learn more, check [Fleet settings in Kibana](kibana://reference/configuration-reference/fleet-settings.md). - -`xpack.securitySolution.maxUploadResponseActionFileBytes` -: Allows you to configure the maximum file upload size for use with the Upload File Response action available with the Defend Integration. To learn more, check [Endpoint Response actions](/solutions/security/endpoint-response-actions.md). - - -### Version 8.7.0+ [ec_version_8_7_0] - -`xpack.security.session.concurrentSessions.maxSessions` -: Set the maximum number of sessions each user is allowed to have active in {{kib}}. By default, no limit is applied. If set, the value of this option should be an integer between 1 and 1000. When the limit is exceeded, the oldest session is automatically invalidated. 
To learn more, check [Session management](/deploy-manage/security/kibana-session-management.md#session-max-sessions). - -`server.securityResponseHeaders.crossOriginOpenerPolicy` -: Controls whether the [`Cross-Origin-Opener-Policy`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cross-Origin-Opener-Policy) header is used in all responses to the client from the Kibana server. To learn more, see [Configure Kibana](kibana://reference/configuration-reference/general-settings.md#server-securityresponseheaders-crossoriginopenerpolicy). - - -### Version 8.6.0+ [ec_version_8_6_0] - -`server.compression.brotli.enabled` -: Enables the brotli compression format for browser-server communications. Default: false. To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md). - -`xpack.fleet.enableExperimental` -: Allows you to enable experimental features for Fleet. To learn more, check [Fleet settings in Kibana](kibana://reference/configuration-reference/fleet-settings.md). - - -### Version 8.4.0+ [ec_version_8_4_0] - -`migrations.discardUnknownObjects` -: Discard saved objects with unknown types during a migration. Must be set to the target version, e.g.: `8.4.0`. Default: undefined. To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md). - -`migrations.discardCorruptObjects` -: Discard corrupt saved objects, as well as those that cause transform errors during a migration. Must be set to the target version, e.g.: `8.4.0`. Default: undefined. To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md). - - -### Version 8.3.0+ [ec_version_8_3_0] - -`elasticsearch.compression` -: Enable compression for communications with Elasticsearch. Default: false. To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md). 
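Settings like the ones above are added as plain YAML key-value pairs in the user settings editor. As an illustrative sketch only (the values shown are examples, not recommendations):

```yaml
# Enable compression for Kibana-to-Elasticsearch traffic (8.3.0+)
elasticsearch.compression: true

# Discard saved objects with unknown types during an upgrade migration;
# the value must match the migration's target version (8.4.0+)
migrations.discardUnknownObjects: "8.4.0"
```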
- - -### Version 8.2.0+ [ec_version_8_2_0] - -`elasticsearch.maxSockets` -: The maximum number of sockets that can be used for communications with Elasticsearch. Default: Infinity. To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md). - - -### Version 8.1.0+ [ec_version_8_1_0] - -`execution_context.enabled` -: Propagates request-specific metadata to the Elasticsearch server by way of the `x-opaque-id` header. To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md). - - -### Supported versions before 8.x [ec_supported_versions_before_8_x] - -`vis_type_table.legacyVisEnabled` -: For 7.x versions 7.11 and higher, a new version of the datatable visualization is used. Set to `true` to enable the legacy version. In version 8.0, the old implementation is removed and this setting is no longer supported. - -`vega.enableExternalUrls` -: Set to `true` to allow Vega visualizations to use data from sources other than the linked Elasticsearch cluster. In stack version 8.0 and above, `vega.enableExternalUrls` is not supported. Use `vis_type_vega.enableExternalUrls` instead. - - -### All supported versions [ec_all_supported_versions_2] - -`migrations.maxBatchSizeBytes` -: Defines the maximum payload size for indexing batches of saved objects during upgrade migrations. To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md). - -`server.maxPayload` -: The maximum payload size in bytes for incoming server requests. Default: 1048576. To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md#server-maxpayload). - -`server.securityResponseHeaders.strictTransportSecurity` -: Controls whether the [`Strict-Transport-Security`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security) header is used in all responses to the client from the Kibana server. 
To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md#server-securityresponseheaders-stricttransportsecurity). - -`server.securityResponseHeaders.xContentTypeOptions` -: Controls whether the [`X-Content-Type-Options`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options) header is used in all responses to the client from the Kibana server. To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md#server-securityresponseheaders-xcontenttypeoptions). - -`server.securityResponseHeaders.referrerPolicy` -: Controls whether the [`Referrer-Policy`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy) header is used in all responses to the client from the Kibana server. To learn more, see [Configure Kibana](kibana://reference/configuration-reference/general-settings.md#server-securityresponseheaders-referrerpolicy). - -`server.securityResponseHeaders.permissionsPolicy` -: Controls whether the `Permissions-Policy` header is used in all responses to the client from the Kibana server. To learn more, see [Configure Kibana](kibana://reference/configuration-reference/general-settings.md#server-securityresponseheaders-permissionspolicy). - -`server.securityResponseHeaders.permissionsPolicyReportOnly` -: Controls whether the `Permissions-Policy-Report-Only` header is used in all responses to the client from the Kibana server. To learn more, see [Configure Kibana](kibana://reference/configuration-reference/general-settings.md#server-securityresponseheaders-permissionspolicy). - -`server.securityResponseHeaders.disableEmbedding` -: Controls whether the [`Content-Security-Policy`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy) and [`X-Frame-Options`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options) headers are configured to disable embedding Kibana in other webpages using iframes. 
To learn more, see [Configure Kibana](kibana://reference/configuration-reference/general-settings.md#server-securityresponseheaders-disableembedding). - -`data.autocomplete.valueSuggestions.timeout` -: Specifies the time in milliseconds to wait for autocomplete suggestions from Elasticsearch. The default is 1000. Allowed values are between 1 and 1200000. To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md). - -`data.autocomplete.valueSuggestions.terminateAfter` -: Specifies the max number of documents loaded by each shard to generate autocomplete suggestions. The default is 100000. Allowed values are between 1 and 10000000. To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md). - -`map.tilemap.options.attribution` -: Adds the map attribution string. - -`map.tilemap.options.maxZoom` -: Sets the maximum zoom level. - -`map.tilemap.options.minZoom` -: Sets the minimum zoom level. - -`map.tilemap.options.subdomains` -: Provides an array of subdomains used by the tile service. Specify the position of the subdomain in the URL with the token `{s}`. - -`map.tilemap.url` -: Lists the URL to the tile service that Kibana uses to display map tiles in tilemap visualizations. - -`i18n.locale` -: Specifies the locale for all strings, dates, and number formats that can be localized. Defaults to `en` (English). - -`migrations.batchSize` -: Defines the number of documents migrated at a time during saved object upgrade migrations. To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md). - -`server.defaultRoute` -: Specifies the default route when opening Kibana. You can use this setting to modify the landing page when opening Kibana. - -`server.customResponseHeaders` -: Specifies HTTP header names and values that the Kibana backend will return to the client. 
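As a hedged illustration of how the response-header settings above might look in `kibana.yml` (the header values and the custom header name are examples, not recommendations):

```yaml
# Illustrative security response headers
server.securityResponseHeaders.strictTransportSecurity: "max-age=31536000; includeSubDomains"
server.securityResponseHeaders.referrerPolicy: no-referrer

# Hypothetical custom header returned by the Kibana backend
server.customResponseHeaders:
  X-Deployment-Stage: "production"
```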
- -#### Map settings [ec_map_settings] - -`map.regionmap:` -: Specifies additional vector layers for use in [Region Map](https://www.elastic.co/guide/en/kibana/7.5/visualize-maps.html#region-map) visualizations. Each layer object points to an external vector file that contains a geojson FeatureCollection. The file must use the [WGS84 coordinate reference system](https://en.wikipedia.org/wiki/World_Geodetic_System) and only include polygons. If the file is hosted on a separate domain from Kibana, the server needs to be CORS-enabled so Kibana can download the file. The following example shows a valid regionmap configuration. - - ```yaml - map.regionmap: - includeElasticMapsService: false - layers: - - name: "Departments of France" - url: "http://my.cors.enabled.server.org/france_departements.geojson" - attribution: "INRAP" - fields: - - name: "department" - description: "Full department name" - - name: "INSEE" - description: "INSEE numeric identifier" - ``` - - -`map.regionmap.includeElasticMapsService:` -: Controls whether layers from the Elastic Maps Service are included in the vector layer option list. Supported on Elastic Cloud Enterprise. By turning this off, only the layers that are configured here will be included. The default is `true`. - -`map.regionmap.layers[].attribution:` -: Optional. References the originating source of the geojson file. - -`map.regionmap.layers[].fields[]:` -: Mandatory. Each layer can contain multiple fields to indicate what properties from the geojson features you wish to expose. The previous example shows how to define multiple properties. - -`map.regionmap.layers[].fields[].description:` -: Mandatory. The human-readable text that is shown under the Options tab when building the Region Map visualization. - -`map.regionmap.layers[].fields[].name:` -: Mandatory. This value is used to do an inner-join between the document stored in Elasticsearch and the geojson file. 
For example, if the field in the geojson is called `Location` and has city names, there must be a field in Elasticsearch that holds the same values that Kibana can then use to look up the geoshape data. - -`map.regionmap.layers[].name:` -: Mandatory. A description of the map being provided. - -`map.regionmap.layers[].url:` -: Mandatory. The location of the geojson file as provided by a webserver. - -`tilemap.options.attribution` -: Adds the map attribution string. - -`tilemap.options.maxZoom` -: Sets the maximum zoom level. - -`tilemap.options.minZoom` -: Sets the minimum zoom level. - -`tilemap.options.subdomains` -: Provides an array of subdomains used by the tile service. Specify the position of the subdomain in the URL with the token `{s}`. - -`tilemap.url` -: Lists the URL to the tile service that Kibana uses to display map tiles in tilemap visualizations. - - - -### SAML settings [ec_saml_settings] - -If you are using SAML to secure your clusters, these settings are supported in {{ech}}. - -To learn more, refer to [configuring Kibana to use SAML](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-configure-kibana). - -#### Version 8.0.0+ [ec_version_8_0_0] - -The following additional setting is supported: - -`server.xsrf.allowlist` -: Allows the SAML authentication URL within Kibana, so that the Kibana server doesn’t reject external authentication messages that originate from your Identity Provider. - - -#### All supported versions [ec_all_supported_versions_3] - -`xpack.security.authc.providers.saml..useRelayStateDeepLink` -: Specifies if Kibana should treat the `RelayState` parameter as a deep link when the Identity Provider Initiated login flow is used. - -`xpack.security.authc.providers.saml..order` -: Specifies the order of the SAML authentication provider in the authentication chain. - -`xpack.security.authc.providers.saml..realm` -: Specifies which SAML realm in Elasticsearch should be used. 
- -`xpack.security.authc.providers.saml..maxRedirectURLSize` -: Specifies the maximum size of the URL that Kibana is allowed to store during the SAML handshake. - -`xpack.security.authc.providers.saml..description` -: Specifies how SAML login should be titled in the Login Selector UI. - -`xpack.security.authc.saml.maxRedirectURLSize` -: Specifies the maximum size of the URL that Kibana is allowed to store during the SAML handshake. - -`xpack.security.authc.saml.realm` -: Specifies which SAML realm in Elasticsearch should be used. - -`xpack.security.authc.providers` -: Specifies which authentication providers Kibana should use. - - -#### All supported versions before 8.x [ec_all_supported_versions_before_8_x] - -`xpack.security.authProviders` -: Set to `saml` to instruct Kibana to use SAML SSO as the authentication method. - -`xpack.security.public.protocol` -: Set to HTTP or HTTPS. To access Kibana, the HTTPS protocol is recommended. - -`xpack.security.public.hostname` -: Set to a fully qualified hostname to connect your users to the proxy server. - -`xpack.security.public.port` -: The port number that connects your users to the proxy server (for example, 80 for HTTP or 443 for HTTPS). - -`xpack.security.authc.saml.useRelayStateDeepLink` -: Specifies if Kibana should treat the `RelayState` parameter as a deep link when the Identity Provider Initiated login flow is used. - -`server.xsrf.whitelist` -: Explicitly allows the SAML authentication URL within Kibana, so that the Kibana server doesn’t reject external authentication messages that originate from your Identity Provider. This setting is renamed to `server.xsrf.allowlist` in version 8.0.0. - - - -### OpenID Connect [ec_openid_connect] - -If you are using OpenID Connect to secure your clusters, these settings are supported in {{ech}}. - -`xpack.security.authc.providers.oidc..order` -: Specifies the order of the OpenID Connect authentication provider in the authentication chain. 
- -`xpack.security.authc.providers.oidc..realm` -: Specifies which OpenID Connect realm in Elasticsearch should be used. - -`xpack.security.authc.providers.oidc..description` -: Specifies how OpenID Connect login should be titled in the Login Selector UI. - -`xpack.security.authc.oidc.realm` -: Specifies which OpenID Connect realm in Elasticsearch should be used. - -To learn more, check [configuring Kibana to use OpenID Connect](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md). - - -### Anonymous authentication [ec_anonymous_authentication] - -If you want to allow anonymous authentication in Kibana, these settings are supported in {{ech}}. To learn more about how to enable anonymous access, check [Enabling anonymous access](/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md) and [Configuring Kibana to use anonymous authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md#anonymous-authentication). - -#### Supported versions before 8.0.0 [ec_supported_versions_before_8_0_0] - -`xpack.security.sessionTimeout` -: Specifies the session duration in milliseconds. Allows a value between 15000 (15 seconds) and 86400000 (1 day). To learn more, check [Security settings in Kibana](kibana://reference/configuration-reference/security-settings.md). Deprecated in versions 7.6+ and removed in versions 8.0+. - - -#### All supported versions [ec_all_supported_versions_4] - -`xpack.security.authc.anonymous.*` -: Enables access for the `anonymous` user. In versions prior to 7.10, anonymous access is enabled by default, but you can add this setting if you want to avoid anonymous access being disabled accidentally by a subsequent upgrade. - -`xpack.security.authc.providers.anonymous..order` -: Specifies the order of the anonymous authentication provider in the authentication chain. - -`xpack.security.authc.providers.anonymous..credentials` -: Specifies which credentials Kibana should use for anonymous users. 
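Putting the anonymous provider settings above together, a minimal sketch of a `kibana.yml` fragment (the provider names and credentials are illustrative placeholders, not real accounts):

```yaml
xpack.security.authc.providers:
  anonymous.anonymous1:
    order: 0
    credentials:
      # Hypothetical service account used for anonymous sessions
      username: "anonymous_service_account"
      password: "anonymous_service_account_password"
  # Keep a basic login provider available as well
  basic.basic1:
    order: 1
```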
- - - - -## X-Pack configuration settings [ec-xpack-config] - -You can configure the following X-Pack settings from the Kibana **User Settings** editor. - -### Version 8.18+ [ec_version_8_18] - -`xpack.fleet.enableManagedLogsAndMetricsDataviews` -: Allows you to disable the automatic creation of the global data views `logs-*` and `metrics-*`. - - -### Version 8.16+ [ec_version_8_16] - -`xpack.task_manager.capacity` -: Controls the number of tasks that can be run at one time. The minimum is 5 and the maximum is 50. Default: 10. - - -### Version 8.8+ [ec_version_8_8] - -`xpack.cases.files.allowedMimeTypes` -: The MIME types that you can attach to a case, represented in an array of strings. For example: `['image/tiff','text/csv','application/zip']`. The default MIME types are specified in [mime_types.ts](https://github.com/elastic/kibana/blob/8.16/x-pack/plugins/cases/common/constants/mime_types.ts). - -`xpack.cases.files.maxSize` -: The size limit for files that you can attach to a case, represented as the number of bytes. By default, the limit is 10 MiB for images and 100 MiB for all other MIME types. If you specify a value for this setting, it affects all file types. - -`xpack.actions.enableFooterInEmail` -: A boolean value indicating that a footer with a relevant link should be added to emails sent as alerting actions. Default: true. - - -### Version 8.7+ [ec_version_8_7] - -`xpack.actions.run.maxAttempts` -: Specifies the maximum number of times an action can be attempted to run. The minimum is 1 and the maximum is 10. - -`xpack.actions.run.connectorTypeOverrides` -: Overrides the settings under xpack.actions.run for a connector type with the given ID. For example: `id: '.server-log', maxAttempts: 5`. - - -### Version 8.6+ [ec_version_8_6] - -`xpack.task_manager.monitored_stats_health_verbose_log.level` -: Set to `info` for Task Manager to log the health monitoring stats at info level instead of `debug`. Default: `debug`. 
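For example, the 8.16+ and 8.8+ settings above could be combined in the user settings editor like this (the values are illustrative, not recommendations):

```yaml
# Raise Task Manager concurrency from the default of 10 (valid range 5-50)
xpack.task_manager.capacity: 20

# Restrict case attachments to a few MIME types and cap their size
xpack.cases.files.allowedMimeTypes: ['image/tiff', 'text/csv', 'application/zip']
xpack.cases.files.maxSize: 52428800   # 50 MiB, expressed in bytes
```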
- - -### Version 8.5+ [ec_version_8_5] - -`xpack.security.accessAgreement.message` -: This setting specifies the access agreement text in Markdown format that will be used as the default access agreement for all providers that do not specify a value for `xpack.security.authc.providers...accessAgreement.message`. - -`xpack.alerting.rules.run.alerts.max` -: Sets the maximum number of alerts that a rule can generate each time detection checks run. Defaults to `1000`. - - -### Version 8.3+ [ec_version_8_3] - -`xpack.cloudSecurityPosture.enabled` -: Enables the Kibana UI for Elastic’s Cloud Security Posture solution. The solution provides audit & compliance checks on Cloud & Kubernetes environments. Defaults to `false`. - -`xpack.alerting.rules.run.actions.connectorTypeOverrides` -: Overrides the settings under xpack.alerting.rules.run.actions for a connector type with the given ID. For example: `id: '.server-log', max: 1000`. - - -### Version 8.2+ [ec_version_8_2] - -`xpack.alerting.rules.minimumScheduleInterval.value` -: Specifies the minimum schedule interval for rules. This minimum is applied to all rules created or updated after you set this value. Defaults to `1m`. - -`xpack.alerting.rules.minimumScheduleInterval.enforce` -: Specifies the behavior when a new or changed rule has a schedule interval less than the value defined in `xpack.alerting.rules.minimumScheduleInterval.value`. If `false`, rules with schedules less than the interval will be created but warnings will be logged. If `true`, rules with schedules less than the interval cannot be created. Default: `false`. - -`xpack.alerting.rules.run.actions.max` -: Sets the maximum number of actions that a rule can trigger each time detection checks run (maximum `100000`). - -`xpack.alerting.rules.run.timeout` -: Specifies the default timeout for all rule type tasks. - -`xpack.alerting.rules.run.ruleTypeOverrides` -: Overrides the settings under xpack.alerting.rules.run for a rule type with the given ID, e.g. 
(id:'index-threshold', timeout:'5m'). - -#### Version 8.1+ [ec_version_8_1] - -`xpack.alerting.cancelAlertsOnRuleTimeout` -: Set to `false` to enable writing alerts and scheduling actions even if rule execution is cancelled due to timeout. Defaults to `true`. - - - -### Version 8.0+ [ec_version_8_0] - -`xpack.endpoint.enabled` -: Set to `true` to enable the Endpoint application. - -`xpack.fleet.enabled` -: Set to `false` to disable the Fleet application, which also provides the EPM and Agents features. For details about using this application, check the blog post [Easier data onboarding with Elastic Agent and Ingest Manager](https://www.elastic.co/blog/introducing-elastic-agent-and-ingest-manager). - -`xpack.fleet.agents.enabled` -: Set to `false` to disable the Agents API & UI. - -`xpack.ruleRegistry.write.disabledRegistrationContexts` -: Specifies the observability alert indices to which writing is disabled. Data type is array. Allowed values are: [ *observability.logs*,*observability.metrics*,*observability.apm*,*observability.uptime* ] - - -### Version 7.17.4+, 8.3+ [ec_version_7_17_4_8_3] - -`xpack.actions.email.domain_allowlist` -: A list of allowed email domains that can be used with the email connector. When this setting is not used, all email domains are allowed. When this setting is used, any email that (a) includes an addressee with an email domain that is not in the allowlist, or (b) includes a from address domain that is not in the allowlist, fails with a message indicating the email is not allowed. - -::::{note} -This setting is not available in versions 8.0.0 through 8.2.0. As such, this setting should be removed before upgrading from 7.17 to 8.0, 8.1 or 8.2. It is possible to configure the settings in 7.17.4 and then upgrade to 8.3.0 directly. 
-:::: - - - -### Version 7.17.2+, 8.2+ [ec_version_7_17_2_8_2] - -`xpack.task_manager.event_loop_delay.monitor` -: Enables event loop delay monitoring, which will log a warning when a task causes an event loop delay that exceeds the `warn_threshold` setting. Defaults to true. - - ::::{note} - This setting is not available in versions 8.0.0 through 8.1.1. - :::: - - -`xpack.task_manager.event_loop_delay.warn_threshold` -: Sets the amount of event loop delay during a task execution which will cause a warning to be logged. Defaults to 5000 milliseconds (5 seconds). - - ::::{note} - This setting is not available in versions 8.0.0 through 8.1.1. As such, this setting should be removed before upgrading from 7.17 to 8.0 or 8.1.0. It is possible to configure the settings in 7.17.2 and then upgrade to 8.2.0 directly. - :::: - - - -### All supported versions [ec_all_supported_versions_5] - -`xpack.alerting.defaultRuleTaskTimeout` -: Specifies the default timeout for all rule type tasks. Defaults to `5m`. Deprecated in versions 8.2+ and removed in {{stack}} 9.0+. - -`xpack.actions.microsoftGraphApiUrl` -: Specifies the URL to the Microsoft Graph server when using the MS Exchange Server email service. Defaults to `https://graph.microsoft.com/v1.0`. - -`xpack.alerting.maxEphemeralActionsPerAlert` -: Sets the number of actions that will be executed ephemerally. Defaults to `10`. - -`xpack.task_manager.ephemeral_tasks.enabled` -: Enables an experimental feature that executes a limited (and configurable) number of actions in the same task as the alert which triggered them. These action tasks reduce the time it takes for an action to run after it’s triggered, but are not persisted as SavedObjects. These non-persisted action tasks have a risk that they won’t be run at all if the Kibana instance running them exits unexpectedly. Defaults to `false`. - -`xpack.task_manager.ephemeral_tasks.request_capacity` -: Sets the size of the ephemeral queue. Defaults to `10`. 
- -`xpack.actions.customHostSettings` -: An array of objects, one per host, containing the SSL/TLS settings used when executing connectors that make HTTPS and SMTP connections to the host servers. For details about using this setting, check [Alerting and action settings in Kibana](kibana://reference/configuration-reference/alerting-settings.md). - -`xpack.actions.ssl.proxyVerificationMode` -: Controls the verification of the proxy server certificate that Kibana receives when making an outbound SSL/TLS connection to the host server. Valid values are `full`, `certificate`, and `none`. Use `full` to perform hostname verification, `certificate` to skip hostname verification, and `none` to skip verification. Default: `full`. - -`xpack.actions.ssl.verificationMode` -: Controls the verification of the server certificate that Kibana receives when making an outbound SSL/TLS connection to the host server. Valid values are `full`, `certificate`, and `none`. Use `full` to perform hostname verification, `certificate` to skip hostname verification, and `none` to skip verification. Default: `full`. - -`xpack.task_manager.monitored_stats_health_verbose_log.enabled` -: Enable to allow the Kibana task manager to log at either a warning or error log level if it self-detects performance issues. Default: `false`. - -`xpack.task_manager.monitored_stats_health_verbose_log.warn_delayed_task_start_in_seconds` -: The number of seconds a task is allowed to be delayed before a warning is logged to the server log. Default: `60`. - -`xpack.actions.preconfiguredAlertHistoryEsIndex` -: Set to `true` to enable the experimental Alert history Elasticsearch index connector. Default: `false`. - -`xpack.discoverEnhanced.actions.exploreDataInContextMenu.enabled` -: Set to `true` to enable the "explore underlying data" menu action in dashboards. Default: `false`. - -`xpack.actions.proxyBypassHosts` -: Specifies hostnames that should not use the proxy, if using a proxy for actions. 
The value is an array of hostnames as strings. By default, all hosts will use the proxy. The settings `xpack.actions.proxyBypassHosts` and `xpack.actions.proxyOnlyHosts` cannot be used at the same time.
-
-`xpack.actions.proxyOnlyHosts`
-: Specifies hostnames which should only be used with the proxy, if using a proxy for actions. The value is an array of hostnames as strings. By default, all hosts will use the proxy. The settings `xpack.actions.proxyBypassHosts` and `xpack.actions.proxyOnlyHosts` cannot be used at the same time.
-
-`xpack.actions.maxResponseContentLength`
-: Specifies the maximum number of bytes of the HTTP response for requests to external resources. Defaults to *1mb*.
-
-`xpack.actions.responseTimeout`
-: Specifies the time allowed for requests to external resources. Requests that take longer are aborted. The time is formatted as [ms|s|m|h|d|w|M|Y], for example, *20m*, *24h*, *7d*, *1w*. Defaults to *60s*.
-
-`xpack.task_manager.monitored_task_execution_thresholds`
-: Specifies the thresholds for failed task executions. If the percentage of failed executions exceeds the specified thresholds, the health of the task will be reported as configured. Can be specified at a default level or a custom level for specific task types. The following example shows a valid `monitored_task_execution_thresholds` configuration.
-
-    ```yaml
-    xpack.task_manager.monitored_task_execution_thresholds:
-      default:
-        error_threshold: 70
-        warn_threshold: 50
-      custom:
-        "alerting:.index-threshold":
-          error_threshold: 50
-          warn_threshold: 0
-    ```
-
-
-`xpack.task_manager.version_conflict_threshold`
-: Specifies the threshold for version conflicts. If the percentage of version conflicts exceeds the threshold, the task manager `poll_interval` will automatically be adjusted. Default: `80`.
-
-`xpack.actions.proxyUrl`
-: Specifies the proxy URL to use, if using a proxy for actions.
-
-`xpack.actions.proxyHeaders`
-: Specifies headers for proxy, if using a proxy for actions.
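-As an illustration, the proxy settings above might be combined in `kibana.yml` like this (the proxy URL, header value, and hostname are hypothetical):
-
-```yaml
-xpack.actions.proxyUrl: "http://proxy.example.com:8080"
-xpack.actions.proxyHeaders:
-  Proxy-Authorization: "Basic c2VjcmV0"
-# Hosts listed here connect directly, bypassing the proxy.
-# Do not combine proxyBypassHosts with proxyOnlyHosts; the two are mutually exclusive.
-xpack.actions.proxyBypassHosts: ["events.pagerduty.com"]
-```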
-
-`xpack.ingestManager.enabled`
-: Set to `false` to disable the Ingest Manager application, along with the EPM and Fleet features. For details about using this application, check the blog post [Easier data onboarding with Elastic Agent and Ingest Manager](https://www.elastic.co/blog/introducing-elastic-agent-and-ingest-manager).
-
-`xpack.ingestManager.fleet.enabled`
-: Set to `false` to disable the Fleet API & UI.
-
-`xpack.lists.maxImportPayloadBytes`
-: Sets the number of bytes (default `9000000`, maximum `100000000`) allowed for uploading Security Solution value lists. For every 10 megabytes, it is recommended to have an additional 1 gigabyte of RAM reserved for Kibana. For example, on a Kibana instance with 2 gigabytes of RAM, you can set this value up to 20000000 (20 megabytes).
-
-`xpack.lists.importBufferSize`
-: Sets the buffer size used for uploading Security Solution value lists (default `1000`). Change the value if you are experiencing slow upload speeds or larger than wanted memory usage when uploading value lists. Set to a higher value to increase throughput at the expense of using more Kibana memory, or a lower value to decrease throughput and reduce memory usage.
-
-`xpack.security.sameSiteCookies`
-: Sets the `SameSite` attribute of the `Set-Cookie` HTTP header. It allows you to declare whether your cookie should be restricted to a first-party or same-site context. **Not set** by default, which makes modern browsers treat it as `Lax`. If you use Kibana embedded in an iframe in modern browsers, you might need to set it to `None`. Note that `None` usage requires a secure context: `xpack.security.secureCookies: true`. Some old versions of IE11 do not support `SameSite: None`, so you should not specify `xpack.security.sameSiteCookies` at all.
-
-`xpack.ingestManager.enabled`
-: Set to `true` (default `false`) to enable the Ingest Manager application. Also enables the EPM and Fleet features.
For details about using this application, check the blog post [Easier data onboarding with Elastic Agent and Ingest Manager](https://www.elastic.co/blog/introducing-elastic-agent-and-ingest-manager).
-
-`xpack.ingestManager.epm.enabled`
-: Set to `true` (default) to enable the EPM API & UI.
-
-`xpack.ingestManager.fleet.enabled`
-: Set to `true` (default) to enable the Fleet API & UI.
-
-`xpack.task_manager.max_workers`
-: Specifies the maximum number of tasks a Kibana instance will run concurrently. Default: `10`. Deprecated in versions 8.16+.
-
-`xpack.task_manager.poll_interval`
-: Specifies how often, in milliseconds, a Kibana instance should check for more tasks. Default: `3000`.
-
-`xpack.eventLog.logEntries`
-: Set to `true` to enable logging event log documents from alerting to the Kibana log, in addition to being indexed into the event log index. Default: `false`.
-
-`xpack.security.session.idleTimeout`
-: Sets the session idle timeout. The format is a string of `count` and `unit`, where unit is one of `ms`, `s`, `m`, `h`, `d`, `w`, `M`, `Y`. For example, `70ms`, `5s`, `3d`, `1Y`. To learn more, check [Security settings in Kibana](kibana://reference/configuration-reference/security-settings.md).
-
-`xpack.security.session.lifespan`
-: Sets the maximum session duration, also known as the "absolute timeout". After this duration, the session expires even if it is not idle. To learn more, check [Security settings in Kibana](kibana://reference/configuration-reference/security-settings.md).
-
-`xpack.maps.showMapVisualizationTypes`
-: Set to `true` if you want to create new region map visualizations.
-
-`xpack.actions.allowedHosts`
-: Set to an array of host names which actions such as email, slack, pagerduty, and webhook can connect to. An element of `*` indicates any host can be connected to. An empty array indicates no hosts can be connected to. Default: `[ * ]`
-
-`xpack.actions.enabledActionTypes`
-: Set to an array of action types that are enabled.
An element of `*` indicates all action types registered are enabled. The action types provided by Kibana are: `.server-log`, `.slack`, `.email`, `.index`, `.pagerduty`, `.webhook`. Default: `[ * ]` - -`xpack.grokdebugger.enabled` -: Set to `true` (default) to enable the Grok Debugger. - -`xpack.graph.enabled` -: Set to `false` to disable X-Pack graph. - -`xpack.monitoring.cluster_alerts.email_notifications.email_address` -: When enabled, specifies the email address to receive cluster alert notifications. - -`xpack.monitoring.kibana.collection.interval` -: Controls [how often data samples are collected](elasticsearch://reference/elasticsearch/configuration-reference/monitoring-settings.md#monitoring-collection-settings). - -`xpack.monitoring.min_interval_seconds` -: Specifies the minimum number of seconds that a time bucket in a chart can represent. If you modify the `xpack.monitoring.kibana.collection.interval`, use the same value in this setting. - -`xpack.monitoring.ui.container.elasticsearch.enabled` -: For Elasticsearch clusters that run in containers, enables the `Node Listing` to display the `CPU utilization` based on the `Cgroup statistics`, and adds the `Cgroup CPU utilization` to the Node Overview page instead of the overall operating system CPU utilization. - -`xpack.ml.enabled` -: Set to true (default) to enable machine learning. - - If set to `false` in `kibana.yml`, the machine learning icon is hidden in this Kibana instance. If `xpack.ml.enabled` is set to `true` in `elasticsearch.yml`, however, you can still use the machine learning APIs. To disable machine learning entirely, check the [Elasticsearch Machine Learning Settings](elasticsearch://reference/elasticsearch/configuration-reference/machine-learning-settings.md). 
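-As an illustration, the `xpack.actions` allow-lists described above might look like this in `kibana.yml` (the hostnames are hypothetical):
-
-```yaml
-# Connectors may only reach these hosts; an empty array blocks all hosts.
-xpack.actions.allowedHosts: ["smtp.example.com", "hooks.slack.com"]
-# Only these connector types are enabled; "*" would enable all registered types.
-xpack.actions.enabledActionTypes: [".email", ".slack", ".webhook"]
-```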
- - -#### Content security policy configuration [ec_content_security_policy_configuration] - -`csp.script_src` -: Add sources for the [Content Security Policy `script-src` directive](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/script-src). When [`csp.strict`](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md#csp-strict) is `true`, `csp.script_src` may not be `unsafe-inline`. Rules may not contain `nonce-*` or `none` and will not override the defaults. **Default: [`'unsafe-eval'`, `'self'`]** - -`csp.worker_src` -: Add sources for the [Content Security Policy `worker-src` directive](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/worker-src). Rules may not contain `nonce-*` or `none` and will not override the defaults. **Default: [`blob:`, `'self'`]** - -`csp.style_src` -: Add sources for the [Content Security Policy `style-src` directive](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/style-src). Rules may not contain `nonce-*` or `none` and will not override the defaults. **Default: [`'unsafe-inline'`, `'self'`]** - -`csp.connect_src` -: Add sources for the [Content Security Policy `connect-src` directive](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/connect-src). - -`csp.default_src` -: Add sources for the [Content Security Policy `default-src` directive](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/default-src). - -`csp.font_src` -: Add sources for the [Content Security Policy `font-src` directive](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/font-src). - -`csp.frame_src` -: Add sources for the [Content Security Policy `frame-src` directive](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/frame-src). 
-
-`csp.img_src`
-: Add sources for the [Content Security Policy `img-src` directive](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/img-src).
-
-`csp.report_uri`
-: Add sources for the [Content Security Policy `report-uri` directive](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/report-uri).
-
-`csp.report_only.form_action`
-: Add sources for the [Content Security Policy `form-action` directive](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/form-action) in reporting mode.
-
-$$$csp-strict$$$ `csp.strict`
-: Blocks Kibana access to any browser that does not enforce even rudimentary CSP rules. In practice, this disables support for older, less safe browsers like Internet Explorer. **Default: `true`** To learn more, check [Configure Kibana](kibana://reference/configuration-reference/general-settings.md).
-
-`csp.warnLegacyBrowsers`
-: Shows a warning message after loading Kibana to any browser that does not enforce even rudimentary CSP rules, though Kibana is still accessible. This configuration is effectively ignored when [`csp.strict`](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md#csp-strict) is enabled. **Default: `true`**
-
-`csp.disableUnsafeEval`
-: [preview] Set this to `true` to remove the [`unsafe-eval`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/script-src#unsafe_eval_expressions) source expression from the `script-src` directive. **Default: `false`**
-
-    By enabling `csp.disableUnsafeEval`, Kibana will use a custom version of the Handlebars template library which doesn't support [inline partials](https://handlebarsjs.com/guide/partials.md#inline-partials). Handlebars is used in various locations in the Kibana frontend where custom templates can be supplied by the user, for instance when setting up a visualisation.
If you experience any issues rendering Handlebars templates after turning on `csp.disableUnsafeEval`, or if you rely on inline partials, please revert this setting to `false` and [open an issue](https://github.com/elastic/kibana/issues/new/choose) in the Kibana GitHub repository. - - - -#### Permissions policy configuration [ec_permissions_policy_configuration] - -`permissionsPolicy.report_to` -: Add sources for the permissions policy `report-to` directive. To learn more, see [Configure Kibana](kibana://reference/configuration-reference/general-settings.md#server-securityresponseheaders-permissionspolicy) - - -#### Banner settings [ec_banner_settings] - -Banners are disabled by default. You need to manually configure them in order to use the feature. - -`xpack.banners.placement` -: Set to `top` to display a banner above the Elastic header. Defaults to `disabled`. - -`xpack.banners.textContent` -: The text to display inside the banner, either plain text or Markdown. - -`xpack.banners.textColor` -: The color for the banner text. Defaults to `#8A6A0A`. - -`xpack.banners.backgroundColor` -: The color of the banner background. Defaults to `#FFF9E8`. - -`xpack.banners.disableSpaceBanners` -: If true, per-space banner overrides are disabled. Defaults to `false`. - - - - -## Reporting settings [ec_reporting_settings] - -### Version 8.13.0+ [ec_version_8_13_0] - -`xpack.reporting.csv.scroll.strategy` -: Choose the API method used to page through data during CSV export. Valid options are `scroll` and `pit`. Defaults to `pit`. - -::::{note} -Each method has its own unique limitations which are important to understand. - -* Scroll API: Search is limited to 500 shards at the very most. In cases where data shards are unavailable or time out, the export may return partial data. -* PIT API: Permissions to read data aliases alone will not work. The permissions are needed on the underlying indices or data streams. 
In cases where data shards are unavailable or time out, the export will be empty instead of returning partial data.
-
-::::
-
-
-`xpack.reporting.csv.scroll.duration`
-: Amount of [time](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units) allowed before {{kib}} cleans the scroll context during a CSV export. Valid options are `auto` or a [time](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units) value. Defaults to `30s`.
-
-::::{note}
-When the config value is set to `auto`, the scroll context is preserved for as long as possible, before the report task is terminated due to the limits of `xpack.reporting.queue.timeout`.
-
-::::
-
-
-
-### All supported versions [ec_all_supported_versions_6]
-
-`xpack.reporting.enabled`
-: Set to `false` to completely disable reporting.
-
-`xpack.reporting.queue.timeout`
-: Specifies the time each worker has to produce a report. If your machine is slow or under heavy load, you might need to increase this timeout. Specified in milliseconds (number) or duration (string). Duration is a string value formatted as [ms|s|m|h|d|w|M|Y], for example, *20m*, *24h*, *7d*, *1w*.
-
-    Defaults to `120000` (2 minutes).
-
-
-`xpack.reporting.capture.maxAttempts`
-: Specifies how many retries to attempt in case of occasional failures.
-
-    Defaults to `3`.
-
-
-`xpack.screenshotting.capture.timeouts.openUrl`
-: Specify how long to allow the Reporting browser to wait for the "Loading…" screen to dismiss and find the initial data for the Kibana page. If the time is exceeded, a page screenshot is captured showing the current state, and the download link shows a warning message.
-
-    Defaults to `30000` (30 seconds).
-
-
-`xpack.screenshotting.capture.timeouts.waitForElements`
-: Specify how long to allow the Reporting browser to wait for all visualization panels to load on the Kibana page.
If the time is exceeded, a page screenshot is captured showing the current state, and the download link shows a warning message. - - Defaults to `30000` (30 seconds). - - -`xpack.screenshotting.capture.timeouts.renderComplete` -: Specify how long to allow the Reporting browser to wait for all visualizations to fetch and render the data. If the time is exceeded, a page screenshot is captured showing the current state, and the download link shows a warning message. - - Defaults to `30000` (30 seconds). - - -`xpack.screenshotting.capture.browser.type` -: Specifies the browser to use to capture screenshots. Valid options are `phantom` and `chromium`. - - Beginning with version 7.0, `chromium` is the only allowed option. Defaults to `phantom` for earlier versions. - - -`xpack.reporting.csv.maxSizeBytes` -: Sets the maximum size of a CSV file before being truncated. This setting exists to prevent large exports from causing performance and storage issues. Until 7.15, maximum allowed value is 50 MB (52428800 Bytes). - - Defaults to `250MB`. {{stack}} versions before 8.10 default to `10485760` (10MB). - - -`xpack.reporting.encryptionKey` -: Set to any text string. To provide your own encryption key for reports, use this setting. - -`xpack.reporting.roles.enabled` -: When `true`, grants users access to the {{report-features}} when they are assigned the `reporting_user` role. Granting access to users this way is deprecated. Set to `false` and use [Kibana privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md) instead. - -Defaults to `true`. - -`xpack.reporting.csv.scroll.duration` -: Amount of [time](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units) allowed before {{kib}} cleans the scroll context during a CSV export. - -Defaults to `30s` (30 seconds). 
- -::::{note} -If search latency in {{es}} is sufficiently high, such as if you are using cross-cluster search or frozen tiers, you may need to increase the setting. - -:::: - - -`xpack.reporting.csv.scroll.size` -: Sets the number of documents retrieved from {{es}} for each scroll iteration during Kibana CSV export. Defaults to `500`. - -`xpack.reporting.csv.checkForFormulas` -: Enables a check that warns you when there’s a potential formula included in the output (=, -, +, and @ chars). See OWASP: [https://www.owasp.org/index.php/CSV_Injection](https://www.owasp.org/index.php/CSV_Injection). Defaults to `true`. - -`xpack.reporting.csv.escapeFormulaValues` -: Escapes formula values in cells with a `'`. See OWASP: [https://www.owasp.org/index.php/CSV_Injection](https://www.owasp.org/index.php/CSV_Injection). Defaults to `true`. - -`xpack.reporting.csv.useByteOrderMarkEncoding` -: Adds a byte order mark (`\ufeff`) at the beginning of the CSV file. Defaults to `false`. - - - -## Logging and audit settings [ec_logging_and_audit_settings] - -::::{note} -To change logging settings or to enable auditing you must first [enable deployment logging](../../../deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md). -:::: - - -The following logging settings are supported: - -### Version 8.0+ [ec_version_8_0_2] - -`logging.root.level` -: Can be used to adjust Kibana’s logging level. Allowed values are `fatal`, `error`, `warn`, `info`, `debug`, `trace`, and `all`. Setting this to `all` causes all events to be logged, including system usage information, all requests, and Elasticsearch queries. This has a similar effect to enabling both `logging.verbose` and `elasticsearch.logQueries` in older 7.x versions. Setting to `error` has a similar effect to enabling `logging.quiet` in older 7.x versions. Defaults to `info`. - -`xpack.security.audit.enabled` -: When set to *true*, audit logging is enabled for security events. Defaults to *false*. 
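-For example, a minimal `kibana.yml` sketch that raises the log level and turns on security audit logging on version 8.0+:
-
-```yaml
-# "debug" is illustrative; "all" additionally logs requests and Elasticsearch queries
-logging.root.level: debug
-xpack.security.audit.enabled: true
-```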
- - -### Supported 7.x versions [ec_supported_7_x_versions] - -`xpack.security.audit.appender.type` -: When set to *"rolling-file"* and `xpack.security.audit.enabled` is set to *true*, Kibana ECS audit logs are enabled. Beginning with version 8.0, this setting is no longer necessary for ECS audit log output; it’s only necessary to set `xpack.security.audit.enabled` to `true` - -`logging.verbose` -: If set to *true*, all events are logged, including system usage information and all requests. Defaults to *false*. - -`logging.quiet` -: If set to *true*, all logging output other than error messages is suppressed. Defaults to *false*. - -`elasticsearch.logQueries` -: When set to *true*, queries sent to Elasticsearch are logged (requires `logging.verbose` set to *true*). Defaults to *false*. - -`xpack.security.audit.enabled` -: When set to *true*, audit logging is enabled for security events. Defaults to *false*. - - -### All supported versions [ec_all_supported_versions_7] - -`xpack.security.audit.ignore_filters` -: List of filters that determine which audit events should be excluded from the ECS audit log. - -`xpack.security.audit.ignore_filters.actions` -: List of values matched against the `event.action` field of an audit event. - -`xpack.security.audit.ignore_filters.categories` -: List of values matched against the `event.category` field of an audit event. - -`xpack.security.audit.ignore_filters.outcomes` -: List of values matched against the `event.outcome` field of an audit event. - -`xpack.security.audit.ignore_filters.spaces` -: List of values matched against the `kibana.space_id` field of an audit event. This represents the space id in which the event took place. - -`xpack.security.audit.ignore_filters.types` -: List of values matched against the `event.type` field of an audit event. - - -### Version 8.15.0+ [ec_version_8_15_0] - -`xpack.security.audit.ignore_filters.users` -: List of values matched against the `user.name` field of an audit event. 
This represents the username associated with the audit event. - - - -## APM [ec_apm] - -The following APM settings are supported in Kibana: - -### Version 8.0.0+ [ec_version_8_0_0_2] - -`xpack.apm.autoCreateApmDataView` -: Set to `false` to disable the automatic creation of the APM data view when the APM app is opened. Defaults to `true`. This setting was called `xpack.apm.autocreateApmIndexPattern` in versions prior to 8.0.0. - - -### Version 7.16.0 to 8.6.2 [ec_version_7_16_0_to_8_6_2] - -`xpack.apm.ui.transactionGroupBucketSize` -: Number of top transaction groups displayed in the APM app. Defaults to `1000`. - - -### Version 7.16.0 to 8.0.0 [ec_version_7_16_0_to_8_0_0] - -`xpack.apm.maxServiceEnvironments` -: Maximum number of unique service environments recognized by the UI. Defaults to `100`. - - -### Supported versions before 8.x [ec_supported_versions_before_8_x_2] - -`xpack.apm.autocreateApmIndexPattern` -: Set to `false` to disable the automatic creation of the APM data view when the APM app is opened. Defaults to `true`. This setting is renamed to `xpack.apm.autoCreateApmDataView` in version 8.0.0. - - -### All supported versions [ec_all_supported_versions_8] - -`xpack.apm.serviceMapFingerprintBucketSize` -: Maximum number of unique transaction combinations sampled for generating service map focused on a specific service. Defaults to `100`. - -`xpack.apm.serviceMapFingerprintGlobalBucketSize` -: Maximum number of unique transaction combinations sampled for generating the global service map. Defaults to `100`. - -`xpack.apm.serviceMapEnabled` -: Set to `false` to disable service maps. Defaults to `true`. - -`xpack.apm.serviceMapTraceIdBucketSize` -: Maximum number of trace IDs sampled for generating service map focused on a specific service. Defaults to `65`. - -`xpack.apm.serviceMapTraceIdGlobalBucketSize` -: Maximum number of trace IDs sampled for generating the global service map. Defaults to `6`. 
-
-`xpack.apm.serviceMapMaxTracesPerRequest`
-: Maximum number of traces per request for generating the global service map. Defaults to `50`.
-
-`xpack.observability.annotations.index`
-: Index name where Observability annotations are stored. Defaults to `observability-annotations`.
-
-`xpack.apm.metricsInterval`
-: Sets a `fixed_interval` for date histograms in metrics aggregations. Defaults to `30`.
-
-`xpack.apm.agent.migrations.enabled`
-: Set to `false` to disable cloud APM migrations. Defaults to `true`.
-
-`xpack.apm.indices.span`
-: Matcher for indices containing span documents. Defaults to `apm-*`.
-
-`xpack.apm.indices.error`
-: Matcher for indices containing error documents. Defaults to `apm-*`.
-
-`xpack.apm.indices.transaction`
-: Matcher for indices containing transaction documents. Defaults to `apm-*`.
-
-`xpack.apm.indices.onboarding`
-: Matcher for all onboarding indices. Defaults to `apm-*`.
-
-`xpack.apm.indices.metric`
-: Matcher for all metrics indices. Defaults to `apm-*`.
-
-`xpack.apm.indices.sourcemap`
-: Matcher for all source map indices. Defaults to `apm-*`.
-
-`xpack.apm.maxSuggestions`
-: Maximum number of suggestions fetched in autocomplete selection boxes. Defaults to `100`.
-
-`xpack.apm.searchAggregatedTransactions`
-: Whether to use metric documents instead of transaction documents to render the UI. Available options are `always`, `never`, or `auto`. Defaults to `auto`.
-
-`xpack.apm.ui.maxTraceItems`
-: Maximum number of child items displayed when viewing trace details.
-
-    Defaults to `1000`. Any positive value is valid. To learn more, check [APM settings in Kibana](kibana://reference/configuration-reference/apm-settings.md).
-
-
-`xpack.apm.ui.enabled`
-: Set to `false` to disable the X-Pack APM UI.
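-As a sketch, a few of the APM settings above combined in `kibana.yml` (the values shown are the documented defaults):
-
-```yaml
-xpack.apm.serviceMapEnabled: true
-xpack.apm.searchAggregatedTransactions: auto
-xpack.apm.ui.maxTraceItems: 1000
-xpack.apm.indices.transaction: "apm-*"
-```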
diff --git a/raw-migrated-files/cloud/cloud/ec-password-reset.md b/raw-migrated-files/cloud/cloud/ec-password-reset.md
deleted file mode 100644
index 7b83ca8ce..000000000
--- a/raw-migrated-files/cloud/cloud/ec-password-reset.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# Reset the `elastic` user password [ec-password-reset]
-
-You might need to reset the password for the `elastic` superuser if you cannot authenticate with the `elastic` user ID and are effectively locked out from an Elasticsearch cluster or Kibana.
-
-::::{note}
-Elastic does not manage the `elastic` user and does not have access to the account or its credentials. If you lose the password, you have to reset it.
-::::
-
-
-::::{note}
-Resetting the `elastic` user password does not interfere with Marketplace integrations.
-::::
-
-
-::::{note}
-The `elastic` user should not be used unless you have no other way to access your deployment. [Create API keys for ingesting data](beats://reference/filebeat/beats-api-keys.md), and create user accounts with [appropriate roles for user access](../../../deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md).
-::::
-
-
-To reset the password:
-
-1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body).
-2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments.
-
-    On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
-
-3. From your deployment menu, go to **Security**.
-4. Select **Reset password**.
-5. Copy down the auto-generated password for the `elastic` user:
-
-    ![The password for the elastic user after resetting](../../../images/cloud-reset-password.png "")
-
-6. Close the window.
- -The password is not accessible after you close the window, so if you lose it, you need to reset the password again. - diff --git a/raw-migrated-files/cloud/cloud/ec-regional-deployment-aliases.md b/raw-migrated-files/cloud/cloud/ec-regional-deployment-aliases.md deleted file mode 100644 index 70405a3db..000000000 --- a/raw-migrated-files/cloud/cloud/ec-regional-deployment-aliases.md +++ /dev/null @@ -1,149 +0,0 @@ -# Custom endpoint aliases [ec-regional-deployment-aliases] - -Custom aliases for your deployment endpoints on {{ech}} allow you to have predictable, human-readable URLs that can be shared easily. An alias is unique to only one deployment within a region. - - -## Create a custom endpoint alias for a deployment [ec-create-regional-deployment-alias] - -::::{note} -New deployments are assigned a default alias derived from the deployment name. This alias can be modified later, if needed. -:::: - - -To add an alias to an existing deployment: - -1. From the **Deployments** menu, select a deployment. -2. Under **Custom endpoint alias**, select **Edit**. -3. Define a new alias. Make sure you choose something meaningful to you. - - ::::{tip} - Make the alias as unique as possible to avoid collisions. Aliases might have been already claimed by other users for deployments in the region. - :::: - -4. Select **Update alias**. - - -## Remove a custom endpoint alias [ec-delete-regional-deployment-alias] - -To remove an alias from your deployment, or if you want to re-assign an alias to another deployment, follow these steps: - -1. From the **Deployments** menu, select a deployment. -2. Under **Custom endpoint alias**, select **Edit**. -3. Remove the text from the **Custom endpoint alias** text box. -4. Select **Update alias**. - -::::{note} -After removing an alias, your organisation’s account will hold a claim on it for 30 days. After that period, other users can re-use this alias. 
-:::: - - - -## Using the custom endpoint URL [ec-using-regional-deployment-alias] - -To use your new custom endpoint URL to access your Elastic products, note that each has its own alias to use in place of the default application UUID. For example, if you configured the custom endpoint alias for your deployment to be `test-alias`, the corresponding alias for the Elasticsearch cluster in that deployment is `test-alias.es`. - -::::{note} -You can get the application-specific custom endpoint alias by selecting **Copy endpoint** for that product. It should contain a subdomain for each application type, for example `es`, `kb`, `apm`, or `ent`. -:::: - - - -### With the REST Client [ec-rest-regional-deployment-alias] - -* As part of the host name: - - After configuring your custom endpoint alias, select **Copy endpoint** on the deployment overview page, which gives you the fully qualified custom endpoint URL for that product. - -* As an HTTP request header: - - Alternatively, you can reach your application by passing the application-specific custom endpoint alias, for example, `test-alias.es`, as the value for the `X-Found-Cluster` HTTP header. - - - -### With the `TransportClient` [ec-transport-regional-deployment-alias] - -While the `TransportClient` is deprecated, your custom endpoint aliases still work with it. Similar to the REST Client, there are two ways to use your custom endpoint alias with the `TransportClient`: - -* As part of the host name: - - Similar to HTTP, you can find the fully qualified host on the deployment overview page by selecting **Copy endpoint** next to Elasticsearch. Make sure to remove the unnecessary `https://` prefix as well as the trailing HTTP port. - -* As part of the **Settings**: - - Include the application-specific custom endpoint alias as the value for `request.headers.X-Found-Cluster` setting in place of the `clusterId`: - - ```java - // Build the settings for our client. 
-    String alias = "test-alias.es"; // Your application-specific custom endpoint alias here
-    String region = "us-east-1"; // Your region here
-    boolean enableSsl = true;
-
-    Settings settings = Settings.settingsBuilder()
-        .put("transport.ping_schedule", "5s")
-        //.put("transport.sniff", false) // Disabled by default and *must* be disabled.
-        .put("action.bulk.compress", false)
-        .put("shield.transport.ssl", enableSsl)
-        .put("request.headers.X-Found-Cluster", alias)
-        .put("shield.user", "username:password") // your shield username and password
-        .build();
-
-    String hostname = alias + "." + region + ".aws.found.io";
-    // Instantiate a TransportClient and add the cluster to the list of addresses to connect to.
-    // Only port 9343 (SSL-encrypted) is currently supported.
-    Client client = TransportClient.builder()
-        .addPlugin(ShieldPlugin.class)
-        .settings(settings)
-        .build()
-        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(hostname), 9343));
-    ```
-
-
-For more information on configuring the `TransportClient`, see the {{es}} `TransportClient` documentation.
-
-
-## Create a custom domain with NGINX [ec-custom-domains-with-nginx]
-
-If you don't get the level of domain customization you're looking for by using the [custom endpoint aliases](../../../deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md), you might consider creating a CNAME record that points to your Elastic Cloud endpoints. However, that can lead to some issues. Instead, setting up your own proxy could provide the desired level of customization.
-
-::::{important}
-The setup described in the following sections is not supported by Elastic. If your proxy cannot connect to the endpoint but curl can, we may not be able to help.
-::::
-
-
-
-### Avoid creating CNAMEs [ec_avoid_creating_cnames]
-
-To achieve a fully custom domain, you can add a CNAME that points to your Elastic Cloud endpoint. However, this will lead to invalid certificate errors, and moreover, may simply not work.
Your Elastic Cloud endpoints already point to a proxy internal to Elastic Cloud, which may not resolve your configured CNAME in the desired way. - -So what should you do instead? - - -### Setting up a proxy [ec_setting_up_a_proxy] - -Here we’ll show you an example of proxying with NGINX, but this can be extrapolated to HAProxy or some other proxy server. - -You need to set `proxy_pass` and `proxy_set_header`, and include the `X-Found-Cluster` header with the cluster’s UUID. You can get the cluster ID by clicking the `Copy cluster ID` link on your deployment’s main page. - -``` -server { - listen 443 ssl; - server_name elasticsearch.example.com; - - include /etc/nginx/tls.conf; - - location / { - proxy_pass https://<cluster-id>.eu-west-1.aws.elastic-cloud.com/; - proxy_set_header X-Found-Cluster <cluster-id>; - } -} -``` - -This should work for all of your applications, not just {{es}}. To set it up for {{kib}}, for example, you can select `Copy cluster ID` next to {{kib}} on your deployment’s main page to get the correct UUID. - -::::{note} -Doing this for {{kib}} won't work with Cloud SSO. -:::: - - -To configure `tls.conf` in this example, check out [https://ssl-config.mozilla.org/](https://ssl-config.mozilla.org/) for more fields. - diff --git a/raw-migrated-files/cloud/cloud/ec-restore-across-clusters.md b/raw-migrated-files/cloud/cloud/ec-restore-across-clusters.md deleted file mode 100644 index b5d36a5be..000000000 --- a/raw-migrated-files/cloud/cloud/ec-restore-across-clusters.md +++ /dev/null @@ -1,49 +0,0 @@ -# Restore a snapshot across clusters [ec-restore-across-clusters] - -Snapshots can be restored to either the same Elasticsearch cluster or to another cluster. If you are restoring all indices to another cluster, you can *clone* a cluster. - -::::{note} -Users created using the X-Pack security features or using Shield are not included when you restore across clusters; only data from Elasticsearch indices is restored. 
If you do want to create a cloned cluster with the same users as your old cluster, you need to recreate the users manually on the new cluster. -:::: - - -Restoring to another cluster is useful for scenarios where isolating activities on a separate cluster is beneficial, such as: - -Performing ad hoc analytics -: For most logging and metrics use cases, it is cost-prohibitive to have all the data in memory, even if it would provide the best performance for aggregations. Cloning the relevant data to an ad hoc analytics cluster that can be discarded after use is a cost-effective way to experiment with your data, without risk to existing clusters used for production. - -Enabling your developers -: Realistic test data is crucial for uncovering unexpected errors early in the development cycle. What can be more realistic than actual data from a production cluster? Giving your developers access to real production data is a great way to break down silos. - -Testing mapping changes -: Mapping changes almost always require reindexing. Unless your data volume is trivial, reindexing requires time, and tweaking the parameters to achieve the best reindexing performance usually takes a little trial and error. While this use case could also be handled by running the scan and scroll query directly against the source cluster, a long-lived scroll has the side effect of blocking merges even if the scan query is very lightweight. - -Integration testing -: Test your application against a real live Elasticsearch cluster with actual data. If you automate this, you could also aggregate performance metrics from the tests and use those metrics to detect if a change in your application has introduced a performance degradation. - -::::{note} -A cluster is eligible as a destination for a built-in snapshot restore if it meets these criteria: - -* The cluster is in the same region. For example, a snapshot made in `eu-west-1` cannot be restored to `us-east-1` at this point. 
If you need to restore snapshots across regions, create the destination deployment, connect to the [custom repository](../../../deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md), and then [restore from a snapshot](../../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md). -* The destination cluster is able to read the indices. You can generally restore snapshots of indices created back to the previous major version to your Elasticsearch cluster, but see the [version matrix](../../../deploy-manage/tools/snapshot-and-restore.md#snapshot-restore-version-compatibility) for all the details. - -:::: - - -The list of available snapshots can be found in the [`found-snapshots` repository](../../../deploy-manage/tools/snapshot-and-restore/self-managed.md). - -To restore built-in snapshots across clusters, there are two options: - -* [Restore snapshot into a new deployment](../../../deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-into-new-deployment.md) -* [Restore snapshot into an existing deployment](../../../deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-into-existing-deployment.md) - -When restoring snapshots across clusters, we create a new repository called `_clone_{{clusterIdPrefix}}`, which persists until manually deleted. If the repository is still in use, for example by mounted searchable snapshots, it can’t be removed. 
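Either way, the restore itself ultimately runs the standard snapshot restore API on the destination cluster. The following is only a sketch of what that underlying call looks like against the default `found-snapshots` repository; the snapshot name and index pattern are hypothetical placeholders, not values from your deployment:

```
POST _snapshot/found-snapshots/<snapshot-name>/_restore
{
  "indices": "my-index-*",
  "ignore_unavailable": true,
  "include_global_state": false
}
```

Setting `include_global_state` to `false` avoids overwriting cluster-wide state, such as persistent settings, on the destination cluster.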
- -::::{warning} -When restoring from a deployment that’s using searchable snapshots, refer to [Restore snapshots containing searchable snapshots indices across clusters](../../../deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-containing-searchable-snapshots-indices-across-clusters.md). -:::: - - - - - diff --git a/raw-migrated-files/cloud/cloud/ec-restoring-snapshots.md b/raw-migrated-files/cloud/cloud/ec-restoring-snapshots.md deleted file mode 100644 index 3f8594eb3..000000000 --- a/raw-migrated-files/cloud/cloud/ec-restoring-snapshots.md +++ /dev/null @@ -1,20 +0,0 @@ -# Work with snapshots [ec-restoring-snapshots] - -Snapshots provide a way to restore your Elasticsearch indices. They can be used to copy indices for testing, to recover from failures or accidental deletions, or to migrate data to other deployments. - -By default, {{ech}} takes a snapshot of all the indices in your Elasticsearch cluster every 30 minutes. You can set a different snapshot interval if needed for your environment. You can also take snapshots on demand, without having to wait for the next interval. Taking a snapshot on demand does not affect the retention schedule for existing snapshots; it just adds an additional snapshot to the repository. This might be helpful if you are about to make a deployment change and you don’t have a current snapshot. - -Use Kibana to manage your snapshots. In Kibana, you can set up additional repositories where the snapshots are stored, other than the one currently managed by {{ech}}. You can view and delete snapshots, and configure a snapshot lifecycle management (SLM) policy to automate when snapshots are created and deleted. To learn more, check the [Snapshot and Restore](../../../deploy-manage/tools/snapshot-and-restore/create-snapshots.md) documentation. - -::::{important} -Snapshots back up only open indices. If you close an index, it is not included in snapshots and you will not be able to restore the data. 
-:::: - - -::::{note} -A snapshot taken using the default `found-snapshots` repository can only be restored to deployments in the same region. If you need to restore snapshots across regions, create the destination deployment, connect to the [custom repository](../../../deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md), and then [restore from a snapshot](../../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md). -:::: - - -From within {{ech}}, you can [restore a snapshot](../../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) from a different deployment in the same region. - diff --git a/raw-migrated-files/cloud/cloud/ec-select-subscription-level.md b/raw-migrated-files/cloud/cloud/ec-select-subscription-level.md deleted file mode 100644 index 1fcba9b00..000000000 --- a/raw-migrated-files/cloud/cloud/ec-select-subscription-level.md +++ /dev/null @@ -1,56 +0,0 @@ -# Choose a subscription level [ec-select-subscription-level] - -When you decide to add your credit card and become a paying customer, you can choose a subscription level that includes the features you are going to use. On our [pricing page](https://www.elastic.co/cloud/elasticsearch-service/pricing), you can get a complete list of features by subscription level. - -If, at any time during your monthly subscription with Elastic Cloud, you decide you need features on a higher subscription level, you can easily make changes. You can either upgrade to a higher subscription level or downgrade to a lower one. - -To change your subscription level: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Select the user icon on the header bar and select **Billing** from the menu. -3. On the **Overview** page, select **Update subscription**. -4. Choose a new subscription level. -5. Save your changes. - -::::{important} -Changing to a higher subscription level takes place immediately. 
Moving to a lower subscription level takes effect 30 days after you most recently changed to a higher subscription level; in the interim, you pay the current rate. If you haven’t performed a self-service change in the past 30 days, then the change to the lower subscription level is immediate. -:::: - - -::::{important} -Customers on the prepaid consumption billing model can change their subscription level on the **Billing subscription** page. Cloud Standard is not available for customers on the prepaid consumption billing model. -:::: - - - -## Feature usage notifications [ec_feature_usage_notifications] - -If you try to change your subscription to a lower level, but you are using features that belong either to your current level or to a higher one, you need to make some changes before you can proceed, as described in the **Review required feature changes** link. - -This overview shows you: - -* Any features in use that belong to a higher subscription level, grouped by deployment -* Which subscription level you should change to in order to keep those features - -You can [change your subscription level](../../../deploy-manage/cloud-organization/billing/manage-subscription.md) to the recommended level, or stop using the features that belong to a higher level. In the following list, you can find the features we are tracking and the relevant instructions to remove them from your deployments: - -`Machine learning` -: Edit your deployment to disable [machine learning](/explore-analyze/machine-learning/anomaly-detection.md). - -`Searchable snapshots` -: Edit your deployment index management policies to disable the frozen tier that is using [searchable snapshots](../../../deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md), or set up your cold tier to not mount indices from a searchable snapshot. 
- -`JDBC/ODBC clients` -: Make sure that there are no applications that use the SQL [JDBC](/explore-analyze/query-filter/languages/sql-jdbc.md) or [ODBC](/explore-analyze/query-filter/languages/sql-odbc.md) clients. - -`Field-level or document-level security` -: Remove any user role configurations based on field or document access [through the API](/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level.md) or the Kibana Roles page. - -`ES|QL cross-cluster search` -: Discontinue all ES|QL CCS queries or upgrade your license tier to Enterprise. - -::::{note} -After you have made your changes to the deployment, it can take up to one hour to clear the notification banner. -:::: - - diff --git a/raw-migrated-files/cloud/cloud/ec-service-status.md b/raw-migrated-files/cloud/cloud/ec-service-status.md deleted file mode 100644 index 2a769018f..000000000 --- a/raw-migrated-files/cloud/cloud/ec-service-status.md +++ /dev/null @@ -1,25 +0,0 @@ -# Service status [ec-service-status] - -{{ech}} is a hosted service for the Elastic Stack that runs on different cloud platforms, such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Like any service, it might undergo availability changes from time to time. When availability changes, Elastic makes sure to provide you with a current service status. - -To check current and past service availability, go to the [Cloud Status](https://cloud-status.elastic.co/) page. - - -## Subscribe to updates [ec_subscribe_to_updates] - -Don’t want to check the service status page manually? You can get notified about changes to the service status automatically. - -To receive service status updates: - -1. Go to the [Cloud Status](https://cloud-status.elastic.co/) page and select **SUBSCRIBE TO UPDATES**. -2. 
Select one of the following methods to be notified of status updates: - - * Email - * Twitter - * Atom and RSS feeds - - -After you subscribe to updates, you are notified whenever a service status update is posted. - - - diff --git a/raw-migrated-files/cloud/cloud/ec-snapshot-restore.md b/raw-migrated-files/cloud/cloud/ec-snapshot-restore.md deleted file mode 100644 index ebc1664c0..000000000 --- a/raw-migrated-files/cloud/cloud/ec-snapshot-restore.md +++ /dev/null @@ -1,24 +0,0 @@ -# Snapshot and restore [ec-snapshot-restore] - -Snapshots are an efficient way to ensure that your Elasticsearch indices can be recovered in the event of an accidental deletion, or to migrate data across deployments. - -The information here is specific to managing repositories and snapshots in {{ech}}. We also support the Elasticsearch snapshot and restore API to back up your data. For details, consult the [Snapshot and Restore documentation](../../../deploy-manage/tools/snapshot-and-restore.md). - -When you create a cluster in {{ech}}, a default repository called `found-snapshots` is automatically added to the cluster. This repository is specific to that cluster: the deployment ID is part of the repository’s `base_path`, i.e., `/snapshots/[cluster-id]`. - -::::{important} -Do not disable or delete the default `cloud-snapshot-policy` SLM policy, and do not change the default `found-snapshots` repository defined in that policy. These actions are not supported. - -The default policy and repository are used when creating a new deployment from a snapshot, when restoring a snapshot to a different deployment, and when taking automated snapshots in case of deployment changes. You can, however, customize the snapshot retention settings in that policy to adjust it to your needs. 
- -To use a custom snapshot repository, you can [register a new snapshot repository](../../../deploy-manage/tools/snapshot-and-restore/self-managed.md) and [create another SLM policy](../../../deploy-manage/tools/snapshot-and-restore/create-snapshots.md#create-slm-policy). - -:::: - - -To get started with snapshots, check out the following pages: - -* [Add your own custom repositories](../../../deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md) to snapshot to and restore from. -* To configure your cluster snapshot settings, see the [Snapshot and Restore documentation](../../../deploy-manage/tools/snapshot-and-restore.md). -* [*Restore a snapshot across clusters*](../../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md). - diff --git a/raw-migrated-files/cloud/cloud/ec_service_status_api.md b/raw-migrated-files/cloud/cloud/ec_service_status_api.md deleted file mode 100644 index ab01cef75..000000000 --- a/raw-migrated-files/cloud/cloud/ec_service_status_api.md +++ /dev/null @@ -1,6 +0,0 @@ -# Service Status API [ec_service_status_api] - -If you want a more programmatic way to ingest our Service Status updates, we also expose some API endpoints that you can use. - -For more information and to get started, go to our [Service Status API](https://status.elastic.co/api/) page. - diff --git a/raw-migrated-files/cloud/cloud/ec_subscribe_to_individual_regionscomponents.md b/raw-migrated-files/cloud/cloud/ec_subscribe_to_individual_regionscomponents.md deleted file mode 100644 index abb752675..000000000 --- a/raw-migrated-files/cloud/cloud/ec_subscribe_to_individual_regionscomponents.md +++ /dev/null @@ -1,6 +0,0 @@ -# Subscribe to Individual Regions/Components [ec_subscribe_to_individual_regionscomponents] - -If you want to know about specific status updates, rather than all of them, you can adjust your preferences by using the following steps (use these steps both to sign up with a new email address and to adjust an existing subscription): 
1. Go to the [Cloud Status](https://cloud-status.elastic.co/) page and select **SUBSCRIBE TO UPDATES**. 2. Enter your email address and click **SUBSCRIBE VIA EMAIL**. 3. You will be brought to a page with a list of Components. - -Here you can select all, select none, or customize as you require. This way, you’ll only be notified about the status updates that are important to you. - diff --git a/raw-migrated-files/docs-content/serverless/_cloud_native_vulnerability_management_dashboard.md b/raw-migrated-files/docs-content/serverless/_cloud_native_vulnerability_management_dashboard.md deleted file mode 100644 index 699b9dfdb..000000000 --- a/raw-migrated-files/docs-content/serverless/_cloud_native_vulnerability_management_dashboard.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -navigation_title: "Cloud Native Vulnerability Management dashboard" ---- - -# Cloud Native Vulnerability Management dashboard [_cloud_native_vulnerability_management_dashboard] - - -The Cloud Native Vulnerability Management (CNVM) dashboard gives you an overview of vulnerabilities detected in your cloud infrastructure. - -:::{image} ../../../images/serverless--cloud-native-security-vuln-management-dashboard.png
:alt: The CNVM dashboard
:screenshot:
::: - -::::{admonition} Requirements
:class: note - -* To collect this data, install the [Cloud Native Vulnerability Management](../../../solutions/security/cloud/get-started-with-cnvm.md) integration. - -:::: - - - -## CNVM dashboard UI [CNVM-dashboard-UI-dash{{append}}] - -The summary cards at the top of the dashboard display the number of monitored cloud accounts, scanned virtual machines (VMs), and vulnerabilities (grouped by severity). - -The **Trend by severity** bar graph complements the summary cards by displaying the number of vulnerabilities found on your infrastructure over time, sorted by severity. It has a maximum time scale of 30 days. 
- -::::{admonition} Graph tips -:class: note - -* Click the severity levels legend on its right to hide/show each severity level. -* To display data from specific cloud accounts, select the account names from the **Accounts** drop-down menu. - -:::: - - -The page also includes three tables: - -* **Top 10 vulnerable resources** shows your VMs with the highest number of vulnerabilities. -* **Top 10 patchable vulnerabilities** shows the most common vulnerabilities in your environment that can be fixed by a software update. -* **Top 10 vulnerabilities** shows the most common vulnerabilities in your environment, with additional details. - -Click **View all vulnerabilities** at the bottom of a table to open the [Vulnerabilities Findings](../../../solutions/security/cloud/findings-page-3.md) page, where you can view additional details. diff --git a/raw-migrated-files/docs-content/serverless/elasticsearch-differences.md b/raw-migrated-files/docs-content/serverless/elasticsearch-differences.md deleted file mode 100644 index db61c8909..000000000 --- a/raw-migrated-files/docs-content/serverless/elasticsearch-differences.md +++ /dev/null @@ -1,153 +0,0 @@ ---- -navigation_title: "Serverless differences" ---- - -# Differences from other {{es}} offerings [elasticsearch-differences] - - -[{{es-serverless}}](../../../solutions/search.md) handles all the infrastructure management for you, providing a fully managed {{es}} service. - -If you’ve used {{es}} before, you’ll notice some differences in how you work with the service on {{serverless-full}}, because a number of APIs and settings are not required for serverless projects. - -This guide helps you understand what’s different, what’s available, and how to work effectively when running {{es}} on {{serverless-full}}. 
- - -## Fully managed infrastructure [elasticsearch-differences-serverless-infrastructure-management] - -{{es-serverless}} manages all infrastructure automatically, including: - -* Cluster scaling and optimization -* Node management and allocation -* Shard distribution and replication -* Resource utilization and monitoring - -This fully managed approach means many traditional {{es}} infrastructure APIs and settings are not available to end users, as detailed in the following sections. - - -## Index size guidelines [elasticsearch-differences-serverless-index-size] - -To ensure optimal performance, follow these recommendations for sizing individual indices on {{es-serverless}}: - -| Use case | Maximum index size | Project configuration | -| --- | --- | --- | -| Vector search | 150GB | Vector optimized | -| General search (non data-stream) | 300GB | General purpose | -| Other uses (non data-stream) | 600GB | General purpose | - -For large datasets that exceed the recommended maximum size for a single index, consider splitting your data across smaller indices and using an alias to search them collectively. - -These recommendations do not apply to indices using better binary quantization (BBQ). Refer to [vector quantization](elasticsearch://reference/elasticsearch/mapping-reference/dense-vector.md#dense-vector-quantization) in the core {{es}} docs for more information. - - -## API availability [elasticsearch-differences-serverless-apis-availability] - -Because {{es-serverless}} manages infrastructure automatically, certain APIs are not available, while others remain fully accessible. - -::::{tip} -Refer to the [{{es-serverless}} API reference](https://www.elastic.co/docs/api/doc/elasticsearch-serverless) for a complete list of available APIs. 
- -:::: - - -The following categories of operations are unavailable: - -Infrastructure operations -: * All `_nodes/*` operations -* All `_cluster/*` operations -* Most `_cat/*` operations, except for index-related operations such as `/_cat/indices` and `/_cat/aliases` - - -Storage and backup -: * All `_snapshot/*` operations -* Repository management operations - - -Index management -: * `indices/close` operations -* `indices/open` operations -* Recovery and stats operations -* Force merge operations - - -When attempting to use an unavailable API, you’ll receive a clear error message: - -```json -{ - "error": { - "root_cause": [ - { - "type": "api_not_available_exception", - "reason": "Request for uri [/] with method [] exists but is not available when running in serverless mode" - } - ], - "status": 410 - } -} -``` - - -## Settings availability [elasticsearch-differences-serverless-settings-availability] - -In {{es-serverless}}, you can only configure [index-level settings](elasticsearch://reference/elasticsearch/index-settings/index.md). Cluster-level settings and node-level settings are not required by end users and the `elasticsearch.yml` file is fully managed by Elastic. - -Available settings -: **Index-level settings**: Settings that control how {{es}} documents are processed, stored, and searched are available to end users. These include: - - * Analysis configuration - * Mapping parameters - * Search/query settings - * Indexing settings such as `refresh_interval` - - -Managed settings -: **Infrastructure-related settings**: Settings that affect cluster resources or data distribution are not available to end users. These include: - - * Node configurations - * Cluster topology - * Shard allocation - * Resource management - - - -## Feature availability [elasticsearch-differences-serverless-feature-categories] - -Some features that are available in Elastic Cloud Hosted and self-managed offerings are not available in {{es-serverless}}. 
These features have either been replaced by a new feature, are planned to be released in future, or are not applicable in the new serverless architecture. - - -### Replaced features [elasticsearch-differences-serverless-features-replaced] - -These features have been replaced by a new feature and are therefore not available on {{es-serverless}}: - -* **Index lifecycle management ({{ilm-init}})** is not available, in favor of **data stream lifecycle**. - - In an Elastic Cloud Hosted or self-managed environment, {{ilm-init}} lets you automatically transition indices through data tiers according to your performance needs and retention requirements. This allows you to balance hardware costs with performance. {{es-serverless}} eliminates this complexity by optimizing your cluster performance for you. - - Data stream lifecycle is an optimized lifecycle tool that lets you focus on the most common lifecycle management needs, without unnecessary hardware-centric concepts like data tiers. - -* **Watcher** is not available, in favor of [**Alerts**](../../../explore-analyze/alerts-cases/alerts.md#rules-alerts). - - Kibana Alerts allows rich integrations across use cases like APM, metrics, security, and uptime. Prepackaged rule types simplify setup and hide the details of complex, domain-specific detections, while providing a consistent interface across Kibana. 
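As an illustration of the {{ilm-init}} replacement described above, data stream lifecycle collapses retention configuration into a single call on the data stream itself. This is only a minimal sketch — the data stream name and retention period below are examples, not recommendations:

```
PUT _data_stream/my-data-stream/_lifecycle
{
  "data_retention": "7d"
}
```

{{es}} then handles rollover and deletion behind the scenes, with no data tiers or {{ilm-init}} phases to configure.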
- - - -### Planned features [elasticsearch-differences-serverless-feature-planned] - -The following features are planned for future support in all {{serverless-full}} projects: - -* Reindexing from remote clusters -* Cross-project search and replication -* Snapshot and restore -* Migrations from non-serverless deployments -* Audit logging -* Clone index API -* Traffic filtering and VPCs - -### Unplanned features [elasticsearch-differences-serverless-feature-unavailable] - -The following features are not available in {{es-serverless}} and are not planned for future support: - -* [Custom plugins and bundles](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) -* [{{es}} for Apache Hadoop](elasticsearch-hadoop://reference/index.md) -* [Scripted metric aggregations](elasticsearch://reference/data-analysis/aggregations/search-aggregations-metrics-scripted-metric-aggregation.md) -* Managed web crawler: You can use the [self-managed web crawler](https://github.com/elastic/crawler) instead. -* Managed Search connectors: You can use [self-managed Search connectors](elasticsearch://reference/ingestion-tools/search-connectors/self-managed-connectors.md) instead. diff --git a/raw-migrated-files/docs-content/serverless/general-billing-stop-project.md b/raw-migrated-files/docs-content/serverless/general-billing-stop-project.md deleted file mode 100644 index f6a3ac6c1..000000000 --- a/raw-migrated-files/docs-content/serverless/general-billing-stop-project.md +++ /dev/null @@ -1,14 +0,0 @@ -# Stop charges for a project [general-billing-stop-project] - -Got a project you no longer need and don’t want to be charged for? Simply delete it. - -::::{warning} -All data is lost. Billing for usage is by the hour and any outstanding charges for usage before you deleted the project will still appear on your next bill. -:::: - - -To stop being charged for a project: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. 
Find your project on the home page in the **Serverless Projects** card and select **Manage** to access it directly. Or, select **Serverless Projects** to go to the projects page to view all of your projects. -3. Select **Actions**, then select **Delete project** and confirm the deletion. diff --git a/raw-migrated-files/docs-content/serverless/general-sign-up-trial.md b/raw-migrated-files/docs-content/serverless/general-sign-up-trial.md deleted file mode 100644 index f0dd973ee..000000000 --- a/raw-migrated-files/docs-content/serverless/general-sign-up-trial.md +++ /dev/null @@ -1,84 +0,0 @@ -# Sign up for Elastic Cloud [general-sign-up-trial] - -The following page provides information on how to sign up for an Elastic Cloud Serverless account. For information on how to sign up for hosted deployments, see [{{ech}} - How do I sign up?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md). - - -## Trial features [general-sign-up-trial-what-is-included-in-my-trial] - -Your free 14-day trial includes: - -**One hosted deployment** - -A deployment lets you explore Elastic solutions for Search, Observability, and Security. Trial deployments run on the latest version of the Elastic Stack. They include 8 GB of RAM spread out over two availability zones, and enough storage space to get you started. If you’re looking to evaluate a smaller workload, you can scale down your trial deployment. Each deployment includes Elastic features such as Maps, SIEM, machine learning, advanced security, and much more. You have some sample data sets to play with and tutorials that describe how to add your own data. - -To learn more about Elastic Cloud Hosted, check our [{{ech}} documentation](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md). 
- -**One serverless project** - -Serverless projects package Elastic Stack features by type of solution: - -* [{{es}}](../../../solutions/search.md) -* [Observability](../../../solutions/observability.md) -* [Security](../../../solutions/security/elastic-security-serverless.md) - -When you create a project, you select the project type applicable to your use case, so only the relevant and impactful applications and features are easily accessible to you. - -::::{note} -During the trial period, you are limited to one active hosted deployment and one active serverless project at a time. When you subscribe, you can create additional deployments and projects. - -:::: - - - -## Trial limitations [general-sign-up-trial-what-limits-are-in-place-during-a-trial] - -During the free 14-day trial, Elastic provides access to one hosted deployment and one serverless project. If all you want to do is try out Elastic, the trial includes more than enough to get you started. During the trial period, some limitations apply. - -**Hosted deployments** - -* You can have one active deployment at a time. -* The deployment size is limited to 8GB RAM and approximately 360GB of storage, depending on the specified hardware profile. -* Machine learning nodes are available up to 4GB RAM. -* Custom {{es}} plugins are not enabled. - -To learn more about Elastic Cloud Hosted, check our [{{ech}} documentation](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md). - -**Serverless projects** - -* You can have one active serverless project at a time. -* Search Power is limited to 100. This setting only exists in {{es-serverless}} projects. -* Search Boost Window is limited to 7 days. This setting only exists in {{es-serverless}} projects. -* Scaling is limited for serverless projects in trials. Failures might occur if the workload requires memory or compute beyond what the above search power and search boost window setting limits can provide. 
- -**Remove limitations** - -Subscribe to [Elastic Cloud](/deploy-manage/cloud-organization/billing/add-billing-details.md) for the following benefits: - -* Increased memory or storage for deployment components, such as {{es}} clusters, machine learning nodes, and APM server. -* As many deployments and projects as you need. -* Third availability zone for your deployments. -* Access to additional features, such as cross-cluster search and cross-cluster replication. - -You can subscribe to Elastic Cloud at any time during your trial. [Billing](../../../deploy-manage/cloud-organization/billing/serverless-project-billing-dimensions.md) starts when you subscribe. To maximize the benefits of your trial, subscribe at the end of the free period. To monitor charges, anticipate future costs, and adjust your usage, check your [account usage](/deploy-manage/cloud-organization/billing/monitor-analyze-usage.md) and [billing history](/deploy-manage/cloud-organization/billing/view-billing-history.md). - - -## Get started with your trial [general-sign-up-trial-how-do-i-get-started-with-my-trial] - -Start by checking out some common approaches for [moving data into Elastic Cloud](/manage-data/ingest.md). - - -## Maintain access to your trial projects and data [general-sign-up-trial-what-happens-at-the-end-of-the-trial] - -When your trial expires, the deployment and project that you created during the trial period are suspended until you subscribe to [Elastic Cloud](/deploy-manage/cloud-organization/billing/add-billing-details.md). When you subscribe, you are able to resume your deployment and serverless project, and regain access to the ingested data. After your trial expires, you have 30 days to subscribe. After 30 days, your deployment, serverless project, and ingested data are permanently deleted. - -If you’re interested in learning more ways to subscribe to Elastic Cloud, don’t hesitate to [contact us](https://www.elastic.co/contact). 
- - -## Sign up through a marketplace [general-sign-up-trial-how-do-i-sign-up-through-a-marketplace] - -If you’re interested in consolidated billing, subscribe from the AWS Marketplace, which allows you to skip the trial period and connect your AWS Marketplace email to your unique Elastic account. For a list of supported regions, see [Regions](../../../deploy-manage/deploy/elastic-cloud/regions.md). - -::::{note} -Serverless projects are only available through the AWS Marketplace. Support for GCP Marketplace and Azure Marketplace will be added in the near future. - -:::: diff --git a/raw-migrated-files/docs-content/serverless/intro.md b/raw-migrated-files/docs-content/serverless/intro.md deleted file mode 100644 index 25f770079..000000000 --- a/raw-migrated-files/docs-content/serverless/intro.md +++ /dev/null @@ -1,20 +0,0 @@ -# Elastic Cloud Serverless [intro] - -## Differences between serverless projects and hosted deployments on {{ecloud}} [general-what-is-serverless-elastic-differences-between-serverless-projects-and-hosted-deployments-on-ecloud] - -You can run [hosted deployments](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md) of the {{stack}} on {{ecloud}}. These hosted deployments provide more provisioning and advanced configuration options. - -| | | | -| --- | --- | --- | -| Option | Serverless | Hosted | -| **Cluster management** | Fully managed by Elastic. | You provision and manage your hosted clusters. Shared responsibility with Elastic. | -| **Scaling** | Autoscales out of the box. | Manual scaling or autoscaling available for you to enable. | -| **Upgrades** | Automatically performed by Elastic. | You choose when to upgrade. | -| **Pricing** | Individual per project type and based on your usage. | Based on deployment size and subscription level. | -| **Performance** | Autoscales based on your usage. | Manual scaling. | -| **Solutions** | Single solution per project. | Full Elastic Stack per deployment.
| -| **User management** | Elastic Cloud-managed users. | Elastic Cloud-managed users and native Kibana users. | -| **API support** | Subset of [APIs](https://www.elastic.co/docs/api). | All Elastic APIs. | -| **Backups** | Projects automatically backed up by Elastic. | Your responsibility with Snapshot & Restore. | -| **Data retention** | Editable on data streams. | Index Lifecycle Management. | - diff --git a/raw-migrated-files/docs-content/serverless/project-setting-data.md b/raw-migrated-files/docs-content/serverless/project-setting-data.md deleted file mode 100644 index ec07cabde..000000000 --- a/raw-migrated-files/docs-content/serverless/project-setting-data.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -navigation_title: "Data" ---- - -# Manage project data [project-setting-data] - - -Go to **Project settings**, then **Management** to manage your indices, data views, saved objects, settings, and more. You can also open Management by using the [global search field](../../../explore-analyze/query-filter/filtering.md#_finding_your_apps_and_objects). - -Access to individual features is governed by Elastic user roles. Consult your administrator if you do not have the appropriate access. To learn more about roles, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles). - -| Feature | Description | Available in | -| --- | --- | --- | -| [Integrations](integration-docs://reference/index.md) | Connect your data to your project. | [![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [Fleet and Elastic Agent](/reference/ingestion-tools/fleet/index.md) | Add monitoring for logs, metrics, and other types of data to a host.
| [![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [{{data-sources-cap}}](../../../explore-analyze/find-and-organize/data-views.md) | Manage the fields in the data views that retrieve your data from {{es-serverless}}. | [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [Index management](../../../manage-data/data-store/index-basics.md) | View index settings, mappings, and statistics and perform operations on indices. | [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [{{ingest-pipelines-cap}}](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md) | Create and manage ingest pipelines that parse, transform, and enrich your data. | [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [{{ls-pipelines}}](../../../manage-data/ingest/transform-enrich/logstash-pipelines.md) | Create and manage {{ls}} pipelines that parse, transform, and enrich your data. 
| [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [{{ml-cap}}](../../../explore-analyze/machine-learning.md) | View, export, and import your {{anomaly-detect}} and {{dfanalytics}} jobs and trained models. | [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [{{transforms-app}}](../../../explore-analyze/transforms.md) | Use transforms to pivot existing {{es}} indices into summarized or entity-centric indices. | [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | diff --git a/raw-migrated-files/docs-content/serverless/project-settings-alerts.md b/raw-migrated-files/docs-content/serverless/project-settings-alerts.md deleted file mode 100644 index 220404a9d..000000000 --- a/raw-migrated-files/docs-content/serverless/project-settings-alerts.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -navigation_title: "Alerts and insights" ---- - -# Manage alerts and insights [project-settings-alerts] - - -Go to **Project settings**, then **Management** to manage your indices, data views, saved objects, settings, and more. You can also open Management by using the [global search field](../../../explore-analyze/query-filter/filtering.md#_finding_your_apps_and_objects).
- -Access to individual features is governed by Elastic user roles. Consult your administrator if you do not have the appropriate access. To learn more about roles, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles). - -| Feature | Description | Available in | -| --- | --- | --- | -| [{{connectors-app}}](../../../deploy-manage/manage-connectors.md) | Create and manage reusable connectors for triggering actions. | [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [{{maint-windows-cap}}](../../../explore-analyze/alerts-cases/alerts/maintenance-windows.md) | Suppress rule notifications for scheduled periods of time. | [![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [{{rules-app}}](../../../explore-analyze/alerts-cases/alerts.md) | Create and manage rules that generate alerts. | [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md) | -| [Entity Risk Score](../../../solutions/security/advanced-entity-analytics/entity-risk-scoring.md) | Manage entity risk scoring, and preview risky entities. 
| [![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | diff --git a/raw-migrated-files/docs-content/serverless/project-settings-content.md b/raw-migrated-files/docs-content/serverless/project-settings-content.md deleted file mode 100644 index 632a830a9..000000000 --- a/raw-migrated-files/docs-content/serverless/project-settings-content.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -navigation_title: "Content" ---- - -# Manage project content [project-settings-content] - - -Go to **Project settings**, then **Management** to manage your indices, data views, saved objects, settings, and more. You can also open Management by using the [global search field](../../../explore-analyze/query-filter/filtering.md#_finding_your_apps_and_objects). - -Access to individual features is governed by Elastic user roles. Consult your administrator if you do not have the appropriate access. To learn more about roles, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles). - -| Feature | Description | Available in | -| --- | --- | --- | -| [Asset criticality](../../../solutions/security/advanced-entity-analytics/asset-criticality.md) | Bulk assign asset criticality to multiple entities by importing a text file. | [![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [{{files-app}}](../../../explore-analyze/find-and-organize/files.md) | Manage files that are stored in {{kib}}.
| [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [{{maps-app}}](../../../explore-analyze/visualize/maps.md) | Create maps from your geographical data. | [![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [{{reports-app}}](../../../explore-analyze/find-and-organize/reports.md) | Manage and download reports such as CSV files generated from saved searches. | [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [Saved objects](../../../explore-analyze/find-and-organize.md) | Copy, edit, delete, import, and export your saved objects. These include dashboards, visualizations, maps, {{data-sources}}, and more. | [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [Spaces](../../../deploy-manage/manage-spaces.md) | Organize your project and objects into multiple spaces.
| [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [{{tags-app}}](../../../explore-analyze/find-and-organize/tags.md) | Create, manage, and assign tags to your saved objects. | [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| [Visualize Library](../../../explore-analyze/visualize/visualize-library.md) | Create and manage visualization panels that you can use across multiple dashboards. | [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | diff --git a/raw-migrated-files/elasticsearch-hadoop/elasticsearch-hadoop/doc-sections.md b/raw-migrated-files/elasticsearch-hadoop/elasticsearch-hadoop/doc-sections.md deleted file mode 100644 index 63ebd933c..000000000 --- a/raw-migrated-files/elasticsearch-hadoop/elasticsearch-hadoop/doc-sections.md +++ /dev/null @@ -1,40 +0,0 @@ -# Documentation sections [doc-sections] - -The documentation is broken down into two parts: - -## Setup & Requirements [_setup_requirements] - -This [section](https://www.elastic.co/guide/en/elasticsearch-hadoop/current/features.html) provides an overview of the project, its requirements (including supported environments and libraries), plus information on how to easily install elasticsearch-hadoop in your environment.
- - -## Reference Documentation [_reference_documentation] - -This part of the documentation explains the core functionality of elasticsearch-hadoop, starting with the configuration options and architecture, and gradually explaining the various major features. At a higher level, the reference is broken down into the architecture and configuration sections, which are general; Map/Reduce and the libraries built on top of it; upcoming computation libraries (like Apache Spark); and finally mapping, metrics, and troubleshooting. - -We recommend going through the entire documentation, even superficially, when trying out elasticsearch-hadoop for the first time; however, those in a rush can jump directly to the desired sections: - -[*Architecture*](https://www.elastic.co/guide/en/elasticsearch-hadoop/current/arch.html) -: overview of the elasticsearch-hadoop architecture and how it maps on top of Hadoop - -[*Configuration*](https://www.elastic.co/guide/en/elasticsearch-hadoop/current/configuration.html) -: explore the various configuration switches in elasticsearch-hadoop - -[*Map/Reduce integration*](https://www.elastic.co/guide/en/elasticsearch-hadoop/current/mapreduce.html) -: describes how to use elasticsearch-hadoop in vanilla Map/Reduce environments - typically useful for those interested in data loading and saving to/from {{es}} with little, if any, ETL (extract-transform-load). - -[*Apache Hive integration*](https://www.elastic.co/guide/en/elasticsearch-hadoop/current/hive.html) -: Hive users should refer to this section. - -[*Apache Spark support*](https://www.elastic.co/guide/en/elasticsearch-hadoop/current/spark.html) -: describes how to use Apache Spark with {{es}} through elasticsearch-hadoop. - -[*Mapping and Types*](https://www.elastic.co/guide/en/elasticsearch-hadoop/current/mapping.html) -: deep-dive into the strategies employed by elasticsearch-hadoop for doing type conversion and mapping to and from {{es}}.
- -[*Hadoop Metrics*](https://www.elastic.co/guide/en/elasticsearch-hadoop/current/metrics.html) -: Elasticsearch Hadoop metrics - -[*Troubleshooting*](https://www.elastic.co/guide/en/elasticsearch-hadoop/current/troubleshooting.html) -: tips on troubleshooting and getting help - - diff --git a/raw-migrated-files/elasticsearch-hadoop/elasticsearch-hadoop/index.md b/raw-migrated-files/elasticsearch-hadoop/elasticsearch-hadoop/index.md deleted file mode 100644 index ba2c35675..000000000 --- a/raw-migrated-files/elasticsearch-hadoop/elasticsearch-hadoop/index.md +++ /dev/null @@ -1,3 +0,0 @@ -# Elasticsearch Hadoop - -Migrated files from the Elasticsearch Hadoop book. \ No newline at end of file diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/documents-indices.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/documents-indices.md deleted file mode 100644 index 42fe29605..000000000 --- a/raw-migrated-files/elasticsearch/elasticsearch-reference/documents-indices.md +++ /dev/null @@ -1,71 +0,0 @@ ---- -navigation_title: "Indices and documents" ---- - -# Indices, documents, and fields [documents-indices] - - -The index is the fundamental unit of storage in {{es}}, a logical namespace for storing data that share similar characteristics. After you have {{es}} [deployed](../../../get-started/deployment-options.md), you’ll get started by creating an index to store your data. - -An index is a collection of documents uniquely identified by a name or an [alias](../../../manage-data/data-store/aliases.md). This unique name is important because it’s used to target the index in search queries and other operations. - -::::{tip} -A closely related concept is a [data stream](../../../manage-data/data-store/data-streams.md). This index abstraction is optimized for append-only timestamped data, and is made up of hidden, auto-generated backing indices. 
If you’re working with timestamped data, we recommend the [Elastic Observability](https://www.elastic.co/guide/en/observability/current) solution for additional tools and optimized content. - -:::: - - - -## Documents and fields [elasticsearch-intro-documents-fields] - -{{es}} serializes and stores data in the form of JSON documents. A document is a set of fields, which are key-value pairs that contain your data. Each document has a unique ID, which you can create or have {{es}} auto-generate. - -A simple {{es}} document might look like this: - -```js -{ - "_index": "my-first-elasticsearch-index", - "_id": "DyFpo5EBxE8fzbb95DOa", - "_version": 1, - "_seq_no": 0, - "_primary_term": 1, - "found": true, - "_source": { - "email": "john@smith.com", - "first_name": "John", - "last_name": "Smith", - "info": { - "bio": "Eco-warrior and defender of the weak", - "age": 25, - "interests": [ - "dolphins", - "whales" - ] - }, - "join_date": "2024/05/01" - } -} -``` - - -## Metadata fields [elasticsearch-intro-documents-fields-data-metadata] - -An indexed document contains data and metadata. [Metadata fields](elasticsearch://reference/elasticsearch/mapping-reference/document-metadata-fields.md) are system fields that store information about the documents. In {{es}}, metadata fields are prefixed with an underscore. For example, the following fields are metadata fields: - -* `_index`: The name of the index where the document is stored. -* `_id`: The document’s ID. IDs must be unique per index. - - -## Mappings and data types [elasticsearch-intro-documents-fields-mappings] - -Each index has a [mapping](../../../manage-data/data-store/mapping.md) or schema for how the fields in your documents are indexed. A mapping defines the [data type](elasticsearch://reference/elasticsearch/mapping-reference/field-data-types.md) for each field, how the field should be indexed, and how it should be stored. 
When adding documents to {{es}}, you have two options for mappings: - -* [Dynamic mapping](../../../manage-data/data-store/mapping.md#mapping-dynamic): Let {{es}} automatically detect the data types and create the mappings for you. Dynamic mapping helps you get started quickly, but might yield suboptimal results for your specific use case due to automatic field type inference. -* [Explicit mapping](../../../manage-data/data-store/mapping.md#mapping-explicit): Define the mappings up front by specifying data types for each field. Recommended for production use cases, because you have full control over how your data is indexed to suit your specific use case. - -::::{tip} -You can use a combination of dynamic and explicit mapping on the same index. This is useful when you have a mix of known and unknown fields in your data. - -:::: - - diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/esql-using.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/esql-using.md deleted file mode 100644 index eeb08fc70..000000000 --- a/raw-migrated-files/elasticsearch/elasticsearch-reference/esql-using.md +++ /dev/null @@ -1,26 +0,0 @@ -# Using {{esql}} [esql-using] - -[REST API](../../../explore-analyze/query-filter/languages/esql-rest.md) -: Information about using the [{{esql}} query APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-esql). - -[Using {{esql}} in {{kib}}](../../../explore-analyze/query-filter/languages/esql-kibana.md) -: Using {{esql}} in {{kib}} to query and aggregate your data, create visualizations, and set up alerts. - -[Using {{esql}} in {{elastic-sec}}](../../../explore-analyze/query-filter/languages/esql-elastic-security.md) -: Using {{esql}} in {{elastic-sec}} to investigate events in Timeline, create detection rules, and build {{esql}} queries using Elastic AI Assistant. 
- -[Using {{esql}} to query multiple indices](../../../explore-analyze/query-filter/languages/esql-multi-index.md) -: Using {{esql}} to query multiple indices and resolve field type mismatches. - -[Using {{esql}} across clusters](../../../explore-analyze/query-filter/languages/esql-cross-clusters.md) -: Using {{esql}} to query across multiple clusters. - -[Task management](../../../explore-analyze/query-filter/languages/esql-task-management.md) -: Using the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-tasks) to list and cancel {{esql}} queries. - - - - - - - diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/index-modules-mapper.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/index-modules-mapper.md deleted file mode 100644 index 4b178b2b9..000000000 --- a/raw-migrated-files/elasticsearch/elasticsearch-reference/index-modules-mapper.md +++ /dev/null @@ -1,4 +0,0 @@ -# Mapper [index-modules-mapper] - -The mapper module acts as a registry for the type mapping definitions added to an index either when creating it or by using the update mapping API. It also handles the dynamic mapping support for types that have no explicit mappings predefined. For more information about mapping definitions, check out the [mapping section](../../../manage-data/data-store/mapping.md). - diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/search-with-synonyms.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/search-with-synonyms.md deleted file mode 100644 index b58055666..000000000 --- a/raw-migrated-files/elasticsearch/elasticsearch-reference/search-with-synonyms.md +++ /dev/null @@ -1,186 +0,0 @@ -# Search with synonyms [search-with-synonyms] - -Synonyms are words or phrases that have the same or similar meaning. They are an important aspect of search, as they can improve the search experience and increase the scope of search results.
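For example, a small synonyms set could be created through the [synonyms APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-synonyms) described later on this page (a sketch: the set name `my-synonyms-set` and its rules are placeholders):

```console
PUT _synonyms/my-synonyms-set
{
  "synonyms_set": [
    {
      "id": "laptop-rule",
      "synonyms": "laptop, notebook, portable computer"
    },
    {
      "id": "ipod-rule",
      "synonyms": "i-pod, i pod => ipod"
    }
  ]
}
```

Any analyzer that references this set through a synonym token filter then treats the terms in each rule as equivalent at analysis time.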
- -Synonyms allow you to: - -* **Improve search relevance** by finding relevant documents that use different terms to express the same concept. -* Make **domain-specific vocabulary** more user-friendly, allowing users to use search terms they are more familiar with. -* **Define common misspellings and typos** to transparently handle common mistakes. - -Synonyms are grouped together using **synonyms sets**. You can have as many synonyms sets as you need. - -In order to use synonyms sets in {{es}}, you need to: - -* [Store your synonyms set](../../../solutions/search/full-text/search-with-synonyms.md#synonyms-store-synonyms) -* [Configure synonyms token filters and analyzers](../../../solutions/search/full-text/search-with-synonyms.md#synonyms-synonym-token-filters) - - -## Store your synonyms set [synonyms-store-synonyms] - -Your synonyms sets need to be stored in {{es}} so your analyzers can refer to them. There are three ways to store your synonyms sets: - - -### Synonyms API [synonyms-store-synonyms-api] - -You can use the [synonyms APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-synonyms) to manage synonyms sets. This is the most flexible approach, as it allows you to dynamically define and modify synonyms sets. - -Changes in your synonyms sets will automatically reload the associated analyzers. - - -### Synonyms File [synonyms-store-synonyms-file] - -You can store your synonyms set in a file. - -A synonyms set file needs to be uploaded to all your cluster nodes and must be located in the configuration directory for your {{es}} distribution. If you’re using {{ecloud}}, you can upload synonyms files using [custom bundles](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). - -An example synonyms file: - -```markdown -# Blank lines and lines starting with pound are comments. - -# Explicit mappings match any token sequence on the left hand side of "=>" -# and replace with all alternatives on the right hand side.
-# These types of mappings ignore the expand parameter in the schema. -# Examples: -i-pod, i pod => ipod -sea biscuit, sea biscit => seabiscuit - -# Equivalent synonyms may be separated with commas and give -# no explicit mapping. In this case the mapping behavior will -# be taken from the expand parameter in the token filter configuration. -# This allows the same synonym file to be used in different synonym handling strategies. -# Examples: -ipod, i-pod, i pod -foozball , foosball -universe , cosmos -lol, laughing out loud - -# If expand==true in the synonym token filter configuration, -# "ipod, i-pod, i pod" is equivalent to the explicit mapping: -ipod, i-pod, i pod => ipod, i-pod, i pod -# If expand==false, "ipod, i-pod, i pod" is equivalent -# to the explicit mapping: -ipod, i-pod, i pod => ipod - -# Multiple synonym mapping entries are merged. -foo => foo bar -foo => baz -# is equivalent to -foo => foo bar, baz -``` - -To update an existing synonyms set, upload new files to your cluster. Synonyms set files must be kept in sync on every cluster node. - -When a synonyms set is updated, search analyzers that use it need to be refreshed using the [reload search analyzers API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-reload-search-analyzers). - -This manual syncing and reloading makes this approach less flexible than using the [synonyms API](../../../solutions/search/full-text/search-with-synonyms.md#synonyms-store-synonyms-api). - - -### Inline [synonyms-store-synonyms-inline] - -You can test your synonyms by adding them directly inline in your token filter definition. - -::::{warning} -Inline synonyms are not recommended for production usage. A large number of inline synonyms increases cluster size unnecessarily and can lead to performance issues.
- -:::: - - - -### Configure synonyms token filters and analyzers [synonyms-synonym-token-filters] - -Once your synonyms sets are created, you can start configuring your token filters and analyzers to use them. - -::::{warning} -Synonyms sets must exist before they can be added to indices. If an index is created referencing a nonexistent synonyms set, the index will remain in a partially created and inoperable state. The only way to recover from this scenario is to ensure the synonyms set exists, then either delete and re-create the index, or close and re-open the index. - -:::: - - -::::{warning} -Invalid synonym rules can cause errors when applying analyzer changes. For reloadable analyzers, this prevents reloading and applying changes. You must correct errors in the synonym rules and reload the analyzer. - -An index with invalid synonym rules cannot be reopened, making it inoperable when: - -* A node containing the index starts -* The index is opened from a closed state -* A node restart occurs (which reopens the shards assigned to the node) - -:::: - -{{es}} uses synonyms as part of the [analysis process](../../../manage-data/data-store/text-analysis.md). You can use two types of [token filter](elasticsearch://reference/data-analysis/text-analysis/token-filter-reference.md) to include synonyms: - -* [Synonym graph](elasticsearch://reference/data-analysis/text-analysis/analysis-synonym-graph-tokenfilter.md): Recommended, as it can correctly handle multi-word synonyms ("hurriedly", "in a hurry"). -* [Synonym](elasticsearch://reference/data-analysis/text-analysis/analysis-synonym-tokenfilter.md): Not recommended if you need to use multi-word synonyms. - -Check each synonym token filter's documentation for configuration details and instructions on adding it to an analyzer. - - -### Test your analyzer [synonyms-test-analyzer] - -You can test an analyzer configuration without modifying your index settings.
Use the [analyze API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-analyze) to test your analyzer chain: - -```console -GET /_analyze -{ - "tokenizer": "standard", - "filter" : [ - "lowercase", - { - "type": "synonym_graph", - "synonyms": ["pc => personal computer", "computer, pc, laptop"] - } - ], - "text" : "Check how PC synonyms work" -} -``` - - -### Apply synonyms at index or search time [synonyms-apply-synonyms] - -Analyzers can be applied at [index time or search time](../../../manage-data/data-store/text-analysis/index-search-analysis.md). - -You need to decide when to apply your synonyms: - -* Index time: Synonyms are applied when the documents are indexed into {{es}}. This is a less flexible alternative, as changes to your synonyms require [reindexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex). -* Search time: Synonyms are applied when a search is executed. This is a more flexible approach, which doesn’t require reindexing. If token filters are configured with `"updateable": true`, search analyzers can be [reloaded](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-reload-search-analyzers) when you make changes to your synonyms. - -Synonyms sets created using the [synonyms API](../../../solutions/search/full-text/search-with-synonyms.md#synonyms-store-synonyms-api) can only be used at search time. - -You can specify the analyzer that contains your synonyms set as a [search time analyzer](../../../manage-data/data-store/text-analysis/specify-an-analyzer.md#specify-search-analyzer) or as an [index time analyzer](../../../manage-data/data-store/text-analysis/specify-an-analyzer.md#specify-index-time-analyzer). 
- -The following example adds `my_analyzer` as a search analyzer to the `title` field in an index mapping: - -```JSON -{ - "mappings": { - "properties": { - "title": { - "type": "text", - "search_analyzer": "my_analyzer" - } - } - }, - "settings": { - "analysis": { - "analyzer": { - "my_analyzer": { - "tokenizer": "whitespace", - "filter": [ - "synonyms_filter" - ] - } - }, - "filter": { - "synonyms_filter": { - "type": "synonym", - "synonyms_path": "analysis/synonym-set.txt", - "updateable": true - } - } - } - } -} -``` diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md deleted file mode 100644 index c6c33b574..000000000 --- a/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md +++ /dev/null @@ -1,1662 +0,0 @@ ---- -navigation_title: "Semantic search with the {{infer}} API" ---- - -# Tutorial: semantic search with the {{infer}} API [semantic-search-inference] - - -The instructions in this tutorial show you how to use the {{infer}} API workflow with various services to perform semantic search on your data. - -::::{important} -For the easiest way to perform semantic search in the {{stack}}, refer to the [`semantic_text`](../../../solutions/search/semantic-search/semantic-search-semantic-text.md) end-to-end tutorial.
-:::: - - -The following examples use the: - -* `embed-english-v3.0` model for [Cohere](https://docs.cohere.com/docs/cohere-embed) -* `all-mpnet-base-v2` model from [HuggingFace](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) -* `text-embedding-ada-002` second generation embedding model for OpenAI -* models available through [Azure AI Studio](https://ai.azure.com/explore/models?selectedTask=embeddings) or [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models) -* `text-embedding-004` model for [Google Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api) -* `mistral-embed` model for [Mistral](https://docs.mistral.ai/getting-started/models/) -* `amazon.titan-embed-text-v1` model for [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html) -* `ops-text-embedding-zh-001` model for [AlibabaCloud AI](https://help.aliyun.com/zh/open-search/search-platform/developer-reference/text-embedding-api-details) - -You can use any Cohere and OpenAI models, they are all supported by the {{infer}} API. For a list of recommended models available on HuggingFace, refer to [the supported model list](../../../explore-analyze/elastic-inference/inference-api/huggingface-inference-integration.md). - -Click the name of the service you want to use on any of the widgets below to review the corresponding instructions. - - -## Requirements [infer-service-requirements] - -:::::::{tab-set} - -::::::{tab-item} Cohere -A [Cohere account](https://cohere.com/) is required to use the {{infer}} API with the Cohere service. -:::::: - -::::::{tab-item} ELSER -ELSER is a model trained by Elastic. If you have an {{es}} deployment, there is no further requirement for using the {{infer}} API with the `elasticsearch` service. -:::::: - -::::::{tab-item} HuggingFace -A [HuggingFace account](https://huggingface.co/) is required to use the {{infer}} API with the HuggingFace service. 
-:::::: - -::::::{tab-item} OpenAI -An [OpenAI account](https://openai.com/) is required to use the {{infer}} API with the OpenAI service. -:::::: - -::::::{tab-item} Azure OpenAI -* An [Azure subscription](https://azure.microsoft.com/free/cognitive-services?azure-portal=true) -* Access granted to Azure OpenAI in the desired Azure subscription. You can apply for access to Azure OpenAI by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access). -* An embedding model deployed in [Azure OpenAI Studio](https://oai.azure.com/). -:::::: - -::::::{tab-item} Azure AI Studio -* An [Azure subscription](https://azure.microsoft.com/free/cognitive-services?azure-portal=true) -* Access to [Azure AI Studio](https://ai.azure.com/) -* A deployed [embeddings](https://ai.azure.com/explore/models?selectedTask=embeddings) or [chat completion](https://ai.azure.com/explore/models?selectedTask=chat-completion) model. -:::::: - -::::::{tab-item} Google Vertex AI -* A [Google Cloud account](https://console.cloud.google.com/) -* A project in Google Cloud -* The Vertex AI API enabled in your project -* A valid service account for the Google Vertex AI API -* The service account must have the Vertex AI User role and the `aiplatform.endpoints.predict` permission. 
-:::::: - -::::::{tab-item} Mistral -* A Mistral Account on [La Plateforme](https://console.mistral.ai/) -* An API key generated for your account -:::::: - -::::::{tab-item} Amazon Bedrock -* An AWS Account with [Amazon Bedrock](https://aws.amazon.com/bedrock/) access -* A pair of access and secret keys used to access Amazon Bedrock -:::::: - -::::::{tab-item} AlibabaCloud AI Search -* An AlibabaCloud Account with [AlibabaCloud](https://console.aliyun.com) access -* An API key generated for your account from the [API keys section](https://opensearch.console.aliyun.com/cn-shanghai/rag/api-key) -:::::: - -::::::: - -## Create an inference endpoint [infer-text-embedding-task] - -Create an {{infer}} endpoint by using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put): - -:::::::{tab-set} - -::::::{tab-item} Cohere -```console -PUT _inference/text_embedding/cohere_embeddings <1> -{ - "service": "cohere", - "service_settings": { - "api_key": "", <2> - "model_id": "embed-english-v3.0", <3> - "embedding_type": "byte" - } -} -``` - -1. The task type is `text_embedding` in the path and the `inference_id` which is the unique identifier of the {{infer}} endpoint is `cohere_embeddings`. -2. The API key of your Cohere account. You can find your API keys in your Cohere dashboard under the [API keys section](https://dashboard.cohere.com/api-keys). You need to provide your API key only once. The [Get {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-get) does not return your API key. -3. The name of the embedding model to use. You can find the list of Cohere embedding models [here](https://docs.cohere.com/reference/embed). - - -::::{note} -When using this model the recommended similarity measure to use in the `dense_vector` field mapping is `dot_product`. 
In the case of Cohere models, the embeddings are normalized to unit length, in which case the `dot_product` and the `cosine` measures are equivalent.
::::
::::::

::::::{tab-item} ELSER
```console
PUT _inference/sparse_embedding/elser_embeddings <1>
{
  "service": "elasticsearch",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  }
}
```

1. The task type is `sparse_embedding` in the path and the `inference_id` which is the unique identifier of the {{infer}} endpoint is `elser_embeddings`.

You don’t need to download and deploy the ELSER model upfront; the API request above downloads the model if it’s not downloaded yet and then deploys it.

::::{note}
You might see a 502 bad gateway error in the response when using the {{kib}} Console. This error usually just reflects a timeout, while the model downloads in the background. You can check the download progress in the {{ml-app}} UI. If using the Python client, you can set the `timeout` parameter to a higher value.
::::
::::::

::::::{tab-item} HuggingFace
First, you need to create a new {{infer}} endpoint on [the Hugging Face endpoint page](https://ui.endpoints.huggingface.co/) to get an endpoint URL. Select the model `all-mpnet-base-v2` on the new endpoint creation page, then select the `Sentence Embeddings` task under the Advanced configuration section. Create the endpoint. Copy the URL after the endpoint initialization has finished; you need the URL in the following {{infer}} API call.

```console
PUT _inference/text_embedding/hugging_face_embeddings <1>
{
  "service": "hugging_face",
  "service_settings": {
    "api_key": "", <2>
    "url": "" <3>
  }
}
```

1. The task type is `text_embedding` in the path and the `inference_id` which is the unique identifier of the {{infer}} endpoint is `hugging_face_embeddings`.
2. A valid HuggingFace access token. You can find it on the [settings page of your account](https://huggingface.co/settings/tokens).
3.
The {{infer}} endpoint URL you created on Hugging Face. -:::::: - -::::::{tab-item} OpenAI -```console -PUT _inference/text_embedding/openai_embeddings <1> -{ - "service": "openai", - "service_settings": { - "api_key": "", <2> - "model_id": "text-embedding-ada-002" <3> - } -} -``` - -1. The task type is `text_embedding` in the path and the `inference_id` which is the unique identifier of the {{infer}} endpoint is `openai_embeddings`. -2. The API key of your OpenAI account. You can find your OpenAI API keys in your OpenAI account under the [API keys section](https://platform.openai.com/api-keys). You need to provide your API key only once. The [Get {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-get) does not return your API key. -3. The name of the embedding model to use. You can find the list of OpenAI embedding models [here](https://platform.openai.com/docs/guides/embeddings/embedding-models). - - -::::{note} -When using this model the recommended similarity measure to use in the `dense_vector` field mapping is `dot_product`. In the case of OpenAI models, the embeddings are normalized to unit length in which case the `dot_product` and the `cosine` measures are equivalent. -:::: -:::::: - -::::::{tab-item} Azure OpenAI -```console -PUT _inference/text_embedding/azure_openai_embeddings <1> -{ - "service": "azureopenai", - "service_settings": { - "api_key": "", <2> - "resource_name": "", <3> - "deployment_id": "", <4> - "api_version": "2024-02-01" - } -} -``` - -1. The task type is `text_embedding` in the path and the `inference_id` which is the unique identifier of the {{infer}} endpoint is `azure_openai_embeddings`. -2. The API key for accessing your Azure OpenAI services. Alternately, you can provide an `entra_id` instead of an `api_key` here. The [Get {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-get) does not return this information. -3. 
The name of your Azure resource.
4. The ID of your deployed model.

::::{note}
It may take a few minutes for your model’s deployment to become available after it is created. If you try to create the model as above and receive a `404` error message, wait a few minutes and try again. Also, when using this model the recommended similarity measure to use in the `dense_vector` field mapping is `dot_product`. In the case of Azure OpenAI models, the embeddings are normalized to unit length, in which case the `dot_product` and the `cosine` measures are equivalent.
::::
::::::

::::::{tab-item} Azure AI Studio
```console
PUT _inference/text_embedding/azure_ai_studio_embeddings <1>
{
  "service": "azureaistudio",
  "service_settings": {
    "api_key": "", <2>
    "target": "", <3>
    "provider": "", <4>
    "endpoint_type": "" <5>
  }
}
```

1. The task type is `text_embedding` in the path and the `inference_id` which is the unique identifier of the {{infer}} endpoint is `azure_ai_studio_embeddings`.
2. The API key for accessing your Azure AI Studio deployed model. You can find this on your model deployment’s overview page.
3. The target URI for accessing your Azure AI Studio deployed model. You can find this on your model deployment’s overview page.
4. The model provider, such as `cohere` or `openai`.
5. The deployed endpoint type. This can be `token` (for "pay as you go" deployments) or `realtime` for real-time deployment endpoints.

::::{note}
It may take a few minutes for your model’s deployment to become available after it is created. If you try to create the model as above and receive a `404` error message, wait a few minutes and try again. Also, when using this model the recommended similarity measure to use in the `dense_vector` field mapping is `dot_product`.
-:::: -:::::: - -::::::{tab-item} Google Vertex AI -```console -PUT _inference/text_embedding/google_vertex_ai_embeddings <1> -{ - "service": "googlevertexai", - "service_settings": { - "service_account_json": "", <2> - "model_id": "text-embedding-004", <3> - "location": "", <4> - "project_id": "" <5> - } -} -``` - -1. The task type is `text_embedding` per the path. `google_vertex_ai_embeddings` is the unique identifier of the {{infer}} endpoint (its `inference_id`). -2. A valid service account in JSON format for the Google Vertex AI API. -3. For the list of the available models, refer to the [Text embeddings API](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api) page. -4. The name of the location to use for the {{infer}} task. Refer to [Generative AI on Vertex AI locations](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/locations) for available locations. -5. The name of the project to use for the {{infer}} task. -:::::: - -::::::{tab-item} Mistral -```console -PUT _inference/text_embedding/mistral_embeddings <1> -{ - "service": "mistral", - "service_settings": { - "api_key": "", <2> - "model": "" <3> - } -} -``` - -1. The task type is `text_embedding` in the path and the `inference_id` which is the unique identifier of the {{infer}} endpoint is `mistral_embeddings`. -2. The API key for accessing the Mistral API. You can find this in your Mistral account’s API Keys page. -3. The Mistral embeddings model name, for example `mistral-embed`. -:::::: - -::::::{tab-item} Amazon Bedrock -```console -PUT _inference/text_embedding/amazon_bedrock_embeddings <1> -{ - "service": "amazonbedrock", - "service_settings": { - "access_key": "", <2> - "secret_key": "", <3> - "region": "", <4> - "provider": "", <5> - "model": "" <6> - } -} -``` - -1. The task type is `text_embedding` in the path and the `inference_id` which is the unique identifier of the {{infer}} endpoint is `amazon_bedrock_embeddings`. -2. 
The access key can be found on your AWS IAM management page for the user account to access Amazon Bedrock.
3. The secret key should be the paired key for the specified access key.
4. Specify the region that your model is hosted in.
5. Specify the model provider.
6. The model ID or ARN of the model to use.
::::::

::::::{tab-item} AlibabaCloud AI Search
```console
PUT _inference/text_embedding/alibabacloud_ai_search_embeddings <1>
{
  "service": "alibabacloud-ai-search",
  "service_settings": {
    "api_key": "", <2>
    "service_id": "", <3>
    "host": "", <4>
    "workspace": "" <5>
  }
}
```

1. The task type is `text_embedding` in the path and the `inference_id` which is the unique identifier of the {{infer}} endpoint is `alibabacloud_ai_search_embeddings`.
2. The API key for accessing the AlibabaCloud AI Search API. You can find your API keys in your AlibabaCloud account under the [API keys section](https://opensearch.console.aliyun.com/cn-shanghai/rag/api-key). You need to provide your API key only once. The [Get {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-get) does not return your API key.
3. The AlibabaCloud AI Search embeddings model name, for example `ops-text-embedding-zh-001`.
4. The name of your AlibabaCloud AI Search host address.
5. The name of your AlibabaCloud AI Search workspace.
::::::

:::::::

## Create the index mapping [infer-service-mappings]

You must create the mapping of the destination index (the index that contains the embeddings that the model will create based on your input text).
To index the output of the used model, the destination index must have a field with the [`dense_vector`](elasticsearch://reference/elasticsearch/mapping-reference/dense-vector.md) field type for most models, or the [`sparse_vector`](elasticsearch://reference/elasticsearch/mapping-reference/sparse-vector.md) field type for sparse vector models such as ELSER (the `elasticsearch` service).

:::::::{tab-set}

::::::{tab-item} Cohere
```console
PUT cohere-embeddings
{
  "mappings": {
    "properties": {
      "content_embedding": { <1>
        "type": "dense_vector", <2>
        "dims": 1024, <3>
        "element_type": "byte"
      },
      "content": { <4>
        "type": "text" <5>
      }
    }
  }
}
```

1. The name of the field to contain the generated tokens. It must be referenced in the {{infer}} pipeline configuration in the next step.
2. The field to contain the tokens is a `dense_vector` field.
3. The output dimensions of the model. Find this value in the [Cohere documentation](https://docs.cohere.com/reference/embed) of the model you use.
4. The name of the field from which to create the dense vector representation. In this example, the name of the field is `content`. It must be referenced in the {{infer}} pipeline configuration in the next step.
5. The field type which is text in this example.
::::::

::::::{tab-item} ELSER
```console
PUT elser-embeddings
{
  "mappings": {
    "properties": {
      "content_embedding": { <1>
        "type": "sparse_vector" <2>
      },
      "content": { <3>
        "type": "text" <4>
      }
    }
  }
}
```

1. The name of the field to contain the generated tokens. It must be referenced in the {{infer}} pipeline configuration in the next step.
2. The field to contain the tokens is a `sparse_vector` field for ELSER.
3. The name of the field from which to create the sparse vector representation. In this example, the name of the field is `content`. It must be referenced in the {{infer}} pipeline configuration in the next step.
4.
The field type which is text in this example.
::::::

::::::{tab-item} HuggingFace
```console
PUT hugging-face-embeddings
{
  "mappings": {
    "properties": {
      "content_embedding": { <1>
        "type": "dense_vector", <2>
        "dims": 768, <3>
        "element_type": "float"
      },
      "content": { <4>
        "type": "text" <5>
      }
    }
  }
}
```

1. The name of the field to contain the generated tokens. It must be referenced in the {{infer}} pipeline configuration in the next step.
2. The field to contain the tokens is a `dense_vector` field.
3. The output dimensions of the model. Find this value in the [HuggingFace model documentation](https://huggingface.co/sentence-transformers/all-mpnet-base-v2).
4. The name of the field from which to create the dense vector representation. In this example, the name of the field is `content`. It must be referenced in the {{infer}} pipeline configuration in the next step.
5. The field type which is text in this example.
::::::

::::::{tab-item} OpenAI
```console
PUT openai-embeddings
{
  "mappings": {
    "properties": {
      "content_embedding": { <1>
        "type": "dense_vector", <2>
        "dims": 1536, <3>
        "element_type": "float",
        "similarity": "dot_product" <4>
      },
      "content": { <5>
        "type": "text" <6>
      }
    }
  }
}
```

1. The name of the field to contain the generated tokens. It must be referenced in the {{infer}} pipeline configuration in the next step.
2. The field to contain the tokens is a `dense_vector` field.
3. The output dimensions of the model. Find this value in the [OpenAI documentation](https://platform.openai.com/docs/guides/embeddings/embedding-models) of the model you use.
4. The faster `dot_product` function can be used to calculate similarity because OpenAI embeddings are normalized to unit length. You can check the [OpenAI docs](https://platform.openai.com/docs/guides/embeddings/which-distance-function-should-i-use) for guidance on which similarity function to use.
5.
The name of the field from which to create the dense vector representation. In this example, the name of the field is `content`. It must be referenced in the {{infer}} pipeline configuration in the next step. -6. The field type which is text in this example. -:::::: - -::::::{tab-item} Azure OpenAI -```console -PUT azure-openai-embeddings -{ - "mappings": { - "properties": { - "content_embedding": { <1> - "type": "dense_vector", <2> - "dims": 1536, <3> - "element_type": "float", - "similarity": "dot_product" <4> - }, - "content": { <5> - "type": "text" <6> - } - } - } -} -``` - -1. The name of the field to contain the generated tokens. It must be referenced in the {{infer}} pipeline configuration in the next step. -2. The field to contain the tokens is a `dense_vector` field. -3. The output dimensions of the model. Find this value in the [Azure OpenAI documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#embeddings-models) of the model you use. -4. For Azure OpenAI embeddings, the `dot_product` function should be used to calculate similarity as Azure OpenAI embeddings are normalised to unit length. See the [Azure OpenAI embeddings](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/understand-embeddings) documentation for more information on the model specifications. -5. The name of the field from which to create the dense vector representation. In this example, the name of the field is `content`. It must be referenced in the {{infer}} pipeline configuration in the next step. -6. The field type which is text in this example. -:::::: - -::::::{tab-item} Azure AI Studio -```console -PUT azure-ai-studio-embeddings -{ - "mappings": { - "properties": { - "content_embedding": { <1> - "type": "dense_vector", <2> - "dims": 1536, <3> - "element_type": "float", - "similarity": "dot_product" <4> - }, - "content": { <5> - "type": "text" <6> - } - } - } -} -``` - -1. The name of the field to contain the generated tokens. 
It must be referenced in the {{infer}} pipeline configuration in the next step. -2. The field to contain the tokens is a `dense_vector` field. -3. The output dimensions of the model. This value may be found on the model card in your Azure AI Studio deployment. -4. For Azure AI Studio embeddings, the `dot_product` function should be used to calculate similarity. -5. The name of the field from which to create the dense vector representation. In this example, the name of the field is `content`. It must be referenced in the {{infer}} pipeline configuration in the next step. -6. The field type which is text in this example. -:::::: - -::::::{tab-item} Google Vertex AI -```console -PUT google-vertex-ai-embeddings -{ - "mappings": { - "properties": { - "content_embedding": { <1> - "type": "dense_vector", <2> - "dims": 768, <3> - "element_type": "float", - "similarity": "dot_product" <4> - }, - "content": { <5> - "type": "text" <6> - } - } - } -} -``` - -1. The name of the field to contain the generated embeddings. It must be referenced in the {{infer}} pipeline configuration in the next step. -2. The field to contain the embeddings is a `dense_vector` field. -3. The output dimensions of the model. This value may be found on the [Google Vertex AI model reference](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api). The {{infer}} API attempts to calculate the output dimensions automatically if `dims` are not specified. -4. For Google Vertex AI embeddings, the `dot_product` function should be used to calculate similarity. -5. The name of the field from which to create the dense vector representation. In this example, the name of the field is `content`. It must be referenced in the {{infer}} pipeline configuration in the next step. -6. The field type which is `text` in this example. 
-:::::: - -::::::{tab-item} Mistral -```console -PUT mistral-embeddings -{ - "mappings": { - "properties": { - "content_embedding": { <1> - "type": "dense_vector", <2> - "dims": 1024, <3> - "element_type": "float", - "similarity": "dot_product" <4> - }, - "content": { <5> - "type": "text" <6> - } - } - } -} -``` - -1. The name of the field to contain the generated tokens. It must be referenced in the {{infer}} pipeline configuration in the next step. -2. The field to contain the tokens is a `dense_vector` field. -3. The output dimensions of the model. This value may be found on the [Mistral model reference](https://docs.mistral.ai/getting-started/models/). -4. For Mistral embeddings, the `dot_product` function should be used to calculate similarity. -5. The name of the field from which to create the dense vector representation. In this example, the name of the field is `content`. It must be referenced in the {{infer}} pipeline configuration in the next step. -6. The field type which is text in this example. -:::::: - -::::::{tab-item} Amazon Bedrock -```console -PUT amazon-bedrock-embeddings -{ - "mappings": { - "properties": { - "content_embedding": { <1> - "type": "dense_vector", <2> - "dims": 1024, <3> - "element_type": "float", - "similarity": "dot_product" <4> - }, - "content": { <5> - "type": "text" <6> - } - } - } -} -``` - -1. The name of the field to contain the generated tokens. It must be referenced in the {{infer}} pipeline configuration in the next step. -2. The field to contain the tokens is a `dense_vector` field. -3. The output dimensions of the model. This value may be different depending on the underlying model used. See the [Amazon Titan model](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-multiemb-models.html) or the [Cohere Embeddings model](https://docs.cohere.com/reference/embed) documentation. -4. 
For Amazon Bedrock embeddings, the `dot_product` function should be used to calculate similarity for Amazon Titan models, or `cosine` for Cohere models.
5. The name of the field from which to create the dense vector representation. In this example, the name of the field is `content`. It must be referenced in the {{infer}} pipeline configuration in the next step.
6. The field type which is text in this example.
::::::

::::::{tab-item} AlibabaCloud AI Search
```console
PUT alibabacloud-ai-search-embeddings
{
  "mappings": {
    "properties": {
      "content_embedding": { <1>
        "type": "dense_vector", <2>
        "dims": 1024, <3>
        "element_type": "float"
      },
      "content": { <4>
        "type": "text" <5>
      }
    }
  }
}
```

1. The name of the field to contain the generated tokens. It must be referenced in the {{infer}} pipeline configuration in the next step.
2. The field to contain the tokens is a `dense_vector` field.
3. The output dimensions of the model. This value may be different depending on the underlying model used. See the [AlibabaCloud AI Search embedding model](https://help.aliyun.com/zh/open-search/search-platform/developer-reference/text-embedding-api-details) documentation.
4. The name of the field from which to create the dense vector representation. In this example, the name of the field is `content`. It must be referenced in the {{infer}} pipeline configuration in the next step.
5. The field type which is text in this example.
::::::

:::::::

## Create an ingest pipeline with an inference processor [infer-service-inference-ingest-pipeline]

Create an [ingest pipeline](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md) with an [{{infer}} processor](elasticsearch://reference/ingestion-tools/enrich-processor/inference-processor.md) and use the model you created above to infer against the data that is being ingested in the pipeline.
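Once you have created a pipeline, you can sanity-check it with the simulate pipeline API before reindexing any data. A minimal sketch, assuming the `cohere_embeddings_pipeline` used in this tutorial and a short test document:

```console
POST _ingest/pipeline/cohere_embeddings_pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "content": "Muscles in human body"
      }
    }
  ]
}
```

The response should show the document as it would be indexed, including the generated `content_embedding` field, which lets you catch configuration or authentication errors before running a long reindex.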
- -:::::::{tab-set} - -::::::{tab-item} Cohere -```console -PUT _ingest/pipeline/cohere_embeddings_pipeline -{ - "processors": [ - { - "inference": { - "model_id": "cohere_embeddings", <1> - "input_output": { <2> - "input_field": "content", - "output_field": "content_embedding" - } - } - } - ] -} -``` - -1. The name of the inference endpoint you created by using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put), it’s referred to as `inference_id` in that step. -2. Configuration object that defines the `input_field` for the {{infer}} process and the `output_field` that will contain the {{infer}} results. -:::::: - -::::::{tab-item} ELSER -```console -PUT _ingest/pipeline/elser_embeddings_pipeline -{ - "processors": [ - { - "inference": { - "model_id": "elser_embeddings", <1> - "input_output": { <2> - "input_field": "content", - "output_field": "content_embedding" - } - } - } - ] -} -``` - -1. The name of the inference endpoint you created by using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put), it’s referred to as `inference_id` in that step. -2. Configuration object that defines the `input_field` for the {{infer}} process and the `output_field` that will contain the {{infer}} results. -:::::: - -::::::{tab-item} HuggingFace -```console -PUT _ingest/pipeline/hugging_face_embeddings_pipeline -{ - "processors": [ - { - "inference": { - "model_id": "hugging_face_embeddings", <1> - "input_output": { <2> - "input_field": "content", - "output_field": "content_embedding" - } - } - } - ] -} -``` - -1. The name of the inference endpoint you created by using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put), it’s referred to as `inference_id` in that step. -2. 
Configuration object that defines the `input_field` for the {{infer}} process and the `output_field` that will contain the {{infer}} results. -:::::: - -::::::{tab-item} OpenAI -```console -PUT _ingest/pipeline/openai_embeddings_pipeline -{ - "processors": [ - { - "inference": { - "model_id": "openai_embeddings", <1> - "input_output": { <2> - "input_field": "content", - "output_field": "content_embedding" - } - } - } - ] -} -``` - -1. The name of the inference endpoint you created by using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put), it’s referred to as `inference_id` in that step. -2. Configuration object that defines the `input_field` for the {{infer}} process and the `output_field` that will contain the {{infer}} results. -:::::: - -::::::{tab-item} Azure OpenAI -```console -PUT _ingest/pipeline/azure_openai_embeddings_pipeline -{ - "processors": [ - { - "inference": { - "model_id": "azure_openai_embeddings", <1> - "input_output": { <2> - "input_field": "content", - "output_field": "content_embedding" - } - } - } - ] -} -``` - -1. The name of the inference endpoint you created by using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put), it’s referred to as `inference_id` in that step. -2. Configuration object that defines the `input_field` for the {{infer}} process and the `output_field` that will contain the {{infer}} results. -:::::: - -::::::{tab-item} Azure AI Studio -```console -PUT _ingest/pipeline/azure_ai_studio_embeddings_pipeline -{ - "processors": [ - { - "inference": { - "model_id": "azure_ai_studio_embeddings", <1> - "input_output": { <2> - "input_field": "content", - "output_field": "content_embedding" - } - } - } - ] -} -``` - -1. 
The name of the inference endpoint you created by using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put), it’s referred to as `inference_id` in that step. -2. Configuration object that defines the `input_field` for the {{infer}} process and the `output_field` that will contain the {{infer}} results. -:::::: - -::::::{tab-item} Google Vertex AI -```console -PUT _ingest/pipeline/google_vertex_ai_embeddings_pipeline -{ - "processors": [ - { - "inference": { - "model_id": "google_vertex_ai_embeddings", <1> - "input_output": { <2> - "input_field": "content", - "output_field": "content_embedding" - } - } - } - ] -} -``` - -1. The name of the inference endpoint you created by using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put), it’s referred to as `inference_id` in that step. -2. Configuration object that defines the `input_field` for the {{infer}} process and the `output_field` that will contain the {{infer}} results. -:::::: - -::::::{tab-item} Mistral -```console -PUT _ingest/pipeline/mistral_embeddings_pipeline -{ - "processors": [ - { - "inference": { - "model_id": "mistral_embeddings", <1> - "input_output": { <2> - "input_field": "content", - "output_field": "content_embedding" - } - } - } - ] -} -``` - -1. The name of the inference endpoint you created by using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put), it’s referred to as `inference_id` in that step. -2. Configuration object that defines the `input_field` for the {{infer}} process and the `output_field` that will contain the {{infer}} results. 
-:::::: - -::::::{tab-item} Amazon Bedrock -```console -PUT _ingest/pipeline/amazon_bedrock_embeddings_pipeline -{ - "processors": [ - { - "inference": { - "model_id": "amazon_bedrock_embeddings", <1> - "input_output": { <2> - "input_field": "content", - "output_field": "content_embedding" - } - } - } - ] -} -``` - -1. The name of the inference endpoint you created by using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put), it’s referred to as `inference_id` in that step. -2. Configuration object that defines the `input_field` for the {{infer}} process and the `output_field` that will contain the {{infer}} results. -:::::: - -::::::{tab-item} AlibabaCloud AI Search -```console -PUT _ingest/pipeline/alibabacloud_ai_search_embeddings_pipeline -{ - "processors": [ - { - "inference": { - "model_id": "alibabacloud_ai_search_embeddings", <1> - "input_output": { <2> - "input_field": "content", - "output_field": "content_embedding" - } - } - } - ] -} -``` - -1. The name of the inference endpoint you created by using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put), it’s referred to as `inference_id` in that step. -2. Configuration object that defines the `input_field` for the {{infer}} process and the `output_field` that will contain the {{infer}} results. -:::::: - -::::::: - -## Load data [infer-load-data] - -In this step, you load the data that you later use in the {{infer}} ingest pipeline to create embeddings from it. - -Use the `msmarco-passagetest2019-top1000` data set, which is a subset of the MS MARCO Passage Ranking data set. It consists of 200 queries, each accompanied by a list of relevant text passages. All unique passages, along with their IDs, have been extracted from that data set and compiled into a [tsv file](https://github.com/elastic/stack-docs/blob/main/docs/en/stack/ml/nlp/data/msmarco-passagetest2019-unique.tsv). 
-
-Download the file and upload it to your cluster using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names**, assign `id` to the first column and `content` to the second. Click **Apply**, then **Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
-
-
-## Ingest the data through the {{infer}} ingest pipeline [reindexing-data-infer]
-
-Create embeddings from the text by reindexing the data through the {{infer}} pipeline that uses your chosen model. This step uses the [reindex API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) to simulate data ingestion through a pipeline.
-
-:::::::{tab-set}
-
-::::::{tab-item} Cohere
-```console
-POST _reindex?wait_for_completion=false
-{
-  "source": {
-    "index": "test-data",
-    "size": 50 <1>
-  },
-  "dest": {
-    "index": "cohere-embeddings",
-    "pipeline": "cohere_embeddings_pipeline"
-  }
-}
-```
-
-1. The default batch size for reindexing is 1000. Reducing `size` to a smaller number makes the update of the reindexing process quicker which enables you to follow the progress closely and detect errors early.
-
-
-::::{note}
-The [rate limit of your Cohere account](https://dashboard.cohere.com/billing) may affect the throughput of the reindexing process.
-::::
-::::::
-
-::::::{tab-item} ELSER
-```console
-POST _reindex?wait_for_completion=false
-{
-  "source": {
-    "index": "test-data",
-    "size": 50 <1>
-  },
-  "dest": {
-    "index": "elser-embeddings",
-    "pipeline": "elser_embeddings_pipeline"
-  }
-}
-```
-
-1. The default batch size for reindexing is 1000. Reducing `size` to a smaller number makes the update of the reindexing process quicker which enables you to follow the progress closely and detect errors early.
-:::::: - -::::::{tab-item} HuggingFace -```console -POST _reindex?wait_for_completion=false -{ - "source": { - "index": "test-data", - "size": 50 <1> - }, - "dest": { - "index": "hugging-face-embeddings", - "pipeline": "hugging_face_embeddings_pipeline" - } -} -``` - -1. The default batch size for reindexing is 1000. Reducing `size` to a smaller number makes the update of the reindexing process quicker which enables you to follow the progress closely and detect errors early. -:::::: - -::::::{tab-item} OpenAI -```console -POST _reindex?wait_for_completion=false -{ - "source": { - "index": "test-data", - "size": 50 <1> - }, - "dest": { - "index": "openai-embeddings", - "pipeline": "openai_embeddings_pipeline" - } -} -``` - -1. The default batch size for reindexing is 1000. Reducing `size` to a smaller number makes the update of the reindexing process quicker which enables you to follow the progress closely and detect errors early. - - -::::{note} -The [rate limit of your OpenAI account](https://platform.openai.com/account/limits) may affect the throughput of the reindexing process. If this happens, change `size` to `3` or a similar value in magnitude. -:::: -:::::: - -::::::{tab-item} Azure OpenAI -```console -POST _reindex?wait_for_completion=false -{ - "source": { - "index": "test-data", - "size": 50 <1> - }, - "dest": { - "index": "azure-openai-embeddings", - "pipeline": "azure_openai_embeddings_pipeline" - } -} -``` - -1. The default batch size for reindexing is 1000. Reducing `size` to a smaller number makes the update of the reindexing process quicker which enables you to follow the progress closely and detect errors early. - - -::::{note} -The [rate limit of your Azure OpenAI account](https://learn.microsoft.com/en-us/azure/ai-services/openai/quotas-limits#quotas-and-limits-reference) may affect the throughput of the reindexing process. If this happens, change `size` to `3` or a similar value in magnitude. 
-:::: -:::::: - -::::::{tab-item} Azure AI Studio -```console -POST _reindex?wait_for_completion=false -{ - "source": { - "index": "test-data", - "size": 50 <1> - }, - "dest": { - "index": "azure-ai-studio-embeddings", - "pipeline": "azure_ai_studio_embeddings_pipeline" - } -} -``` - -1. The default batch size for reindexing is 1000. Reducing `size` to a smaller number makes the update of the reindexing process quicker which enables you to follow the progress closely and detect errors early. - - -::::{note} -Your Azure AI Studio model deployment may have rate limits in place that might affect the throughput of the reindexing process. If this happens, change `size` to `3` or a similar value in magnitude. -:::: -:::::: - -::::::{tab-item} Google Vertex AI -```console -POST _reindex?wait_for_completion=false -{ - "source": { - "index": "test-data", - "size": 50 <1> - }, - "dest": { - "index": "google-vertex-ai-embeddings", - "pipeline": "google_vertex_ai_embeddings_pipeline" - } -} -``` - -1. The default batch size for reindexing is 1000. Reducing `size` will make updates to the reindexing process faster. This enables you to follow the progress closely and detect errors early. -:::::: - -::::::{tab-item} Mistral -```console -POST _reindex?wait_for_completion=false -{ - "source": { - "index": "test-data", - "size": 50 <1> - }, - "dest": { - "index": "mistral-embeddings", - "pipeline": "mistral_embeddings_pipeline" - } -} -``` - -1. The default batch size for reindexing is 1000. Reducing `size` to a smaller number makes the update of the reindexing process quicker which enables you to follow the progress closely and detect errors early. -:::::: - -::::::{tab-item} Amazon Bedrock -```console -POST _reindex?wait_for_completion=false -{ - "source": { - "index": "test-data", - "size": 50 <1> - }, - "dest": { - "index": "amazon-bedrock-embeddings", - "pipeline": "amazon_bedrock_embeddings_pipeline" - } -} -``` - -1. The default batch size for reindexing is 1000. 
Reducing `size` to a smaller number makes the update of the reindexing process quicker which enables you to follow the progress closely and detect errors early.
-::::::
-
-::::::{tab-item} AlibabaCloud AI Search
-```console
-POST _reindex?wait_for_completion=false
-{
-  "source": {
-    "index": "test-data",
-    "size": 50 <1>
-  },
-  "dest": {
-    "index": "alibabacloud-ai-search-embeddings",
-    "pipeline": "alibabacloud_ai_search_embeddings_pipeline"
-  }
-}
-```
-
-1. The default batch size for reindexing is 1000. Reducing `size` to a smaller number makes the update of the reindexing process quicker which enables you to follow the progress closely and detect errors early.
-::::::
-
-:::::::
-The call returns a task ID to monitor the progress:
-
-```console
-GET _tasks/<task_id>
-```
-
-Reindexing large datasets can take a long time. You can test this workflow using only a subset of the dataset. Do this by cancelling the reindexing process, and only generating embeddings for the subset that was reindexed. The following API request will cancel the reindexing task:
-
-```console
-POST _tasks/<task_id>/_cancel
-```
-
-
-## Semantic search [infer-semantic-search]
-
-After the data set has been enriched with the embeddings, you can query the data using [semantic search](../../../solutions/search/vector/knn.md#knn-semantic-search). In case of dense vector models, pass a `query_vector_builder` to the k-nearest neighbor (kNN) vector search API, and provide the query text and the model you have used to create the embeddings. In case of a sparse vector model like ELSER, use a `sparse_vector` query, and provide the query text with the model you have used to create the embeddings.
-
-::::{note}
-If you cancelled the reindexing process, you run the query against only a part of the data, which affects the quality of your results.
-:::: - - -:::::::{tab-set} - -::::::{tab-item} Cohere -```console -GET cohere-embeddings/_search -{ - "knn": { - "field": "content_embedding", - "query_vector_builder": { - "text_embedding": { - "model_id": "cohere_embeddings", - "model_text": "Muscles in human body" - } - }, - "k": 10, - "num_candidates": 100 - }, - "_source": [ - "id", - "content" - ] -} -``` - -As a result, you receive the top 10 documents that are closest in meaning to the query from the `cohere-embeddings` index sorted by their proximity to the query: - -```console-result -"hits": [ - { - "_index": "cohere-embeddings", - "_id": "-eFWCY4BECzWLnMZuI78", - "_score": 0.737484, - "_source": { - "id": 1690948, - "content": "Oxygen is supplied to the muscles via red blood cells. Red blood cells carry hemoglobin which oxygen bonds with as the hemoglobin rich blood cells pass through the blood vessels of the lungs.The now oxygen rich blood cells carry that oxygen to the cells that are demanding it, in this case skeletal muscle cells.ther ways in which muscles are supplied with oxygen include: 1 Blood flow from the heart is increased. 2 Blood flow to your muscles in increased. 3 Blood flow from nonessential organs is transported to working muscles." - } - }, - { - "_index": "cohere-embeddings", - "_id": "HuFWCY4BECzWLnMZuI_8", - "_score": 0.7176013, - "_source": { - "id": 1692482, - "content": "The thoracic cavity is separated from the abdominal cavity by the diaphragm. This is a broad flat muscle. (muscular) diaphragm The diaphragm is a muscle that separat…e the thoracic from the abdominal cavity. The pelvis is the lowest part of the abdominal cavity and it has no physical separation from it Diaphragm." 
-    }
-  },
-  {
-    "_index": "cohere-embeddings",
-    "_id": "IOFWCY4BECzWLnMZuI_8",
-    "_score": 0.7154432,
-    "_source": {
-      "id": 1692489,
-      "content": "Muscular Wall Separating the Abdominal and Thoracic Cavities; Thoracic Cavity of a Fetal Pig; In Mammals the Diaphragm Separates the Abdominal Cavity from the"
-    }
-  },
-  {
-    "_index": "cohere-embeddings",
-    "_id": "C-FWCY4BECzWLnMZuI_8",
-    "_score": 0.695313,
-    "_source": {
-      "id": 1691493,
-      "content": "Burning, aching, tenderness and stiffness are just some descriptors of the discomfort you may feel in the muscles you exercised one to two days ago.For the most part, these sensations you experience after exercise are collectively known as delayed onset muscle soreness.urning, aching, tenderness and stiffness are just some descriptors of the discomfort you may feel in the muscles you exercised one to two days ago."
-    }
-  },
-  (...)
-  ]
-```
-::::::
-
-::::::{tab-item} ELSER
-```console
-GET elser-embeddings/_search
-{
-  "query":{
-    "sparse_vector":{
-      "field": "content_embedding",
-      "inference_id": "elser_embeddings",
-      "query": "How to avoid muscle soreness after running?"
-    }
-  },
-  "_source": [
-    "id",
-    "content"
-  ]
-}
-```
-
-As a result, you receive the top 10 documents that are closest in meaning to the query from the `elser-embeddings` index sorted by their proximity to the query:
-
-```console-result
-"hits": [
-  {
-    "_index": "elser-embeddings",
-    "_id": "ZLGc_pABZbBmsu5_eCoH",
-    "_score": 21.472063,
-    "_source": {
-      "id": 2258240,
-      "content": "You may notice some muscle aches while you are exercising. This is called acute soreness. More often, you may begin to feel sore about 12 hours after exercising, and the discomfort usually peaks at 48 to 72 hours after exercise. This is called delayed-onset muscle soreness.It is thought that, during this time, your body is repairing the muscle, making it stronger and bigger.You may also notice the muscles feel better if you exercise lightly.
This is normal.his is called delayed-onset muscle soreness. It is thought that, during this time, your body is repairing the muscle, making it stronger and bigger. You may also notice the muscles feel better if you exercise lightly. This is normal." - } - }, - { - "_index": "elser-embeddings", - "_id": "ZbGc_pABZbBmsu5_eCoH", - "_score": 21.421381, - "_source": { - "id": 2258242, - "content": "Photo Credit Jupiterimages/Stockbyte/Getty Images. That stiff, achy feeling you get in the days after exercise is a normal physiological response known as delayed onset muscle soreness. You can take it as a positive sign that your muscles have felt the workout, but the pain may also turn you off to further exercise.ou are more likely to develop delayed onset muscle soreness if you are new to working out, if you’ve gone a long time without exercising and start up again, if you have picked up a new type of physical activity or if you have recently boosted the intensity, length or frequency of your exercise sessions." - } - }, - { - "_index": "elser-embeddings", - "_id": "ZrGc_pABZbBmsu5_eCoH", - "_score": 20.542095, - "_source": { - "id": 2258248, - "content": "They found that stretching before and after exercise has no effect on muscle soreness. Exercise might cause inflammation, which leads to an increase in the production of immune cells (comprised mostly of macrophages and neutrophils). Levels of these immune cells reach a peak 24-48 hours after exercise.These cells, in turn, produce bradykinins and prostaglandins, which make the pain receptors in your body more sensitive. Whenever you move, these pain receptors are stimulated.hey found that stretching before and after exercise has no effect on muscle soreness. Exercise might cause inflammation, which leads to an increase in the production of immune cells (comprised mostly of macrophages and neutrophils). Levels of these immune cells reach a peak 24-48 hours after exercise." - } - }, - (...) 
-  ]
-```
-::::::
-
-::::::{tab-item} HuggingFace
-```console
-GET hugging-face-embeddings/_search
-{
-  "knn": {
-    "field": "content_embedding",
-    "query_vector_builder": {
-      "text_embedding": {
-        "model_id": "hugging_face_embeddings",
-        "model_text": "What's margin of error?"
-      }
-    },
-    "k": 10,
-    "num_candidates": 100
-  },
-  "_source": [
-    "id",
-    "content"
-  ]
-}
-```
-
-As a result, you receive the top 10 documents that are closest in meaning to the query from the `hugging-face-embeddings` index sorted by their proximity to the query:
-
-```console-result
-"hits": [
-  {
-    "_index": "hugging-face-embeddings",
-    "_id": "ljEfo44BiUQvMpPgT20E",
-    "_score": 0.8522128,
-    "_source": {
-      "id": 7960255,
-      "content": "The margin of error can be defined by either of the following equations. Margin of error = Critical value x Standard deviation of the statistic. Margin of error = Critical value x Standard error of the statistic. If you know the standard deviation of the statistic, use the first equation to compute the margin of error. Otherwise, use the second equation. Previously, we described how to compute the standard deviation and standard error."
-    }
-  },
-  {
-    "_index": "hugging-face-embeddings",
-    "_id": "lzEfo44BiUQvMpPgT20E",
-    "_score": 0.7865497,
-    "_source": {
-      "id": 7960259,
-      "content": "1 y ou are told only the size of the sample and are asked to provide the margin of error for percentages which are not (yet) known. 2 This is typically the case when you are computing the margin of error for a survey which is going to be conducted in the future."
-    }
-  },
-  {
-    "_index": "hugging-face-embeddings",
-    "_id": "DjEfo44BiUQvMpPgT20E",
-    "_score": 0.6229427,
-    "_source": {
-      "id": 2166183,
-      "content": "1. In general, the point at which gains equal losses. 2. In options, the market price that a stock must reach for option buyers to avoid a loss if they exercise. For a call, it is the strike price plus the premium paid. For a put, it is the strike price minus the premium paid."
-    }
-  },
-  {
-    "_index": "hugging-face-embeddings",
-    "_id": "VzEfo44BiUQvMpPgT20E",
-    "_score": 0.6034223,
-    "_source": {
-      "id": 2173417,
-      "content": "How do you find the area of a circle? Can you measure the area of a circle and use that to find a value for Pi?"
-    }
-  },
-  (...)
-  ]
-```
-::::::
-
-::::::{tab-item} OpenAI
-```console
-GET openai-embeddings/_search
-{
-  "knn": {
-    "field": "content_embedding",
-    "query_vector_builder": {
-      "text_embedding": {
-        "model_id": "openai_embeddings",
-        "model_text": "Calculate fuel cost"
-      }
-    },
-    "k": 10,
-    "num_candidates": 100
-  },
-  "_source": [
-    "id",
-    "content"
-  ]
-}
-```
-
-As a result, you receive the top 10 documents that are closest in meaning to the query from the `openai-embeddings` index sorted by their proximity to the query:
-
-```console-result
-"hits": [
-  {
-    "_index": "openai-embeddings",
-    "_id": "DDd5OowBHxQKHyc3TDSC",
-    "_score": 0.83704096,
-    "_source": {
-      "id": 862114,
-      "body": "How to calculate fuel cost for a road trip. By Tara Baukus Mello • Bankrate.com. Dear Driving for Dollars, My family is considering taking a long road trip to finish off the end of the summer, but I'm a little worried about gas prices and our overall fuel cost.It doesn't seem easy to calculate since we'll be traveling through many states and we are considering several routes.y family is considering taking a long road trip to finish off the end of the summer, but I'm a little worried about gas prices and our overall fuel cost. It doesn't seem easy to calculate since we'll be traveling through many states and we are considering several routes."
-    }
-  },
-  {
-    "_index": "openai-embeddings",
-    "_id": "ajd5OowBHxQKHyc3TDSC",
-    "_score": 0.8345704,
-    "_source": {
-      "id": 820622,
-      "body": "Home Heating Calculator. Typically, approximately 50% of the energy consumed in a home annually is for space heating.
When deciding on a heating system, many factors will come into play: cost of fuel, installation cost, convenience and life style are all important.This calculator can help you estimate the cost of fuel for different heating appliances.hen deciding on a heating system, many factors will come into play: cost of fuel, installation cost, convenience and life style are all important. This calculator can help you estimate the cost of fuel for different heating appliances." - } - }, - { - "_index": "openai-embeddings", - "_id": "Djd5OowBHxQKHyc3TDSC", - "_score": 0.8327426, - "_source": { - "id": 8202683, - "body": "Fuel is another important cost. This cost will depend on your boat, how far you travel, and how fast you travel. A 33-foot sailboat traveling at 7 knots should be able to travel 300 miles on 50 gallons of diesel fuel.If you are paying $4 per gallon, the trip would cost you $200.Most boats have much larger gas tanks than cars.uel is another important cost. This cost will depend on your boat, how far you travel, and how fast you travel. A 33-foot sailboat traveling at 7 knots should be able to travel 300 miles on 50 gallons of diesel fuel." - } - }, - (...) - ] -``` -:::::: - -::::::{tab-item} Azure OpenAI -```console -GET azure-openai-embeddings/_search -{ - "knn": { - "field": "content_embedding", - "query_vector_builder": { - "text_embedding": { - "model_id": "azure_openai_embeddings", - "model_text": "Calculate fuel cost" - } - }, - "k": 10, - "num_candidates": 100 - }, - "_source": [ - "id", - "content" - ] -} -``` - -As a result, you receive the top 10 documents that are closest in meaning to the query from the `azure-openai-embeddings` index sorted by their proximity to the query: - -```console-result -"hits": [ - { - "_index": "azure-openai-embeddings", - "_id": "DDd5OowBHxQKHyc3TDSC", - "_score": 0.83704096, - "_source": { - "id": 862114, - "body": "How to calculate fuel cost for a road trip. By Tara Baukus Mello • Bankrate.com. 
Dear Driving for Dollars, My family is considering taking a long road trip to finish off the end of the summer, but I'm a little worried about gas prices and our overall fuel cost.It doesn't seem easy to calculate since we'll be traveling through many states and we are considering several routes.y family is considering taking a long road trip to finish off the end of the summer, but I'm a little worried about gas prices and our overall fuel cost. It doesn't seem easy to calculate since we'll be traveling through many states and we are considering several routes." - } - }, - { - "_index": "azure-openai-embeddings", - "_id": "ajd5OowBHxQKHyc3TDSC", - "_score": 0.8345704, - "_source": { - "id": 820622, - "body": "Home Heating Calculator. Typically, approximately 50% of the energy consumed in a home annually is for space heating. When deciding on a heating system, many factors will come into play: cost of fuel, installation cost, convenience and life style are all important.This calculator can help you estimate the cost of fuel for different heating appliances.hen deciding on a heating system, many factors will come into play: cost of fuel, installation cost, convenience and life style are all important. This calculator can help you estimate the cost of fuel for different heating appliances." - } - }, - { - "_index": "azure-openai-embeddings", - "_id": "Djd5OowBHxQKHyc3TDSC", - "_score": 0.8327426, - "_source": { - "id": 8202683, - "body": "Fuel is another important cost. This cost will depend on your boat, how far you travel, and how fast you travel. A 33-foot sailboat traveling at 7 knots should be able to travel 300 miles on 50 gallons of diesel fuel.If you are paying $4 per gallon, the trip would cost you $200.Most boats have much larger gas tanks than cars.uel is another important cost. This cost will depend on your boat, how far you travel, and how fast you travel. 
A 33-foot sailboat traveling at 7 knots should be able to travel 300 miles on 50 gallons of diesel fuel." - } - }, - (...) - ] -``` -:::::: - -::::::{tab-item} Azure AI Studio -```console -GET azure-ai-studio-embeddings/_search -{ - "knn": { - "field": "content_embedding", - "query_vector_builder": { - "text_embedding": { - "model_id": "azure_ai_studio_embeddings", - "model_text": "Calculate fuel cost" - } - }, - "k": 10, - "num_candidates": 100 - }, - "_source": [ - "id", - "content" - ] -} -``` - -As a result, you receive the top 10 documents that are closest in meaning to the query from the `azure-ai-studio-embeddings` index sorted by their proximity to the query: - -```console-result -"hits": [ - { - "_index": "azure-ai-studio-embeddings", - "_id": "DDd5OowBHxQKHyc3TDSC", - "_score": 0.83704096, - "_source": { - "id": 862114, - "body": "How to calculate fuel cost for a road trip. By Tara Baukus Mello • Bankrate.com. Dear Driving for Dollars, My family is considering taking a long road trip to finish off the end of the summer, but I'm a little worried about gas prices and our overall fuel cost.It doesn't seem easy to calculate since we'll be traveling through many states and we are considering several routes.y family is considering taking a long road trip to finish off the end of the summer, but I'm a little worried about gas prices and our overall fuel cost. It doesn't seem easy to calculate since we'll be traveling through many states and we are considering several routes." - } - }, - { - "_index": "azure-ai-studio-embeddings", - "_id": "ajd5OowBHxQKHyc3TDSC", - "_score": 0.8345704, - "_source": { - "id": 820622, - "body": "Home Heating Calculator. Typically, approximately 50% of the energy consumed in a home annually is for space heating. 
When deciding on a heating system, many factors will come into play: cost of fuel, installation cost, convenience and life style are all important.This calculator can help you estimate the cost of fuel for different heating appliances.hen deciding on a heating system, many factors will come into play: cost of fuel, installation cost, convenience and life style are all important. This calculator can help you estimate the cost of fuel for different heating appliances." - } - }, - { - "_index": "azure-ai-studio-embeddings", - "_id": "Djd5OowBHxQKHyc3TDSC", - "_score": 0.8327426, - "_source": { - "id": 8202683, - "body": "Fuel is another important cost. This cost will depend on your boat, how far you travel, and how fast you travel. A 33-foot sailboat traveling at 7 knots should be able to travel 300 miles on 50 gallons of diesel fuel.If you are paying $4 per gallon, the trip would cost you $200.Most boats have much larger gas tanks than cars.uel is another important cost. This cost will depend on your boat, how far you travel, and how fast you travel. A 33-foot sailboat traveling at 7 knots should be able to travel 300 miles on 50 gallons of diesel fuel." - } - }, - (...) 
-  ]
-```
-::::::
-
-::::::{tab-item} Google Vertex AI
-```console
-GET google-vertex-ai-embeddings/_search
-{
-  "knn": {
-    "field": "content_embedding",
-    "query_vector_builder": {
-      "text_embedding": {
-        "model_id": "google_vertex_ai_embeddings",
-        "model_text": "Calculate fuel cost"
-      }
-    },
-    "k": 10,
-    "num_candidates": 100
-  },
-  "_source": [
-    "id",
-    "content"
-  ]
-}
-```
-
-As a result, you receive the top 10 documents that are closest in meaning to the query from the `google-vertex-ai-embeddings` index sorted by their proximity to the query:
-
-```console-result
-"hits": [
-  {
-    "_index": "google-vertex-ai-embeddings",
-    "_id": "Ryv0nZEBBFPLbFsdCbGn",
-    "_score": 0.86815524,
-    "_source": {
-      "id": 3041038,
-      "content": "For example, the cost of the fuel could be 96.9, the amount could be 10 pounds, and the distance covered could be 80 miles. To convert between Litres per 100KM and Miles Per Gallon, please provide a value and click on the required button.o calculate how much fuel you'll need for a given journey, please provide the distance in miles you will be covering on your journey, and the estimated MPG of your vehicle. To work out what MPG you are really getting, please provide the cost of the fuel, how much you spent on the fuel, and how far it took you."
-    }
-  },
-  {
-    "_index": "google-vertex-ai-embeddings",
-    "_id": "w4j0nZEBZ1nFq1oiHQvK",
-    "_score": 0.8676357,
-    "_source": {
-      "id": 1541469,
-      "content": "This driving cost calculator takes into consideration the fuel economy of the vehicle that you are travelling in as well as the fuel cost. This road trip gas calculator will give you an idea of how much would it cost to drive before you actually travel.his driving cost calculator takes into consideration the fuel economy of the vehicle that you are travelling in as well as the fuel cost. This road trip gas calculator will give you an idea of how much would it cost to drive before you actually travel."
- } - }, - { - "_index": "google-vertex-ai-embeddings", - "_id": "Hoj0nZEBZ1nFq1oiHQjJ", - "_score": 0.80510974, - "_source": { - "id": 7982559, - "content": "What's that light cost you? 1 Select your electric rate (or click to enter your own). 2 You can calculate results for up to four types of lights. 3 Select the type of lamp (i.e. 4 Select the lamp wattage (lamp lumens). 5 Enter the number of lights in use. 6 Select how long the lamps are in use (or click to enter your own; enter hours on per year). 7 Finally, ..." - } - }, - (...) - ] -``` -:::::: - -::::::{tab-item} Mistral -```console -GET mistral-embeddings/_search -{ - "knn": { - "field": "content_embedding", - "query_vector_builder": { - "text_embedding": { - "model_id": "mistral_embeddings", - "model_text": "Calculate fuel cost" - } - }, - "k": 10, - "num_candidates": 100 - }, - "_source": [ - "id", - "content" - ] -} -``` - -As a result, you receive the top 10 documents that are closest in meaning to the query from the `mistral-embeddings` index sorted by their proximity to the query: - -```console-result -"hits": [ - { - "_index": "mistral-embeddings", - "_id": "DDd5OowBHxQKHyc3TDSC", - "_score": 0.83704096, - "_source": { - "id": 862114, - "body": "How to calculate fuel cost for a road trip. By Tara Baukus Mello • Bankrate.com. Dear Driving for Dollars, My family is considering taking a long road trip to finish off the end of the summer, but I'm a little worried about gas prices and our overall fuel cost.It doesn't seem easy to calculate since we'll be traveling through many states and we are considering several routes.y family is considering taking a long road trip to finish off the end of the summer, but I'm a little worried about gas prices and our overall fuel cost. It doesn't seem easy to calculate since we'll be traveling through many states and we are considering several routes." 
- } - }, - { - "_index": "mistral-embeddings", - "_id": "ajd5OowBHxQKHyc3TDSC", - "_score": 0.8345704, - "_source": { - "id": 820622, - "body": "Home Heating Calculator. Typically, approximately 50% of the energy consumed in a home annually is for space heating. When deciding on a heating system, many factors will come into play: cost of fuel, installation cost, convenience and life style are all important.This calculator can help you estimate the cost of fuel for different heating appliances.hen deciding on a heating system, many factors will come into play: cost of fuel, installation cost, convenience and life style are all important. This calculator can help you estimate the cost of fuel for different heating appliances." - } - }, - { - "_index": "mistral-embeddings", - "_id": "Djd5OowBHxQKHyc3TDSC", - "_score": 0.8327426, - "_source": { - "id": 8202683, - "body": "Fuel is another important cost. This cost will depend on your boat, how far you travel, and how fast you travel. A 33-foot sailboat traveling at 7 knots should be able to travel 300 miles on 50 gallons of diesel fuel.If you are paying $4 per gallon, the trip would cost you $200.Most boats have much larger gas tanks than cars.uel is another important cost. This cost will depend on your boat, how far you travel, and how fast you travel. A 33-foot sailboat traveling at 7 knots should be able to travel 300 miles on 50 gallons of diesel fuel." - } - }, - (...) 
- ] -``` -:::::: - -::::::{tab-item} Amazon Bedrock -```console -GET amazon-bedrock-embeddings/_search -{ - "knn": { - "field": "content_embedding", - "query_vector_builder": { - "text_embedding": { - "model_id": "amazon_bedrock_embeddings", - "model_text": "Calculate fuel cost" - } - }, - "k": 10, - "num_candidates": 100 - }, - "_source": [ - "id", - "content" - ] -} -``` - -As a result, you receive the top 10 documents that are closest in meaning to the query from the `amazon-bedrock-embeddings` index sorted by their proximity to the query: - -```console-result -"hits": [ - { - "_index": "amazon-bedrock-embeddings", - "_id": "DDd5OowBHxQKHyc3TDSC", - "_score": 0.83704096, - "_source": { - "id": 862114, - "body": "How to calculate fuel cost for a road trip. By Tara Baukus Mello • Bankrate.com. Dear Driving for Dollars, My family is considering taking a long road trip to finish off the end of the summer, but I'm a little worried about gas prices and our overall fuel cost.It doesn't seem easy to calculate since we'll be traveling through many states and we are considering several routes.y family is considering taking a long road trip to finish off the end of the summer, but I'm a little worried about gas prices and our overall fuel cost. It doesn't seem easy to calculate since we'll be traveling through many states and we are considering several routes." - } - }, - { - "_index": "amazon-bedrock-embeddings", - "_id": "ajd5OowBHxQKHyc3TDSC", - "_score": 0.8345704, - "_source": { - "id": 820622, - "body": "Home Heating Calculator. Typically, approximately 50% of the energy consumed in a home annually is for space heating. 
When deciding on a heating system, many factors will come into play: cost of fuel, installation cost, convenience and life style are all important.This calculator can help you estimate the cost of fuel for different heating appliances.hen deciding on a heating system, many factors will come into play: cost of fuel, installation cost, convenience and life style are all important. This calculator can help you estimate the cost of fuel for different heating appliances." - } - }, - { - "_index": "amazon-bedrock-embeddings", - "_id": "Djd5OowBHxQKHyc3TDSC", - "_score": 0.8327426, - "_source": { - "id": 8202683, - "body": "Fuel is another important cost. This cost will depend on your boat, how far you travel, and how fast you travel. A 33-foot sailboat traveling at 7 knots should be able to travel 300 miles on 50 gallons of diesel fuel.If you are paying $4 per gallon, the trip would cost you $200.Most boats have much larger gas tanks than cars.uel is another important cost. This cost will depend on your boat, how far you travel, and how fast you travel. A 33-foot sailboat traveling at 7 knots should be able to travel 300 miles on 50 gallons of diesel fuel." - } - }, - (...) - ] -``` -:::::: - -::::::{tab-item} AlibabaCloud AI Search -```console -GET alibabacloud-ai-search-embeddings/_search -{ - "knn": { - "field": "content_embedding", - "query_vector_builder": { - "text_embedding": { - "model_id": "alibabacloud_ai_search_embeddings", - "model_text": "Calculate fuel cost" - } - }, - "k": 10, - "num_candidates": 100 - }, - "_source": [ - "id", - "content" - ] -} -``` - -As a result, you receive the top 10 documents that are closest in meaning to the query from the `alibabacloud-ai-search-embeddings` index sorted by their proximity to the query: - -```console-result -"hits": [ - { - "_index": "alibabacloud-ai-search-embeddings", - "_id": "DDd5OowBHxQKHyc3TDSC", - "_score": 0.83704096, - "_source": { - "id": 862114, - "body": "How to calculate fuel cost for a road trip. 
By Tara Baukus Mello • Bankrate.com. Dear Driving for Dollars, My family is considering taking a long road trip to finish off the end of the summer, but I'm a little worried about gas prices and our overall fuel cost.It doesn't seem easy to calculate since we'll be traveling through many states and we are considering several routes.y family is considering taking a long road trip to finish off the end of the summer, but I'm a little worried about gas prices and our overall fuel cost. It doesn't seem easy to calculate since we'll be traveling through many states and we are considering several routes." - } - }, - { - "_index": "alibabacloud-ai-search-embeddings", - "_id": "ajd5OowBHxQKHyc3TDSC", - "_score": 0.8345704, - "_source": { - "id": 820622, - "body": "Home Heating Calculator. Typically, approximately 50% of the energy consumed in a home annually is for space heating. When deciding on a heating system, many factors will come into play: cost of fuel, installation cost, convenience and life style are all important.This calculator can help you estimate the cost of fuel for different heating appliances.hen deciding on a heating system, many factors will come into play: cost of fuel, installation cost, convenience and life style are all important. This calculator can help you estimate the cost of fuel for different heating appliances." - } - }, - { - "_index": "alibabacloud-ai-search-embeddings", - "_id": "Djd5OowBHxQKHyc3TDSC", - "_score": 0.8327426, - "_source": { - "id": 8202683, - "body": "Fuel is another important cost. This cost will depend on your boat, how far you travel, and how fast you travel. A 33-foot sailboat traveling at 7 knots should be able to travel 300 miles on 50 gallons of diesel fuel.If you are paying $4 per gallon, the trip would cost you $200.Most boats have much larger gas tanks than cars.uel is another important cost. This cost will depend on your boat, how far you travel, and how fast you travel. 
A 33-foot sailboat traveling at 7 knots should be able to travel 300 miles on 50 gallons of diesel fuel." - } - }, - (...) - ] -``` -:::::: - -::::::: - -## Interactive tutorials [infer-interactive-tutorials] - -You can also find tutorials in an interactive Colab notebook format using the {{es}} Python client: - -* [Cohere {{infer}} tutorial notebook](https://colab.research.google.com/github/elastic/elasticsearch-labs/blob/main/notebooks/integrations/cohere/inference-cohere.ipynb) -* [OpenAI {{infer}} tutorial notebook](https://colab.research.google.com/github/elastic/elasticsearch-labs/blob/main/notebooks/search/07-inference.ipynb) diff --git a/raw-migrated-files/ingest-docs/fleet/index.md b/raw-migrated-files/ingest-docs/fleet/index.md deleted file mode 100644 index 034704713..000000000 --- a/raw-migrated-files/ingest-docs/fleet/index.md +++ /dev/null @@ -1,3 +0,0 @@ -# Fleet and Elastic Agent - -Migrated files from the Fleet and Elastic Agent book. \ No newline at end of file diff --git a/raw-migrated-files/ingest-docs/ingest-overview/index.md b/raw-migrated-files/ingest-docs/ingest-overview/index.md deleted file mode 100644 index d2dd0f46d..000000000 --- a/raw-migrated-files/ingest-docs/ingest-overview/index.md +++ /dev/null @@ -1,3 +0,0 @@ -# Ingest overview - -Migrated files from the Ingest overview book. \ No newline at end of file diff --git a/raw-migrated-files/kibana/kibana/apm-settings-kb.md b/raw-migrated-files/kibana/kibana/apm-settings-kb.md deleted file mode 100644 index 1cb435e86..000000000 --- a/raw-migrated-files/kibana/kibana/apm-settings-kb.md +++ /dev/null @@ -1,90 +0,0 @@ ---- -navigation_title: "APM settings" ---- - -# APM settings in Kibana [apm-settings-kb] - - -These settings allow the APM app to function, and specify the data that it surfaces. Unless you’ve customized your setup, you do not need to configure any settings to use the APM app. It is enabled by default. 
- - -## APM indices [apm-indices-settings-kb] - -The APM app uses data views to query APM indices. To change the default APM indices that the APM app queries, open the APM app and select **Settings*** > ***Indices**. Index settings in the APM app take precedence over those set in `kibana.yml`. - -Starting in version 8.2.0, APM indices are {{kib}} Spaces-aware; Changes to APM index settings will only apply to the currently enabled space. - -:::{image} ../../../images/kibana-apm-settings.png -:alt: APM app settings in Kibana -:screenshot: -::: - - -## General APM settings [general-apm-settings-kb] - -If you’d like to change any of the default values, copy and paste the relevant settings into your `kibana.yml` configuration file. Changing these settings may disable features of the APM App. - -::::{tip} -More settings are available in the [Observability advanced settings](kibana://reference/advanced-settings.md#observability-advanced-settings). -:::: - - -`xpack.apm.maxSuggestions` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Maximum number of suggestions fetched in autocomplete selection boxes. Defaults to `100`. - -`xpack.apm.serviceMapFingerprintBucketSize` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Maximum number of unique transaction combinations sampled for generating service map focused on a specific service. Defaults to `100`. - -`xpack.apm.serviceMapFingerprintGlobalBucketSize` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Maximum number of unique transaction combinations sampled for generating the global service map. Defaults to `100`. - -`xpack.apm.serviceMapEnabled` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Set to `false` to disable service maps. Defaults to `true`. 
- -`xpack.apm.serviceMapTraceIdBucketSize` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Maximum number of trace IDs sampled for generating service map focused on a specific service. Defaults to `65`. - -`xpack.apm.serviceMapTraceIdGlobalBucketSize` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Maximum number of trace IDs sampled for generating the global service map. Defaults to `6`. - -`xpack.apm.serviceMapMaxTracesPerRequest` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Maximum number of traces per request for generating the global service map. Defaults to `50`. - -`xpack.apm.ui.enabled` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Set to `false` to hide the APM app from the main menu. Defaults to `true`. - -`xpack.apm.ui.maxTraceItems` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Maximum number of child items displayed when viewing trace details. Defaults to `5000`. - -`xpack.observability.annotations.index` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Index name where Observability annotations are stored. Defaults to `observability-annotations`. - -`xpack.apm.metricsInterval` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Sets a `fixed_interval` for date histograms in metrics aggregations. Defaults to `30`. - -`xpack.apm.agent.migrations.enabled` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Set to `false` to disable cloud APM migrations. Defaults to `true`. - -`xpack.apm.indices.error` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Matcher for all error indices. 
Defaults to `logs-apm*,apm-*,traces-*.otel-*`. - -`xpack.apm.indices.onboarding` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Matcher for all onboarding indices. Defaults to `apm-*`. - -`xpack.apm.indices.span` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Matcher for all span indices. Defaults to `traces-apm*,apm-*,traces-*.otel-*`. - -`xpack.apm.indices.transaction` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Matcher for all transaction indices. Defaults to `traces-apm*,apm-*,traces-*.otel-*`. - -`xpack.apm.indices.metric` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Matcher for all metrics indices. Defaults to `metrics-apm*,apm-*,metrics-*.otel-*`. - -`xpack.apm.indices.sourcemap` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Matcher for all source map indices. Defaults to `apm-*`. - -`xpack.apm.autoCreateApmDataView` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Set to `false` to disable the automatic creation of the APM data view when the APM app is opened. Defaults to `true`. - -`xpack.apm.latestAgentVersionsUrl` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") -: Specifies the URL of a self hosted file that contains latest agent versions. Defaults to `https://apm-agent-versions.elastic.co/versions.json`. Set to `''` to disable requesting latest agent versions. 
diff --git a/raw-migrated-files/kibana/kibana/logging-settings.md b/raw-migrated-files/kibana/kibana/logging-settings.md deleted file mode 100644 index 60237f131..000000000 --- a/raw-migrated-files/kibana/kibana/logging-settings.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -navigation_title: "Logging settings" ---- - -# Logging settings in {{kib}} [logging-settings] - - -You do not need to configure any additional settings to use the logging features in {{kib}}. Logging is enabled by default and will log at `info` level using the `pattern` layout, which outputs logs to `stdout`. - -However, if you are planning to ingest your logs using Elasticsearch or another tool, we recommend using the `json` layout, which produces logs in ECS format. In general, `pattern` layout is recommended when raw logs will be read by a human, and `json` layout when logs will be read by a machine. - -::::{note} -The logging configuration is validated against the predefined schema and if there are any issues with it, {{kib}} will fail to start with the detailed error message. -:::: - - -{{kib}} relies on three high-level entities to set the logging service: appenders, loggers, and root. These can be configured in the `logging` namespace in `kibana.yml`. - -* Appenders define where log messages are displayed (stdout or console) and their layout (`pattern` or `json`). They also allow you to specify if you want the logs stored and, if so, where (file on the disk). -* Loggers define what logging settings, such as the level of verbosity and the appenders, to apply to a particular context. Each log entry context provides information about the service or plugin that emits it and any of its sub-parts, for example, `metrics.ops` or `elasticsearch.query`. -* Root is a logger that applies to all the log entries in {{kib}}. - -The following table serves as a quick reference for different logging configuration keys. Note that these are not stand-alone settings and may require additional logging configuration. 
See the [Configure Logging in {{kib}}](../../../deploy-manage/monitor/logging-configuration/kibana-logging.md) guide and complete [examples](../../../deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md) for common configuration use cases. - -| | | -| --- | --- | -| `logging.appenders[].` | Unique appender identifier. | -| `logging.appenders[].console:` | Appender to use for logging records to **stdout**. By default, uses the `[%date][%level][%logger] %message` **pattern*** layout. To use a ***json**, set the [layout type to `json`](../../../deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md#log-in-json-ECS-example). | -| `logging.appenders[].file:` | Allows you to specify a fileName to write log records to disk. To write [all log records to file](../../../deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md#log-to-file-example), add the file appender to `root.appenders`. If configured, you also need to specify [`logging.appenders.file.pathName`](../../../deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md#log-to-file-example). | -| `logging.appenders[].rolling-file:` | Similar to [Log4j’s](https://logging.apache.org/log4j/2.x/) `RollingFileAppender`, this appender will log to a file and rotate if following a rolling strategy when the configured policy triggers. There are currently two policies supported: [`size-limit`](../../../deploy-manage/monitor/logging-configuration/kibana-logging.md#size-limit-triggering-policy) and [`time-interval`](../../../deploy-manage/monitor/logging-configuration/kibana-logging.md#time-interval-triggering-policy). | -| `logging.appenders[]..type` | The appender type determines where the log messages are sent. Options are `console`, `file`, `rewrite`, `rolling-file`. Required. | -| `logging.appenders[]..fileName` | Determines the filepath where the log messages are written to for file and rolling-file appender types. 
Required for appenders that write to file. | -| `logging.appenders[]..policy.type` | Specify the triggering policy for when a rollover should occur for the `rolling-file` type appender. | -| `logging.appenders[]..policy.interval` | Specify the time interval for rotating a log file for a `time-interval` type `rolling-file` appender. **Default 24h** | -| `logging.appenders[]..policy.size` | Specify the size limit at which the policy should trigger a rollover for a `size-limit` type `rolling-file` appender. **Default 100mb**. | -| `logging.appenders[]..policy.interval` | Specify the time interval at which the policy should trigger a rollover for a time-interval type `rolling-file` appender. | -| `logging.appenders[]..policy.modulate` | Whether the interval should be adjusted to cause the next rollover to occur on the interval boundary. Boolean. Default `true`. | -| `logging.appenders[]..strategy.type` | Rolling file strategy type. Only `numeric` is currently supported. | -| `logging.appenders[]..strategy.pattern` | The suffix to append to the file path when rolling. Must include `%i`. | -| `logging.appenders[]..strategy.max` | The maximum number of files to keep. Optional. Default is `7` and the maximum is `100`. | -| `logging.appenders[]..layout.type` | Determines how the log messages are displayed. Options are `pattern`, which provides human-readable output, or `json`, which provides ECS-compliant output. Required. | -| `logging.appenders[]..layout.highlight` | Optional boolean to highlight log messages in color. Applies to `pattern` layout only. Default is `false`. | -| `logging.appenders[]..layout.pattern` | Optional [string pattern](../../../deploy-manage/monitor/logging-configuration/kibana-logging.md#pattern-layout) for placeholders that will be replaced with data from the actual log message. Applicable to pattern type layout only. | -| `logging.root.appenders[]` | List of specific appenders to apply to `root`. Defaults to `console` with `pattern` layout. 
| -| `logging.root.level` | Specify default verbosity for all log messages to fall back to if not specifically configured at the individual logger level. Options are `all`, `fatal`, `error`, `warn`, `info`, `debug`, `trace`, `off`. The `all` and `off` levels can be used only in configuration and are just handy shortcuts that allow you to log every log record or disable logging entirely or for a specific logger. Default is `info`. | -| `logging.loggers[]..name:` | Specific logger instance. | -| `logging.loggers[]..level` | Specify verbosity of log messages for context. Optional and inherits the verbosity of any ancestor logger, up to the `root` logger `level`. | -| `logging.loggers[]..appenders` | Determines the appender to apply to a specific logger context as an array. Optional and falls back to the appender(s) of the `root` logger if not specified. | -| $$$enable-http-debug-logs$$$ `deprecation.enable_http_debug_logs` | Optional boolean to log debug messages when a deprecated API is called. Default is `false`. 
| diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 34c1de0c1..97e39d38b 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -17,13 +17,8 @@ toc: - file: cloud/cloud-enterprise/index.md children: - file: cloud/cloud-enterprise/ece_re_running_the_ece_upgrade.md - - file: cloud/cloud-enterprise/ece-password-reset-elastic.md - - file: cloud/cloud-enterprise/ece-restore-across-clusters.md - - file: cloud/cloud-enterprise/ece-restore-deployment.md - file: cloud/cloud-enterprise/ece-securing-clusters.md - file: cloud/cloud-enterprise/ece-securing-ece.md - - file: cloud/cloud-enterprise/ece-snapshots.md - - file: cloud/cloud-enterprise/ece-terminate-deployment.md - file: cloud/cloud-enterprise/ece-upgrade.md - file: cloud/cloud-heroku/index.md children: @@ -45,75 +40,30 @@ toc: - file: cloud/cloud-heroku/ech-snapshot-restore.md - file: cloud/cloud/index.md children: - - file: cloud/cloud/ec_service_status_api.md - - file: cloud/cloud/ec_subscribe_to_individual_regionscomponents.md - - file: cloud/cloud/ec-about.md - - file: cloud/cloud/ec-access-kibana.md - - file: cloud/cloud/ec-activity-page.md - - file: cloud/cloud/ec-add-user-settings.md - - file: cloud/cloud/ec-billing-stop.md - - file: cloud/cloud/ec-custom-bundles.md - - file: cloud/cloud/ec-custom-repository.md - - file: cloud/cloud/ec-delete-deployment.md - - file: cloud/cloud/ec-editing-user-settings.md - - file: cloud/cloud/ec-faq-getting-started.md - file: cloud/cloud/ec-faq-technical.md - - file: cloud/cloud/ec-getting-started-trial.md - - file: cloud/cloud/ec-getting-started.md - file: cloud/cloud/ec-maintenance-mode-routing.md - - file: cloud/cloud/ec-manage-apm-settings.md - - file: cloud/cloud/ec-manage-appsearch-settings.md - - file: cloud/cloud/ec-manage-enterprise-search-settings.md - - file: cloud/cloud/ec-manage-kibana-settings.md - file: cloud/cloud/ec-monitoring-setup.md - - file: cloud/cloud/ec-password-reset.md - file: cloud/cloud/ec-planning.md - 
- file: cloud/cloud/ec-regional-deployment-aliases.md - - file: cloud/cloud/ec-restore-across-clusters.md - - file: cloud/cloud/ec-restoring-snapshots.md - file: cloud/cloud/ec-security.md - - file: cloud/cloud/ec-select-subscription-level.md - - file: cloud/cloud/ec-service-status.md - - file: cloud/cloud/ec-snapshot-restore.md - file: docs-content/serverless/index.md children: - - file: docs-content/serverless/intro.md - - file: docs-content/serverless/elasticsearch-differences.md - file: docs-content/serverless/elasticsearch-http-apis.md - - file: docs-content/serverless/general-billing-stop-project.md - - file: docs-content/serverless/general-sign-up-trial.md - - file: docs-content/serverless/intro.md - file: docs-content/serverless/observability-ai-assistant.md - file: docs-content/serverless/observability-apm-get-started.md - file: docs-content/serverless/observability-ecs-application-logs.md - file: docs-content/serverless/observability-plaintext-application-logs.md - file: docs-content/serverless/observability-stream-log-files.md - - file: docs-content/serverless/project-setting-data.md - - file: docs-content/serverless/project-settings-alerts.md - - file: docs-content/serverless/project-settings-content.md - file: docs-content/serverless/what-is-observability-serverless.md - - file: elasticsearch-hadoop/elasticsearch-hadoop/index.md - children: - - file: elasticsearch-hadoop/elasticsearch-hadoop/doc-sections.md - file: elasticsearch/elasticsearch-reference/index.md children: - - file: elasticsearch/elasticsearch-reference/documents-indices.md - - file: elasticsearch/elasticsearch-reference/esql-using.md - - file: elasticsearch/elasticsearch-reference/index-modules-mapper.md - file: elasticsearch/elasticsearch-reference/ip-filtering.md - file: elasticsearch/elasticsearch-reference/scalability.md - - file: elasticsearch/elasticsearch-reference/search-with-synonyms.md - file: elasticsearch/elasticsearch-reference/secure-cluster.md - file: 
elasticsearch/elasticsearch-reference/security-files.md - file: elasticsearch/elasticsearch-reference/security-limitations.md - - file: elasticsearch/elasticsearch-reference/semantic-search-inference.md - file: elasticsearch/elasticsearch-reference/shard-request-cache.md - - file: ingest-docs/fleet/index.md - - file: ingest-docs/ingest-overview/index.md - file: kibana/kibana/index.md children: - - file: kibana/kibana/apm-settings-kb.md - - file: kibana/kibana/logging-settings.md - file: kibana/kibana/reporting-production-considerations.md - file: kibana/kibana/xpack-security.md - file: observability-docs/observability/index.md diff --git a/reference/index.md b/reference/index.md index 7cddeb96d..b4bde9357 100644 --- a/reference/index.md +++ b/reference/index.md @@ -1,6 +1,6 @@ --- mapped_pages: - - https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/api-reference.html + - https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/api-reference.html --- # Reference [api-reference] @@ -9,9 +9,9 @@ Explore the reference documentation for Elastic APIs. | | | | --- | --- | -| {{es}} | * [{{es}}](elasticsearch://reference/elasticsearch/rest-apis/index.md)
* [{{es}} Serverless](https://www.elastic.co/docs/api/doc/elasticsearch-serverless)
| -| {{kib}} | * [{{kib}}](https://www.elastic.co/docs/api/doc/kibana)
* [{{kib}} Serverless](https://www.elastic.co/docs/api/doc/serverless)
* [{{fleet}}](/reference/ingestion-tools/fleet/fleet-api-docs.md)
* [{{observability}} Serverless SLOs](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-slo)
* [{{elastic-sec}}](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-security-ai-assistant-api)
* [{{elastic-sec}} Serverless](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-security-ai-assistant-api)
| -| {{ls}} | * [Monitoring {{ls}}](https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html)
| -| APM | * [APM](/solutions/observability/apps/apm-server-api.md)
* [APM Serverless](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-apm-agent-configuration)
* [Observability intake Serverless](https://www.elastic.co/docs/api/doc/observability-serverless)
| -| {{ecloud}} | * [{{ech}}](https://www.elastic.co/docs/api/doc/cloud)
* [{{ecloud}} Serverless](https://www.elastic.co/docs/api/doc/elastic-cloud-serverless)
* [{{ece}}](https://www.elastic.co/docs/api/doc/cloud-enterprise)
* [{{eck}}](cloud-on-k8s://reference/api-docs.md)
| +| {{es}} | • [{{es}}](elasticsearch://reference/elasticsearch/rest-apis/index.md)
• [{{es}} Serverless](https://www.elastic.co/docs/api/doc/elasticsearch-serverless)
| +| {{kib}} | • [{{kib}}](https://www.elastic.co/docs/api/doc/kibana)
• [{{kib}} Serverless](https://www.elastic.co/docs/api/doc/serverless)
• [{{fleet}}](/reference/ingestion-tools/fleet/fleet-api-docs.md)
• [{{observability}} Serverless SLOs](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-slo)
• [{{elastic-sec}}](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-security-ai-assistant-api)
• [{{elastic-sec}} Serverless](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-security-ai-assistant-api)
| +| {{ls}} | • [Monitoring {{ls}}](https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html)
| +| APM | • [APM](/solutions/observability/apps/apm-server-api.md)
• [APM Serverless](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-apm-agent-configuration)
• [Observability intake Serverless](https://www.elastic.co/docs/api/doc/observability-serverless)
| +| {{ecloud}} | • [{{ech}}](https://www.elastic.co/docs/api/doc/cloud)
• [{{ecloud}} Serverless](https://www.elastic.co/docs/api/doc/elastic-cloud-serverless)
• [{{ece}}](https://www.elastic.co/docs/api/doc/cloud-enterprise)
• [{{eck}}](cloud-on-k8s://reference/api-docs.md)
| diff --git a/reference/ingestion-tools/fleet/add-fleet-server-kubernetes.md b/reference/ingestion-tools/fleet/add-fleet-server-kubernetes.md index ca5f8a070..6f77691b8 100644 --- a/reference/ingestion-tools/fleet/add-fleet-server-kubernetes.md +++ b/reference/ingestion-tools/fleet/add-fleet-server-kubernetes.md @@ -113,9 +113,8 @@ In summary, you need: When {{es}} or {{fleet-server}} are deployed, components communicate over well-defined, pre-allocated ports. You may need to allow access to these ports. Refer to the following table for default port assignments: -| | | -| --- | --- | | Component communication | Default port | +| --- | --- | | {{agent}} → {{fleet-server}} | 8220 | | {{fleet-server}} → {{es}} | 9200 | | {{fleet-server}} → {{kib}} (optional, for {{fleet}} setup) | 5601 | diff --git a/reference/ingestion-tools/fleet/elastic-agent-standalone-logging-config.md b/reference/ingestion-tools/fleet/elastic-agent-standalone-logging-config.md index 958d18d1f..a03eb6bca 100644 --- a/reference/ingestion-tools/fleet/elastic-agent-standalone-logging-config.md +++ b/reference/ingestion-tools/fleet/elastic-agent-standalone-logging-config.md @@ -32,10 +32,9 @@ Having a different log file for raw events also prevents event data from drownin The events log file is not collected by the {{agent}} monitoring. If the events log files are needed, they can be collected with the diagnostics or directly copied from the host running {{agent}}. -| | | +| Setting | Description | | --- | --- | -| **Setting**
| **Description**
| -| `agent.logging.level`
| The minimum log level.

Possible values:

* `error`: Logs errors and critical errors.
* `warning`: Logs warnings, errors, and critical errors.
* `info`: Logs informational messages, including the number of events that are published. Also logs any warnings, errors, or critical errors.
* `debug`: Logs debug messages, including a detailed printout of all events flushed. Also logs informational messages, warnings, errors, and critical errors. When the log level is `debug`, you can specify a list of **selectors** to display debug messages for specific components. If no selectors are specified, the `*` selector is used to display debug messages for all components.

Default: `info`
| +| `agent.logging.level`
| The minimum log level.

Possible values:

• `error`: Logs errors and critical errors.
• `warning`: Logs warnings, errors, and critical errors.
• `info`: Logs informational messages, including the number of events that are published. Also logs any warnings, errors, or critical errors.
• `debug`: Logs debug messages, including a detailed printout of all events flushed. Also logs informational messages, warnings, errors, and critical errors. When the log level is `debug`, you can specify a list of **selectors** to display debug messages for specific components. If no selectors are specified, the `*` selector is used to display debug messages for all components.

Default: `info`
| | `agent.logging.selectors`
| Specify the selector tags that are used by different {{agent}} components for debugging. To debug the output for all components, use `*`. To display debug messages related to event publishing, set to `publish`. Multiple selectors can be chained.

Possible values: `[beat]`, `[publish]`, `[service]`
| | `agent.logging.to_stderr`
| Set to `true` to write all logging output to the `stderr` output—this is equivalent to using the `-e` command line option.

Default: `true`
| | `agent.logging.to_syslog`
| Set to `true` to write all logging output to the `syslog` output.

Default: `false`
| diff --git a/reference/ingestion-tools/fleet/fleet-server-scalability.md b/reference/ingestion-tools/fleet/fleet-server-scalability.md index d7eb316b2..f39f4a0f3 100644 --- a/reference/ingestion-tools/fleet/fleet-server-scalability.md +++ b/reference/ingestion-tools/fleet/fleet-server-scalability.md @@ -187,9 +187,8 @@ The following tables provide the minimum resource requirements and scaling guide ### Resource requirements by number of agents [resource-requirements-by-number-agents] -| | | | | -| --- | --- | --- | --- | | Number of Agents | {{fleet-server}} Memory | {{fleet-server}} vCPU | {{es}} Hot Tier | +| --- | --- | --- | --- | | 2,000 | 2GB | up to 8 vCPU | 32GB RAM | 8 vCPU | | 5,000 | 4GB | up to 8 vCPU | 32GB RAM | 8 vCPU | | 10,000 | 8GB | up to 8 vCPU | 128GB RAM | 32 vCPU | diff --git a/reference/ingestion-tools/fleet/install-elastic-agents.md b/reference/ingestion-tools/fleet/install-elastic-agents.md index 50ca62e32..8c316b038 100644 --- a/reference/ingestion-tools/fleet/install-elastic-agents.md +++ b/reference/ingestion-tools/fleet/install-elastic-agents.md @@ -79,9 +79,8 @@ Using our lab environment as an example, we can observe the following resource c We tested using an AWS `m7i.large` instance type with 2 vCPUs, 8.0 GB of memory, and up to 12.5 Gbps of bandwidth. The tests ingested a single log file using both the [throughput and scale preset](/reference/ingestion-tools/fleet/elasticsearch-output.md#output-elasticsearch-performance-tuning-settings) with self monitoring enabled. These tests are representative of use cases that attempt to ingest data as fast as possible. This does not represent the resource overhead when using [{{elastic-defend}}](integration-docs://reference/endpoint/index.md). 
-| | | | +| Resource | Throughput | Scale | | --- | --- | --- | -| **Resource** | **Throughput** | **Scale** | | **CPU*** | ~67% | ~20% | | **RSS memory size*** | ~280 MB | ~220 MB | | **Write network throughput** | ~3.5 MB/s | 480 KB/s | diff --git a/reference/ingestion-tools/fleet/scaling-on-kubernetes.md b/reference/ingestion-tools/fleet/scaling-on-kubernetes.md index b3adfc67e..f2e2194d8 100644 --- a/reference/ingestion-tools/fleet/scaling-on-kubernetes.md +++ b/reference/ingestion-tools/fleet/scaling-on-kubernetes.md @@ -79,9 +79,8 @@ Based on our [{{agent}} Scaling tests](https://github.com/elastic/elastic-agent/ Sample Elastic Agent Configurations: -| | | | -| --- | --- | --- | | No of Pods in K8s Cluster | Leader Agent Resources | Rest of Agents | +| --- | --- | --- | | 1000 | cpu: "1500m", memory: "800Mi" | cpu: "300m", memory: "600Mi" | | 3000 | cpu: "2000m", memory: "1500Mi" | cpu: "400m", memory: "800Mi" | | 5000 | cpu: "3000m", memory: "2500Mi" | cpu: "500m", memory: "900Mi" | @@ -121,9 +120,8 @@ You can find more information in the document called [{{agent}} Manifests in ord Based on our [{{agent}} scaling tests](https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-scaling-tests.md), the following table aims to assist users on how to configure their KSM Sharding as {{k8s}} cluster scales: -| | | | -| --- | --- | --- | | No of Pods in K8s Cluster | No of KSM Shards | Agent Resources | +| --- | --- | --- | | 1000 | No Sharding can be handled with default KSM config | limits: memory: 700Mi , cpu:500m | | 3000 | 4 Shards | limits: memory: 1400Mi , cpu:1500m | | 5000 | 6 Shards | limits: memory: 1400Mi , cpu:1500m | diff --git a/solutions/observability/apps/collect-application-data.md b/solutions/observability/apps/collect-application-data.md index d87ee0bca..fba4bee01 100644 --- a/solutions/observability/apps/collect-application-data.md +++ b/solutions/observability/apps/collect-application-data.md @@ -46,9 +46,8 @@ Use Elastic APM agents 
or an OpenTelemetry language SDK to instrument a service ### Availability [apm-collect-data-availability] -| | | | +| Language | Elastic APM agent | Elastic Distributions of OpenTelemetry (EDOT) | | --- | --- | --- | -| **Language** | **Elastic APM agent** | **Elastic Distributions of OpenTelemetry (EDOT)** | | **Android** | Android agent | ![Not available](../../../images/observability-cross.svg "") | | **Go** | Go agent | ![Not available](../../../images/observability-cross.svg "") | | **iOS** | iOS agent | ![Not available](../../../images/observability-cross.svg "") | diff --git a/solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md b/solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md index f4a3f93ff..02065629a 100644 --- a/solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md +++ b/solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md @@ -51,9 +51,8 @@ The minimum supported versions of each interpreter are: The following deployment configuration example was tested to support profiling data from a fleet of up to 500 hosts, each with 8 or 16 CPU cores, for a total of roughly 6000 cores: -| | | | -| --- | --- | --- | | Component | Size per zone (memory) | Zones | +| --- | --- | --- | | {{es}} | 64 GB | 2 | | Kibana | 8 GB | 1 | | Integrations Server | 8 GB | 1 |