
Commit aaeca3f

Auto-generated API code (#3069)
1 parent 2619877 commit aaeca3f

File tree: 9 files changed, +535 -116 lines changed


docs/reference.asciidoc

Lines changed: 17 additions & 12 deletions
@@ -91,6 +91,7 @@ Some of the officially supported clients provide helpers to assist with bulk req
 * Perl: Check out `Search::Elasticsearch::Client::5_0::Bulk` and `Search::Elasticsearch::Client::5_0::Scroll`
 * Python: Check out `elasticsearch.helpers.*`
 * JavaScript: Check out `client.helpers.*`
+* Java: Check out `co.elastic.clients.elasticsearch._helpers.bulk.BulkIngester`
 * .NET: Check out `BulkAllObservable`
 * PHP: Check out bulk indexing.
 * Ruby: Check out `Elasticsearch::Helpers::BulkHelper`
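
For the JavaScript entry above, a minimal sketch of what `client.helpers.*` offers for bulk ingestion, using the `@elastic/elasticsearch` bulk helper; the index name and documents are made up for illustration:

[source,ts]
----
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

// Hypothetical documents; the helper batches them into _bulk requests for you.
const docs = [
  { id: 1, text: 'foo' },
  { id: 2, text: 'bar' }
]

const result = await client.helpers.bulk({
  datasource: docs,
  // Return the bulk action to perform for each document.
  onDocument () {
    return { index: { _index: 'my-index' } }
  }
})

console.log(result.successful, result.failed)
----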
@@ -533,14 +534,14 @@ Rethrottling that speeds up the query takes effect immediately but rethrotting t
 {ref}/docs-delete-by-query.html[Endpoint documentation]
 [source,ts]
 ----
-client.deleteByQueryRethrottle({ task_id })
+client.deleteByQueryRethrottle({ task_id, requests_per_second })
 ----
 [discrete]
 ==== Arguments
 
 * *Request (object):*
 ** *`task_id` (string | number)*: The ID for the task.
-** *`requests_per_second` (Optional, float)*: The throttle for this request in sub-requests per second. To disable throttling, set it to `-1`.
+** *`requests_per_second` (float)*: The throttle for this request in sub-requests per second. To disable throttling, set it to `-1`.
 
 [discrete]
 === delete_script
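
The hunk above moves `requests_per_second` into the generated call example. A minimal sketch of rethrottling a running delete-by-query task; the index name and throttle values are hypothetical:

[source,ts]
----
// Start a delete-by-query asynchronously so the response contains a task ID.
const { task } = await client.deleteByQuery({
  index: 'my-index',
  query: { match_all: {} },
  wait_for_completion: false,
  requests_per_second: 100
})

// Speed the running task up, or pass -1 to disable throttling entirely.
await client.deleteByQueryRethrottle({
  task_id: task!,
  requests_per_second: 500
})
----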
@@ -1599,14 +1600,14 @@ This behavior prevents scroll timeouts.
 {ref}/docs-reindex.html[Endpoint documentation]
 [source,ts]
 ----
-client.reindexRethrottle({ task_id })
+client.reindexRethrottle({ task_id, requests_per_second })
 ----
 [discrete]
 ==== Arguments
 
 * *Request (object):*
 ** *`task_id` (string)*: The task identifier, which can be found by using the tasks API.
-** *`requests_per_second` (Optional, float)*: The throttle for this request in sub-requests per second. It can be either `-1` to turn off throttling or any decimal number like `1.7` or `12` to throttle to that level.
+** *`requests_per_second` (float)*: The throttle for this request in sub-requests per second. It can be either `-1` to turn off throttling or any decimal number like `1.7` or `12` to throttle to that level.
 
 [discrete]
 === render_search_template
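
The same signature change applies to reindex. A sketch assuming a reindex started with `wait_for_completion: false`; the source and destination indices are hypothetical:

[source,ts]
----
const { task } = await client.reindex({
  source: { index: 'old-index' },
  dest: { index: 'new-index' },
  wait_for_completion: false
})

// Turn throttling off for the running reindex task.
await client.reindexRethrottle({ task_id: String(task), requests_per_second: -1 })
----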
@@ -2301,14 +2302,14 @@ Rethrottling that speeds up the query takes effect immediately but rethrotting t
 {ref}/docs-update-by-query.html[Endpoint documentation]
 [source,ts]
 ----
-client.updateByQueryRethrottle({ task_id })
+client.updateByQueryRethrottle({ task_id, requests_per_second })
 ----
 [discrete]
 ==== Arguments
 
 * *Request (object):*
 ** *`task_id` (string)*: The ID for the task.
-** *`requests_per_second` (Optional, float)*: The throttle for this request in sub-requests per second. To turn off throttling, set it to `-1`.
+** *`requests_per_second` (float)*: The throttle for this request in sub-requests per second. To turn off throttling, set it to `-1`.
 
 [discrete]
 === async_search
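
And likewise for update-by-query. A sketch that first locates the task through the tasks API (the `*byquery` action filter comes from the Elasticsearch task-management docs; the task ID below is hypothetical):

[source,ts]
----
// Find running update/delete-by-query tasks.
const tasks = await client.tasks.list({ detailed: true, actions: '*byquery' })
console.log(tasks.nodes)

// Throttle an already-running task down to 10 sub-requests per second.
await client.updateByQueryRethrottle({
  task_id: 'oTUltX4IQMOUUVeiohTt8A:12345',
  requests_per_second: 10
})
----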
@@ -4652,6 +4653,8 @@ client.connector.updateFiltering({ connector_id })
 Update the connector draft filtering validation.
 
 Update the draft filtering validation info for a connector.
+
+https://www.elastic.co/docs/api/doc/elasticsearch/v8/operation/operation-connector-update-filtering-validation[Endpoint documentation]
 [source,ts]
 ----
 client.connector.updateFilteringValidation({ connector_id, validation })
@@ -4704,6 +4707,8 @@ client.connector.updateName({ connector_id })
 [discrete]
 ==== update_native
 Update the connector is_native flag.
+
+https://www.elastic.co/docs/api/doc/elasticsearch/v8/operation/operation-connector-update-native[Endpoint documentation]
 [source,ts]
 ----
 client.connector.updateNative({ connector_id, is_native })
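
The two connector hunks above only add endpoint-documentation links; the generated calls are unchanged. For reference, a minimal sketch of the `update_native` call shown in the hunk; the connector ID is hypothetical:

[source,ts]
----
// Flag a connector as a native (Elastic-managed) connector.
await client.connector.updateNative({
  connector_id: 'my-connector-id',
  is_native: true
})
----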
@@ -4797,15 +4802,15 @@ For example, this can happen if you delete more than `cluster.indices.tombstones
 {ref}/dangling-index-delete.html[Endpoint documentation]
 [source,ts]
 ----
-client.danglingIndices.deleteDanglingIndex({ index_uuid, accept_data_loss })
+client.danglingIndices.deleteDanglingIndex({ index_uuid })
 ----
 
 [discrete]
 ==== Arguments
 
 * *Request (object):*
 ** *`index_uuid` (string)*: The UUID of the index to delete. Use the get dangling indices API to find the UUID.
-** *`accept_data_loss` (boolean)*: This parameter must be set to true to acknowledge that it will no longer be possible to recove data from the dangling index.
+** *`accept_data_loss` (Optional, boolean)*: This parameter must be set to true to acknowledge that it will no longer be possible to recove data from the dangling index.
 ** *`master_timeout` (Optional, string | -1 | 0)*: Specify timeout for connection to master
 ** *`timeout` (Optional, string | -1 | 0)*: Explicit operation timeout
 
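Although `accept_data_loss` is now optional in the generated types, the argument description still says it must be set to `true`. A sketch that lists dangling indices and deletes one:

[source,ts]
----
// Find dangling indices and their UUIDs.
const { dangling_indices } = await client.danglingIndices.listDanglingIndices()

if (dangling_indices.length > 0) {
  await client.danglingIndices.deleteDanglingIndex({
    index_uuid: dangling_indices[0].index_uuid,
    accept_data_loss: true // required by the server even though the type marks it optional
  })
}
----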
@@ -4819,15 +4824,15 @@ For example, this can happen if you delete more than `cluster.indices.tombstones
 {ref}/dangling-index-import.html[Endpoint documentation]
 [source,ts]
 ----
-client.danglingIndices.importDanglingIndex({ index_uuid, accept_data_loss })
+client.danglingIndices.importDanglingIndex({ index_uuid })
 ----
 
 [discrete]
 ==== Arguments
 
 * *Request (object):*
 ** *`index_uuid` (string)*: The UUID of the index to import. Use the get dangling indices API to locate the UUID.
-** *`accept_data_loss` (boolean)*: This parameter must be set to true to import a dangling index.
+** *`accept_data_loss` (Optional, boolean)*: This parameter must be set to true to import a dangling index.
 Because Elasticsearch cannot know where the dangling index data came from or determine which shard copies are fresh and which are stale, it cannot guarantee that the imported data represents the latest state of the index when it was last in the cluster.
 ** *`master_timeout` (Optional, string | -1 | 0)*: Specify timeout for connection to master
 ** *`timeout` (Optional, string | -1 | 0)*: Explicit operation timeout
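
Importing follows the same pattern when the data should be recovered instead of discarded; the UUID below is hypothetical:

[source,ts]
----
await client.danglingIndices.importDanglingIndex({
  index_uuid: 'zmM4e0JtBkeUjiHD-MihPQ', // hypothetical UUID taken from the list API
  accept_data_loss: true                // acknowledges the imported data may be stale
})
----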
@@ -5025,7 +5030,7 @@ client.eql.search({ index, query })
 ** *`case_sensitive` (Optional, boolean)*
 ** *`event_category_field` (Optional, string)*: Field containing the event classification, such as process, file, or network.
 ** *`tiebreaker_field` (Optional, string)*: Field used to sort hits with the same timestamp in ascending order
-** *`timestamp_field` (Optional, string)*: Field containing event timestamp. Default "@timestamp"
+** *`timestamp_field` (Optional, string)*: Field containing event timestamp.
 ** *`fetch_size` (Optional, number)*: Maximum number of events to search at a time for sequence queries.
 ** *`filter` (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type } | { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }[])*: Query, written in Query DSL, used to filter the events on which the EQL query runs.
 ** *`keep_alive` (Optional, string | -1 | 0)*
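
For context, a minimal sketch of the `eql.search` call this hunk documents; the index pattern and EQL query are made up, and `timestamp_field` is passed explicitly since the generated docs no longer state a default:

[source,ts]
----
const response = await client.eql.search({
  index: 'my-logs-*',
  query: 'process where process.name == "regsvr32.exe"',
  timestamp_field: '@timestamp',
  fetch_size: 100
})
console.log(response.hits)
----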
@@ -16013,7 +16018,7 @@ This setting primarily has an impact when a whole message Grok pattern such as `
 If the structure finder identifies a common structure but has no idea of meaning then generic field names such as `path`, `ipaddress`, `field1`, and `field2` are used in the `grok_pattern` output, with the intention that a user who knows the meanings rename these fields before using it.
 ** *`explain` (Optional, boolean)*: If this parameter is set to `true`, the response includes a field named explanation, which is an array of strings that indicate how the structure finder produced its result.
 If the structure finder produces unexpected results for some text, use this query parameter to help you determine why the returned structure was chosen.
-** *`format` (Optional, string)*: The high level structure of the text.
+** *`format` (Optional, Enum("ndjson" | "xml" | "delimited" | "semi_structured_text"))*: The high level structure of the text.
 Valid values are `ndjson`, `xml`, `delimited`, and `semi_structured_text`.
 By default, the API chooses the format.
 In this default scenario, all rows must have the same number of fields for a delimited format to be detected.
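
This hunk is from the text-structure docs; assuming it refers to the find-message-structure variant (`client.textStructure.findMessageStructure`), a sketch showing that `format` is now checked against the enum at compile time (the sample messages are hypothetical):

[source,ts]
----
// A typo such as format: 'ndjsonn' now fails TypeScript compilation,
// since the generated type is a union of the four valid values.
const structure = await client.textStructure.findMessageStructure({
  messages: [
    '2024-01-01T00:00:00Z INFO starting up',
    '2024-01-01T00:00:01Z WARN low disk space'
  ],
  format: 'semi_structured_text'
})
console.log(structure.grok_pattern)
----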
