From 57d92be6271d30704050c68e964dcb9f4fc5c4f9 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Mon, 6 Apr 2026 09:07:10 +0000 Subject: [PATCH 01/14] HDDS-14919. [Auto] Update configuration documentation from ozone ce303c9277d4284cdb2aaf297016208a3061504c --- docs/05-administrator-guide/02-configuration/99-appendix.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index 6922fcc362..e784b8c034 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -20,7 +20,7 @@ This page provides a comprehensive overview of the configuration keys available | `hadoop.http.authentication.kerberos.principal` | HTTP/`${httpfs.hostname}`@`${kerberos.realm}` | | The HTTP Kerberos principal used by HttpFS in the HTTP endpoint. The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP SPNEGO specification. httpfs.authentication.kerberos.principal is deprecated. Instead use `hadoop.http.authentication.kerberos.principal`. | | `hadoop.http.authentication.signature.secret.file` | `${httpfs.config.dir}`/httpfs-signature.secret | | File containing the secret to sign HttpFS hadoop-auth cookies. This file should be readable only by the system user running HttpFS service. If multiple HttpFS servers are used in a load-balancer/round-robin fashion, they should share the secret file. If the secret file specified here does not exist, random secret is generated at startup time. httpfs.authentication.signature.secret.file is deprecated. Instead use `hadoop.http.authentication.signature.secret.file`. | | `hadoop.http.authentication.type` | simple | | Defines the authentication mechanism used by httpfs for its HTTP clients. Valid values are 'simple' or 'kerberos'. 
If using 'simple' HTTP clients must specify the username with the 'user.name' query string parameter. If using 'kerberos' HTTP clients must use HTTP SPNEGO or delegation tokens. httpfs.authentication.type is deprecated. Instead use `hadoop.http.authentication.type`. | -| `hadoop.http.idle_timeout.ms` | 60000 | | Httpfs Server connection timeout in milliseconds. | +| `hadoop.http.idle_timeout.ms` | 60000 | `OZONE`, `PERFORMANCE`, `S3GATEWAY` | OM/SCM/DN/S3GATEWAY Server connection timeout in milliseconds. | | `hadoop.http.max.request.header.size` | 65536 | | The maxmimum HTTP request header size. | | `hadoop.http.max.response.header.size` | 65536 | | The maxmimum HTTP response header size. | | `hadoop.http.max.threads` | 1000 | | The maxmimum number of threads. | @@ -231,8 +231,8 @@ This page provides a comprehensive overview of the configuration keys available | `hdds.scm.block.deletion.per-interval.max` | 500000 | `SCM`, `DELETION` | Maximum number of blocks which SCM processes during an interval. The block num is counted at the replica level.If SCM has 100000 blocks which need to be deleted and the configuration is 5000 then it would only send 5000 blocks for deletion to the datanodes. | | `hdds.scm.block.deletion.txn.dn.commit.map.limit` | 5000000 | `SCM` | This value indicates the size of the transactionToDNsCommitMap after which we will skip one round of scm block deleting interval. | | `hdds.scm.ec.pipeline.choose.policy.impl` | org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy | `SCM`, `PIPELINE` | Sets the policy for choosing an EC pipeline. The value should be the full name of a class which implements org.apache.hadoop.hdds.scm.PipelineChoosePolicy. The class decides which pipeline will be used when selecting an EC Pipeline. If not set, org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy will be used as default value. 
One of the following values can be used: (1) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy : chooses a pipeline randomly. (2) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.HealthyPipelineChoosePolicy : chooses a healthy pipeline randomly. (3) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.CapacityPipelineChoosePolicy : chooses the pipeline with lower utilization from two random pipelines. Note that random choose method will be executed twice in this policy.(4) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RoundRobinPipelineChoosePolicy : chooses a pipeline in a round robin fashion. Intended for troubleshooting and testing purposes only. | -| `hdds.scm.http.auth.kerberos.keytab` | | `SECURITY` | The keytab file used by SCM http server to login as its service principal. | -| `hdds.scm.http.auth.kerberos.principal` | | `SECURITY` | This Kerberos principal is used when communicating to the HTTP server of SCM.The protocol used is SPNEGO. | +| `hdds.scm.http.auth.kerberos.keytab` | /etc/security/keytabs/HTTP.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM http server to login as its service principal if SPNEGO is enabled for SCM http server. | +| `hdds.scm.http.auth.kerberos.principal` | HTTP/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | SCM http server service principal if SPNEGO is enabled for SCM http server. | | `hdds.scm.http.auth.type` | simple | `OM`, `SECURITY`, `KERBEROS` | simple or kerberos. If kerberos is set, SPNEGO will be used for http authentication. | | `hdds.scm.kerberos.keytab.file` | /etc/security/keytabs/SCM.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM daemon to login as its service principal. | | `hdds.scm.kerberos.principal` | SCM/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | The SCM service principal. e.g. 
scm/_HOST@REALM.COM | From 67405d3df3e139c9d7f10103082a848dc2aae34c Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Mon, 6 Apr 2026 16:41:06 +0000 Subject: [PATCH 02/14] HDDS-14978. [Auto] Update configuration documentation from ozone ece57e53c8992f3b5c970bd2b56272479a7e47cd --- docs/05-administrator-guide/02-configuration/99-appendix.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index e784b8c034..aadf3fe4e8 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -20,7 +20,7 @@ This page provides a comprehensive overview of the configuration keys available | `hadoop.http.authentication.kerberos.principal` | HTTP/`${httpfs.hostname}`@`${kerberos.realm}` | | The HTTP Kerberos principal used by HttpFS in the HTTP endpoint. The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP SPNEGO specification. httpfs.authentication.kerberos.principal is deprecated. Instead use `hadoop.http.authentication.kerberos.principal`. | | `hadoop.http.authentication.signature.secret.file` | `${httpfs.config.dir}`/httpfs-signature.secret | | File containing the secret to sign HttpFS hadoop-auth cookies. This file should be readable only by the system user running HttpFS service. If multiple HttpFS servers are used in a load-balancer/round-robin fashion, they should share the secret file. If the secret file specified here does not exist, random secret is generated at startup time. httpfs.authentication.signature.secret.file is deprecated. Instead use `hadoop.http.authentication.signature.secret.file`. | | `hadoop.http.authentication.type` | simple | | Defines the authentication mechanism used by httpfs for its HTTP clients. Valid values are 'simple' or 'kerberos'. 
If using 'simple' HTTP clients must specify the username with the 'user.name' query string parameter. If using 'kerberos' HTTP clients must use HTTP SPNEGO or delegation tokens. httpfs.authentication.type is deprecated. Instead use `hadoop.http.authentication.type`. | -| `hadoop.http.idle_timeout.ms` | 60000 | `OZONE`, `PERFORMANCE`, `S3GATEWAY` | OM/SCM/DN/S3GATEWAY Server connection timeout in milliseconds. | +| `hadoop.http.idle_timeout.ms` | 60000 | | Httpfs Server connection timeout in milliseconds. | | `hadoop.http.max.request.header.size` | 65536 | | The maxmimum HTTP request header size. | | `hadoop.http.max.response.header.size` | 65536 | | The maxmimum HTTP response header size. | | `hadoop.http.max.threads` | 1000 | | The maxmimum number of threads. | From 656f6d46bce9690d402a475676e42540b1f472f2 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Mon, 6 Apr 2026 19:24:02 +0000 Subject: [PATCH 03/14] HDDS-14041. [Auto] Update configuration documentation from ozone 5c53d2e5915d4bd9b1b4c67b4afbdf636e680975 --- .../02-configuration/99-appendix.md | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index aadf3fe4e8..4d81c90fd4 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -20,7 +20,7 @@ This page provides a comprehensive overview of the configuration keys available | `hadoop.http.authentication.kerberos.principal` | HTTP/`${httpfs.hostname}`@`${kerberos.realm}` | | The HTTP Kerberos principal used by HttpFS in the HTTP endpoint. The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP SPNEGO specification. httpfs.authentication.kerberos.principal is deprecated. Instead use `hadoop.http.authentication.kerberos.principal`. 
| | `hadoop.http.authentication.signature.secret.file` | `${httpfs.config.dir}`/httpfs-signature.secret | | File containing the secret to sign HttpFS hadoop-auth cookies. This file should be readable only by the system user running HttpFS service. If multiple HttpFS servers are used in a load-balancer/round-robin fashion, they should share the secret file. If the secret file specified here does not exist, random secret is generated at startup time. httpfs.authentication.signature.secret.file is deprecated. Instead use `hadoop.http.authentication.signature.secret.file`. | | `hadoop.http.authentication.type` | simple | | Defines the authentication mechanism used by httpfs for its HTTP clients. Valid values are 'simple' or 'kerberos'. If using 'simple' HTTP clients must specify the username with the 'user.name' query string parameter. If using 'kerberos' HTTP clients must use HTTP SPNEGO or delegation tokens. httpfs.authentication.type is deprecated. Instead use `hadoop.http.authentication.type`. | -| `hadoop.http.idle_timeout.ms` | 60000 | | Httpfs Server connection timeout in milliseconds. | +| `hadoop.http.idle_timeout.ms` | 60000 | `OZONE`, `PERFORMANCE`, `S3GATEWAY` | OM/SCM/DN/S3GATEWAY Server connection timeout in milliseconds. | | `hadoop.http.max.request.header.size` | 65536 | | The maxmimum HTTP request header size. | | `hadoop.http.max.response.header.size` | 65536 | | The maxmimum HTTP response header size. | | `hadoop.http.max.threads` | 1000 | | The maxmimum number of threads. | @@ -231,8 +231,8 @@ This page provides a comprehensive overview of the configuration keys available | `hdds.scm.block.deletion.per-interval.max` | 500000 | `SCM`, `DELETION` | Maximum number of blocks which SCM processes during an interval. The block num is counted at the replica level.If SCM has 100000 blocks which need to be deleted and the configuration is 5000 then it would only send 5000 blocks for deletion to the datanodes. 
| | `hdds.scm.block.deletion.txn.dn.commit.map.limit` | 5000000 | `SCM` | This value indicates the size of the transactionToDNsCommitMap after which we will skip one round of scm block deleting interval. | | `hdds.scm.ec.pipeline.choose.policy.impl` | org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy | `SCM`, `PIPELINE` | Sets the policy for choosing an EC pipeline. The value should be the full name of a class which implements org.apache.hadoop.hdds.scm.PipelineChoosePolicy. The class decides which pipeline will be used when selecting an EC Pipeline. If not set, org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy will be used as default value. One of the following values can be used: (1) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy : chooses a pipeline randomly. (2) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.HealthyPipelineChoosePolicy : chooses a healthy pipeline randomly. (3) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.CapacityPipelineChoosePolicy : chooses the pipeline with lower utilization from two random pipelines. Note that random choose method will be executed twice in this policy.(4) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RoundRobinPipelineChoosePolicy : chooses a pipeline in a round robin fashion. Intended for troubleshooting and testing purposes only. | -| `hdds.scm.http.auth.kerberos.keytab` | /etc/security/keytabs/HTTP.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM http server to login as its service principal if SPNEGO is enabled for SCM http server. | -| `hdds.scm.http.auth.kerberos.principal` | HTTP/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | SCM http server service principal if SPNEGO is enabled for SCM http server. | +| `hdds.scm.http.auth.kerberos.keytab` | | `SECURITY` | The keytab file used by SCM http server to login as its service principal. 
| +| `hdds.scm.http.auth.kerberos.principal` | | `SECURITY` | This Kerberos principal is used when communicating to the HTTP server of SCM.The protocol used is SPNEGO. | | `hdds.scm.http.auth.type` | simple | `OM`, `SECURITY`, `KERBEROS` | simple or kerberos. If kerberos is set, SPNEGO will be used for http authentication. | | `hdds.scm.kerberos.keytab.file` | /etc/security/keytabs/SCM.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM daemon to login as its service principal. | | `hdds.scm.kerberos.principal` | SCM/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | The SCM service principal. e.g. scm/_HOST@REALM.COM | @@ -350,7 +350,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.client.failover.max.attempts` | 500 | | Expert only. Ozone RpcClient attempts talking to each OzoneManager ipc.client.connect.max.retries (default = 10) number of times before failing over to another OzoneManager, if available. This parameter represents the number of times per request the client will failover before giving up. This value is kept high so that client does not give up trying to connect to OMs easily. | | `ozone.client.follower.read.default.consistency` | LINEARIZABLE_ALLOW_FOLLOWER | | The default consistency when client enables follower read. Currently, the supported follower read consistency are LINEARIZABLE_ALLOW_FOLLOWER and LOCAL_LEASE The default value is LINEARIZABLE_ALLOW_FOLLOWER to preserve the same strong consistency behavior when switching from leader-only read to follower read. | | `ozone.client.follower.read.enabled` | false | | Enable client to read from OM followers. If false, all client requests are sent to the OM leader. | -| `ozone.client.fs.default.bucket.layout` | FILE_SYSTEM_OPTIMIZED | `CLIENT` | The bucket layout used by buckets created using OFS. 
Valid values include FILE_SYSTEM_OPTIMIZED and LEGACY | +| `ozone.client.fs.default.bucket.layout` | FILE_SYSTEM_OPTIMIZED | `OZONE`, `CLIENT` | Default bucket layout value used when buckets are created using OFS. Supported values are LEGACY and FILE_SYSTEM_OPTIMIZED. FILE_SYSTEM_OPTIMIZED: This layout allows the bucket to support atomic rename/delete operations and also allows interoperability between S3 and FS APIs. Keys written via S3 API with a "/" delimiter will create intermediate directories. | | `ozone.client.hbase.enhancements.allowed` | false | `CLIENT` | When set to false, client-side HBase enhancement-related Ozone (experimental) features are disabled (not allowed to be enabled) regardless of whether those configs are set. Here is the list of configs and values overridden when this config is set to false: 1. `ozone.fs.hsync.enabled` = false 2. `ozone.client.incremental.chunk.list` = false 3. `ozone.client.stream.putblock.piggybacking` = false 4. `ozone.client.key.write.concurrency` = 1 A warning message will be printed if any of the above configs are overridden by this. | | `ozone.client.incremental.chunk.list` | false | `CLIENT` | Client PutBlock request can choose incremental chunk list rather than full chunk list to optimize performance. Critical to HBase. EC does not support this feature. Can be enabled only when `ozone.client.hbase.enhancements.allowed` = true | | `ozone.client.key.latest.version.location` | true | `OZONE`, `CLIENT` | Ozone client gets the latest version location. | @@ -358,7 +358,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.client.key.write.concurrency` | 1 | `CLIENT` | Maximum concurrent writes allowed on each key. Defaults to 1 which matches the behavior before HDDS-9844. For unlimited write concurrency, set this to -1 or any negative integer value. 
Any value other than 1 is effective only when `ozone.client.hbase.enhancements.allowed` = true | | `ozone.client.leader.read.default.consistency` | DEFAULT | | The default consistency when client disables follower read. Currently, the supported leader read consistency are DEFAULT and LINEARIZABLE_LEADER_ONLY. The default value is DEFAULT for backward compatibility reason which is mostly strongly consistent. | | `ozone.client.list.cache` | 1000 | `OZONE`, `PERFORMANCE` | Configuration property to configure the cache size of client list calls. | -| `ozone.client.max.ec.stripe.write.retries` | 10 | `CLIENT` | Ozone EC client to retry stripe to new block group on failures. | +| `ozone.client.max.ec.stripe.write.retries` | 10 | `CLIENT` | When EC stripe write failed, client will request to allocate new block group and write the failed stripe into new block group. If the same stripe failure continued in newly acquired block group also, then it will retry by requesting to allocate new block group again. This configuration is used to limit these number of retries. By default the number of retries are 10. | | `ozone.client.max.retries` | 5 | `CLIENT` | Maximum number of retries by Ozone Client on encountering exception while writing a key | | `ozone.client.read.max.retries` | 3 | `CLIENT` | Maximum number of retries by Ozone Client on encountering connectivity exception when reading a key. | | `ozone.client.read.retry.interval` | 1 | `CLIENT` | Indicates the time duration in seconds a client will wait before retrying a read key request on encountering a connectivity exception from Datanodes. By default the interval is 1 second | @@ -539,6 +539,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.om.snapshot.diff.max.jobs.purge.per.task` | 100 | `OZONE`, `OM` | Maximum number of snapshot diff jobs to be purged per snapDiff clean up run. 
| | `ozone.om.snapshot.diff.max.page.size` | 1000 | `OZONE`, `OM` | Maximum number of entries to be returned in a single page of snap diff report. | | `ozone.om.snapshot.diff.thread.pool.size` | 10 | `OZONE`, `OM` | Maximum numbers of concurrent snapshot diff jobs are allowed. | +| `ozone.om.snapshot.directory.metrics.update.interval` | 5m | `OZONE`, `OM` | Time interval used to update the space consumption stats of the Ozone Manager snapshot directories. Background thread periodically calculates and updates these stats. Unit could be defined with postfix (ns,ms,s,m,h,d) | | `ozone.om.snapshot.force.full.diff` | false | `OZONE`, `OM` | Flag to always perform full snapshot diff (can be slow) without using the optimised compaction DAG. | | `ozone.om.snapshot.load.native.lib` | true | `OZONE`, `OM` | Load native library for performing optimized snapshot diff. | | `ozone.om.snapshot.local.data.manager.service.interval` | 5m | | Interval for cleaning up orphan snapshot local data versions corresponding to snapshots | From 5bdc40714e0d8ba08834e7e43e5534afb8d95bfc Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Tue, 7 Apr 2026 07:31:28 +0000 Subject: [PATCH 04/14] HDDS-14963. [Auto] Update configuration documentation from ozone 0c3751acfa472309aaa9a0a689307dabcf98db0c --- docs/05-administrator-guide/02-configuration/99-appendix.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index 4d81c90fd4..bb05371427 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -231,8 +231,8 @@ This page provides a comprehensive overview of the configuration keys available | `hdds.scm.block.deletion.per-interval.max` | 500000 | `SCM`, `DELETION` | Maximum number of blocks which SCM processes during an interval. 
The block num is counted at the replica level.If SCM has 100000 blocks which need to be deleted and the configuration is 5000 then it would only send 5000 blocks for deletion to the datanodes. | | `hdds.scm.block.deletion.txn.dn.commit.map.limit` | 5000000 | `SCM` | This value indicates the size of the transactionToDNsCommitMap after which we will skip one round of scm block deleting interval. | | `hdds.scm.ec.pipeline.choose.policy.impl` | org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy | `SCM`, `PIPELINE` | Sets the policy for choosing an EC pipeline. The value should be the full name of a class which implements org.apache.hadoop.hdds.scm.PipelineChoosePolicy. The class decides which pipeline will be used when selecting an EC Pipeline. If not set, org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy will be used as default value. One of the following values can be used: (1) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy : chooses a pipeline randomly. (2) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.HealthyPipelineChoosePolicy : chooses a healthy pipeline randomly. (3) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.CapacityPipelineChoosePolicy : chooses the pipeline with lower utilization from two random pipelines. Note that random choose method will be executed twice in this policy.(4) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RoundRobinPipelineChoosePolicy : chooses a pipeline in a round robin fashion. Intended for troubleshooting and testing purposes only. | -| `hdds.scm.http.auth.kerberos.keytab` | | `SECURITY` | The keytab file used by SCM http server to login as its service principal. | -| `hdds.scm.http.auth.kerberos.principal` | | `SECURITY` | This Kerberos principal is used when communicating to the HTTP server of SCM.The protocol used is SPNEGO. 
| +| `hdds.scm.http.auth.kerberos.keytab` | /etc/security/keytabs/HTTP.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM http server to login as its service principal if SPNEGO is enabled for SCM http server. | +| `hdds.scm.http.auth.kerberos.principal` | HTTP/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | SCM http server service principal if SPNEGO is enabled for SCM http server. | | `hdds.scm.http.auth.type` | simple | `OM`, `SECURITY`, `KERBEROS` | simple or kerberos. If kerberos is set, SPNEGO will be used for http authentication. | | `hdds.scm.kerberos.keytab.file` | /etc/security/keytabs/SCM.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM daemon to login as its service principal. | | `hdds.scm.kerberos.principal` | SCM/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | The SCM service principal. e.g. scm/_HOST@REALM.COM | From 9372ca2b39983de166a149d7313465a79efc8725 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Tue, 7 Apr 2026 08:42:05 +0000 Subject: [PATCH 05/14] HDDS-14870. [Auto] Update configuration documentation from ozone 346b65e937143864d8a99705dca532ab0c780b35 --- docs/05-administrator-guide/02-configuration/99-appendix.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index bb05371427..4a5d67211a 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -35,6 +35,7 @@ This page provides a comprehensive overview of the configuration keys available | `hdds.container.balancer.exclude.datanodes` | | `BALANCER` | A list of Datanode hostnames or ip addresses separated by commas. The Datanodes specified in this list are excluded from balancing. This configuration is empty by default. | | `hdds.container.balancer.include.containers` | | `BALANCER` | List of container IDs to include in balancing. 
Only these containers will be included in balancing. For example "1, 4, 5" or "1,4,5". | | `hdds.container.balancer.include.datanodes` | | `BALANCER` | A list of Datanode hostnames or ip addresses separated by commas. Only the Datanodes specified in this list are balanced. This configuration is empty by default and is applicable only if it is non-empty. | +| `hdds.container.balancer.include.non.standard.containers` | false | `BALANCER` | Whether to include containers in non-standard states, such as OVER_REPLICATED CLOSED/QUASI_CLOSED and HEALTHY QUASI_CLOSED containers. | | `hdds.container.balancer.iterations` | 10 | `BALANCER` | The number of iterations that Container Balancer will run for. | | `hdds.container.balancer.move.networkTopology.enable` | false | `BALANCER` | whether to take network topology into account when selecting a target for a source. This configuration is false by default. | | `hdds.container.balancer.move.replication.timeout` | 50m | `BALANCER` | The amount of time to allow a single container's replication from source to target as part of container move. For example, if "hdds.container.balancer.move.timeout" is 65 minutes, then out of those 65 minutes 50 minutes will be the deadline for replication to complete. | From 35995d3facca7d8f2cf3e7d1f06416ea04abd689 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Tue, 7 Apr 2026 10:45:43 +0000 Subject: [PATCH 06/14] HDDS-14973. 
[Auto] Update configuration documentation from ozone 9e775e6c8f6217562dda239b1e38a25e44265808 --- docs/05-administrator-guide/02-configuration/99-appendix.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index 4a5d67211a..7b347be393 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -20,7 +20,7 @@ This page provides a comprehensive overview of the configuration keys available | `hadoop.http.authentication.kerberos.principal` | HTTP/`${httpfs.hostname}`@`${kerberos.realm}` | | The HTTP Kerberos principal used by HttpFS in the HTTP endpoint. The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP SPNEGO specification. httpfs.authentication.kerberos.principal is deprecated. Instead use `hadoop.http.authentication.kerberos.principal`. | | `hadoop.http.authentication.signature.secret.file` | `${httpfs.config.dir}`/httpfs-signature.secret | | File containing the secret to sign HttpFS hadoop-auth cookies. This file should be readable only by the system user running HttpFS service. If multiple HttpFS servers are used in a load-balancer/round-robin fashion, they should share the secret file. If the secret file specified here does not exist, random secret is generated at startup time. httpfs.authentication.signature.secret.file is deprecated. Instead use `hadoop.http.authentication.signature.secret.file`. | | `hadoop.http.authentication.type` | simple | | Defines the authentication mechanism used by httpfs for its HTTP clients. Valid values are 'simple' or 'kerberos'. If using 'simple' HTTP clients must specify the username with the 'user.name' query string parameter. If using 'kerberos' HTTP clients must use HTTP SPNEGO or delegation tokens. httpfs.authentication.type is deprecated. 
Instead use `hadoop.http.authentication.type`. | -| `hadoop.http.idle_timeout.ms` | 60000 | `OZONE`, `PERFORMANCE`, `S3GATEWAY` | OM/SCM/DN/S3GATEWAY Server connection timeout in milliseconds. | +| `hadoop.http.idle_timeout.ms` | 60000 | | Httpfs Server connection timeout in milliseconds. | | `hadoop.http.max.request.header.size` | 65536 | | The maxmimum HTTP request header size. | | `hadoop.http.max.response.header.size` | 65536 | | The maxmimum HTTP response header size. | | `hadoop.http.max.threads` | 1000 | | The maxmimum number of threads. | @@ -351,7 +351,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.client.failover.max.attempts` | 500 | | Expert only. Ozone RpcClient attempts talking to each OzoneManager ipc.client.connect.max.retries (default = 10) number of times before failing over to another OzoneManager, if available. This parameter represents the number of times per request the client will failover before giving up. This value is kept high so that client does not give up trying to connect to OMs easily. | | `ozone.client.follower.read.default.consistency` | LINEARIZABLE_ALLOW_FOLLOWER | | The default consistency when client enables follower read. Currently, the supported follower read consistency are LINEARIZABLE_ALLOW_FOLLOWER and LOCAL_LEASE The default value is LINEARIZABLE_ALLOW_FOLLOWER to preserve the same strong consistency behavior when switching from leader-only read to follower read. | | `ozone.client.follower.read.enabled` | false | | Enable client to read from OM followers. If false, all client requests are sent to the OM leader. | -| `ozone.client.fs.default.bucket.layout` | FILE_SYSTEM_OPTIMIZED | `OZONE`, `CLIENT` | Default bucket layout value used when buckets are created using OFS. Supported values are LEGACY and FILE_SYSTEM_OPTIMIZED. FILE_SYSTEM_OPTIMIZED: This layout allows the bucket to support atomic rename/delete operations and also allows interoperability between S3 and FS APIs. 
Keys written via S3 API with a "/" delimiter will create intermediate directories. | +| `ozone.client.fs.default.bucket.layout` | FILE_SYSTEM_OPTIMIZED | `CLIENT` | The bucket layout used by buckets created using OFS. Valid values include FILE_SYSTEM_OPTIMIZED and LEGACY | | `ozone.client.hbase.enhancements.allowed` | false | `CLIENT` | When set to false, client-side HBase enhancement-related Ozone (experimental) features are disabled (not allowed to be enabled) regardless of whether those configs are set. Here is the list of configs and values overridden when this config is set to false: 1. `ozone.fs.hsync.enabled` = false 2. `ozone.client.incremental.chunk.list` = false 3. `ozone.client.stream.putblock.piggybacking` = false 4. `ozone.client.key.write.concurrency` = 1 A warning message will be printed if any of the above configs are overridden by this. | | `ozone.client.incremental.chunk.list` | false | `CLIENT` | Client PutBlock request can choose incremental chunk list rather than full chunk list to optimize performance. Critical to HBase. EC does not support this feature. Can be enabled only when `ozone.client.hbase.enhancements.allowed` = true | | `ozone.client.key.latest.version.location` | true | `OZONE`, `CLIENT` | Ozone client gets the latest version location. | @@ -359,7 +359,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.client.key.write.concurrency` | 1 | `CLIENT` | Maximum concurrent writes allowed on each key. Defaults to 1 which matches the behavior before HDDS-9844. For unlimited write concurrency, set this to -1 or any negative integer value. Any value other than 1 is effective only when `ozone.client.hbase.enhancements.allowed` = true | | `ozone.client.leader.read.default.consistency` | DEFAULT | | The default consistency when client disables follower read. Currently, the supported leader read consistency are DEFAULT and LINEARIZABLE_LEADER_ONLY. 
The default value is DEFAULT for backward compatibility reason which is mostly strongly consistent. | | `ozone.client.list.cache` | 1000 | `OZONE`, `PERFORMANCE` | Configuration property to configure the cache size of client list calls. | -| `ozone.client.max.ec.stripe.write.retries` | 10 | `CLIENT` | When EC stripe write failed, client will request to allocate new block group and write the failed stripe into new block group. If the same stripe failure continued in newly acquired block group also, then it will retry by requesting to allocate new block group again. This configuration is used to limit these number of retries. By default the number of retries are 10. | +| `ozone.client.max.ec.stripe.write.retries` | 10 | `CLIENT` | Ozone EC client to retry stripe to new block group on failures. | | `ozone.client.max.retries` | 5 | `CLIENT` | Maximum number of retries by Ozone Client on encountering exception while writing a key | | `ozone.client.read.max.retries` | 3 | `CLIENT` | Maximum number of retries by Ozone Client on encountering connectivity exception when reading a key. | | `ozone.client.read.retry.interval` | 1 | `CLIENT` | Indicates the time duration in seconds a client will wait before retrying a read key request on encountering a connectivity exception from Datanodes. By default the interval is 1 second | From ccfe37b2c1bea1186c49a19a3bc2d2754c5e04c1 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Tue, 7 Apr 2026 12:57:47 +0000 Subject: [PATCH 07/14] HDDS-7373. 
[Auto] Update configuration documentation from ozone 33a5320e4ec43725287309a0b10a37a3d679b6a6 --- docs/05-administrator-guide/02-configuration/99-appendix.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index 7b347be393..895dd294ea 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -232,8 +232,8 @@ This page provides a comprehensive overview of the configuration keys available | `hdds.scm.block.deletion.per-interval.max` | 500000 | `SCM`, `DELETION` | Maximum number of blocks which SCM processes during an interval. The block num is counted at the replica level.If SCM has 100000 blocks which need to be deleted and the configuration is 5000 then it would only send 5000 blocks for deletion to the datanodes. | | `hdds.scm.block.deletion.txn.dn.commit.map.limit` | 5000000 | `SCM` | This value indicates the size of the transactionToDNsCommitMap after which we will skip one round of scm block deleting interval. | | `hdds.scm.ec.pipeline.choose.policy.impl` | org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy | `SCM`, `PIPELINE` | Sets the policy for choosing an EC pipeline. The value should be the full name of a class which implements org.apache.hadoop.hdds.scm.PipelineChoosePolicy. The class decides which pipeline will be used when selecting an EC Pipeline. If not set, org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy will be used as default value. One of the following values can be used: (1) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy : chooses a pipeline randomly. (2) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.HealthyPipelineChoosePolicy : chooses a healthy pipeline randomly. 
(3) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.CapacityPipelineChoosePolicy : chooses the pipeline with lower utilization from two random pipelines. Note that random choose method will be executed twice in this policy.(4) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RoundRobinPipelineChoosePolicy : chooses a pipeline in a round robin fashion. Intended for troubleshooting and testing purposes only. | -| `hdds.scm.http.auth.kerberos.keytab` | /etc/security/keytabs/HTTP.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM http server to login as its service principal if SPNEGO is enabled for SCM http server. | -| `hdds.scm.http.auth.kerberos.principal` | HTTP/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | SCM http server service principal if SPNEGO is enabled for SCM http server. | +| `hdds.scm.http.auth.kerberos.keytab` | | `SECURITY` | The keytab file used by SCM http server to login as its service principal. | +| `hdds.scm.http.auth.kerberos.principal` | | `SECURITY` | This Kerberos principal is used when communicating to the HTTP server of SCM.The protocol used is SPNEGO. | | `hdds.scm.http.auth.type` | simple | `OM`, `SECURITY`, `KERBEROS` | simple or kerberos. If kerberos is set, SPNEGO will be used for http authentication. | | `hdds.scm.kerberos.keytab.file` | /etc/security/keytabs/SCM.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM daemon to login as its service principal. | | `hdds.scm.kerberos.principal` | SCM/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | The SCM service principal. e.g. scm/_HOST@REALM.COM | From b6046eb7c398624c479cf03e9917b5975dbad66d Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Wed, 8 Apr 2026 04:44:34 +0000 Subject: [PATCH 08/14] HDDS-13108. 
[Auto] Update configuration documentation from ozone 9ff7eaa6c19bbd2e8aa951197127c503fe5b2159 --- docs/05-administrator-guide/02-configuration/99-appendix.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index 895dd294ea..4a1e12548f 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -127,7 +127,9 @@ This page provides a comprehensive overview of the configuration keys available | `hdds.datanode.disk.check.io.failures.tolerated` | 1 | `DATANODE` | The number of IO tests out of the last `hdds.datanode.disk.check.io.test.count` test run that are allowed to fail before the volume is marked as failed. | | `hdds.datanode.disk.check.io.file.size` | 100B | `DATANODE` | The size of the temporary file that will be synced to the disk and read back to assess its health. The contents of the file will be stored in memory during the duration of the check. | | `hdds.datanode.disk.check.io.test.count` | 3 | `DATANODE` | The number of IO tests required to determine if a disk has failed. Each disk check does one IO test. The volume will be failed if more than `hdds.datanode.disk.check.io.failures.tolerated` out of the last `hdds.datanode.disk.check.io.test.count` runs failed. Set to 0 to disable disk IO checks. | +| `hdds.datanode.disk.check.io.test.enabled` | true | `DATANODE` | The configuration to enable or disable disk IO checks. | | `hdds.datanode.disk.check.min.gap` | 10m | `DATANODE` | The minimum gap between two successive checks of the same Datanode volume. Unit could be defined with postfix (ns,ms,s,m,h,d). | +| `hdds.datanode.disk.check.sliding.window.timeout` | 70m | `DATANODE` | Time interval after which a disk check failure result stored in the sliding window will expire. 
Do not set the window timeout period to less than or equal to the disk check interval period, or failures can be missed across sparse checks; e.g., every 120m interval with a 60m window rarely accumulates enough failed events. Unit could be defined with postfix (ns,ms,s,m,h,d). | | `hdds.datanode.disk.check.timeout` | 10m | `DATANODE` | Maximum allowed time for a disk check to complete. If the check does not complete within this time interval then the disk is declared as failed. Unit could be defined with postfix (ns,ms,s,m,h,d). | | `hdds.datanode.dns.interface` | default | `OZONE`, `DATANODE` | The name of the Network Interface from which a Datanode should report its IP address. e.g. eth2. This setting may be required for some multi-homed nodes where the Datanodes are assigned multiple hostnames and it is desirable for the Datanodes to use a non-default hostname. | | `hdds.datanode.dns.nameserver` | default | `OZONE`, `DATANODE` | The host name or IP address of the name server (DNS) which a Datanode should use to determine its own host name. | From f471b67ca21e387ed548d7c618a8801c22bf4f22 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Wed, 8 Apr 2026 05:39:25 +0000 Subject: [PATCH 09/14] HDDS-14103. [Auto] Update configuration documentation from ozone 49d6c0b6cbd3b1ceb4fb49924f72e2495e77a1af --- .../02-configuration/99-appendix.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index 4a1e12548f..1d1a28e891 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -20,7 +20,7 @@ This page provides a comprehensive overview of the configuration keys available | `hadoop.http.authentication.kerberos.principal` | HTTP/`${httpfs.hostname}`@`${kerberos.realm}` | | The HTTP Kerberos principal used by HttpFS in the HTTP endpoint.
The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP SPNEGO specification. httpfs.authentication.kerberos.principal is deprecated. Instead use `hadoop.http.authentication.kerberos.principal`. | | `hadoop.http.authentication.signature.secret.file` | `${httpfs.config.dir}`/httpfs-signature.secret | | File containing the secret to sign HttpFS hadoop-auth cookies. This file should be readable only by the system user running HttpFS service. If multiple HttpFS servers are used in a load-balancer/round-robin fashion, they should share the secret file. If the secret file specified here does not exist, random secret is generated at startup time. httpfs.authentication.signature.secret.file is deprecated. Instead use `hadoop.http.authentication.signature.secret.file`. | | `hadoop.http.authentication.type` | simple | | Defines the authentication mechanism used by httpfs for its HTTP clients. Valid values are 'simple' or 'kerberos'. If using 'simple' HTTP clients must specify the username with the 'user.name' query string parameter. If using 'kerberos' HTTP clients must use HTTP SPNEGO or delegation tokens. httpfs.authentication.type is deprecated. Instead use `hadoop.http.authentication.type`. | -| `hadoop.http.idle_timeout.ms` | 60000 | | Httpfs Server connection timeout in milliseconds. | +| `hadoop.http.idle_timeout.ms` | 60000 | `OZONE`, `PERFORMANCE`, `S3GATEWAY` | OM/SCM/DN/S3GATEWAY Server connection timeout in milliseconds. | | `hadoop.http.max.request.header.size` | 65536 | | The maxmimum HTTP request header size. | | `hadoop.http.max.response.header.size` | 65536 | | The maxmimum HTTP response header size. | | `hadoop.http.max.threads` | 1000 | | The maxmimum number of threads. | @@ -234,8 +234,8 @@ This page provides a comprehensive overview of the configuration keys available | `hdds.scm.block.deletion.per-interval.max` | 500000 | `SCM`, `DELETION` | Maximum number of blocks which SCM processes during an interval. 
The block num is counted at the replica level.If SCM has 100000 blocks which need to be deleted and the configuration is 5000 then it would only send 5000 blocks for deletion to the datanodes. | | `hdds.scm.block.deletion.txn.dn.commit.map.limit` | 5000000 | `SCM` | This value indicates the size of the transactionToDNsCommitMap after which we will skip one round of scm block deleting interval. | | `hdds.scm.ec.pipeline.choose.policy.impl` | org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy | `SCM`, `PIPELINE` | Sets the policy for choosing an EC pipeline. The value should be the full name of a class which implements org.apache.hadoop.hdds.scm.PipelineChoosePolicy. The class decides which pipeline will be used when selecting an EC Pipeline. If not set, org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy will be used as default value. One of the following values can be used: (1) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy : chooses a pipeline randomly. (2) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.HealthyPipelineChoosePolicy : chooses a healthy pipeline randomly. (3) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.CapacityPipelineChoosePolicy : chooses the pipeline with lower utilization from two random pipelines. Note that random choose method will be executed twice in this policy.(4) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RoundRobinPipelineChoosePolicy : chooses a pipeline in a round robin fashion. Intended for troubleshooting and testing purposes only. | -| `hdds.scm.http.auth.kerberos.keytab` | | `SECURITY` | The keytab file used by SCM http server to login as its service principal. | -| `hdds.scm.http.auth.kerberos.principal` | | `SECURITY` | This Kerberos principal is used when communicating to the HTTP server of SCM.The protocol used is SPNEGO. 
| +| `hdds.scm.http.auth.kerberos.keytab` | /etc/security/keytabs/HTTP.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM http server to login as its service principal if SPNEGO is enabled for SCM http server. | +| `hdds.scm.http.auth.kerberos.principal` | HTTP/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | SCM http server service principal if SPNEGO is enabled for SCM http server. | | `hdds.scm.http.auth.type` | simple | `OM`, `SECURITY`, `KERBEROS` | simple or kerberos. If kerberos is set, SPNEGO will be used for http authentication. | | `hdds.scm.kerberos.keytab.file` | /etc/security/keytabs/SCM.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM daemon to login as its service principal. | | `hdds.scm.kerberos.principal` | SCM/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | The SCM service principal. e.g. scm/_HOST@REALM.COM | @@ -353,7 +353,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.client.failover.max.attempts` | 500 | | Expert only. Ozone RpcClient attempts talking to each OzoneManager ipc.client.connect.max.retries (default = 10) number of times before failing over to another OzoneManager, if available. This parameter represents the number of times per request the client will failover before giving up. This value is kept high so that client does not give up trying to connect to OMs easily. | | `ozone.client.follower.read.default.consistency` | LINEARIZABLE_ALLOW_FOLLOWER | | The default consistency when client enables follower read. Currently, the supported follower read consistency are LINEARIZABLE_ALLOW_FOLLOWER and LOCAL_LEASE The default value is LINEARIZABLE_ALLOW_FOLLOWER to preserve the same strong consistency behavior when switching from leader-only read to follower read. | | `ozone.client.follower.read.enabled` | false | | Enable client to read from OM followers. If false, all client requests are sent to the OM leader. 
| -| `ozone.client.fs.default.bucket.layout` | FILE_SYSTEM_OPTIMIZED | `CLIENT` | The bucket layout used by buckets created using OFS. Valid values include FILE_SYSTEM_OPTIMIZED and LEGACY | +| `ozone.client.fs.default.bucket.layout` | FILE_SYSTEM_OPTIMIZED | `OZONE`, `CLIENT` | Default bucket layout value used when buckets are created using OFS. Supported values are LEGACY and FILE_SYSTEM_OPTIMIZED. FILE_SYSTEM_OPTIMIZED: This layout allows the bucket to support atomic rename/delete operations and also allows interoperability between S3 and FS APIs. Keys written via S3 API with a "/" delimiter will create intermediate directories. | | `ozone.client.hbase.enhancements.allowed` | false | `CLIENT` | When set to false, client-side HBase enhancement-related Ozone (experimental) features are disabled (not allowed to be enabled) regardless of whether those configs are set. Here is the list of configs and values overridden when this config is set to false: 1. `ozone.fs.hsync.enabled` = false 2. `ozone.client.incremental.chunk.list` = false 3. `ozone.client.stream.putblock.piggybacking` = false 4. `ozone.client.key.write.concurrency` = 1 A warning message will be printed if any of the above configs are overridden by this. | | `ozone.client.incremental.chunk.list` | false | `CLIENT` | Client PutBlock request can choose incremental chunk list rather than full chunk list to optimize performance. Critical to HBase. EC does not support this feature. Can be enabled only when `ozone.client.hbase.enhancements.allowed` = true | | `ozone.client.key.latest.version.location` | true | `OZONE`, `CLIENT` | Ozone client gets the latest version location. | @@ -361,7 +361,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.client.key.write.concurrency` | 1 | `CLIENT` | Maximum concurrent writes allowed on each key. Defaults to 1 which matches the behavior before HDDS-9844. 
For unlimited write concurrency, set this to -1 or any negative integer value. Any value other than 1 is effective only when `ozone.client.hbase.enhancements.allowed` = true | | `ozone.client.leader.read.default.consistency` | DEFAULT | | The default consistency when client disables follower read. Currently, the supported leader read consistency are DEFAULT and LINEARIZABLE_LEADER_ONLY. The default value is DEFAULT for backward compatibility reason which is mostly strongly consistent. | | `ozone.client.list.cache` | 1000 | `OZONE`, `PERFORMANCE` | Configuration property to configure the cache size of client list calls. | -| `ozone.client.max.ec.stripe.write.retries` | 10 | `CLIENT` | Ozone EC client to retry stripe to new block group on failures. | +| `ozone.client.max.ec.stripe.write.retries` | 10 | `CLIENT` | When EC stripe write failed, client will request to allocate new block group and write the failed stripe into new block group. If the same stripe failure continued in newly acquired block group also, then it will retry by requesting to allocate new block group again. This configuration is used to limit these number of retries. By default the number of retries are 10. | | `ozone.client.max.retries` | 5 | `CLIENT` | Maximum number of retries by Ozone Client on encountering exception while writing a key | | `ozone.client.read.max.retries` | 3 | `CLIENT` | Maximum number of retries by Ozone Client on encountering connectivity exception when reading a key. | | `ozone.client.read.retry.interval` | 1 | `CLIENT` | Indicates the time duration in seconds a client will wait before retrying a read key request on encountering a connectivity exception from Datanodes. By default the interval is 1 second | From b40244f7cc807761407caf4bff2b6160a78fd45b Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Thu, 9 Apr 2026 01:22:58 +0000 Subject: [PATCH 10/14] HDDS-8703. 
[Auto] Update configuration documentation from ozone d2d374b67499ef477576b9ea1f13c0eab84e240b --- .../02-configuration/99-appendix.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index 1d1a28e891..4a1e12548f 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -20,7 +20,7 @@ This page provides a comprehensive overview of the configuration keys available | `hadoop.http.authentication.kerberos.principal` | HTTP/`${httpfs.hostname}`@`${kerberos.realm}` | | The HTTP Kerberos principal used by HttpFS in the HTTP endpoint. The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP SPNEGO specification. httpfs.authentication.kerberos.principal is deprecated. Instead use `hadoop.http.authentication.kerberos.principal`. | | `hadoop.http.authentication.signature.secret.file` | `${httpfs.config.dir}`/httpfs-signature.secret | | File containing the secret to sign HttpFS hadoop-auth cookies. This file should be readable only by the system user running HttpFS service. If multiple HttpFS servers are used in a load-balancer/round-robin fashion, they should share the secret file. If the secret file specified here does not exist, random secret is generated at startup time. httpfs.authentication.signature.secret.file is deprecated. Instead use `hadoop.http.authentication.signature.secret.file`. | | `hadoop.http.authentication.type` | simple | | Defines the authentication mechanism used by httpfs for its HTTP clients. Valid values are 'simple' or 'kerberos'. If using 'simple' HTTP clients must specify the username with the 'user.name' query string parameter. If using 'kerberos' HTTP clients must use HTTP SPNEGO or delegation tokens. httpfs.authentication.type is deprecated. Instead use `hadoop.http.authentication.type`. 
| -| `hadoop.http.idle_timeout.ms` | 60000 | `OZONE`, `PERFORMANCE`, `S3GATEWAY` | OM/SCM/DN/S3GATEWAY Server connection timeout in milliseconds. | +| `hadoop.http.idle_timeout.ms` | 60000 | | Httpfs Server connection timeout in milliseconds. | | `hadoop.http.max.request.header.size` | 65536 | | The maxmimum HTTP request header size. | | `hadoop.http.max.response.header.size` | 65536 | | The maxmimum HTTP response header size. | | `hadoop.http.max.threads` | 1000 | | The maxmimum number of threads. | @@ -234,8 +234,8 @@ This page provides a comprehensive overview of the configuration keys available | `hdds.scm.block.deletion.per-interval.max` | 500000 | `SCM`, `DELETION` | Maximum number of blocks which SCM processes during an interval. The block num is counted at the replica level.If SCM has 100000 blocks which need to be deleted and the configuration is 5000 then it would only send 5000 blocks for deletion to the datanodes. | | `hdds.scm.block.deletion.txn.dn.commit.map.limit` | 5000000 | `SCM` | This value indicates the size of the transactionToDNsCommitMap after which we will skip one round of scm block deleting interval. | | `hdds.scm.ec.pipeline.choose.policy.impl` | org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy | `SCM`, `PIPELINE` | Sets the policy for choosing an EC pipeline. The value should be the full name of a class which implements org.apache.hadoop.hdds.scm.PipelineChoosePolicy. The class decides which pipeline will be used when selecting an EC Pipeline. If not set, org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy will be used as default value. One of the following values can be used: (1) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy : chooses a pipeline randomly. (2) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.HealthyPipelineChoosePolicy : chooses a healthy pipeline randomly. 
(3) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.CapacityPipelineChoosePolicy : chooses the pipeline with lower utilization from two random pipelines. Note that random choose method will be executed twice in this policy.(4) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RoundRobinPipelineChoosePolicy : chooses a pipeline in a round robin fashion. Intended for troubleshooting and testing purposes only. | -| `hdds.scm.http.auth.kerberos.keytab` | /etc/security/keytabs/HTTP.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM http server to login as its service principal if SPNEGO is enabled for SCM http server. | -| `hdds.scm.http.auth.kerberos.principal` | HTTP/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | SCM http server service principal if SPNEGO is enabled for SCM http server. | +| `hdds.scm.http.auth.kerberos.keytab` | | `SECURITY` | The keytab file used by SCM http server to login as its service principal. | +| `hdds.scm.http.auth.kerberos.principal` | | `SECURITY` | This Kerberos principal is used when communicating to the HTTP server of SCM.The protocol used is SPNEGO. | | `hdds.scm.http.auth.type` | simple | `OM`, `SECURITY`, `KERBEROS` | simple or kerberos. If kerberos is set, SPNEGO will be used for http authentication. | | `hdds.scm.kerberos.keytab.file` | /etc/security/keytabs/SCM.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM daemon to login as its service principal. | | `hdds.scm.kerberos.principal` | SCM/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | The SCM service principal. e.g. scm/_HOST@REALM.COM | @@ -353,7 +353,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.client.failover.max.attempts` | 500 | | Expert only. Ozone RpcClient attempts talking to each OzoneManager ipc.client.connect.max.retries (default = 10) number of times before failing over to another OzoneManager, if available. 
This parameter represents the number of times per request the client will failover before giving up. This value is kept high so that client does not give up trying to connect to OMs easily. | | `ozone.client.follower.read.default.consistency` | LINEARIZABLE_ALLOW_FOLLOWER | | The default consistency when client enables follower read. Currently, the supported follower read consistency are LINEARIZABLE_ALLOW_FOLLOWER and LOCAL_LEASE The default value is LINEARIZABLE_ALLOW_FOLLOWER to preserve the same strong consistency behavior when switching from leader-only read to follower read. | | `ozone.client.follower.read.enabled` | false | | Enable client to read from OM followers. If false, all client requests are sent to the OM leader. | -| `ozone.client.fs.default.bucket.layout` | FILE_SYSTEM_OPTIMIZED | `OZONE`, `CLIENT` | Default bucket layout value used when buckets are created using OFS. Supported values are LEGACY and FILE_SYSTEM_OPTIMIZED. FILE_SYSTEM_OPTIMIZED: This layout allows the bucket to support atomic rename/delete operations and also allows interoperability between S3 and FS APIs. Keys written via S3 API with a "/" delimiter will create intermediate directories. | +| `ozone.client.fs.default.bucket.layout` | FILE_SYSTEM_OPTIMIZED | `CLIENT` | The bucket layout used by buckets created using OFS. Valid values include FILE_SYSTEM_OPTIMIZED and LEGACY | | `ozone.client.hbase.enhancements.allowed` | false | `CLIENT` | When set to false, client-side HBase enhancement-related Ozone (experimental) features are disabled (not allowed to be enabled) regardless of whether those configs are set. Here is the list of configs and values overridden when this config is set to false: 1. `ozone.fs.hsync.enabled` = false 2. `ozone.client.incremental.chunk.list` = false 3. `ozone.client.stream.putblock.piggybacking` = false 4. `ozone.client.key.write.concurrency` = 1 A warning message will be printed if any of the above configs are overridden by this. 
| | `ozone.client.incremental.chunk.list` | false | `CLIENT` | Client PutBlock request can choose incremental chunk list rather than full chunk list to optimize performance. Critical to HBase. EC does not support this feature. Can be enabled only when `ozone.client.hbase.enhancements.allowed` = true | | `ozone.client.key.latest.version.location` | true | `OZONE`, `CLIENT` | Ozone client gets the latest version location. | @@ -361,7 +361,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.client.key.write.concurrency` | 1 | `CLIENT` | Maximum concurrent writes allowed on each key. Defaults to 1 which matches the behavior before HDDS-9844. For unlimited write concurrency, set this to -1 or any negative integer value. Any value other than 1 is effective only when `ozone.client.hbase.enhancements.allowed` = true | | `ozone.client.leader.read.default.consistency` | DEFAULT | | The default consistency when client disables follower read. Currently, the supported leader read consistency are DEFAULT and LINEARIZABLE_LEADER_ONLY. The default value is DEFAULT for backward compatibility reason which is mostly strongly consistent. | | `ozone.client.list.cache` | 1000 | `OZONE`, `PERFORMANCE` | Configuration property to configure the cache size of client list calls. | -| `ozone.client.max.ec.stripe.write.retries` | 10 | `CLIENT` | When EC stripe write failed, client will request to allocate new block group and write the failed stripe into new block group. If the same stripe failure continued in newly acquired block group also, then it will retry by requesting to allocate new block group again. This configuration is used to limit these number of retries. By default the number of retries are 10. | +| `ozone.client.max.ec.stripe.write.retries` | 10 | `CLIENT` | Ozone EC client to retry stripe to new block group on failures. 
| | `ozone.client.max.retries` | 5 | `CLIENT` | Maximum number of retries by Ozone Client on encountering exception while writing a key | | `ozone.client.read.max.retries` | 3 | `CLIENT` | Maximum number of retries by Ozone Client on encountering connectivity exception when reading a key. | | `ozone.client.read.retry.interval` | 1 | `CLIENT` | Indicates the time duration in seconds a client will wait before retrying a read key request on encountering a connectivity exception from Datanodes. By default the interval is 1 second | From a5451ea1731667b9bbf140d70b6c9d7cc897e552 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Thu, 9 Apr 2026 04:55:49 +0000 Subject: [PATCH 11/14] HDDS-14756. [Auto] Update configuration documentation from ozone ab71c6af45f6a916eafbfdad9f33ab568a6487f5 --- docs/05-administrator-guide/02-configuration/99-appendix.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index 4a1e12548f..6a8b12d2ff 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -20,7 +20,7 @@ This page provides a comprehensive overview of the configuration keys available | `hadoop.http.authentication.kerberos.principal` | HTTP/`${httpfs.hostname}`@`${kerberos.realm}` | | The HTTP Kerberos principal used by HttpFS in the HTTP endpoint. The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP SPNEGO specification. httpfs.authentication.kerberos.principal is deprecated. Instead use `hadoop.http.authentication.kerberos.principal`. | | `hadoop.http.authentication.signature.secret.file` | `${httpfs.config.dir}`/httpfs-signature.secret | | File containing the secret to sign HttpFS hadoop-auth cookies. This file should be readable only by the system user running HttpFS service. 
If multiple HttpFS servers are used in a load-balancer/round-robin fashion, they should share the secret file. If the secret file specified here does not exist, a random secret is generated at startup time. httpfs.authentication.signature.secret.file is deprecated. Instead use `hadoop.http.authentication.signature.secret.file`. | | `hadoop.http.authentication.type` | simple | | Defines the authentication mechanism used by httpfs for its HTTP clients. Valid values are 'simple' or 'kerberos'. If using 'simple' HTTP clients must specify the username with the 'user.name' query string parameter. If using 'kerberos' HTTP clients must use HTTP SPNEGO or delegation tokens. httpfs.authentication.type is deprecated. Instead use `hadoop.http.authentication.type`. | -| `hadoop.http.idle_timeout.ms` | 60000 | | Httpfs Server connection timeout in milliseconds. | +| `hadoop.http.idle_timeout.ms` | 60000 | `OZONE`, `PERFORMANCE`, `S3GATEWAY` | OM/SCM/DN/S3GATEWAY Server connection timeout in milliseconds. | | `hadoop.http.max.request.header.size` | 65536 | | The maximum HTTP request header size. | | `hadoop.http.max.response.header.size` | 65536 | | The maximum HTTP response header size. | | `hadoop.http.max.threads` | 1000 | | The maximum number of threads. | From 88e5bb74efdec887eaa0b0d660818ddb185f9f0b Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Thu, 9 Apr 2026 11:51:05 +0000 Subject: [PATCH 12/14] HDDS-14660.
[Auto] Update configuration documentation from ozone 45ffbf389cd514d02638a4e339e14614aa9e5d4f --- docs/05-administrator-guide/02-configuration/99-appendix.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index 6a8b12d2ff..e6c3626fe5 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -353,7 +353,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.client.failover.max.attempts` | 500 | | Expert only. Ozone RpcClient attempts talking to each OzoneManager ipc.client.connect.max.retries (default = 10) number of times before failing over to another OzoneManager, if available. This parameter represents the number of times per request the client will failover before giving up. This value is kept high so that client does not give up trying to connect to OMs easily. | | `ozone.client.follower.read.default.consistency` | LINEARIZABLE_ALLOW_FOLLOWER | | The default consistency when client enables follower read. Currently, the supported follower read consistency are LINEARIZABLE_ALLOW_FOLLOWER and LOCAL_LEASE The default value is LINEARIZABLE_ALLOW_FOLLOWER to preserve the same strong consistency behavior when switching from leader-only read to follower read. | | `ozone.client.follower.read.enabled` | false | | Enable client to read from OM followers. If false, all client requests are sent to the OM leader. | -| `ozone.client.fs.default.bucket.layout` | FILE_SYSTEM_OPTIMIZED | `CLIENT` | The bucket layout used by buckets created using OFS. Valid values include FILE_SYSTEM_OPTIMIZED and LEGACY | +| `ozone.client.fs.default.bucket.layout` | FILE_SYSTEM_OPTIMIZED | `OZONE`, `CLIENT` | Default bucket layout value used when buckets are created using OFS. Supported values are LEGACY and FILE_SYSTEM_OPTIMIZED. 
FILE_SYSTEM_OPTIMIZED: This layout allows the bucket to support atomic rename/delete operations and also allows interoperability between S3 and FS APIs. Keys written via S3 API with a "/" delimiter will create intermediate directories. | | `ozone.client.hbase.enhancements.allowed` | false | `CLIENT` | When set to false, client-side HBase enhancement-related Ozone (experimental) features are disabled (not allowed to be enabled) regardless of whether those configs are set. Here is the list of configs and values overridden when this config is set to false: 1. `ozone.fs.hsync.enabled` = false 2. `ozone.client.incremental.chunk.list` = false 3. `ozone.client.stream.putblock.piggybacking` = false 4. `ozone.client.key.write.concurrency` = 1 A warning message will be printed if any of the above configs are overridden by this. | | `ozone.client.incremental.chunk.list` | false | `CLIENT` | Client PutBlock request can choose incremental chunk list rather than full chunk list to optimize performance. Critical to HBase. EC does not support this feature. Can be enabled only when `ozone.client.hbase.enhancements.allowed` = true | | `ozone.client.key.latest.version.location` | true | `OZONE`, `CLIENT` | Ozone client gets the latest version location. | @@ -361,7 +361,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.client.key.write.concurrency` | 1 | `CLIENT` | Maximum concurrent writes allowed on each key. Defaults to 1 which matches the behavior before HDDS-9844. For unlimited write concurrency, set this to -1 or any negative integer value. Any value other than 1 is effective only when `ozone.client.hbase.enhancements.allowed` = true | | `ozone.client.leader.read.default.consistency` | DEFAULT | | The default consistency when client disables follower read. Currently, the supported leader read consistency are DEFAULT and LINEARIZABLE_LEADER_ONLY. 
The default value is DEFAULT for backward compatibility reason which is mostly strongly consistent. | | `ozone.client.list.cache` | 1000 | `OZONE`, `PERFORMANCE` | Configuration property to configure the cache size of client list calls. | -| `ozone.client.max.ec.stripe.write.retries` | 10 | `CLIENT` | Ozone EC client to retry stripe to new block group on failures. | +| `ozone.client.max.ec.stripe.write.retries` | 10 | `CLIENT` | When EC stripe write failed, client will request to allocate new block group and write the failed stripe into new block group. If the same stripe failure continued in newly acquired block group also, then it will retry by requesting to allocate new block group again. This configuration is used to limit these number of retries. By default the number of retries are 10. | | `ozone.client.max.retries` | 5 | `CLIENT` | Maximum number of retries by Ozone Client on encountering exception while writing a key | | `ozone.client.read.max.retries` | 3 | `CLIENT` | Maximum number of retries by Ozone Client on encountering connectivity exception when reading a key. | | `ozone.client.read.retry.interval` | 1 | `CLIENT` | Indicates the time duration in seconds a client will wait before retrying a read key request on encountering a connectivity exception from Datanodes. By default the interval is 1 second | From 0a0b122ff7f33bc5d0c7005697751adbcf877e65 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Thu, 9 Apr 2026 13:38:06 +0000 Subject: [PATCH 13/14] HDDS-14843. 
[Auto] Update configuration documentation from ozone 4d8c38d47b45755f132952e94aa951920c713857 --- .../02-configuration/99-appendix.md | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index e6c3626fe5..db5bfd70d6 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -20,7 +20,7 @@ This page provides a comprehensive overview of the configuration keys available | `hadoop.http.authentication.kerberos.principal` | HTTP/`${httpfs.hostname}`@`${kerberos.realm}` | | The HTTP Kerberos principal used by HttpFS in the HTTP endpoint. The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP SPNEGO specification. httpfs.authentication.kerberos.principal is deprecated. Instead use `hadoop.http.authentication.kerberos.principal`. | | `hadoop.http.authentication.signature.secret.file` | `${httpfs.config.dir}`/httpfs-signature.secret | | File containing the secret to sign HttpFS hadoop-auth cookies. This file should be readable only by the system user running HttpFS service. If multiple HttpFS servers are used in a load-balancer/round-robin fashion, they should share the secret file. If the secret file specified here does not exist, random secret is generated at startup time. httpfs.authentication.signature.secret.file is deprecated. Instead use `hadoop.http.authentication.signature.secret.file`. | | `hadoop.http.authentication.type` | simple | | Defines the authentication mechanism used by httpfs for its HTTP clients. Valid values are 'simple' or 'kerberos'. If using 'simple' HTTP clients must specify the username with the 'user.name' query string parameter. If using 'kerberos' HTTP clients must use HTTP SPNEGO or delegation tokens. httpfs.authentication.type is deprecated. Instead use `hadoop.http.authentication.type`. 
| -| `hadoop.http.idle_timeout.ms` | 60000 | `OZONE`, `PERFORMANCE`, `S3GATEWAY` | OM/SCM/DN/S3GATEWAY Server connection timeout in milliseconds. | +| `hadoop.http.idle_timeout.ms` | 60000 | | Httpfs Server connection timeout in milliseconds. | | `hadoop.http.max.request.header.size` | 65536 | | The maxmimum HTTP request header size. | | `hadoop.http.max.response.header.size` | 65536 | | The maxmimum HTTP response header size. | | `hadoop.http.max.threads` | 1000 | | The maxmimum number of threads. | @@ -234,8 +234,8 @@ This page provides a comprehensive overview of the configuration keys available | `hdds.scm.block.deletion.per-interval.max` | 500000 | `SCM`, `DELETION` | Maximum number of blocks which SCM processes during an interval. The block num is counted at the replica level.If SCM has 100000 blocks which need to be deleted and the configuration is 5000 then it would only send 5000 blocks for deletion to the datanodes. | | `hdds.scm.block.deletion.txn.dn.commit.map.limit` | 5000000 | `SCM` | This value indicates the size of the transactionToDNsCommitMap after which we will skip one round of scm block deleting interval. | | `hdds.scm.ec.pipeline.choose.policy.impl` | org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy | `SCM`, `PIPELINE` | Sets the policy for choosing an EC pipeline. The value should be the full name of a class which implements org.apache.hadoop.hdds.scm.PipelineChoosePolicy. The class decides which pipeline will be used when selecting an EC Pipeline. If not set, org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy will be used as default value. One of the following values can be used: (1) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy : chooses a pipeline randomly. (2) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.HealthyPipelineChoosePolicy : chooses a healthy pipeline randomly. 
(3) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.CapacityPipelineChoosePolicy : chooses the pipeline with lower utilization from two random pipelines. Note that random choose method will be executed twice in this policy.(4) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RoundRobinPipelineChoosePolicy : chooses a pipeline in a round robin fashion. Intended for troubleshooting and testing purposes only. | -| `hdds.scm.http.auth.kerberos.keytab` | | `SECURITY` | The keytab file used by SCM http server to login as its service principal. | -| `hdds.scm.http.auth.kerberos.principal` | | `SECURITY` | This Kerberos principal is used when communicating to the HTTP server of SCM.The protocol used is SPNEGO. | +| `hdds.scm.http.auth.kerberos.keytab` | /etc/security/keytabs/HTTP.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM http server to login as its service principal if SPNEGO is enabled for SCM http server. | +| `hdds.scm.http.auth.kerberos.principal` | HTTP/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | SCM http server service principal if SPNEGO is enabled for SCM http server. | | `hdds.scm.http.auth.type` | simple | `OM`, `SECURITY`, `KERBEROS` | simple or kerberos. If kerberos is set, SPNEGO will be used for http authentication. | | `hdds.scm.kerberos.keytab.file` | /etc/security/keytabs/SCM.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM daemon to login as its service principal. | | `hdds.scm.kerberos.principal` | SCM/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | The SCM service principal. e.g. scm/_HOST@REALM.COM | @@ -326,6 +326,8 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.audit.log.debug.cmd.list.omaudit` | | `OM` | A comma separated list of OzoneManager commands that are written to the OzoneManager audit logs only if the audit log level is debug. Ex: "ALLOCATE_BLOCK,ALLOCATE_KEY,COMMIT_KEY". 
| | `ozone.audit.log.debug.cmd.list.scmaudit` | | `SCM` | A comma separated list of SCM commands that are written to the SCM audit logs only if the audit log level is debug. Ex: "GET_VERSION,REGISTER,SEND_HEARTBEAT". | | `ozone.authorization.enabled` | true | `OZONE`, `SECURITY`, `AUTHORIZATION` | Master switch to enable/disable authorization checks in Ozone (admin privilege checks and ACL checks). This property only takes effect when `ozone.security.enabled` is true. When true: admin privilege checks are always performed, and object ACL checks are controlled by `ozone.acl.enabled`. When false: no authorization checks are performed. Default is true. | +| `ozone.blacklist.groups` | | | Ozone blacklisted groups delimited by the comma. If set, this is the list of groups that are not allowed to do any operations, even if the blacklisted user is also under a (readonly) admin / admin group. | +| `ozone.blacklist.users` | | | Ozone blacklisted users delimited by the comma. If set, this is the list of users that are not allowed to do any operations, even if the blacklisted user is also under a (readonly) admin / admin group. | | `ozone.block.deleting.service.interval` | 1m | `OZONE`, `PERFORMANCE`, `SCM` | Time interval of the block deleting service. The block deleting service runs on each datanode periodically and deletes blocks queued for deletion. Unit could be defined with postfix (ns,ms,s,m,h,d) | | `ozone.block.deleting.service.timeout` | 300000ms | `OZONE`, `PERFORMANCE`, `SCM` | A timeout value of block deletion service. If this is set greater than 0, the service will stop waiting for the block deleting completion after this time. This setting supports multiple time unit suffixes as described in dfs.heartbeat.interval. If no suffix is specified, then milliseconds is assumed. | | `ozone.block.deleting.service.workers` | 10 | `OZONE`, `PERFORMANCE`, `SCM` | Number of workers executed of block deletion service. This configuration should be set to greater than 0.
| @@ -353,7 +355,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.client.failover.max.attempts` | 500 | | Expert only. Ozone RpcClient attempts talking to each OzoneManager ipc.client.connect.max.retries (default = 10) number of times before failing over to another OzoneManager, if available. This parameter represents the number of times per request the client will failover before giving up. This value is kept high so that client does not give up trying to connect to OMs easily. | | `ozone.client.follower.read.default.consistency` | LINEARIZABLE_ALLOW_FOLLOWER | | The default consistency when client enables follower read. Currently, the supported follower read consistency are LINEARIZABLE_ALLOW_FOLLOWER and LOCAL_LEASE The default value is LINEARIZABLE_ALLOW_FOLLOWER to preserve the same strong consistency behavior when switching from leader-only read to follower read. | | `ozone.client.follower.read.enabled` | false | | Enable client to read from OM followers. If false, all client requests are sent to the OM leader. | -| `ozone.client.fs.default.bucket.layout` | FILE_SYSTEM_OPTIMIZED | `OZONE`, `CLIENT` | Default bucket layout value used when buckets are created using OFS. Supported values are LEGACY and FILE_SYSTEM_OPTIMIZED. FILE_SYSTEM_OPTIMIZED: This layout allows the bucket to support atomic rename/delete operations and also allows interoperability between S3 and FS APIs. Keys written via S3 API with a "/" delimiter will create intermediate directories. | +| `ozone.client.fs.default.bucket.layout` | FILE_SYSTEM_OPTIMIZED | `CLIENT` | The bucket layout used by buckets created using OFS. Valid values include FILE_SYSTEM_OPTIMIZED and LEGACY | | `ozone.client.hbase.enhancements.allowed` | false | `CLIENT` | When set to false, client-side HBase enhancement-related Ozone (experimental) features are disabled (not allowed to be enabled) regardless of whether those configs are set. 
Here is the list of configs and values overridden when this config is set to false: 1. `ozone.fs.hsync.enabled` = false 2. `ozone.client.incremental.chunk.list` = false 3. `ozone.client.stream.putblock.piggybacking` = false 4. `ozone.client.key.write.concurrency` = 1 A warning message will be printed if any of the above configs are overridden by this. | | `ozone.client.incremental.chunk.list` | false | `CLIENT` | Client PutBlock request can choose incremental chunk list rather than full chunk list to optimize performance. Critical to HBase. EC does not support this feature. Can be enabled only when `ozone.client.hbase.enhancements.allowed` = true | | `ozone.client.key.latest.version.location` | true | `OZONE`, `CLIENT` | Ozone client gets the latest version location. | @@ -361,7 +363,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.client.key.write.concurrency` | 1 | `CLIENT` | Maximum concurrent writes allowed on each key. Defaults to 1 which matches the behavior before HDDS-9844. For unlimited write concurrency, set this to -1 or any negative integer value. Any value other than 1 is effective only when `ozone.client.hbase.enhancements.allowed` = true | | `ozone.client.leader.read.default.consistency` | DEFAULT | | The default consistency when client disables follower read. Currently, the supported leader read consistency are DEFAULT and LINEARIZABLE_LEADER_ONLY. The default value is DEFAULT for backward compatibility reason which is mostly strongly consistent. | | `ozone.client.list.cache` | 1000 | `OZONE`, `PERFORMANCE` | Configuration property to configure the cache size of client list calls. | -| `ozone.client.max.ec.stripe.write.retries` | 10 | `CLIENT` | When EC stripe write failed, client will request to allocate new block group and write the failed stripe into new block group. 
If the same stripe failure continued in newly acquired block group also, then it will retry by requesting to allocate new block group again. This configuration is used to limit these number of retries. By default the number of retries are 10. | +| `ozone.client.max.ec.stripe.write.retries` | 10 | `CLIENT` | Ozone EC client to retry stripe to new block group on failures. | | `ozone.client.max.retries` | 5 | `CLIENT` | Maximum number of retries by Ozone Client on encountering exception while writing a key | | `ozone.client.read.max.retries` | 3 | `CLIENT` | Maximum number of retries by Ozone Client on encountering connectivity exception when reading a key. | | `ozone.client.read.retry.interval` | 1 | `CLIENT` | Indicates the time duration in seconds a client will wait before retrying a read key request on encountering a connectivity exception from Datanodes. By default the interval is 1 second | @@ -559,6 +561,8 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.om.user.rights` | ALL | `OM`, `SECURITY` | Default user permissions set for an object in OzoneManager. | | `ozone.om.volume.listall.allowed` | true | `OM`, `MANAGEMENT` | Allows everyone to list all volumes when set to true. Defaults to true. When set to false, non-admin users can only list the volumes they have access to. Admins can always list all volumes. Note that this config only applies to OzoneNativeAuthorizer. For other authorizers, admin needs to set policies accordingly to allow all volume listing e.g. for Ranger, a new policy with special volume "/" can be added to allow group public LIST access. | | `ozone.path.deleting.limit.per.task` | 20000 | `OZONE`, `PERFORMANCE`, `OM` | A maximum number of paths(dirs/files) to be deleted by directory deleting service per time interval. | +| `ozone.read.blacklist.groups` | | | Ozone read blacklist groups delimited by the comma. 
If set, this is the list of groups that are not allowed to do any read operations, even if the blacklisted user is also under a (readonly) admin / admin group. | +| `ozone.read.blacklist.users` | | | Ozone read blacklist users delimited by the comma. If set, this is the list of users that are not allowed to do any read operations, even if the blacklisted user is also under a (readonly) admin / admin group. | | `ozone.readonly.administrators` | | | Ozone read only admin users delimited by the comma. If set, This is the list of users are allowed to read operations skip checkAccess. | | `ozone.readonly.administrators.groups` | | | Ozone read only admin groups delimited by the comma. If set, This is the list of groups are allowed to read operations skip checkAccess. | | `ozone.recon.address` | | `RECON`, `MANAGEMENT` | RPC address of Recon Server. If not set, datanodes will not configure Recon Server. | From 9c297e4b0335b23942ed7da6a1e0a1bfbf0651c1 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Thu, 9 Apr 2026 14:04:40 +0000 Subject: [PATCH 14/14] HDDS-14968. [Auto] Update configuration documentation from ozone 914b99cd7e7e16b41c8fea9d4d3e5f0f0d92377e --- docs/05-administrator-guide/02-configuration/99-appendix.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index db5bfd70d6..53c35b76fc 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -234,8 +234,8 @@ This page provides a comprehensive overview of the configuration keys available | `hdds.scm.block.deletion.per-interval.max` | 500000 | `SCM`, `DELETION` | Maximum number of blocks which SCM processes during an interval.
The block num is counted at the replica level.If SCM has 100000 blocks which need to be deleted and the configuration is 5000 then it would only send 5000 blocks for deletion to the datanodes. | | `hdds.scm.block.deletion.txn.dn.commit.map.limit` | 5000000 | `SCM` | This value indicates the size of the transactionToDNsCommitMap after which we will skip one round of scm block deleting interval. | | `hdds.scm.ec.pipeline.choose.policy.impl` | org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy | `SCM`, `PIPELINE` | Sets the policy for choosing an EC pipeline. The value should be the full name of a class which implements org.apache.hadoop.hdds.scm.PipelineChoosePolicy. The class decides which pipeline will be used when selecting an EC Pipeline. If not set, org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy will be used as default value. One of the following values can be used: (1) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy : chooses a pipeline randomly. (2) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.HealthyPipelineChoosePolicy : chooses a healthy pipeline randomly. (3) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.CapacityPipelineChoosePolicy : chooses the pipeline with lower utilization from two random pipelines. Note that random choose method will be executed twice in this policy.(4) org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RoundRobinPipelineChoosePolicy : chooses a pipeline in a round robin fashion. Intended for troubleshooting and testing purposes only. | -| `hdds.scm.http.auth.kerberos.keytab` | /etc/security/keytabs/HTTP.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM http server to login as its service principal if SPNEGO is enabled for SCM http server. 
| -| `hdds.scm.http.auth.kerberos.principal` | HTTP/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | SCM http server service principal if SPNEGO is enabled for SCM http server. | +| `hdds.scm.http.auth.kerberos.keytab` | | `SECURITY` | The keytab file used by SCM http server to login as its service principal. | +| `hdds.scm.http.auth.kerberos.principal` | | `SECURITY` | This Kerberos principal is used when communicating to the HTTP server of SCM.The protocol used is SPNEGO. | | `hdds.scm.http.auth.type` | simple | `OM`, `SECURITY`, `KERBEROS` | simple or kerberos. If kerberos is set, SPNEGO will be used for http authentication. | | `hdds.scm.kerberos.keytab.file` | /etc/security/keytabs/SCM.keytab | `SCM`, `SECURITY`, `KERBEROS` | The keytab file used by SCM daemon to login as its service principal. | | `hdds.scm.kerberos.principal` | SCM/_HOST@REALM | `SCM`, `SECURITY`, `KERBEROS` | The SCM service principal. e.g. scm/_HOST@REALM.COM |
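Taken together, the client failover keys documented in these patches bound the client's connection effort: per the `ozone.client.failover.max.attempts` description, each request tries one OzoneManager `ipc.client.connect.max.retries` (default 10) times before failing over, and fails over at most 500 times before giving up. A back-of-the-envelope sketch (not Ozone code) of the worst-case attempt count:

```python
def worst_case_connect_attempts(failover_max_attempts: int,
                                connect_max_retries: int) -> int:
    """Rough upper bound on OM connection attempts for a single request:
    the client tries one OzoneManager `connect_max_retries` times before
    each failover, and fails over at most `failover_max_attempts` times."""
    return failover_max_attempts * connect_max_retries

# Defaults quoted in the table: 500 failovers x 10 tries per OM.
print(worst_case_connect_attempts(500, 10))  # 5000
```

This is why the description calls the default "kept high": with both defaults in place a single request can make on the order of 5000 connection attempts before the client stops trying.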
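The long-form description of `ozone.client.max.ec.stripe.write.retries` spells out a bounded allocate-and-retry loop: a failed EC stripe is rewritten into a newly allocated block group, up to a fixed number of retries. A toy Python model of that loop (illustrative only; `write_stripe` and `allocate_block_group` are invented names, not the real client API):

```python
def write_stripe_with_retries(write_stripe, allocate_block_group,
                              max_retries: int = 10) -> int:
    """Toy model of ozone.client.max.ec.stripe.write.retries: on a failed
    EC stripe write, allocate a new block group and retry the stripe,
    giving up after `max_retries` retries. Returns the retries used."""
    block_group = allocate_block_group()
    for retries_used in range(max_retries + 1):
        if write_stripe(block_group):
            return retries_used
        # Failed stripe is rewritten into a freshly allocated block group.
        block_group = allocate_block_group()
    raise IOError(f"stripe write failed after {max_retries} retries")

# A stripe that succeeds in the third block group uses two retries:
outcomes = iter([False, False, True])
print(write_stripe_with_retries(lambda group: next(outcomes),
                                lambda: object()))  # 2
```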
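The `hdds.scm.ec.pipeline.choose.policy.impl` description enumerates four policies. The two with the most distinctive selection rules can be sketched in a few lines (a toy model under assumed in-memory data, not SCM's actual `PipelineChoosePolicy` interface):

```python
import itertools
import random

def round_robin_chooser(pipelines):
    """Sketch of RoundRobinPipelineChoosePolicy: hand out pipelines in a
    fixed cyclic order (the docs note it is intended for troubleshooting
    and testing only)."""
    order = itertools.cycle(pipelines)
    return lambda: next(order)

def capacity_choose(pipelines, utilization, rng=random):
    """Sketch of CapacityPipelineChoosePolicy: draw two pipelines at
    random (the "random choose method executed twice") and keep the one
    with lower utilization."""
    first, second = rng.choice(pipelines), rng.choice(pipelines)
    return first if utilization[first] <= utilization[second] else second

chooser = round_robin_chooser(["p1", "p2", "p3"])
print([chooser() for _ in range(4)])  # ['p1', 'p2', 'p3', 'p1']
```

The capacity policy is a "power of two choices" scheme: comparing just two random candidates is usually enough to steer load away from highly utilized pipelines without scanning them all.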