diff --git a/docs/05-administrator-guide/02-configuration/99-appendix.md b/docs/05-administrator-guide/02-configuration/99-appendix.md index 6922fcc362..53c35b76fc 100644 --- a/docs/05-administrator-guide/02-configuration/99-appendix.md +++ b/docs/05-administrator-guide/02-configuration/99-appendix.md @@ -35,6 +35,7 @@ This page provides a comprehensive overview of the configuration keys available | `hdds.container.balancer.exclude.datanodes` | | `BALANCER` | A list of Datanode hostnames or ip addresses separated by commas. The Datanodes specified in this list are excluded from balancing. This configuration is empty by default. | | `hdds.container.balancer.include.containers` | | `BALANCER` | List of container IDs to include in balancing. Only these containers will be included in balancing. For example "1, 4, 5" or "1,4,5". | | `hdds.container.balancer.include.datanodes` | | `BALANCER` | A list of Datanode hostnames or ip addresses separated by commas. Only the Datanodes specified in this list are balanced. This configuration is empty by default and is applicable only if it is non-empty. | +| `hdds.container.balancer.include.non.standard.containers` | false | `BALANCER` | Whether to include containers in non-standard states, such as OVER_REPLICATED CLOSED/QUASI_CLOSED and HEALTHY QUASI_CLOSED containers. | | `hdds.container.balancer.iterations` | 10 | `BALANCER` | The number of iterations that Container Balancer will run for. | | `hdds.container.balancer.move.networkTopology.enable` | false | `BALANCER` | whether to take network topology into account when selecting a target for a source. This configuration is false by default. | | `hdds.container.balancer.move.replication.timeout` | 50m | `BALANCER` | The amount of time to allow a single container's replication from source to target as part of container move. 
For example, if "hdds.container.balancer.move.timeout" is 65 minutes, then out of those 65 minutes 50 minutes will be the deadline for replication to complete. | @@ -126,7 +127,9 @@ This page provides a comprehensive overview of the configuration keys available | `hdds.datanode.disk.check.io.failures.tolerated` | 1 | `DATANODE` | The number of IO tests out of the last `hdds.datanode.disk.check.io.test.count` test run that are allowed to fail before the volume is marked as failed. | | `hdds.datanode.disk.check.io.file.size` | 100B | `DATANODE` | The size of the temporary file that will be synced to the disk and read back to assess its health. The contents of the file will be stored in memory during the duration of the check. | | `hdds.datanode.disk.check.io.test.count` | 3 | `DATANODE` | The number of IO tests required to determine if a disk has failed. Each disk check does one IO test. The volume will be failed if more than `hdds.datanode.disk.check.io.failures.tolerated` out of the last `hdds.datanode.disk.check.io.test.count` runs failed. Set to 0 to disable disk IO checks. | +| `hdds.datanode.disk.check.io.test.enabled` | true | `DATANODE` | Whether disk IO checks are enabled. | | `hdds.datanode.disk.check.min.gap` | 10m | `DATANODE` | The minimum gap between two successive checks of the same Datanode volume. Unit could be defined with postfix (ns,ms,s,m,h,d). | +| `hdds.datanode.disk.check.sliding.window.timeout` | 70m | `DATANODE` | Time interval after which a disk check failure result stored in the sliding window expires. Do not set the window timeout to less than or equal to the disk check interval, or failures can be missed across sparse checks; for example, a 120m check interval with a 60m window rarely accumulates enough failed events. Unit could be defined with postfix (ns,ms,s,m,h,d). | | `hdds.datanode.disk.check.timeout` | 10m | `DATANODE` | Maximum allowed time for a disk check to complete. 
If the check does not complete within this time interval then the disk is declared as failed. Unit could be defined with postfix (ns,ms,s,m,h,d). | | `hdds.datanode.dns.interface` | default | `OZONE`, `DATANODE` | The name of the Network Interface from which a Datanode should report its IP address. e.g. eth2. This setting may be required for some multi-homed nodes where the Datanodes are assigned multiple hostnames and it is desirable for the Datanodes to use a non-default hostname. | | `hdds.datanode.dns.nameserver` | default | `OZONE`, `DATANODE` | The host name or IP address of the name server (DNS) which a Datanode should use to determine its own host name. | @@ -323,6 +326,8 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.audit.log.debug.cmd.list.omaudit` | | `OM` | A comma separated list of OzoneManager commands that are written to the OzoneManager audit logs only if the audit log level is debug. Ex: "ALLOCATE_BLOCK,ALLOCATE_KEY,COMMIT_KEY". | | `ozone.audit.log.debug.cmd.list.scmaudit` | | `SCM` | A comma separated list of SCM commands that are written to the SCM audit logs only if the audit log level is debug. Ex: "GET_VERSION,REGISTER,SEND_HEARTBEAT". | | `ozone.authorization.enabled` | true | `OZONE`, `SECURITY`, `AUTHORIZATION` | Master switch to enable/disable authorization checks in Ozone (admin privilege checks and ACL checks). This property only takes effect when `ozone.security.enabled` is true. When true: admin privilege checks are always performed, and object ACL checks are controlled by `ozone.acl.enabled`. When false: no authorization checks are performed. Default is true. | +| `ozone.blacklist.groups` | | | Ozone blacklisted groups, delimited by commas. If set, these groups are not allowed to perform any operations, even if the blacklisted user is also a (read-only) admin or in an admin group. | +| `ozone.blacklist.users` | | | Ozone blacklisted users, delimited by commas. 
If set, these users are not allowed to perform any operations, even if the blacklisted user is also a (read-only) admin or in an admin group. | | `ozone.block.deleting.service.interval` | 1m | `OZONE`, `PERFORMANCE`, `SCM` | Time interval of the block deleting service. The block deleting service runs on each datanode periodically and deletes blocks queued for deletion. Unit could be defined with postfix (ns,ms,s,m,h,d) | | `ozone.block.deleting.service.timeout` | 300000ms | `OZONE`, `PERFORMANCE`, `SCM` | A timeout value of block deletion service. If this is set greater than 0, the service will stop waiting for the block deleting completion after this time. This setting supports multiple time unit suffixes as described in dfs.heartbeat.interval. If no suffix is specified, then milliseconds is assumed. | | `ozone.block.deleting.service.workers` | 10 | `OZONE`, `PERFORMANCE`, `SCM` | Number of workers executed of block deletion service. This configuration should be set to greater than 0. | @@ -539,6 +544,7 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.om.snapshot.diff.max.jobs.purge.per.task` | 100 | `OZONE`, `OM` | Maximum number of snapshot diff jobs to be purged per snapDiff clean up run. | | `ozone.om.snapshot.diff.max.page.size` | 1000 | `OZONE`, `OM` | Maximum number of entries to be returned in a single page of snap diff report. | | `ozone.om.snapshot.diff.thread.pool.size` | 10 | `OZONE`, `OM` | Maximum numbers of concurrent snapshot diff jobs are allowed. | +| `ozone.om.snapshot.directory.metrics.update.interval` | 5m | `OZONE`, `OM` | Time interval used to update the space consumption stats of the Ozone Manager snapshot directories. A background thread periodically calculates and updates these stats. 
Unit could be defined with postfix (ns,ms,s,m,h,d). | | `ozone.om.snapshot.force.full.diff` | false | `OZONE`, `OM` | Flag to always perform full snapshot diff (can be slow) without using the optimised compaction DAG. | | `ozone.om.snapshot.load.native.lib` | true | `OZONE`, `OM` | Load native library for performing optimized snapshot diff. | | `ozone.om.snapshot.local.data.manager.service.interval` | 5m | | Interval for cleaning up orphan snapshot local data versions corresponding to snapshots | @@ -555,6 +561,8 @@ This page provides a comprehensive overview of the configuration keys available | `ozone.om.user.rights` | ALL | `OM`, `SECURITY` | Default user permissions set for an object in OzoneManager. | | `ozone.om.volume.listall.allowed` | true | `OM`, `MANAGEMENT` | Allows everyone to list all volumes when set to true. Defaults to true. When set to false, non-admin users can only list the volumes they have access to. Admins can always list all volumes. Note that this config only applies to OzoneNativeAuthorizer. For other authorizers, admin needs to set policies accordingly to allow all volume listing e.g. for Ranger, a new policy with special volume "/" can be added to allow group public LIST access. | | `ozone.path.deleting.limit.per.task` | 20000 | `OZONE`, `PERFORMANCE`, `OM` | A maximum number of paths(dirs/files) to be deleted by directory deleting service per time interval. | +| `ozone.read.blacklist.groups` | | | Ozone read blacklist groups, delimited by commas. If set, these groups are not allowed to perform any read operations, even if the blacklisted user is also a (read-only) admin or in an admin group. | +| `ozone.read.blacklist.users` | | | Ozone read blacklist users, delimited by commas. If set, these users are not allowed to perform any read operations, even if the blacklisted user is also a (read-only) admin or in an admin group. | | `ozone.readonly.administrators` | | | Ozone read only admin users delimited by the comma. 
If set, This is the list of users are allowed to read operations skip checkAccess. | | `ozone.readonly.administrators.groups` | | | Ozone read only admin groups delimited by the comma. If set, This is the list of groups are allowed to read operations skip checkAccess. | | `ozone.recon.address` | | `RECON`, `MANAGEMENT` | RPC address of Recon Server. If not set, datanodes will not configure Recon Server. |
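Several of the duration-valued keys above (e.g. `hdds.datanode.disk.check.sliding.window.timeout`, `ozone.om.snapshot.directory.metrics.update.interval`) accept a unit postfix `(ns,ms,s,m,h,d)`, with milliseconds assumed when no suffix is given. That convention can be sketched roughly as follows; this is a minimal illustration only, and `parse_duration_ms` and its regex are assumptions for this sketch, not Ozone's actual parser:

```python
import re

# Multipliers converting each supported suffix to milliseconds.
# "ns" yields a fractional millisecond count, so results may be floats.
_UNITS_MS = {"ns": 1e-6, "ms": 1, "s": 1000, "m": 60_000,
             "h": 3_600_000, "d": 86_400_000}

def parse_duration_ms(value, default_unit="ms"):
    """Parse values like '10m' or '300000ms' into milliseconds.

    Per the tables above, a bare number is treated as milliseconds.
    """
    match = re.fullmatch(r"\s*(\d+)\s*(ns|ms|s|m|h|d)?\s*", value)
    if not match:
        raise ValueError("not a duration: %r" % value)
    number, unit = match.groups()
    return int(number) * _UNITS_MS[unit or default_unit]
```

Read this way, the default `70m` sliding-window timeout is 4200000 ms, comfortably above the default `10m` disk check minimum gap, consistent with the caveat in that row's description.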