From 7db1305b3e6d5716a471dc0527c159eab21d10a2 Mon Sep 17 00:00:00 2001
From: Shivansh Sahu
Date: Wed, 29 Oct 2025 05:21:08 +0000
Subject: [PATCH] docs/reference: Add links for node status config parameters

Relates to #52516
---
 content/en/docs/reference/node/node-status.md | 25 ++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/content/en/docs/reference/node/node-status.md b/content/en/docs/reference/node/node-status.md
index 4c4f729c33fc2..edae239caf245 100644
--- a/content/en/docs/reference/node/node-status.md
+++ b/content/en/docs/reference/node/node-status.md
@@ -40,43 +40,43 @@ The usage of these fields varies depending on your cloud provider or bare metal
 
 The `conditions` field describes the status of all `Running` nodes. Examples of conditions include:
 
-{{< table caption = "Node conditions, and a description of when each condition applies." >}}
+{{< table caption="Node conditions, and a description of when each condition applies." >}}
 | Node Condition | Description |
 |----------------------|-------------|
-| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 50 seconds) |
-| `DiskPressure` | `True` if pressure exists on the disk size—that is, if the disk capacity is low; otherwise `False` |
-| `MemoryPressure` | `True` if pressure exists on the node memory—that is, if the node memory is low; otherwise `False` |
-| `PIDPressure` | `True` if pressure exists on the processes—that is, if there are too many processes on the node; otherwise `False` |
-| `NetworkUnavailable` | `True` if the network for the node is not correctly configured, otherwise `False` |
+| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 40 seconds). |
+| `DiskPressure` | `True` if pressure exists on the disk size—that is, if the disk capacity is low; otherwise `False`. |
+| `MemoryPressure` | `True` if pressure exists on the node memory—that is, if the node memory is low; otherwise `False`. |
+| `PIDPressure` | `True` if pressure exists on the processes—that is, if there are too many processes on the node; otherwise `False`. |
+| `NetworkUnavailable` | `True` if the network for the node is not correctly configured, otherwise `False`. |
 {{< /table >}}
 
 {{< note >}}
 If you use command-line tools to print details of a cordoned Node, the Condition includes
 `SchedulingDisabled`. `SchedulingDisabled` is not a Condition in the Kubernetes API; instead,
-cordoned nodes are marked Unschedulable in their spec.
+cordoned nodes are marked `Unschedulable` in their spec.
 {{< /note >}}
 
 In the Kubernetes API, a node's condition is represented as part of the `.status` of the
 Node resource. For example, the following JSON structure describes a healthy node:
 
 ```json
 "conditions": [
   {
     "type": "Ready",
     "status": "True",
     "reason": "KubeletReady",
     "message": "kubelet is posting ready status",
     "lastHeartbeatTime": "2019-06-05T18:38:35Z",
     "lastTransitionTime": "2019-06-05T11:41:27Z"
   }
 ]
 ```
 
 When problems occur on nodes, the Kubernetes control plane automatically creates
 [taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that match the conditions affecting the node.
 An example of this is when the `status` of the Ready condition
 remains `Unknown` or `False` for longer than the kube-controller-manager's `NodeMonitorGracePeriod`,
-which defaults to 50 seconds. This will cause either an `node.kubernetes.io/unreachable` taint, for an `Unknown` status,
+which defaults to 40 seconds. This will cause either a `node.kubernetes.io/unreachable` taint, for an `Unknown` status,
 or a `node.kubernetes.io/not-ready` taint, for a `False` status, to be added to the Node.
@@ -116,7 +116,7 @@ availability of each node, and to take action when failures are detected.
 
 For nodes there are two forms of heartbeats:
 
-* updates to the `.status` of a Node
+* updates to the `.status` of a Node.
 * [Lease](/docs/concepts/architecture/leases/) objects
   within the `kube-node-lease` {{< glossary_tooltip term_id="namespace" text="namespace">}}.
@@ -132,8 +132,11 @@ and for updating their related Leases.
 
 - The kubelet updates the node's `.status` either when there is change in status or
   if there has been no update for a configured interval. The default interval for `.status` updates to Nodes is 5 minutes, which is much longer than the 40
-  second default timeout for unreachable nodes.
+  second default timeout for unreachable nodes. The update interval is controlled by the
+  `nodeStatusReportFrequency` field in the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/),
+  and the timeout is controlled by the
+  `--node-monitor-grace-period` flag on the [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/).
 - The kubelet creates and then updates its Lease object every 10 seconds
   (the default update interval). Lease updates occur independently from
   updates to the Node's `.status`. If the Lease update fails, the kubelet retries,
-  using exponential backoff that starts at 200 milliseconds and capped at 7 seconds.
+  using exponential backoff that starts at 200 milliseconds and is capped at 7 seconds.
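For reference, the kubelet side of the status-update settings discussed above can be sketched as a `KubeletConfiguration` fragment. This is illustrative only, not part of the patch; the durations shown are the documented defaults, and the field comments paraphrase the kubelet configuration reference:

```yaml
# Kubelet configuration file fragment (illustrative; defaults shown).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# How often the kubelet computes node status (default: 10s).
nodeStatusUpdateFrequency: "10s"
# How often the kubelet posts .status to the API server when the status
# has not changed (default: 5m; this is the 5-minute interval the page describes).
nodeStatusReportFrequency: "5m"
```

The unreachable-node timeout is not a kubelet field: it is set on the kube-controller-manager, e.g. `--node-monitor-grace-period=40s`.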