Goal
I would like to propose a new configuration option for the HA controller that would allow users to disable the tainting of nodes in a cluster, which does not currently seem possible. This would reduce disruption to workloads that do not use Piraeus.
Context
I have a Kubernetes cluster running numerous different workloads, the majority of which do not need distributed storage and therefore do not use Piraeus. I've found that, on occasion, when my LINSTOR cluster becomes unhealthy (for example, when it loses quorum), a taint is applied to some or all nodes in the cluster, preventing all workloads from being scheduled, regardless of whether they use Piraeus.
Issue #23 provides some extra context on the node taint feature.
Proposed Solution
Implement the following option:
--disable-node-taint (boolean, default: false): when set to true, node taints will not be applied.
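Sketched below, in Go (the language the HA controller is written in), is one way such an option could gate the existing taint logic. The flag name, default, and surrounding code are assumptions taken from this proposal rather than existing ha-controller code; it is only meant to illustrate the intended behaviour.

```go
package main

import (
	"flag"
	"log"
)

// Proposed option; the flag name and default value come from this proposal
// and are not an existing ha-controller flag.
var disableNodeTaint = flag.Bool("disable-node-taint", false,
	"when true, node taints will not be applied on storage failure")

func main() {
	flag.Parse()

	if *disableNodeTaint {
		// Skip tainting entirely so non-Piraeus workloads keep scheduling.
		log.Println("node tainting disabled; skipping taint application")
		return
	}

	// Existing behaviour would continue here: apply the taint to the
	// affected nodes so that Piraeus workloads are evicted and rescheduled.
	log.Println("applying node taint to affected nodes")
}
```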