
feat(helm): update chart rook-ceph-cluster ( v1.17.6 ➔ v1.19.2 )#182

Open
parsec-renovate[bot] wants to merge 1 commit into main from renovate/rook-ceph-cluster-1.x

Conversation


@parsec-renovate parsec-renovate bot commented Dec 9, 2025

This PR contains the following updates:

Package            Update  Change
rook-ceph-cluster  minor   v1.17.6 → v1.19.2

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

rook/rook (rook-ceph-cluster)

v1.19.2

Compare Source

Improvements

Rook v1.19.2 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.19.1

Compare Source

Improvements

Rook v1.19.1 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

csi: Update to ceph csi operator to v0.5 (#​17029, @​subhamkrai)
security: Remove unnecessary nodes/proxy RBAC enablement (#​16979, @​ibotty)
helm: Set default ceph image pull policy (#​16954, @​travisn)
nfs: Add CephNFS.spec.server.{image,imagePullPolicy} fields (#​16982, @​jhoblitt)
osd: Assign correct osd container in case it is not index 0 (#​16969, @​kyrbrbik)
csi: Remove obsolete automated node fencing code (#​16922, @​subhamkrai)
osd: Enable proper cancellation during OSD reconcile (#​17022, @​sp98)
csi: Allow running the csi controller plugin on host network (#​16972, @​Madhu-1)
rgw: Update ca bundle mount perms to read-all (#​16968, @​BlaineEXE)
mon: Change do-not-reconcile to be more granular for individual mons (#​16939, @​travisn)
build(deps): Bump the k8s-dependencies group with 6 updates (#​16846, @​dependabot[bot])
doc: add csi-operator example in configuration doc (#​17001, @​subhamkrai)

v1.19.0

Compare Source

Upgrade Guide

To upgrade from previous versions of Rook, see the Rook upgrade guide.

Breaking Changes

  • The supported Kubernetes versions are v1.30 - v1.35
  • The minimum supported Ceph version is v19.2.0. Rook v1.18 clusters running Ceph v18 must upgrade
    to Ceph v19.2.0 or higher before upgrading Rook.
  • The behavior of the activeStandby property in the CephFilesystem CRD has changed. When set to false, the standby MDS daemon deployment will be scaled down and removed, rather than only disabling the standby cache while the daemon remains running.
  • Helm: The rook-ceph-cluster chart has changed where the Ceph image is defined, to allow separate settings for the repository and tag. For more details, see the Rook upgrade guide.
  • In external mode, when users provide a Ceph admin keyring to Rook, Rook will no longer create CSI Ceph clients automatically. This approach will provide more consistency to configure external mode clusters via the same external Python script.
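The activeStandby change above can be illustrated with a minimal CephFilesystem manifest (a sketch only; field names follow the CephFilesystem CRD, and the name/namespace values are illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: ceph-filesystem
  namespace: rook-ceph
spec:
  metadataServer:
    activeCount: 1
    # As of v1.19, setting this to false scales down and removes the
    # standby MDS deployment entirely, rather than leaving the daemon
    # running with only its standby cache disabled.
    activeStandby: true
```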

Features

  • Experimental: NVMe over Fabrics (NVMe-oF) allows RBD volumes to be exposed and accessed via the NVMe/TCP protocol. This enables both Kubernetes pods within the cluster and external clients outside the cluster to connect to Ceph block storage using standard NVMe-oF initiators, providing high-performance block storage access over the network. See the NVMe-oF Configuration Guide to get started.
  • CephCSI v3.16 Integration:
    • NVMe-oF CSI driver for provisioning and mounting volumes over the NVMe over Fabrics protocol
    • Improved fencing for RBD and CephFS volumes during node failure
    • Block volume usage statistics
    • Configurable block encryption cipher
  • Experimental: Allow concurrent reconciles of the CephCluster CR when there are multiple clusters being managed by the same Rook operator. Concurrency is enabled by increasing the operator setting ROOK_RECONCILE_CONCURRENT_CLUSTERS to a value greater than 1.
  • Improved logging with namespaced names for the controllers for more consistency in troubleshooting the rook operator log.
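A minimal sketch of enabling the experimental concurrent-reconcile setting, assuming operator settings live in the rook-ceph-operator-config ConfigMap as elsewhere in these notes (the value "2" is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  # Values greater than "1" allow multiple CephCluster CRs to be
  # reconciled concurrently by the same operator (experimental).
  ROOK_RECONCILE_CONCURRENT_CLUSTERS: "2"
```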

v1.18.9

Compare Source

Improvements

Rook v1.18.9 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.18.8

Compare Source

Improvements

Rook v1.18.8 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.18.7

Compare Source

Improvements

Rook v1.18.7 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.18.6

Compare Source

Improvements

Rook v1.18.6 is a patch release with changes only in the rook-ceph helm chart. If not affected by #​16636 in v1.18.5, no need to update to this release.

v1.18.5

Compare Source

Improvements

Rook v1.18.5 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.18.4

Compare Source

Improvements

Rook v1.18.4 is a patch release with changes only in the rook-ceph-cluster helm chart. If not affected by #​16567 in v1.18.3, no need to update to this release.

v1.18.3

Compare Source

Improvements

Rook v1.18.3 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.18.2

Compare Source

Improvements

Rook v1.18.2 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.18.1

Compare Source

Improvements

Rook v1.18.1 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.18.0

Compare Source

Upgrade Guide

To upgrade from previous versions of Rook, see the Rook upgrade guide.

Breaking Changes

  • Kubernetes v1.29 is now the minimum version supported by Rook, through the upcoming K8s release v1.34.
  • Helm versions 3.13 and newer are supported. Previously, only the latest version of helm was tested, and the docs stated only version 3.x of helm as a prerequisite. Rook now supports the six most recent minor versions of helm along with their patch updates.
  • Rook now validates node topology during CephCluster creation to prevent misconfigured CRUSH hierarchies for OSDs. If child labels like topology.rook.io/rack are duplicated across zones, cluster creation will fail. The check applies only to new clusters without OSDs. Clusters with existing OSDs will only log a warning and continue. If the checks are invalid in your topology, they can be suppressed by setting ROOK_SKIP_OSD_TOPOLOGY_CHECK=true in the rook-ceph-operator-config configmap.
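A minimal sketch of suppressing the new topology validation via the setting named above (only do this for a CRUSH hierarchy you have verified to be correct):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  # Skips the OSD topology check during CephCluster creation.
  ROOK_SKIP_OSD_TOPOLOGY_CHECK: "true"
```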

Features

  • The Ceph CSI operator is now the default and recommended component for configuring CSI drivers for RBD, CephFS, and NFS volumes. The CSI operator has been factored out of Rook to run independently to manage the Ceph-CSI driver. 
    • During the upgrade and throughout the v1.18.x releases, Rook will automatically convert any Rook CSI settings to the new CSI operator CRs. This transition is expected to be completely transparent. In the future v1.19 release, Rook will relinquish direct control of these settings so advanced users can have more flexibility when configuring the CSI drivers. At that time, we will have a guide on configuring these new Ceph CSI operator CRs directly.
    • During install, as mentioned in the Quickstart Guide, there is a new manifest to be created: csi-operator.yaml
    • If installing with the helm chart, the Ceph CSI operator will automatically be installed by default with the new helm setting csi.rookUseCsiOperator in the rook-ceph chart.
    • If a blocking issue is found, the previous CSI driver can be re-enabled by setting ROOK_USE_CSI_OPERATOR: false in operator.yaml or by applying the helm setting csi.rookUseCsiOperator: false.
  • Ceph CSI v3.15 has a range of features and improvements for the RBD, CephFS, and NFS drivers. This release is supported both by the Ceph CSI operator and Rook's direct mode of configuration. Starting in the next release (at the end of the year), the Ceph CSI operator will be required to configure the CSI driver.
  • CephX key rotation is now available as an experimental feature for the CephX authentication keys used by Ceph daemons and clients. Users will begin to see new cephx status items on some Rook resources in newly-deployed Rook clusters. Users can also find spec.security.cephx settings that allow initiating CephX key rotation for various Ceph components. Full documentation for key rotation can be found here.
    • Ceph version v19.2.3+ is required for key rotation.
    • The Ceph admin and mon keys cannot yet be rotated. Implementation is still in progress while in experimental mode.
  • Add support for specifying the clusterID in the CephBlockPoolRadosNamespace and the CephFilesystemSubVolumeGroup CR.
  • When a mon is being failed over, if the assigned node no longer exists, the mon is failed over immediately instead of waiting for the 20-minute timeout.
  • Support for Ceph Tentacle v20 will be available as soon as it is released.
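The CSI-operator fallback described above can be sketched for both install methods (a hedged illustration based on the settings named in these notes, not a verified configuration):

```yaml
# Manifest install -- add to the rook-ceph-operator-config ConfigMap
# (operator.yaml):
#   ROOK_USE_CSI_OPERATOR: "false"
#
# Helm install -- values.yaml for the rook-ceph chart:
csi:
  # Re-enables Rook's previous built-in CSI driver management if a
  # blocking issue is found with the Ceph CSI operator.
  rookUseCsiOperator: false
```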

v1.17.9

Compare Source

Improvements

Rook v1.17.9 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.17.8

Compare Source

Improvements

Rook v1.17.8 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.17.7

Compare Source

Improvements

Rook v1.17.7 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

Important: There is a known issue in Ceph v19.2.3 where object store bucket lifecycle deletion does not take effect. See #​16188 for more details.


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.


github-actions bot commented Dec 9, 2025

--- kubernetes/apps/rook-ceph/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster

+++ kubernetes/apps/rook-ceph/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster

@@ -13,13 +13,13 @@

     spec:
       chart: rook-ceph-cluster
       sourceRef:
         kind: HelmRepository
         name: rook-ceph
         namespace: flux-system
-      version: v1.17.6
+      version: v1.19.2
   dependsOn:
   - name: rook-ceph-operator
     namespace: rook-ceph
   - name: snapshot-controller
     namespace: storage
   install:


github-actions bot commented Dec 9, 2025

--- HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-block

+++ HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-block

@@ -1,9 +1,9 @@

 ---
+kind: StorageClass
 apiVersion: storage.k8s.io/v1
-kind: StorageClass
 metadata:
   name: ceph-block
   annotations:
     storageclass.kubernetes.io/is-default-class: 'true'
 provisioner: rook-ceph.rbd.csi.ceph.com
 parameters:
--- HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-filesystem

+++ HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-filesystem

@@ -1,9 +1,9 @@

 ---
+kind: StorageClass
 apiVersion: storage.k8s.io/v1
-kind: StorageClass
 metadata:
   name: ceph-filesystem
   annotations:
     storageclass.kubernetes.io/is-default-class: 'false'
 provisioner: rook-ceph.cephfs.csi.ceph.com
 parameters:
--- HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-bucket

+++ HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-bucket

@@ -1,9 +1,9 @@

 ---
+kind: StorageClass
 apiVersion: storage.k8s.io/v1
-kind: StorageClass
 metadata:
   name: ceph-bucket
 provisioner: rook-ceph.ceph.rook.io/bucket
 reclaimPolicy: Delete
 volumeBindingMode: Immediate
 parameters:
--- HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools

+++ HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools

@@ -1,9 +1,9 @@

 ---
+kind: Deployment
 apiVersion: apps/v1
-kind: Deployment
 metadata:
   name: rook-ceph-tools
   namespace: rook-ceph
   labels:
     app: rook-ceph-tools
 spec:
@@ -17,22 +17,23 @@

         app: rook-ceph-tools
     spec:
       dnsPolicy: ClusterFirstWithHostNet
       hostNetwork: true
       containers:
       - name: rook-ceph-tools
-        image: quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
+        image: quay.io/ceph/ceph:v19.2.3
         command:
         - /bin/bash
         - -c
         - |
           # Replicate the script from toolbox.sh inline so the ceph image
           # can be run directly, instead of requiring the rook toolbox
           CEPH_CONFIG="/etc/ceph/ceph.conf"
           MON_CONFIG="/etc/rook/mon-endpoints"
           KEYRING_FILE="/etc/ceph/keyring"
+          CONFIG_OVERRIDE="/etc/rook-config-override/config"
 
           # create a ceph config file in its default location so ceph/rados tools can be used
           # without specifying any arguments
           write_endpoints() {
             endpoints=$(cat ${MON_CONFIG})
 
@@ -47,12 +48,19 @@

           [global]
           mon_host = ${mon_endpoints}
 
           [client.admin]
           keyring = ${KEYRING_FILE}
           EOF
+
+            # Merge the config override if it exists and is not empty
+            if [ -f "${CONFIG_OVERRIDE}" ] && [ -s "${CONFIG_OVERRIDE}" ]; then
+              echo "$DATE merging config override from ${CONFIG_OVERRIDE}"
+              echo "" >> ${CEPH_CONFIG}
+              cat ${CONFIG_OVERRIDE} >> ${CEPH_CONFIG}
+            fi
           }
 
           # watch the endpoints config file and update if the mon endpoints ever change
           watch_endpoints() {
             # get the timestamp for the target of the soft link
             real_path=$(realpath ${MON_CONFIG})
@@ -112,12 +120,15 @@

         - mountPath: /etc/ceph
           name: ceph-config
         - name: mon-endpoint-volume
           mountPath: /etc/rook
         - name: ceph-admin-secret
           mountPath: /var/lib/rook-ceph-mon
+        - name: rook-config-override
+          mountPath: /etc/rook-config-override
+          readOnly: true
       serviceAccountName: rook-ceph-default
       volumes:
       - name: ceph-admin-secret
         secret:
           secretName: rook-ceph-mon
           optional: false
@@ -127,12 +138,16 @@

       - name: mon-endpoint-volume
         configMap:
           name: rook-ceph-mon-endpoints
           items:
           - key: data
             path: mon-endpoints
+      - name: rook-config-override
+        configMap:
+          name: rook-config-override
+          optional: true
       - name: ceph-config
         emptyDir: {}
       tolerations:
       - key: node.kubernetes.io/unreachable
         operator: Exists
         effect: NoExecute
--- HelmRelease: rook-ceph/rook-ceph-cluster Ingress: rook-ceph/rook-ceph-dashboard

+++ HelmRelease: rook-ceph/rook-ceph-cluster Ingress: rook-ceph/rook-ceph-dashboard

@@ -1,9 +1,9 @@

 ---
+kind: Ingress
 apiVersion: networking.k8s.io/v1
-kind: Ingress
 metadata:
   name: rook-ceph-dashboard
   namespace: rook-ceph
 spec:
   rules:
   - host: rook.parsec.sh
--- HelmRelease: rook-ceph/rook-ceph-cluster CephBlockPool: rook-ceph/ceph-blockpool

+++ HelmRelease: rook-ceph/rook-ceph-cluster CephBlockPool: rook-ceph/ceph-blockpool

@@ -1,9 +1,9 @@

 ---
+kind: CephBlockPool
 apiVersion: ceph.rook.io/v1
-kind: CephBlockPool
 metadata:
   name: ceph-blockpool
   namespace: rook-ceph
 spec:
   failureDomain: host
   replicated:
--- HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph

+++ HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph

@@ -4,21 +4,20 @@

 metadata:
   name: rook-ceph
   namespace: rook-ceph
 spec:
   monitoring:
     enabled: true
+  cephVersion:
+    image: quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
   cephConfig:
     global:
       bdev_async_discard_threads: '1'
       bdev_enable_discard: 'true'
       device_failure_prediction_mode: local
       osd_class_update_on_start: 'false'
-  cephVersion:
-    allowUnsupported: false
-    image: quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
   cleanupPolicy:
     allowUninstallWithVolumes: false
     confirmation: ''
     sanitizeDisks:
       dataSource: zero
       iteration: 1
--- HelmRelease: rook-ceph/rook-ceph-cluster CephFilesystem: rook-ceph/ceph-filesystem

+++ HelmRelease: rook-ceph/rook-ceph-cluster CephFilesystem: rook-ceph/ceph-filesystem

@@ -1,9 +1,9 @@

 ---
+kind: CephFilesystem
 apiVersion: ceph.rook.io/v1
-kind: CephFilesystem
 metadata:
   name: ceph-filesystem
   namespace: rook-ceph
 spec:
   dataPools:
   - failureDomain: host
--- HelmRelease: rook-ceph/rook-ceph-cluster CephFilesystemSubVolumeGroup: rook-ceph/ceph-filesystem-csi

+++ HelmRelease: rook-ceph/rook-ceph-cluster CephFilesystemSubVolumeGroup: rook-ceph/ceph-filesystem-csi

@@ -1,9 +1,9 @@

 ---
+kind: CephFilesystemSubVolumeGroup
 apiVersion: ceph.rook.io/v1
-kind: CephFilesystemSubVolumeGroup
 metadata:
   name: ceph-filesystem-csi
   namespace: rook-ceph
 spec:
   name: csi
   filesystemName: ceph-filesystem
--- HelmRelease: rook-ceph/rook-ceph-cluster CephObjectStore: rook-ceph/ceph-objectstore

+++ HelmRelease: rook-ceph/rook-ceph-cluster CephObjectStore: rook-ceph/ceph-objectstore

@@ -1,9 +1,9 @@

 ---
+kind: CephObjectStore
 apiVersion: ceph.rook.io/v1
-kind: CephObjectStore
 metadata:
   name: ceph-objectstore
   namespace: rook-ceph
 spec:
   dataPool:
     erasureCoded:
--- HelmRelease: rook-ceph/rook-ceph-cluster PrometheusRule: rook-ceph/prometheus-ceph-rules

+++ HelmRelease: rook-ceph/rook-ceph-cluster PrometheusRule: rook-ceph/prometheus-ceph-rules

@@ -1,9 +1,9 @@

 ---
+kind: PrometheusRule
 apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
 metadata:
   labels:
     prometheus: rook-prometheus
     role: alert-rules
   name: prometheus-ceph-rules
   namespace: rook-ceph
--- HelmRelease: rook-ceph/rook-ceph-cluster VolumeSnapshotClass: rook-ceph/csi-ceph-filesystem

+++ HelmRelease: rook-ceph/rook-ceph-cluster VolumeSnapshotClass: rook-ceph/csi-ceph-filesystem

@@ -1,9 +1,9 @@

 ---
+kind: VolumeSnapshotClass
 apiVersion: snapshot.storage.k8s.io/v1
-kind: VolumeSnapshotClass
 metadata:
   name: csi-ceph-filesystem
   annotations:
     snapshot.storage.kubernetes.io/is-default-class: 'false'
 driver: rook-ceph.cephfs.csi.ceph.com
 parameters:
--- HelmRelease: rook-ceph/rook-ceph-cluster VolumeSnapshotClass: rook-ceph/csi-ceph-blockpool

+++ HelmRelease: rook-ceph/rook-ceph-cluster VolumeSnapshotClass: rook-ceph/csi-ceph-blockpool

@@ -1,9 +1,9 @@

 ---
+kind: VolumeSnapshotClass
 apiVersion: snapshot.storage.k8s.io/v1
-kind: VolumeSnapshotClass
 metadata:
   name: csi-ceph-blockpool
   annotations:
     snapshot.storage.kubernetes.io/is-default-class: 'false'
 driver: rook-ceph.rbd.csi.ceph.com
 parameters:

@parsec-renovate parsec-renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 0bacf13 to 01178d0 Compare January 13, 2026 20:12
@parsec-renovate parsec-renovate bot changed the title feat(helm): update chart rook-ceph-cluster ( v1.17.6 ➔ v1.18.8 ) feat(helm): update chart rook-ceph-cluster ( v1.17.6 ➔ v1.18.9 ) Jan 13, 2026
@parsec-renovate parsec-renovate bot changed the title feat(helm): update chart rook-ceph-cluster ( v1.17.6 ➔ v1.18.9 ) feat(helm): update chart rook-ceph-cluster ( v1.17.6 ➔ v1.19.0 ) Jan 20, 2026
@parsec-renovate parsec-renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 01178d0 to bfadf40 Compare January 20, 2026 20:31
@parsec-renovate parsec-renovate bot changed the title feat(helm): update chart rook-ceph-cluster ( v1.17.6 ➔ v1.19.0 ) feat(helm): update chart rook-ceph-cluster ( v1.17.6 ➔ v1.19.1 ) Feb 5, 2026
@parsec-renovate parsec-renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from bfadf40 to 8a3fa4a Compare February 5, 2026 22:13
@parsec-renovate parsec-renovate bot changed the title feat(helm): update chart rook-ceph-cluster ( v1.17.6 ➔ v1.19.1 ) feat(helm): update chart rook-ceph-cluster ( v1.17.6 ➔ v1.19.2 ) Feb 24, 2026
@parsec-renovate parsec-renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 8a3fa4a to 8e9687f Compare February 24, 2026 19:30