
[BUG][OpenSearch] Unable to use existing PVC in Opensearch value #517

Open
Uritassa opened this issue Feb 16, 2024 · 4 comments
Labels
bug Something isn't working

Comments


Uritassa commented Feb 16, 2024

Hi, I’m encountering an issue where I cannot attach a volume to the OpenSearch master pod. Instead, it attempts to create a new PVC.
I have a Helm chart with the following values for OpenSearch:

  persistence:
    enabled: true
    existingClaim: opensearch-data

I also tried these values:


  persistence:
    enabled: true
    volumeClaimTemplates:
      - metadata:
          name: opensearch-data

Additional Information:

Helm chart version: latest
Kubernetes version: 1.26+
OpenSearch version: latest

Environment:

Kubernetes cluster type: EKS
Operating system: AL2023
Helm version: 3.0+
Uritassa added the bug (Something isn't working) and untriaged (Issues that have not yet been triaged) labels on Feb 16, 2024
@Nawazsk89

Hi,

I am also facing the same issue. Could you please share an update if you found a solution?

@prudhvigodithi
Member

[Triage]
Thanks @Uritassa, @Nawazsk89. Have you tried creating a PV and PVC with the same labels and then deploying the Helm chart? This should not create new PVCs and should use the existing, pre-created ones. As an enhancement we could add a useExistingVolumes flag to the chart to do the same; please let us know if you are interested in contributing this.
Thanks
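For illustration, a rough sketch of that workaround (not taken from the chart documentation): assuming the chart defaults clusterName: opensearch-cluster and nodeGroup: master, the StatefulSet for replica 0 looks for a claim following the pattern <claim-template>-<statefulset>-<ordinal>, i.e. opensearch-cluster-master-opensearch-cluster-master-0. The PV name, label, hostPath, namespace, and size below are placeholders; the size should match the chart's persistence.size.

# Sketch only: a pre-created PV plus a PVC named to match what the StatefulSet
# generates for replica 0 (assumed pattern: <claim-template>-<statefulset>-<ordinal>).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: opensearch-data            # placeholder PV name
  labels:
    app: opensearch-master         # placeholder label, matched by the PVC selector below
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""             # empty string disables dynamic provisioning
  hostPath:
    path: /var/opt/opensearch      # placeholder path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: opensearch-cluster-master-opensearch-cluster-master-0
  namespace: default               # placeholder: use the release namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  selector:
    matchLabels:
      app: opensearch-master       # matches the labels on the PV above
  resources:
    requests:
      storage: 8Gi

With storageClassName set to the empty string on both objects, the claim binds statically to the labeled PV instead of triggering dynamic provisioning, so the StatefulSet reuses the pre-created claim rather than creating a new one.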

prudhvigodithi removed the untriaged (Issues that have not yet been triaged) label on Apr 1, 2024
getsaurabh02 moved this from 🆕 New to Later (6 months plus) on the Engineering Effectiveness Board on Jul 18, 2024

danim55 commented Nov 15, 2024

Hi, same problem here. I am not able to bind the OpenSearch StatefulSet to an already existing volume.

My goal is to assign a different storageClass to the PV created by default for the StatefulSet, but when I create a new storageClass with a different reclaim policy, the OpenSearch StatefulSet does not create the PV.

Trying the alternative approach of creating a new persistent volume associated with the new storage class and using it in the OpenSearch StatefulSet does not work either.

Helmfile:

  - name: opensearch
    namespace: some-namespace
    createNamespace: true
    chart: opensearch/opensearch
    version: v2.27.0
    values:
      - clusterName: "opensearch-cluster"
        nodeGroup: "master"
        masterService: "opensearch-cluster-master"
        roles:
          - master
          - ingest
          - data
          - remote_cluster_client
        replicas: 1
        minimumMasterNodes: 1
        extraEnvs:
          - name: DISABLE_SECURITY_PLUGIN
            value: "true"
        image:
          repository: "opensearchproject/opensearch"
          tag: "2.18.0"
          pullPolicy: "IfNotPresent"
        persistence:
          enabled: true
          storageClass: local-path-retain # name of my custom storageClass
          existingVolume: opensearch-data-volume # name of my custom persistent volume
          accessModes:
            - ReadWriteOnce
          labels:
            enabled: true
          size: 10Gi

My persistent volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: opensearch-data-volume
  labels:
    type: local
spec:
  storageClassName: local-path-retain
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/opt"

My storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-retain
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-driver.example-vendor.example
reclaimPolicy: Retain # default value is Delete
allowVolumeExpansion: true # with local path will not do anything
mountOptions:
  - discard # this might enable UNMAP / TRIM at the block storage layer
volumeBindingMode: WaitForFirstConsumer
parameters:
  guaranteedReadWriteLatency: "true" # provider-specific

@prudhvigodithi
Member

One other approach we can try is:

  • See how Helm is creating the PV (persistent volume) and PVC (persistent volume claim).
  • Get the PV's definition as YAML using kubectl.
  • Update the YAML to use the different storage class and apply the PV with kubectl.
  • Now a default helm install would create the PVC and bind it to the already existing PV. AFAIK only the PV name is important and required for the PVC; if it already exists, the chart should use the existing PV.

This way the concept of reusing the existing PV, which holds the actual data, is satisfied.
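For illustration, a minimal command-line sketch of those steps; <pv-name>, the output file name, the release name, and values.yaml are placeholders:

# 1. Find the PV that was provisioned for the OpenSearch data PVC
kubectl get pv

# 2. Export its definition
kubectl get pv <pv-name> -o yaml > opensearch-pv.yaml

# 3. Edit opensearch-pv.yaml: set spec.storageClassName to the desired class
#    (e.g. local-path-retain) and drop the claimRef uid/resourceVersion fields
#    so a new claim can bind to it

# 4. Apply the updated PV definition
kubectl apply -f opensearch-pv.yaml

# 5. Install the chart; the PVC created by the StatefulSet should bind to the
#    pre-existing PV
helm install opensearch opensearch/opensearch -f values.yaml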

Thank you @getsaurabh02
