Bug: resource 'pvc-e45baa77-861e-4b1b-a2c6-13a12800d29c' has no referenced claim #77
Not that familiar with CDI. Does it not still have a PVC associated with the PV?
It has:

```yaml
# k get pv pvc-e45baa77-861e-4b1b-a2c6-13a12800d29c -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: linstor.csi.linbit.com
    volume.kubernetes.io/provisioner-deletion-secret-name: ""
    volume.kubernetes.io/provisioner-deletion-secret-namespace: ""
  creationTimestamp: "2024-10-15T13:09:10Z"
  finalizers:
  - external-provisioner.volume.kubernetes.io/finalizer
  - kubernetes.io/pv-protection
  - external-attacher/linstor-csi-linbit-com
  name: pvc-e45baa77-861e-4b1b-a2c6-13a12800d29c
  resourceVersion: "7172473"
  uid: c264ebc4-aab9-40c0-a8e9-f383ecad72ba
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 60Gi
  claimRef:
    name: vm-disk-docker
    namespace: tenant-kvaps
    resourceVersion: "5656694"
    uid: b0318bfb-11bb-4591-8450-8a7d0275087f
  csi:
    driver: linstor.csi.linbit.com
    volumeAttributes:
      linstor.csi.linbit.com/mount-options: ""
      linstor.csi.linbit.com/post-mount-xfs-opts: ""
      linstor.csi.linbit.com/remote-access-policy: "true"
      linstor.csi.linbit.com/uses-volume-context: "true"
      storage.kubernetes.io/csiProvisionerIdentity: 1728903553348-3300-linstor.csi.linbit.com
    volumeHandle: pvc-e45baa77-861e-4b1b-a2c6-13a12800d29c
  persistentVolumeReclaimPolicy: Delete
  storageClassName: replicated
  volumeMode: Block
status:
  lastPhaseTransitionTime: "2024-10-15T13:09:10Z"
  phase: Bound
---
# k get pvc -n tenant-kvaps vm-disk-docker -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/createdForDataVolume: 74ac0497-6bee-4de9-b758-46f2595ef3f9
    cdi.kubevirt.io/storage.bind.immediate.requested: ""
    cdi.kubevirt.io/storage.condition.running: "false"
    cdi.kubevirt.io/storage.condition.running.message: Import Complete
    cdi.kubevirt.io/storage.condition.running.reason: Completed
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.pod.phase: Succeeded
    cdi.kubevirt.io/storage.pod.restarts: "0"
    cdi.kubevirt.io/storage.populator.progress: 100.0%
    cdi.kubevirt.io/storage.preallocation.requested: "false"
    cdi.kubevirt.io/storage.usePopulator: "true"
    meta.helm.sh/release-name: vm-disk-docker
    meta.helm.sh/release-namespace: tenant-kvaps
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    vm-disk.cozystack.io/optical: "false"
    volume.beta.kubernetes.io/storage-provisioner: linstor.csi.linbit.com
    volume.kubernetes.io/storage-provisioner: linstor.csi.linbit.com
  creationTimestamp: "2024-10-15T13:08:54Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: containerized-data-importer
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
    helm.toolkit.fluxcd.io/name: vm-disk-docker
    helm.toolkit.fluxcd.io/namespace: tenant-kvaps
  name: vm-disk-docker
  namespace: tenant-kvaps
  ownerReferences:
  - apiVersion: cdi.kubevirt.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: DataVolume
    name: vm-disk-docker
    uid: 74ac0497-6bee-4de9-b758-46f2595ef3f9
  resourceVersion: "7172474"
  uid: b0318bfb-11bb-4591-8450-8a7d0275087f
spec:
  accessModes:
  - ReadWriteMany
  dataSource:
    apiGroup: cdi.kubevirt.io
    kind: VolumeImportSource
    name: volume-import-source-74ac0497-6bee-4de9-b758-46f2595ef3f9
  dataSourceRef:
    apiGroup: cdi.kubevirt.io
    kind: VolumeImportSource
    name: volume-import-source-74ac0497-6bee-4de9-b758-46f2595ef3f9
  resources:
    requests:
      storage: 60Gi
  storageClassName: replicated
  volumeMode: Block
  volumeName: pvc-e45baa77-861e-4b1b-a2c6-13a12800d29c
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 60Gi
  phase: Bound
```
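As a side note, here is a minimal client-go sketch (not taken from the thread) of how a controller would read the claimRef fields on this PV; the kubeconfig handling and the hard-coded PV name are illustrative assumptions. For the dump above, `Kind` and `APIVersion` come back empty while `Name`, `Namespace`, and `UID` are set:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig location (illustration only).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// PV name taken from the report above.
	pv, err := client.CoreV1().PersistentVolumes().Get(context.TODO(),
		"pvc-e45baa77-861e-4b1b-a2c6-13a12800d29c", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	ref := pv.Spec.ClaimRef
	if ref == nil {
		fmt.Println("PV has no claimRef at all")
		return
	}
	// For the PV shown above, kind and apiVersion print as empty strings
	// while name/namespace/uid are populated.
	fmt.Printf("kind=%q apiVersion=%q name=%q namespace=%q uid=%q\n",
		ref.Kind, ref.APIVersion, ref.Name, ref.Namespace, ref.UID)
}
```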
Interesting. I wonder why it does not have the apiVersion and Kind in the claimRef 🤔
I guess this is related to the Volume Populators feature, which CDI actively uses.
Yeah, I guess that makes sense. We should probably change the logic to fall back to assuming it is a PVC if no apiVersion/kind is set.
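A minimal sketch of what that fallback could look like (a hypothetical helper, not the actual piraeus-ha-controller code; the function name and second error wording are assumptions based on this thread):

```go
package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// checkReferencedClaim is a hypothetical stand-in for the claimRef validation
// discussed above. If the bound PV carries no apiVersion/kind in its claimRef
// (as seen with CDI's volume-populator flow), treat it as a core/v1
// PersistentVolumeClaim instead of reporting "has no referenced claim".
func checkReferencedClaim(pv *corev1.PersistentVolume) error {
	ref := pv.Spec.ClaimRef
	if ref == nil || ref.Name == "" {
		return fmt.Errorf("resource '%s' has no referenced claim", pv.Name)
	}
	// Proposed fallback: empty Kind/APIVersion means "assume it is a PVC".
	if ref.Kind == "" && ref.APIVersion == "" {
		return nil
	}
	// Explicitly typed references must actually point at a core/v1 PVC.
	if ref.Kind == "PersistentVolumeClaim" && (ref.APIVersion == "" || ref.APIVersion == "v1") {
		return nil
	}
	return fmt.Errorf("resource '%s' references a non-PVC claim %s/%s",
		pv.Name, ref.Kind, ref.Name)
}
```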
I run VMs in Kubernetes using KubeVirt and CDI; my volumes now have these fields:

piraeus-ha-controller/pkg/agent/agent.go, line 559 at commit 40d3ee8