Storage keeps disconnecting #1734
Hi, hmm I wonder if the nvmf connection is dropping or something like that. Also, a small dmesg snippet from around the time when this happens might give some clues as well. Thank you
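(In case it helps, a quick way to capture that is to pull time-stamped kernel messages and filter for storage-related entries; a sketch, with the filter terms being only suggestions:)

```sh
# Print kernel messages with human-readable timestamps and narrow
# down to NVMe/NVMe-oF related entries around the incident window:
dmesg --ctime | grep -iE 'nvme|nvmf|i/o error'
# Or follow new kernel messages live while reproducing the issue:
dmesg --follow --ctime
```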
Hey! Today at around 11:20 AM my postgres pod got disconnected. There are no dmesg logs around the time it got disconnected. I can still export the logs from all nodes if you want. My Kubernetes setup:
If you need direct access to Grafana, let me know! I also found evidence by looking at the volumes: the postgres volume was degraded, and it was on exactly the same node as shown in the Grafana metrics.
Here is my dump:
Thanks for the bundle! @dsharma-dc lately I've been seeing these messages pop up, any clue?
I also see in this bundle:
At around this time, the replica service seems to get stuck:
@AlexanderDotH would you be able to exec into the io-engine pod on node sn-2, in the io-engine container, and run:
Thank you
Sure! Here is the output:

```sh
/ # io-engine-client bdev list
/ # io-engine-client nexus list
/ # io-engine-client replica list
```
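(For anyone following along, the exec step looks roughly like this; a sketch, where the namespace and pod name are assumptions based on the `kube-storage` install shown later in this issue:)

```sh
# Find the io-engine pod scheduled on node sn-2 (name is illustrative):
kubectl get pods -n kube-storage -o wide | grep io-engine
# Exec into its io-engine container:
kubectl exec -it -n kube-storage <io-engine-pod-on-sn-2> -c io-engine -- sh
# Inside the container, query the data-plane state:
io-engine-client bdev list
io-engine-client nexus list
io-engine-client replica list
```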
Strange, are there also connection issues between the HA cluster and HA node agents?
I haven't noticed these errors recently. However, looking around, I get indications that it might have something to do with how networking works in the cluster.
Encryption is always disabled, but it's a dual-stack cluster with IPv4 and IPv6 with BGP. I also couldn't observe any packet drops or anything like that. Since I opened the issue there hasn't been a single outage, except today, and most of the degraded pods belong to the postgres (StackGres) cluster. Are heavy read and write workloads an issue? Maybe because it's constantly replicating the WAL files between each replica. Network throughput is not an issue, I guess; I ran multiple network benchmarks and it's always around 600-800 Gib/s. I could optimize it further using native routing, but it's too complicated for me. The 3 storage nodes provide the entire cluster with storage; is this setup more likely to throw errors and degrade performance? About performance: 4 of the 6 cores on each storage node are dedicated to the io-engine. I also tainted the storage nodes to block any random scheduling on them. (OpenEBS is given tolerations to deploy on the storage nodes.)
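(For reference, a node-to-node throughput check with iperf3 would look something like this; a sketch, assuming iperf3 is installed on the storage nodes:)

```sh
# On one storage node, start an iperf3 server:
iperf3 -s
# On another node, test against it over IPv4 and then IPv6
# to compare both dual-stack paths (addresses are placeholders):
iperf3 -c <node-ipv4-address>
iperf3 -6 -c <node-ipv6-address>
```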
From what I can see, the agent-ha-cluster tries to call the agent-ha-node; an example node address is "179.61.253.10:50053". Could the dual stack cause this?
Hard to say until we find the root cause.
That's also weird. In the past I used tuned for core isolation, but in the newest Kubernetes version I simply had to set it inside the helm command.
I'm not familiar with tuned; I set it up on the kernel boot cmdline. Maybe we can also check this from within mayastor and report whether we're isolated or not? @dsharma-dc?
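(For context, core isolation on the kernel command line typically looks like the sketch below; the core list must match the io-engine coreList, and the GRUB paths are the usual Rocky Linux ones, so treat them as assumptions:)

```sh
# In /etc/default/grub, add the isolation parameters for cores 2-5:
#   GRUB_CMDLINE_LINUX="... isolcpus=2-5 nohz_full=2-5 rcu_nocbs=2-5"
# Then regenerate the GRUB config and reboot:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# After reboot, verify which cores the kernel considers isolated:
cat /sys/devices/system/cpu/isolated
```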
I'll raise a separate ticket for this.
In tuned you can save everything, like the kernel boot cmdline, inside profiles, and also use other tooling with it. To isolate cores you can do it like this: https://arc.net/l/quote/tkgjmvqz. I looked at my profile and those lines are not present, and the relevant content isn't there either. Here is the weird part: despite not allowing any isolated cores, the io-engine uses those cores. (Of course, because I specified it inside the helm deployment, but the OS doesn't isolate them and it still works.) Attached is how I deploy openebs, along with the commands I previously ran to set it up. Partitioning is on another slide, but I think it's not necessary in this case.
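(For completeness, the tuned route on a RHEL-family distro like Rocky usually goes through the cpu-partitioning profile, roughly like this; a sketch, with package and file names per the stock tuned profiles:)

```sh
# Install the partitioning profile:
sudo dnf install tuned-profiles-cpu-partitioning
# Declare the cores to isolate (should match the io-engine coreList):
echo 'isolated_cores=2-5' | sudo tee /etc/tuned/cpu-partitioning-variables.conf
# Activate the profile; a reboot applies the boot-cmdline changes:
sudo tuned-adm profile cpu-partitioning
```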
You can also see the live metrics. Every time the io-engine has high CPU throttling, you can assume it's getting disconnected. I'll keep the user account online until there is a fix for this. Username: openebs
It can use those cores because nothing prevents it from using them. The io-engine pod is not using guaranteed QoS, so even with the static CPU manager policy the allowed core list for the process would be the entire list of cores, AIUI. Btw, on the nexus list you did above, did you paste the entire list? The nexus
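(For background, a pod only lands in the Guaranteed QoS class, a prerequisite for exclusive cores under the static CPU manager policy, when every container's requests equal its limits; a rough sketch, with values that are illustrative rather than the chart's defaults:)

```yaml
# Illustrative container resources for Guaranteed QoS: requests must
# equal limits for both cpu and memory, and the cpu value must be an
# integer for the static CPU manager to pin exclusive cores.
resources:
  requests:
    cpu: "4"
    memory: 4Gi
  limits:
    cpu: "4"
    memory: 4Gi
```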
I just went through my logs and found this:
I am also unable to find any errors with the nexus.
Full log:
I just pasted and formatted the list as markdown; I didn't remove anything.
For nexus
For the postgres volume 5b1da3b6-7890-4e54-ac08-9ef12bd50f9e, I see the volume got republished, which is why the nexus was shut down on node 179.61.253.33 and republished on node 179.61.253.31. The volume remained degraded for some time because it couldn't reconcile the replica count due to lock contention.
Ah, I see it in this new log file now, thank you @AlexanderDotH. But great: because the nexus is now destroyed, the lockout on the pool is removed. @AlexanderDotH, again I see some intermittent networking failures:
No problem :). How can I test the connectivity? Which pods should I ping?
Hey, I saw some packet drops today and thought it would be worth checking on openebs, and it happened again. Some pods got disconnected and nearly all are degraded (from …). Also attached is a broader log from the cluster. I even checked networking, but I couldn't find anything. Do you know anything new @tiagolobocastro? Sorry, I had to upload it to Google Drive because the dump is around 200MB.
PR to fix the control-plane locking the pool: openebs/mayastor-control-plane#862
I'm not sure tbh. @Abhinandan-Purkait any ideas on how to identify connection issues between ha-cluster and ha-node? I'm also again thinking about the fact that this is dual stack; let me see if I can set up a dual-stack cluster and check whether I also hit any issues there.
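(One rough way to probe the agent-ha-node gRPC port from inside the cluster; a sketch, where the `nicolaka/netshoot` debug image is just one common choice and the address/port come from the log line quoted earlier:)

```sh
# Spin up a throwaway debug pod and check raw TCP reachability of the
# agent-ha-node port seen in the logs:
kubectl run net-debug --rm -it --image=nicolaka/netshoot -- \
  nc -vz 179.61.253.10 50053
# Repeat against the node's IPv6 address to compare both
# dual-stack paths.
```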
Thanks for the PR! When will it be available via helm? Also, some benchmarking tools would be great, maybe implemented in the mayastor kubectl plugin? There are many fio tests, but no tests between all openebs nodes and no stress testing. Other question:
Hey, the locking PR is now released as part of 2.7.1. We recently did some benchmarking with CloudNativePG benchmarks, but we don't have any ready-made solution of our own. The community might have something to help here; I remember @kukacz was doing something similar at some point. For the rebuild: when a volume is published, the nexus automatically copies the data from one replica to another.
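(To keep an eye on a rebuild after a republish, the mayastor kubectl plugin can be used roughly like this; a sketch assuming the plugin is installed, and subcommand names may vary between releases:)

```sh
# List volumes and their current state (Online/Degraded):
kubectl mayastor get volumes
# Inspect the postgres volume mentioned above:
kubectl mayastor get volume 5b1da3b6-7890-4e54-ac08-9ef12bd50f9e
```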
Thank you! I guess I'll wait until the new release is out.
Describe the bug
Some of my volumes are randomly disconnecting for an unknown reason. I have 3 storage nodes with 6 cores (4 of them dedicated to mayastor) and 8GB of RAM. Some volumes are mounted via mayastor to the worker nodes. After some time, when I'm not looking at my cluster, volumes disconnect from the pods and leave them in a read-only state. The cluster is a fresh installation of native Kubernetes 1.31.0 set up with kubeadm. After setup everything works fine, and after some time it doesn't. The mayastor csi-node logs also say the volume is published and working.
To Reproduce
```sh
helm install openebs --namespace kube-storage openebs/openebs --create-namespace \
  --set mayastor.enabled=true \
  --set mayastor.crds.enabled=true \
  --set mayastor.etcd.clusterDomain=alex-cloud.internal \
  --set engines.local.lvm.enabled=false \
  --set engines.local.zfs.enabled=false \
  --set localprovisioner.enabled=false \
  --set 'mayastor.io_engine.coreList={2,3,4,5}' \
  --set zfs-localpv.localpv.tolerations[0].key=role \
  --set zfs-localpv.localpv.tolerations[0].operator=Equal \
  --set zfs-localpv.localpv.tolerations[0].value=storage \
  --set zfs-localpv.localpv.tolerations[0].effect=NoSchedule \
  --set zfs-localpv.zfsController.provisioner.tolerations[0].key=role \
  --set zfs-localpv.zfsController.provisioner.tolerations[0].operator=Equal \
  --set zfs-localpv.zfsController.provisioner.tolerations[0].value=storage \
  --set zfs-localpv.zfsController.provisioner.tolerations[0].effect=NoSchedule \
  --set mayastor.crds.csi.volumeSnapshots.enabled=false \
  --set mayastor.tolerations[0].key=role \
  --set mayastor.tolerations[0].operator=Equal \
  --set mayastor.tolerations[0].value=storage \
  --set mayastor.tolerations[0].effect=NoSchedule \
  --no-hooks
```
Set up storage pools from a partition on the storage nodes. (For this example, just one.)
apiVersion: "openebs.io/v1beta2" kind: DiskPool metadata: name: alex-cloud-sn-1-pool namespace: kube-storage spec: node: alex-cloud-sn-1 disks: ["/dev/sda2"]
Set up the storage class.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alex-cloud-default-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "3"
  fsType: "ext4"
allowVolumeExpansion: true
provisioner: io.openebs.csi-mayastor
```
Attach the volume to any pod or deployment.
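(For completeness, "attaching" here means creating a PVC against the storage class and mounting it in a pod, roughly like this sketch; names are illustrative:)

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: alex-cloud-default-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep infinity"]
      volumeMounts:
        - name: vol
          mountPath: /data
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: test-pvc
```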
Expected behavior
Stay connected no matter what happens.
Screenshots
Not really possible but I can provide logs.
**OS info (please complete the following information):**
Distro: Rocky Linux 9.4 (Blue Onyx)
Kernel version: (not provided)
OpenEBS version: newest from helm (2.7.0)
Additional context
We can also jump on a call or something; this drives me crazy. Here is my Discord: @alexdoth