Driver crashes unexpectedly with Failed to read /host/proc/mounts requiring pod restart #284

@dienhartd

Description

/kind bug

NOTE: If this is a filesystem related bug, please take a look at the Mountpoint repo to submit a bug report

What happened?
Periodically, without warning, one of my Mountpoint S3 CSI driver pods crashes with gRPC errors until I delete it. This usually causes a dependent pod to fail to start because its PV is missing. The replacement pod created immediately after deletion works fine, but this requires manual intervention after noticing that the dependent pod is crashing.

What you expected to happen?
The error should not occur; mounts should succeed without manual intervention.

How to reproduce it (as minimally and precisely as possible)?
Unclear.

Anything else we need to know?:
Logs

I1104 11:59:40.249998       1 credential.go:95] NodePublishVolume: Using driver identity
I1104 11:59:40.250015       1 node.go:146] NodePublishVolume: mounting d-cluster at /var/lib/kubelet/pods/97e71fea-b356-4d87-a086-5f06fe651ea7/volumes/kubernetes.io~csi/s3-pv/mount with options [--allow-delete --allow-other --gid=100 --uid=1000]
E1104 11:59:40.250106       1 mount.go:214] Failed to read /host/proc/mounts on try 1: open /host/proc/mounts: invalid argument
E1104 11:59:40.250106       1 mount.go:214] Failed to read /host/proc/mounts on try 1: open /host/proc/mounts: invalid argument
E1104 11:59:40.250106       1 mount.go:214] Failed to read /host/proc/mounts on try 1: open /host/proc/mounts: invalid argument
E1104 11:59:40.250106       1 mount.go:214] Failed to read /host/proc/mounts on try 1: open /host/proc/mounts: invalid argument
E1104 11:59:40.350345       1 mount.go:214] Failed to read /host/proc/mounts on try 2: open /host/proc/mounts: invalid argument
E1104 11:59:40.350345       1 mount.go:214] Failed to read /host/proc/mounts on try 2: open /host/proc/mounts: invalid argument
E1104 11:59:40.350345       1 mount.go:214] Failed to read /host/proc/mounts on try 2: open /host/proc/mounts: invalid argument
E1104 11:59:40.350345       1 mount.go:214] Failed to read /host/proc/mounts on try 2: open /host/proc/mounts: invalid argument
E1104 11:59:40.450642       1 mount.go:214] Failed to read /host/proc/mounts on try 3: open /host/proc/mounts: invalid argument
E1104 11:59:40.450642       1 mount.go:214] Failed to read /host/proc/mounts on try 3: open /host/proc/mounts: invalid argument
E1104 11:59:40.450642       1 mount.go:214] Failed to read /host/proc/mounts on try 3: open /host/proc/mounts: invalid argument
E1104 11:59:40.450642       1 mount.go:214] Failed to read /host/proc/mounts on try 3: open /host/proc/mounts: invalid argument
E1104 11:59:40.550806       1 driver.go:136] GRPC error: rpc error: code = Internal desc = Could not mount "d-cluster" at "/var/lib/kubelet/pods/97e71fea-b356-4d87-a086-5f06fe651ea7/volumes/kubernetes.io~csi/s3-pv/mount": Could not check if "/var/lib/kubelet/pods/97e71fea-b356-4d87-a086-5f06fe651ea7/volumes/kubernetes.io~csi/s3-pv/mount" is a mount point: stat /var/lib/kubelet/pods/97e71fea-b356-4d87-a086-5f06fe651ea7/volumes/kubernetes.io~csi/s3-pv/mount: no such file or directory, Failed to read /host/proc/mounts after 3 tries: open /host/proc/mounts: invalid argument

Environment

  • Kubernetes version (use kubectl version):
    Client Version: v1.31.1
    Server Version: v1.30.5-eks-ce1d5eb

  • Driver version: v1.9.0
    The driver was installed through eksctl, i.e. eksctl create addon aws-mountpoint-s3-csi-driver

Was directed by @muddyfish to file this issue here: #174 (comment)
