feat: support podman volatile containers #3684


Open
wants to merge 1 commit into master

Conversation

savely-krasovsky

Volatile container storage is used by all Podman Quadlets. Those containers are not actually volatile, but because of their managed nature, the Podman developers apparently decided to make them transient.

This fix will close #3648.
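For context, a minimal sketch of the idea, not this PR's actual diff: the storage path and the pared-down record struct below are assumptions based on Podman's default root storage layout. The point is simply to read volatile-containers.json alongside containers.json so that transient (Quadlet-started) containers are discovered too.

// Sketch only: merge Podman's persistent and volatile container records.
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"io/fs"
	"log"
	"os"
	"path/filepath"
)

// podmanContainer keeps only the fields this sketch needs; the real
// containers/storage records carry more.
type podmanContainer struct {
	ID    string   `json:"id"`
	Names []string `json:"names"`
	Layer string   `json:"layer"`
}

// loadContainers reads containers.json and, when present,
// volatile-containers.json from a containers/storage root.
func loadContainers(storageRoot string) ([]podmanContainer, error) {
	dir := filepath.Join(storageRoot, "overlay-containers")
	var all []podmanContainer
	for _, file := range []string{"containers.json", "volatile-containers.json"} {
		data, err := os.ReadFile(filepath.Join(dir, file))
		if errors.Is(err, fs.ErrNotExist) {
			// The volatile file only exists while transient containers run.
			continue
		}
		if err != nil {
			return nil, err
		}
		var cs []podmanContainer
		if err := json.Unmarshal(data, &cs); err != nil {
			return nil, fmt.Errorf("%s: %w", file, err)
		}
		all = append(all, cs...)
	}
	return all, nil
}

func main() {
	cs, err := loadContainers("/var/lib/containers/storage")
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range cs {
		fmt.Println(c.ID, c.Names)
	}
}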

@savely-krasovsky
Author

I tested the image with my Quadlet-based homelab, and I finally see all containers.

@dmenneck

We are really looking forward to this feature. When will this PR be merged?

@Leandros

Any progress on merging this?

@Leandros

Leandros commented May 29, 2025

I gave this a try, but it doesn't work with my setup.

cadvisor logs the following error:

May 29 09:30:43 dev-web01 cadvisor[919831]: W0529 09:30:43.724005  919831 manager.go:1169] Failed to process watch event {EventType:0 Name:/system.slice/9afb554c7be13cf2f53e6982781372ec6b149d76a00e606ce3e6ac483092820a-156a46ec6cd01eeb.service WatchSource:0}: failed to identify the read-write layer ID for container "9afb554c7be13cf2f53e6982781372ec6b149d76a00e606ce3e6ac483092820a". - open /var/lib/containers/storage/image/overlay/layerdb/mounts/9afb554c7be13cf2f53e6982781372ec6b149d76a00e606ce3e6ac483092820a/mount-id: no such file or directory

System:

[user@dev-web01 ~]$ podman --version
podman version 5.4.0

[user@dev-web01 ~]$ cat /etc/os-release
NAME="CentOS Stream"
VERSION="9"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="9"
PLATFORM_ID="platform:el9"
PRETTY_NAME="CentOS Stream 9"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:centos:centos:9"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://issues.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"

[user@dev-web01 ~]$ uname -a
Linux dev-web01 5.14.0-578.el9.aarch64 #1 SMP PREEMPT_DYNAMIC Mon Apr 7 19:48:38 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux

Command-line to run cadvisor:

ExecStart=/usr/local/bin/cadvisor \
    '--enable_metrics=app,cpu,process,disk,diskIO,memory,oom_event,network,tcp' \
    '--store_container_labels=true' \
    '--listen_ip=127.0.0.1' \
    '--port=8080' \
    '--prometheus_endpoint=/metrics'

@savely-krasovsky
Author

Did you follow the rootless setup instructions? The error you're seeing is unrelated to this PR.

@Leandros

Leandros commented May 29, 2025

Yes, I believe so.

My Podman isn't running rootless, and cadvisor runs directly on the host (as root) as a systemd service, not in a container (set up using the prometheus.prometheus.cadvisor Ansible role).

All containers running on the host are started via Quadlets. A test container that I ran with podman run ... appears in the metrics.

I believe it's the same error, just worded slightly differently (possibly due to recent changes in cadvisor?).

May 29 11:38:54 dev-web01 cadvisor[919831]: E0529 11:38:54.554210  919831 manager.go:1116] Failed to create existing container: /system.slice/nginx-proxy.service/libpod-payload-9afb554c7be13cf2f53e6982781372ec6b149d76a00e606ce3e6ac483092820a: failed to identify the read-write layer ID for container "9afb554c7be13cf2f53e6982781372ec6b149d76a00e606ce3e6ac483092820a". - open /var/lib/containers/storage/image/overlay/layerdb/mounts/9afb554c7be13cf2f53e6982781372ec6b149d76a00e606ce3e6ac483092820a/mount-id: no such file or directory
May 29 11:38:54 dev-web01 cadvisor[919831]: E0529 11:38:54.556964  919831 manager.go:1116] Failed to create existing container: /machine.slice/libpod-conmon-f684ed27596248eecb1491acbf2d52ef45e1e96747ef59aa339a12fb2eb520d9.scope: failed to identify the read-write layer ID for container "f684ed27596248eecb1491acbf2d52ef45e1e96747ef59aa339a12fb2eb520d9". - open /var/lib/containers/storage/image/overlay/layerdb/mounts/f684ed27596248eecb1491acbf2d52ef45e1e96747ef59aa339a12fb2eb520d9/mount-id: no such file or directory
May 29 11:38:54 dev-web01 cadvisor[919831]: E0529 11:38:54.559444  919831 manager.go:1116] Failed to create existing container: /machine.slice/libpod-f684ed27596248eecb1491acbf2d52ef45e1e96747ef59aa339a12fb2eb520d9.scope: failed to identify the read-write layer ID for container "f684ed27596248eecb1491acbf2d52ef45e1e96747ef59aa339a12fb2eb520d9". - open /var/lib/containers/storage/image/overlay/layerdb/mounts/f684ed27596248eecb1491acbf2d52ef45e1e96747ef59aa339a12fb2eb520d9/mount-id: no such file or directory

Unit file:

nginx-proxy.service
[Unit]
Wants=network-online.target
After=network-online.target
Description=NGINX Reverse Proxy
# Creates required directories.
After=baseline.service
Wants=baseline.service
SourcePath=/etc/containers/systemd/nginx-proxy.container
RequiresMountsFor=%t/containers
RequiresMountsFor=/var/run/containers-shared/
RequiresMountsFor=/var/www/web/
RequiresMountsFor=/etc/letsencrypt/

[X-Container]
Image=ghcr.io/.../nginx-proxy:master
AutoUpdate=registry
Volume=/var/run/containers-shared/:/var/run/shared/:z
Volume=/var/www/web/:/var/www/shared/:z
Volume=/etc/letsencrypt/:/etc/letsencrypt/:z
Environment=DEFAULT_SERVER_NAME=...
Environment=CERTIFICATE_PATH=...
Environment=DEFAULT_PUBLIC_PATH=/var/www/shared
Environment=PROXY_SOCKET_PATH=/var/run/shared/uvicorn.sock
Network=host
Notify=healthy
HealthCmd=curl -so /dev/null http://127.0.0.1
HealthInterval=5s
HealthRetries=10
HealthTimeout=5s

[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900
# If the service isn't stopping after 30s, kill it.
TimeoutStopSec=30
Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStop=/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
ExecStopPost=-/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name systemd-%N --cidfile=%t/%N.cid --replace --rm --cgroups=split --network host --sdnotify=healthy -d -v /var/run/containers-shared/:/var/run/shared/:z -v /var/www/web/:/var/www/shared/:z -v /etc/letsencrypt/:/etc/letsencrypt/:z --label io.containers.autoupdate=registry --env CERTIFICATE_PATH=/etc/letsencrypt/live/... --env DEFAULT_PUBLIC_PATH=/var/www/shared --env DEFAULT_SERVER_NAME=... --env PROXY_SOCKET_PATH=/var/run/shared/uvicorn.sock --health-cmd "curl -so /dev/null http://127.0.0.1" --health-interval 5s --health-retries 10 --health-timeout 5s ghcr.io/.../nginx-proxy:master

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target

@riyad
Contributor

riyad commented Jun 5, 2025

FWIW, volatile containers are not directly related to Quadlets. Podman uses the "volatile-containers.json" file for non-daemonized containers (i.e. containers not started with -d), and Quadlets happen to start containers non-daemonized to better manage their processes. If you start a container with podman run ... in the foreground, it should also turn up in "volatile-containers.json".

And yes, this PR should be considered for inclusion.
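
To see this split on a host, here is a hypothetical diagnostic (not part of cAdvisor; it assumes the default root storage path and naively substring-matches the ID against the raw JSON):

// probe.go: report which Podman record file mentions a container ID.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: probe <container-id>")
		os.Exit(2)
	}
	id := os.Args[1]
	// Default root storage location; adjust for rootless setups.
	dir := "/var/lib/containers/storage/overlay-containers"
	for _, file := range []string{"containers.json", "volatile-containers.json"} {
		data, err := os.ReadFile(filepath.Join(dir, file))
		if err != nil {
			continue // file may be absent, e.g. no transient containers running
		}
		if strings.Contains(string(data), id) {
			fmt.Println(id, "found in", file)
		}
	}
}

Per the explanation above, running it with the ID of a Quadlet-started container should point at volatile-containers.json, while a container started with podman run -d should show up in containers.json.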

Successfully merging this pull request may close these issues.

cAdvisor Fails to Retrieve Metrics for Podman Container Managed by systemd