Currently during an image pull, multiple copies of each layer are maintained in memory before it is written out (possibly to secure storage):

- the vector containing the layer's blob, processed via the above steps, is finally unpacked to the destination folder: https://github.com/confidential-containers/image-rs/blob/c98c4916cbe8d2c4f8933b343d20b7fdb338f50e/src/pull.rs#L198

Pulling docker.io/library/python:latest using an image-rs test program on a Linux machine shows the following memory consumption under heaptrack:

peak heap memory consumption: 1.5GB after 11.733s
peak RSS (including heaptrack overhead): 888.8MB
total memory leaked: 50.9MB (10.6kB suppressed)

Given a kata-VM's default memory configuration of 2GB, 1.5GB of memory consumption during an image pull may cause slow performance and possibly an OOM.
#24 reduces memory usage while pulling encrypted images.
Hi @anakrish, is this resolved by #96? (Similar comment in #67.)