image usage #21
Comments
Actually, the output of the build is 'libvgpu.so'. You can mount that '.so' into any container with LD_PRELOAD set, and it will support features such as CUDA_DEVICE_MEMORY_LIMIT and CUDA_DEVICE_SM_LIMIT.
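For example, a minimal sketch of mounting the library into an ordinary CUDA container with docker run; the host path and base image below are illustrative assumptions, not part of the project:

docker run --rm --gpus all \
  -v /path/to/libvgpu.so:/libvgpu.so:ro \
  -e LD_PRELOAD=/libvgpu.so \
  -e CUDA_DEVICE_MEMORY_LIMIT=1g \
  -e CUDA_DEVICE_SM_LIMIT=50 \
  nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

If the interception takes effect, the device memory reported inside the container should reflect the 1 GiB cap rather than the physical card's capacity.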
Yes, it's working in both container and VM environments. Is it possible to set CUDA_DEVICE_MEMORY_LIMIT and CUDA_DEVICE_SM_LIMIT dynamically? Once the process starts, the environment variables become fixed. Can these values be changed at runtime or reloaded from a ConfigMap or configuration server?
We haven't implemented that feature, but technically you can modify the values through vGPUmonitor: once the '.cache' file is mmapped, it will push the updated device memory limit and utilization limit to the corresponding container.
"After building the Docker image, we copy the build package into the target image. Does this mean that only our contain build package image will support the following features?
on
export CUDA_DEVICE_MEMORY_LIMIT=1g
export CUDA_DEVICE_SM_LIMIT=50
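For context, a hedged sketch of how these limits are typically applied together with the preloaded library; the library path and the application name are placeholders, and reading CUDA_DEVICE_SM_LIMIT as a utilization percentage follows the "utilization limit" wording above:

export LD_PRELOAD=/path/to/libvgpu.so   # illustrative path to the built library
export CUDA_DEVICE_MEMORY_LIMIT=1g      # cap the device memory visible to the process at 1 GiB
export CUDA_DEVICE_SM_LIMIT=50          # cap compute (SM) utilization, interpreted here as a percentage
./my_cuda_app                           # hypothetical CUDA workload; it reads these variables at start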