Hi there, I have a question about how GPU memory gets allocated.
cudaMalloc() is the CUDA API that user programs call to reserve GPU memory. On the kernel side, my understanding is that the allocation proceeds in two phases:
(1) an ioctl() syscall to reserve the GPU memory;
(2) an mmap() syscall to map the reserved memory into the user's virtual address space.
Is this understanding correct? If so, could anyone point me to which ioctl() handler I should look at?
Thanks in advance for the help.