Description
Hello,
Thank you for developing an excellent tool!
In the `configure_device` function in `configuration.py`, GPU selection is currently hard-coded as `device = torch.device("cuda")`, with no way to specify a particular GPU index. It would be more convenient to add an argument that lets users select the device explicitly (e.g., `cuda:0`, `cuda:1`, etc.). Are there any arguments against implementing this?
Additional Question:
Regarding GPU mode in the inference pipeline: does the pipeline actually use multiple CPUs when running on the GPU? The line `print(f"Using device {str(device).upper()} with {ncpu} CPUs")` (line 33 in `configure_device`) suggests multi-CPU usage, but I couldn't find evidence of this in the code. If multiple CPUs are not used in GPU mode, this print statement might be slightly misleading.
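If the CPU count really is irrelevant in GPU mode, one option would be to mention it only when it matters. A small sketch (the helper name `describe_device` is hypothetical, not from the codebase):

```python
def describe_device(device_type: str, ncpu: int) -> str:
    """Hypothetical sketch: only report the CPU count in CPU mode,
    where it actually affects execution."""
    if device_type == "cpu":
        return f"Using device CPU with {ncpu} CPUs"
    # In GPU mode, omit the CPU count to avoid implying multi-CPU use.
    return f"Using device {device_type.upper()}"
```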
Thank you in advance