Hi, jiechen
Thanks a lot for the awesome work you have done. I am now using this repo for my research, and it works well on a single GPU. However, I have four Tesla K80s, so I am trying to train on multiple GPUs by adding
model = nn.DataParallel(model)
model = model.module
to train.py, but it doesn't work. Could you do me a favor and fix this problem? Training on a single GPU for 200 epochs would take a week, which is unacceptable.
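For what it's worth, the snippet above likely fails because `model = model.module` immediately unwraps the `DataParallel` wrapper, so training falls back to a single device. A minimal sketch of the usual pattern (using a hypothetical `nn.Linear` stand-in for the repo's actual network, since I don't know its definition) would be:

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for the repo's network.
model = nn.Linear(8, 4)

# Wrap for multi-GPU data parallelism. Do NOT follow this with
# `model = model.module` -- that discards the DataParallel wrapper
# and training runs on a single device again.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # splits each input batch across GPUs

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# The forward pass is called exactly like the bare module.
x = torch.randn(16, 8, device=device)
out = model(x)

# Only unwrap when saving, so checkpoint keys have no "module." prefix.
to_save = model.module if isinstance(model, nn.DataParallel) else model
state_dict = to_save.state_dict()
```

`model.module` is still useful at checkpoint time (as in the last lines), just not right after wrapping.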