Hello, I am reimplementing your Dance Revolution code. I found that in your config file you set lambda_v to 0.01. In that case, with a 40-epoch training run, every mask in the training forward pass becomes all ones and there is no prediction_mask at all. Could you tell me the purpose of this? Thank you.
The relevant lines are:

```python
groundtruth_mask = torch.ones(seq_len, self.condition_step)
prediction_mask = torch.zeros(seq_len, int(epoch_i * self.lambda_v))
mask = torch.cat([prediction_mask, groundtruth_mask], 1).view(-1)[:seq_len]  # for random
```
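For context, with lambda_v = 0.01 the term `int(epoch_i * self.lambda_v)` stays 0 for every epoch index below 100, so the prediction window never opens during a 40-epoch run. A minimal sketch reproducing this (seq_len and condition_step here are made-up values, not taken from the repo's config):

```python
import torch

seq_len, condition_step, lambda_v = 100, 10, 0.01  # hypothetical values

for epoch_i in (1, 40, 100, 200):
    pred_len = int(epoch_i * lambda_v)  # width of the prediction window; 0 until epoch 100
    groundtruth_mask = torch.ones(seq_len, condition_step)
    prediction_mask = torch.zeros(seq_len, pred_len)
    mask = torch.cat([prediction_mask, groundtruth_mask], 1).view(-1)[:seq_len]
    # ones in mask -> ground-truth frames are fed in; zeros -> the model's own predictions
    print(f"epoch {epoch_i}: pred_len={pred_len}, zeros in mask={int((mask == 0).sum())}")
```

The first two iterations print `pred_len=0` and an all-ones mask, i.e. under this config a 40-epoch run is effectively trained with full teacher forcing throughout.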