Bad performance on MPI-Sintel #11
To fine-tune on the MPI-Sintel dataset you have to change the dataset options. I found the respective settings in [1] Deqing Sun, Xiaodong Yang, Ming-Yu Liu, and Jan Kautz. "PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume." CVPR 2018, [arXiv:1709.02371](https://arxiv.org/abs/1709.02371), and set them to:
and
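(The exact option values are not preserved in this thread. As a rough illustration only, a fine-tuning configuration of this kind might look like the sketch below; the dictionary keys and values are hypothetical and are not this repository's actual option names.)

```python
# Hypothetical sketch of Sintel fine-tuning options; key names are illustrative,
# not the identifiers used in this repository.
ds_opts = {
    'dataset': 'MPI-Sintel',          # switch from FlyingChairs/FlyingThings to Sintel
    'render_passes': ['clean', 'final'],
    'crop_size': (384, 768),          # random crops of the 436x1024 Sintel frames
    'batch_size': 4,
}
train_opts = {
    'init_lr': 1e-5,                  # lower learning rate than training from scratch
    'lr_schedule': 'fine-tune',       # short schedule with step-wise decay
    'robust_loss': True,              # the PWC-Net paper fine-tunes with a robust (q < 1) loss
}
```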
By fine-tuning on clean and final and evaluating on the training data I got:
However, compared to the originally reported numbers, the results on the test data are quite poor:
I have used the lg-6-2 network. Could this be an issue of overfitting? I would appreciate any help to get better results on the test data.
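(For reference, the metric compared throughout this thread is the average end-point error, EPE: the mean Euclidean distance between predicted and ground-truth flow vectors. A minimal NumPy sketch:)

```python
import numpy as np

def average_epe(flow_pred, flow_gt):
    """Mean end-point error between two H x W x 2 optical flow fields."""
    diff = flow_pred - flow_gt
    epe_map = np.sqrt(np.sum(diff ** 2, axis=-1))  # per-pixel L2 distance
    return float(np.mean(epe_map))
```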
I think the differences above can be listed as follows: 2. The data augmentations used in this code differ slightly from those in the original FlowNet paper; see #10. When training on Chairs, you should add them.
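(For context, FlowNet-style training uses both geometric augmentations, applied consistently to the image pair and the flow, and chromatic augmentations, applied to the images only. The sketch below shows two such augmentations in plain NumPy; it is illustrative and is not the code discussed in #10.)

```python
import numpy as np

def random_horizontal_flip(img1, img2, flow, p=0.5):
    """Geometric augmentation: flip both frames and the flow left-right.
    The horizontal flow component changes sign under a horizontal flip."""
    if np.random.rand() < p:
        img1, img2 = img1[:, ::-1].copy(), img2[:, ::-1].copy()
        flow = flow[:, ::-1].copy()
        flow[..., 0] *= -1.0
    return img1, img2, flow

def add_gaussian_noise(img, sigma=0.04):
    """Chromatic augmentation: additive Gaussian noise on an image in [0, 1]."""
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)
```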
Thanks, I will give it a try, but you mentioned FlowNet2. I want to replicate the PWC-Net results.
Did you replicate the results successfully?
Do you mean for FlowNet2 or PWC-Net?
PWC-Net.
Apart from the one reported above, I have not done any further experiments.
Thank you so much!
@tsenst Hi, I also have this problem. Did you find the reason and a corresponding solution?
@tsenst Hi, when I fine-tune the model on MPI-Sintel with your options
Hi~
No solution, probably because of the data augmentation.
@xianshunw @Blcony Hi, I tried to fine-tune or train on MPI-Sintel, but the loss and EPE are all 'nan'. The fine-tuning code is:
Have you ever encountered this problem?
Hi~
Well, maybe that issue does not solve my problem. I encounter this problem as early as iteration 200.
Thank you very much. Maybe I need to open a new issue.
Hi, I have met the same situation. Moreover, the NaN issue appears not only during fine-tuning but also during pretraining on Chairs_Things_mix. Did you find a solution?
When I trained the model with an RTX 3090 + TF 1.15, I got NaN in the first steps (global step 1, 2, etc.). I found that TF 1.x does not support the RTX 3090; TF 1.15.x uses CUDA 10.0, and this configuration reports no errors but results in NaN loss (even NaN values in the feature maps from the feature_estimator layer).
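(One way to locate where NaNs first appear in a TF 1.x graph is to wrap suspect tensors in `tf.debugging.check_numerics`, or to add a blanket check with `tf.add_check_numerics_ops()`, so the session raises an error naming the offending op instead of silently propagating NaNs. A minimal sketch; the tensors here are placeholders, not the actual PWC-Net graph.)

```python
import numpy as np
import tensorflow as tf  # TF 1.x

x = tf.placeholder(tf.float32, shape=[None, 2], name='features')
# Wrap a suspect tensor: the run fails with a descriptive error if it produces NaN/Inf.
y = tf.debugging.check_numerics(tf.log(x), message='NaN/Inf after log')

# Alternatively, insert a numerics check after every floating-point op in the graph.
check_op = tf.add_check_numerics_ops()

with tf.Session() as sess:
    # The zero entry yields -inf after log, so this run raises InvalidArgumentError
    # and reports which op produced the non-finite value.
    sess.run([y, check_op], feed_dict={x: np.array([[1.0, 0.0]], dtype=np.float32)})
```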
Hi,
I have used your pretrained model to fine-tune on MPI-Sintel. The EPE on the test set was 6.2. Have you tried it?