
Kindly explain the result terms and accuracy matching #42

Open

castlezone opened this issue Jun 21, 2022 · 8 comments

@castlezone

Hi there, I am running this experiment, "[LUCIR] w/ AANets":

python main.py --nb_cl_fg=50 --nb_cl=10 --gpu=0 --random_seed=1993 --baseline=lucir --branch_mode=dual --branch_1=ss --branch_2=free --dataset=cifar100

[Two screenshots of the training-log output attached]
Concerns:

  1. Which accuracy did you use in your Excel file?
  2. What do you mean by:
    I) Current Accuracy FC
    II) Current Accuracy (proto)
    III) Current Accuracy (Proto-UB)

Thank you very much; looking forward to hearing from you.

@yaoyao-liu (Owner)

yaoyao-liu commented Jun 22, 2022

Thanks for your interest in our work!

For your Q1: We use the accuracy of FC for LUCIR+AANets.

For your Q2: FC denotes results using FC (fully connected) classifiers. Proto denotes results using nearest-mean-of-exemplars classifiers. Proto-UB also uses nearest-mean-of-exemplars classifiers, but computes each class mean over all samples of that class; it is therefore an oracle setting that violates the benchmark protocol, so it is not part of the reported results and is shown only for comparison.
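For context, a nearest-mean-of-exemplars classifier assigns a sample to the class whose mean embedding is closest. A minimal NumPy sketch (the function and data below are illustrative, not from this repo):

```python
import numpy as np

def nearest_mean_classify(features, class_means):
    """Assign each embedding to the class whose mean embedding is
    closest in Euclidean distance."""
    # dists[i, c] = distance from feature i to the mean of class c
    dists = np.linalg.norm(features[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

# Toy example with two well-separated class means
means = np.array([[0.0, 0.0], [10.0, 10.0]])
samples = np.array([[0.5, -0.2], [9.8, 10.1]])
print(nearest_mean_classify(samples, means))  # [0 1]
```

The only difference between Proto and Proto-UB in this sketch would be how `class_means` is computed: from the stored exemplars only (Proto) or from all training samples of each class (Proto-UB).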

If you have any further questions, please do not hesitate to contact me.

Best,

Yaoyao

@castlezone (Author)

Thank you so much for getting back to me. So, in my case, if I want to put the average value in an Excel file, should I compute (73.64 + 72.07 + 74.11)/3 = 73.27? That is, should I take the average of these three FC values?
Thank you!

@yaoyao-liu (Owner)

The results we report are all from the FC setting.

If you need to run the experiments multiple times, you need to change the random seed and record the results for each run.
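For instance, if one final FC accuracy were recorded per seed (the seeds and numbers below are illustrative, not actual results), averaging them is straightforward:

```python
# Hypothetical per-seed results: {random_seed: final FC accuracy (%)}
fc_accuracy_by_seed = {1993: 73.64, 1994: 72.07, 1995: 74.11}

mean_acc = sum(fc_accuracy_by_seed.values()) / len(fc_accuracy_by_seed)
print(f"Average FC accuracy over {len(fc_accuracy_by_seed)} seeds: {mean_acc:.2f}")  # 73.27
```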

If you have any further questions, please do not hesitate to contact me.

@castlezone (Author)

Thank you so much! So in my case, according to this screenshot, my average would be 74.11%.
Thank you and good luck in the future.

@yaoyao-liu (Owner)

If you have any further questions, please do not hesitate to contact me.

@castlezone (Author)

castlezone commented Jul 20, 2022

Hi there, please tell me:

  1. If I want to run this experiment with N=10 and N=25 in POD-Net, how should I proceed? Thank you!
  2. Why does it compute the zeroth-phase accuracy at every phase? Does training phase by phase affect the zeroth-phase results? A screenshot is attached, with the relevant part highlighted in red.

[Screenshot of the training-log output attached]

Thank you for your help.

@yaoyao-liu (Owner)

For your Q1: You may use this project to run POD-AANets. All config files are available here: https://github.com/yaoyao-liu/POD-AANets/tree/main/options.

For your Q2: some metrics (e.g., the forgetting rate) use the zeroth phase accuracies.
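For reference, here is a generic sketch of one common forgetting measure that needs the earlier-phase (including zeroth-phase) accuracies; the exact definition used in a given paper may differ:

```python
def average_forgetting(acc_matrix):
    """acc_matrix[t][j]: accuracy on task j after training phase t (t >= j).

    Forgetting for task j = best accuracy seen in earlier phases minus
    final accuracy. Returns the average over all tasks except the last,
    which has no earlier phases to forget from.
    """
    final_phase = len(acc_matrix) - 1
    forgetting = []
    for j in range(final_phase):
        best_earlier = max(acc_matrix[t][j] for t in range(j, final_phase))
        forgetting.append(best_earlier - acc_matrix[final_phase][j])
    return sum(forgetting) / len(forgetting)

# Toy example: accuracy on task 0 drops from 80 to 70 by the final phase
acc = [
    [80.0],
    [75.0, 85.0],
    [70.0, 80.0, 90.0],
]
print(average_forgetting(acc))  # (80-70 + 85-80)/2 = 7.5
```

Because `best_earlier` can come from the zeroth phase, the zeroth-phase accuracy has to be recorded at every later phase to evaluate such metrics.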

If you have any further questions, please do not hesitate to contact me.

@castlezone (Author)

Thank you for your reply. Could you tell me how to see the number of parameters and the memory size for the experiment in the previously attached screenshot? Thank you once again.
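As a general note, PyTorch models expose their weights via `model.parameters()`, so the trainable-parameter count is typically obtained with `sum(p.numel() for p in model.parameters() if p.requires_grad)`. A dependency-free sketch of the same counting for fully connected layers (the shapes below are illustrative, not from this repo):

```python
def count_fc_parameters(layer_shapes):
    """Parameter count for a stack of fully connected layers.

    layer_shapes: list of (out_features, in_features) weight shapes;
    each layer is assumed to also have a bias of size out_features.
    """
    return sum(out_f * in_f + out_f for out_f, in_f in layer_shapes)

# Example: a 32 -> 64 -> 10 classifier head
# (64*32 + 64) + (10*64 + 10) = 2112 + 650 = 2762 parameters
print(count_fc_parameters([(64, 32), (10, 64)]))  # 2762
```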
