Faster coco eval support #28
Conversation
Test code:

```shell
CUDA_VISIBLE_DEVICES=0 torchrun --master_port=7777 --nproc_per_node=1 train.py -c configs/dfine/dfine_hgnetv2_s_coco.yml --test-only -r dfine_s_coco.pth
```

Test output:

I found in your code:

As you can see, my library performs the calculations faster, although there are small discrepancies in the metrics, perhaps due to floating-point rounding. Can you check it yourself? I'll check my code tomorrow; perhaps I need to "collect" the calculations more intelligently. It additionally calculates two variables, AR_50 & AR_75. Access to the variables is easier than in pycocotools: there is a variable `coco_eval.stats_as_dict`. An example can be seen here:
The code is now running successfully. I'll test other datasets later, and I plan to keep the original pycocotools implementation as an optional backup.
I've observed that the time it takes to accumulate evaluation results has indeed been reduced a lot! But the wait between these two lines of code seems longer; do you know the possible reasons?
After comparison, there is indeed an increase in speed. I would be happy to incorporate faster-coco-eval into D-FINE, but please help check whether this behavior is normal. Thanks!
This is probably normal. In that interval, all the per-image results that NumPy produced are merged, which is generally not fast, and the accumulate function runs there as well. (I have it written in C++, so the accumulated data has to be passed over to it.)
#27
After the changes to .pre-commit-config, it is easier to create a new PR than to modify the old one)
In general, you can check my code after the release of faster-coco-eval==1.6.5.
I am currently preparing the packages:
https://github.com/MiXaiLL76/faster_coco_eval/actions/runs/11652636914
It would be cool if my library worked in multi-GPU validation. Before this repository, no one had tried to use it this way)