
Faster coco eval support #28

Merged 3 commits into Peterande:master on Nov 4, 2024

Conversation

MiXaiLL76
Contributor

Replaces #27.

After the changes to .pre-commit-config, it was easier to create a new PR than to modify the old one.

In general, you can check my code after the release of faster-coco-eval==1.6.5; I am currently preparing the packages:
https://github.com/MiXaiLL76/faster_coco_eval/actions/runs/11652636914

It would be great if my library works in multi-GPU validation. Before this repository, no one had tried to use the library this way.

@MiXaiLL76
Contributor Author

MiXaiLL76 commented Nov 3, 2024

Test command:

CUDA_VISIBLE_DEVICES=0, torchrun --master_port=7777 --nproc_per_node=1 train.py -c configs/dfine/dfine_hgnetv2_s_coco.yml --test-only -r dfine_s_coco.pth

Test output:

[screenshot: COCO evaluation metrics from the test run]


I found this in your code:
https://raw.githubusercontent.com/Peterande/storage/refs/heads/master/logs/coco/dfine_s_coco_log.txt
[screenshot: metrics excerpt from that log]

As you can see, my library performs the calculations faster, although there are small discrepancies in the metrics, perhaps due to floating-point rounding. Can you check it yourself? I'll review my code tomorrow; perhaps I need to "collect" the calculations more carefully.

It additionally computes two extra metrics, AR_50 and AR_75. Few people use them in general, but I needed them in my work, so I left them in.

Accessing the metrics is also easier than in pycocotools: there is a coco_eval.stats_as_dict attribute.

An example can be seen here:
https://github.com/MiXaiLL76/faster_coco_eval/blob/75fd297faf50baa892a6e09b6fa89b08831b7ab3/tests/test_world_evalutor.py#L66-L94
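For quick reference, here is a minimal sketch of that access pattern, based on the library's pycocotools-style interface (the file names are placeholders):

```python
from faster_coco_eval import COCO, COCOeval_faster

# Placeholder annotation/prediction files for illustration.
coco_gt = COCO("instances_val2017.json")
coco_dt = coco_gt.loadRes("predictions.json")

coco_eval = COCOeval_faster(coco_gt, coco_dt, "bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()

# Metrics by name instead of positional indices into coco_eval.stats;
# the dict also exposes the extra AR_50 and AR_75 values.
print(coco_eval.stats_as_dict)
```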

@Peterande
Owner

Peterande commented Nov 4, 2024

The code is now running successfully. I'll test other datasets later, and I plan to keep the original pycocotools implementation as an optional backup.
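One way such an optional fallback could be wired is sketched below (an assumption about the eventual wiring, not D-FINE's actual code; the import names follow faster-coco-eval's pycocotools-style interface):

```python
# Sketch: prefer faster-coco-eval when it is installed, otherwise fall
# back to the original pycocotools implementation.
try:
    from faster_coco_eval import COCO
    from faster_coco_eval import COCOeval_faster as COCOeval
except ImportError:
    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval
```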

@Peterande
Owner

I've observed that the time it takes to accumulate evaluation results has really been accelerated a lot! But the wait between these two lines of output seems longer; do you know the possible reasons?
Averaged stats:
Accumulating evaluation results...

Peterande merged commit 7240fb1 into Peterande:master on Nov 4, 2024
2 checks passed
@Peterande
Owner

> I've observed that the time it takes to accumulate evaluation results has really been accelerated a lot! But the wait between these two lines of output seems longer; do you know the possible reasons?
> Averaged stats:
> Accumulating evaluation results...

After comparison, there is indeed a speedup. I would be happy to incorporate faster-coco-eval into D-FINE, but please help check whether this behavior is normal. Thanks!

@MiXaiLL76
Contributor Author

> I've observed that the time it takes to accumulate evaluation results has really been accelerated a lot! But the wait between these two lines of output seems longer; do you know the possible reasons?
> Averaged stats:
> Accumulating evaluation results...
>
> After comparison, there is indeed a speedup. I would be happy to incorporate faster-coco-eval into D-FINE, but please help check whether this behavior is normal. Thanks!

This is probably normal. In that interval, all the per-image results computed with numpy are merged, which is generally not fast.

The accumulate function also runs there. (Mine is written in C++, so the accumulated data has to be passed across the Python/C++ boundary.)
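To see where the time actually goes, one could time each phase separately. A sketch (placeholder file names; single-process, so it ignores the multi-GPU gather step):

```python
import time

from faster_coco_eval import COCO, COCOeval_faster

# Placeholder annotation/prediction files for illustration.
coco_gt = COCO("instances_val2017.json")
coco_dt = coco_gt.loadRes("predictions.json")
coco_eval = COCOeval_faster(coco_gt, coco_dt, "bbox")

# evaluate() produces the per-image results, accumulate() merges them
# (the C++ step mentioned above), summarize() prints the metric table.
for phase in ("evaluate", "accumulate", "summarize"):
    t0 = time.perf_counter()
    getattr(coco_eval, phase)()
    print(f"{phase}: {time.perf_counter() - t0:.2f} s")
```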
