
Performance evaluation #112

Open
mzur opened this issue Oct 7, 2022 · 1 comment

Comments

@mzur
Member

mzur commented Oct 7, 2022

We could offer a simple method to evaluate the detection performance of MAIA. Users could choose to set aside a "test set" from the training annotations (if they use existing annotations or UnKnoT), specified as a percentage of the total annotations (e.g. 5%). Behind the scenes, whole images are successively excluded from the training set until around 5% of the annotations are excluded, balanced so that each label contributes roughly the same share of excluded annotations.
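
A minimal sketch of how such an image-wise split could work (the function name, input format, and the exact balancing heuristic are assumptions for illustration, not anything implemented in MAIA):

```python
import random
from collections import Counter

def split_test_set(image_annotations, fraction=0.05, seed=0):
    """Exclude whole images until roughly `fraction` of the annotations
    of each label are held out as a test set.

    image_annotations: dict mapping image_id -> list of label_ids,
    one entry per annotation on that image.
    Returns the set of excluded (test) image ids.
    """
    totals = Counter(l for labels in image_annotations.values() for l in labels)
    quota = {label: count * fraction for label, count in totals.items()}
    excluded = Counter()
    test_images = set()

    images = list(image_annotations)
    random.Random(seed).shuffle(images)

    for image in images:
        labels = Counter(image_annotations[image])
        # Exclude the image only if some of its labels are still below
        # their quota and none would grossly overshoot theirs.
        deficit = any(excluded[l] < quota[l] for l in labels)
        overshoot = any(excluded[l] + n > 2 * quota[l] for l, n in labels.items())
        if deficit and not overshoot:
            test_images.add(image)
            excluded.update(labels)
        if all(excluded[l] >= quota[l] for l in totals):
            break

    return test_images
```

Excluding whole images (rather than individual annotations) avoids leaking unlabeled instances of test objects into the training images; very rare labels may stay below their quota, which a real implementation would need to handle or report.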

Once the object detector is trained, evaluate it on the test set and report precision and recall. The results could also be displayed in an ECharts visualization.
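
A hedged sketch of the evaluation step, assuming axis-aligned bounding boxes and an IoU threshold of 0.5 (both are assumptions; MAIA could use a different overlap measure for its annotation shapes):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_recall(detections, ground_truth, iou_threshold=0.5):
    """Greedy one-to-one matching of detections to test annotations."""
    matched = set()
    true_positives = 0
    for det in detections:
        best, best_iou = None, iou_threshold
        for i, gt in enumerate(ground_truth):
            if i in matched:
                continue
            overlap = iou(det, gt)
            if overlap >= best_iou:
                best, best_iou = i, overlap
        if best is not None:
            matched.add(best)
            true_positives += 1
    precision = true_positives / len(detections) if detections else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```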

@mzur mzur added the student label Oct 7, 2022
@mzur mzur moved this to Medium Priority in BIIGLE Roadmap Oct 7, 2022
@mzur mzur removed the student label Oct 20, 2022
@mzur
Member Author

mzur commented Sep 25, 2023

Alternatively, users could be asked to start annotating a few (5, 10, or 20?) images of the volume while the MAIA job is running. These annotations could then be used automatically to test the performance of the detection model.

@mzur mzur changed the title Performance evaluation statistics Performance evaluation Sep 25, 2023