Correct validation/test set split #103

@dbeinhauer

Description

Currently, our implementation of the validation/test set split is not ideal. During training we use only 10 batches (the validation set) to evaluate model performance. Ideally, the remaining batches should be reserved as a test set so we can measure actual performance on unseen data. Additionally, these sets should be selected in a shuffled manner across our different datasets (for the case where we have multiple model subsets).
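A minimal sketch of the intended split, assuming batches are grouped per dataset subset. Names such as `split_batches` and `batches_by_subset` are hypothetical, not part of the current codebase:

```python
import random

def split_batches(batches_by_subset, n_val=10, seed=0):
    """Shuffle batches from all subsets together, then split off a small
    validation set; the remainder becomes the held-out test set.
    This is an illustrative sketch, not the project's actual loader."""
    rng = random.Random(seed)
    # Interleave batches from every subset so both splits can cover all subsets.
    pool = [(name, batch)
            for name, batches in batches_by_subset.items()
            for batch in batches]
    rng.shuffle(pool)
    return pool[:n_val], pool[n_val:]  # (validation, test)

# Usage: two toy "subsets" with dummy batch payloads.
subsets = {"subset_a": list(range(20)), "subset_b": list(range(20, 35))}
val, test = split_batches(subsets, n_val=10)
print(len(val), len(test))  # 10 25
```

If a plain shuffle leaves some subset underrepresented in the 10-batch validation set, a stratified variant (taking a fixed number of batches from each subset before shuffling) would be the natural refinement.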

Metadata

Assignees

Labels

bug: Something isn't working

Projects

Status

Backlog

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests