[QA] Vizro AI dashboard tests improvements #935
Conversation
for more information, see https://pre-commit.ci
…y/vizro into score_tests_improvements
I like these enhancements, Alexey. There are a few comments, but other than that it's all good.
Wow I really like this whole Score tests logic you created! 💯 I think it's ready to be checked while we touch the vizro-ai code, and can help track the performance more easily.
I just have some minor questions and suggestions. Overall it's really cool!
…ests_improvements
for more information, see https://pre-commit.ci
…y/vizro into score_tests_improvements
Huge thanks for all of your comments!
Great improvement! 🎉
Did a light review, looks good! I think this is going in the right direction! Left a few small comments. ⭐ 💯
In general I think my main comments are:
- Make adding tests as easy as possible (ideally just another prompt + expectation pair, nothing else)
- Make adding and removing models etc as easy as possible (ideally just some test parametrization or so)
- Make the complexity of the dashboard a column in the report, and allow for individual names for newly added tests (so that in the future we can have something like `easy_1`, `easy_2`, `easy_abc`, etc.)
Other than that, I think it's exciting!
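The two suggestions above (a test is just a prompt + expectation pair, and models are a separate parametrization axis) could be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: the model names, the `DASHBOARD_CASES` structure, and the `fake_generate_dashboard` stand-in are all assumptions for demonstration.

```python
# Hypothetical sketch of the suggested test layout: adding a test case
# should only require appending a (id, prompt, expectation) tuple, and
# adding/removing a model should only touch the MODELS list.
import pytest

# Each entry: (test_id, prompt, minimum number of expected components).
# The ids (easy_1, easy_2, ...) double as the per-test names in the report.
DASHBOARD_CASES = [
    ("easy_1", "Create a dashboard with one bar chart.", 1),
    ("easy_2", "Create a dashboard with a chart and a filter.", 2),
]

# Assumed model names for illustration only.
MODELS = ["gpt-4o-mini", "gpt-4o"]


def fake_generate_dashboard(prompt: str, model: str) -> int:
    """Stand-in for the real vizro-ai call; returns a component count."""
    return prompt.count("chart") + prompt.count("filter")


@pytest.mark.parametrize("model", MODELS)
@pytest.mark.parametrize(
    "test_id,prompt,min_components",
    DASHBOARD_CASES,
    ids=[case[0] for case in DASHBOARD_CASES],
)
def test_dashboard_score(test_id, prompt, min_components, model):
    # Score check: the generated dashboard must contain at least the
    # expected number of components for this prompt/model pair.
    assert fake_generate_dashboard(prompt, model) >= min_components
```

With this shape, the complexity label (`easy`, `medium`, ...) can be derived from the test id and emitted as a column in the score report.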
…ests_improvements
…ests_improvements
…ests_improvements

Conflicts:
- vizro-ai/hatch.toml
- vizro-ai/pyproject.toml
- vizro-ai/tests/e2e/data_classes.py
- vizro-ai/tests/e2e/prompts.py
- vizro-ai/tests/e2e/pytest.ini
- vizro-ai/tests/e2e/test_dashboard.py
…ests_improvements
⭐ Well done ⭐
I think this is much better, and it looks much more usable. Please address the outstanding comments, but I think it's almost there.
Could we add more models in the parametrization?
Sure, added more.
Description
numpy lib
Reference to potential complexity prompts improvements -> #935 (comment)