Fix owlv2/bigbird ut #1366
Conversation
Summary of Changes
Hello @liuchuting, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request focuses on enhancing the stability and reliability of unit tests for the BigBird and OWLv2 models. It addresses floating-point precision issues in the BigBird tests by adjusting error thresholds, and resolves a numerical instability in the OWLv2 tests by preventing zero values in input data that caused infinite results during norm calculations. Additionally, it tackles a global random number generator state issue to ensure consistent test execution.
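For context on the global random number generator point above, a common pattern is to snapshot and restore the global NumPy RNG state around a test so that seeding in one test cannot leak into later tests. The helper below is an illustrative sketch, not code from this PR:

```python
import contextlib
import numpy as np

@contextlib.contextmanager
def preserve_np_random_state():
    """Save and restore NumPy's global RNG state around a block,
    so one test's seeding cannot affect tests that run afterwards."""
    state = np.random.get_state()
    try:
        yield
    finally:
        np.random.set_state(state)

# Usage inside a test:
with preserve_np_random_state():
    np.random.seed(0)
    sample = np.random.randint(1, 100, size=(2, 7))
```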
Code Review
This pull request addresses two separate unit test failures. For the BigBird model tests, the error threshold for the bf16 data type is increased, which is a reasonable adjustment to handle precision differences. For the OWLv2 model tests, the code prevents a test failure caused by inf values during norm calculation by ensuring the initial token ID in the test data is never zero. While the pull request description for the OWLv2 fix mentions changing the random seed, the implementation correctly patches the generated data directly, which is a more robust solution. I've provided one suggestion to make that part of the code more concise.
def prepare_config_and_inputs(self):
For conciseness and improved readability, these two lines can be replaced with a single call to np.maximum. This achieves the same goal of ensuring the first token ID is non-zero in a more idiomatic way.
Suggested change:
def prepare_config_and_inputs(self):
    input_ids[..., 0] = np.maximum(input_ids[..., 0], 1)
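For illustration, here is a minimal, self-contained sketch of what that np.maximum call does; the random generation below is a stand-in for the test's own helpers, not the repository code:

```python
import numpy as np

# Hypothetical stand-in for the test's randomly generated token ids.
rng = np.random.default_rng(0)
input_ids = rng.integers(0, 100, size=(2, 7))  # zeros are possible here

# Clamp only the first token of every sequence to be at least 1,
# so the masking path never produces an all-min row for the norm check.
input_ids[..., 0] = np.maximum(input_ids[..., 0], 1)

assert (input_ids[..., 0] != 0).all()
```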
What does this PR do?
fix bigbird fast ut
Refer to: pr1359. The changes in that PR lead to input modifications. Increase the BigBird model test error threshold for bf16 to avoid unnecessary test failures for valid results. Refer to: issue1357.
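As a rough sketch of what a looser bf16 threshold can look like in a test helper (the tolerance values and names below are assumptions for illustration, not the exact ones used in this repository):

```python
import numpy as np

# Hypothetical dtype-dependent tolerances; bf16 has far fewer mantissa
# bits than fp32, so its comparison threshold must be looser.
THRESHOLDS = {
    "fp32": 1e-4,
    "fp16": 5e-3,
    "bf16": 5e-2,  # raised so valid bf16 results no longer fail
}

def assert_close(actual, expected, dtype_key):
    tol = THRESHOLDS[dtype_key]
    np.testing.assert_allclose(actual, expected, rtol=tol, atol=tol)
```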
fix owlv2 fast ut
When zeros appear in input_ids, the masked positions are filled with torch.finfo(input_dtype).min, producing extremely large negative values such as -3.40282e+38 or -3.38953e+38. Computing np.linalg.norm on such values returns inf, because squaring them overflows the float32 range.
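A minimal reproduction sketch of that overflow, assuming float32 data (for illustration only, not the test code itself):

```python
import numpy as np
import torch

# Fill value used for masked positions when a token id is 0.
fill_value = torch.finfo(torch.float32).min  # roughly -3.4028e+38

# Squaring a value of this magnitude exceeds the float32 range,
# so the Euclidean norm evaluates to inf.
masked = np.full((4,), fill_value, dtype=np.float32)
print(np.linalg.norm(masked))  # inf
```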
Change the random seed to avoid zeros in the randomly generated input_ids.
Fixes #1357
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
@xxx