tests(transformers): increase the RoCBert/Speech2Text/UniSpeech models test error passing threshold for BF16 #1359
Conversation
Summary of Changes
Hello @alien-0119, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request adjusts the acceptable error margins for tests involving the Speech2Text and UniSpeech models when operating with BFloat16 precision. This change prevents test failures that arise from the inherent numerical characteristics of BF16, ensuring that tests pass within a reasonable tolerance. Additionally, a minor refactoring improves how random number generation is managed within the testing utilities.
Code Review
This pull request aims to fix failing tests for the Speech2Text and UniSpeech models on BF16 by increasing the error thresholds. However, the root cause of the increased error appears to be a subtle but significant change in tests/transformers_tests/models/modeling_common.py concerning random number generation. The new get_rng() function resets the random seed on every call, leading to repetitive random sequences, which differs from the previous global RNG behavior. This likely alters the test data in a way that increases the discrepancy between framework implementations. My review focuses on fixing this underlying issue, which should make the threshold increases unnecessary. I've suggested a fix for the RNG logic and recommended reverting the threshold changes pending re-evaluation after the fix.
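The failure mode the review describes can be reproduced with a small self-contained sketch (the `get_rng` below is a hypothetical reconstruction of the pattern, not the repository's actual code): a helper that re-seeds on every call hands every caller a generator in the same initial state, so supposedly independent test inputs repeat, whereas a single shared generator keeps advancing.

```python
import random

def get_rng(seed=42):
    # Anti-pattern sketched by the review: re-seed on every call,
    # so every returned generator starts from the same state.
    rng = random.Random()
    rng.seed(seed)
    return rng

# Two separate calls replay the identical "random" sequence:
seq_a = [get_rng().randint(0, 99) for _ in range(1)] + [0] * 0
rng_a = get_rng()
seq_a = [rng_a.randint(0, 99) for _ in range(5)]
rng_b = get_rng()
seq_b = [rng_b.randint(0, 99) for _ in range(5)]
assert seq_a == seq_b  # repetitive sequences across calls

# By contrast, one shared generator keeps advancing its state:
global_rng = random.Random(42)
seq_c = [global_rng.randint(0, 99) for _ in range(5)]
seq_d = [global_rng.randint(0, 99) for _ in range(5)]
assert seq_c != seq_d  # later draws continue the sequence
```

This is why the reviewer suspects the new helper changes the generated test tensors, and with them the measured error between framework implementations.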
What does this PR do?
Fixes # (issue)
global_rng, a random number generator held in a global variable, stops working after it is called a second time within a single process. To work around this, the function get_rng creates a new random number generator on every call, which means different calls to helpers such as ids_numpy and floats_numpy each receive a generator reset to the same initial state.
Increase the BF16 test error thresholds for the RoCBert/Speech2Text/UniSpeech models to avoid unnecessary test failures on valid results.
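The motivation for per-dtype thresholds is that BF16 keeps only 8 mantissa bits, so its relative rounding error is on the order of 2**-8 (about 0.4%), far larger than FP32's 2**-24. A minimal sketch of dtype-aware tolerance checking, with entirely hypothetical threshold values and helper names (the repository's actual thresholds and test helpers will differ):

```python
import numpy as np

# Hypothetical example thresholds, scaled to each dtype's precision.
THRESHOLDS = {"fp32": 5e-4, "fp16": 5e-3, "bf16": 5e-2}

def max_relative_error(ref, out):
    """Largest elementwise relative error between two arrays."""
    ref = np.asarray(ref, dtype=np.float64)
    out = np.asarray(out, dtype=np.float64)
    return float(np.max(np.abs(ref - out) / (np.abs(ref) + 1e-12)))

def passes(ref, out, dtype):
    return max_relative_error(ref, out) <= THRESHOLDS[dtype]

ref = np.array([1.0, 2.0, 3.0])
bf16_like = ref * (1 + 3e-3)  # ~0.3% error, typical of BF16 rounding

assert not passes(ref, bf16_like, "fp32")  # too strict for BF16 output
assert passes(ref, bf16_like, "bf16")      # passes the relaxed threshold
```

The same numerically valid BF16 output fails an FP32-sized tolerance but passes a BF16-sized one, which is the situation the threshold increase addresses.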
refer to: #1357
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
@xxx