
Conversation

@yzh119 (Collaborator) commented Oct 20, 2025

📌 Description

Some invalid (mixed-precision) configurations are generated in the JIT warmup function gen_prefill_attention_modules.
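
For illustration, the fix amounts to an early continue in the dtype enumeration loop. A minimal sketch of the pattern follows; the loop structure, parameter names, and backend list here are hypothetical, not the exact code in tests/test_helpers/jit_utils.py:

    import itertools

    def gen_prefill_attention_modules(backends, q_dtypes, kv_dtypes):
        # Sketch only: enumerate (backend, q_dtype, kv_dtype) combinations
        # for JIT warmup, skipping the ones the fa3 template cannot compile.
        for backend, q_dtype, kv_dtype in itertools.product(
            backends, q_dtypes, kv_dtypes
        ):
            if backend == "fa3" and q_dtype != kv_dtype:
                continue  # fa3 template does not support mixed precision
            yield backend, q_dtype, kv_dtype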

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Tests
    • Updated test infrastructure to enhance compatibility handling for specific hardware acceleration scenarios, improving test robustness for mixed-precision configurations.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @yzh119, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses and resolves an issue where unit tests were failing on the Hopper platform. The root cause was identified as invalid configurations being generated during JIT warmup, particularly when attempting to use the fa3 attention template with mixed precision. The change introduces a safeguard to ensure that fa3 is only utilized with uniform precision, thereby eliminating the source of the test failures and ensuring robust operation.

Highlights

  • Unit Test Fix: Resolved failed unit tests specifically observed on the Hopper platform.
  • Mixed Precision Handling: Implemented a check within the JIT warmup function gen_prefill_attention_modules to prevent the fa3 (Flash Attention v3) template from being generated with mixed-precision configurations, as it does not support them.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command              Description
Code Review           /gemini review       Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary      Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist  Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help         Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


coderabbitai bot commented Oct 20, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

A targeted control-flow guard was added to gen_prefill_attention_modules in the test helpers to skip mixed-precision configurations for the FA3 backend path when query and key-value dtypes differ.

Changes

Cohort / File(s): FA3 mixed-precision guard (tests/test_helpers/jit_utils.py)
Summary: Added a conditional check within the FA3 backend branch to skip mixed-precision configurations where q_dtype != kv_dtype, preventing invalid config combinations during prefill attention module generation.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

🐰 A guard for the FA3, so precise and lean,
Mixed precisions now gracefully skip the scene,
When queries and keys take diverging roads,
The test helper lightens its computational loads,
Efficiency hops forward, clean and serene! 🌿

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description Check (❓ Inconclusive): The pull request follows the required template structure, with all major sections present, and the author marked the pre-commit and tests-added checklist items as complete. However, the Description is brief and vague, stating only that invalid configurations are generated without explaining what the fix does or why it is needed; the Related Issues section is empty (acceptable if no specific issue is linked); and the "All tests are passing" item is unchecked, which raises concerns about readiness for merge. Resolution: expand the description to detail the specific changes and their rationale, and clarify the test status before merging.

✅ Passed checks (1 passed)

  • Title Check (✅ Passed): The title "unittest: fix failed unittest on hopper" is specific and clear about the context (a failing unittest on Hopper GPUs) and accurately reflects that the PR fixes a test failure, though it does not convey the underlying technical change: a control-flow guard that skips mixed-precision configurations in the FA3 path of gen_prefill_attention_modules.

Comment @coderabbitai help to get the list of available commands and usage tips.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request fixes a failing unit test on Hopper by preventing the JIT warmup function gen_prefill_attention_modules from generating configurations for mixed-precision attention with the fa3 backend, which does not support it. The change is correct for the test helper function. However, the review identifies a potential issue where the root cause in the main library code is not addressed. The function determine_attention_backend can still incorrectly select the fa3 backend for mixed-precision cases, which could lead to runtime errors when using the public APIs. It is strongly recommended to fix this at the source by updating is_fa3_backend_supported.

Comment on lines +163 to +164
    if q_dtype != kv_dtype:
        continue  # fa3 template does not support mixed precision
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Severity: high

This correctly prevents generating an invalid configuration for the fa3 backend in this test helper. However, this only addresses the symptom in the JIT warmup. The root cause appears to be in flashinfer.utils.determine_attention_backend, which can still select the fa3 backend for mixed-precision cases because is_fa3_backend_supported doesn't perform this check. This could lead to runtime errors in user-facing APIs like single_prefill_with_kv_cache. A more robust fix would be to add the mixed-precision check to is_fa3_backend_supported to prevent incorrect backend selection throughout the library.
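
A hedged sketch of the suggested source-level fix; the simplified signature below is assumed for illustration and is not the actual is_fa3_backend_supported signature in flashinfer.utils:

    import torch

    def is_fa3_backend_supported(q_dtype: torch.dtype, kv_dtype: torch.dtype) -> bool:
        # Hypothetical simplified signature; the real function takes more
        # parameters (head dimension, compute capability, etc.).
        if q_dtype != kv_dtype:
            # Proposed addition: reject mixed precision up front so that
            # determine_attention_backend never selects fa3 for these cases.
            return False
        # Existing capability checks would follow here.
        return True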

@yzh119 yzh119 merged commit 28c8070 into flashinfer-ai:main Oct 28, 2025
4 checks passed
