Conversation

@kanpuriyanawab (Collaborator) commented May 19, 2025

Qwen3 MoE backbone output matches the Hugging Face reference with atol 1e-3!

[Screenshot: backbone output comparison, taken 2025-07-18]

On generate-output matching we are doing okay here: the generated token distribution is close to the Hugging Face one. We saw a similar issue in the Qwen3 base models. Random seed used: 123.

Keras generated text - What is Keras? Keras is a deep learning framework that is used for building and training neural networks. It is written in Python and can run on top

Keras token output tensor([[ 3838, 374, 730, 9247, 30, 730, 9247, 374, 264, 5538,
6832, 12626, 429, 374, 1483, 369, 4752, 323, 4862, 29728,
14155, 13, 1084, 374, 5326, 304, 13027, 323, 646, 1598,
389, 1909]], dtype=torch.int32)

HF Token outputs = tensor([[ 3838, 374, 730, 9247, 30, 3555, 374, 279, 6672, 1948,
730, 9247, 323, 94986, 30, 3555, 525, 279, 22146, 315,
730, 9247, 916, 94986, 30, 730, 9247, 374, 264, 1550,
11591, 29728]])
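
For reference, the atol 1e-3 backbone check described above presumably boils down to an elementwise tolerance comparison; a minimal sketch, assuming `keras_output` and `hf_output` are the final hidden states from the two backbones on identical token IDs:

```python
# Minimal sketch of the atol=1e-3 backbone comparison (illustrative only).
# `keras_output` and `hf_output` are assumed to be the final hidden states
# produced by the KerasHub and Hugging Face backbones on the same inputs.
import numpy as np


def compare_backbone_outputs(keras_output, hf_output, atol=1e-3):
    keras_output = np.asarray(keras_output, dtype="float32")
    hf_output = np.asarray(hf_output, dtype="float32")
    print("max abs diff:", np.max(np.abs(keras_output - hf_output)))
    # Raises an AssertionError if any element differs by more than `atol`.
    np.testing.assert_allclose(keras_output, hf_output, atol=atol, rtol=0.0)
```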

@kanpuriyanawab self-assigned this on May 19, 2025
@mattdangerw (Member) left a comment:

Thanks! Took an initial pass. Let's try to clean up the config and state passing.

No passing an index down the layer stack, and no data structures that apply to the whole layer stack.

@divyashreepathihalli (Collaborator):

/gemini review

@gemini-code-assist (bot) left a comment:

Code Review

This pull request adds support for the Qwen3 MoE model. The implementation looks solid, covering the backbone, attention, decoder, tokenizer, and conversion scripts. I've identified several high-severity issues related to incomplete get_config methods in various new layers, which will prevent model serialization from working correctly. There are also some medium-severity issues like unused parameters and a critical issue in the checkpoint conversion test script where an incorrect preprocessor is used. I've provided suggestions to fix these issues. Once addressed, the PR should be in great shape.
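
(For context, Keras serialization requires every constructor argument to be echoed back from get_config(); a minimal sketch of the pattern, with illustrative argument names rather than the actual Qwen3 MoE ones:)

```python
# Illustrative pattern for a serialization-complete layer config; the
# argument names here are placeholders, not the actual Qwen3 MoE ones.
import keras


class ExampleMoeLayer(keras.layers.Layer):
    def __init__(self, num_experts, top_k, hidden_dim, **kwargs):
        super().__init__(**kwargs)
        self.num_experts = num_experts
        self.top_k = top_k
        self.hidden_dim = hidden_dim

    def get_config(self):
        # Every __init__ argument must appear here, otherwise the layer
        # cannot be reconstructed when the saved model is loaded.
        config = super().get_config()
        config.update(
            {
                "num_experts": self.num_experts,
                "top_k": self.top_k,
                "hidden_dim": self.hidden_dim,
            }
        )
        return config
```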

@sachinprasadhs moved this to In Progress in KerasHub on Jul 16, 2025
@laxmareddyp added the kokoro:force-run (Runs Tests on GPU) label on Jul 18, 2025
@kokoro-team removed the kokoro:force-run (Runs Tests on GPU) label on Jul 18, 2025
@laxmareddyp (Collaborator):

/gemini review

@gemini-code-assist (bot) left a comment:

Code Review

This pull request introduces support for the Qwen3 MoE model, including its backbone, causal language model, tokenizer, and conversion scripts.

I've identified a few issues that need attention:

  • A critical bug in Qwen3MoeAttention.get_config() that will cause an AttributeError.
  • A couple of high severity issues where invalid parameters are used, which will lead to TypeError exceptions.
  • Some medium severity issues in the test scripts and opportunities for code cleanup.

@sachinprasadhs (Collaborator) left a comment:

Thanks, added a few small comments.

@sachinprasadhs added the kokoro:force-run (Runs Tests on GPU) label on Jul 29, 2025
@kokoro-team removed the kokoro:force-run (Runs Tests on GPU) label on Jul 29, 2025
@kanpuriyanawab (Collaborator, Author) commented Sep 2, 2025:

@divyashreepathihalli I put together the Colab notebook you requested. Please keep in mind to use high-compute specs while running it.

The output of running this script has been attached to the description of this PR.

@divyashreepathihalli (Collaborator):

hey, can you rebase with master?

@kanpuriyanawab (Collaborator, Author):

just rebased @divyashreepathihalli

@sachinprasadhs (Collaborator):

Also add the missing test files for causal_lm_test and causal_lm_preprocessor_test

@kanpuriyanawab, can you address this comment?

@sachinprasadhs (Collaborator):

In the example output provided, the output is diverging from 6th token output and it does not match any further.
What is the mismatch you observe when you do 1e-4 precision?

@kanpuriyanawab (Collaborator, Author) commented Sep 4, 2025:

@sachinprasadhs

In the example output provided, the output is diverging from 6th token output and it does not match any further.
What is the mismatch you observe when you do 1e-4 precision?

1e-3 has been the accepted tolerance for backbone output matching. At 1e-4 the backbone outputs mismatch to some extent for Qwen3 MoE.

The example shown is generate output, and it tends to diverge after the 5th or 6th token.
In the past we have seen models whose backbone outputs match at 1e-4 and still diverge in generate output.
That's because generation relies heavily on random state: the generation process samples tokens from a probability distribution.
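
As a toy illustration (made-up numbers, not actual Qwen3 MoE logits), when two candidate tokens are nearly tied, the sampler can legitimately pick either one, and a single different pick changes the context for every later step:

```python
# Toy example of why near-identical logits still diverge under sampling.
import numpy as np


def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

keras_logits = np.array([2.00, 1.99, 0.50])  # top two candidates nearly tied
hf_logits = np.array([1.99, 2.00, 0.50])     # differs only at the 1e-2 level

print(softmax(keras_logits))  # ~[0.45, 0.45, 0.10]
print(softmax(hf_logits))     # ~[0.45, 0.45, 0.10], but ranks swapped
# Both give the two leading tokens roughly 45% each, so sampling can pick
# different tokens; once the picks differ, the generations never
# re-converge even though the logits match within tolerance.
```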

See this and other model PRs for reference, e.g. Qwen MoE, Mixtral, etc.

For generation output matching, we weigh whether the semantic meaning of the response is correct and not gibberish, and as you can see in the description:

Keras generated text - What is Keras? Keras is a deep learning framework that is used for building and training neural networks. It is written in Python and can run on top

Let me know if you have any more questions!

@kanpuriyanawab (Collaborator, Author):

@divyashreepathihalli / @sachinprasadhs I have added the causal LM and causal LM preprocessor tests.

@divyashreepathihalli added the kokoro:force-run (Runs Tests on GPU) label on Sep 22, 2025
@divyashreepathihalli (Collaborator):

Rebase with master to resolve the failing tests. The PR cannot be merged with failing tests.

@kokoro-team removed the kokoro:force-run (Runs Tests on GPU) label on Sep 22, 2025
@divyashreepathihalli added the kokoro:force-run (Runs Tests on GPU) label on Sep 22, 2025
@kokoro-team removed the kokoro:force-run (Runs Tests on GPU) label on Sep 22, 2025
@divyashreepathihalli (Collaborator) left a comment:

The PR is missing the conversion script validation example - https://github.com/keras-team/keras-hub/blob/master/keras_hub/src/utils/transformers/convert_t5gemma_test.py

Can you please add that?
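
(For reference, such a test would presumably mirror the linked convert_t5gemma_test.py; a rough sketch under stated assumptions - the class name Qwen3MoeCausalLM and the hf:// checkpoint handle below are placeholders, not confirmed by this PR:)

```python
# Hypothetical sketch of a conversion-validation test, modeled on the
# linked convert_t5gemma_test.py. The class name `Qwen3MoeCausalLM` and
# the "hf://..." handle are placeholders; substitute the actual exports
# and checkpoint used in this PR.
import pytest

import keras_hub


class TestQwen3MoeConversion:
    @pytest.mark.large
    def test_convert_preset(self):
        # Loading from a Hugging Face handle exercises the
        # transformers -> KerasHub weight-conversion path.
        model = keras_hub.models.Qwen3MoeCausalLM.from_preset(
            "hf://Qwen/Qwen3-MoE-placeholder"
        )
        # A short generate call confirms the converted weights run
        # end to end and produce text.
        model.generate(["What is Keras?"], max_length=15)
```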

@kanpuriyanawab added the kokoro:force-run (Runs Tests on GPU) label on Sep 22, 2025
@kokoro-team removed the kokoro:force-run (Runs Tests on GPU) label on Sep 22, 2025
@sachinprasadhs added the kokoro:force-run (Runs Tests on GPU) label on Sep 24, 2025
@kokoro-team removed the kokoro:force-run (Runs Tests on GPU) label on Sep 24, 2025
@kanpuriyanawab added the kokoro:force-run (Runs Tests on GPU) label on Sep 25, 2025
@kokoro-team removed the kokoro:force-run (Runs Tests on GPU) label on Sep 25, 2025
@divyashreepathihalli merged commit cd943b1 into keras-team:master on Sep 26, 2025
9 of 11 checks passed
@github-project-automation bot moved this from In Progress to Done in KerasHub on Sep 26, 2025