Add Qwen3 Moe #2260
Conversation
Thanks! Took an initial pass. Let's try to clean up the config and state passing.
Avoid passing an index down the layer stack, and avoid data structures that apply to the whole layer stack.
/gemini review
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Code Review
This pull request adds support for the Qwen3 MoE model. The implementation looks solid, covering the backbone, attention, decoder, tokenizer, and conversion scripts. I've identified several high-severity issues related to incomplete get_config methods in various new layers, which will prevent model serialization from working correctly. There are also some medium-severity issues like unused parameters, and a critical issue in the checkpoint conversion test script where an incorrect preprocessor is used. I've provided suggestions to fix these issues. Once addressed, the PR should be in great shape.
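For reference, Keras can only serialize a layer if get_config returns every constructor argument. A minimal sketch of the pattern, with illustrative parameter names rather than the exact signature used in this PR:

```python
import keras


class Qwen3MoeAttention(keras.layers.Layer):
    # Illustrative constructor arguments; the real layer in this PR may
    # take a different set of parameters.
    def __init__(
        self, num_query_heads, num_key_value_heads, head_dim, dropout=0.0, **kwargs
    ):
        super().__init__(**kwargs)
        self.num_query_heads = num_query_heads
        self.num_key_value_heads = num_key_value_heads
        self.head_dim = head_dim
        self.dropout = dropout

    def get_config(self):
        # Every constructor argument must appear here, otherwise
        # keras.saving cannot reconstruct the layer from its config.
        config = super().get_config()
        config.update(
            {
                "num_query_heads": self.num_query_heads,
                "num_key_value_heads": self.num_key_value_heads,
                "head_dim": self.head_dim,
                "dropout": self.dropout,
            }
        )
        return config
```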
/gemini review
Code Review
This pull request introduces support for the Qwen3 MoE model, including its backbone, causal language model, tokenizer, and conversion scripts.
I've identified a few issues that need attention:
- A critical bug in Qwen3MoeAttention.get_config() that will cause an AttributeError.
- A couple of high-severity issues where invalid parameters are used, which will lead to TypeError exceptions.
- Some medium-severity issues in the test scripts and opportunities for code cleanup.
Thanks, added a few small comments.
@divyashreepathihalli I put together the colab notebook you requested. Please keep in mind to use high compute specs while running it. The output of running this script has been attached to the description of this PR.
Hey, can you rebase with master?
Just rebased, @divyashreepathihalli.
@kanpuriyanawab, can you address this comment?
In the example output provided, the output diverges from the 6th token and does not match any further.
1e-3 has been an accepted limit for backbone output matching. With 1e-4, the backbone outputs mismatch to some extent for Qwen3 MoE.
See this and other model PRs for reference, e.g. Qwen MoE, Mixtral, etc. For generation output matching, we weigh whether the semantic meaning of the response is correct and not gibberish, as you can see in the description.
Let me know if you have any more questions!
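For context, a sketch of how such a backbone comparison is typically done; the function and argument names are illustrative, not the exact helpers used in this PR:

```python
import numpy as np


def check_backbone_match(keras_outputs, hf_outputs, atol=1e-3):
    """Compare backbone hidden states from Keras Hub and Hugging Face.

    Per the discussion above, atol=1e-3 is the accepted limit; at 1e-4
    the Qwen3 MoE backbone outputs mismatch to some extent.
    """
    np.testing.assert_allclose(
        np.asarray(keras_outputs), np.asarray(hf_outputs), atol=atol
    )
```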
@divyashreepathihalli / @sachinprasadhs I have added the causal LM and causal LM preprocessor tests.
Rebase with master to resolve the failing tests. The PR cannot be merged with failing tests.
The PR is missing the conversion script validation example - https://github.com/keras-team/keras-hub/blob/master/keras_hub/src/utils/transformers/convert_t5gemma_test.py
Can you please add that?
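For what it's worth, a rough sketch of what such a validation might exercise, modeled loosely on the linked t5gemma test; the hf:// handle below is a placeholder, not a real checkpoint name:

```python
import keras_hub

# Placeholder handle; substitute an actual (ideally small) Qwen3 MoE
# checkpoint from the Hugging Face Hub.
PRESET = "hf://Qwen/Qwen3-MoE-tiny"

# Loading through from_preset exercises the transformers conversion path
# added in this PR, end to end.
causal_lm = keras_hub.models.CausalLM.from_preset(PRESET)
output = causal_lm.generate("What is Keras?", max_length=30)
print(output)
```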
Merged cd943b1 into keras-team:master
Qwen3 MoE backbone output matching with atol 1e-3!
On generation output matching, we are doing okay here: the generated token distribution is close in space to the Hugging Face one. We saw a similar issue in the Qwen3 base models. Random seed used: 123. A sketch of how this side-by-side comparison could be set up follows the token dumps below.
Keras generated text - What is Keras? Keras is a deep learning framework that is used for building and training neural networks. It is written in Python and can run on top
Keras token output tensor([[ 3838, 374, 730, 9247, 30, 730, 9247, 374, 264, 5538,
6832, 12626, 429, 374, 1483, 369, 4752, 323, 4862, 29728,
14155, 13, 1084, 374, 5326, 304, 13027, 323, 646, 1598,
389, 1909]], dtype=torch.int32)
HF Token outputs = tensor([[ 3838, 374, 730, 9247, 30, 3555, 374, 279, 6672, 1948,
730, 9247, 323, 94986, 30, 3555, 525, 279, 22146, 315,
730, 9247, 916, 94986, 30, 730, 9247, 374, 264, 1550,
11591, 29728]])