[FEATURE] Update Qwen2.5VL to v4.54.1 #1421
base: master
Conversation
Summary of Changes

Hello @zhtmike, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request delivers a crucial update to the Qwen2.5VL model, bringing it in line with transformers version 4.54.1. The changes address critical bugs related to model output and weight loading, ensuring the model functions correctly and efficiently. Furthermore, the internal architecture has been modernized with refactored attention layers and a more streamlined approach to handling multimodal data, which enhances the model's overall stability and performance.
Code Review
This pull request updates the Qwen2.5VL model to align with transformers==4.54.1, fixing issues with incorrect output and weight loading. The changes involve a significant refactoring of the model implementation, particularly in the attention mechanisms and cache handling, to use more modular and centralized utilities. This is a great improvement for code clarity and maintainability. I've found a critical issue related to attention implementation dispatching that could lead to a runtime error, and a couple of medium-severity issues regarding an unused function argument and an incorrect type hint. Overall, the changes are very positive, and with these fixes, the implementation will be much more robust.
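To make the dispatch concern concrete, here is a minimal sketch of the registry-based attention pattern the refactor follows. The names `ALL_ATTENTION_FUNCTIONS` and `eager_attention_forward` mirror the convention in recent transformers releases; the stubs and the `pick_attention` helper are hypothetical illustrations, not the PR's actual code.

```python
from typing import Callable, Dict

def eager_attention_forward(*args, **kwargs):
    """Plain softmax attention; a stub standing in for the real kernel."""

def flash_attention_forward(*args, **kwargs):
    """Fused flash-attention kernel; a stub for illustration."""

# Registry keyed by the configured attention backend, mirroring the
# ALL_ATTENTION_FUNCTIONS convention in recent transformers releases.
ALL_ATTENTION_FUNCTIONS: Dict[str, Callable] = {
    "eager": eager_attention_forward,
    "flash_attention_2": flash_attention_forward,
}

def pick_attention(attn_implementation: str) -> Callable:
    # A bare ALL_ATTENTION_FUNCTIONS[attn_implementation] raises KeyError for
    # an unregistered backend; a .get() with an eager fallback avoids that
    # runtime error.
    return ALL_ATTENTION_FUNCTIONS.get(attn_implementation, eager_attention_forward)

print(pick_attention("sdpa").__name__)  # falls back to eager_attention_forward
```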
```diff
  # Whole checkpoint
  # checkpoint mapping from pt to hf
- matching = [s for s in key_renaming_mapping.keys() if "LayerNorm.gamma" in s]
+ matching = key_renaming_mapping.keys()
```
Why was the LayerNorm.gamma check removed here? Was it redundant to begin with?
At the moment this check causes the Qwen2.5VL weight loading to fail. Removing it lets the weights load correctly; the root-cause fix is pending analysis by @wcrzlh.
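A minimal repro sketch of the failure mode described above, using hypothetical checkpoint key names: the loading failure suggests Qwen2.5VL checkpoints carry no "LayerNorm.gamma"-style names, so the old filter returned an empty list and no weights got remapped.

```python
# Hypothetical key names, for illustration only.
key_renaming_mapping = {
    "visual.blocks.0.norm1.weight": "model.visual.blocks.0.norm1.weight",
    "visual.blocks.0.attn.qkv.weight": "model.visual.blocks.0.attn.qkv.weight",
}

# Old behavior: only keys containing "LayerNorm.gamma" survive the filter.
matching = [s for s in key_renaming_mapping.keys() if "LayerNorm.gamma" in s]
print(matching)  # [] -> the loader sees no keys, and loading fails

# New behavior: every checkpoint key is considered.
matching = key_renaming_mapping.keys()
print(list(matching))  # all keys are remapped as expected
```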
What does this PR do?

Fixes # (issue)
- Fixes the incorrect output of Qwen2.5VL (aligns the implementation with transformers==4.54.1).
- Fixes key_renaming_mapping, which caused weights to be improperly loaded.

Adds # (feature)
- Updates Qwen2.5VL to transformers version 4.54.1.
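As a quick sanity check of the weight-loading fix, a smoke test along these lines could be used. The import path and class name here assume the repo mirrors the Hugging Face API, and the checkpoint id is the public Qwen release; adjust both to match the actual codebase.

```python
# Hypothetical smoke test: import path, class name, and from_pretrained usage
# are assumed to mirror Hugging Face; adjust if the repo's API differs.
from mindone.transformers import Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct"  # public checkpoint id, for illustration
)
print("weights loaded without the key_renaming_mapping failure")
```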
Before submitting
- Did you make sure to update the documentation with your changes? E.g., record bug fixes or new features in What's New. Here are the documentation guidelines.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@xxx