add Qwen3vl notebook #3093
Conversation
Check out this pull request on ReviewNB. See visual diffs & provide feedback on Jupyter Notebooks. Powered by ReviewNB.
aleksandr-mokrov commented on 2025-10-24T15:50:16Z (via ReviewNB), on notebook Line #4, `demo = make_demo(model, processor)`:

Check it please. `make_demo` accepts only `model`, genai is not installed, and `'_OVQwen3VLForCausalLM' object has no attribute 'start_chat'`.
Sorry, that was my mistake; the notebook should reuse the gradio_helper.py of the qwen2.5vl notebook instead of the qwen2vl one. It should be working now.
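For context, a minimal sketch of what the fixed demo cell could look like, assuming the reused qwen2.5vl gradio_helper.py exposes `make_demo(model, processor)` and that `model` and `processor` are created in earlier notebook cells (the helper path and launch options here are assumptions, not the exact notebook code):

```python
# Minimal sketch of the fixed demo cell (assumptions: the reused qwen2.5vl
# gradio_helper.py provides make_demo(model, processor); `model` and `processor`
# come from earlier notebook cells).
from gradio_helper import make_demo

demo = make_demo(model, processor)

try:
    demo.launch(debug=True, height=600)
except Exception:
    # Fall back to a public share link if the local server cannot be reached.
    demo.launch(debug=True, share=True, height=600)
```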
@openvino-dev-samples have you tried this on a computer with 32GB RAM? I have an LNL laptop with 32GB RAM and I can only convert the 2B model; the 4B model crashes. The same happened when I tried to reproduce https://medium.com/openvino-toolkit/early-look-exploring-qwen3-vl-and-qwen3-next-day0-model-integration-for-enhanced-ai-pc-experiences-134498f6b290 . There is no error; conversion just stops after "loading checkpoint shards" (outside of the notebook; inside the notebook I get "subprocess failed"). Converting does work on SPR with 1TB RAM.

I used the optimum-cli command from the notebook with these requirements (also from the notebook) in a clean env: https://gist.githubusercontent.com/helena-intel/53d044d690b7769b49b5bbccd5c267bf/raw/34b56b29b79b17f7d59b875218e650101ea19ae1/requirements_notebook.txt

It's surprising not to be able to convert a 4B model on an LNL laptop. If this is expected for now, can this be made very clear in the notebook, with bold letters or a warning note?
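As a side note for debugging this, one rough way to see how much memory the export actually needs is to wrap the conversion command and poll its RSS with psutil. The model ID, weight format, and output directory below are placeholders, not the exact command from the notebook:

```python
import subprocess
import time

import psutil

# Placeholder command: model ID, weight format, and output directory are assumptions;
# substitute the exact optimum-cli command from the notebook.
cmd = [
    "optimum-cli", "export", "openvino",
    "--model", "Qwen/Qwen3-VL-4B-Instruct",
    "--weight-format", "int4",
    "--trust-remote-code",
    "qwen3-vl-4b-int4-ov",
]

proc = subprocess.Popen(cmd)
peak_rss = 0
while proc.poll() is None:
    try:
        parent = psutil.Process(proc.pid)
        # Count the export process plus any workers it spawns.
        rss = parent.memory_info().rss
        rss += sum(child.memory_info().rss for child in parent.children(recursive=True))
        peak_rss = max(peak_rss, rss)
    except psutil.NoSuchProcess:
        break
    time.sleep(1)

proc.wait()
print(f"exit code: {proc.returncode}, peak RSS: {peak_rss / 2**30:.1f} GiB")
```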
I can confirm that I can run the notebook with the model: after the model was downloaded during the call, inference succeeded on both the CPU and GPU.
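Not from the notebook, but a quick way to confirm which devices are visible to OpenVINO before running inference on CPU or GPU:

```python
# List the devices OpenVINO can see on this machine and pick GPU if it is present.
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU']
device = "GPU" if "GPU" in core.available_devices else "CPU"
print(f"Using device: {device}")
```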
Looks like as many applications and services as possible need to be shut down to free memory. In your setup, your screenshot shows an idle memory usage of 6.6GB out of a total of 32GB. Have you watched the memory consumption during the download and then during conversion & optimization?
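If it helps, a small watcher like the sketch below (my suggestion, not part of the notebook) can be left running in a separate terminal while the download and conversion run, to record how low available system memory drops:

```python
# Log available system memory once per second; run this alongside the download
# and conversion steps to see how close the machine gets to running out of RAM.
import time

import psutil

lowest_available = psutil.virtual_memory().available
try:
    while True:
        available = psutil.virtual_memory().available
        lowest_available = min(lowest_available, available)
        print(f"available: {available / 2**30:5.1f} GiB "
              f"(lowest so far: {lowest_available / 2**30:5.1f} GiB)", end="\r")
        time.sleep(1)
except KeyboardInterrupt:
    print(f"\nlowest available memory observed: {lowest_available / 2**30:.1f} GiB")
```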
