
enable finegrained_fp8 and granite_speech cases on XPU #38036


Merged: 9 commits merged into huggingface:main from yao-matrix's xpu branch on May 14, 2025

Conversation

yao-matrix (Contributor) commented:

@ydshieh @IlyasMoutawwakil , pls help review, thx

github-actions bot (Contributor) commented May 9, 2025:

Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. The CI will be paused while the PR is in draft mode. When it is ready for review, please click the Ready for review button (at the bottom of the PR page). This will assign reviewers and trigger CI.

github-actions bot marked this pull request as draft May 9, 2025 07:30
@@ -188,11 +191,12 @@ def test_block_size(self):
)
self.assertEqual(quantized_model.config.quantization_config.weight_block_size, (32, 32))

@require_torch_multi_gpu
def test_quantized_model_multi_gpu(self):
yao-matrix (Contributor, Author) commented:

Regarding the ground truth for the multi-accelerator case: I find that both XPU and A100 return device 0, while the current ground truth is {0, 1}.
Looking a bit deeper, the target model meta-llama/Llama-3.2-1B actually has 2 modules_to_treat in accelerate's infer_auto_device_map, as shown below, and lm_head is tied with model, which means there is effectively only 1 module to treat, so it can naturally only be placed on device 0.

I don't know whether there are other scenarios I haven't considered, but for this case it seems the correct ground truth should be 0. @ydshieh, please let me know your insights, thanks.

[('model', LlamaModel(
(embed_tokens): Embedding(128256, 2048)
(layers): ModuleList(
(0-15): 16 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): FP8Linear(in_features=2048, out_features=2048, bias=False)
(k_proj): FP8Linear(in_features=2048, out_features=512, bias=False)
(v_proj): FP8Linear(in_features=2048, out_features=512, bias=False)
(o_proj): FP8Linear(in_features=2048, out_features=2048, bias=False)
)
(mlp): LlamaMLP(
(gate_proj): FP8Linear(in_features=2048, out_features=8192, bias=False)
(up_proj): FP8Linear(in_features=2048, out_features=8192, bias=False)
(down_proj): FP8Linear(in_features=8192, out_features=2048, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm((2048,), eps=1e-05)
(post_attention_layernorm): LlamaRMSNorm((2048,), eps=1e-05)
)
)
(norm): LlamaRMSNorm((2048,), eps=1e-05)
(rotary_emb): LlamaRotaryEmbedding()
)), ('lm_head', Linear(in_features=2048, out_features=128256, bias=False))]
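
A minimal reproduction sketch of this observation (not part of the PR; the max_memory values are illustrative, and the gated meta-llama/Llama-3.2-1B checkpoint requires a read token):

```python
# Sketch: inspect the device map accelerate infers for meta-llama/Llama-3.2-1B
# when each device has plenty of memory. Because lm_head is tied to the model,
# the whole graph can land on device 0.
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("meta-llama/Llama-3.2-1B")
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)

device_map = infer_auto_device_map(
    model,
    max_memory={0: "64GiB", 1: "64GiB"},  # illustrative per-device limits
    no_split_module_classes=["LlamaDecoderLayer"],
)
print(set(device_map.values()))  # reported above as {0} on both XPU and A100
```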

IlyasMoutawwakil (Member) commented May 12, 2025:

I believe this depends on the VRAM of your GPUs/XPUs; it will only use both if one is not enough. Otherwise, maybe it would make sense to use another device map strategy here, like "balanced".
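
A hedged sketch of the "balanced" strategy mentioned here (model id and dtype are illustrative; the checkpoint is gated):

```python
# Sketch: load with device_map="balanced" so transformers asks accelerate to
# spread modules evenly across the visible accelerators instead of filling
# device 0 first.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    torch_dtype=torch.bfloat16,
    device_map="balanced",
)
print(set(model.hf_device_map.values()))  # devices actually used
```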

ydshieh (Collaborator) commented:

I will let @SunMarc or @MekkCyber share their thoughts on this.

On our CI, these tests are not collected; I believe it is due to the require_read_token decorator at the class level.

@yao-matrix You are able to run this test...? I am surprised. I will take a look at this issue.

SunMarc (Member) commented:

embed_tokens is indeed tied to lm_head, but the layers can be dispatched to other GPUs. Setting device_map="auto" will default to the "balanced" strategy.

yao-matrix (Contributor, Author) commented:

> I will let @SunMarc or @MekkCyber share their thoughts on this.
>
> On our CI, these tests are not collected; I believe it is due to the require_read_token decorator at the class level.
>
> @yao-matrix You are able to run this test...? I am surprised. I will take a look at this issue.

I removed require_read_token in my local env and ran this case.

yao-matrix (Contributor, Author) commented May 12, 2025:

@IlyasMoutawwakil @SunMarc Yes, I tried "balanced" in my local env too, with the same consideration as yours (my XPU has 64GB VRAM), but the result is still 1 device. It seems the split granularity in infer_auto_device_map is the top-level module when the available memory is enough to fit it.

SunMarc (Member) commented May 13, 2025:

Will investigate a bit more. @MekkCyber tested it locally and it works, but it fails when running with pytest.

ydshieh (Collaborator) commented:

@SunMarc @MekkCyber You will need to remove the require_read_token decorator in order to run these tests (if not done so already).

Related issue: #38093
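
An illustrative sketch of the class-level gating discussed above (the class name is hypothetical; the decorators come from transformers.testing_utils):

```python
# Sketch: a class-level require_read_token gate is what the comment above points
# to as the reason these tests are not collected on CI; removing it locally (as
# done earlier in this thread) lets the multi-device test run against the gated
# Llama checkpoint.
import unittest

from transformers.testing_utils import require_read_token, require_torch_multi_gpu


@require_read_token  # removed locally in order to run the test
class FineGrainedFP8QuantTest(unittest.TestCase):
    @require_torch_multi_gpu
    def test_quantized_model_multi_gpu(self):
        ...
```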

ydshieh (Collaborator) commented:

If you think no more changes are required for test_quantized_model_multi_gpu, feel free to give a ✅ 🙏.

From my side, I am just waiting on a nit change regarding a variable name.

MekkCyber (Contributor) commented:

I think it's good to go from my side. We need to figure out why it fails with pytest, but no need to include that in this PR. Thanks @ydshieh for the advice about require_read_token.

yao-matrix marked this pull request as ready for review May 9, 2025 07:43
IlyasMoutawwakil requested a review from MekkCyber May 9, 2025 10:29
SunMarc (Member) left a comment:

Thanks! Left some comments.


yao-matrix (Contributor, Author) commented:

@ydshieh, resolved the comments, thanks.

MekkCyber (Contributor) left a comment:

Thanks 🤗

ydshieh enabled auto-merge (squash) May 14, 2025 08:46
ydshieh merged commit 9b5ce55 into huggingface:main May 14, 2025
20 checks passed
HuggingFaceDocBuilderDev commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

zucchini-nlp pushed a commit to zucchini-nlp/transformers that referenced this pull request May 14, 2025
enable finegrained_fp8 and granite_speech cases on XPU (#38036)

* enable finegrained_fp8 cases on XPU

Signed-off-by: Yao Matrix <[email protected]>

* fix style

Signed-off-by: Yao Matrix <[email protected]>

* change back to auto

Signed-off-by: Yao Matrix <[email protected]>

* rename per comments

Signed-off-by: Matrix Yao <[email protected]>

---------

Signed-off-by: Yao Matrix <[email protected]>
Signed-off-by: Matrix Yao <[email protected]>
Co-authored-by: Marc Sun <[email protected]>
yao-matrix deleted the xpu branch May 14, 2025 23:02