enable finegrained_fp8 and granite_speech cases on XPU #38036
Conversation
Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. The CI will be paused while the PR is in draft mode. When it is ready for review, please click the "Ready for review" button.
@@ -188,11 +191,12 @@ def test_block_size(self):
        )
        self.assertEqual(quantized_model.config.quantization_config.weight_block_size, (32, 32))

    @require_torch_multi_gpu
    def test_quantized_model_multi_gpu(self):
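For context, the assertion in this hunk checks that a custom block size survives quantization. A minimal sketch of how such a configuration is built (checkpoint name is illustrative; the `FineGrainedFP8Config` API is assumed from transformers' fine-grained FP8 support):

```python
from transformers import AutoModelForCausalLM, FineGrainedFP8Config

# Request 32x32 quantization blocks instead of the default block size; the
# test above asserts this value round-trips through the quantized model's config.
quant_config = FineGrainedFP8Config(weight_block_size=(32, 32))
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",  # illustrative checkpoint
    quantization_config=quant_config,
    device_map="auto",
)
print(model.config.quantization_config.weight_block_size)  # (32, 32)
```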
For the multi_accelerator case ground truth: on both XPU and A100 I find that everything lands on device 0, while the test's expected ground truth is {0, 1}. Looking a bit deeper at the target model meta-llama/Llama-3.2-1B: it actually has 2 modules_to_treat in accelerate's infer_auto_device_map, as listed below, but lm_head is tied to model, which means there is effectively only 1 module to treat, and it can naturally only be placed on device 0 (a reproduction sketch follows the module listing). I don't know whether there are other scenarios I haven't considered, but for this case it seems the correct ground truth should be 0. @ydshieh, please let me know your insights, thanks.
[('model', LlamaModel(
(embed_tokens): Embedding(128256, 2048)
(layers): ModuleList(
(0-15): 16 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): FP8Linear(in_features=2048, out_features=2048, bias=False)
(k_proj): FP8Linear(in_features=2048, out_features=512, bias=False)
(v_proj): FP8Linear(in_features=2048, out_features=512, bias=False)
(o_proj): FP8Linear(in_features=2048, out_features=2048, bias=False)
)
(mlp): LlamaMLP(
(gate_proj): FP8Linear(in_features=2048, out_features=8192, bias=False)
(up_proj): FP8Linear(in_features=2048, out_features=8192, bias=False)
(down_proj): FP8Linear(in_features=8192, out_features=2048, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm((2048,), eps=1e-05)
(post_attention_layernorm): LlamaRMSNorm((2048,), eps=1e-05)
)
)
(norm): LlamaRMSNorm((2048,), eps=1e-05)
(rotary_emb): LlamaRotaryEmbedding()
)), ('lm_head', Linear(in_features=2048, out_features=128256, bias=False))]
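As a rough reproduction sketch of the observation above (checkpoint per the thread; memory figures are assumptions, mirroring the 64 GB XPU mentioned later):

```python
from accelerate import infer_auto_device_map
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

# Because lm_head is tied to the embeddings, accelerate effectively sees a
# single top-level module; if device 0 has room for it, nothing spills over.
device_map = infer_auto_device_map(model, max_memory={0: "64GiB", 1: "64GiB"})
print(set(device_map.values()))  # observed in this scenario: {0}
```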
I believe this depends on the VRAM of your GPUs/XPUs; it will only use both devices if one is not enough. Otherwise, maybe it would make sense to use another device-map strategy here, like "balanced" (sketch below).
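A minimal sketch of that alternative, requesting the "balanced" placement explicitly (checkpoint name is illustrative):

```python
from transformers import AutoModelForCausalLM

# "balanced" asks accelerate to spread the weights evenly across all visible
# devices instead of filling device 0 first.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    device_map="balanced",
)
print(model.hf_device_map)  # per-module device assignments
```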
I will let @SunMarc or @MekkCyber share their thoughts on this.
On our CI, these tests are not collected; I believe it is due to the require_read_token decorator at the class level (pattern sketched below).
@yao-matrix You are able to run this test...? I am surprised. I will take a look at this issue.
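The pattern being discussed looks roughly like this (the class name is hypothetical; the decorators are the real ones from transformers.testing_utils):

```python
import unittest

from transformers.testing_utils import require_read_token, require_torch_multi_gpu


@require_read_token  # applied at class level; suspected to prevent collection
class FP8QuantizedModelTest(unittest.TestCase):  # hypothetical class name
    @require_torch_multi_gpu
    def test_quantized_model_multi_gpu(self):
        ...
```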
embed_tokens is indeed tied to lm_head, but the layers can be dispatched to other GPUs. Setting "auto" in device_map will default to the "balanced" strategy. (A quick tied-weights check is sketched below.)
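A quick way to confirm the tying claim (a sketch; attribute paths follow the usual LlamaForCausalLM layout):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
print(model.config.tie_word_embeddings)                         # True for this checkpoint
print(model.lm_head.weight is model.model.embed_tokens.weight)  # True when tied
```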
> On our CI, these tests are not collected; I believe it is due to the require_read_token decorator at the class level. @yao-matrix You are able to run this test...? I am surprised. I will take a look at this issue.

I removed require_read_token in my local env and ran this case.
@IlyasMoutawwakil @SunMarc yes, I tried "balanced" in my local env too, with the same consideration as yours (my XPU has 64 GB VRAM), but the result is still 1 device. It seems the split granularity in infer_auto_device_map is the top-level module whenever the available memory is enough to fit it (see the sketch below).
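A sketch of that granularity check (memory figures are made up for illustration, not taken from the CI):

```python
from accelerate import infer_auto_device_map
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

# Plenty of room on device 0: the whole (tied) model stays there.
print(set(infer_auto_device_map(model, max_memory={0: "64GiB", 1: "64GiB"}).values()))

# Starve device 0: accelerate should descend into the submodules and spill
# decoder layers onto device 1 instead of keeping one top-level placement.
print(set(infer_auto_device_map(model, max_memory={0: "1GiB", 1: "64GiB"}).values()))
```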
Will investigate a bit more. @MekkCyber tested locally and it works, but it fails when running with pytest.
@SunMarc @MekkCyber You will need to remove the require_read_token decorator in order to run these tests (if not done so yet).
Related issue: #38093
If you think there is no more change required for test_quantized_model_multi_gpu, feel free to give a ✅ 🙏.
From my side, I am just waiting on a nit change regarding a variable name.
I think it's good to go from my side. We still need to figure out why it fails with pytest, but no need to include that in this PR. Thanks @ydshieh for the advice about require_read_token.
Thanks! Left some comments.
@ydshieh, resolved the comments, thanks.
Thanks 🤗
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
enable finegrained_fp8 and granite_speech cases on XPU (#38036)

* enable finegrained_fp8 cases on XPU
* fix style
* change back to auto
* rename per comments

Signed-off-by: Yao Matrix <[email protected]>
Signed-off-by: Matrix Yao <[email protected]>
Co-authored-by: Marc Sun <[email protected]>
@ydshieh @IlyasMoutawwakil, please help review, thanks.