
Commit 7a552c4

[https://nvbugs/5606166][fix] AutoDeploy: unwaive test for use tuples for cudagraph shape lookup (#8957)

Also updated the test waive for another nvbug.

Signed-off-by: Lucas Liebenwein <[email protected]>

1 parent: fb7f983

File tree

2 files changed: +2 −3 lines changed

tests/integration/test_lists/waives.txt

Lines changed: 0 additions & 1 deletion

@@ -393,7 +393,6 @@ triton_server/test_triton_llm.py::test_mistral_small_3_1_24b_pixtral[TYPE_FP16-T
 triton_server/test_triton_llm.py::test_mistral_small_3_1_24b_pixtral[TYPE_FP16-TYPE_BF16-False-1---False-True-False-0-1-enableDecoupleMode-inflight_fused_batching-disableTrtOverlap--0.7-max_utilization---1-1-1-False-tensorrt_llm_bls] SKIP (https://nvbugs/5606136)
 accuracy/test_cli_flow.py::TestMinitron4BBase::test_fp8 SKIP (https://nvbugs/5606233)
 examples/test_gpt.py::test_llm_minitron_fp8_with_pseudo_loras[4b] SKIP (https://nvbugs/5606233)
-unittest/_torch/auto_deploy/unit/singlegpu/compile/test_cuda_graph_batch_sizes.py::TestCudaGraphBatchSizes::test_forward_fallback_for_oversized_batch SKIP (https://nvbugs/5606166)
 accuracy/test_llm_api_pytorch.py::TestQwen3_8B::test_bf16[multi_gpus_no_cache] SKIP (https://nvbugs/5606266)
 examples/test_llm_api_with_mpi.py::test_llm_api_single_gpu_with_mpirun[TinyLlama-1.1B-Chat-v1.0] SKIP (https://nvbugs/5606268)
 disaggregated/test_disaggregated_single_gpu.py::test_disaggregated_simple_deepseek[True-False-DeepSeek-V3-Lite-fp8/fp8] SKIP (https://nvbugs/5626197)

tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_build_small_single.py

Lines changed: 2 additions & 2 deletions

@@ -114,14 +114,15 @@ def _check_ad_config(experiment_config: ExperimentConfig, llm_args: LlmArgs):
             },
         },
     ),
-    (
+    pytest.param(
         "meta-llama/Llama-4-Scout-17B-16E-Instruct",
         {
             "transforms": {
                 "insert_cached_attention": {"backend": "flashinfer"},
                 "compile_model": {"backend": "torch-opt"},
             },
         },
+        marks=pytest.mark.skip(reason="https://nvbugs/5625972"),
     ),
     (
         "meta-llama/Llama-4-Scout-17B-16E-Instruct",
@@ -188,7 +189,6 @@ def _check_ad_config(experiment_config: ExperimentConfig, llm_args: LlmArgs):
     ),
 ],
 )
-@pytest.mark.skip(reason="https://nvbugs/5625972")
 def test_build_ad(model_hub_id: str, llm_extra_args: dict):
     experiment_config = get_small_model_config(model_hub_id, **llm_extra_args)
     experiment_config["args"]["runtime"] = "demollm"  # Default runtime set to demollm
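The change above narrows the skip from the whole test to a single parametrize case: the test-wide @pytest.mark.skip decorator is removed, and the one affected parameter tuple is wrapped in pytest.param with a skip mark, so the remaining cases keep running. A minimal sketch of the same pattern (the model names and test body here are hypothetical, not from the repo):

```python
import pytest


@pytest.mark.parametrize(
    "model_hub_id, llm_extra_args",
    [
        # This case runs normally.
        ("model-a", {"compile_backend": "torch-opt"}),
        # This case alone is skipped; the other parametrize cases still run,
        # unlike a @pytest.mark.skip applied to the whole test function.
        pytest.param(
            "model-b",
            {"compile_backend": "flashinfer"},
            marks=pytest.mark.skip(reason="https://nvbugs/5625972"),
        ),
    ],
)
def test_build(model_hub_id: str, llm_extra_args: dict):
    assert isinstance(model_hub_id, str)
    assert isinstance(llm_extra_args, dict)
```

pytest.param returns a ParameterSet carrying the values plus the attached marks, which pytest applies only when that case is collected.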
