4 files changed, +6 -6 lines, all under tests/integration/test_lists/test-db

@@ -189,9 +189,9 @@ l0_dgx_h100:
   # ------------- CPP tests ---------------
   - cpp/test_multi_gpu.py::test_mpi_utils[90]
   - cpp/test_multi_gpu.py::test_fused_gemm_allreduce[4proc-90]
-  - cpp/test_multi_gpu.py::test_cache_transceiver[2proc-ucx_kvcache-90]
-  - cpp/test_multi_gpu.py::test_cache_transceiver[8proc-nixl_kvcache-90]
-  - cpp/test_multi_gpu.py::test_cache_transceiver[8proc-ucx_kvcache-90]
+  - cpp/test_multi_gpu.py::test_cache_transceiver[2proc-ucx_kvcache-90] ISOLATION
+  - cpp/test_multi_gpu.py::test_cache_transceiver[8proc-nixl_kvcache-90] ISOLATION
+  - cpp/test_multi_gpu.py::test_cache_transceiver[8proc-ucx_kvcache-90] ISOLATION
   - cpp/test_multi_gpu.py::test_user_buffer[2proc-90]
   - cpp/test_multi_gpu.py::test_enc_dec[t5-90]
   - cpp/test_multi_gpu.py::test_llama_executor[llama-orchestrator-90]

@@ -65,7 +65,7 @@ l0_l40s:
   - llmapi/test_llm_examples.py::test_llmapi_example_multilora
   - llmapi/test_llm_examples.py::test_llmapi_example_guided_decoding
   - llmapi/test_llm_examples.py::test_llmapi_example_logits_processor
-  - examples/test_llm_api_with_mpi.py::test_llm_api_single_gpu_with_mpirun[TinyLlama-1.1B-Chat-v1.0]
+  - examples/test_llm_api_with_mpi.py::test_llm_api_single_gpu_with_mpirun[TinyLlama-1.1B-Chat-v1.0] ISOLATION
 - condition:
     ranges:
       system_gpu_count:

@@ -70,7 +70,7 @@ l0_rtx_pro_6000:
   - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_nvfp4_4gpus[moe_backend=CUTLASS-mtp_nextn=0-tp4-fp8kv=True-attention_dp=True-cuda_graph=True-overlap_scheduler=True-torch_compile=True]
   - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_nvfp4_4gpus[moe_backend=CUTLASS-mtp_nextn=0-ep4-fp8kv=False-attention_dp=False-cuda_graph=False-overlap_scheduler=False-torch_compile=False]
   - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_nvfp4_4gpus[moe_backend=CUTLASS-mtp_nextn=0-ep4-fp8kv=False-attention_dp=False-cuda_graph=False-overlap_scheduler=False-torch_compile=True]
-  - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_nvfp4_4gpus[moe_backend=CUTLASS-mtp_nextn=0-ep4-fp8kv=True-attention_dp=True-cuda_graph=True-overlap_scheduler=True-torch_compile=False]
+  - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_nvfp4_4gpus[moe_backend=CUTLASS-mtp_nextn=0-ep4-fp8kv=True-attention_dp=True-cuda_graph=True-overlap_scheduler=True-torch_compile=False] ISOLATION
   - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_nvfp4_4gpus[moe_backend=CUTLASS-mtp_nextn=0-ep4-fp8kv=True-attention_dp=True-cuda_graph=True-overlap_scheduler=True-torch_compile=True]
   - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_nvfp4_4gpus[moe_backend=CUTLASS-mtp_nextn=0-tp2pp2-fp8kv=False-attention_dp=False-cuda_graph=False-overlap_scheduler=False-torch_compile=False]
   - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_nvfp4_4gpus[moe_backend=CUTLASS-mtp_nextn=0-tp2pp2-fp8kv=False-attention_dp=False-cuda_graph=False-overlap_scheduler=False-torch_compile=True]

@@ -31,6 +31,6 @@ l0_sanity_check:
     - llmapi/test_llm_examples.py::test_llmapi_sampling
     - llmapi/test_llm_examples.py::test_llmapi_runtime
     - llmapi/test_llm_examples.py::test_llmapi_tensorrt_engine
-    - examples/test_llm_api_with_mpi.py::test_llm_api_single_gpu_with_mpirun[TinyLlama-1.1B-Chat-v1.0]
+    - examples/test_llm_api_with_mpi.py::test_llm_api_single_gpu_with_mpirun[TinyLlama-1.1B-Chat-v1.0] ISOLATION
     - unittest/others/test_kv_cache_transceiver.py::test_kv_cache_transceiver_single_process[NIXL-mha-ctx_fp16_gen_fp16]
     - unittest/others/test_kv_cache_transceiver.py::test_kv_cache_transceiver_single_process[UCX-mha-ctx_fp16_gen_fp16]
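
Note: each hunk only appends a trailing ISOLATION marker to an existing entry; no test is added or removed. For context, below is a minimal sketch of how a marked entry sits in a test-db list, assuming the trailing token tells the test-list runner to execute that test in its own isolated session. The l0_example key, the tests key, and the gte/lte bounds are illustrative placeholders; the condition/ranges/system_gpu_count shape mirrors the l0_l40s hunk above.

l0_example:
- condition:
    ranges:
      system_gpu_count:
        gte: 1        # placeholder bound, not part of this diff
        lte: 1        # placeholder bound, not part of this diff
  tests:
  # Unmarked entry: scheduled in the normal shared session.
  - llmapi/test_llm_examples.py::test_llmapi_example_multilora
  # Marked entry: trailing ISOLATION token, as added throughout this diff.
  - examples/test_llm_api_with_mpi.py::test_llm_api_single_gpu_with_mpirun[TinyLlama-1.1B-Chat-v1.0] ISOLATION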