3 changes: 3 additions & 0 deletions flashinfer/jit/cpp_ext.py
@@ -166,8 +166,11 @@ def generate_ninja_build_for_op(
# No module flags, use global flags
cuda_cflags += global_flags

# /usr/lib/wsl/lib is for WSL2 users; their libcuda.so is there
ldflags = [
"-shared",
"-L/usr/lib/wsl/lib",
Contributor
high

Hardcoding the WSL-specific path /usr/lib/wsl/lib may cause issues on non-WSL Linux systems. It would be more robust to add this path conditionally. You could check for WSL and append the flag only when needed. For example:

if sys.platform == 'linux' and 'microsoft' in os.uname().release.lower():
    ldflags.append('-L/usr/lib/wsl/lib')

This would require restructuring the ldflags list initialization slightly.
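A minimal sketch of that restructuring, assuming the ldflags list shown in this hunk; the exact placement inside generate_ninja_build_for_op and the WSL check itself are only illustrative:

import os
import sys

# Build the common flags first, then append the WSL library path only when
# running under WSL2 (where the Windows driver exposes libcuda.so).
ldflags = [
    "-shared",
    "-L$cuda_home/lib",
    "-L$cuda_home/lib64",
    "-L$cuda_home/lib64/stubs",
    "-lcudart",
]
if sys.platform == "linux" and "microsoft" in os.uname().release.lower():
    ldflags.append("-L/usr/lib/wsl/lib")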

Contributor Author

The link step already uses the path "/root/miniconda3/lib64", which does not exist on my device, so this situation is probably already widespread. Adding new detection logic for it may reduce the reliability of the overall logic, so this proposed change needs more discussion.

Collaborator

I would encourage adding some functions like find_cuda_lib() and find_cudart_lib(), where the internal logic could be platform dependent, @HelloCard wdyt?
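One possible shape for such helpers, as a hedged sketch: the function names come from the suggestion above, while the platform check and the returned directories are illustrative assumptions rather than FlashInfer's actual logic.

import os
import sys


def find_cuda_lib() -> list:
    # Hypothetical helper: directories that may contain the driver library
    # (libcuda.so). Platform-dependent logic would live here.
    dirs = ["/usr/lib/x86_64-linux-gnu"]  # assumed common Linux location
    if sys.platform == "linux" and "microsoft" in os.uname().release.lower():
        # WSL2 exposes the Windows driver's libcuda.so under this path.
        dirs.append("/usr/lib/wsl/lib")
    return dirs


def find_cudart_lib(cuda_home: str) -> list:
    # Hypothetical helper: directories that may contain the CUDA runtime
    # (libcudart.so) for a given toolkit installation.
    return [
        os.path.join(cuda_home, "lib"),
        os.path.join(cuda_home, "lib64"),
        os.path.join(cuda_home, "lib64", "stubs"),
    ]


# Usage sketch: turn the directories into linker flags.
# ldflags = ["-shared"] + [f"-L{d}" for d in find_cuda_lib() + find_cudart_lib(cuda_home)] + ["-lcudart"]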

Contributor Author

Essentially, cuda_lib comes from the graphics driver, while cudart_lib comes from the CUDA Toolkit.
Graphics drivers usually sit in fixed paths and don't benefit from automatic search logic, so retrieving cuda_lib from a fixed path is sufficient.
cudart_lib, I believe, could benefit from automatic search-path logic. CUDA can be installed in three ways: conda, pip, and the official NVIDIA installer. CUDA installed via conda is already picked up automatically through $cuda_home.
So introducing automatic search logic would mainly help resolve cudart_lib lookup issues for CUDA installed via pip or the official NVIDIA installer.
However, installing CUDA via pip is a bit of a stretch... Packages like vllm depend on a large number of NVIDIA components, but it's hard to say how much overlap those components have with the CUDA Toolkit, and whether that would actually lead users to install the remaining CUDA Toolkit components via pip to save storage space.
On the other hand, for CUDA installed via the official NVIDIA installer, the more common practice is to have users add environment variables to their .bashrc; sparing users that installation step through automated search logic might not be appropriate.
Thus, I believe the current fixed-path library search logic is sufficiently reliable, and additional automated search logic would not provide significant benefits, so this design can be postponed. @yzh119

Collaborator

Find*** is a common design in CMake (and for CUDA, CMake has a FindCUDA module: https://cmake.org/cmake/help/latest/module/FindCUDA.html, which has some hardcoded logic for different platforms: https://github.com/Kitware/CMake/blob/52d3d4dd388973883bc8d3f9b7eb243c0699e812/Modules/FindCUDA.cmake).

I agree that CUDA installation variants are relatively limited, so using fixed paths is reasonable. However, I have two minor requests:

  • Please add comments indicating that these paths are WSL-specific.
  • Could you verify the CUDA 13.0 packaging structure in WSL? On Linux, certain runtime libraries (e.g., libcuda.so, libnvrtc.so, libcublas.so) are located in $cuda_home/lib64/stubs for CUDA 13. I'm unsure whether WSL follows a similar structure.

Contributor Author

It seems that few flashinfer users use WSL. Testing the path layout for CUDA 13.0 on WSL would be helpful, so I'll give it a try...

But following the conda CUDA installation instructions fails:
conda install nvidia/label/cuda-13.0.0::cuda-nvcc
Solving environment: failed

LibMambaUnsatisfiableError: Encountered problems while solving:

  • nothing provides cuda-version >=13.0,<13.1.0a0 needed by cuda-nvcc-dev_linux-64-13.0.48-0

Could not solve for environment specs
The following package could not be installed
└─ cuda-nvcc is not installable because it requires
   └─ cuda-nvcc_linux-64 13.0.48.* , which requires
      └─ cuda-nvcc-dev_linux-64 13.0.48.* , which requires
         └─ cuda-version >=13.0,<13.1.0a0 , which does not exist (perhaps a missing channel).
This means it would require me to install the CUDA 13.0 graphics driver on my Windows system, outside of WSL. That is very tedious for me: I use dual 2080 Ti graphics cards in SLI, and changing drivers can easily cause various problems, sometimes even requiring a complete operating system reinstall. So I can't complete this test at the moment.

I also found the NVIDIA installer for CUDA 13.0:
https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=runfile_local

I'll try to install it later. @yzh119

Contributor Author (@HelloCard, Oct 11, 2025)

(base) root@DESKTOP-PEPA2G9:/mnt/c/Users/IAdmin# sh cuda_13.0.2_580.95.05_linux.run
===========
= Summary =
===========

Driver:   Not Selected
Toolkit:  Installed in /usr/local/cuda-13.0/

Please make sure that
 -   PATH includes /usr/local/cuda-13.0/bin
 -   LD_LIBRARY_PATH includes /usr/local/cuda-13.0/lib64, or, add /usr/local/cuda-13.0/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run cuda-uninstaller in /usr/local/cuda-13.0/bin
***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 580.00 is required for CUDA 13.0 functionality to work.

Same problem.
But at least...
I added the comment. @yzh119

Contributor Author

One clue is that perhaps cuda-pathfinder could be used to locate the paths of the relevant libraries.
cupy/cupy#8013 (comment)
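A hedged sketch of what that could look like; the cuda.pathfinder import, the load_nvidia_dynamic_lib call, and its abs_path attribute are assumptions taken from that discussion, not verified against the package:

import os

extra_lib_dirs = []
try:
    # Assumed API from the cuda-pathfinder discussion linked above; treat the
    # entry point and attribute names as unverified.
    from cuda.pathfinder import load_nvidia_dynamic_lib

    cudart = load_nvidia_dynamic_lib("cudart")
    extra_lib_dirs.append(os.path.dirname(cudart.abs_path))
except Exception:
    # Fall back to the fixed $cuda_home search paths used below.
    pass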

"-L$cuda_home/lib",
"-L$cuda_home/lib64",
"-L$cuda_home/lib64/stubs",
"-lcudart",