Fix "cannot find -lcuda & -lcudart" problem in WSL2 #1909
Open
HelloCard wants to merge 3 commits into flashinfer-ai:main from HelloCard:main
+3 −0
Hardcoding the WSL-specific path /usr/lib/wsl/lib may cause issues on non-WSL Linux systems. It would be more robust to add this path conditionally: check for WSL and append the flag only when needed, for example as sketched below. This would require restructuring the ldflags list initialization slightly.
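A minimal sketch of that idea, assuming a Python build script with an ldflags list (the variable name here is a placeholder, not necessarily the one flashinfer actually uses):

```python
# Hypothetical sketch: append the WSL library directory only when running
# under WSL, so non-WSL Linux systems keep the original link flags.
import os
import platform


def is_wsl() -> bool:
    # WSL kernels report "microsoft" in the release string,
    # e.g. "5.15.167.4-microsoft-standard-WSL2".
    return "microsoft" in platform.uname().release.lower()


ldflags = ["-lcuda", "-lcudart"]  # placeholder for the build script's real list
if is_wsl() and os.path.isdir("/usr/lib/wsl/lib"):
    # Under WSL2 the Windows driver exposes libcuda.so in this directory.
    ldflags.append("-L/usr/lib/wsl/lib")
```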
The link command already uses the path "/root/miniconda3/lib64", which does not exist on my device, so I think this situation will be widespread. Adding new detection logic for it may reduce the reliability of the overall logic, so this proposed change needs more discussion.
I would encourage adding some functions like find_cuda_lib() and find_cudart_lib(), where the internal logic could be platform dependent, @HelloCard wdyt?
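A rough skeleton of what such helpers could look like — the candidate directories below are assumptions for illustration, not flashinfer's actual search logic:

```python
# Hypothetical skeleton of find_cuda_lib()/find_cudart_lib(); the candidate
# directories are assumptions, not flashinfer's existing behavior.
import os
from typing import Optional


def _first_existing(candidates: list[str], libname: str) -> Optional[str]:
    for d in candidates:
        if os.path.exists(os.path.join(d, libname)):
            return d
    return None


def find_cuda_lib() -> Optional[str]:
    # libcuda.so ships with the driver; under WSL2 it lives in /usr/lib/wsl/lib.
    return _first_existing(
        ["/usr/lib/wsl/lib", "/usr/lib/x86_64-linux-gnu", "/usr/lib64"],
        "libcuda.so.1",
    )


def find_cudart_lib(cuda_home: str = "/usr/local/cuda") -> Optional[str]:
    # libcudart.so comes from the CUDA toolkit (conda, pip, or NVIDIA installer).
    return _first_existing(
        [os.path.join(cuda_home, "lib64"), os.path.join(cuda_home, "lib")],
        "libcudart.so",
    )
```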
Essentially, cuda_lib comes from the graphics driver, while cudart_lib comes from the CUDA component.
Graphics drivers are often located in fixed paths and don't benefit from automatic search logic. Retrieving cuda_lib from a fixed path is sufficient.
I believe cudart_lib could benefit from automatic search path logic. CUDA can be installed in three ways: conda, pip, and the official NVIDIA installer. CUDA installed via conda is already automatically indexed using $cuda_home.
So, introducing automatic search logic would help resolve the cudart_lib indexing issues caused by installing CUDA via pip or the official NVIDIA installer.
However, using pip to install CUDA is a bit of a stretch... While packages like vllm depend on a large number of NVIDIA components, it's hard to say how much overlap these components have with CUDA, and whether this would lead users to consider using pip to install the remaining CUDA-Toolkit components to save storage space.
On the other hand, for CUDA installed via the official NVIDIA installer, the more common practice is to encourage users to add environment variables in their .bashrc file. Saving users installation steps by using automated search logic might not be appropriate.
Thus, I believe the current fixed-path library search logic is sufficiently reliable. Adding additional automated search logic would not provide significant benefits, so this design can be postponed. @yzh119
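To make the three installation routes concrete, here is an illustrative list (assumed from common layouts, not taken from flashinfer's build) of where libcudart.so typically ends up for each of them:

```python
# Illustrative only: typical libcudart.so locations for the three installation
# routes discussed above; the paths are assumptions based on common layouts.
import os
import site
import sys

cudart_candidates = [
    # conda: the toolkit installs into the environment prefix, which the
    # build already picks up via $cuda_home.
    os.path.join(sys.prefix, "lib"),
    # pip: NVIDIA cuda-runtime wheels unpack under site-packages/nvidia/.
    *[os.path.join(p, "nvidia", "cuda_runtime", "lib") for p in site.getsitepackages()],
    # official NVIDIA installer: defaults to /usr/local/cuda, usually exported
    # via CUDA_HOME / LD_LIBRARY_PATH in the user's .bashrc.
    os.path.join(os.environ.get("CUDA_HOME", "/usr/local/cuda"), "lib64"),
]
```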
Find*** is a common design in cmake (and for CUDA, cmake has a FindCUDA module: https://cmake.org/cmake/help/latest/module/FindCUDA.html, which has some hardcoded logic for different platforms: https://github.com/Kitware/CMake/blob/52d3d4dd388973883bc8d3f9b7eb243c0699e812/Modules/FindCUDA.cmake). I agree that CUDA installation variants are relatively limited, so using fixed paths is reasonable. However, I have two minor requests; one of them concerns $cuda_home/lib64/stubs for CUDA 13: I'm unsure whether WSL follows a similar structure.
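For reference, a small sketch of the stub-directory check this implies — the lib64/stubs layout is assumed from the comment above, and whether WSL matches it is exactly the open question:

```python
# Assumption: newer toolkits ship a driver stub (libcuda.so) under
# $cuda_home/lib64/stubs for linking when no real driver is present.
import os
from typing import Optional


def cuda_stub_dir(cuda_home: str = "/usr/local/cuda") -> Optional[str]:
    stubs = os.path.join(cuda_home, "lib64", "stubs")
    if os.path.exists(os.path.join(stubs, "libcuda.so")):
        return stubs
    return None
```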
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
It seems that few flashinfer users use WSL. Testing the path layout for CUDA 13.0 on WSL would be helpful, so I'll give it a try...
But the conda installation of CUDA 13.0 fails:
conda install nvidia/label/cuda-13.0.0::cuda-nvcc
Solving environment: failed
LibMambaUnsatisfiableError: Encountered problems while solving:
Could not solve for environment specs
The following package could not be installed
└─ cuda-nvcc is not installable because it requires
   └─ cuda-nvcc_linux-64 13.0.48.* , which requires
      └─ cuda-nvcc-dev_linux-64 13.0.48.* , which requires
         └─ cuda-version >=13.0,<13.1.0a0 , which does not exist (perhaps a missing channel).
This means it requires me to install the graphics card driver for CUDA 13.0 on my Windows system outside of WSL. This is very tedious for me. I use dual 2080Ti graphics cards in SLI, and changing drivers can easily cause various problems, even requiring a complete operating system reinstallation. Therefore, I can't complete this test at the moment.
I also found the NVIDIA installer for CUDA 13.0:
https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=runfile_local
I'll try to install it later. @yzh119
Same problem.
But at least...
I added the comment. @yzh119
One clue: perhaps cuda-pathfinder could be used to locate the paths of the related libraries.
cupy/cupy#8013 (comment)